An Implementation of Transaction Logging and Recovery in a Main Memory Resident Database System


Abstract

This report describes an implementation of transaction logging and recovery using Unix copy-on-write on spawned processes. The purpose of the work is to extend WS-Iris, a research project on object-oriented main memory databases, with functionality for failure recovery. The presented work is a Master's thesis by a student of the Master of Science programme in Computer Science and Technology. The work was commissioned by Tore Risch, Professor of Engineering Databases at the Computer Aided Engineering laboratory (CAElab), Linköping University (LiU/LiTH), Sweden.

Keywords: transaction logging, logical logging, recovery, main memory database, copy-on-write, process forking

Jonas S Karlsson, CAElab, IDA, Linköping University

Contents

CHAPTER 1  Introduction
  1.1 WS-Iris
  1.2 Reason & Goal
  1.3 Contents

CHAPTER 2  WS-Iris
  2.1 Internal Workings
    2.1.1 The self-contained Lisp
    2.1.2 Foreign functions
  2.2 Image
  2.3 Logging (Histories)

CHAPTER 3  Survey on Recovery
  3.1 Introduction
  3.2 Saved Images
    3.2.1 Ping-Pong (duplex strategy)
    3.2.2 Fuzzy Checkpointing (monoplex strategy)
    3.2.3 Black/White Checkpointing
    3.2.4 Copy-on-Update Checkpointing
  3.3 Logging
    3.3.1 Physical Page Logging
    3.3.2 Physical Transition Page Logging
    3.3.3 Transition Access Logging
    3.3.4 Logical Logging on Record Level

CHAPTER 4  Requirements
  4.1 Important Facts
  4.2 Main Idea
    4.2.1 Image
    4.2.2 Logging
  4.3 Conclusions

CHAPTER 5  Implementation
  5.1 Symbols
  5.2 Building a Recovery System
  5.3 Language
    5.3.1 Datatypes - Lisp
    5.3.2 C language - Operating system interface
  5.4 Storage-system
  5.5 Image
    5.5.1 Improvements
    5.5.2 Algorithm
    5.5.3 Options
  5.6 Logging
    5.6.1 Improvements
    5.6.2 Algorithm
    5.6.3 Options
  5.7 Recovery
    5.7.1 Improvements
    5.7.2 Algorithm for handling files
    5.7.3 Algorithm for Recovery
    5.7.4 Lisp Extensions

CHAPTER 6  Evaluation
  6.1 Testing method
  6.2 The tests
  6.3 Possible Improvements
  6.4 Software Engineering aspects
  6.5 Acknowledgments

CHAPTER 7  Definitions

CHAPTER 8  References

CHAPTER 1  Introduction

This chapter gives an introduction to the WS-Iris database system and the motivation for the work.

1.1 WS-Iris

WS-Iris [Litwin-92] is a memory-resident object-oriented database system written by Tore Risch at the Database Technology Department, Hewlett-Packard Laboratories.

1.2 Reason & Goal

The aim of this project is to enhance WS-Iris with recovery functions, which will be responsible for recovery after a crash. The reason for building a recovery system is to secure the data stored in the database. WS-Iris is a main memory resident database, so there is no copy of the database on disk that is constantly updated. The benefits thereby gained at search and update time are tremendous: the overhead for handling data can be minimized compared to a disk-based system. However, main memory is not stable memory, and the operating system is not stable either. Crashes will occur, and information stored only in the computer's memory will then be lost forever. Data has to be saved (backed up) elsewhere, and this is the primary concern of this work.

The main causes of crashes are programming errors, operating system faults, and hardware failures. These failures often lead to uncontrolled process termination. This work focuses on handling data that has been committed to the database, and on making that data persistent.

1.3 Contents

Chapter 2 introduces the WS-Iris programming system. Aspects of WS-Iris directly concerned with the problem are described and exemplified.
Chapter 3 provides a short survey of well-known methods used in research and commercial systems for recovery processing.

In chapter 4, a specification is presented concerning the requirements and the purpose of the work.

Chapter 5 presents the chosen implementation and justifies the design decisions.

Chapter 6 evaluates the implementation and the achieved results. Possible improvements are also briefly discussed.

A compiled list of technical terms used in this thesis may be found in appendix A.

CHAPTER 2  WS-Iris

This chapter describes functionality already present within the WS-Iris system that is of use when implementing a recovery system. Further information about the Iris DBMS can be found in Object-Oriented Concepts, Databases, and Applications [Fishman-87].

2.1 Internal Workings

WS-Iris is written and optimized with memory residency in mind. It is implemented in C, providing a special version of Common Lisp under which database queries are executed after a transformation into ObjectLog that is interpreted by the Lisp. At the prototype stage, WS-Iris was written entirely in Common Lisp, but it has since been migrated towards an implementation in C for reasons of efficiency; the interactive parts are still written with Lisp as a basis, however. Since WS-Iris is implemented partly in C and partly in the Lisp that it itself implements, there is a powerful "foreign function" interface. This makes it possible to implement OSQL (Object Symbolic Query Language) and Lisp functions in C code. OSQL is used as the main database query language, and it uses the concepts of types, objects, and functions.

2.1.1 The self-contained Lisp

Lisp is good for prototyping and development of a system.
One reason for not using C in this work is that data is stored and created in the Lisp and therefore must be accessed using Lisp primitives. This would mean that the C code would not be plain C. Since most structures and operators are easily accessible from Lisp, but only with effort from within C (one would effectively be writing Lisp in C), there are clear advantages to using Lisp for implementing the algorithms.

2.1.2 Foreign functions

Foreign functions are used as a means of extending the Lisp/OSQL language with various primitives, usually for interfacing to the operating system and the outside world. This is further explained in WS-Iris: A Main Memory Object Oriented DBMS [Risch-92].

2.2 Image

The data area called the image in the WS-Iris system contains all of the database and the interfacing information for the database implementation. When the image is stored on disk it is called a dump. When WS-Iris is started, a dump (amos.dmp) is automatically loaded. WS-Iris provides functions for saving and loading a dump of the database; these functions are available from OSQL and Lisp as well as from C. An important issue is the consistency of the dump: a dump should only be performed on a committed database. This is ensured when the routines are called from OSQL, but care has to be exercised when they are called from Lisp. The dump also contains references to foreign functions; these are relocated at load time. Lisp functions and variables available at save time are also contained in the dump.

2.3 Logging (Histories)

For logging purposes, there is history functionality that can be used. The data stored by the history functions is the undo and redo information specific to current, non-committed transactions. This information can be used at commit time for creating a log.
CHAPTER 3  Survey on Recovery

Quite a lot of work has been done in the database recovery area, mainly because most database systems still manage databases that reside on disk. Lately, however, there has been a fair amount of research on MMRDBs (Main Memory Resident DataBases), and one identified problem is that of securing persistency. Different systems for research on a variety of recovery algorithms have been built; Principles of Transaction-Oriented Database Recovery [Haerder-83] contains interesting material on the area of recovery.

3.1 Introduction

An MMRDB takes advantage of memory residency and can therefore achieve astonishingly fast results when creating and searching data. Updates can be made at the same high speed. But there is a problem: since the data is stored in main memory, it can also be lost quite easily. Therefore, in order to make sure that the changes made are persistent, they are usually saved onto disk. Persistency means that the changes made to the database somehow can be recreated, and it is often a requirement that committed updates are persistent. The simplest way of accomplishing this is by using a log file on disk, to which the updates are written sequentially at each commit. Having a log file is sufficient, but not very economical: the time needed to recreate the database, bringing it up to date using the log file, is linear in the size of the file. Therefore different strategies, called checkpointing strategies, have been developed. The obvious optimization is to try to keep a copy of the database image updated on disk.