Fast and Energy-Efficient OLAP Data Management on Hybrid Main Memory Systems


Hassan, A., Nikolopoulos, D., & Vandierendonck, H. (2019). Fast and Energy-Efficient OLAP Data Management on Hybrid Main Memory Systems. IEEE Transactions on Computers. https://doi.org/10.1109/TC.2019.2919287

Published in: IEEE Transactions on Computers
Document Version: Peer reviewed version
Queen's University Belfast Research Portal: link to publication record
Publisher rights: © 2019 IEEE. This work is made available online in accordance with the publisher's policies. Please refer to any applicable terms of use of the publisher.

Ahmad Hassan, Dimitrios S. Nikolopoulos, Senior Member, IEEE, and Hans Vandierendonck, Senior Member, IEEE

Ahmad Hassan is with SAP. He was with Queen's University Belfast, United Kingdom, at the time this research was conducted. E-mail: [email protected]. Dimitrios S. Nikolopoulos and Hans Vandierendonck are with Queen's University Belfast. Manuscript received June 6, 2018; revised December 2, 2018.

Abstract—This paper studies the problem of efficiently utilizing hybrid memory systems, consisting of both Dynamic Random Access Memory (DRAM) and novel Non-Volatile Memory (NVM), in database management systems (DBMS) for online analytical processing (OLAP) workloads. We present a methodology to determine the database operators that are responsible for most main memory accesses. Our analysis uses both cost models and empirical measurements. We develop heuristic decision procedures to allocate data in hybrid memory at the time that the data buffers are allocated, depending on the expected memory access frequency. We implement these heuristics in the MonetDB column-oriented database and demonstrate performance improvement and energy-efficiency as compared to state-of-the-art application-agnostic hybrid memory management techniques.

Index Terms—Non-volatile memory, hybrid main memory, database management system

1 INTRODUCTION

Online analytical processing (OLAP) is a key component of modern data processing pipelines and aims at analyzing large data sets to inform business decisions. OLAP consists of long-running, complex transactions that are read-intensive. To maintain high processing throughput, data sets are kept in main memory during OLAP processing to avoid I/O bottlenecks.
This approach, known as in-memory processing, puts severe pressure on the required main memory capacity as data sets are continuously growing. It is estimated that data sets world-wide double in size every year [20]. As such, multi-terabyte main memory setups are desired [48].

DRAM, the dominant technology for main memory chips, has hit power and scaling limitations [22], [64]. DRAM-based main memory consumes 30-40% of the total server power due to leakage and refresh power consumption [3], [30], [34]. The background power consumption of DRAM moreover scales proportionally with the DRAM size, adding to the total cost of ownership (TCO). Moreover, it is uncertain whether DRAM technology can be scaled below 40 nm feature sizes [22], [64]. It is thus unlikely that DRAM will continue to be the dominant memory technology. Techniques like High-Bandwidth Memory (HBM) and Intel's Knights Landing processor provide ways of mitigating DRAM limitations [25], [53].

An alternative setup uses a combination of conventional DRAM and novel byte-addressable non-volatile memory (NVM) chips that are interface-compatible with DRAM [27], [28], [46], [47]. Emerging NVM technologies, such as Phase-Change Memory (PCM), Spin Transfer Torque RAM (STT-RAM) and Resistive RAM (RRAM) [19], are compelling alternatives due to their high density and near-zero leakage power [8], [13], [37]. NVM technologies, however, have several drawbacks, broadly involving increased latency and increased dynamic energy for NVM accesses, reduced memory bandwidth and fast wear-out of memory cells [8], [28], [46], [65]. On the plus side, NVM is predicted to scale to smaller feature sizes [8] and has around 100× lower static energy due to the absence of refresh operations [13]. Moreover, NVM is persistent, which is important to the design of DBMSs. This work, however, focuses on the orthogonal aspects of energy-efficiency and performance.

Hybrid memory systems typically have a fast and small component and a large and slow component, each with distinct energy consumption. In the case of a DRAM+NVM hybrid memory, DRAM provides high performance but it is not feasible to have large amounts of it. On the other hand, NVM can be scaled up to provide large amounts of memory, but it cannot be accessed as efficiently as DRAM. Hybrid memory systems thus raise the key question of how to distribute the data over the memory types.

State-of-the-art techniques propose data migration policies for hybrid memory where data migration is decided on by the hardware [28], [46], [53], [62], the operating system [12], [50] or a mixture of both [47]. These techniques try to second-guess the properties of the workload and migrate large chunks of data, typically at the page granularity of the virtual memory system. This reactive approach introduces runtime overhead and energy consumption due to monitoring and migration.

In this paper, we propose a radically different approach: building on intricate knowledge of the structure and memory access patterns of a column-oriented database, we propose that the DBMS manages data placement in the hybrid main memory. We present a methodology to analyze a DBMS in order to identify a good strategy for data placement. Our analysis uses a tool [18] that instruments all program variables and memory allocations and collects statistics on their size, their lifetime and on the number of memory accesses made for each. Using these statistics, we formulate a strategy for data placement in a hybrid main memory system. In our solution the DBMS analyzes the query plan to predict which data sets will incur frequent memory accesses and should thus be placed on DRAM. It then allocates these data sets on DRAM as they are generated. By predicting the best placement with high accuracy, our solution does not require migration of data between DRAM and NVM to fix wrong data placements. This contributes to its performance and energy-efficiency.
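The paper derives its placement heuristics from the access patterns of database operators; purely as an illustration of what an allocation-time decision can look like (not the authors' implementation), the sketch below routes a buffer to DRAM or NVM based on a predicted access density. The dram_alloc/nvm_alloc helpers, the malloc stand-ins and the threshold are all hypothetical.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical allocators for the two memory pools; a real system would use
     * libnuma, memkind or a dedicated NVM allocator rather than plain malloc. */
    void *dram_alloc(size_t bytes) { return malloc(bytes); }
    void *nvm_alloc(size_t bytes)  { return malloc(bytes); }

    /* Allocation-time placement: buffers that the query plan predicts will be
     * accessed frequently per byte go to scarce DRAM; everything else goes to
     * the large but slower NVM pool. The threshold is illustrative only. */
    void *hybrid_alloc(size_t bytes, double predicted_accesses_per_byte)
    {
        const double DRAM_THRESHOLD = 4.0;           /* tune per system/workload */
        bool hot = predicted_accesses_per_byte >= DRAM_THRESHOLD;
        void *p = hot ? dram_alloc(bytes) : nvm_alloc(bytes);
        if (hot && p == NULL)                         /* DRAM exhausted: fall back */
            p = nvm_alloc(bytes);
        return p;
    }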
The key contributions of this paper are:

• Derivation of runtime data placement heuristics for inputs and intermediate allocations of hash join using the access patterns of binary algebraic operators. The hash join is the single most important operation in our OLAP workloads. Our solution, optimized for OLAP workloads, requires only two heuristics for MonetDB [6], a state-of-the-art column-oriented database. Our approach is, however, not limited to MonetDB.
• Demonstrating the performance and energy-efficiency of our data placement heuristics for hybrid memory in MonetDB. We furthermore compare our heuristics to state-of-the-art application-agnostic page placement and migration techniques.
• Demonstrating that query plan partitioning yields intermediate data sets that are small enough to fit in DRAM without incurring significant overhead.

The remainder of this paper is organized as follows. Section 2 reviews the organization of memory systems and their key performance and energy properties. Section 3 [...]

[...] e.g., to apply filter operations as early in the query plan as possible in order to reduce data volumes quickly.

Columnar databases require materialization, which is the act of "gluing" together the columns that make up a relation [1]. Early materialization glues together all columns that are needed in the evaluation of the query. This is then similar to executing the query in the same style as a record-oriented database, where records are initially shortened to contain only the columns relevant to the query. Late materialization postpones gluing together the columns as long as possible. Algebraic operators are not applied to the data columns themselves; instead, they operate on columns that hold the indices of the relevant records.

We focus our work on MonetDB [6], a highly efficient columnar database. MonetDB uses late materialization, which results in high memory locality [6]. Moreover, MonetDB evaluates one operator at a time, i.e., it processes the full input columns to produce the output(s) before proceeding to evaluate the next operator.

2.2 The Memory Hierarchy

The goal of cache memories (or caches for short) is to create the impression of a large and fast memory, where in reality large memories are slow and fast memories must be small. Cache memories create this impression using the principle of reference locality, an empirical observation that programs tend to repeatedly access only a limited amount of data during any period of time. This limited amount of data is called the working set of the program. Only the first access to the working set is slow. Subsequent accesses are fast as the working set is then available in the cache. We call a cache access that finds the data in the cache a cache hit.
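As a rough illustration of the working-set behaviour described above (not code from the paper), the sketch below counts hits and misses of a tiny direct-mapped cache while a loop sweeps repeatedly over a fixed working set; the cache geometry and the 8 KiB working set are arbitrary choices.

    #include <stdio.h>

    #define CACHE_LINES 256                 /* direct-mapped: one tag per line */
    #define LINE_BYTES  64

    int main(void)
    {
        long tags[CACHE_LINES];
        for (int i = 0; i < CACHE_LINES; i++) tags[i] = -1;

        long hits = 0, misses = 0;
        const long working_set = 8 * 1024;  /* 8 KiB: fits in this cache */

        for (int sweep = 0; sweep < 4; sweep++) {
            for (long addr = 0; addr < working_set; addr += LINE_BYTES) {
                long block = addr / LINE_BYTES;
                long index = block % CACHE_LINES;
                if (tags[index] == block) hits++;        /* cache hit */
                else { misses++; tags[index] = block; }  /* miss: fill the line */
            }
        }
        printf("hits=%ld misses=%ld\n", hits, misses);   /* only the first sweep misses */
        return 0;
    }

A working set larger than the cache would keep missing on every sweep; the DRAM-versus-NVM placement question studied in this paper is the same capacity trade-off one level further down the memory hierarchy.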
Recommended publications
  • MonetDB/X100: Hyper-Pipelining Query Execution
    MonetDB/X100: Hyper-Pipelining Query Execution
    Peter Boncz, Marcin Zukowski, Niels Nes (CWI, Kruislaan 413, Amsterdam, The Netherlands)

    Abstract. Database systems tend to achieve only low IPC (instructions-per-cycle) efficiency on modern CPUs in compute-intensive application areas like decision support, OLAP and multimedia retrieval. This paper starts with an in-depth investigation to the reason why this happens, focusing on the TPC-H benchmark. Our analysis of various relational systems and MonetDB leads us to a new set of guidelines for designing a query processor. The second part of the paper describes the architecture of our new X100 query engine for the MonetDB system that follows these guidelines. On the surface, it resembles a classical Volcano-style engine, but the crucial difference to base all execution on the concept of vector processing makes it highly CPU efficient.

    One would expect that query-intensive database workloads such as decision support, OLAP, data-mining, but also multimedia retrieval, all of which require many independent calculations, should provide modern CPUs the opportunity to get near optimal IPC (instructions-per-cycle) efficiencies. However, research has shown that database systems tend to achieve low IPC efficiency on modern CPUs in these application areas [6, 3]. We question whether it should really be that way. Going beyond the (important) topic of cache-conscious query processing, we investigate in detail how relational database systems interact with modern super-scalar CPUs in query-intensive workloads, in particular the TPC-H decision support benchmark. The main conclusion we draw from this investigation is that the architecture employed by most DBMSs inhibits compilers from using their most performance-critical optimization techniques, resulting in low CPU [...]
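    To make the vector-at-a-time idea concrete, here is a minimal sketch (assumed C, not X100 source code): each primitive processes a whole vector per call, so per-tuple interpretation overhead is amortized and the inner loops compile to tight, SIMD-friendly code. The vector size, function names and the selection-vector representation are illustrative.

        #include <stddef.h>
        #include <stdio.h>

        #define VECTOR_SIZE 1024   /* vectorized engines use on the order of 100-1000 values per vector */

        /* Selection primitive: consumes a whole vector, produces a selection vector of
         * qualifying positions. One function call per 1024 values instead of one call
         * per tuple, as a Volcano-style next() interface would require. */
        size_t select_greater_than(const int *col, size_t n, int threshold, size_t *out_idx)
        {
            size_t k = 0;
            for (size_t i = 0; i < n; i++)
                if (col[i] > threshold)
                    out_idx[k++] = i;
            return k;
        }

        /* Aggregation primitive operating on the selection vector. */
        long sum_selected(const long *col, const size_t *idx, size_t k)
        {
            long total = 0;
            for (size_t i = 0; i < k; i++)
                total += col[idx[i]];
            return total;
        }

        int main(void)
        {
            int vals[VECTOR_SIZE];
            long keys[VECTOR_SIZE];
            size_t idx[VECTOR_SIZE];
            for (size_t i = 0; i < VECTOR_SIZE; i++) { vals[i] = (int)(i % 100); keys[i] = (long)i; }
            size_t k = select_greater_than(vals, VECTOR_SIZE, 90, idx);
            printf("%zu qualifying tuples, key sum = %ld\n", k, sum_selected(keys, idx, k));
            return 0;
        }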
  • Data Warehouse Fundamentals for Storage Professionals – What You Need to Know EMC Proven Professional Knowledge Sharing 2011
    Data Warehouse Fundamentals for Storage Professionals – What You Need To Know
    EMC Proven Professional Knowledge Sharing 2011
    Bruce Yellin, Advisory Technology Consultant, EMC Corporation, [email protected]

    Table of Contents: Introduction; Data Warehouse Background; What Is a Data Warehouse?; Data Mart Defined; Schemas and Data Models; Data Warehouse Design – Top Down or Bottom Up?; Extract, Transformation and Loading (ETL); Why You Build a Data Warehouse: Business Intelligence; Technology to the Rescue?; RASP - Reliability, Availability, Scalability and Performance; Data Warehouse Backups
  • Supporting Decision Making Chapter
    Chapter 5: Supporting Decision Making
    McGraw-Hill/Irwin. Copyright © 2008, The McGraw-Hill Companies, Inc. All rights reserved.

    Chapter Highlights: Introduction • Decision support systems • Management information systems • Online analytical processing • Using decision support systems • Executive information systems • Enterprise information portals • Knowledge management systems

    Learning Objectives:
    • Identify the changes taking place in the form and use of decision support in business.
    • Identify the role and reporting alternatives of management information systems.
    • Describe how online analytical processing can meet key information needs of managers.
    • Explain how the following IS can support the information needs of executives, managers and business professionals: (a) executive information systems, (b) enterprise information portals, (c) knowledge management systems.

    Introduction:
    • To succeed in business today, companies need IS that can support the diverse information and decision-making needs of their managers and business professionals.
    • The Internet, intranets and other Web-enabled information technologies significantly support the role that IS play in the decision-making activities of every manager and knowledge worker in business.
    • The type of information required by decision makers in a company is directly related to the level of management decision making and the amount of structure in the decision situations they face.
    • The levels of managerial decision making that must be supported by information technology in a successful organization are strategic management, tactical management and operational management.
    • Strategic management: the board of directors and an executive committee of the CEO and top executives develop overall organizational goals, strategies, policies and objectives as part of a strategic planning process. They also monitor the strategic performance of the organization and its overall direction in the political, economic and competitive business environment.
  • Cluster Analysis for OLAP in Online Decision Support Systems
    Turkish Journal of Physiotherapy and Rehabilitation; 32(3), ISSN 2651-4451 | e-ISSN 2651-446X
    CLUSTER ANALYSIS FOR OLAP IN ONLINE DECISION SUPPORT SYSTEMS
    Kiruthika S (Department of Computer Science and Engineering, Sona College of Technology, Salem, [email protected]); Umamaheswari E (Associate Professor Grade II, Center Faculty - Cyber Physical Systems, Vellore Institute of Technology, Chennai 600 127, Tamilnadu, India, [email protected]); Karmel A (Associate Professor Grade I, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600 127, Tamilnadu, India, [email protected]); Kanchana Devi V (Associate Professor Grade I, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600 127, Tamilnadu, India, [email protected])

    ABSTRACT. Online decision support systems serve people in many segments of society: traders, business enthusiasts, entrepreneurs, and others. Decision support systems are used in almost every business segment, from micro-level businesses to large and even heavy industries; this spread is one consequence of globalization. Proper decisions have to be made in every industry to cope with prevailing market conditions, and the decision support systems that inform them are fed with large volumes of data that are generally huge and heterogeneous. Cluster analysis is a technique for finding groups of data that are similar to each other and dissimilar to the rest, and it is important for the efficient functioning of any online decision support system. This paper focuses on using cluster analysis techniques to make decision support systems work efficiently.
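    As a minimal illustration of the kind of clustering step such a system could use (a sketch, not code from the paper), the snippet below runs a one-dimensional k-means: points are repeatedly assigned to the nearest centroid and centroids are recomputed as cluster means. The sample data, k = 2 and the fixed iteration count are arbitrary.

        #include <stdio.h>
        #include <math.h>

        #define N 8
        #define K 2
        #define ITERS 10

        int main(void)
        {
            double x[N] = {1.0, 1.2, 0.8, 1.1, 9.0, 9.5, 8.8, 9.2};
            double centroid[K] = {0.0, 5.0};      /* crude initial guesses */
            int assign[N];

            for (int it = 0; it < ITERS; it++) {
                /* assignment step: each point joins its nearest centroid */
                for (int i = 0; i < N; i++) {
                    int best = 0;
                    for (int c = 1; c < K; c++)
                        if (fabs(x[i] - centroid[c]) < fabs(x[i] - centroid[best]))
                            best = c;
                    assign[i] = best;
                }
                /* update step: centroid becomes the mean of its members */
                for (int c = 0; c < K; c++) {
                    double sum = 0.0; int cnt = 0;
                    for (int i = 0; i < N; i++)
                        if (assign[i] == c) { sum += x[i]; cnt++; }
                    if (cnt > 0) centroid[c] = sum / cnt;
                }
            }
            for (int c = 0; c < K; c++)
                printf("cluster %d centroid = %.2f\n", c, centroid[c]);
            return 0;
        }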
  • Hannes Mühleisen
    Large-Scale Statistics with MonetDB and R
    Hannes Mühleisen, DAMDID 2015, 2015-10-13

    About Me: Postdoc at CWI Database Architectures since 2012. Amsterdam is nice; we have open positions. Special interest in data management for statistical analysis. Various research & software projects in this space.

    Outline: Column Store / MonetDB introduction • Connecting R and MonetDB • Advanced topics • "R as a Query Language" • "Capturing The Laws of Data Nature"

    Column Stores / MonetDB Introduction. Postgres, Oracle, DB2, etc. store the conceptual table row by row on disk: for a toy table with columns (class, speed, flux), the rows NX (1, 3), Constitution (1, 8), Galaxy (1, 3), Defiant (1, 6) and Intrepid (1, 1) are laid out one record after another. A column store keeps each column contiguously instead: class = NX, Constitution, Galaxy, Defiant, Intrepid; speed = 1, 1, 1, 1, 1; flux = 3, 8, 3, 6, 1. Storing like values together enables compression.

    What is MonetDB? A strict columnar-architecture OLAP RDBMS (SQL). Started by Martin Kersten and Peter Boncz ~1994. Free and open source, active development ongoing. www.monetdb.org. Peter A. Boncz, Martin L. Kersten, and Stefan Manegold. 2008. Breaking the memory wall in MonetDB. Communications of the ACM 51, 12 (December 2008), 77-85. DOI=10.1145/1409360.1409380

    MonetDB today: Expanded C code • MAL "DB assembly" & optimisers • SQL to MAL compiler • Memory-mapped files • Automatic indexing

    Some MAL: Optimisers run on MAL code; efficient column-at-a-time implementations.
    EXPLAIN SELECT * FROM mtcars;
    | X_2 := sql.mvc(); |
    | X_3:bat[:oid,:oid] := sql.tid(X_2,"sys","mtcars"); |
    | X_6:bat[:oid,:dbl] [...]
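    A small sketch (assumed C, not MonetDB code) of why the columnar layout above speeds up scans: aggregating one attribute reads a dense array rather than striding across wide row records, so far fewer cache lines are touched. The struct layout and sample values mirror the slides' toy (class, speed, flux) table.

        #include <stddef.h>
        #include <stdio.h>

        struct row {                  /* row store: all attributes of a tuple together */
            char class_name[24];
            int  speed;
            int  flux;
        };

        long sum_flux_rowstore(const struct row *rows, size_t n)
        {
            long total = 0;
            for (size_t i = 0; i < n; i++)
                total += rows[i].flux;     /* strides over 32-byte records */
            return total;
        }

        long sum_flux_columnstore(const int *flux_col, size_t n)
        {
            long total = 0;
            for (size_t i = 0; i < n; i++)
                total += flux_col[i];      /* dense, sequential, SIMD-friendly */
            return total;
        }

        int main(void)
        {
            struct row rows[5] = {
                {"NX", 1, 3}, {"Constitution", 1, 8}, {"Galaxy", 1, 3},
                {"Defiant", 1, 6}, {"Intrepid", 1, 1}
            };
            int flux_col[5] = {3, 8, 3, 6, 1};
            printf("row store sum=%ld, column store sum=%ld\n",
                   sum_flux_rowstore(rows, 5), sum_flux_columnstore(flux_col, 5));
            return 0;
        }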
  • LDBC Introduction and Status Update
    8th Technical User Community meeting - LDBC
    Peter Boncz, [email protected], ldbcouncil.org

    LDBC Organization (non-profit): "sponsors" + non-profit members (FORTH, STI2) & personal members + task forces, volunteers developing benchmarks + TUC: Technical User Community (8 workshops, ~40 graph and RDF user case studies, 18 vendor presentations).

    What does a benchmark consist of? Four main elements:
    – data & schema: defines the structure of the data
    – workloads: defines the set of operations to perform
    – performance metrics: used to measure (quantitatively) the performance of the systems
    – execution & auditing rules: defined to assure that the results from different executions of the benchmark are valid and comparable
    Software as open source (GitHub): data generator, query drivers, validation tools, ...

    Audience:
    • For developers facing graph processing tasks – a recognizable scenario to compare the merits of different products and technologies
    • For vendors of graph database technology – a checklist of features and performance characteristics
    • For researchers, both industrial and academic – challenges in multiple choke-point areas such as graph query optimization and (distributed) graph analysis

    SPB scope:
    • The scenario involves a media/publisher organization that maintains semantic metadata about its journalistic assets (articles, photos, videos, papers, books, etc.), also called Creative Works.
    • The Semantic Publishing Benchmark simulates consumption of RDF metadata (Creative Works) and updates of RDF metadata related to annotations.
    • Aims to be [...]
  • A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems
    A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems
    Ingrid Nunes · Dietmar Jannach

    Abstract. With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.

    Keywords: Explanation · Decision Support System · Recommender System · Expert System · Knowledge-based System · Systematic Review · Machine Learning · Trust · Artificial Intelligence
  • The Database Architectures Research Group at CWI
    The Database Architectures Research Group at CWI
    Martin Kersten, Stefan Manegold, Sjoerd Mullender
    CWI, Amsterdam, The Netherlands. http://www.cwi.nl/research-groups/Database-Architectures/

    1. INTRODUCTION
    The Database research group at CWI was established in 1985. It has steadily grown from two PhD students to a group of 17 people ultimo 2011. The group is supported by a scientific programmer and a system engineer to keep our machines running. In this short note, we look back at our past and highlight the multitude of topics being addressed.

    2. THE MONETDB ANTHOLOGY
    The workhorse and focal point for our research is MonetDB, an open-source columnar database system. Its development goes back as far as the early eighties when our first relational kernel, called Troll, was shipped as an open-source product. It spread to [...]

    [...] power provided by the early MonetDB implementations. In the years following, many technical innovations were paired with strong industrial maturing of the software base. Data Distilleries became a subsidiary of SPSS in 2003, which in turn was acquired by IBM in 2009. Moving MonetDB Version 4 into the open-source domain required a large number of extensions to the code base. It became of the utmost importance to support a mature implementation of the SQL-03 standard, and the bulk of application programming interfaces (PHP, JDBC, Python, Perl, ODBC, RoR). The result of this activity was the first official open-source release in 2004. A very strong XQuery front-end was developed with partners and released in 2005 [1].
  • Micro Adaptivity in Vectorwise
    Micro Adaptivity in Vectorwise
    Bogdan Răducanu (Actian), Peter Boncz (CWI), Marcin Żukowski (Snowflake Computing, marcin.zukowski@snowflakecomputing.com)

    ABSTRACT. Performance of query processing functions in a DBMS can be affected by many factors, including the hardware platform, data distributions, predicate parameters, compilation method, algorithmic variations and the interactions between these. Given that there are often different function implementations possible, there is a latent performance diversity which represents both a threat to performance robustness if ignored (as is usual now) and an opportunity to increase the performance if one would be able to use the best performing implementation in each situation. Micro Adaptivity, proposed here, is a framework that keeps many alternative function implementations ("flavors") in a system. It uses a [...]

    [...] re-arrange the shape of a query plan in reaction to its execution so far, thereby enabling the system to correct mistakes in the cost model, or to e.g. adapt to changing selectivities, join hit ratios or tuple arrival rates. Micro Adaptivity, introduced here, is an orthogonal approach, taking adaptivity to the micro level by making the low-level functions of a query executor more performance-robust. Micro Adaptivity was conceived inside the Vectorwise system, a high performance analytical RDBMS developed by Actian Corp. [18]. The raw execution power of Vectorwise stems from its vectorized query executor, in which each operator performs actions on vectors-at-a-time, rather than a single tuple-at-a-time; where each vector is an array of (e.g. [...]
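    A minimal sketch of the "flavors" idea (assumed C, not Vectorwise source): keep several implementations of one primitive, measure each once, then keep calling the cheaper one. Real Micro Adaptivity re-explores periodically with a bandit-style policy; the one-shot measurement and clock()-based timing here are deliberate simplifications.

        #include <stddef.h>
        #include <stdio.h>
        #include <time.h>

        typedef size_t (*select_flavor)(const int *col, size_t n, int threshold, size_t *out_idx);

        /* Flavor 1: conventional branching selection. */
        static size_t flavor_branching(const int *col, size_t n, int t, size_t *out)
        {
            size_t k = 0;
            for (size_t i = 0; i < n; i++)
                if (col[i] > t) out[k++] = i;
            return k;
        }

        /* Flavor 2: branch-free selection; better when selectivity is unpredictable. */
        static size_t flavor_branch_free(const int *col, size_t n, int t, size_t *out)
        {
            size_t k = 0;
            for (size_t i = 0; i < n; i++) {
                out[k] = i;
                k += (size_t)(col[i] > t);
            }
            return k;
        }

        size_t adaptive_select(const int *col, size_t n, int t, size_t *out)
        {
            static select_flavor flavors[2] = { flavor_branching, flavor_branch_free };
            static double cost[2];
            static long calls = 0;

            /* First two calls: measure each flavor once; afterwards stick with the cheaper one. */
            int pick = (calls < 2) ? (int)calls : ((cost[0] <= cost[1]) ? 0 : 1);

            clock_t start = clock();
            size_t k = flavors[pick](col, n, t, out);
            if (calls < 2)
                cost[pick] = (double)(clock() - start);
            calls++;
            return k;
        }

        int main(void)
        {
            enum { N = 4096 };
            static int col[N];
            static size_t out[N];
            for (size_t i = 0; i < N; i++) col[i] = (int)(i % 1000);
            size_t k = 0;
            for (int call = 0; call < 8; call++)
                k = adaptive_select(col, N, 500, out);
            printf("last call selected %zu values\n", k);
            return 0;
        }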
  • Teradata Solution Technical Overview
    Teradata Solution Technical Overview
    05.15 EB3025, Data Warehousing

    Table of contents: Teradata Pioneered Data Warehousing; Making Smarter, Faster Decisions; Real, Measurable Data Warehouse ROI; Data Warehousing Leader; The Database—A Critical Component of Your Data Warehouse; Teradata Database—The Premier Performer; What is Parallel Processing?; Key Definitions; Application Programming Interfaces; Language Preprocessors; Data Utilities; Database Administration Tools; Access from Anywhere; Need More Reasons to Choose Teradata?; How Industries Use Teradata

    Teradata Pioneered Data Warehousing. Since our first shipment of the Teradata® Database, we've gained more than 35 years of experience building and supporting data warehouses worldwide. Today, Teradata solutions outperform other vendors' data warehouse solutions, from small to very large production warehouses with petabytes of data. With a family of workload-specific platforms, all running the Teradata Database, that lead spans needs from specialized analytical applications to the most general enterprise data management. Teradata Corporation offers a powerful, complete solution that combines parallel database technology and scalable hardware with the world's most experienced data warehousing consultants, along with the best tools and applications available in the industry today.

    Making Smarter, Faster Decisions. Most companies have large volumes of detailed operational data, but key business analysts and decision makers still can't get the answers they need to react quickly enough to changing conditions. Why? Because the data are spread across many departments in the organization or are locked in a sluggish technology environment. Today, Teradata solutions are helping companies like yours consolidate those data to present a single view of your business.
  • HMI + CLDS = New SDSC Center
    New Technologies for Data Management
    Chaitan Baru
    2013 Summer Institute: Discover Big Data, August 5-9, San Diego, California. San Diego Supercomputer Center, University of California, San Diego.

    Why new technologies?
    • Big Data characteristics: Volume, Velocity, Variety
    • Began as a Volume problem, e.g. web crawls; 1's PB to 100's PB in a single cluster
    • Velocity became an issue, e.g. clickstreams at Facebook: 100,000 concurrent clients, 6 billion messages/day
    • Variety is now important: integrate and fuse data from multiple sources; synoptic view; solve complex problems

    Varying opinions on Big Data
    • "It's all about velocity. The other issues are old problems."
    • "Variety is the important characteristic."
    • "Big Data is nothing new."

    Expanding Enterprise Systems: From OLTP to OLAP to "Catchall"
    • OLTP: OnLine Transaction Processing, e.g. point-of-sale terminals, e-commerce transactions, etc.
    • High throughput; high rate of "single shot" read/write operations
    • Large number of concurrent clients
    • Robust, durable processing

    Expanding Enterprise Systems: OLAP
    • OLAP: decision support systems requiring complex query processing
    • Fast response times for complex SQL queries
    • Queries over historical data
    • Fewer concurrent clients
    • Data aggregated from multiple systems (multiple business systems, e.g. [...])
  • Columnar Storage in SQL Server 2012
    Columnar Storage in SQL Server 2012
    Per-Åke Larson, Eric N. Hanson, Susan L. Price

    Abstract. SQL Server 2012 introduces a new index type called a column store index and new query operators that efficiently process batches of rows at a time. These two features together greatly improve the performance of typical data warehouse queries, in some cases by two orders of magnitude. This paper outlines the design of column store indexes and batch-mode processing and summarizes the key benefits this technology provides to customers. It also highlights some early customer experiences and feedback and briefly discusses future enhancements for column store indexes.

    1 Introduction. SQL Server is a general-purpose database system that traditionally stores data in row format. To improve performance on data warehousing queries, SQL Server 2012 adds columnar storage and efficient batch-at-a-time processing to the system. Columnar storage is exposed as a new index type: a column store index. In other words, in SQL Server 2012 an index can be stored either row-wise in a B-tree or column-wise in a column store index. SQL Server column store indexes are "pure" column stores, not a hybrid, because different columns are stored on entirely separate pages. This improves I/O performance and makes more efficient use of memory. Column store indexes are fully integrated into the system. To improve performance of typical data warehousing queries, all a user needs to do is build a column store index on the fact tables in the data warehouse.