MonetDB/DataCell: Online Analytics in a Streaming Column-Store

Total pages: 16

File type: PDF, size: 1020 KB

Erietta Liarou, Stratos Idreos, Stefan Manegold, Martin Kersten
CWI, Amsterdam, The Netherlands

ABSTRACT

In DataCell, we design streaming functionalities in a modern relational database kernel which targets big data analytics. This includes exploitation of both its storage/execution engine and its optimizer infrastructure. We investigate the opportunities and challenges that arise with such a direction and we show that it carries significant advantages for modern applications in need of online analytics such as web logs, network monitoring and scientific data management. The major challenge then becomes the efficient support for specialized stream features, e.g., multi-query processing and incremental window-based processing, as well as exploiting standard DBMS functionalities, such as indexing, in a streaming environment.

This demo presents DataCell, an extension of the MonetDB open-source column-store for online analytics. The demo gives users the opportunity to experience the features of DataCell, such as processing both stream and persistent data and performing window-based processing. The demo provides a visual interface to monitor the critical system components, e.g., how query plans transform from typical DBMS query plans to online query plans, how data flows through the query plans as the streams evolve, how DataCell maintains intermediate results in columnar form to avoid repeated evaluation of the same stream portions, etc. The demo also provides the ability to interactively set the test scenarios and various DataCell knobs.

1. INTRODUCTION

Numerous applications nowadays require online analytics over high-rate streaming data. For example, emerging applications over mobile data can exploit the big mobile data streams for advertising and traffic control. In addition, the recent and continuously expanding massive cloud infrastructures require continuous monitoring to remain in good state and prevent fraud attacks. Similarly, scientific databases create data at massive rates daily or even hourly. In addition, web log analysis requires fast analysis of big streaming data for decision support.

A new processing paradigm is born [16, 10, 12] where incoming streams of data need to be analyzed quickly and possibly combined with existing data to discover trends and patterns. Subsequently, the new data may also enter the data warehouse and be stored as normal for further analysis if necessary. This new paradigm requires scalable query processing that can combine continuous querying, for fast reaction to incoming data, with traditional querying, for access to the existing data. However, neither pure database technology nor pure stream technology is designed for this purpose. Database systems do not qualify for continuous query processing, while data stream systems are not built to scale for big data analysis.

DataCell Motivation. MonetDB/DataCell aims to provide scalable online analytics. We begin from a state-of-the-art column-store design for big data analytics, MonetDB, and extend it with online functionality. The goal is to fully exploit the generic storage and execution engine of the DBMS as well as its complete optimizer stack. Stream processing then becomes primarily a query scheduling task, i.e., make the proper queries see the proper portion of stream data at the proper time. A positive side-effect is that our architecture supports SQL'03, which allows stream applications to exploit sophisticated query semantics. Numerous research and technical questions immediately arise. The most prominent issues are the ability to provide specialized stream functionality and the hindrances to guaranteeing real-time constraints for event handling.
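The paradigm sketched above, continuous queries over arriving data combined with traditional queries over stored data on one engine, can be made concrete with a toy sketch (Python; purely illustrative, all names are invented and this is not DataCell's actual interface): a continuous query is re-evaluated against each arriving batch of stream tuples and checked against a stored relation, while a one-time query runs over the accumulated data.

    # Toy sketch of the processing paradigm (not DataCell's API): one engine
    # serves a one-time query over persistent data and a continuous query that
    # is re-evaluated for each arriving portion of stream data.

    persistent_limits = {"sensor1": 30.0, "sensor2": 25.0}   # stored relation
    archive = []                                             # stored readings

    def one_time_query():
        # Classic query: average archived temperature per sensor.
        totals, counts = {}, {}
        for sensor, temp in archive:
            totals[sensor] = totals.get(sensor, 0.0) + temp
            counts[sensor] = counts.get(sensor, 0) + 1
        return {s: totals[s] / counts[s] for s in totals}

    def continuous_query(batch):
        # Continuous query: alert when a new reading exceeds its stored limit.
        return [(s, t) for s, t in batch if t > persistent_limits.get(s, float("inf"))]

    for batch in ([("sensor1", 28.5), ("sensor2", 26.1)], [("sensor1", 31.2)]):
        print("alerts:", continuous_query(batch))   # react to each arriving batch
        archive.extend(batch)                       # new data also enters the store

    print("history:", one_time_query())             # traditional query over stored data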
Contributions and Demo. Paper [16] illustrates the basic DataCell architecture and sets the research path and critical milestones. In addition, DataCell ships together with the MonetDB open-source system. Here, we present a demo based on MonetDB/DataCell. The demo showcases several of the key aspects of the MonetDB/DataCell design, such as the ability to do both stream processing and normal queries and the ability to provide window-based processing. Through a graphical user interface, the user of the demo can observe the query execution inside DataCell, e.g., how the various columnar structures are populated, how intermediate results are kept around to avoid reevaluation for sliding window queries, and how the shape of a normal query plan changes to a continuous query plan through the optimizer. In addition, the user can interact with the system, posing continuous or one-time queries or choosing some pre-defined scenarios, as well as varying both input parameters and DataCell knobs.

Paper Organization. The rest of the paper is organized as follows. Section 2 briefly discusses related work, while Section 3 gives an overview of the main design of DataCell. Then, Section 4 presents the demonstration scenarios as well as the ways the user can interact with the system and how the demo visualizes query execution and system status. Finally, Section 5 concludes the paper.

2. RELATED WORK

DataCell fundamentally differs from existing stream efforts [3, 4, 5, 8, 9, 11, 13, 21, 6, 2], etc., by building on top of the storage and execution engine of a DBMS kernel. It opens a very interesting path towards exploiting and merging technologies from both worlds.

Compared to even earlier efforts on active databases, e.g., [20], DataCell adds support for specialized stream functionalities, such as incremental processing. Incremental processing in a DBMS has been studied in the context of updating materialized views, e.g., [7, 14], but there the scenario is very different, given that it targets read-mostly environments whereas an online scenario is by definition a write-only one.

The Truviso Continuous Analytics system [12], a commercial product of Truviso, is another recent example that follows the same approach as DataCell. They extend the open-source PostgreSQL database [19] to enable continuous analysis of streaming data, tackling the problem of low-latency query evaluation over massive data volumes. TruCQ integrates streaming and traditional relational query processing in a way that ends up in a stream-relational database architecture. It is able to run SQL queries continuously and incrementally over data while they are still arriving and before they are stored in active database tables (if they need to be stored). TruCQ's query processing significantly outperforms traditional store-first-query-later database technologies, as the query evaluation has already been initiated when the first tuples arrive. It allows evaluation of one-time queries, continuous queries, as well as combinations of both types.

Another recent work, coming from HP Labs [10], also confirms the strong research attraction of this trend. It defines an extended SQL query model that unifies queries over both static relations and dynamic streaming data, by developing techniques to generalize the query engine. It also extends the PostgreSQL database kernel [19], building an engine that can process persistent and streaming data in a uniform design. First, they convert the stream into a sequence of chunks and then continuously call the query over each sequential chunk.

3. DATACELL ARCHITECTURE

MonetDB. MonetDB stores each relational table as a collection of Binary Association Tables (BATs), one for each attribute. Advanced column-stores process one column at a time, using late tuple reconstruction, discussed in, e.g., [1, 15]. Intermediates are also in column format. This allows the engine to exploit CPU- and cache-optimized, vector-like operator implementations throughout the whole query evaluation, using an efficient bulk-processing model instead of the typical tuple-at-a-time Volcano approach. This way, a select operator, for example, operates on a single column, filtering the qualifying values and producing an intermediate that holds their tuple IDs. This intermediate can then be used to retrieve the necessary values from a different column for further actions, e.g., aggregations, further filtering, etc. The key point is that in DataCell these intermediates can be exploited for flexible incremental processing strategies, i.e., we can selectively keep around the proper intermediates at the proper places of a plan for efficient future reuse.
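A schematic sketch of this column-at-a-time style follows, under the simplifying assumption that columns are plain arrays (illustration only, not MonetDB's internal code): a select over one column yields an intermediate of tuple IDs, which is then reused to fetch values from other columns and by later operators.

    # Schematic sketch of column-at-a-time processing with reusable intermediates.
    temp_col   = [12.0, 35.5, 19.2, 41.0, 33.3]   # one array per attribute
    sensor_col = ["s1", "s2", "s1", "s3", "s2"]

    def select_gt(column, threshold):
        # Bulk select over one column; returns qualifying tuple IDs, not full tuples.
        return [oid for oid, v in enumerate(column) if v > threshold]

    def fetch(column, oids):
        # Late tuple reconstruction: materialize another column only for those IDs.
        return [column[oid] for oid in oids]

    hot_oids = select_gt(temp_col, 30.0)          # intermediate: [1, 3, 4]
    print(fetch(sensor_col, hot_oids))            # ['s2', 's3', 's2']

    # The same intermediate can be kept around and reused by later operators
    # (further filters, aggregations) instead of rescanning the base column.
    print(sum(fetch(temp_col, hot_oids)) / len(hot_oids))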
DataCell. DataCell [16] is positioned between the SQL compiler/optimizer and the DBMS kernel. The SQL compiler is extended with a few orthogonal language constructs to recognize and process continuous queries. The query plan as generated by the SQL optimizer is rewritten into a continuous query plan and handed over to the DataCell scheduler. In turn, the scheduler handles the execution of the plan.

Figure 1: The DataCell architecture. (The diagram shows receptors and emitters at the edges of the system, with the parser/compiler, optimizer, rewriter, scheduler/factories, and the kernel holding the baskets and tables.)

Figure 1 shows a DataCell instance. It contains receptors and emitters, i.e., a set of separate processes per stream and per client, respectively, to listen for new data and to deliver results. They form the edges of the architecture and the bridges to the outside world, e.g., to sensor drivers.

Baskets/Columns. The key idea is that when an event stream enters the system via a receptor, stream tuples are immediately stored in a lightweight table, called a basket. By collecting event tuples into baskets, DataCell can evaluate the continuous queries over the baskets as if they were normal …
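A minimal sketch of the basket idea described above (Python; the names and the consume-after-evaluation policy are illustrative assumptions, not DataCell's implementation): a receptor appends incoming tuples to a basket, the scheduler fires the continuous query whenever the basket holds data, and an emitter delivers the results.

    basket = []                                    # lightweight table for one stream

    def receptor(tuples):
        basket.extend(tuples)                      # new events land in the basket

    def continuous_query(rows):
        return [r for r in rows if r["temp"] > 30.0]

    def emitter(results):
        for r in results:
            print("deliver:", r)

    def scheduler_step():
        if basket:                                 # the query sees the current basket contents
            emitter(continuous_query(basket))
            basket.clear()                         # consumed tuples leave the basket (assumed policy)

    receptor([{"sensor": "s1", "temp": 28.0}, {"sensor": "s2", "temp": 33.5}])
    scheduler_step()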
Recommended publications
  • MonetDB/X100: Hyper-Pipelining Query Execution
    MonetDB/X100: Hyper-Pipelining Query Execution. Peter Boncz, Marcin Zukowski, Niels Nes. CWI, Kruislaan 413, Amsterdam, The Netherlands.
    Abstract: Database systems tend to achieve only low IPC (instructions-per-cycle) efficiency on modern CPUs in compute-intensive application areas like decision support, OLAP and multimedia retrieval. This paper starts with an in-depth investigation to the reason why this happens, focusing on the TPC-H benchmark. Our analysis of various relational systems and MonetDB leads us to a new set of guidelines for designing a query processor. The second part of the paper describes the architecture of our new X100 query engine for the MonetDB system that follows these guidelines. On the surface, it resembles a classical Volcano-style engine, but the crucial difference to base all execution on the concept of vector processing makes it highly CPU efficient.
    One would expect that query-intensive database workloads such as decision support, OLAP, data mining, but also multimedia retrieval, all of which require many independent calculations, should provide modern CPUs the opportunity to get near optimal IPC (instructions-per-cycle) efficiencies. However, research has shown that database systems tend to achieve low IPC efficiency on modern CPUs in these application areas [6, 3]. We question whether it should really be that way. Going beyond the (important) topic of cache-conscious query processing, we investigate in detail how relational database systems interact with modern super-scalar CPUs in query-intensive workloads, in particular the TPC-H decision support benchmark. The main conclusion we draw from this investigation is that the architecture employed by most DBMSs inhibits compilers from using their most performance-critical optimization techniques, resulting in low CPU efficiency.
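    As a rough illustration of the contrast this abstract describes (not X100 code; the discount computation is an invented example), compare a tuple-at-a-time loop, which pays interpretation and function-call overhead per value, with a vectorized primitive that processes a whole column per call:

        import numpy as np

        prices = np.random.rand(1_000_000) * 100   # one column of a toy table

        def tuple_at_a_time(values, limit):
            # One "operator call" per tuple: lots of per-value overhead.
            out = []
            for v in values:
                if v < limit:
                    out.append(v * 0.9)
            return out

        def vectorized(values, limit):
            # One primitive call per column (vector): overhead amortized over many values.
            qualifying = values[values < limit]
            return qualifying * 0.9

        # Both produce the same discounted prices; the vectorized form is what lets
        # the CPU and compiler work on long runs of independent values.
        assert np.allclose(tuple_at_a_time(prices, 50.0), vectorized(prices, 50.0))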
  • Data Warehouse Fundamentals for Storage Professionals – What You Need to Know EMC Proven Professional Knowledge Sharing 2011
    Data Warehouse Fundamentals for Storage Professionals – What You Need To Know. EMC Proven Professional Knowledge Sharing 2011. Bruce Yellin, Advisory Technology Consultant, EMC Corporation.
    Table of Contents: Introduction; Data Warehouse Background; What Is a Data Warehouse?; Data Mart Defined; Schemas and Data Models; Data Warehouse Design – Top Down or Bottom Up?; Extract, Transformation and Loading (ETL); Why You Build a Data Warehouse: Business Intelligence; Technology to the Rescue?; RASP – Reliability, Availability, Scalability and Performance; Data Warehouse Backups.
  • Master Data Management Whitepaper
    Creating a Master Data Environment: An Incremental Roadmap for Public Sector Organizations.
    Master Data Management (MDM) is the processes and technologies used to create and maintain consistent and accurate representations of master data. Movements toward modularity, service orientation, and SaaS make Master Data Management a critical issue. Master Data Management leverages both tools and processes that enable the linkage of critical data through a unified platform that provides a common point of reference. When properly done, MDM streamlines data management and sharing across the enterprise among different areas to provide more effective service delivery. CMA possesses more than 15 years of experience in Master Data Management and more than 20 in data management and integration. CMA has also focused our efforts on public sector since our inception, more than 30 years ago. This document describes the fundamental keys to success for developing and maintaining a long term master data program within a public service organization. www.cma.com
    Understanding Master Data – Master Data Behavior. Master data can be described by the way that it interacts with other data. For example, in transaction systems, master data is almost always involved with transactional data. This relationship between master data and transactional data may be fundamentally viewed as a … transactional data. As it becomes more volatile, it typically is considered more transactional. Simple entities are rarely a challenge to manage and are rarely considered master-data elements. The less complex an element, the less likely the need to manage change for that element. The more valuable the data element is to the organization, the more likely it will be considered a master data element.
  • Data Warehouse Applications in Modern Day Business
    California State University, San Bernardino CSUSB ScholarWorks Theses Digitization Project John M. Pfau Library 2002 Data warehouse applications in modern day business Carla Mounir Issa Follow this and additional works at: https://scholarworks.lib.csusb.edu/etd-project Part of the Business Intelligence Commons, and the Databases and Information Systems Commons Recommended Citation Issa, Carla Mounir, "Data warehouse applications in modern day business" (2002). Theses Digitization Project. 2148. https://scholarworks.lib.csusb.edu/etd-project/2148 This Project is brought to you for free and open access by the John M. Pfau Library at CSUSB ScholarWorks. It has been accepted for inclusion in Theses Digitization Project by an authorized administrator of CSUSB ScholarWorks. For more information, please contact [email protected]. DATA WAREHOUSE APPLICATIONS IN MODERN DAY BUSINESS A Project Presented to the Faculty of California State University, San Bernardino In Partial Fulfillment of the Requirements for the Degree Master of Business Administration by Carla Mounir Issa June 2002 DATA WAREHOUSE APPLICATIONS IN MODERN DAY BUSINESS A Project Presented to the Faculty of California State University, San Bernardino by Carla Mounir Issa June 2002 Approved by: Date Dr. Walter T. Stewart ABSTRACT Data warehousing is not a new concept in the business world. However, the constant changing application of data warehousing is what is being innovated constantly. It is these applications that are enabling managers to make better decisions through strategic planning, and allowing organizations to achieve a competitive advantage. The technology of implementing the data warehouse will help with the decision making process, analysis design, and will be more cost effective in the future.
  • Lecture Notes on Data Mining & Data Warehousing
    LECTURE NOTES ON DATA MINING & DATA WAREHOUSING. COURSE CODE: BCS-403. DEPT OF CSE & IT, VSSUT, Burla.
    SYLLABUS:
    Module I – Data Mining overview; Data Warehouse and OLAP Technology; Data Warehouse Architecture; Steps for the Design and Construction of Data Warehouses; A Three-Tier Data Warehouse Architecture; OLAP; OLAP queries; metadata repository; Data Preprocessing – Data Integration and Transformation; Data Reduction; Data Mining Primitives: What Defines a Data Mining Task?; Task-Relevant Data; The Kind of Knowledge to be Mined; KDD.
    Module II – Mining Association Rules in Large Databases; Association Rule Mining; Market Basket Analysis: Mining a Road Map; The Apriori Algorithm: Finding Frequent Itemsets Using Candidate Generation; Generating Association Rules from Frequent Itemsets; Improving the Efficiency of Apriori; Mining Frequent Itemsets without Candidate Generation; Multilevel Association Rules; Approaches to Mining Multilevel Association Rules; Mining Multidimensional Association Rules for Relational Databases and Data Warehouses; Multidimensional Association Rules; Mining Quantitative Association Rules; Mining Distance-Based Association Rules; From Association Mining to Correlation Analysis.
    Module III – What is Classification? What Is Prediction?; Issues Regarding Classification and Prediction; Classification by Decision Tree Induction; Bayesian Classification; Bayes Theorem; Naïve Bayesian Classification; Classification by Backpropagation; A Multilayer Feed-Forward Neural Network; Defining a Network Topology; Classification Based on Concepts from …
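    As a small illustration of the Apriori idea named in Module II (a simplified sketch over an invented toy dataset; it omits the subset-based candidate pruning of the full algorithm), frequent k-itemsets are extended to candidate (k+1)-itemsets and only candidates meeting the minimum support survive:

        transactions = [{"milk", "bread"}, {"milk", "butter"},
                        {"milk", "bread", "butter"}, {"bread", "butter"}]
        min_support = 2

        def support(itemset):
            # Number of transactions containing the whole itemset.
            return sum(1 for t in transactions if itemset <= t)

        items = {i for t in transactions for i in t}
        frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
        all_frequent, k = list(frequent), 2
        while frequent:
            # Candidate generation: join frequent itemsets into size-k candidates.
            candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
            frequent = [c for c in candidates if support(c) >= min_support]
            all_frequent += frequent
            k += 1

        print(sorted(map(sorted, all_frequent)))   # singletons plus the three frequent pairs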
  • Large-Scale Statistics with MonetDB and R – Hannes Mühleisen
    Large-Scale Statistics with MonetDB and R. Hannes Mühleisen. DAMDID 2015, 2015-10-13.
    About Me: Postdoc at CWI, Database Architectures group, since 2012. Amsterdam is nice; we have open positions. Special interest in data management for statistical analysis. Various research & software projects in this space.
    Outline: Column Store / MonetDB Introduction; Connecting R and MonetDB; Advanced Topics: "R as a Query Language", "Capturing The Laws of Data Nature".
    Column Stores / MonetDB Introduction. In Postgres, Oracle, DB2, etc., the conceptual table (class, speed, flux: NX 1 3; Constitution 1 8; Galaxy 1 3; Defiant 1 6; Intrepid 1 1) is also the physical on-disk layout, one row after another. In a column store, the same table is stored one column at a time (class: NX, Constitution, Galaxy, Defiant, Intrepid; speed: 1, 1, 1, 1, 1; flux: 3, 8, 3, 6, 1), which enables compression.
    What is MonetDB? A strict columnar-architecture OLAP RDBMS (SQL). Started by Martin Kersten and Peter Boncz ~1994. Free & open: open source, active development ongoing. www.monetdb.org. Peter A. Boncz, Martin L. Kersten, and Stefan Manegold. 2008. Breaking the memory wall in MonetDB. Communications of the ACM 51, 12 (December 2008), 77-85. DOI=10.1145/1409360.1409380.
    MonetDB today: expanded C code; MAL "DB assembly" & optimisers; SQL-to-MAL compiler; memory-mapped files; automatic indexing.
    Some MAL: optimisers run on MAL code; efficient column-at-a-time implementations. EXPLAIN SELECT * FROM mtcars; yields MAL such as: X_2 := sql.mvc(); X_3:bat[:oid,:oid] := sql.tid(X_2,"sys","mtcars"); X_6:bat[:oid,:dbl] …
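    The layout contrast sketched in these slides can be rendered in plain Python (illustration only, reusing the slide's example table): the same conceptual table stored row-wise versus column-wise, one array per attribute, which is what enables compression and column-at-a-time operators.

        rows = [                                   # row store: one record per tuple
            {"class": "NX", "speed": 1, "flux": 3},
            {"class": "Constitution", "speed": 1, "flux": 8},
            {"class": "Galaxy", "speed": 1, "flux": 3},
            {"class": "Defiant", "speed": 1, "flux": 6},
            {"class": "Intrepid", "speed": 1, "flux": 1},
        ]

        columns = {                                # column store: one array per attribute
            "class": ["NX", "Constitution", "Galaxy", "Defiant", "Intrepid"],
            "speed": [1, 1, 1, 1, 1],              # constant column compresses trivially
            "flux":  [3, 8, 3, 6, 1],
        }

        # A scan touching only one attribute reads a single dense array here,
        # instead of skipping over the other fields of every row.
        print(sum(r["flux"] for r in rows) / len(rows))
        print(sum(columns["flux"]) / len(columns["flux"]))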
  • LDBC Introduction and Status Update
    8th Technical User Community meeting – LDBC. Peter Boncz. ldbcouncil.org.
    LDBC Organization (non-profit): "sponsors" plus non-profit members (FORTH, STI2) and personal members; Task Forces and volunteers developing benchmarks; TUC: Technical User Community (8 workshops, ~40 graph and RDF user case studies, 18 vendor presentations).
    What does a benchmark consist of? Four main elements: data & schema (defines the structure of the data); workloads (defines the set of operations to perform); performance metrics (used to measure, quantitatively, the performance of the systems); execution & auditing rules (defined to assure that the results from different executions of the benchmark are valid and comparable). Software as open source (GitHub): data generator, query drivers, validation tools, …
    Audience: for developers facing graph processing tasks – a recognizable scenario to compare the merits of different products and technologies; for vendors of graph database technology – a checklist of features and performance characteristics; for researchers, both industrial and academic – challenges in multiple choke-point areas such as graph query optimization and (distributed) graph analysis.
    SPB scope: the scenario involves a media/publisher organization that maintains semantic metadata about its journalistic assets (articles, photos, videos, papers, books, etc.), also called Creative Works. The Semantic Publishing Benchmark simulates consumption of RDF metadata (Creative Works) and updates of RDF metadata related to annotations. Aims to be …
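    A minimal way to picture the four benchmark elements listed above is as a plain data structure (illustrative only; the field names and values are invented and do not reflect LDBC's actual driver format):

        benchmark = {
            "data_and_schema": {"generator": "synthetic social graph", "scale_factor": 1},
            "workloads": ["interactive short reads", "complex traversals", "updates"],
            "performance_metrics": ["throughput (ops/s)", "p99 latency (ms)"],
            "execution_and_auditing_rules": ["fixed warm-up", "validated results",
                                             "full disclosure of configuration"],
        }

        # A driver would run each workload against the generated data and report
        # the declared metrics under the stated execution rules.
        for workload in benchmark["workloads"]:
            print("run", workload, "and report", ", ".join(benchmark["performance_metrics"]))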
  • Database Vs. Data Warehouse: a Comparative Review
    Insights Database vs. Data Warehouse: A Comparative Review By Drew Cardon Professional Services, SVP Health Catalyst A question I often hear out in the field is: I already have a database, so why do I need a data warehouse for healthcare analytics? What is the difference between a database vs. a data warehouse? These questions are fair ones. For years, I’ve worked with databases in healthcare and in other industries, so I’m very familiar with the technical ins and outs of this topic. In this post, I’ll do my best to introduce these technical concepts in a way that everyone can understand. But, before we discuss the difference, could I ask one big favor? This will only take 10 seconds. Could you click below and take a quick poll? I’d like to find out if your organization has a data warehouse, data base(s), or if you don’t know? This would really help me better understand how prevalent data warehouses really are. Click to take our 10 second database vs data warehouse poll Before diving in to the topic, I want to quickly highlight the importance of analytics in healthcare. If you don’t understand the importance of analytics, discussing the distinction between a database and a data warehouse won’t be relevant to you. Here it is in a nutshell. The future of healthcare depends on our ability to use the massive amounts of data now available to drive better quality at a lower cost. If you can’t perform analytics to make sense of your data, you’ll have trouble improving quality and costs, and you won’t succeed in the new healthcare environment.
  • Master's Thesis: Role of Metadata in the Data Warehousing Environment
    2006:24 MASTER'S THESIS. Role of Metadata in the Data Warehousing Environment. Kranthi Kumar Parankusham, Ravinder Reddy Madupu. Luleå University of Technology, Master Thesis, Continuation Courses, Computer and Systems Science, Department of Business Administration and Social Sciences, Division of Information Systems Sciences. 2006:24 – ISSN: 1653-0187 – ISRN: LTU-PB-EX--06/24--SE.
    Preface. This study is performed as part of the master's programme in computer and system sciences during 2005-06. It has been a very useful and valuable experience and we have learned a lot during the study, not only about the topic at hand but also how to manage the work in the specified time. However, this workload would not have been manageable if we had not received help and support from a number of people whom we would like to mention. First of all, we would like to thank our professor Svante Edzen for his help and supervision during the writing of the thesis. And also we would like to express our gratitude to all the employees who allocated their valuable time to share their professional experience. On a personal level, Kranthi would like to thank all his family for their help and especially his friends Kiran, Chenna Reddy, and Deepak Kumar. Ravi would like to give the greatest of thanks to his family for always being there when needed, and constantly taking so extremely good care of me…. Also, thanks to all my friends for being close to me. Luleå University of Technology, 31 January 2006. Kranthi Kumar Parankusham, Ravinder Reddy Madupu.
    Abstract. In order for a well-functioning data warehouse to succeed, many components must work together.
  • The Teradata Data Warehouse Appliance Technical Note on Teradata Data Warehouse Appliance Vs
    The Teradata Data Warehouse Appliance: Technical Note on Teradata Data Warehouse Appliance vs. Oracle Exadata. By Krish Krishnan, Sixth Sense Advisors Inc., November 2013. Sponsored by: …
    Table of Contents: Background; Loading Data; Data Availability; Data Volumes; Storage Performance; Operational Costs; The Data Warehouse Appliance Selection; Under the Hood; Oracle Exadata; Oracle Exadata Intelligent Storage; Hybrid Columnar Compression; Mixed Workload Capability; Management; Scalability; Consolidation; Supportability; Teradata Data Warehouse Appliance; Hardware Improvements; Teradata's Intelligent Memory (TIM) vs. Exadata Flash Cache.
  • Enterprise Data Warehouse (EDW) Platform PRIVACY IMPACT ASSESSMENT (PIA)
    U.S. Securities and Exchange Commission Enterprise Data Warehouse (EDW) Platform PRIVACY IMPACT ASSESSMENT (PIA) May 31, 2020 Office of Information Technology Privacy Impact Assessment Enterprise Data Warehouse Platform (EDW) General Information 1. Name of Project or System. Enterprise Data Warehouse (EDW) Platform 2. Describe the project and its purpose or function in the SEC’s IT environment. The Enterprise Data Warehouse (EDW) infrastructure will enable the provisioning of data to Commission staff for search and analysis through a virtual data warehouse platform. As part of its Enterprise Data Warehouse initiative, the SEC will begin to integrate data from various systems to provide more comprehensive management and financial reporting on a regular basis and facilitate better decision-making. (SEC Strategic Plan, 2014-2018, pg.31) The EDW will replicate source system-provided data from other operational SEC systems and provide a simplified way of producing Agency reports. The EDW is designed for summarization of information, retrieval of information, reporting speed, and ease of use (i.e. creating reports). It will enhance business intelligence, augment data quality and consistency, and generate a high return on investment by allowing users to quickly search and access critical data from a single location and obtain historical intelligence to analyze different time periods and performance trends in order to make future predictions. (SEC FY2013 Agency Financial Report, pg.31-32) The data content in the Enterprise Data Warehouse platform will be provisioned incrementally over time via a series of distinct development projects, each of which will target specific data sets based on requirements provided by SEC business stakeholders.
  • High-Performance Analytics for Today's Business
    Teradata Database: High-Performance Analytics for Today's Business. DATA WAREHOUSING.
    Already the foundation of the world's most successful data warehouses, Teradata Database is designed for decision support and parallel implementation. And it's as versatile as it is powerful. It supports hundreds of installations, from three terabytes to huge warehouses with petabytes (PB) of data and thousands of users. Teradata Database's inherent parallelism and architecture provide more than pure performance. It also offers low total cost of ownership (TCO) and a simple, scalable, and self-managing solution for building a successful data warehouse.
    Because Performance Counts. The success of your data warehouse has always rested on the performance of your database engine. But as your analytics and data requirements grow increasingly complex, performance becomes more vital than ever. So does finding a faster, simpler way to manage your data warehouse. You can find that combination of unmatched performance, flexibility, and efficient management in the Teradata® Database. In fact, we continually enhance the Teradata Database to provide you and your business with new features and functions for even higher levels of flexibility and performance. Work with dynamic data and schema-on-read operational models together with structured data through completely integrated JSON and Avro data.
    With the Teradata Software-Defined Warehouse, customers have the flexibility to segregate data and users on a single system. By combining Teradata Database's secure zones with Teradata's industry-leading mixed workload management capabilities, multi-tenant economies and flexible privacy regulation compliance become possible in a single Teradata Database system. Query processing performance is also accelerated by the world's most advanced hybrid row/column table storage capability and in-memory optimization and vectorization for in-memory database speed.