Top-of-Mind Time for In-Memory Databases


CONTENTS

1 EDITOR'S NOTE
2 IN-MEMORY DATABASES HELP MEET NEED FOR IT SPEED
3 IBM GIVES DB2 MORE GAS WITH IN-MEMORY ACCELERATOR
4 ADD-ON SOFTWARE TAKES ORACLE 12C IN NEW DIRECTION

EDITOR'S NOTE

In-Memory's Moment in the Database Sun

In-memory databases used to be territory for niche technology vendors and equally niche applications. But today vendors of all database stripes—SQL, NoSQL, NewSQL—offer in-memory technology, some as standalone products and others as add-ons to disk-based database management systems. That includes relational database market leaders Oracle, IBM and Microsoft, as well as business applications bigwig SAP with its HANA system.
In an interview with SearchDataManagement's Jack Vaughan, data management consultant William McKnight said that as the price of RAM declines, "memory in a lot of ways is becoming the new disk." There are good reasons for that. Keeping analytical data in memory can sharply reduce query response times and enable end users to run deeper analyses, McKnight said. "When you can do that," he added, "you're hopefully producing a better business."

IDC analyst Mike Rosen wrote in a January 2014 report that in-memory databases could provide "transformational performance improvements" in operational and analytical applications. But in a video posted on YouTube the following month, Rosen said the heavily hyped technology also has "the potential to be the next failed silver bullet from IT." Challenges abound, he cautioned, including data migration issues and the proliferation of data silos that make it hard to do real-time analytics.

This guide explores in-memory database trends and offers advice to help you decide whether the technology is right for your organization. First, we assess applications that are a good fit for in-memory software. We follow that with a close look at both IBM's BLU Acceleration technology and Oracle's in-memory add-on for its 12c database.

—Craig Stedman, Executive Editor, SearchDataManagement

IN-MEMORY DATABASES HELP MEET NEED FOR IT SPEED

Ten years ago, it would have been unthinkable to imagine enterprise-class database management systems running primarily in main memory. Yet over time, RAM prices have steadily declined to the point where doing that is no longer prohibitively expensive. The cost of memory is orders of magnitude lower than it used to be, and the plummeting prices have opened new opportunities for configuring database systems to take advantage of increased main memory capacities.

And it's no longer just startup companies developing in-memory databases designed to support high-performance processing needs. Leading database and software vendors—IBM, Oracle, Microsoft, SAP and Teradata—are marketing in-memory database technology, putting money behind their belief that mainstream organizations are ready to consider incorporating such software into IT systems.

In-memory databases provide accelerated application performance in two ways. First and foremost, maintaining data in main memory instead of significantly slower disk-based storage minimizes or even eliminates the data latency typically associated with database queries. Second, alternative database architectures amenable to in-memory processing enable more efficient use of the available memory.
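The first of those two effects is easy to see with any database that can run either on disk or entirely in RAM. As a small illustrative sketch (not part of the guide itself), SQLite keeps a whole database in main memory when opened with the special `:memory:` filename, so the same SQL that would otherwise incur disk reads is served straight from RAM:

```python
import sqlite3

# Open a database that lives entirely in RAM; nothing touches disk.
# The same schema and queries would work unchanged against a
# file-backed database opened with sqlite3.connect("sales.db").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("west", 250.0), ("east", 50.0)],
)

# The aggregation is answered from in-memory pages -- no disk I/O.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 250.0)]
```

Swapping `":memory:"` for a file path moves the identical workload back to disk-backed storage, which makes the mode handy for simple latency comparisons.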
For example, many in-memory technologies use a columnar layout in tables instead of a row-based orientation. Values aligned along columns are more suitable for compression, and the ability to rapidly scan all column values speeds query execution.

A WINNING HAND?

Conceptually, it's hard to argue against application speed-up and optimized organization of data. But in the real world, when should IT and data management practitioners recommend that transaction processing or business analytics needs warrant the investments in technology, resources and new skills required to transition to an in-memory framework?

The practical aspects of that question involve weighing the need for increased database performance against the associated costs of acquiring and deploying an in-memory platform.
Even though RAM costs have decreased dramatically, systems with large-scale memory configurations still carry a healthy price tag compared with database servers that stay with disk storage only. Corporate and business executives might experience sticker shock when they see the in-memory bill. To make an in-memory technology purchase pay off, you need to find applications with characteristics that make them a good fit.

The answer lies partly in assessing your organization's demand for processing increased data volumes and the business value of reduced database response times. Consider this example: Enabling real-time analysis in supply chain management applications that incorporate a variety of data streams—inventory data from warehouses and retail locations, information about the stocks of items on trucks or rail cars en route between various sites, updates on traffic and weather conditions—could help drive faster decisions on routing and distribution to ensure that goods get where they need to be, when they need to be there. A resulting increase in sales clearly could justify the in-memory investment.

TAKE A LOOK INSIDE

It's a good idea to take the overall characteristics of your organization into account. In-memory databases are worth considering if one or more of the following terms describe the environment you work in.

Open to investing in IT. Corporate execs must be willing to spend money on hardware with enough memory to satisfy the processing needs of business applications, even though scaling out systems to support in-memory computing carries a higher price than buying disk-heavy database servers does.

Analytically agile. In-memory systems can power reporting and analysis applications that help improve business processes—and results—by enabling end users to make informed decisions on a shorter cycle.
For exam- studied accessed just 1% of the available infor- ple, transitioning from weekly to hourly sales mation, while 92% of queries used only 20% of forecasting can lead to the creation of real-time the data at hand. Identifying “hot” data that’s Home product pricing models that increase profit- accessed frequently and keeping it in memory ability—as long as pricing decisions can also should greatly reduce query response times. Editor’s Note be communicated and executed rapidly. In summary, organizations whose business In-Memory Databases Help Supportive of mixed-use development. processes can benefit from real-time data avail- Meet Need for IT Allowing transactional and analytical appli- ability, simultaneous mixed-use applications Speed cations to simultaneously access the same and noticeably faster reporting and analytics database is another way to provide real-time are good candidates for deploying in-memory IBM Gives DB2 analytics capabilities. But resource conflicts databases. There are some scenarios in which More Gas With can cause performance problems with a con- the decision to do so is a no-brainer. But in In-Memory Accelerator ventional relational database, largely due to most cases, the consideration of in-memory the latency associated with finding and access- databases must be aligned with IT spending Add-On Software ing data records stored on disk. With an in- priorities and corporate business objectives— Takes Oracle 12c memory configuration, latency becomes less including a demonstrable awareness of how key in New Direction of an issue. areas of corporate performance could be im- proved by the faster transaction processing and Data-aware. In-memory technology can also access to reports and ad hoc query results that be a valuable tool when a large percentage of in-memory software makes possible. 
—David Loshin

IBM GIVES DB2 MORE GAS WITH IN-MEMORY ACCELERATOR

In recent years, technology vendors, industry analysts and corporate data managers have focused much of their attention on advances in specialized data warehouse engines and NoSQL databases. But relational databases aren't standing still. IBM's addition of a set of in-memory software called BLU Acceleration to its DB2 database is a case in point.

Some of those traits are the kinds of things that have given analytical databases and NoSQL upstarts much of their allure as alternatives to DB2 and other top relational databases. For example, columnar table structures, often coupled with advanced compression techniques, have become associated with the
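Why columnar layouts pair so well with compression can be shown with a toy sketch (my own illustration under made-up data, not IBM's BLU implementation): serializing the same hypothetical table column by column puts similar values next to each other, which a general-purpose compressor exploits far better than a row-by-row layout.

```python
import zlib

# A hypothetical sales table: each row is (id, category, amount).
rows = [(i, "electronics" if i < 5000 else "groceries", i * 3)
        for i in range(10000)]

# Row orientation: values from different columns interleaved.
row_bytes = ";".join(f"{i},{c},{a}" for i, c, a in rows).encode()

# Columnar orientation: each column's values stored contiguously,
# so the repeated category strings form long, compressible runs.
col_bytes = ";".join(
    [",".join(str(i) for i, _, _ in rows),
     ",".join(c for _, c, _ in rows),
     ",".join(str(a) for _, _, a in rows)]
).encode()

# Same data, but the columnar serialization compresses smaller.
print(len(zlib.compress(row_bytes)), ">", len(zlib.compress(col_bytes)))
```

Real columnar engines go further still, applying run-length and dictionary encoding per column and scanning the compressed form directly.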