Data Storage Architectures for Machine Learning and Artificial Intelligence On-Premises and Public Cloud

REPORT

ENRICO SIGNORETTI

TOPICS: ARTIFICIAL INTELLIGENCE, DATA STORAGE, MACHINE LEARNING

TABLE OF CONTENTS

1 Summary
2 Market Framework
3 Maturity of Categories
4 Considerations for Selecting ML/AI Storage
5 Vendors to Watch
6 Near-Term Outlook
7 Key Takeaways
8 About Enrico Signoretti
9 About GigaOm
10 Copyright

1. Summary

There is growing interest in machine learning (ML) and artificial intelligence (AI) in enterprise organizations. The market is quickly moving from infrastructures designed for research and development to turn-key solutions that respond quickly to new business requests. ML/AI are strategic technologies across all industries, improving business processes while enhancing the competitiveness of the entire organization.

ML/AI software tools are improving and becoming more user-friendly, making it easier to build new applications or reuse existing models for more use cases. As the ML/AI market matures, high-performance computing (HPC) vendors are now joined by traditional storage manufacturers that usually focus on enterprise workloads. Even though the requirements are similar to those of big data analytics workloads, the specific nature of ML/AI algorithms and GPU-based computing demands more attention to throughput and $/GB, primarily because of the sheer amount of data involved in most projects.

Depending on several factors, including the organization's strategy, size, security needs, compliance, cost control, and flexibility, the infrastructure could be entirely on-premises, in the public cloud, or a combination of both (hybrid); see Figure 1. The most flexible solutions are designed to run in all of these scenarios, giving organizations ample freedom of choice. In general, long-term, large-capacity projects run by skilled teams are more likely to be developed on-premises. The public cloud, with its flexibility, is usually chosen by smaller teams and for less demanding projects.

ML/AI workloads require infrastructure efficiency to yield rapid results. With the exception of the initial data collection, many parts of the workflow are repeated over time, so managing latency and throughput is crucial for the entire process. The system must handle metadata quickly while maximizing throughput to ensure that GPUs are always fed at their utmost capacity. A single modern GPU is a very expensive component, able to crunch data at 6 GB/s and more, and each compute node can have multiple GPUs installed. Additionally, CPU-storage proximity is important, which is why NVMe-based flash devices are usually selected for their parallelism and performance. What is more, the data sets needed to train a neural network require a huge amount of storage capacity. For this reason, scale-out object stores are usually preferred because of their scalability, rich metadata, and competitive cost.
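To put these numbers in perspective, the short sketch below estimates the aggregate read throughput a storage system must sustain to keep every GPU in a small training cluster fed. The 6 GB/s per-GPU figure comes from the text above; the node count, GPUs per node, and sustained-efficiency factor are illustrative assumptions.

```python
# Back-of-the-envelope sizing: aggregate storage throughput needed to keep
# training GPUs fed. The per-GPU ingest rate (6 GB/s) is taken from the
# report; cluster size and efficiency factor are assumptions.

GB_PER_S_PER_GPU = 6        # per-GPU data ingest rate (from the report)
GPUS_PER_NODE = 8           # assumption: typical dense GPU server
NODES = 4                   # assumption: small training cluster
SUSTAINED_EFFICIENCY = 0.7  # assumption: fraction of peak the array sustains

def required_throughput_gbps(nodes: int, gpus_per_node: int) -> float:
    """Raw aggregate read bandwidth the storage tier must deliver (GB/s)."""
    return nodes * gpus_per_node * GB_PER_S_PER_GPU

raw = required_throughput_gbps(NODES, GPUS_PER_NODE)
provisioned = raw / SUSTAINED_EFFICIENCY

print(f"Raw demand:   {raw:.0f} GB/s")
print(f"To provision: {provisioned:.0f} GB/s at {SUSTAINED_EFFICIENCY:.0%} efficiency")
```

Even this small hypothetical cluster demands hundreds of GB/s of sustained reads, which is consistent with the report's observation that parallel, scale-out designs dominate this market.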
Figure 1: Possible combinations of storage and computing resources in ML/AI projects

In this report, we discuss the most recent storage architecture designs and innovative solutions deployed on-premises, in the cloud, and in hybrid configurations, aimed at supporting ML/AI workloads for enterprise organizations of all sizes.

Key findings:

• Enterprise organizations are aware of the strategic value of ML/AI for their business and are increasing investments in this area.

• End users are looking for turn-key solutions that are easy to implement and that deliver a quick ROI (return on investment).

• Many of the solutions available are based on a two-tier architecture, with a flash-based, parallel, and scale-out file system for active data processing and object storage for capacity and long-term data retention. There are also some innovative solutions that take a different approach, with the two tiers integrated in a single system.

2. Market Framework

There are several processes involved in an ML workflow, and most of them need to be repeated several times. They all demonstrate different workload characteristics, and all need to encompass large data sets to return effective results. Very briefly, an ML workflow (Figure 2) can be summarized as follows; a toy end-to-end sketch in code appears after the list:

• Data collection. Data is collected from one or multiple sources into a single repository. Capacity, scalability, and throughput are the most important metrics, particularly with active data sources and non-historical data.

• Data preparation. The data set is analyzed and stripped of anomalies, out-of-range data, inconsistencies, noise, errors, and any other exceptions that could compromise the outcome. Data is tagged and indexed, making it searchable and reusable. This section of the workflow is characterized by a large number of read and write operations with plenty of metadata updates. It is also interesting to note that some storage systems are now able to complete this operation during the ingestion phase thanks to the implementation of serverless computing mechanisms, accelerating the process while simplifying the infrastructure.

• Analysis and model selection. This part of the process is all about finding the right algorithms and data models for the training phase. The data scientist analyzes the data to find the training model that suits it best and repeats this part as many times as necessary on a small subset of the data. The storage system has to be fast to deliver results quickly, allowing the data scientist to compare several models before proceeding. With recent advancements in this field, new solutions are surfacing and AutoML products are becoming more common, helping end users with this task (for reference, check Andrew Brust's GigaOm Market Landscape Report, AI Within Reach: AutoML Platforms for the Enterprise).

• Neural network training. This is the most compute- and storage-intensive part of the entire workflow. It is where the training data set is passed to the selected algorithms to train the model.

• Production and evaluation. This last part is where the data scientist actually sees the results of the ML model. Storage is no longer accessed as heavily, but the data has to be preserved in case it is necessary to reassess and improve the model.

Figure 2: Typical ML/AI workflow model
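To make the workload characteristics of each stage concrete, here is a minimal, self-contained sketch of the workflow in Python. The data, the toy linear "model", and every function name are illustrative assumptions standing in for real data sources and ML frameworks; what carries over is the I/O pattern noted in each stage's comment.

```python
# Toy end-to-end ML workflow mirroring the stages above. Everything here is
# an illustrative assumption; real pipelines use real data and frameworks.
import random

def collect(n_records: int) -> list[dict]:
    """Data collection: land records in a single repository.
    Capacity- and throughput-bound (large sequential writes)."""
    return [{"x": random.uniform(-10, 10)} for _ in range(n_records)]

def prepare(records: list[dict]) -> list[dict]:
    """Data preparation: drop out-of-range values, tag and index.
    Read/write heavy, with many small metadata updates."""
    clean = [r for r in records if -5 <= r["x"] <= 5]
    for i, r in enumerate(clean):
        r["id"] = i                   # index makes data searchable/reusable
        r["y"] = 2.0 * r["x"] + 1.0   # toy label: y = 2x + 1
    return clean

def select_model(sample: list[dict]) -> float:
    """Analysis/model selection: try candidates on a small subset.
    Latency-sensitive; fast storage shortens each iteration."""
    candidates = [0.5, 1.0, 3.0]      # toy slope hypotheses
    def err(slope: float) -> float:
        return sum((slope * r["x"] + 1.0 - r["y"]) ** 2 for r in sample)
    return min(candidates, key=err)

def train(data: list[dict], slope: float, epochs: int = 50) -> float:
    """Training: repeated full passes over the data set.
    The most compute- and storage-intensive stage."""
    lr = 5e-3
    for _ in range(epochs):           # each epoch re-reads the whole set
        grad = sum(2 * r["x"] * (slope * r["x"] + 1.0 - r["y"]) for r in data)
        slope -= lr * grad / len(data)
    return slope

raw = collect(1000)
clean = prepare(raw)
slope0 = select_model(clean[:50])     # model selection on a small subset
model = train(clean, slope0)          # evaluation; data kept for re-training
print(f"selected slope: {slope0}, trained slope: {model:.3f}")
```

Note how only the training stage touches the full data set repeatedly; collection is write-once, preparation is metadata-heavy, and model selection iterates on a small sample, which is exactly why the stages stress a storage system differently.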
A storage infrastructure designed for ML/AI workloads has to provide performance, capacity, and cost-effectiveness. With this in mind, we can divide the market into two categories:

1. Two-Tier Infrastructure. The first tier is a fast scale-out file system for all active data and front-end operations. This is backed by a second-tier object store for capacity. This flexible solution allows end users to build an infrastructure that can be installed on-premises, in the cloud, or in a hybrid fashion (any combination of the two). This flexibility translates into cost savings but sacrifices some efficiency because of the backend data movements necessary to make the right data available where and when it is needed, as shown in the sketch at the end of this section.

2. Single-System Architecture. A single system is much more efficient and provides top-notch performance by hiding data movements internally or making them unnecessary. This simplifies infrastructure and operations, contributing to a reduction in TCO (total cost of ownership). On the flip side, these systems are more difficult and expensive to implement in the cloud, especially at scale, limiting them to on-premises or small cloud installations.

Major cloud providers offer managed services based on similar architecture designs, allowing the end user to simplify resource provisioning and management. Unfortunately, this also creates vendor lock-in for large-scale projects and becomes expensive.

3. Maturity of Categories

Enterprises adopt an architecture depending on the following factors:

• Number of current and future projects
• Size of the projects
• Size of the organization
• Performance needs
• Cost

If the organization's strategy is to make a major investment in ML and AI, then it is highly likely that it will adopt an on-premises solution, alongside the other resources needed to run it properly. This is necessary to maintain control over the entire process while containing costs.

Other evaluation criteria include security and maintaining the validity and the source of the original data. In fact, if the data sets include sensitive information and already belong to the company, moving or replicating them to external repositories adds complexity to data management, monitoring, and auditing processes. Data governance and stewardship are particularly important in highly regulated industries and for organizations that handle personally identifiable information (PII). For example, making copies of PII without having the right tools to mask or remove sensitive information could compromise General Data Protection Regulation (GDPR) compliance.

One of the advantages of deploying a cloud-only or hybrid infrastructure is leveraging the tools made available by cloud providers. In fact, most cloud providers have optimized VM instances with GPUs or other types of coprocessors designed for AI workloads, but they have also developed specific services for ML/AI.
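To illustrate the backend data movement that the two-tier design (and any hybrid deployment) depends on, the sketch below promotes a data set from the object-store capacity tier to a fast NVMe file tier before a training run, then demotes results afterward. The bucket name, mount point, and prefixes are illustrative assumptions; the boto3 calls used (a list_objects_v2 paginator, download_file, upload_file) are standard AWS SDK for Python operations, with S3 standing in for any S3-compatible object store.

```python
# Sketch of two-tier data movement: promote a data set from the object store
# (capacity tier) to the fast file tier before training, demote afterward.
# Bucket, mount point, and prefixes are illustrative assumptions.
from pathlib import Path
import boto3

s3 = boto3.client("s3")
BUCKET = "ml-capacity-tier"              # assumption: capacity-tier bucket
FAST_TIER = Path("/mnt/nvme/datasets")   # assumption: NVMe scale-out FS mount

def promote(prefix: str) -> Path:
    """Copy a data set from the object store to the fast file tier."""
    dest = FAST_TIER / prefix
    dest.mkdir(parents=True, exist_ok=True)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            target = dest / Path(obj["Key"]).name
            s3.download_file(BUCKET, obj["Key"], str(target))
    return dest

def demote(local_dir: Path, prefix: str) -> None:
    """Push results back to the capacity tier and free fast-tier space."""
    for f in local_dir.iterdir():
        s3.upload_file(str(f), BUCKET, f"{prefix}/{f.name}")
        f.unlink()                       # reclaim expensive NVMe capacity

# Usage: stage data in, train against the fast tier, then tier results out.
# data_dir = promote("training-set-2019")
# ... run training against data_dir ...
# demote(Path("/mnt/nvme/checkpoints"), "checkpoints/run-42")
```

These staging copies are exactly the efficiency cost the two-tier category trades away for deployment flexibility; single-system architectures avoid them by hiding or eliminating the movement internally.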