OpenIO SDS on ARM: A Practical and Cost-Effective Object Storage Infrastructure Based on SoYouStart Dedicated ARM Servers


OpenIO SDS on ARM
A practical and cost-effective object storage infrastructure based on SoYouStart dedicated ARM servers.

Copyright © 2017 OpenIO SAS. All Rights Reserved. Restriction on Disclosure and Use of Data. 30 September 2017

Table of Contents

Introduction
Benchmark Description
  1. Architecture
  2. Methodology
  3. Benchmark Tool
Results
  1. 128KB objects
     Disk and CPU metrics (on 48 nodes)
  2. 1MB objects
     Disk and CPU metrics (on 48 nodes)
  3. 10MB objects
     Disk and CPU metrics (on 48 nodes)
Cluster Scalability
  Total disk IOps
Conclusion

Introduction

In this white paper, OpenIO demonstrates how to use its SDS object storage platform with dedicated SoYouStart ARM servers to build a flexible private cloud. This S3-compatible storage infrastructure is ideal for a wide range of uses, offering full control over data without the complexity found in other solutions.

OpenIO SDS is a next-generation object storage solution with a modern, lightweight design that combines flexibility, efficiency, and ease of use. It is open source software that can be installed on ARM and x86 servers, making it possible to build a hyper-scalable storage and compute platform without the risk of lock-in. It offers excellent TCO for the highest and fastest ROI.

Object storage is generally associated with large capacities, and its benefits are usually only visible in large installations. But thanks to the characteristics of OpenIO SDS, this next-generation object storage solution can be cost effective even in the smallest installations. And it can easily grow from a simple setup to a full-sized data center, depending on users’ storage needs: one node at a time, with a linear increase in performance and capacity.
OpenIO SDS’s simple, lightweight design allows it to be installed on very small nodes. The minimal requirements for a node on ARM are 512MB of RAM and 1 CPU core, with packages available for Raspbian and Ubuntu Linux (supporting Raspberry Pi computers). The software’s flexibility is powered by Conscience technology, a set of algorithms and mechanisms that continuously monitors the cluster, computes quality scores for all nodes, and chooses the best node for each operation. Thanks to Conscience, all operations are dynamically distributed, and there is no need to rebalance the cluster when new resources are added.

By partnering with SoYouStart, a dedicated server provider built on OVH’s worldwide infrastructure, we were able to build a cluster and run a complete set of benchmarks to demonstrate the benefits of an ARM-based storage solution that can quickly scale from three nodes to an unlimited number of nodes at a reasonable cost. Thanks to its very competitive offer, SoYouStart enables small and medium-sized organizations to build private infrastructures and make them available on high-speed public networks without the hassle of purchasing and maintaining physical hardware.

For this benchmark, the OpenIO team worked on a 48-node configuration to take advantage of erasure coding and parallelism, but the minimal configuration supported in production starts at three nodes, allowing end users with the smallest storage needs to take advantage of object storage at reasonable prices. SoYouStart offers ARM-based servers in European and North American datacenters, and the configuration described in the following pages is replicable in those countries, allowing end users to comply with local laws and regulations. (https://www.soyoustart.com/en/server-storage/)

Benchmark Description

1. Architecture

For our test, we chose to work on a two-tier architecture.
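Conscience’s actual inputs and scoring formula are internal to OpenIO SDS, but the idea of score-based node selection described above can be sketched as follows. The metric names, the geometric-mean score, and the weighted-random pick are illustrative assumptions, not OpenIO’s implementation:

```python
import random

# Illustrative node metrics; Conscience's real inputs and scoring
# formula are internal to OpenIO SDS.
nodes = {
    "arm-node-01": {"cpu_idle": 0.80, "io_idle": 0.60, "free_space": 0.90},
    "arm-node-02": {"cpu_idle": 0.30, "io_idle": 0.40, "free_space": 0.50},
    "arm-node-03": {"cpu_idle": 0.95, "io_idle": 0.85, "free_space": 0.20},
}

def score(metrics):
    """Toy quality score: geometric mean of the normalized metrics."""
    product = 1.0
    for value in metrics.values():
        product *= value
    return product ** (1.0 / len(metrics))

def pick_node(candidates):
    """Score-weighted random choice: healthy nodes are favored, while
    load still spreads across the cluster instead of hitting one node."""
    names = list(candidates)
    weights = [score(candidates[name]) for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Every operation picks a node from the live scores; a newly added node
# simply starts reporting metrics -- no explicit rebalancing step.
print(pick_node(nodes))
```

Because selection is recomputed from current scores on every operation, new capacity is used as soon as it is advertised, which matches the “no rebalance” behavior described above.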
This two-tier design allowed us to install the software needed to run the benchmark testing suite on two x86 nodes, which also acted as Swift gateways, while the storage layer was built on 48 ARM servers (2 CPU cores, 2GB RAM, 2TB HDD, unlimited traffic, and a public IP address: https://www.soyoustart.com/fr/offres/162armada1.xml). Each node was configured to reflect the minimum requirements for OpenIO SDS.

Filesystem:
• Root volume (/): EXT4, 8 GB
• Data volume for OpenIO SDS data (/var/lib/oio): XFS, 1.9 TB

The object store was configured with a dynamic data protection policy that enabled erasure coding for objects larger than 250 KB and three-way replication for smaller ones.

OpenIO SDS is easy to deploy and scale thanks to the available Ansible role (https://github.com/open-io/ansible-role-openio-sds) and the OVH/SoYouStart APIs.

2. Methodology

For the benchmark, the platform was populated with 500 containers and 50 objects in each container. Object sizes were 128KB, 1MB, or 10MB, distributed in equal parts.

A first type of test (80% read / 20% write), close to a real use-case scenario, was performed for each object size, and we launched seven different runs based on different levels of parallelism (5, 10, 20, 40, 80, 160, and 320 workers).

A second test was designed to verify the linear scalability of the solution. In this case, a 100% read run was launched against 10MB objects on three different cluster configurations: 12, 24, and 48 nodes.

Each run lasted 5 minutes (long enough to surface any performance issues).

3. Benchmark Tool

We ran the tests using COSBench, a tool developed by Intel to test object storage solutions. It is open source, so results can be easily compared and verified. In this case, we chose to use the Swift API, but OpenIO SDS is also compatible with the S3 API.
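The size-based protection policy described above (erasure coding above 250 KB, three-way replication below) can be sketched in a few lines. The function and policy labels are illustrative, not OpenIO SDS configuration syntax:

```python
EC_THRESHOLD_BYTES = 250 * 1024  # 250 KB, the cutoff used in this benchmark

def protection_policy(object_size_bytes):
    """Return the protection scheme the benchmark cluster's dynamic
    policy would apply (policy names here are illustrative labels)."""
    if object_size_bytes > EC_THRESHOLD_BYTES:
        return "erasure-coding"
    return "3-way-replication"

# The three benchmark object sizes fall on both sides of the threshold:
for size in (128 * 1024, 1 * 1024 ** 2, 10 * 1024 ** 2):
    print(f"{size // 1024} KB -> {protection_policy(size)}")
```

Note that, with this policy, the 128KB runs exercise replication (three full copies per object) while the 1MB and 10MB runs exercise erasure coding, which spreads encoded fragments across nodes at a lower raw-capacity overhead.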
COSBench (https://github.com/intel-cloud/cosbench) features include:
- Easy to use via a web interface or on the command line
- Exports significant metrics for comparative use
- All metrics saved in CSV format

The benchmark was organized in five phases:
- init: container creation
- prepare: container population with objects
- main: benchmark scenario (with all possible read/write/delete combinations)
- cleanup: object deletion
- dispose: container deletion

[Figure: example COSBench result page]

Results

1. 128KB objects

[Charts: response time (ms) and throughput (op/s) versus number of workers, from 5 to 320. For 128KB objects, 80% read throughput peaked at 144.59 op/s; 20% write throughput peaked at 35.23 op/s.]

Disk and CPU metrics (on 48 nodes)

[Charts: disk and CPU utilization across the 48 nodes during the 128KB runs.]

2. 1MB objects

[Charts: response time (ms) and throughput (op/s) versus number of workers, from 5 to 320. For 1MB objects, 80% read throughput peaked at 137.59 op/s; 20% write throughput peaked at 34.43 op/s.]

Disk and CPU metrics (on 48 nodes)

[Charts: disk and CPU utilization across the 48 nodes during the 1MB runs.]

3. 10MB objects

[Charts: response time (ms) and throughput (op/s) versus number of workers, from 5 to 320. For 10MB objects, 80% read throughput peaked at 69.68 op/s; 20% write throughput peaked at 15.89 op/s.]

Disk and CPU metrics (on 48 nodes)

[Charts: disk and CPU utilization across the 48 nodes during the 10MB runs.]

Cluster Scalability

To demonstrate the linear scalability of OpenIO SDS, as well as its ability to scale quickly when needed, we simulated a cluster of 12 nodes as it was expanded to 24, then 48 nodes. We ran three benchmarks using 80 workers configured to perform 100% read operations on 10MB objects on the three configurations.

[Chart: bandwidth (MB/s) and throughput (op/s) for 100% read of 10MB objects with 80 workers: 235.15 MB/s (22.96 op/s) on 12 nodes, 553.45 MB/s (54.05 op/s) on 24 nodes, and 1,044.48 MB/s (99.42 op/s) on 48 nodes.]

Total disk IOps (100% read, 80 workers):
• 12 nodes: 2.5 KIOps
• 24 nodes: 5 KIOps
• 48 nodes: 8.7 KIOps

Each disk delivers between 180 and 210 IOps, which is very good for SATA disks (and is also the limit, as we reach 100% disk utilization).

On the 48-node cluster, we configured the number of workers to increase progressively from 20 to 40, then 80, 160, and 320. After the COSBench preparation phase, we found that the highest bandwidth was achieved with 80 workers.
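The cluster totals can be cross-checked against the quoted per-disk range, assuming one data disk per node (consistent with the single 2TB HDD per server described earlier):

```python
# Reported totals from the scalability runs (KIOps per cluster size).
measured = {12: 2.5, 24: 5.0, 48: 8.7}

# With one data disk per node, per-disk IOps should land in the
# 180-210 range quoted for these SATA disks at saturation.
for node_count, kiops in measured.items():
    per_disk = kiops * 1000 / node_count
    print(f"{node_count} nodes: {kiops} KIOps -> {per_disk:.0f} IOps per disk")
    assert 180 <= per_disk <= 210
```

All three configurations work out to roughly 180-210 IOps per disk, which supports the claim that the disks, not the software layer, are the bottleneck at these cluster sizes.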