Clouder Documentation Release 1.0


Yannick Buron
May 15, 2017

Contents

1 Getting Started
  1.1 Odoo installation
  1.2 Clouder configuration
  1.3 Services deployed by the oneclick
2 Connect to a new node
3 Images
4 Applications
  4.1 Application Types
  4.2 Application
5 Services
6 Domains and Bases
  6.1 Domains
  6.2 Bases
7 Backups and Configuration
  7.1 Backups
  7.2 Configuration

CHAPTER 1
Getting Started

In this chapter, we'll see a step-by-step guide to install a ready-to-use infrastructure. For the example, the base we will create will be another Clouder.

1.1 Odoo installation

This guide will not cover the Odoo installation itself; we suggest you read the installation documentation on the official website. You can also, and it's probably the easier way, use an Odoo Docker image like https://hub.docker.com/_/odoo/ or https://hub.docker.com/r/tecnativa/odoo-base/.

Due to the extensive use of ssh, Clouder is only compatible with Linux.

Once your Odoo installation is ready, install the paramiko, erppeek and apache-libcloud python libraries (pip install paramiko erppeek apache-libcloud), download the OCA/Connector module and the Clouder modules from GitHub and add them to your addons directory, then install the clouder module and clouder_template_odoo (this module will install a lot of template dependencies, like postgres, postfix etc.).

1.2 Clouder configuration

The first thing to do is to set the sysadmin email in the Clouder configuration. You can also configure here how Clouder will deploy the containers: you can use either Docker Engine (for a single node) or Docker Swarm mode (for multiple nodes). See Configuration for more information about the settings and actions available on the configuration page.

Then you have to create a new Environment. An environment should be seen as a project; it is the object access rights are based upon. All services/bases are linked to an environment, and you'll usually create one for each customer. So first we'll create the main environment, which will deploy all the core services.

You'll also need to create a Domain. The first base you deploy will be accessible under this domain.

Finally, you have to configure the node you want to connect. See how to connect a node for more explanation.

In the node form, click on Test Connection to check that there is no problem. Also make sure your node is configured as the DNS nameserver of your domain.

If everything is ok, select the Odoo oneclick, assign the critical port and click on deploy to launch the oneclick deploy. This will deploy on your node all the services you need for your infrastructure, with an example Odoo service and its database. Keep an eye on your Odoo logs, because you'll see all executed commands there and can check whether something went wrong.
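
A minimal way to follow those logs from the shell while the deploy runs, as a sketch only: the log path below is an assumption for a conventional Odoo setup, not part of this guide, so adjust it to your installation (or use docker logs -f on the Odoo container if Odoo itself runs in Docker).

    # Follow the Odoo server log and keep only the Clouder-related lines.
    # /var/log/odoo/odoo-server.log is an assumed default path; replace it
    # with your own log file, or use `docker logs -f <odoo_container>`
    # if Odoo runs in a container.
    tail -f /var/log/odoo/odoo-server.log | grep -i --line-buffered clouder
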
You can also check these logs afterwards in the node record. These logs can be seen on all objects in Clouder and are generated each time an action is performed. They are the best way for you to understand how each action works, and to see if something went wrong.

1.3 Services deployed by the oneclick

The oneclick will deploy 5 root services:

• The backup service, which will store your backups. Of course, we suggest you move this service to another node once in production.
• The DNS service, which will resolve the DNS records of your infrastructure. You can either use a bind container on your node for this (deployed by the oneclick), or a cloud provider (see http://libcloud.readthedocs.io/en/latest/dns/supported_providers.html).
• The mail service, which will redirect outgoing and incoming emails from your containers.
• The proxy service, which will redirect incoming http requests to the correct service. It also takes care of https encryption and Let's Encrypt certificate generation.
• The Odoo/Clouder service, which will deploy the Postgres database and the Odoo service. The example base deployed by the oneclick will be linked to this service.

Services have a notion of parent/child: when a root service is deployed, it will also deploy all its child services. You can see them by removing the root filter. Only the children on the lowest level are containers actually deployed on your node.

Usually a service can contain these containers/children:

• A Data container, which only represents the volume created for this service. Removing this container means destroying the data.
• A Files container, which represents the version of the application you want to deploy. For Odoo, for example, this container will contain the files downloaded from GitHub, both Odoo itself and the community modules you specified. Replace this container with another one when you want to update your service.
• An Exec container, which contains the binary executed by this service. The proxy service usually redirects to the http port of this container. This container shall contain no data; it is linked to the Data and Files containers, hence you can destroy and rebuild it at will.
• An SSH container, linked to the other containers, which allows you to give your customers SSH access to the files without having to give them access to the node itself.

Note that you can have several layers of services. For example, the root Odoo service contains PostgreSQL and Odoo children, which themselves have Data/Files/Exec/SSH children.

Finally, once all services are deployed, the oneclick will also deploy an example base. This will create a new database on your Odoo service and configure it with the credentials and the modules specified in the application (here the clouder module). It will also configure the links to DNS/Proxy/Mail and generate the Let's Encrypt certificate. After the deployment, you should be able to access your example Odoo instance at the url clouder9.mydomain.com.

Congratulations! You can now easily create another base, deploy any other application you can find in the clouder_template_* modules, or even create your own images and applications.
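
As a quick sanity check at this point, a sketch only, assuming you can open a shell on the node and kept the example domain used in this guide: list the containers the oneclick created and confirm that the example base answers over https.

    # Run on the node itself (or prefix the commands with `ssh root@<your-node>`).
    # List the containers deployed by the oneclick.
    docker ps

    # Confirm the example base answers over https; clouder9.mydomain.com is
    # the example URL from this guide, so replace it with your own domain.
    curl -I https://clouder9.mydomain.com
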
CHAPTER 2
Connect to a new node

We will see in this chapter how to configure the nodes and connect them to your Clouder. Clouder is only compatible with Linux, and the Debian distribution is recommended.

First, if it's not already done, you shall install openssh and Docker on your server. On Debian, you just have to execute these commands:

For ssh:

    apt-get install openssh-client

For Docker:

    sudo curl -sSL https://get.docker.com/ | sudo sh

You can also follow the official instructions at https://docs.docker.com/engine/installation/.

Then, you need to add the public key of your Clouder to the /root/.ssh/authorized_keys file, so Clouder can connect to the server. Root rights are needed for the docker commands. If you don't have a public key or don't know how to create one, a public key will be automatically generated when you register the server in your Clouder.

And that's it. Now we can go to the Server menu in Clouder and create a new record.

Here are the fields needed:

• The prefix domain name used to contact the server. You can only use lowercase letters, digits, hyphens and dots here.
• The domain name used to contact the server.
• The server public IP. You can only use digits and dots here. This IP will be used to publish the services to the Internet.
• The server private IP. You can only use digits and dots here. This IP will be used internally for communication between the services deployed by Clouder.
• The ssh login user to use. Note that you shall make sure this user can use the docker daemon; otherwise root will be used.
• The ssh port to connect to the server.
• The environment of the server, which defines which users can use this server.
• The provider field, if you want to use a clouder provider. More detail below.
• The SSH private and public key to connect to the server. A pair of keys is automatically generated when you create a new record, but you can use your own.
• The ports range for the containers which will be created on this server. When you create a container without specifying a hostport, a new port from this range will be attributed to it.
• Assign port with public IP, if you want to have several services with the same port on this node, with IP failover. This is interesting with Docker Engine, to have several Clouder infrastructures on the same node; otherwise the 25/53/80/443 ports of the mail/DNS/proxy services will conflict. This doesn't work with Docker Swarm mode.
• Check the public checkbox if you want all users of the Clouder to be able to use this server. Otherwise, a user can only access a server if he is the manager of this server (or an administrator).

When you save the new record, the ssh key will be saved on the system hosting the Clouder so it can connect to the new server. You can check the result of the command in the logs.

You have several actions available:

• The reinstall action, if you think the ssh key wasn't correctly installed.
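
If you want to check the connection by hand before resorting to reinstall, a rough manual equivalent of the Test Connection check (an assumption about what it verifies; the exact check may differ) is to open an ssh session to the node with the configured key and run a docker command, since Clouder needs both:

    # Run from the machine hosting Clouder. The key path, port, user and host
    # below are placeholders, not values from this guide; use the key pair,
    # ssh port, login user and domain name you entered in the server record.
    # The user must be able to use the docker daemon, as noted in the field list above.
    ssh -i /path/to/clouder_private_key -p <ssh_port> <ssh_user>@<server_domain> docker info
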