HPC Storage Systems


Nasir YILMAZ, Cyberinfrastructure Engineer

HPC Storage
Key considerations for HPC storage:
● Performance (input/output operations per second, IOPS)
● Availability (uptime)
● File system (FS)
● Recovery requirements (snapshot / backup / replication)
● Cost of the media (SAS, SATA, tape)
Refer to Storage and Quota on our Google site:
https://sites.google.com/a/case.edu/hpcc/servers-and-storage/storage

Available Storage Options on HPC
● Tier 0, HPC Storage: high performance / parallel computing; Panasas PanFS; mounted in HPC as /home, /mnt/pan and /scratch.
● Tier 1, Research Storage: high performance / parallel computing; Dell Fluid FS / Qumulo FS; mounted in HPC as /mnt/projects.
● Tier 2, Research Dedicated Storage (RDS): high-volume storage; ZFS (Zettabyte FS); mounted in HPC as /mnt/rds.
● Tier 3, Research Archive: nearline storage for archival purposes; object storage on Spectra BlackPearl.
● Tier 4, Cloud Storage: Google Drive, Box, Amazon S3; Google Drive and Box are free for Case and can be accessed from HPC via Rclone or WebDAV.

Storage Mount Points in HPC (df -hTP)
● /home: quota of 700/920 GiB for HPC group users and 150/260 GiB for guest users; lifetime: valid as an HPC user; comments: default user space in HPC; snapshots, tape.
● /scratch/pbsjobs: quota of 1 TiB per group; lifetime: 14 days; comments: space for temporary job files; no backups.
● /scratch/users: quota of 1 TiB on /scratch/users for members; comments: no backups.
● /mnt/pan: quota according to the storage space acquired by the PI; lifetime: lease term; comments: snapshot.
● /mnt/projects: quota according to the storage space acquired by the PI; lifetime: lease term; comments: snapshot, replication.
● /mnt/rds: quota according to the storage system acquired by the PI; lifetime: 5-year warranty; comments: snapshot, replication.

DATA LIFECYCLE

Powers of 2

Useful Commands - id
id: print real and effective user and group IDs.
● id -g, --group: print only the effective group ID.
● id -gn, --name: print the group name instead of a number.

Useful Commands - du
du: estimate file space usage.
● -h, --human-readable: print sizes in human-readable format.
● du -h --max-depth=1 | sort -hr: per-subdirectory usage, largest first.
● -m, --block-size=1M: report sizes in MiB.
● -k, --block-size=1K: report sizes in KiB.
● -sk: print only a summarized total, in KiB.
● --time: show the last modification time.

Useful Commands - df
df: report file system disk space usage.
● -h, --human-readable: print sizes in human-readable format (e.g., 1K 234M 2G).
● -T, --print-type: print the file system type.
● -H, --si: likewise, but use powers of 1000, not 1024.

Monitoring your Quota
Panasas /home and /scratch:
● Check /home and /scratch space usage for your group, including yourself (updated a few times a day). For instantaneous usage, use the panfs_quota command.
$ quotachk <CaseID>
● Check the breakdown of usage for all users in a group (updated a few times a day):
$ quotagrp <PI_CaseID>
HPC Storage (Panasas) /mnt/pan:
$ pan_df /mnt/pan/...
Research Storage (FluidFS):
$ quotaprj <PI_CaseID>
If you want to know which directory is occupying most of your space:
$ du -h --max-depth=1
Reference: HPC Storage & Quota
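The quota and usage commands above can be wrapped in a small helper script. The sketch below is not a site-provided tool; the script name, the default target of $HOME, and the limit of 20 entries are assumptions, so adjust them to your own workflow.

#!/bin/bash
# check_space.sh: minimal sketch that summarizes where space is going
# under a directory (defaults to $HOME; pass another path as the first argument).
TARGET="${1:-$HOME}"

echo "== Total usage of $TARGET =="
du -sh "$TARGET"

echo "== Largest subdirectories (top 20) =="
du -h --max-depth=1 "$TARGET" 2>/dev/null | sort -hr | head -20

echo "== File system type and free space for this mount =="
df -hTP "$TARGET"

Run it against any mount point you have access to, for example: bash check_space.sh /scratch/users/<CaseID>.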
Managing your Space
Important note: you will receive a soft-quota warning email, with the respective subject line, if anybody in your group exceeds the soft quota limit. Once the hard quota limit is reached, nobody in the group is able to run jobs.
As soon as you receive the soft-quota warning, manage the space by:
● Compressing the directory with the tar command, as shown:
$ tar -czf <dir-name>.tar.gz <dir-name>
● Deleting files that you no longer need:
$ rm -r <unnecessary folder>
● If you continually have many jobs running, emptying the scratch space immediately by including the following line at the end of the job file, to limit /scratch usage:
$ rm -r "$PFSDIR"/*
● Transferring files from HPC to your local space:
○ Visit the HPC site: Transferring Files @HPC
● Contacting us at [email protected] for questions.
● Emptying the scratch space via the job script as soon as the job completes:
#!/bin/bash
#SBATCH -N 1 -n 1
#SBATCH --time=2:10:00
module load ansys
cp flowinput-serial.in flow-serial.cas $PFSDIR   # copy input files to scratch
cd $PFSDIR
# Run fluent
fluent 2ddp -g < flowinput-serial.in
cp -ru * $SLURM_SUBMIT_DIR                       # copy results back from scratch
rm -r "$PFSDIR"/*                                # empty the scratch space
● If you are keeping the scratch space for analysis and want to delete it later, use:
$ find /scratch/pbsjobs -maxdepth 1 -user <CaseID> -name "*" | xargs rm -rf

How to Check Usage of /scratch/pbsjobs
For instantaneous usage, use:
● panfs_quota /scratch/pbsjobs
● panfs_quota -G /scratch/pbsjobs   # for the group quota
For the breakdown of usage (updated a few times a day):
● quotachk <CaseID>
● quotagrp <PI_CaseID>
For the most detailed output (it may take longer, but it gives instantaneous usage):
● find /scratch/pbsjobs/ -user <caseid> -exec du -sh {} + 2>/dev/null > myscratch.log
-- find searches recursively under the target directory
-- '-user <caseid>' targets the directories owned by your account
-- '-exec du -sh {} +' performs the disk-usage evaluation
-- '2>/dev/null' redirects standard error away from standard output, necessary here because many of the job directories belong to other accounts
-- '> myscratch.log' redirects standard output to a file

Data Transfer Tools
● Globus
● SFTP/SCP tools such as:
○ Cyberduck, FileZilla, WinSCP, WebDrive
● Rclone (https://rclone.org/) for cloud storage such as:
○ Google Drive
○ Box
○ Dropbox
(An rclone usage sketch is included at the end of this document.)
Reference: HPC Guide to Data Transfer

Storage Type | Supported at CWRU | Approved for Public Data | Approved for Internal Use Only Data | Approved for Restricted Data | Approved for Restricted Regulated Data | Cost (TiB/$)
HPC Storage | ✔ | ✔ | ✔ | ❌ | ❌ |
Research Storage | ✔ | ✔ | ✔ | ❌ | ❌ |
Research Dedicated Storage | ✔ | ✔ | ✔ | ❌ | ❌ |
Research Archive | ✔ | ✔ | ✔ | ❌ | ❌ |
SRE | ✔ | ✔ | ✔ | ✔ | ✔ |
Amazon S3 | ✔ | ✔ | ✔ | ✔ | ❌ |
Dropbox | ❌ | ❌ | ❌ | ❌ | ❌ |
BOX hosted by CWRU | ✔ | ✔ | ✔ | ❌ | ❌ | Free
Google Drive File Stream | ✔ | ✔ | ✔ | ❌ | ❌ | Free

CLOUD STORAGE from Amazon
● AWS S3: object storage; web interface; can be publicly accessible; scalable; slower than EBS and EFS; good for storing backups.
● AWS EBS: block storage; file-system interface; accessible only via the EC2 machine it is attached to; hardly scalable; faster than S3 and EFS; meant to be an EC2 drive.
● AWS EFS: file storage; web and file-system interface; accessible via several EC2 machines and AWS services; scalable; faster than S3, slower than EBS; good for shareable applications and workloads.
● AWS Glacier: object storage; good for archiving.

Questions?
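As a follow-up to the Data Transfer Tools slide, the commands below are a minimal rclone sketch. They assume a remote named gdrive has already been created with rclone config, and the source path and destination folder are only examples; replace them with your own remote and directories.

# List the top-level folders of the remote
rclone lsd gdrive:

# Copy a results directory from HPC to Google Drive, showing progress
rclone copy /scratch/users/<CaseID>/results gdrive:hpc-results --progress

# Verify that source and destination contents match after the transfer
rclone check /scratch/users/<CaseID>/results gdrive:hpc-results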