Efficient Dataset Archiving and Versioning at Large Scale

Total pages: 16

File type: PDF, size: 1020 KB

Efficient Dataset Archiving and Versioning at Large Scale

Bachelor Thesis
Bernhard Kurz

Subject Area: Information Business
Degree Programme Code (Studienkennzahl): 033 561
Supervisor: Fernández García, Javier David, Dr.
Co-Supervisor: Neumaier, Sebastian, Dipl.-Ing., B.Sc.
Date of Submission: 23 June 2018

Department of Information Systems and Operations, Vienna University of Economics and Business, Welthandelsplatz 1, 1020 Vienna, Austria

Contents

1 Introduction
1.1 Motivation
1.2 Outline of research
2 Requirements
2.1 Archiving and Versioning of Datasets
2.2 Performance and Scalability
3 Background and Related Work
3.1 Storage Types
3.2 Version Control Systems
3.2.1 DEC’s VMS
3.2.2 Subversion
3.2.3 Git
3.2.4 Scalable Version Control Systems
3.3 Redundant Data Reduction Techniques
3.3.1 Compression
3.4 Deduplication
3.4.1 Basic Workflow
3.4.2 Workflow Improvements
3.4.3 Key Design Decisions
4 Operating Systems and Filesystems
4.1 Definitions
4.1.1 Operating System
4.1.2 File System
4.1.3 Data Access
4.2 Classic vs Modern Filesystems
4.3 ZFS
4.3.1 Scalability
4.3.2 Virtual Devices
4.3.3 ZFS Blocks
4.3.4 ZFS Pools
4.3.5 ZFS Architecture
4.4 Data Integrity and Reliability
4.4.1 Replication
4.5 Transactional Semantics
4.5.1 Copy-on-Write and Snapshots
4.6 Btrfs
4.7 NILFS
4.8 Operating Systems and Solutions
5 Approaches
5.1 Version Control Systems
5.2 Snapshots and Clones
5.3 Deduplication
6 Design of the Benchmarks
6.1 Datasets
6.1.1 Portal Watch Datasets
6.1.2 BEAR Datasets
6.1.3 Wikipedia Dumps
6.2 Test System and Methods
6.2.1 Testing Environment
6.2.2 Benchmarking Tools
6.2.3 Bonnie++
6.3 Conditions and Parameters
6.3.1 Deduplication
6.3.2 Compression
6.3.3 Blocksize
7 Results
7.1 Performance of ext4
7.2 Performance of ZFS
7.3 Deduplication Ratios
7.3.1 Portal Watch Datasets
7.3.2 BEAR Datasets
7.3.3 Wikipedia Dumps
7.4 Blocksize Tuning
7.5 Compression Performance
7.5.1 Compression Sizes
7.5.2 Compression Ratios
8 Conclusion and Further Research
9 Appendix

List of Figures

1 Redundant data reduction techniques and approximate dates of initial research [92]
2 Overview of a deduplication process
3 Block-based vs object-based storage systems [42]
4 Traditional volumes vs ZFS pooled storage [12]
5 ZFS layers [12]
6 Disk usage of datasets in GB per month
7 Bonnie benchmark for ext4
8 Block I/O for ZFS
9 Block I/O for ext4 compared to ZFS
10 Deduplication table for the Portal Watch datasets
11 Block I/O for ZFS with blocksize = 4k and compression or deduplication
12 Block I/O for ZFS with blocksize = 128k and compression or deduplication
13 Block I/O for ZFS with blocksize = 1M and compression or deduplication
14 Compression sizes in GB for the datasets
15 Compression ratios for the datasets

List of Tables

1 BEAR datasets compression ratios
2 Wikipedia dumps compression ratios
3 Compression ratios for different compression levels per dataset

Abstract

As the amount of generated data is increasing rapidly, there is a high need for reducing storage costs. In addition, the requirements for storage systems have evolved, and archiving and versioning data have become a major challenge.
Modern filesystems target key features like scalability, flexibility, and data integrity. Furthermore, they provide mechanisms for reducing data redundancy and for version control. This thesis discusses recent related research fields and evaluates how well archiving and versioning can be integrated into the filesystem layer to reduce administration overhead. The effectiveness of deduplication and compression is benchmarked on several datasets to assess scalability and feasibility. Finally, a conclusion and an overview of further research are given.

1 Introduction

Because the amount of data is increasing exponentially, storage costs are increasing as well. Prices for hard disk drives have fallen, but requirements have grown, so storing large amounts of data can still be expensive. In computing, file systems are used to control how data is handled and stored on a (typically physical) device.

"Some of today’s most popular filesystems are, in computing scale, ancient. We discard hardware because it’s five years old and too slow to be borne—then put a 30-year-old filesystem on its replacement. Even more modern filesystems like extfs, UFS2, and NTFS use older ideas at their core." [56]

Classic filesystems have become less practical at fulfilling these evolved demands. It is often easier to create duplicates than to sort files or versions. For archiving and backup purposes there is a demand for version histories, to "look back" in time and be able to recreate older versions of documents or other kinds of data. Due to their nature, backups allocate a lot of storage, even if there have been only slight changes. To save disk space, incremental backups are used, but they can make recovering older files complex and laborious; they also have a large impact on performance, and managing versions becomes difficult. An alternative would be application-based version control systems such as Git, but these are slow and problematic when it comes to large files, scalability, and non-text (binary) files.[9, 47] Another weakness of classic filesystems is their size limits and partition caps, which may still be sufficient for the next five to ten years, but thinking at a larger scale means they will have to be reconsidered and adapted in the future. In addition, there is so-called bit rot, also known as silent data corruption.[6] Bit rot is problematic because it is largely invisible and can cause serious data inconsistency. Modern filesystems, like ZFS and Btrfs, deal with these problems: to counter the phenomenon, they combine levels of redundancy (RAID) with checksums. Overall, it will be tested how well these requirements can be fulfilled within the filesystem layer. The aim of this thesis is to compare these features and find a plausible solution for the specific use case described below.

1.1 Motivation

The Department of Information Systems and Operations at the Vienna University of Economics and Business hosts the Open Data Portal Watch. The project’s aim is to provide a "scalable quality assessment and evolution monitoring framework for Open Data (Web) portals."[91] To this end, datasets from around 260 Web catalogues are downloaded and saved to disk each week. To reduce storage costs, each dataset is hashed and only saved if it has changed; in either case a log entry is written to a database, which keeps older versions and the datasets’ histories accessible.
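
A minimal sketch of this file-level scheme in Python (the store path, the archive() helper, and the in-memory log are hypothetical illustrations, not the actual Portal Watch implementation):

    import hashlib
    import shutil
    from pathlib import Path

    STORE = Path("/srv/portalwatch/store")  # hypothetical content store
    VERSION_LOG = []                        # stands in for the version database

    def archive(dataset: Path, portal: str, week: str) -> None:
        """Store a crawled dataset only if its content is new (file-level dedup)."""
        digest = hashlib.sha256(dataset.read_bytes()).hexdigest()
        blob = STORE / digest
        if not blob.exists():               # unseen content: store it exactly once
            STORE.mkdir(parents=True, exist_ok=True)
            shutil.copyfile(dataset, blob)
        # every crawl is logged, so older versions stay reconstructable
        VERSION_LOG.append({"portal": portal, "week": week, "sha256": digest})

Identical files are stored only once, regardless of portal or week; but any change, however small, yields a new hash and therefore a complete new copy.
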
In fact, a change can mean adding just one line, or removing a few rows from the dataset. That means there can be thousands of duplicates, or strictly speaking near-duplicates. Since this amounts to deduplication on a file basis, we wanted to test how much additional gain chunk-level deduplication provides (a sketch of the idea follows at the end of Section 2).

1.2 Outline of research

The scope of this thesis includes a literature review of modern filesystems such as ZFS and Btrfs and of the versioning filesystem NILFS. They will be compared with features known from version control systems such as Git and Subversion, as well as with deduplication. Particular attention is paid to the scalability and performance of these implementations. In conclusion, the following features will be reviewed and compared to each other:

Targeted Factors
• Saving Disk Space
• IO Performance

Required Factors
• Archiving and Versioning
• Performance and Scalability

Additional Factors
• Hardware Requirements
• Compatibility
• Reliability

Approaches
• Version Control Systems
• ZFS Snapshots and Clones
• ZFS Deduplication

A model showing the best ratio between the two targeted factors would be one possible way to structure and visualize the results; these two factors can be derived from the requirements. It may, however, be difficult to reduce everything to such a model without losing too much information. The essential part is the testing and benchmarking of the approaches. Given the nature of filesystems and the importance of data consistency, they should be tested over a long period of time to satisfy these demands and to eliminate as many errors as possible. It will also be discussed how well specific filesystems have been documented and tested.

2 Requirements

Important for the Open Data Portal Watch are the archiving and versioning of datasets and the access to them. On the one hand this is similar to a backup scheme for user data; on the other hand it is, strictly speaking, primary storage, because performant read access is required. For backup purposes, data is often compressed and deduplicated to save space, but for active storage this can be too slow. It is a given that the storage system uses spinning disks, and hard disk drives have a very low rate of I/O operations per second compared to solid state disks.[44] Moreover, the access pattern is random, because datasets are added from time to time rather than written sequentially.
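
To make the chunk-level idea from Section 1.1 concrete, the following sketch estimates the achievable deduplication ratio of a set of files by hashing fixed-size chunks (a simplification: ZFS deduplicates per block and keeps its deduplication table on disk or in RAM; the function below is purely illustrative):

    import hashlib
    from pathlib import Path

    def dedup_ratio(files: list[Path], chunk_size: int = 128 * 1024) -> float:
        """Logical bytes divided by unique bytes, i.e. the achievable dedup ratio."""
        logical = 0
        unique: dict[str, int] = {}          # chunk hash -> chunk length
        for path in files:
            with path.open("rb") as fh:
                while chunk := fh.read(chunk_size):
                    logical += len(chunk)
                    unique.setdefault(hashlib.sha256(chunk).hexdigest(), len(chunk))
        return logical / max(1, sum(unique.values()))

Note that with fixed-size chunks, an insertion near the top of a file shifts every following chunk boundary, so near-duplicate files deduplicate worse than intuition suggests; this is one reason the blocksize parameter is varied in the benchmarks of Section 7.
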
Recommended publications
  • Venti Analysis and Memventi Implementation
    Master’s thesis: Venti analysis and memventi implementation. Designing a trace-based simulator and implementing a venti with in-memory index. Mechiel Lukkien, [email protected]. August 8, 2007. Committee: prof. dr. Sape J. Mullender, ir. J. Scholten, ir. P.G. Jansen (Faculty of EEMCS, DIES, Distributed and Embedded Systems, University of Twente, Enschede, The Netherlands).
    Abstract [the next page has a Dutch summary]: Venti is a write-once content-addressed archival storage system, storing its data on magnetic disks: each data block is addressed by its 20-byte SHA-1 hash (called score). This project initially aimed to design and implement a trace-based simulator matching Venti behaviour closely enough to be able to use it to determine good configuration parameters (such as cache sizes), and for testing new optimisations. A simplistic simulator has been implemented, but it does not model Venti behaviour accurately enough for its intended goal, nor is it polished enough for use. Modelled behaviour is inaccurate because the advanced optimisations of Venti have not been implemented in the simulator. However, implementation suggestions for these optimisations are presented. In the process of designing the simulator, the Venti source code has been investigated, the optimisations have been documented, and disk and Venti performance have been measured. This allowed for recommendations about performance, even without a simulator. Besides magnetic disks, flash memory and the upcoming MEMS-based storage devices have also been investigated for use with Venti; they may be usable in the near future, but require explicit support. The focus of this project has shifted towards designing and implementing memventi, an alternative implementation of the venti protocol.
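
    The write-once, content-addressed model at Venti’s core can be illustrated in a few lines (a toy in-memory sketch, not the actual Venti protocol, which runs over a network and layers caches and indexes over on-disk arenas):

        import hashlib

        class ContentStore:
            """Toy write-once store: blocks are addressed by their SHA-1 score."""
            def __init__(self) -> None:
                self._blocks: dict[bytes, bytes] = {}

            def write(self, block: bytes) -> bytes:
                score = hashlib.sha1(block).digest()   # the 20-byte "score"
                self._blocks.setdefault(score, block)  # identical blocks stored once
                return score

            def read(self, score: bytes) -> bytes:
                return self._blocks[score]

    Because an address is derived from the content itself, identical blocks coalesce automatically and a stored block can never be overwritten in place, which is exactly the property wanted for archival storage.
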
  • Copy on Write Based File Systems Performance Analysis and Implementation
    Copy On Write Based File Systems Performance Analysis And Implementation. Sakis Kasampalis. Kongens Lyngby 2010, IMM-MSC-2010-63. Technical University of Denmark, Department of Informatics, Building 321, DK-2800 Kongens Lyngby, Denmark. Phone +45 45253351, Fax +45 45882673, [email protected], www.imm.dtu.dk
    Abstract: In this work I am focusing on Copy On Write based file systems. Copy On Write is used on modern file systems for providing (1) metadata and data consistency using transactional semantics, (2) cheap and instant backups using snapshots and clones. This thesis is divided into two main parts. The first part focuses on the design and performance of Copy On Write based file systems. Recent efforts aiming at creating a Copy On Write based file system are ZFS, Btrfs, ext3cow, Hammer, and LLFS. My work focuses only on ZFS and Btrfs, since they support the most advanced features. The main goals of ZFS and Btrfs are to offer a scalable, fault tolerant, and easy to administrate file system. I evaluate the performance and scalability of ZFS and Btrfs. The evaluation includes studying their design and testing their performance and scalability against a set of recommended file system benchmarks. Most computers are already based on multi-core and multiple processor architectures. Because of that, the need for using concurrent programming models has increased. Transactions can be very helpful for supporting concurrent programming models, which ensure that system updates are consistent. Unfortunately, the majority of operating systems and file systems either do not support transactions at all, or they simply do not expose them to the users.
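
    The two properties named in the abstract, transactional consistency and cheap snapshots, both fall out of the copy-on-write discipline. A minimal sketch (a hypothetical structure; real filesystems like ZFS and Btrfs apply this to trees of disk blocks and copy only the path from the root to the modified block):

        class CowVersions:
            """Copy-on-write toy: writes never modify shared state in place."""
            def __init__(self) -> None:
                self.blocks: dict[int, bytes] = {}     # block number -> content
                self.snapshots: list[dict[int, bytes]] = []

            def write(self, blockno: int, data: bytes) -> None:
                # Build a new map instead of mutating the old one; snapshots
                # that still reference the old map are unaffected.  (A real FS
                # copies only a path in a block tree, not the whole map.)
                self.blocks = {**self.blocks, blockno: data}

            def snapshot(self) -> int:
                self.snapshots.append(self.blocks)     # snapshot = kept reference
                return len(self.snapshots) - 1

    A snapshot is just a retained reference to an immutable version of the structure, which is why ZFS and Btrfs snapshots are instant and initially occupy no extra space.
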
  • Achieving Superior Manageability, Efficiency, and Data Protection with Oracle’s Sun ZFS Storage Software
    An Oracle White Paper, December 2010: Achieving Superior Manageability, Efficiency, and Data Protection with Oracle’s Sun ZFS Storage Software.
    Contents: Introduction; Oracle’s Sun ZFS Storage Software; Simplifying Storage Deployment and Management (Browser User Interface (BUI), Built-in Networking and Security, Transparent Optimization with Hybrid Storage Pools, Shadow Data Migration, Third-party Confirmation of Management Efficiency); Improving Performance with Real-time Storage Profiling; Increasing Storage Efficiency (Data Compression, Data Deduplication, Thin Provisioning, Space-efficient Snapshots and Clones); Reducing Risk with Industry-leading Data Protection (Self-Healing ...
  • HEPiX Fall 2015 at Brookhaven National Lab (PDF, 32 pages)
    Helge Meinhard / CERN, V2.0, 30 October 2015. HEPiX Fall 2015 at Brookhaven National Lab. After 2004, the lab, located on Long Island in the State of New York, U.S.A., was host to a HEPiX workshop again. Access to the site was considerably easier for the registered participants than 11 years ago. The meeting took place in a very nice and comfortable seminar room well adapted to the size and style of meeting such as HEPiX. It was equipped with advanced (sometimes too advanced for the session chairs to master!) AV equipment and power sockets at each seat. Wireless networking worked flawlessly and with good bandwidth. The welcome reception on Monday at Wading River at the Long Island sound and the workshop dinner on Wednesday at the ocean coast in Patchogue showed more of the beauty of the rather natural region around the lab. For those interested, the hosts offered tours of the BNL RACF data centre as well as of the STAR and PHENIX experiments at RHIC. The meeting ran very smoothly thanks to an efficient and experienced team of local organisers headed by Tony Wong, who as North-American HEPiX co-chair also co-ordinated the workshop programme.
    Monday 12 October 2015. Welcome (Michael Ernst / BNL): On behalf of the lab, Michael welcomed the participants, expressing his gratitude to the audience to have accepted BNL's invitation. He emphasised the importance of computing for high-energy and nuclear physics. He then introduced the lab, focusing on physics, chemistry, biology, material science etc. The total head count of BNL-paid people is close to 3'000.
  • CERIAS Tech Report 2017-5: Deceptive Memory Systems by Christopher N. Gutierrez
    CERIAS Tech Report 2017-5: Deceptive Memory Systems, by Christopher N. Gutierrez. Center for Education and Research in Information Assurance and Security, Purdue University, West Lafayette, IN 47907-2086.
    DECEPTIVE MEMORY SYSTEMS. A Dissertation Submitted to the Faculty of Purdue University by Christopher N. Gutierrez, In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. December 2017, Purdue University, West Lafayette, Indiana.
    THE PURDUE UNIVERSITY GRADUATE SCHOOL STATEMENT OF DISSERTATION APPROVAL: Dr. Eugene H. Spafford, Co-Chair, Department of Computer Science; Dr. Saurabh Bagchi, Co-Chair, Department of Computer Science; Dr. Dongyan Xu, Department of Computer Science; Dr. Mathias Payer, Department of Computer Science. Approved by: Dr. Voicu Popescu and Dr. William J. Gorman, Head of the Graduate Program.
    This work is dedicated to my wife, Gina. Thank you for all of your love and support. The moon awaits us.
    ACKNOWLEDGMENTS: I would like to thank Professors Eugene Spafford and Saurabh Bagchi for their guidance, support, and advice throughout my time at Purdue. Both have been instrumental in my development as a computer scientist, and I am forever grateful. I would also like to thank the Center for Education and Research in Information Assurance and Security (CERIAS) for fostering a multidisciplinary security culture in which I had the privilege to be part of. Special thanks to Adam Hammer and Ronald Castongia for their technical support and Thomas Yurek for his programming assistance for the experimental evaluation. I am grateful for the valuable feedback provided by the members of my thesis committee, Professor Dongyan Xu, and Professor Mathias Payer.
  • HTTP-FUSE Xenoppix
    HTTP-FUSE Xenoppix. Kuniyasu Suzaki†, Toshiki Yagi†, Kengo Iijima†, Kenji Kitagawa††, Shuichi Tashiro†††. National Institute of Advanced Industrial Science and Technology†, Alpha Systems Inc.††, Information-Technology Promotion Agency, Japan†††. {k.suzaki,yagi-toshiki,k-iijima}@aist.go.jp, [email protected], [email protected]
    Abstract: We developed "HTTP-FUSE Xenoppix", which boots Linux, Plan9, and NetBSD on the Virtual Machine Monitor "Xen" with a small bootable (6.5MB) CD-ROM. The bootable CD-ROM includes only a boot loader, kernel, and miniroot; most files are obtained via the Internet with the network loopback device HTTP-FUSE CLOOP. It is made from cloop (Compressed Loopback block device) and FUSE (Filesystem in Userspace). HTTP-FUSE CLOOP can reconstruct a block device from many small block files on HTTP servers. In this paper we describe the details of the implementation and its performance.
    ... a CD-ROM. Furthermore, it requires remaking the entire CD-ROM when a bit of data is updated. The other solution is a Virtual Machine, which enables us to install many OSes and applications easily. However, that requires installing virtual machine software. We have developed "Xenoppix" [1], which is a combination of the CD/DVD bootable Linux "KNOPPIX" [2] and the Virtual Machine Monitor "Xen" [3, 4]. Xenoppix boots Linux (KNOPPIX) as Host OS and NetBSD or Plan9 as Guest OS with a bootable DVD only. KNOPPIX is advanced in automatic device detection and driver integration. It prepares the Xen environment, and Guest OSes don't need to worry about lack of device drivers.
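
    The central mechanism, reconstructing a block device from many small files fetched over HTTP, can be sketched as follows (the per-block file naming is a hypothetical assumption, and decompression is omitted; the real HTTP-FUSE CLOOP serves compressed block files and caches them locally):

        import urllib.request

        class HttpBlockDevice:
            """Toy read-only block device backed by one file per block on an HTTP server."""
            def __init__(self, base_url: str, block_size: int = 256 * 1024) -> None:
                self.base_url = base_url
                self.block_size = block_size
                self.cache: dict[int, bytes] = {}      # blocks fetched so far

            def _block(self, n: int) -> bytes:
                if n not in self.cache:                # fetch each block lazily, once
                    url = f"{self.base_url}/{n:08d}.blk"   # hypothetical naming scheme
                    with urllib.request.urlopen(url) as resp:
                        self.cache[n] = resp.read()
                return self.cache[n]

            def read(self, offset: int, length: int) -> bytes:
                """Assemble an arbitrary byte range from full-size block files."""
                out = bytearray()
                while length > 0:
                    n, skip = divmod(offset, self.block_size)
                    part = self._block(n)[skip:skip + length]
                    if not part:                       # past the end of the device
                        break
                    out += part
                    offset += len(part)
                    length -= len(part)
                return bytes(out)
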
  • Riverbed: Enforcing User-defined Privacy Constraints in Distributed Web Services (Wang paper, prepublication)
    Riverbed: Enforcing User-defined Privacy Constraints in Distributed Web Services. Frank Wang (MIT CSAIL), Ronny Ko, James Mickens (Harvard University).
    Abstract: Riverbed is a new framework for building privacy-respecting web services. Using a simple policy language, users define restrictions on how a remote service can process and store sensitive data. A transparent Riverbed proxy sits between a user's front-end client (e.g., a web browser) and the back-end server code. The back-end code remotely attests to the proxy, demonstrating that the code respects user policies; in particular, the server code attests that it executes within a Riverbed-compatible managed runtime that uses IFC to enforce user policies. If attestation succeeds, the proxy releases the user's data, tagging it with the user-defined policies. On the server-side, the Riverbed runtime places all data with com...
    1.1 A Loss of User Control: Unfortunately, there is a disadvantage to migrating application code and user data from a user's local machine to a remote datacenter server: the user loses control over where her data is stored, how it is computed upon, and how the data (and its derivatives) are shared with other services. Users are increasingly aware of the risks associated with unauthorized data leakage [11, 62, 82], and some governments have begun to mandate that online services provide users with more control over how their data is processed. For example, in 2016, the EU passed the General Data Protection Regulation [28]. Articles 6, 7, and 8 of the GDPR state that users must give consent for their data to be accessed.
  • Ext4 File System and Crash Consistency
    Ext4 file system and crash consistency. Changwoo Min.
    Summary of last lectures:
    • Tools: building, exploring, and debugging Linux kernel
    • Core kernel infrastructure
    • Process management & scheduling
    • Interrupt & interrupt handler
    • Kernel synchronization
    • Memory management
    • Virtual file system
    • Page cache and page fault
    Today: ext4 file system and crash consistency:
    • File system in Linux kernel
    • Design considerations of a file system
    • History of file system
    • On-disk structure of Ext4
    • File operations
    • Crash consistency
    File system in Linux kernel: a user-space application (e.g., cp) issues syscalls (open, read, write, etc.); the kernel's VFS (Virtual File System) dispatches them to a concrete filesystem (ext4, FAT32, JFFS2), which sits on the block layer above the hardware (hard disk, USB drive, embedded flash).
    What is a file system fundamentally?

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <dirent.h>
        #include <sys/stat.h>

        int main(int argc, char *argv[])
        {
            int fd;
            char buffer[4096];
            struct stat stat_buf;
            DIR *dir;
            struct dirent *entry;

            /* 1. Path name -> inode mapping */
            fd = open("/home/lkp/hello.c", O_RDONLY);

            /* 2. File offset -> disk block address mapping */
            pread(fd, buffer, sizeof(buffer), 0);

            /* 3. File metadata operation */
            fstat(fd, &stat_buf);
            printf("file size = %ld\n", (long)stat_buf.st_size);

            /* 4. Directory operation */
            dir = opendir("/home");
            entry = readdir(dir);
            printf("dir = %s\n", entry->d_name);
            return 0;
        }

    Why do we care about the ext4 file system?
    • Most widely-deployed file system
    • Default file system of major Linux distributions
    • File system used in Google data centers
    • Default file system of the Android kernel
    • Follows the traditional file system design
    History of file system design. UFS (Unix File System):
    • The original UNIX file system, designed by Dennis Ritchie and Ken Thompson (1974)
    • The first Linux file system (ext) and Minix FS have a similar layout
    • Performance problem of UFS (and the first Linux file system): especially the long seek time between an inode and its data blocks
    FFS (Fast File System):
    • The file system of BSD UNIX
    • Designed by Marshall Kirk McKusick, et al.
  • Online Layered File System (OLFS): a Layered and Versioned Filesystem and Performance Analysis
    Loyola University Chicago, Loyola eCommons, Computer Science: Faculty Publications and Other Works, 5-2010. Online Layered File System (OLFS): A Layered and Versioned Filesystem and Performance Analysis. Joseph P. Kaylor; Konstantin Läufer (Loyola University Chicago, [email protected]); George K. Thiruvathukal (Loyola University Chicago, [email protected]). Follow this and additional works at: https://ecommons.luc.edu/cs_facpubs (part of the Computer Sciences Commons).
    Recommended Citation: Joe Kaylor, Konstantin Läufer, and George K. Thiruvathukal, "Online Layered File System (OLFS): A layered and versioned filesystem and performance analysis," in Proceedings of Electro/Information Technology 2010 (EIT 2010). This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. Copyright © 2010 Joseph P. Kaylor, Konstantin Läufer, and George K. Thiruvathukal.
    Online Layered File System (OLFS): A Layered and Versioned Filesystem and Performance Analysis. Joe Kaylor, Konstantin Läufer, and George K. Thiruvathukal, Loyola University Chicago, Department of Computer Science, Chicago, IL 60640 USA.
    Abstract: We present a novel form of intra-volume directory layering with hierarchical, inheritance-like namespace unification. While each layer of an OLFS volume constitutes a subvolume that can be mounted separately in a fan-in configuration, the entire hierarchy is always accessible (online) and fully navigable through any mounted layer.
    ... implement user mode file system frameworks such as FUSE [16]. Namespace Unification: Unix supports the ability to mount external file systems from external resources or local ...
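
    The layered, inheritance-like namespace OLFS describes can be approximated by a top-down lookup over a stack of layers (a sketch under the assumption that upper layers shadow lower ones and that writes land in the topmost layer, as in union mounts):

        class LayeredNamespace:
            """Fan-in view over layers: each layer maps a path to file content."""
            def __init__(self, layers: list[dict[str, bytes]]) -> None:
                self.layers = layers                   # index 0 is the topmost layer

            def lookup(self, path: str) -> bytes:
                for layer in self.layers:              # first hit shadows lower layers
                    if path in layer:
                        return layer[path]
                raise FileNotFoundError(path)

            def write(self, path: str, data: bytes) -> None:
                self.layers[0][path] = data            # lower layers stay unmodified

    Because lower layers are never modified, each one remains a consistent, separately mountable version of the tree.
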
  • Advanced Services for Oracle Hierarchical Storage Manager
    ORACLE DATA SHEET: Advanced Services for Oracle Hierarchical Storage Manager.
    The complex challenge of managing the data lifecycle is simply about putting the right data, on the right storage tier, at the right time. Oracle Hierarchical Storage Manager software enables you to reduce the cost of managing data and storing vast data repositories by providing a powerful, easily managed, cost-effective way to access, retain, and protect data over its entire lifecycle. However, your organization must ensure that your archive software is configured and optimized to meet strategic business needs and regulatory demands. Oracle Advanced Services for Oracle Hierarchical Storage Manager delivers the configuration expertise, focused reviews, and proactive guidance to help optimize the effectiveness of your solution, all delivered by Oracle Advanced Support Engineers.
    KEY BENEFITS:
    • Preproduction Readiness Services, including critical patches and updates, using proven methodologies and recommended practices
    • Production Optimization Services, including configuration reviews and performance reviews to analyze existing ...
    Simplify Storage Management: Putting the right information on the appropriate tier can reduce storage costs and maximize return on investment (ROI) over time. Oracle Hierarchical Storage Manager software actively manages data between storage tiers to let companies exploit the substantial acquisition and operational cost differences between high-end disk drives, SATA drives, and tape devices. Oracle Hierarchical Storage Manager software provides ...
  • Veritas NetBackup™ Enterprise Server and Server 8.0 - 8.x.x OS Software Compatibility List (created on September 08, 2021)
    Veritas NetBackup™ Enterprise Server and Server 8.0 - 8.x.x OS Software Compatibility List. Created on September 08, 2021. HTML version of this document: <https://download.veritas.com/resources/content/live/OSVC/100046000/100046611/en_US/nbu_80_scl.html> Copyright © 2021 Veritas Technologies LLC. All rights reserved. Veritas, the Veritas Logo, and NetBackup are trademarks or registered trademarks of Veritas Technologies LLC in the U.S. and other countries. Other names may be trademarks of their respective owners.
    Introduction: This Software Compatibility List (SCL) document contains information for Veritas NetBackup 8.0 through 8.x.x. It covers NetBackup Server (which includes Enterprise Server and Server), Client, Bare Metal Restore (BMR), Clustered Master Server Compatibility and Storage Stacks, Deduplication, File System Compatibility, NetBackup OpsCenter, NetBackup Access Control (NBAC), SAN Media Server/SAN Client/FT Media Server, Virtual System Compatibility, and NetBackup Self Service Support. It is divided into bookmarks on the left that can be expanded. IPv6 and dual-stack environments are supported from NetBackup 8.1.1 onwards with a few limitations; refer to the technote for additional information: <http://www.veritas.com/docs/100041420> For information about certain NetBackup features, functionality, 3rd-party product integration, Veritas product integration, applications, databases, and OS platforms that Veritas intends to replace with newer and improved functionality, or in some cases discontinue without replacement, please see the widget titled "NetBackup Future Platform and Feature Plans" at <https://sort.veritas.com/netbackup> See article <https://www.veritas.com/docs/100040093> for links to all other NetBackup compatibility lists.
  • Installing Oracle GoldenGate
    Oracle® Fusion Middleware: Installing Oracle GoldenGate, 12c (12.3.0.1). E85215-07, November 2018. Copyright © 2017, 2018, Oracle and/or its affiliates. All rights reserved.
    This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.