Evaluating File System Reliability on Solid State Drives


Shehbaz Jaffer*, Stathis Maneas*, Andy Hwang, and Bianca Schroeder, University of Toronto
(*These authors contributed equally to this work.)
https://www.usenix.org/conference/atc19/presentation/jaffer

This paper is included in the Proceedings of the 2019 USENIX Annual Technical Conference, July 10–12, 2019, Renton, WA, USA. ISBN 978-1-939133-03-8. Open access to the Proceedings is sponsored by USENIX.

Abstract

As solid state drives (SSDs) are increasingly replacing hard disk drives, the reliability of storage systems depends on the failure modes of SSDs and the ability of the file system layered on top to handle these failure modes. While the classical paper on IRON File Systems provides a thorough study of the failure policies of three file systems common at the time, we argue that 13 years later it is time to revisit file system reliability with SSDs and their reliability characteristics in mind, based on modern file systems that incorporate journaling, copy-on-write and log-structured approaches, and are optimized for flash. This paper presents a detailed study, spanning ext4, Btrfs and F2FS, and covering a number of different SSD error modes. We develop our own fault injection framework and explore over a thousand error cases. Our results indicate that 16% of these cases result in a file system that cannot be mounted or even repaired by its system checker. We also identify the key file system metadata structures that can cause such failures and finally, we recommend some design guidelines for file systems that are deployed on top of SSDs.

1 Introduction

Solid state drives (SSDs) are increasingly replacing hard disk drives as a form of secondary storage medium. With their growing adoption, storage reliability now depends on the reliability of these new devices as well as the ability of the file system above them to handle errors these devices might generate (including, for example, device errors when reading or writing a block, or silently corrupted data). While the classical paper by Prabhakaran et al. [45] (published in 2005) studied in great detail the robustness of three file systems that were common at the time in the face of hard disk drive (HDD) errors, we argue that there are multiple reasons why it is time to revisit this work.

The first reason is that failure characteristics of SSDs differ significantly from those of HDDs. For example, recent field studies [39, 43, 48] show that, while their replacement rates (due to suspected hardware problems) are often an order of magnitude lower than those of HDDs, the occurrence of partial drive failures, which lead to errors when reading or writing a block or to corrupted data, can be an order of magnitude higher. Other work argues that the Flash Translation Layer (FTL) of SSDs might be more prone to bugs than HDD firmware, due to its higher complexity and lower maturity, and demonstrates this to be the case when drives are faced with power faults [53]. This makes it even more important than before that file systems can detect and deal with device faults effectively.

Second, file systems have evolved significantly since [45] was published 13 years ago; the ext family of file systems has undergone major changes from the ext3 version considered in [45] to the current ext4 [38]. New players with advanced file-system features have arrived. Most notably Btrfs [46], a copy-on-write file system whose lack of in-place writes makes it more suitable for SSDs, has garnered wide adoption. The design of Btrfs is particularly interesting as it incurs fewer total writes than ext4's journaling mechanism. Further, there are new file systems designed specifically for flash, such as F2FS [33], which follows a log-structured approach to optimize performance on flash.
The goal of this paper is to characterize the resilience of modern file systems running on flash-based SSDs in the face of SSD faults, along with the effectiveness of their recovery mechanisms when taking SSD failure characteristics into account. We focus on three different file systems: Btrfs, ext4, and F2FS. ext4 is an obvious choice, as it is the most commonly used Linux file system. Btrfs and F2FS include features particularly attractive with respect to flash, with F2FS being tailored for flash. Moreover, these three file systems cover three different points in the design spectrum, ranging from journaling to copy-on-write to log-structured approaches.

The main contribution of this paper is a detailed study, spanning three very different file systems and their ability to detect and recover from SSD faults, based on error injection targeting all key data structures. We observe huge differences across file systems and describe the vulnerabilities of each in detail. Over the course of this work we experiment with more than one thousand fault scenarios and observe that around 16% of them result in severe failure cases (kernel panic, unmountable file system). We make a number of observations and file several bug reports, some of which have already resulted in patches. For our experiments, we developed an error injection module on top of the Linux device mapper framework.
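The paper's own injection module is not reproduced here, but the stock dm-flakey device-mapper target gives a feel for device-level error injection: it passes I/O through normally for an "up" interval and fails all I/O during a "down" interval. Below is a minimal sketch that sets up such a mapping from Python; the device path /dev/sdX and the target name are placeholders, and unlike the paper's module, dm-flakey injects faults by time window rather than by file system data structure.

```python
import subprocess

def create_flakey(name: str, dev: str, up_s: int, down_s: int) -> None:
    """Map `dev` through dm-flakey: I/O succeeds for up_s seconds,
    then every request fails with an I/O error for down_s seconds,
    repeating. The mapped device appears at /dev/mapper/<name>."""
    # Device length in 512-byte sectors, as dmsetup tables require.
    sectors = subprocess.check_output(
        ["blockdev", "--getsz", dev]).decode().strip()
    table = f"0 {sectors} flakey {dev} 0 {up_s} {down_s}"
    subprocess.run(["dmsetup", "create", name, "--table", table], check=True)

def remove_flakey(name: str) -> None:
    subprocess.run(["dmsetup", "remove", name], check=True)

if __name__ == "__main__":
    # Placeholder scratch device; requires root. mkfs and mount
    # /dev/mapper/faulty, then observe how ext4, Btrfs, or F2FS react
    # when the 1-second failure window hits their reads and writes.
    create_flakey("faulty", "/dev/sdX", up_s=59, down_s=1)
```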
The remainder of this paper is organized as follows: Section 2 provides a taxonomy of SSD faults and a description of the experimental setup we use to emulate these faults and test the reaction of the three file systems. Section 3 presents the results from our fault emulation experiments. Section 4 covers related work and finally, in Section 5, we summarize our observations and insights.

2 File System Error Injection

Our goal is to emulate different types of SSD failures and check the ability of different file systems to detect and recover from them, based on which part of the file system was affected. We limit our analysis to a local file system running on top of a single drive. Note that although multi-drive redundancy mechanisms like RAID exist, they are not general substitutes for file system reliability mechanisms. First, RAID is not applicable to all scenarios, such as single drives on personal computers. Second, errors or data corruption can originate from higher levels in the storage stack, which RAID can neither detect nor recover from.

Furthermore, our work only considers partial drive failures, where only part of a drive's operation is affected, rather than fail-stop failures, where the drive as a whole becomes permanently inaccessible. The reason lies in the numerous studies published over the last few years, using either lab experiments or field data, which have identified many different SSD-internal error mechanisms that can result in partial failures, including mechanisms that originate both from the flash level [10, 12, 13, 16–19, 21, 23, 26, 27, 29–31, 34, 35, 40, 41, 47, 49, 50] and from bugs in the FTL code, e.g. when it is not hardened to handle power faults correctly [52, 53].

Uncorrectable Bit Corruption: There is a range of error mechanisms that originate at the flash level and can result in bit corruption, including retention errors, read and program disturb errors, and errors due to flash cell wear-out and failing blocks. Virtually all modern SSDs incorporate error correcting codes to detect and correct such bit corruption. However, recent field studies indicate that uncorrectable bit corruption, where more bits are corrupted than the error correcting code (ECC) can handle, occurs at a significant rate in the field. For example, a study based on Google field data observes 2-6 out of 1000 drive days with uncorrectable bit errors [48]. Uncorrectable bit corruption manifests as a read I/O error returned by the drive when an application tries to access the affected data ("Read I/O errors" in Table 1).

Silent Bit Corruption: This is a more insidious form of bit corruption, where the drive itself is not aware of the corruption and returns corrupted data to the application ("Corruption" in Table 1). While there have been field studies on the prevalence of silent data corruption for HDD-based systems [9], there is to date no field data on silent bit corruption for SSD-based systems. However, work based on lab experiments shows that 3 out of 15 drive models under test experience silent data corruption in the case of power faults [53]. Note that there are other mechanisms that can lead to silent data corruption, including mechanisms that originate at higher levels in the storage stack, above the SSD device level.

FTL Metadata Corruption: A special case arises when silent bit corruption affects FTL metadata. Among other things, the FTL maintains a mapping of logical to physical (L2P) blocks as part of its metadata [8]; metadata corruption could lead to "Read I/O errors" or "Write I/O errors" when the application attempts to read or write a page that does not have an entry in the L2P mapping due to corruption. Corruption of the L2P mapping could also result in wrong or erased data being returned on a read, manifesting as "Corruption" to the file system. Note that this is also a silent corruption, i.e. neither the device nor the FTL is aware of it.

Misdirected Writes: This refers to the situation where, during an SSD-internal write operation, the correct data is written to flash, but at the wrong location.
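For the silent fault classes above ("Corruption" and misdirected writes), no error code ever reaches the file system, so they can be emulated simply by mutating bytes on the underlying device or image. The sketch below is an illustration under assumed values: fs.img is a file-system image prepared with mkfs, and the offset is a block the experimenter has located in advance; the paper's framework instead targets specific metadata structures through its device-mapper module.

```python
import os

def flip_byte(image: str, offset: int, mask: int = 0xFF) -> None:
    """Emulate silent corruption: XOR one byte of the image in place.
    The device reports no error, so only file-system-level checks
    (e.g., Btrfs metadata and data checksums) can catch the change."""
    with open(image, "r+b") as f:
        f.seek(offset)
        byte = f.read(1)[0]
        f.seek(offset)
        f.write(bytes([byte ^ mask]))
        f.flush()
        os.fsync(f.fileno())

if __name__ == "__main__":
    # Assumed values: fs.img is a freshly mkfs'ed image and 65536 an
    # offset known to hold metadata of the file system under test.
    flip_byte("fs.img", offset=65536)
    # Afterwards: mount -o loop fs.img /mnt and exercise the corrupted
    # structure to see whether the mount fails, the checker repairs it,
    # or the corruption goes unnoticed.
```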