FStream: Managing Flash Streams in the File System


FStream: Managing Flash Streams in the File System

Eunhee Rho, Kanchan Joshi, Seung-Uk Shin, Nitesh Jagadeesh Shetty, Joo-Young Hwang, Sangyeun Cho, Daniel DG Lee, and Jaeheon Jeong, Samsung Electronics Co., Ltd.

https://www.usenix.org/conference/fast18/presentation/rho

This paper is included in the Proceedings of the 16th USENIX Conference on File and Storage Technologies, February 12–15, 2018, Oakland, CA, USA. ISBN 978-1-931971-42-3. Open access to the Proceedings is sponsored by USENIX.

Abstract

The performance and lifespan of a solid-state drive (SSD) depend not only on the current input workload but also on the internal media fragmentation formed over time, as stale data are spread over a wide range of physical space in the SSD. The recently proposed streams interface gives the host system a means to control how data are placed on the physical media (abstracted by streams) and thereby reduce media fragmentation. This work proposes FStream, a file system approach to taking advantage of this facility. FStream extracts streams at the file system level and avoids complex application-level mapping of data to streams. Experimental results show that FStream improves filebench performance by 5%∼35% and reduces WAF (write amplification factor) by 7%∼46%. For a NoSQL database benchmark, performance improves by up to 38% and WAF is reduced by up to 81%.

1 Introduction

Solid-state drives (SSDs) are rapidly replacing hard disk drives (HDDs) in enterprise data centers. SSDs maintain the traditional logical block device abstraction with help from internal software, commonly known as the flash translation layer (FTL). The FTL allows SSDs to substitute for HDDs without complex modification of the OS block device interface.

Prior work has revealed, however, that this compatibility comes at a cost: as a device ages and the underlying media becomes fragmented, the operational efficiency of the SSD deteriorates dramatically due to garbage collection overheads [11, 13]. More specifically, a user write I/O translates into an amplified amount of actual media writes [9], which shortens device lifetime and hampers performance. The ratio of the actual media writes to the user I/O is called the write amplification factor (WAF).

A large body of prior work has been undertaken to address the write amplification problem and the SSD wear-out issue [3, 7]. To the same end, we focus on how to take advantage of the multi-streamed SSD mechanism [8]. This mechanism opens up a way to dictate data placement on an SSD's underlying physical media, abstracted by streams. In principle, if the host system perfectly maps data having the same lifetime to the same streams, an SSD's write amplification becomes one, completely eliminating the media fragmentation problem.

Prior works have revealed two strategies to leverage streams. The first strategy maps application data to disparate streams based on an understanding of the expected lifetime of those data. For example, files in different levels of a log-structured merge tree could each be assigned to a separate stream. Case studies show that this strategy works well for NoSQL databases like Cassandra and RocksDB [8, 14]. Unfortunately, this application-level customization strategy requires that a system designer understand her target application's internal working fairly well, which remains a challenge to the designer. The other strategy aims to "automate" the process of mapping write I/O operations to SSD streams with no application changes. For example, the recently proposed AutoStream scheme assigns a stream to each write request based on lifetime estimated from past LBA access patterns [15]. However, this scheme has not been proven to work well under complex workload scenarios, particularly when the workload changes dynamically. Moreover, LBA-based pattern detection is not practical when file data are updated in an out-of-place manner, as in copy-on-write and log-structured file systems. The above-sketched strategies capture the two extremes of the design space: application-level customization vs. block-level full automation.

In this work, we take another strategy and separate streams at the file system layer. Our approach is motivated by the observation that file system metadata and journal data are short-lived and are good targets for separation from user data. Naturally, the primary component of our scheme, when applied to a journaling file system like ext4 or xfs, is to allocate a separate stream each for metadata and journal data. As a corollary component of our scheme, we also propose to separate database redo/undo log files into a distinct stream at the file system layer. We implement our scheme, FStream, in the Linux ext4 and xfs file systems and perform experiments using a variety of workloads and a stream-capable NVMe (NVM Express) SSD. Our experiments show that FStream robustly achieves a near-optimal WAF (close to 1) across the workloads we examined. We make the following contributions in this work:

• We provide automated multi-streaming of different types of file-system-generated data with respect to their lifetime;

• We enhance the existing journaling file systems, ext4 and xfs, to use multi-streamed SSDs with minimally invasive changes; and

• We achieve stream classification for application data using file system layer information.

The remainder of this paper is organized as follows. First, we describe the background of our study in Section 2, with respect to the problems of previous multi-stream schemes. Section 3 describes FStream and its implementation details. We show experimental results in Section 4 and conclude in Section 5.

2 Background

2.1 Write-amplification in SSDs

Flash memory has an inherent erase-before-program characteristic. A write, also called a "program" operation, happens at the granularity of a NAND page, and a page cannot be rewritten unless it is "erased" first. In-place update is not possible due to this characteristic; an overwrite is instead handled by placing the data in a new page and invalidating the previous one. The erase operation is done in the unit of a NAND block, which is a collection of multiple NAND pages. Before a NAND block is erased, all its valid pages must be copied out elsewhere; this is done in a process called garbage collection (GC). These valid-page movements cause additional writes that consume bandwidth, thereby leading to performance degradation and fluctuation. These additional writes also reduce endurance, as program-erase cycles are limited for NAND blocks. One way to measure GC overhead is the write amplification factor (WAF), described as the ratio of writes performed on flash memory to writes requested from the host system:

    WAF = (amount of writes committed to flash) / (amount of writes that arrived from the host)

WAF may soar more often than not as an SSD experiences aging [8].

2.2 Multi-streamed SSD

The multi-streamed SSD [8] endeavors to keep WAF in check by focusing on the placement of data. It allows the host to pass a hint (a stream ID) along with write requests. The stream ID provides a means to convey hints about the lifetime of the data being written [5]. The multi-streamed SSD groups data having the same stream ID to be stored in the same NAND block. By avoiding mixing data of different lifetimes, fragmentation is reduced inside NAND blocks. Figure 1 shows an example of how data placement using multi-stream can reduce media fragmentation.

Figure 1: Data placement comparison for two write sequences. Here H=Hot, C=Cold, and we assume there are four pages per flash block.

Multi-stream is now a part of the T10 (SCSI) standard, and is under discussion in the NVMe (NVM Express) working group. The NVMe 1.3 specification introduces support for multi-stream in the form of "directives" [1]; an NVMe write command has the provision to carry a stream ID.

2.3 Leveraging Streams

While the multi-streamed SSD provides the facility of segregating data into streams, its benefit largely depends upon how well the streams are leveraged by the host. Identifying what should and should not go into the same stream is of cardinal importance for maximum benefit. Previous work [8, 14] shows the benefits of application-assigned streams. This approach has the benefit of determining data lifetime accurately, but it involves modifying the source code of the target application, leading to increased deployment effort. Also, when multiple applications try to assign streams, a centralized stream assignment is required to avoid conflicts. Instead of direct assignment of stream IDs, Linux (from the 4.13 kernel) supports an fcntl() interface to send data lifetime hints to file systems to exploit multi-streamed SSDs [2, 6]. AutoStream [15] takes stream management to the NVMe device driver layer: it monitors requests from file systems and estimates data lifetime. However, only limited information (e.g., request size, block addresses) is available in the driver layer, and even worse, the address-based algorithm may be ineffective under a copy-on-write file system.

[The excerpt breaks off here; the text resumes mid-sentence:] … other metadata regions (inode table, block bitmap, inode bitmap) within the block group. Another type of metadata is a directory block.