Installing and Operating 4.4BSD UNIX, July 27, 1993
Total Pages: 16
File Type: PDF, Size: 1,020 KB
Recommended publications
- FreeNAS® 11.0 User Guide
FreeNAS® 11.0 User Guide, June 2017 Edition. FreeNAS® is © 2011-2017 iXsystems. FreeNAS® and the FreeNAS® logo are registered trademarks of iXsystems; FreeBSD® is a registered trademark of the FreeBSD Foundation. Written by users of the FreeNAS® network-attached storage operating system. Version 11.0, copyright © 2011-2017 iXsystems (https://www.ixsystems.com/). Contents: Welcome; Typographic Conventions; 1 Introduction (1.1 New Features in 11.0; 1.2 Hardware Recommendations: 1.2.1 RAM, 1.2.2 The Operating System Device, 1.2.3 Storage Disks and Controllers, 1.2.4 Network Interfaces; 1.3 Getting Started with ZFS); 2 Installing and Upgrading (2.1 Getting FreeNAS®; 2.2 Preparing the Media: 2.2.1 On FreeBSD or Linux, 2.2.2 On Windows, 2.2.3 On OS X; 2.3 Performing the Installation; 2.4 Installation Troubleshooting; 2.5 Upgrading: 2.5.1 Caveats)
Tortoisemerge a Diff/Merge Tool for Windows Version 1.11
TortoiseMerge A diff/merge tool for Windows Version 1.11 Stefan Küng Lübbe Onken Simon Large TortoiseMerge: A diff/merge tool for Windows: Version 1.11 by Stefan Küng, Lübbe Onken, and Simon Large Publication date 2018/09/22 18:28:22 (r28377) Table of Contents Preface ........................................................................................................................................ vi 1. TortoiseMerge is free! ....................................................................................................... vi 2. Acknowledgments ............................................................................................................. vi 1. Introduction .............................................................................................................................. 1 1.1. Overview ....................................................................................................................... 1 1.2. TortoiseMerge's History .................................................................................................... 1 2. Basic Concepts .......................................................................................................................... 3 2.1. Viewing and Merging Differences ...................................................................................... 3 2.2. Editing Conflicts ............................................................................................................. 3 2.3. Applying Patches ........................................................................................................... -
- Enhancing the Accuracy of Synthetic File System Benchmarks (Salam Farhat, Nova Southeastern University, [email protected])
Nova Southeastern University, NSUWorks, CEC Theses and Dissertations, College of Engineering and Computing, 2017. This document is a product of extensive research conducted at the Nova Southeastern University College of Engineering and Computing. Follow this and additional works at https://nsuworks.nova.edu/gscis_etd (part of the Computer Sciences Commons). NSUWorks citation: Salam Farhat. 2017. Enhancing the Accuracy of Synthetic File System Benchmarks. Doctoral dissertation. Nova Southeastern University. Retrieved from NSUWorks, College of Engineering and Computing. (1003) https://nsuworks.nova.edu/gscis_etd/1003. This dissertation is brought to you by the College of Engineering and Computing at NSUWorks. It has been accepted for inclusion in CEC Theses and Dissertations by an authorized administrator of NSUWorks; for more information, please contact [email protected]. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, College of Engineering and Computing, Nova Southeastern University, 2017. Dissertation committee: Gregory E. Simco, Ph.D. (Chairperson of Dissertation Committee); Sumitra Mukherjee, Ph.D. (Dissertation Committee Member); Francisco J.
- TigerSHARC DSP Hardware Specification, Revision 1.0.2: Direct Memory Access
7 Direct Memory Access. Overview: Direct Memory Access (DMA) is a mechanism for transferring data without the core being involved. The TigerSHARC® DSP's on-chip DMA controller relieves the core processor of the burden of moving data between internal memory and an external device, external memory, or between link ports and internal or external memory. The fully integrated DMA controller allows the TigerSHARC® DSP core processor, or an external device, to specify data transfer operations and return to normal processing while the DMA controller carries out the data transfers in the background. The TigerSHARC® DSP DMA competes with other masters for internal memory access (for more information, see "Architecture and Microarchitecture Overview" on page 6-7); this conflict is minimized by the large internal memory bandwidth that is available. The DMA includes 14 DMA channels: four are dedicated to external memory devices, eight to link ports, and two to AutoDMA registers. Figure 7-1 (DMA block diagram, not reproduced here) shows the transmitter and receiver TCB registers, the DMA controller, and its interfaces to the internal bus and DMA requests. Data transfers, general information: the DMA controller can perform several types of data transfers:
  • Internal memory ⇒ external memory and memory-mapped peripherals
  • Internal memory ⇒ internal memory of other TigerSHARC® DSPs residing on the cluster bus
  • Internal memory ⇒ host processor
  • Internal memory ⇒ link port I/O
  • External memory ⇒ external peripherals
  • External memory ⇒ internal memory
  • External memory ⇒ link port I/O
  • Link port I/O ⇒ internal memory
  • Link port I/O ⇒ external memory
  • Cluster bus master via AutoDMA registers ⇒ internal memory
Internal-to-internal memory transfers are not directly supported.
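To illustrate the programming model the overview describes (the core fills in a transfer control block and the controller moves the data in the background), here is a schematic C sketch. The structure layout, field names, and the helper function are invented for illustration only and are not the TigerSHARC register map; consult the hardware specification for the real TCB format and channel assignments.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical transfer control block (TCB): fields and names are
 * illustrative, NOT the TigerSHARC register layout. */
typedef struct {
    const uint32_t *src;   /* source buffer (e.g., internal memory)      */
    uint32_t       *dst;   /* destination (e.g., external memory window) */
    size_t          count; /* number of 32-bit words to transfer         */
} dma_tcb_t;

/* On real hardware the core would write a TCB like this into a channel's
 * transmitter/receiver TCB registers and return immediately; the DMA
 * controller then moves the data in the background while the core keeps
 * executing. This stand-in models only the resulting data movement. */
static void dma_transfer(const dma_tcb_t *tcb)
{
    for (size_t i = 0; i < tcb->count; i++)
        tcb->dst[i] = tcb->src[i];
}
```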
- Using EMC VNX Storage with VMware vSphere TechBook
Using EMC® VNX® Storage with VMware vSphere, Version 4.0, TechBook P/N H8229 REV 05. Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA, January 2015. EMC believes the information in this publication is accurate as of its publication date; the information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com). Contents: Preface; Chapter 1 Configuring VMware vSphere on VNX Storage (Technology overview: EMC VNX family, FLASH 1st, MCx multicore optimization)
- Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable
Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable, by Marshall Kirk McKusick. Early History: Ken Thompson and Dennis Ritchie presented the first Unix paper at the Symposium on Operating Systems Principles at Purdue University in November 1973. Professor Bob Fabry, of the University of California at Berkeley, was in attendance and immediately became interested in obtaining a copy of the system to experiment with at Berkeley. At the time, Berkeley had only large mainframe computer systems doing batch processing, so the first order of business was to get a PDP-11/45 suitable for running with the then-current Version 4 of Unix. The Computer Science Department at Berkeley, together with the Mathematics Department and the Statistics Department, were able to jointly purchase a PDP-11/45. In January 1974, a Version 4 tape was delivered and Unix was installed by graduate student Keith Standiford. Although Ken Thompson at Purdue was not involved in the installation at Berkeley as he had been for most systems up to that time, his expertise was soon needed to determine the cause of several strange system crashes. Because Berkeley had only a 300-baud acoustic-coupled modem without auto answer capability, Thompson would call Standiford in the machine room and have him insert the phone into the modem; in this way Thompson was able to remotely debug crash dumps from New Jersey. Many of the crashes were caused by the disk controller's inability to reliably do overlapped seeks, contrary to the documentation. Berkeley's 11/45 was among the first systems that Thompson had encountered that had two disks on the same controller! Thompson's remote debugging was the first example of the cooperation that sprang up between Berkeley and Bell Labs.
- [Week 13] sysfs and procfs
Computer Core Practice 1: Operating System, Week 13: sysfs and procfs. Jhuyeong Jhin and Injung Hwang, Embedded Software Lab.
sysfs: a pseudo file system provided by the Linux kernel. sysfs exports information about various kernel subsystems, hardware devices, and associated device drivers to user space through virtual files. The mount point of sysfs is usually /sys. sysfs abstracts devices or kernel subsystems as kobjects.
How to create a file in /sys (a minimal sketch of these steps follows this excerpt): 1. Create a kobject and add it to sysfs. 2. Declare a variable and a struct kobj_attribute; when declaring the kobj_attribute, you implement the "show" and "store" functions for reading from and writing to the variable (one variable is one attribute). 3. Create a directory in sysfs; the directory has attributes as files. When the creation of the directory is completed, the directory and its files (attributes) appear in /sys. Reference: ${KERNEL_SRC_DIR}/include/linux/sysfs.h and ${KERNEL_SRC_DIR}/fs/sysfs/*; example: ${KERNEL_SRC_DIR}/kernel/ksysfs.c.
procfs: a special filesystem in Unix-like operating systems. procfs presents information about processes and other system information in a hierarchical file-like structure (for example, the process IDs of all processes in the system). Typically, it is mapped to a mount point named /proc at boot time. procfs acts as an interface to internal data structures in the kernel. The kernel provides a set of functions designed to implement operations for files in /proc, the "seq_file" interface; the practice creates a file in procfs and prints data from a kernel data structure using this interface.
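To make the three steps above concrete, here is a minimal kernel-module sketch of the sysfs pattern the slides describe. It is illustrative only and not taken from the lecture: the kobject name "example", the attribute name "value", and the module boilerplate are assumptions; the calls themselves (kobject_create_and_add, sysfs_create_file, and the kobj_attribute show/store callbacks) are the standard interfaces declared in include/linux/kobject.h and include/linux/sysfs.h.

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>

static int value;                       /* one variable == one attribute */
static struct kobject *example_kobj;

/* "show" runs when the file is read from user space. */
static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr,
                          char *buf)
{
        return sprintf(buf, "%d\n", value);
}

/* "store" runs when the file is written from user space. */
static ssize_t value_store(struct kobject *kobj, struct kobj_attribute *attr,
                           const char *buf, size_t count)
{
        if (kstrtoint(buf, 10, &value))
                return -EINVAL;
        return count;
}

/* Step 2: the attribute bundles a name, permissions, and the callbacks. */
static struct kobj_attribute value_attr =
        __ATTR(value, 0664, value_show, value_store);

static int __init example_init(void)
{
        /* Steps 1 and 3: create /sys/kernel/example ... */
        example_kobj = kobject_create_and_add("example", kernel_kobj);
        if (!example_kobj)
                return -ENOMEM;

        /* ... and the file /sys/kernel/example/value inside it. */
        return sysfs_create_file(example_kobj, &value_attr.attr);
}

static void __exit example_exit(void)
{
        kobject_put(example_kobj);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
```

Once such a module is loaded, the attribute would appear as /sys/kernel/example/value and could be read with cat and written with echo from user space.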
- A.5.1. Linux Programming and the GNU Toolchain
Making the Transition to Linux: A Guide to the Linux Command Line Interface for Students, by Joshua Glatt. Copyright © 2008 Joshua Glatt. Revision history: 1.31 (14 Sept 2008, jg) various small but useful changes, preparing to revise section on vi; 1.30 (10 Sept 2008, jg) revised further reading and suggestions, other revisions; 1.20 (27 Aug 2008, jg) revised first chapter, other revisions; 1.10 (20 Aug 2008, jg) first major revision; 1.00 (11 Aug 2008, jg) first official release (w00t); 0.95 (06 Aug 2008, jg) second beta release; 0.90 (01 Aug 2008, jg) first beta release. License: this document is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License [http://creativecommons.org/licenses/by-nc-sa/3.0/us/]. Legal notice: this document is distributed in the hope that it will be useful, but it is provided "as is" without express or implied warranty of any kind, without even the implied warranties of merchantability or fitness for a particular purpose. Although the author makes every effort to make this document as complete and as accurate as possible, the author assumes no responsibility for errors or omissions, nor does the author assume any liability whatsoever for incidental or consequential damages in connection with or arising out of the use of the information contained in this document. The author provides links to external websites for informational purposes only and is not responsible for the content of those websites.
- Performance, Scalability on the Server Side
Performance, Scalability on the Server Side. John VanDyk, presented at Des Moines Web Geeks, 9/21/2009.
Who is this guy? Systems I've worked with over the years: Apple //, Macintosh, Windows 3.1 through Server 2008 R2, Digital Unix (Tru64), Linux (primarily RHEL), FreeBSD. Languages I've worked with over the years: Perl, Userland Frontier™, Python, Java, Ruby, PHP (Userland Frontier™'s integrated language is UserTalk™). Open source developer since 2000: Perl/Python/PHP, MySQL, Apache, Linux (the LAMP stack).
(Chart: time to serve request versus number of clients; performance vs. scalability.) Network in, network out, RAM, CPU, storage: these are the basic laws of physics. All bottlenecks are caused by one of these four resources.
Disk-bound. Tools: iostat, vmstat. Determine if you are disk-bound by measuring throughput.
vmstat (BSD):
  procs   memory       page                     disk  faults       cpu
  r b w   avm   fre    flt re pi po    fr sr    tw0   in    sy    cs  us sy id
  0 2 0   799M  842M    27  0  0  0    12  0     23  344  2906  1549  1  1 98
  3 3 0   869M  789M  5045  0  0  0   406  0     10 1311 17200  5301 12  4 84
  3 5 0   923M  794M  5219  0  0  0  5178  0     27 1825 21496  6903 35  8 57
  1 2 0   931M  784M   909  0  0  0   146  0     12  955  9157  3570  8  4 88
(Slide callouts: blocked processes; plenty of RAM, idle; no swapping; CPUs.) A disk-bound FreeBSD machine. b = blocked for resources, fr = pages freed/sec, cs = context switches, avm = active virtual pages, in = interrupts, flt = memory page faults, sy = system calls per interval.
vmstat (RHEL5):
  # vmstat -S M 5 25
  procs -----------memory---------- ---swap-- ----io---- --system-- -----cpu------
   r  b   swpd   free   buff  cache   si   so   bi   bo    in    cs us sy id wa st
   1  0      0   1301    194   5531    0    0    0   29  1454  2256 24 20 56  0  0
   3  0      0   1257    194   5531    0    0    0   40  2087  2336 34 27 39  0  0
   2  0      0   1183    194   5531    0    0    0   53  1658  2763 33 28 39  0  0
   0  0      0   1344    194   5531    0    0    0   34  1807  2125 29 19 52  0  0
(Slide callouts: no blocked processes; busy but not overloaded CPU.) in = interrupts/sec, cs = context switches/sec, wa = time waiting for I/O.
Solving disk bottlenecks: separate spindles (logs and databases); get rid of atime updates!; minimize writes; move temp writes to /dev/shm. Overview of what we're about to dive into.
- Text Editing in UNIX: An Introduction to vi and Editing
Text Editing in UNIX: a short introduction to vi, pico, and gedit. Copyright 2006-2009 Stewart Weiss.
About UNIX editors: there are two types of text editors in UNIX: those that run in terminal windows, called text mode editors, and those that are graphical, with menus and mouse pointers. The latter require a windowing system, usually X Windows, to run. If you are remotely logged into UNIX, say through SSH, then you should use a text mode editor. It is possible to use a graphical editor, but it will be much slower to use. I will explain more about that later.
Text mode editors: the three text mode editors of choice in UNIX are vi, emacs, and pico (really nano, to be explained later). vi is the original editor; it is very fast, easy to use, and available on virtually every UNIX system. The vi commands are the same as those of the sed filter as well as several other common UNIX tools. emacs is a very powerful editor, but it takes more effort to learn how to use it. pico is the easiest editor to learn, and the least powerful. pico was part of the Pine email client; nano is a clone of pico.
What these slides contain: these slides concentrate on vi because it is very fast and always available. Although the set of commands is very cryptic, by learning a small subset of the commands, you can edit text very quickly. What follows is an outline of the basic concepts that define vi.
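The excerpt cuts off before the command outline itself. For orientation only (this listing is not part of the original slides), a few of the standard vi commands that such a small subset typically covers are:

```
vi file.txt     open file.txt (vi starts in command mode)
i               enter insert mode; press Esc to return to command mode
x               delete the character under the cursor
dd              delete the current line
/pattern        search forward for "pattern"
:w              write (save) the file
:q              quit; :wq saves and quits, :q! quits without saving
```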
- ReFS: Is It a Game Changer? Presented by Rick Vanover, Director, Technical Product Marketing & Evangelism, Veeam
Technical Brief. ReFS: Is It a Game Changer? Presented by Rick Vanover, Director, Technical Product Marketing & Evangelism, Veeam.
Overview: Backing up data is more important than ever, as data centers store larger volumes of information and organizations face threats such as ransomware and other digital risks. Microsoft's Resilient File System, or ReFS, offers a more robust solution than the old NT File System. In fact, Microsoft has stated that ReFS is the preferred data volume for Windows Server 2016. ReFS is an ideal solution for backup storage. By utilizing the ReFS BlockClone API, Veeam has developed Fast Clone, a fast, efficient storage backup solution. This solution offers organizations peace of mind through a more advanced approach to synthetic full backups.
Context: Rick Vanover discussed Microsoft's Resilient File System (ReFS) and described how Veeam leverages this technology for its Fast Clone backup functionality.
Key takeaways: Resilient File System (ReFS) is a valuable Microsoft storage technology that can transform the data center. Some of the key differences between ReFS and the NT File System (NTFS): ReFS has many of the same limits as NTFS but supports a larger maximum volume size. ReFS and NTFS support the same maximum file name length, maximum path name length, and maximum file size; however, ReFS can handle a maximum volume size of 4.7 zettabytes, compared to 256 terabytes for NTFS. The most common functions are available on both ReFS and NTFS.
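The brief refers to the ReFS block-clone capability that Fast Clone builds on. On Windows, ReFS block cloning is exposed to applications through the FSCTL_DUPLICATE_EXTENTS_TO_FILE control code; the sketch below shows the general shape of such a call. It is illustrative only, not Veeam's implementation: the file names and clone length are hypothetical, and most error handling and the cluster-alignment requirements on offsets and length are omitted.

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Clone the first 64 MiB of source.dat into target.dat on an ReFS volume
 * without copying the data: after the call both files reference the same
 * on-disk clusters, which is the idea behind "fast" synthetic fulls. */
int main(void)
{
    HANDLE src = CreateFileW(L"source.dat", GENERIC_READ,
                             FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    HANDLE dst = CreateFileW(L"target.dat", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, CREATE_ALWAYS, 0, NULL);
    if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER length;
    length.QuadPart = 64 * 1024 * 1024;   /* cluster-aligned clone size */

    /* The target must be at least as large as the cloned range. */
    SetFilePointerEx(dst, length, NULL, FILE_BEGIN);
    SetEndOfFile(dst);

    DUPLICATE_EXTENTS_DATA dup = {0};
    dup.FileHandle = src;                 /* source of the shared extents */
    dup.SourceFileOffset.QuadPart = 0;
    dup.TargetFileOffset.QuadPart = 0;
    dup.ByteCount = length;

    DWORD bytes = 0;
    BOOL ok = DeviceIoControl(dst, FSCTL_DUPLICATE_EXTENTS_TO_FILE,
                              &dup, sizeof(dup), NULL, 0, &bytes, NULL);
    printf("block clone %s\n", ok ? "succeeded" : "failed");

    CloseHandle(src);
    CloseHandle(dst);
    return ok ? 0 : 1;
}
```

This requires a Windows SDK recent enough to define FSCTL_DUPLICATE_EXTENTS_TO_FILE (Windows Server 2016 / Windows 10 or later), and both files must reside on the same ReFS volume.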
- The Release Engineering of FreeBSD 4.4
The Release Engineering of FreeBSD 4.4. Murray Stokely, [email protected], Wind River Systems.
Abstract: This paper describes the approach used by the FreeBSD release engineering team to make production-quality releases of the FreeBSD operating system. It details the methodology used for the release of FreeBSD 4.4 and describes the tools available for those interested in producing customized FreeBSD releases for corporate rollouts or commercial productization.
1 Introduction: The development of FreeBSD is a very open process. FreeBSD is comprised of contributions from thousands of people around the world. The FreeBSD Project provides anonymous CVS[1] access to the general public so that others can have access to log messages, diffs between development branches, and other productivity enhancements that formal source code management provides.
... different pace, and with the general assumption that they have first gone into FreeBSD-CURRENT and have been thoroughly tested by our user community. In the interim period between releases, nightly snapshots are built automatically by the FreeBSD Project build machines and made available for download from ftp://stable.FreeBSD.org. The widespread availability of binary release snapshots, and the tendency of our user community to keep up with -STABLE development with CVSup and "make world"[8], helps to keep FreeBSD-STABLE in a very reliable condition even before the quality assurance activities ramp up pending a major release. Bug reports and feature requests are continuously submitted by users throughout the release cycle. Problem reports are entered into our GNATS[9] database through email, the send-pr(1) application, or via a web-based form. In addition to the multitude of different technical mailing lists about FreeBSD, the FreeBSD quality-assurance mail-