CephFS fsck: Distributed Filesystem Checking

Hi, I'm Greg
Greg Farnum, CephFS Tech Lead, Red Hat
[email protected]
A core Ceph developer since June 2009.

What is Ceph?
An awesome, software-based, scalable, distributed storage system that is designed for failures:
• Object storage (our native API)
• Block devices (Linux kernel, QEMU/KVM, others)
• RESTful S3 & Swift API object store
• POSIX filesystem

The Ceph stack:
• LIBRADOS: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
• RADOSGW: a bucket-based REST gateway, compatible with S3 and Swift
• RBD: a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
• CEPH FS: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
All of these sit on RADOS (Reliable Autonomic Distributed Object Store): a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes.

What is CephFS?
An awesome, software-based, scalable, distributed POSIX-compliant file system that is designed for failures.

RADOS: a user perspective

Objects in RADOS have three parts:
• data: the byte stream itself
• xattrs: small attributes (e.g. version: 1)
• omap: a key/value map (e.g. foo -> bar, baz -> qux)

The librados API
C, C++, Python, Java, shell.
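The three-part object model above (byte data, xattrs, omap) can be sketched as a toy Python class. The names here are purely illustrative; this is not the librados API.

```python
# Toy model of a RADOS object: a byte stream, small xattrs, and an
# omap key/value map. Illustrative only -- not the librados API.
class RadosObject:
    def __init__(self, name):
        self.name = name
        self.data = bytearray()   # the byte stream
        self.xattrs = {}          # small attributes, e.g. {"version": "1"}
        self.omap = {}            # larger key/value map

    def write(self, offset, buf):
        # grow the object if needed, then overwrite the extent
        end = offset + len(buf)
        if end > len(self.data):
            self.data.extend(b"\x00" * (end - len(self.data)))
        self.data[offset:end] = buf

obj = RadosObject("foo")
obj.write(0, b"hello")
obj.xattrs["version"] = "1"
obj.omap["foo"] = "bar"
```

The point of the sketch is that all three parts live together in one named object, which is what lets later slides hang per-file metadata (like backtraces) directly off the data objects.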
A file-like API:
• read/write (extent), truncate, remove; get/set/remove xattr or key
• efficient copy-on-write clone
• snapshots: single object or pool-wide
• atomic compound operations/transactions:
  • read + getxattr, or write + setxattr
  • compare an xattr value and, if it matches, write + setxattr
• "object classes":
  • load new code into the cluster to implement new methods
  • calc sha1, grep/filter, generate a thumbnail
  • encrypt, increment, rotate an image
  • implement your own access mechanisms (e.g. HDF5 on the node)
• watch/notify: use an object as a communication channel between clients (a locking primitive)
• pgls: list the objects within a placement group

The RADOS cluster
A client talks to the monitors (M) and to the Object Storage Devices (OSDs). Objects are grouped into pools, and each pool's objects are spread across the OSDs.

Placement is deterministic, in two steps:
• pg = hash(object name) % num_pgs
• OSDs = CRUSH(pg, cluster state, rule set)

RADOS data guarantees:
• Any write acked as safe will be visible to all subsequent readers.
• Any write ever visible to a reader will be visible to all subsequent readers.
• Any write acked as safe will not be lost unless the whole containing PG is lost.
• A PG will not be lost unless all N copies are lost (N is admin-configured, usually 3)...
• ...and in case of OSD failure the system will try to bring you back up to N copies (no user intervention required).

More data guarantees:
• Data is regularly scrubbed to ensure copies are consistent with each other, and administrators are alerted if inconsistencies arise...
• ...and while it's not automated, it's usually easy to identify the correct data with "majority voting" or similar.
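The "majority voting" idea can be sketched in a few lines. This is a simplification under stated assumptions: real scrub compares checksums and object metadata across replicas rather than whole object contents, and the function name is invented here.

```python
from collections import Counter

def pick_majority(replicas):
    """Given the value held by each replica of an object, return the
    value held by a strict majority, or None if there is no majority.
    A toy sketch of majority-voting repair, not Ceph's scrub logic."""
    counts = Counter(replicas)
    value, n = counts.most_common(1)[0]
    return value if n > len(replicas) // 2 else None

# One of three replicas is corrupt: the other two outvote it.
assert pick_majority([b"good", b"good", b"bad!"]) == b"good"
# Three-way disagreement: no majority, so an admin must decide.
assert pick_majority([b"a", b"b", b"c"]) is None
```

The second case is why the slide says this step is not automated: with no majority, the system can only flag the inconsistency.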
• btrfs maintains checksums for certainty, and we think this is the future.

CephFS system design

CephFS design goals:
• Infinitely scalable
• Avoid all single points of failure
• Self-managing

Clients send metadata operations to Metadata Servers (MDSes) and file data directly to the OSDs; the monitors (M) track cluster state.

Scaling metadata
We have to use multiple MetaData Servers (MDSes), which raises two issues:
• Storage of the metadata
• Ownership of the metadata

Scaling metadata – storage
Some systems store metadata on the MDS system itself, but that's a single point of failure!
• Hot standby?
• External metadata storage ✓

Scaling metadata – ownership
Traditionally: assign hierarchies manually to each MDS. But if workloads change, your nodes can become unbalanced.
Newer: hash directories onto MDSes. But then clients have to jump around for every folder traversal.
Either way, you end up with one tree split across two (or more) metadata servers.

The Ceph Metadata Server
Key insight: if metadata is stored in RADOS, ownership should be impermanent. One MDS is authoritative over any given subtree, but...
• that MDS doesn't need to keep the whole tree in memory, and
• there's no reason the authoritative MDS can't be changed!

The Ceph MDS – partitioning
Cooperative partitioning between servers:
• Keep track of how hot metadata is
• Migrate subtrees to keep the heat distribution similar
• Cheap, because all metadata is in RADOS
• Maintains locality

The Ceph MDS – persistence
All metadata is written to RADOS, and changes are only visible once they are in RADOS.

The Ceph MDS – clustering benefits
Dynamic adjustment to metadata workloads:
• Replicate hot data to distribute the workload
Dynamic cluster sizing:
• Add nodes as you wish
• Decommission old nodes at any time
Recover quickly and easily from failures

Dynamic subtree partitioning: does it work? It scales, and it redistributes!
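The heat-based migration idea can be sketched as a toy balancer. All names and the 2x imbalance threshold are invented for illustration; the real MDS balancer is considerably more sophisticated.

```python
# Toy sketch of heat-based subtree migration between two MDSes.
# Each MDS is modeled as a dict mapping subtree path -> request heat.
def rebalance(mds_a, mds_b):
    heat = lambda m: sum(m.values())
    hot, cold = (mds_a, mds_b) if heat(mds_a) >= heat(mds_b) else (mds_b, mds_a)
    # Migrate only on a clear imbalance (2x is an arbitrary threshold
    # here), and never strip an MDS of its last subtree.
    if heat(hot) > 2 * heat(cold) and len(hot) > 1:
        # Hand off the hottest subtree. This is cheap in CephFS terms
        # because the metadata already lives in RADOS -- only the
        # authority over the subtree moves, not the data.
        path = max(hot, key=hot.get)
        cold[path] = hot.pop(path)

a = {"/home": 50.0, "/scratch": 900.0}
b = {"/archive": 10.0}
rebalance(a, b)   # authority over /scratch migrates to the cooler MDS
```

Migrating whole subtrees (rather than hashing individual directories) is what preserves locality: a client working under /scratch keeps talking to a single MDS after the move.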
Cool extras
Besides POSIX compliance and scaling.

Snapshots:
$ mkdir foo/.snap/one    # create snapshot
$ ls foo/.snap
one
$ ls foo/bar/.snap
_one_1099511627776       # parent's snap name is mangled
$ rm foo/myfile
$ ls -F foo
bar/
$ ls foo/.snap/one
myfile bar/
$ rmdir foo/.snap/one    # remove snapshot

Recursive statistics:
$ ls -alSh | head
total 0
drwxr-xr-x 1 root       root      9.7T 2011-02-04 15:51 .
drwxr-xr-x 1 root       root      9.7T 2010-12-16 15:06 ..
drwxr-xr-x 1 pomceph    pg4194980 9.6T 2011-02-24 08:25 pomceph
drwx--x--- 1 fuzyceph   adm       1.5G 2011-01-18 10:46 fuzyceph
drwxr-xr-x 1 dallasceph pg275     596M 2011-01-14 10:06 dallasceph
$ getfattr -d -m ceph. pomceph
# file: pomceph
ceph.dir.entries="39"
ceph.dir.files="37"
ceph.dir.rbytes="10550153946827"
ceph.dir.rctime="1298565125.590930000"
ceph.dir.rentries="2454401"
ceph.dir.rfiles="1585288"
ceph.dir.rsubdirs="869113"
ceph.dir.subdirs="2"

Different storage strategies:
• Set a "virtual xattr" on a directory and all new files underneath it follow that layout.
• Layouts can specify lots of detail about storage:
  • which pool file data goes into
  • how large file objects and stripes are
  • how many objects are in a stripe set
• So in one cluster you can use:
  • one slow pool with big objects for Hadoop workloads
  • one fast pool with little objects for a scratch space
  • one slow pool with small objects for home directories
  • or whatever else makes sense...

CephFS: important data structures

Directory objects:
• One (or more!) per directory
• Deterministically named: <inode number>.<directory piece>
• Embed the dentries and inodes for each child of the folder
• Contain a potentially-stale versioned backtrace (path location)
• Located in the metadata pool

File objects:
• One or more per file
• Deterministically named: <inode number>.<object number>
• The first object contains a potentially-stale versioned backtrace
• Located in any of the data pools

MDS log (objects)
The MDS fully journals all metadata operations.
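The deterministic naming scheme is what lets a checker enumerate every piece belonging to an inode without any central index. A minimal sketch, assuming the common "<hex inode>.<8-hex-digit piece>" spelling (treat the exact formatting as an assumption, not a specification):

```python
# Sketch of deterministic object naming for a striped inode:
# hex inode number, a dot, then a zero-padded hex piece number.
def object_name(ino, piece):
    return "%x.%08x" % (ino, piece)

# Every object of inode 0x10000000000 can be generated directly:
names = [object_name(0x10000000000, n) for n in range(3)]
# ['10000000000.00000000', '10000000000.00000001', '10000000000.00000002']
```

Given an inode number, the checker can probe for piece 0, 1, 2, ... until objects stop existing; conversely, any object's name reveals which inode it belongs to.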
The log is chunked across objects:
• Deterministically named: <log inode number>.<log piece>
• Log objects may or may not be replayable if previous entries are lost: each entry contains what it needs, but e.g. a file move can depend on a previous rename entry
• Located in the metadata pool

MDSTable objects (single objects, all located in the metadata pool):
• SessionMap (per-MDS): stores the state of each client Session, particularly the inodes preallocated for each client
• InoTable (per-MDS): tracks which inodes are available to allocate (this is not a traditional inode mapping table or similar)
• SnapTable (shared): tracks system snapshot IDs and their state (in use, pending create/delete)

CephFS: metadata update flow
1. The client sends a request (e.g. "create dir") to the MDS.
2. The MDS processes the request: it sends an "early reply" to the client and writes the operation to its journal on the OSDs.
3. When the OSDs ack the journal write, the MDS sends a safe reply to the client.
4. ...time passes... and the journal accumulates more entries.
5. The MDS flushes the log: it writes the updated directory object out to the OSDs.
6. The OSDs ack the directory write...
7. ...and the MDS deletes the log objects it no longer needs.

Traditional fsck

e2fsck
Drawn from "ffsck: The Fast File System Checker" (FAST '13) and http://mm.iit.uni-miskolc.hu/Data/texts/Linux/SAG/node92.html

Key data structures & checks:
• Superblock: check the free block count and free inode count
• Data bitmap: blocks marked free are not in use
• Inode bitmap: inodes marked free are not in use
• Directories: inodes are allocated, reasonable, and in-tree
• Inodes: consistent internal state; correct link counts; blocks claimed are valid and unique

Procedure:
• Pass 1: iterate over all inodes; check self-consistency, build up maps of in-use blocks/inodes/etc., and correct any issues with doubly-allocated blocks
• Pass 2: iterate over all directories; check dentry validity and that all referenced inodes are valid; cache a tree structure
• Pass 3: check directory connectivity in-memory
• Pass 4: check inode reference counts in-memory
• Pass 5: check the cached maps against the on-disk maps and overwrite if needed

CephFS fsck: what it needs to do

RADOS is different than a disk:
• We look at objects, not disk blocks
• And we can't lose them (at least not in a way we can recover)
• We can deterministically identify all pieces of file data, and the inode they belong to!
• It
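The idea the slides are building toward can be sketched end to end: because data objects are named <inode>.<piece> and carry a (possibly stale) backtrace, a checker can group objects by inode and test each inode's recorded path against the directory tree. Everything below is a toy model with invented names, not the CephFS repair tooling.

```python
# Toy cross-check of data objects against directory metadata.
def check(data_objects, dirtree):
    """data_objects: {object_name: backtrace_path}, where an object's
    inode is the hex part of its name before the dot.
    dirtree: {path: inode}, as recorded in directory objects.
    Returns the inodes whose backtrace doesn't match the tree
    (orphans, or stale backtraces needing repair)."""
    suspects = set()
    for name, backtrace in data_objects.items():
        ino = int(name.split(".")[0], 16)
        if dirtree.get(backtrace) != ino:
            suspects.add(ino)
    return suspects

objs = {"1000.00000000": "/a/f", "1000.00000001": "/a/f",
        "2000.00000000": "/gone"}
tree = {"/a/f": 0x1000}
# inode 0x2000 has no matching dentry -> flagged for repair
```

Unlike e2fsck's bitmap walks, this check never touches disk blocks: both sides of the comparison (object names plus backtraces, and directory objects) are self-identifying RADOS objects that can be listed with pgls.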