Lab IV: File Recovery: Meta Data Layer


New Mexico Tech, Digital Forensics, Fall 2006
Prepared by Regis Cassidy, Sandia National Laboratories

Objectives

- Find meta data information for evidence found in a search
- Recover a file based on meta data
- Use the Autopsy Forensic Browser at the meta data layer
- Observe file deletion behavior at the meta data layer with different file systems

Procedures

PART 1

Step 1

The same image file you used in Lab III is on /dev/hdb1. You will use this image for the first part of the lab. Four directories, ext2/, ext3/, fat32/ and ntfs/, have been created for you on /dev/hdb1 that you will use for the second part of the lab. You will also be using another disk that has been added to the system on /dev/hdd.

Launch your "Linux – Forensics" virtual machine. Mount /dev/hdb1 to /mnt/recover. The image.dd file from Lab III is located in /mnt/recover/lab4.

Question 1: You wish to recover file05, the Word document from Lab III. As a review, list the steps needed to find the file based on the search word "keyboard." At what block of the original image file is this search word found?

# dls -f linux-ext2 /mnt/recover/lab4/image.dd > /mnt/recover/lab4/image.unalloc.dls
# strings -a -t d /mnt/recover/lab4/image.unalloc.dls > /mnt/recover/lab4/image.unalloc.str
# grep "keyboard" /mnt/recover/lab4/image.unalloc.str
# dcalc -f linux-ext2 -u 625 /mnt/recover/lab4/image.dd

The search word is found at block 883 in the image.dd file.

Finding Meta Data Information

Step 2

The information provided by the inode in Linux is known as meta data information. Each file on the system (including directories) is associated with a unique inode. Inodes are equivalent to directory entries in FAT32 and Master File Table entries in NTFS. This meta data information can be very useful for recovering files if the inode has not been reallocated to a new file. One function of the inode is to provide a mapping to all the blocks that the file uses on disk. Given a block number, the Sleuthkit tool ifind can be used to locate the inode that the block is associated with.

# ifind -f linux-ext2 /mnt/recover/lab4/image.dd -d block

Note: Use the block number you found in Question 1 for block.

Now you should know the inode associated with the Word document file05. The Sleuthkit tool istat is used to list the meta data information contained in the inode.

# istat -f linux-ext2 /mnt/recover/lab4/image.dd inode | less

Note: Use the inode you found with ifind in the step above. Notice that istat reports that the inode is not allocated.

Question 2: What would it mean if istat showed the inode as being allocated? If it is unallocated, can you be certain you are viewing the inode information for the file you found in a search?

The inode information is of no use for recovering a deleted file if the inode has been reallocated; the meta data it holds then describes a new file. Even if an inode is unallocated, you still cannot be sure it contains the right meta data for recovering your file, because it could have been reallocated to a new file that was itself later deleted. When you use icat, you can verify that the meta data is associated with the file you mean to recover.

Question 3: Review the output of istat again. Name three important fields found in the meta data that you think are needed to recover a file, and explain why.
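For reference, istat output for an ext2 inode has roughly the shape sketched below. The inode number, ownership, mode, and times shown here are placeholders rather than values from the lab image; substitute the inode you found with ifind.

# istat -f linux-ext2 /mnt/recover/lab4/image.dd 12
inode: 12
Not Allocated
Group: 0
uid / gid: 500 / 500
mode: -rw-r--r--
size: 22110
num of links: 0

Inode Times:
Accessed:       ...
File Modified:  ...
Inode Modified: ...
Deleted:        ...

Direct Blocks:
...

Indirect Blocks:
...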
The direct blocks and indirect blocks are needed to find all of the locations on disk belonging to the file. The file size is needed as well, to determine what fraction of the last block contains valid data for the file.

Recovering Files from Meta Data Information

Step 3

When this meta data information is available, it is much easier to recover a deleted file, even one that is binary and/or fragmented. Rather than having to locate the data blocks yourself and use dcat, you can use the meta data information as a road map to the file. The Sleuthkit tool icat will use the meta data to recover a file in a single step.

Recover file05 using icat:

# icat -f linux-ext2 /mnt/recover/lab4/image.dd inode > /mnt/recover/lab4/file05

Verify the file size (22110 bytes) and use the file command to also verify that a Word document was recovered. Compare hashes from fileinfo.txt. You can also view the file in OpenOffice Writer, which is capable of opening Word documents. Writer is located in the K menu under Office.

Question 4: What are some reasons that would force you to still use dcat to recover a file rather than icat?

icat can only be used if the inode contains valid meta data for the file you wish to recover. That meta data will not be valid if the inode has been reallocated to a new file. It is also technically possible for someone to alter the meta data in an attempt to hide data, or the meta data may become corrupted in some other way. In addition, newer file systems support security attributes that erase the meta data when a file is deleted.

Using Autopsy at the Meta Data Layer

Step 4

You will now use the Autopsy Forensic Browser again and learn more of its features, this time at the meta data layer.

Make an Autopsy working directory:

# mkdir /mnt/recover/lab4/autopsy

Start Autopsy:

# autopsy -d /mnt/recover/lab4/autopsy

From your toolbar, launch the Mozilla web browser. From the links bar, start Autopsy. Create a new case called Lab4 with your name as the investigator. Add a new host and use 'vmware-forensics' in the host name field. Enter MST for the timezone. Add an image, which is located at /mnt/recover/lab4/image.dd. Keep the Import Method at symlink. Change the file system type to linux-ext2. The mount point should be set to /. Select 'Calculate the hash value for this image'. Click the Keyword Search tab and search image.dd for "keyboard."

There should be a match at the same block (fragment) number you found on the command line. Click the link for the hex or ASCII content. You should see the contents of the Word document. There is a panel located directly above the content window; you may have to scroll to see the link 'Find Meta Data Address'. Click this link to further expand the information in that panel. There should now be an inode number listed (the same as the one you found earlier with ifind), and it is a link as well. After clicking on the inode number link, a new window opens containing the meta data information you saw with istat. Notice that some additional information is provided at the top. Had the file not been deleted, its name would appear under the 'Pointed to by file' field. The 'File Type (Recovered)' field is determined by the file command, which you have already used. Click the 'Export Contents' button and save the file as file05.doc to /mnt/recover/lab4/. Verify the recovered file's hash.
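One way to check the recovered copies is sketched below. It assumes fileinfo.txt from Lab III records MD5 hashes and sits in /mnt/recover/lab4; adjust the paths to wherever the file actually lives.

# md5sum /mnt/recover/lab4/file05 /mnt/recover/lab4/file05.doc
# grep -i file05 /mnt/recover/lab4/fileinfo.txt
# file /mnt/recover/lab4/file05.doc

The two md5sum values should match each other and the hash recorded in fileinfo.txt, and file should identify the export as a Word document.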
Question 5: Why, when you export the contents at the meta data layer, do you not need to modify the file as you do when you export a file at the data unit layer?

The meta data layer records the size of the file, which can be used to extract the correct number of bytes from the last block. At the data unit layer nothing is known about the file size, so the whole last block is extracted. If the block size is 1024 bytes, for example, the 22110-byte file05 occupies 21 full blocks plus 606 valid bytes of a 22nd block; an export at the meta data layer stops after those 606 bytes, while an export at the data unit layer would include the remaining 418 bytes of slack.

You can view the information for any inode by clicking on the 'Meta Data' tab. Click the 'Allocation List' button. This is a listing of all the available inodes on the file system and whether they are allocated or unallocated. Even though there are no files on the image, inodes 1 through 10 appear to be allocated. However, when you view them, they are not being used (except inode 2). Inode 1 is reserved for a list of bad blocks on the device. Inode 2 is reserved for the root directory. Some of the inodes between 3 and 10 have special purposes, and some are simply unused, being reserved for possible future use. Inode 11 is the first inode available for ordinary use and will usually be assigned to the lost+found directory when an ext2 file system is first made.

You are done with the first part of this lab. Close Autopsy.

PART 2

Understanding Meta Data on Different File Systems

Step 5

For the second part of the lab you will be looking at a disk on /dev/hdd that has been divided into four partitions. Each partition contains a different file system: Linux ext2, Linux ext3, FAT32 and NTFS, respectively. The goal is to observe how the different file systems behave with file creation and deletion.

Question 6: What is the block (cluster) size for each of the partitions? (Note: You are used to using the Sleuthkit tools against a dd image file, but you can also use them against an actual disk or partition.)
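One way to answer this is with the Sleuthkit fsstat tool run directly against each partition. The sketch below assumes the four partitions appear as /dev/hdd1 through /dev/hdd4, in the order listed above.

# fsstat -f linux-ext2 /dev/hdd1 | grep -i size
# fsstat -f linux-ext3 /dev/hdd2 | grep -i size
# fsstat -f fat32 /dev/hdd3 | grep -i size
# fsstat -f ntfs /dev/hdd4 | grep -i size

The ext2 and ext3 partitions report a block size, while the FAT32 and NTFS partitions report sector and cluster sizes.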