Defending In-Process Memory Abuse with Mitigation and Testing


Defending In-Process Memory Abuse with Mitigation and Testing

A Dissertation Presented by Yaohui Chen to the Khoury College of Computer Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science.

Northeastern University
Boston, Massachusetts
October 2019 (Version Dated: October 21, 2019)

To my parents, who gave me the life like a river flows, & to Boyu, my best friend, who accompanies me through the rapids and undertows.

Contents

List of Figures
List of Tables
Acknowledgments
Abstract of the Dissertation

1 Introduction
  1.1 Problem Statement
  1.2 Thesis Statement
  1.3 Contributions
    1.3.1 A Hybrid Approach for Practical Fine-grained Software Randomization
    1.3.2 Leave No Program Behind: Execute-only Memory Protection for COTS Binaries
    1.3.3 Keep My Secrets: In-process Private Memory
    1.3.4 Focus on Bugs: Bug-driven Hybrid Fuzzing
    1.3.5 Learning from Experience: Smart Seed Scheduling for Hybrid Fuzzing
  1.4 Roadmap

2 Related Work
  2.1 Perpetual War on Memory Corruption Attacks
  2.2 In-Process Memory Isolation
  2.3 Automatic Software Test Generation

Part I: Runtime Protections Against In-Process Abuse

3 Code Reuse Exploit Mitigations
  3.1 Compiler-assisted Code Randomization
    3.1.1 Background
    3.1.2 Overall Approach
    3.1.3 Compiler-level Metadata
    3.1.4 Link-time Metadata Consolidation
    3.1.5 Code Randomization
    3.1.6 Experimental Evaluation
  3.2 Enabling Execute-Only Memory for COTS Binaries on AArch64
    3.2.1 Overview
    3.2.2 Background
    3.2.3 Design
    3.2.4 Evaluation
  3.3 Limitations

4 In-Process Memory Isolation
  4.1 Overview
  4.2 Design
  4.3 Implementation
  4.4 Evaluation
  4.5 Limitations and Discussion

Part II: Offline Software Testing to Find Memory Corruption Bugs

5 Bug-driven Hybrid Testing
  5.1 Background and Motivation
    5.1.1 Inefficiency of Existing Coverage-guided Hybrid Testing
    5.1.2 Motivation
  5.2 Design
    5.2.1 Core Techniques
    5.2.2 System Design
  5.3 Implementation
  5.4 Evaluation
    5.4.1 Evaluation with LAVA-M
    5.4.2 Evaluation with Real-world Programs
    5.4.3 Vulnerability Triage

6 Learning-based Hybrid Fuzzing
  6.1 Introduction
  6.2 Background
    6.2.1 Hybrid Fuzzing
    6.2.2 Supervised Machine Learning
  6.3 System Design
    6.3.1 System Overview
    6.3.2 System Requirements
    6.3.3 Feature Engineering
    6.3.4 Seed Label Inference
    6.3.5 Model Construction and Prediction
    6.3.6 Updating the Model
  6.4 Evaluation and Analysis
    6.4.1 Evaluation Setup
    6.4.2 Learning Effectiveness
    6.4.3 Insights and Analyses
    6.4.4 Model Reusability
    6.4.5 Model Transferability
    6.4.6 Discovered Bugs
  6.5 Discussion
    6.5.1 Applicability of Different Machine Learning Models
    6.5.2 Applicability of MEUZZ on Grey-box Fuzzing

7 Conclusion

Bibliography

List of Figures

3.1 Example of the fixup and relocation information involved during the compilation and linking process.

3.2 Overview of the proposed approach. A modified compiler collects metadata for each object file (1), which is further updated and consolidated at link time into a single extra section in the final executable (2). At the client side, a binary rewriter leverages the embedded metadata to rapidly generate randomized variants of the executable (3).

3.3 An example of the ELF layout generated by Clang (left), with the code of a particular function expanded (center and right). The leftmost and rightmost columns in the code listing ("BBL" and "Fragment") illustrate the relationships between basic blocks and LLVM's various kinds of fragments: data (DF), relaxable (RF), and alignment (AF). Data fragments are emitted by default and may span consecutive basic blocks (e.g., BBL #1 and #2). The relaxable fragment #1 is required for the branch instruction, as it may be expanded during the relaxation phase. The padding bytes at the bottom correspond to a separate fragment, although they do not belong to any basic block.

3.4 Example of jump table code generated for non-PIC and PIC binaries.

3.5 Overview of the linking process. Per-object metadata is consolidated into a single section.

3.6 Performance overhead of fine-grained (function vs. basic block reordering) randomization for the SPEC CPU2006 benchmarks.

3.7 NORAX system overview: the offline tools (left) analyze the input binary, locate all the executable data and their references (when available), and then statically patch the metadata into the raw ELF; the runtime components (right) create separate mappings for the executable data sections and update the recorded references as well as those generated at runtime.

3.8 The layout of an ELF binary transformed by NORAX. The shaded parts at the end are the generated NORAX-related metadata.

3.9 The Bionic linker's binary loading flow. NLoader operates in different binary preparation stages, including module loading, relocation, and symbol resolution.

3.10 UnixBench performance overhead for the UnixBench binaries, including runtime, peak resident memory, and file size overhead (left: user tests, right: system tests).

4.1 Shreds, threads, and a process.

4.2 Developers create shreds in their programs via the intuitive APIs and build the programs using S-compiler, which automatically verifies and instruments the executables (left); during runtime (right), S-driver handles shred entrances and exits on each CPU/thread while efficiently granting or revoking each CPU's access to the s-pools.

4.3 The DACR setup for a quad-core system, where k = 4. The first three domains (Dom0–Dom2) are reserved by Linux. Each core has a designated domain (Dom3–Dom6) that it may access when executing a shred. No CPU can access Dom7. (A DACR-encoding sketch follows this list.)

4.4 A shred's transition of states.

4.5 The time and space overhead incurred by S-compiler during the offline compilation and instrumentation phase.

4.6 The time needed for a context switch when: (1) a shred-active thread is switched off; (2) a regular thread is switched off with no process or address space change; and (3) a regular thread is switched off and a thread from a different process is scheduled on.

4.7 Invocation time of shred APIs and reference system calls (the right-most two bars are on a log scale). Shred entry is faster than thread creation, and s-pool allocation is slightly slower than basic memory mapping.

4.8 Five SPEC2000 benchmark programs tested when: (1) no shred is used; (2) shreds are used but without the lazy domain adjustment turned on in S-driver; and (3) shreds are used with the lazy domain adjustment.

5.1 A demonstrative example of hybrid testing. Figure 5.1a presents the code under test. Figures 5.1b and 5.1c are the paths followed by two seeds from the fuzzer. Their execution follows the red line and visits the grey boxes. Note that the white boxes connected by dotted lines are non-covered code.

5.2 A demonstrative example of a limitation in finding defects with existing hybrid testing. The defect comes from objdump-2.29 [33].

5.3 An example showing how to estimate the bug-detecting potential of a seed. In this example, the seed follows the path b1->b2->b3->b4. Basic block b5
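
The per-core domain layout described in the Figure 4.3 caption maps onto the standard ARMv7 Domain Access Control Register encoding: 16 domains, two bits each, where 0b00 means no access and 0b01 means client access (checked against page-table permissions). The following is a minimal sketch of that encoding under those assumptions; it is an illustration, not code from the dissertation, and the helper names are hypothetical.

```c
/* Hypothetical sketch of the per-core DACR values implied by the
 * Figure 4.3 caption (ARMv7: 16 domains, 2 bits each in the DACR).
 * Assumed encoding: 0b00 = no access, 0b01 = client access. */
#include <stdint.h>
#include <stdio.h>

#define DACR_CLIENT 0x1u

/* Grant client access to one domain within a DACR value. */
static uint32_t dacr_grant(uint32_t dacr, unsigned domain) {
    dacr &= ~(0x3u << (2 * domain));        /* clear the 2-bit field */
    return dacr | (DACR_CLIENT << (2 * domain));
}

/* Build the DACR for one core: Dom0-Dom2 stay open for the kernel,
 * Dom(3+core) is opened only while that core runs a shred, and
 * Dom7 is never granted to any core. */
static uint32_t dacr_for_core(unsigned core, int in_shred) {
    uint32_t dacr = 0;                      /* default: all domains closed */
    for (unsigned d = 0; d <= 2; d++)
        dacr = dacr_grant(dacr, d);         /* Linux-reserved domains */
    if (in_shred)
        dacr = dacr_grant(dacr, 3 + core);  /* this core's s-pool domain */
    return dacr;
}

int main(void) {
    for (unsigned core = 0; core < 4; core++)
        printf("core %u (in shred): DACR = 0x%08x\n",
               core, dacr_for_core(core, 1));
    return 0;
}
```

On real hardware such a value would be written to the DACR by privileged kernel code (in the dissertation's design, S-driver) on shred entry and exit; the user-space program above only visualizes the bit layout, e.g. core 0 inside a shred yields 0x00000055.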