Proceedings of the 12th USENIX Security Symposium


USENIX Association

Proceedings of the 12th USENIX Security Symposium
Washington, D.C., USA
August 4–8, 2003

THE ADVANCED COMPUTING SYSTEMS ASSOCIATION

© 2003 by The USENIX Association. All Rights Reserved.

For more information about the USENIX Association:
Phone: 1 510 528 8649
FAX: 1 510 548 5738
Email: [email protected]
WWW: http://www.usenix.org

Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein.


Dynamic Detection and Prevention of Race Conditions in File Accesses

Eugene Tsyrklevich ([email protected])
Bennet Yee ([email protected])
Department of Computer Science and Engineering
University of California, San Diego


Abstract

Race conditions in filesystem accesses occur when sequences of filesystem operations are not carried out in an isolated manner. Incorrect assumptions of filesystem namespace access isolation allow attackers to elevate their privileges without authorization by changing the namespace bindings. To address this security issue, we propose a mechanism for keeping track of all filesystem operations and possible interferences that might arise. If a filesystem operation is found to be interfering with another operation, it is temporarily suspended, allowing the first process to access a file object to proceed, thereby reducing the size of the time window when a race condition exists. The above mechanism is shown to be effective at stopping all realistic filesystem race condition attacks known to us with minimal performance overhead.


1 Introduction

Dating back to the 1970s, race conditions, otherwise known as the Time Of Check To Time Of Use (TOCTTOU) bug, are one of the oldest known security flaws. They are a class of local vulnerabilities that can be used to elevate one's privileges without authorization, and they occur when operations are not carried out atomically as expected.

Race conditions have been found in a variety of scenarios ranging from tracing processes [9] to filesystem accesses [13]. This paper focuses on a subset of race conditions dealing with filesystem accesses.

Some security vulnerabilities, such as buffer overflows, receive a lot of attention and have a variety of solutions to prevent them. Other vulnerabilities, such as race conditions, receive far less attention and have no definite solutions to stop them. In addition, race conditions can be very hard to spot and eliminate at compile time; it is thus beneficial to develop a dynamic and automated protection against this type of flaw.

This paper presents an analysis of filesystem race condition vulnerabilities and ways to prevent them. It also describes a prototype system that detects and stops all realistic filesystem race condition attacks known to us. The proposed solution stops the attacks with minimum performance overhead, while retaining application compatibility.

The rest of this paper is organized as follows. Section 2 describes race conditions in filesystem accesses and presents several sample vulnerabilities. The design of the prototype system and the rationale behind it is covered in Section 3. Section 4 details the implementation of the prototype system, as well as some of the decisions made during the design phase incorporating the security of the system and code portability. Section 5 reviews the attack scenarios used for the security testing of the prototype system. Section 6 describes the compatibility testing and Section 7 covers the system performance. Section 8 reviews related work, Section 9 proposes future work and Section 10 provides the conclusions.
2 Race Conditions in File Accesses

The threat model is that of an adversary with unprivileged access to a system who is trying to elevate his or her privileges by exploiting a filesystem race condition vulnerability. Race conditions in filesystem accesses occur when applications incorrectly assume that a sequence of filesystem operations is atomic and the filesystem namespace is otherwise static. In reality, attackers can modify the filesystem namespace at any time by replacing files with symbolic links or by renaming files and directories.

    if (access(fn, R_OK | W_OK) == 0) {
        fd = open(fn, O_RDWR, 0644);
        if (fd < 0) {
            err(1, fn);
        }

        /* use the file */

Figure 1: Race Condition in Access Operation

    chdir("/tmp/a")
    chdir("b")
    chdir("c")
    unlink("*")
    [attacker entrypoint: "mv /tmp/a/b/c /tmp"]
    chdir("..")
    rmdir("c")
    unlink("*")
    chdir("..")
    rmdir("b")
    unlink("*")
    rmdir("/tmp/a")

Figure 2: Race Condition in Directory Operations

2.1 Access Operation

The code fragment in Figure 1 is taken from an insecure setuid application that needs to check whether a user running the program has the right to write to a file (since the application is setuid root, it can write to any file).

The source code incorrectly assumes that the file object referenced by the filename "fn" does not change in between the access and open invocations. The assumption is invalid since an attacker can replace the file object "fn" with another file object (such as a symbolic link to another file), tricking the program into accessing arbitrary files. To avoid this race condition, an application should change its effective id to that of the desired user and then make the open system call directly, or use fstat after the open instead of invoking access.

2.2 Directory Operations

Another subtle race condition was recently discovered in the GNU file utilities package [13]. During the recursive removal of a directory tree, an insecure chdir("..") operation is performed, after removing the content of a subdirectory, to get back to the upper directory. An attacker who is able to exploit this race condition can trick users into deleting arbitrary files and directories. The bug is triggered when an attacker moves a subdirectory that the rm utility is processing to a different directory.

Figure 2 contains a slightly abbreviated system call trace of a recursive removal of the /tmp/a directory tree that contains two nested subdirectories. Consider what happens when an attacker moves the /tmp/a/b/c directory to /tmp before the first chdir("..") call is invoked. After removing the contents of the /tmp/a/b/c subdirectory (now /tmp/c) and executing two chdir("..") calls, instead of changing the directory to /tmp/a, the rm utility would find itself in the root directory, which it would continue to delete.

The race condition was fixed by adding additional checks to verify that the inode of the directories did not change before and after the chdir calls; in other words, that the directory tree namespace did not change during the execution of the tool.

2.3 Setuid Shell Scripts

The secure execution of setuid shell scripts is a well-known problem in earlier Unix systems. To determine whether a file is a script that should be processed by an interpreter, a Unix kernel checks the first two bytes of the file for a special "#!" character sequence. If the character sequence is found, the rest of the line is treated as the interpreter pathname (and its parameters) that should be executed to process the script. Unfortunately, in some kernels, the operation of opening and processing the first line of the file and the subsequent execution of the interpreter that reopens the script is not atomic. This allows attackers to trick the operating system into processing one file while executing the interpreter with root privileges on another file.
Various Unix systems deal with this problem in different ways. Earlier Linux systems simply disregarded the setuid bits when executing shell scripts. Some SysV systems, as well as BSD 4.4 kernels, pass to the interpreter a filename in a private, per-process filesystem namespace representing the already opened descriptor to the script (e.g. /dev/fd/3) in lieu of the original pathname. Passing such a pathname closes the race condition vulnerability window, since it is impossible for an attacker to change the private per-process filesystem namespace bindings.

2.4 Temporary File Accesses

Another type of race condition occurs in the handling of temporary files. An unsafe way to create a temporary file is to use the stat system call to check whether a file exists and, if the file does not exist, invoke open to create it.

    if (stat(fn, &sb) == 0) {
        fd = open(fn, O_CREAT | O_RDWR, 0);
        if (fd < 0) {
            err(1, fn);
        }

        /* use the file */

Figure 3: Race Condition in Temporary File Accesses

3 Design

[…] the program is the only program accessing the filesystem. Instead, an attacker can change the filesystem namespace in the midst of a sequence of filesystem system calls and exploit the race condition in the code. We call the sequence of filesystem system calls that programmers assumed to be free from interference a pseudo-transaction. To address the problem of race conditions, it would be sufficient to provide the illusion of a filesystem namespace that behaves as if these pseudo-transactions were executed in isolation and free from interference. To achieve this, the system needs to be able to detect race condition attacks and stop them.

3.1 Transactions

The goal of detecting and preventing race conditions would be trivially achieved if executions of processes (or process groups) were transactions that enjoyed the ACID properties (Atomicity, Consistency, Isolation and Durability). ACID transactions were designed to permit concurrent database accesses while maintaining correctness. These ideas can be readily translated to filesystem accesses.

The atomicity property requires that a transaction commits all the involved data or none at all.

The consistency property guarantees that a transaction produces only consistent results; otherwise it aborts.