CALL INTEGRITY

OF THE OPERATING SYSTEM

by

DAYLE GLENN MAJORS

A THESIS

Presented to the Faculty of the Graduate School of the

UNIVERSITY OF MISSOURI–ROLLA

in Partial Fulfillment of the Requirements for the Degree

MASTER OF SCIENCE IN COMPUTER SCIENCE

2003

Approved by

Dr. Ann Miller, Advisor Dr. Bruce M. McMillin

Dr. Paul D. Stigall

© 2003

Dayle Glenn Majors

All Rights Reserved

ABSTRACT

"Operating System Call Integrity of the Linux Operating System" examines the security exposures of the Linux operating system arising from functions used in its construction. The ITS4 static vulnerability scanner was used to identify suspect calls. While the package highlights a large number of different calls, almost all vulnerable calls in the operating system were string-related operations, including printk- and printf-related functions. The results showed that the drivers directory of the Linux operating system distribution was the most vulnerable.

ACKNOWLEDGMENT

Several individuals have been very supportive of my thesis. I extend my sincere thanks for their assistance and guidance throughout my master's program. First, I wish to thank Dr. Ann Miller for her guidance and support as my thesis advisor. Her perceptive suggestions were always beneficial. I wish to thank Dr. Bruce McMillin for his discussions of related research and his advice as a committee member. I also appreciate Dr. Paul Stigall for all his advice and input. In addition, Mr. William Siever's participation in our many discussions of Linux kernel internals has provided valuable insights into its structure. The support provided by the United States Department of Education through the Graduate Assistantship in Areas of National Need (GAANN) has been helpful in advancing my education. I appreciate the proofreading help provided by Clara, my wife, and my daughters Kimberly McKinney and Lori Majors. However, any errors that remain are mine. Finally, I give heartfelt thanks to my wife. Her love, support and encouragement throughout my master's program have been invaluable.

TABLE OF CONTENTS

Page

ABSTRACT...... iii

ACKNOWLEDGMENT...... iv

LIST OF ILLUSTRATIONS...... vi

LIST OF TABLES...... vii

SECTION

1. INTRODUCTION...... 1

2. OPERATING SYSTEM ENVIRONMENT...... 8

3. LINUX EVALUATION...... 14

4. RESULTS AND CONCLUSIONS...... 19

APPENDIX

String Operations Statistics...... 24

BIBLIOGRAPHY...... 33

VITA...... 35

LIST OF ILLUSTRATIONS

Figure Page

1.1 Operating System...... 1

2.1 OSI Model...... 12

4.1 Distribution of ITS4 Flags...... 23

LIST OF TABLES

Table Page

A.1 String Operations in Linux Kernel...... 24

1. INTRODUCTION

The general architectural diagram exhibited in the Unix documentation has become a standard for viewing the structure of modern operating systems. In the diagram, Figure 1.1, software is viewed as a series of concentric circles with the hardware at the center. This representation started with the Multics system's rings of protection. (See Operating System Concepts [1], pages 402–404, for a discussion of the Multics rings of protection.) The first ring of programs around the hardware is usually the operating system kernel. The interface it exhibits is thought of as a virtual machine implemented by the kernel. The other rings, composed of complex code and program data, complete the operating system. Application programs operate using the interfaces provided by those rings of code.

[Figure 1.1: a concentric-ring diagram of an operating system, with Hardware at the center. Component labels: Interrupt Handlers, Resource Allocation, Scheduler, Paging, Serialization, Task Management, Files, Compilers, Interpreter, and Applications.]

Figure 1.1. Operating System

The operating system kernel contains several major pieces of software that make up the operating system. First, there will be software to provide allocation and deallocation of the resources provided by the operating system. Second, there will be software to schedule the use of the processor. Third, there will be software to provide a consistent view of the large variety of unique hardware that provides persistent storage of data and external sources and destinations for data that the system processes.

There will be software to provide communication between processes active in the system and its environment. In addition, there may be other software which does not fit into these four groups.

An operating system provides software that supports communication with human users: compilers, linkers, file directories, script interpreters, text editors, browsers, et cetera. This software is implemented using the interfaces exported by the kernel. In systems designed to run on multiple hardware platforms, the software that supports these interfaces is written, when possible, to be independent of the hardware, in programming languages that can be compiled to run on the various hardware platforms.

Looking closely at the kernel, there is more than one ring of software involved. The inner one, the interest of this paper, is composed of several pieces of software, just as the outer ring mentioned previously. There is a collection of device drivers and a collection of interrupt handlers. The synchronization and serialization protocols form another component of this inner ring. Finally, there is software that activates each task or process when it starts the next time slice. In this paper, this software shall be referred to as the dispatcher, since it implements, but does not produce or modify, the schedule.

Much of the software in the inner ring is hardware dependent and must be written specifically for a given hardware platform. For example, inter-process communication should take advantage of hardware support for updating memory so that all updates occur at one time as seen by all other processor elements. This was not a large concern with single central processing unit (CPU) systems, but as additional CPUs were added it became critical. However, even with single-CPU systems using paging, if a page were being changed while it was being backed up, incorrect results could occur.

An operating system for today's computers is a large and complex collection of software.
This means that the probability of software problems remaining in production programs is certainly not zero. It is virtually impossible to remove all programming errors and omissions from even a small program. The number of paths through a program grows as a power of two, where the power is the number of branch or decision points. For example, for four branch points there are 16 paths (2 to the 4th power, 2 x 2 x 2 x 2) through the code. A small program may have a dozen or more decision points, yielding thousands of paths through the program. An operating system typically consists of hundreds of such simple programs, increasing the probability of errors.

In addition, possible branch points are introduced by the possibility of hardware interrupts, events the hardware detects which require the attention of the software. It is possible to instruct the hardware to hold these interrupts, to postpone their processing. Often portions of an operating system execute sequences of instructions with the interrupts held. In this mode the branches to the code which provides the attention that the hardware needs do not occur. This mode of processing is called processing with interrupts disabled. If this processing mode is used for too long or too frequently, the system will appear unresponsive and sluggish.

Nevertheless, it is necessary that this mode be used to save information associated with the hardware events and to retain that information so it can be processed later. Also, some events are so critical that their processing must occur immediately and completely, without the possibility of anything else happening. For example, complex structures must be kept consistent. If a change requires more than a single hardware instruction's execution, then the set of instructions necessary to transition from one consistent state to another needs to occur without interruption. This restriction can be relaxed by the use of mutual exclusion protocols. Mutual exclusion protocols are any procedures used to preclude two or more processes updating a set of data concurrently. These protocols include semaphores, sometimes called locks, critical sections (including nonpreemptable critical sections), and related program structures whose purpose is to prevent data from being corrupted by concurrent updates.
As more and more systems have added network processing, the sources of interrupts have become more numerous. However, these additional interrupts are only a symptom of a much larger problem. Frequently, there are requests received from remote users, perhaps malicious users. These requests must be validated before they are processed, which requires the requestor to be identified as having the proper authority.

If a malicious user can access a system, then that user may be able to attack it. This user may be able to cause the system to execute some code that has been introduced. If that code executes in a privileged state, then it can cause significant damage to the system or its files. In systems programmed in the C language there are several service routines that can cause a buffer overflow that overlays data stored immediately following the buffer. Such overflows frequently cause system problems: hangs, crashes, and data destruction. For example, if the data overlaid contained an address, the address would become incorrect. When that address was used, if it was invalid, the system would crash; if it was valid, then data might be corrupted.

To better understand these problems, research was undertaken to evaluate an operating system to identify potential overflows and similar problems. These problems have been observed in both small and large machine environments. Therefore, analysis of an operating system designed for an individual's use should provide insights into all machine environments. To be able to do an analysis, access to at least a functional specification for the system is needed. If a functional specification is not available, then one can be developed from source code. The conclusion is drawn that access either to a functional specification or to the source code is needed. Of the systems developed for the personal computer, only Linux has source code available. Most systems do not have complete functional specifications; none of the common operating systems have one, for example Microsoft Windows.

Why would one want to analyze an operating system? Almost all programs process on systems under the control of an operating system. No application can be considered truly secure if the operating system is not reliable and if the operating system does not provide a secure environment.
The media regularly reports about computer systems that have been compromised. As more and more data is moved to systems with network access, it becomes more important to prevent compromise. A compromised system can result in unauthorized data distribution or in data unavailability due to system outage or denial of service attacks. Research into providing secure operating systems was performed by the Secure Computing Corporation with support from the University of Utah under federal contract.

They specified the requirements for a secure microkernel by specifying what input is needed by system calls, what must happen in system calls, and what information must not be disclosed by those calls to preserve the integrity and security of the microkernel. (See The Flask Security Architecture: System Support for Diverse Security Policies [2] for a discussion of this microkernel. Also see Assurance in the Fluke Microkernel: Formal Security Model [3].)

A different approach to operating system reliability and integrity was discussed in Self-Repairing Computers [4]. Its approach was for the operating system to identify an incorrect operation and take corrective action to recover. How well this could be accomplished against attacks on the system's integrity is not known.

The security of a computer system is important for the protection of sensitive data and to provide system integrity. In 1985 the United States Department of Defense (DOD) Orange Book, titled Department of Defense Trusted Computer System Evaluation Criteria [5], was published. It provided a list of requirements that a computer system must satisfy to be considered secure. (See http://www.fas.org/irp/nsa/rainbow.htm [6] for a description and instructions for obtaining a copy.) The book identified six basic requirements that a trusted computer system must meet. These requirements were divided into three groups: policy, accountability and assurance. Policy consisted of a specific and well-defined security policy and the existence of the access control labels associated with each object. Accountability required that each subject be identified and authenticated and that audit data be selectively kept and protected relating to every security related event.
Assurance required the system to contain software and/or hardware that could be evaluated to provide assurance that the policy and accountability requirements were met and that the trusted mechanisms enforced the policy and accountability requirements continuously. Since all access to the system or data was provided by system calls, and the storing of audit data was accomplished by calls within the system, examining calls would be a start in the evaluation to provide the required assurance.

The book also identified four divisions of systems, labeled A, B, C, and D, determined by how well systems met the six basic requirements. The D division consisted of tested systems that failed to meet the requirements. Division C met some of the requirements, division B met more, and division A met nearly all. Divisions B and C were further divided into numeric classes of increasing compliance with the requirements. Division A might be divided in the future. An example of how the divisions were subdivided was the difference between classification divisions C1 and C2. Classification division C1 provided discretionary access control and provided identification and authentication. Classification C2 provided the additional features of creation and capture of audit data. In recent years some operating systems have been certified to be fully compliant with the requirements listed in the DOD Orange Book, for example: the Linux version called Fermi RedHat v7.3.1, the Sun operating system version called Solaris 8/SunOS 2.8, and the Silicon Graphics, Inc. IRIX operating system version called IRIX 6.5.

Research similar to this thesis has been reported by the Department of Computer Science of Carnegie Mellon University. (See http://www-2.cs.cmu.edu/afs/cs/project/edrc-ballista/www/02_06_ballista_retrospective.pdf [7] for a description of that research.) The Ballista project used wrappers to collect call integrity statistics. The wrappers are used to capture the output from system calls along with the call input data. The system instrumented with the wrappers is driven using programs implementing test scenarios. The data is collected and analyzed. The tests consisted of system calls using parameters that might be valid or invalid. Invalid parameters included pointers to strings of length zero as well as pointers that did not point to valid addresses. If the system software identifies the incorrect input and returns an error code or message, the system is operating correctly. If the incorrect input is not identified, or the system hangs or dumps, then it is not operating correctly.

In An Approach for Analyzing the Robustness of Windows NT Software [8], Ghosh, Shah and Schmidt discuss a similar project directed specifically at the Windows NT system from Microsoft.
They note that randomly selected parameters will not test much of the functionality of the software and discuss a grammar-based approach to generating parameters for the calls. This grammar-based approach would generate syntactically valid parameters that still contain values that are not correct. The intent was to generate parameters that would cause the tested code to process farther than the initial validity checks. These parameters could still contain invalid addresses.

Operating system calls have been identified as a point for determining the security of the system. Calls provide almost all access to the system and would be the first line of defense for a system. This thesis examined the calls within the Linux operating system to determine if these calls were susceptible to security or integrity exposures. Even though the Fermi RedHat v7.3.1 version had been certified, it was still desirable to examine the kernel calls in a generally available commercial version of Linux.

This thesis will discuss the unique environment of operating system software, describe the approach taken to analyze the Linux kernel code, describe some of the exposures found, suggest methods to make the Linux kernel less vulnerable, and conclude with possible extensions to this research.

2. OPERATING SYSTEM ENVIRONMENT

Operating system developers have realized that a certain amount of protection is needed for the operating system code. This protection should be supported by the hardware. One of the early International Business Machines (IBM) operating systems, IBSYS, used on the IBM 704 and IBM 709 hardware, suffered from this lack of protection. To be sure that each new process or task would not be affected by some previous task, the system was reloaded from external storage after each task or process. This was initiated by the operating system itself. However, it was not unusual for the operator to have to intervene at the end of a process or task and reload the operating system manually. Because of this experience, starting in the early 1960s new mainframe hardware designs included support for protecting the operating system from programs running under it. The GE 645 and the IBM 360 were examples of these mainframe hardware designs.

The early personal computers (PCs) did not have hardware support for protecting operating system code. By the late 1980s protection for operating system code was finding its way into PC hardware. As expectations of control programs grew, the need for protection of the operating system increased. Without protection an operating system could perform its function only if all of the applications followed the rules. However, code that was being developed or modified might violate, intentionally or unintentionally, the rules.

With the operating system protected, there needed to be a way to change the hardware state from application execution to system execution and then back. IBM and Intel have used interrupts to switch the state of execution from application to system; the system code then was provided a special instruction to change back the other way. Other hardware manufacturers might have different schemes to effect these transitions.
Regardless of how the transitions were implemented, they do occur in most modern computing systems. This created a clear line of separation between the user applications and a subset of the operating system. This subset was the kernel mentioned earlier.

This separation of states of execution was a key attribute of operating systems. It was not the only thing that makes the system execution environment different from the application execution environment. If for any reason the kernel failed, it was not just one application that suffered: everything currently in progress on the hardware platform would be adversely affected. If an application tried to access memory to which it had not been allowed access, that application would cease execution with some message indicating the error, while all other applications might continue. In the case of an error in the kernel, however, the application process to which the system was currently responding would fail. After a failure the system would attempt to recover. Such attempts frequently required a complete system restart, which caused all active processes to fail.

Another aspect of this separation was the need to pass information between an application and the system. If an application needed data from the system, a request must be prepared describing the data and where logically to place the data in the application's area. This request was presented to the system via a transition to system state. When the system had the data in place, the application then could be scheduled to resume execution. This implied that the operating system needed to be able to access an area allocated to an application. But an application had no need to access system data directly. Should an application need data the system had stored, a request could be made for the data to be made available in its area.

Notice that some time might elapse between a request for data and the data being available for an application. If the system had only one application active at a time, nothing might proceed except the activity needed to make the data available. Frequently, this activity was performed by some component of the hardware without assistance from the software.
Only the completion of this hardware activity should be reported to the application. This left the CPU idle for a period of time. Most operating systems today process with many concurrently active applications. A database management package would process at least part of the time as an application. Network communication packages would process similarly. Therefore, not all applications would be directly user related; some might be seen by end users as a portion of the operating system. The distinction between application and system processes then comes down to the execution mode of the hardware when the code was processing.

The Linux operating system, which was analyzed for the research reported here, was a composite of sophisticated applications and kernel code. All of this together was called the Linux kernel. From an end-user's perspective, it was the main portion of the Linux operating system. If any part of it failed, most users would be impacted.

The design of Linux borrowed heavily from the design of Unix. Unix's design was based on Multics. (See The UNIX Time-Sharing System [9] for a discussion of Unix's design.) The book Multics: An Examination of Its Structure [10] provided a discussion of the Multics design and its use of the General Electric (GE) 645 computer addressing mechanism to provide protection for programs and data. This addressing mechanism used an integer assigned to a segment to determine the protection level of the data or process in the segment. A small integer made the segment more protected than a large integer. Code operating in a segment with a large integer could not write to a segment with a smaller integer, while code in a segment with a small integer could write to all segments with larger integers. These integers ranged from zero to sixty-three, and all segments with the same integer were called a ring of protection.

As mentioned earlier, the software in some areas must take advantage of the facilities the hardware provides. Linux, while designed in the spirit of Unix to run on multiple platforms, was initially implemented on the Intel platform. This platform was a complex instruction set architecture that evolved through several versions, with each new version maintaining compatibility with the earlier versions. Likewise, the documentation available mirrored that evolution. To understand the Intel platform, two books were very helpful.
The first was ISA System Architecture [11], which described the basic Intel architecture before protection for multiprocessing and for the operating system. The second was Protected Mode Software Architecture [12], which described the changes made to implement protected mode operation. This book was written with the assumption that the material in the first one was understood.

Paging and the associated concept of virtual memory came to large systems in the late 1960s and have been used on PCs since the late 1980s. To provide virtual memory, all real CPU memory was divided into fixed chunks. The kernel allocated a set of these chunks to contain a program and its work areas when it was initially loaded. Other chunks could be added and deleted during execution. As the process executed, the various chunks were not used uniformly; some might go for long periods of time without reference while others were used frequently. If the system detected that a chunk had not been used for a long time, the system could remove that chunk from the memory map for the process and back up its contents to disk if it had been changed. Note that the initial load of the program and data was a change of that memory chunk. Then, if the chunk again was referenced, an interrupt would occur and the system must find an available chunk of real memory and restore that chunk from disk. These chunks were called pages, and this shuffling of chunks of memory between real memory and disk is called paging. The hardware must be capable of supporting this by supplying an indication that a page has been changed or has been referenced. This relatively minor hardware change provided better performance for the system.

Some operating systems mapped most of the operating system into each process; others, like Linux, did not. This meant that obtaining user request parameters in Linux required special instructions to move the data from the requesting process' area to the system space. Likewise, moving the requested data back to the process' area again required special instructions. These special instructions usually could be executed only by code operating in system state. In addition, the instructions to initiate activity to external devices and the instructions to test the state of an active page, referenced or changed, were also restricted to execution in system state.

Now consider a process using a network. The International Standards Organization (ISO) Open Systems Interconnection (OSI) model (see Figure 2.1) has seven levels.
The top three usually processed as application code. The next two levels were usually one or more system applications. The remaining bottom two levels usually were processed either directly in hardware or as system code such as device drivers. This meant there must be a way to communicate between the various levels. Requests must be made to set up memory shared between levels or to move the data between levels. If shared memory is used, then some mechanism must be in place to prevent concurrent updates to the control structures. If shared memory is not used, then the system must facilitate the movement of the data. Either way, the exchange of data is called inter-process communication (IPC).

7 Application

6 Presentation

5 Session

4 Transport

3 Network

2 Data Link

1 Physical

Figure 2.1. OSI Model

When ready to use a network, the process called the operating system and requested that one or more IPC paths be established between it and the transport level. A path must be established for sending or receiving data. If the process would be sending and receiving data, then frequently two or more paths were used. Each macro that implemented an access to a path, formatted data, and passed data via a system call would make the transition to system execution. The operating system might examine the data to determine what action was being requested or might pass the data to the network application for its action. Either way, the system would be involved in this data transfer. Application code normally would communicate with the transport level, the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP) service.

TCP would provide a connection-oriented data stream service. UDP would provide a connectionless datagram service. Since TCP provided a reliable data stream, the supported services included some encryption and authentication facilities. Because UDP was a datagram service, it usually would not provide these facilities. UDP and TCP services must communicate with the network layer to transmit data. This would be an IPC transfer, even if both the transport and network layers were implemented as kernel processes. Likewise, the network layer must communicate with the data link layer. The data link level frequently would be device driver code.

3. LINUX EVALUATION

This research was performed to identify weaknesses, potential security exposures, in the Linux operating system. Two books, Linux Kernel Internals [13] and Understanding the Linux Kernel [14], were helpful in explaining the various parts of the kernel and how they interact. These books were studied with frequent references to source code. As the research proceeded, these books were again examined for clarification of observed event sequences. Also, Unix in a Nutshell [15] was a helpful reminder of the Unix command syntax used by Linux.

Linux was chosen because it was an open source package readily available for download from the Internet. Before deciding on Linux, other operating systems were considered. Having spent several years in industry maintaining IBM systems software, the author considered IBM systems. Likewise, the Microsoft Windows system was considered. However, of all these, only Linux provided open source for analysis. A near-current development version, Linux 2.5.9, was selected from the site and downloaded. A Linux machine in the UMR Trusted Systems Laboratory was chosen and Linux version 2.5.9 was installed as a test kernel. After installation, the machine with the test kernel was tested to ensure the downloaded version would run. Since this was a developmental version, there was a slight possibility that there might be code problems.

The initial activity was to identify the kernel routine that performed the transition from one address space to another. With the routine identified, the code would be modified to track those transitions. Afterwards the code would then be changed to perform fault injection in a process to see if unexpected situations occurred. The schedule module in the Linux kernel was identified as performing all of these transitions. A module is a collection of programs that are placed together for compilation purposes. They should provide a specific functionality.
In providing this functionality, these programs usually process one or more shared data structures. If no other programs process these data structures, then the definitions for these data structures need only be present in the module. A module is the smallest collection of programs that can be compiled.

This schedule module comprised several routines that decide which task should receive use of the CPU next. One of these routines effected the transitions. A call was added to that routine and a new routine was added to the schedule module. That code was tested and corrected until it performed acceptably. The identity of the next process to be activated was difficult to find. The field in the process control block which identified the process did not have a name that described its function. After some time the field was found that contained the process identifier. Messages were formatted and sent to the console to show the transitions.

This significantly degraded the performance. A normal boot process required between one and two minutes to get the system up and fully functional. With this added message traffic, the system was still not fully initialized after twenty minutes. The data collected was examined to determine why there was such a large delay, though some delay was expected. This investigation showed two processes were logging all of the message traffic. These messages were then causing additional process transitions as these processes were dispatched to log the messages. The new routine was modified to only count the logging process transitions and produce a message after every one hundred transitions to a particular log process. The idle process was also entered frequently, and it too was treated the same way. With these changes the system would initialize in about eight minutes.

At this point the difficulty in actually performing fault injection was determined to be too large to complete the research in a reasonable amount of time. How each register was used in each routine in a process would need to be understood.
The stack base and stack pointer would be consistent across all processes, but meaningful fault injection would require knowledge of the other register usages as well as the identity of the routine being activated.

The research was therefore modified to analyze the Linux kernel code using a package called It's the Software Stupid Security Scanner, abbreviated ITS4, which was developed by John Viega and Gary McGraw. (See ITS4: A Static Vulnerability Scanner for C and C++ Code [16] for a description of the package.) This package scanned a collection of software and located occurrences of a large number of function calls that have been known to be security problems. For most of these occurrences, a function in the package analyzed the parameters and determined how significant a problem the call presented. The user of the package could set a severity level above which a message would be produced so the user could manually examine the call.

The package's design required that a list of source modules to be examined be entered on a command line. The instructions also stated that all macros and include files used by the code should be included in the analysis. The Linux distribution consisted of eight major source directories, the include file directory, and a documentation directory. The source directories contained more than two hundred subdirectories, and the number of modules, without the include files, exceeded twenty-five hundred. Entering all of these on a command line was not possible. To make the analysis possible, the directory structure was used to subset the data to be analyzed. To ensure that the necessary include file data was present, the code was processed through the C preprocessor using the Linux include directory. This produced code with most macros expanded and all of the referenced include files placed directly in each module. A few of the modules were not processed successfully because the necessary include files had not been placed in the Linux include directory by the configuration process; these modules were not actually in the generated configuration. These few modules were analyzed along with the preprocessed modules as each directory was scanned.

The ITS4 package did identify calls to a large collection of functions that frequently have security and integrity concerns. Among the file related functions examined were functions such as chdir, chgrp, mount, mkdir and umount. Several process control related functions were examined, such as execve, execv, execl, getenv and getlogin.
All of the C library random number functions were identified, since these functions produced pseudo-random number sequences with a short period. In other words, these sequences repeated their cycle after no more than 65,536 uses. For security related use, sequences with periods in the billions or higher should be used. String processing functions, including the input and output functions scanf and printf and their variants, also were examined.

The ITS4 package, having been designed for application code, did not identify calls to the printk function. This was quickly realized. The printk function served the same purpose as the printf function: it sent messages to the system console and system logs, and its calling parameters had the same format as the parameters to the printf function. Having realized the printk function was not being analyzed, the process was restarted after adding an entry to the database of function calls to be analyzed. This entry was made almost identical to the printf entry, using the same parameter analysis function. The rationale was that since the parameters had essentially the same format for the two functions, the parameter analysis function for printf should perform acceptably for printk.

The parameter analysis function proved to be inadequate. While it reported only those printf calls where the output length was not readily apparent or essentially fixed, it reported all printk calls. This made the manual analysis of routines much more burdensome. The kernel code frequently took advantage of the C compiler's concatenation of multiple successive literal strings. The printf parameter analysis routine did not identify and handle such constructs. Occurrences in printf could be left for manual evaluation, as they were infrequent, but the frequency of such usage in printk calls would require that it be handled. That is, multiple successive fixed literal strings should be handled as a single literal string.
An effect of preprocessing the code was that several functions that present security exposures were declared in the include files. If the functions were never invoked, then no exposure existed; still, the declaration line was identified by ITS4 for manual analysis. Additionally, in kernel code there were a number of functions called open that performed activities similar to an application file open. For example, there was a function called open that established a connection to a sound card; this open function required a different set of parameters than a file open. Also, the kernel function that actually performed a file open was called open in the various file systems. As a file open was processed, it might be necessary to open a file system, and the function that opened a file system also had the name open but required a different collection of parameters. The exposure associated with a file open was not the same in these other kernel open functions.

As a result of the previously mentioned differences between application and system code and their execution environments, almost all of the exposures identified in kernel code were string related. String functions in the C language were easy to use, but were also a major source of security exposures. The string processing associated with a string selector in a sprintf format could create an output string of essentially unpredictable length, which in turn could cause a memory overlay. Malicious users frequently could use such overlays to cause problems in an application or in a system. While there were alternative forms, such as snprintf, that could avoid this exposure, the safer alternatives rarely were used. Similarly, the strcpy and strcat functions were sources of security exposures.

4. RESULTS AND CONCLUSIONS

When all source directories had been analyzed by ITS4, there were 2,893 modules in 239 directories. Each of the eight source directories was processed separately, one subdirectory at a time, and some of the subdirectories had to be divided because of their size. The last directory analyzed was the drivers source directory. In the other seven source directories, the lines flagged by ITS4 were examined in detail to evaluate how much of a security exposure was presented.

The printk function call frequently took advantage of the C compiler's ability to concatenate consecutive strings into one string, which then became the format string for the printk function. The ITS4 parameter analysis function used to analyze the printk calls considered any such usage worth flagging as a potential security risk. Also, in the few occurrences of printk calls without such strings, almost all were flagged for some other reason. As a result, nearly all printk calls were flagged as a security risk.

As the flagged lines were analyzed, the specific string usage was divided into four categories based on exposure. Placed in category 1 were all string related function calls that either involved fixed literal strings or were print function calls whose format strings selected numeric data; these operations had results whose maximum length could be predicted accurately. Category 2 received string related function calls where the source string was in a control block; it could reasonably be expected that such a string would have a fixed maximum length, unless the string terminator had been destroyed. Category 3 contained the function calls where the source string was returned by an embedded function call; again, the software developer should know the maximum length of such a string, although if the call were changed, the returned string might exceed the expected maximum length. Finally, all other string related calls were placed in category 4. These included calls where a function parameter was passed to the string operation without the code checking the length of the string.

Of the 1,157 string related function calls (categories 2, 3 and 4) found in the source directories other than the drivers directory, 266 function calls fell in category 4. Thus 23 percent, nearly a quarter of the string related function calls examined in detail, presented a security exposure. While the exposure existed, malicious use of the exposure was only possible by code executing in the system address space or by code that could move data to the system address space.

A representative sample of the network subdirectory of the drivers directory was examined in detail. The analysis revealed that a high percentage, 95 percent, of the flagged calls involved string operations. Nearly all of the calls used strings from a control block; the name field from the device control block was the most frequently flagged string.

The distribution of modules in the eight directories was skewed. (See Figure 4.1.) Over 58 percent of the modules were in the drivers directory. This directory also contained over 62 percent of the modules that contained flagged statements and nearly 78 percent of the flagged statements. In a running Linux system, not all of the modules found in the drivers directory would be built; modules from the directory would be included in the system only if the related hardware were present. The downloaded file contained all of the source code, since the distribution site does not know on what hardware the code will be used. The first step in a kernel generation asked what hardware was present, and using that information the needed modules were selected for inclusion in the system. For example, a Linux system built for an Intel PC would not contain modules for the IBM System 390 or for a Macintosh machine.

The three moderate size directories were the sound, net, and fs directories. Of these, sound and net contained modules that would not be included unless specific hardware were present in the target machine. However, the net directory also included the modules that provide the session, transport, and network levels of the OSI model, including support for both UDP and TCP at the transport level. The other moderate size directory, fs, contained the support for all of the file systems supported by Linux. Again, there were some that might not be present in a given Linux implementation.
Not all file systems needed to be supported by a specific implementation; only the wanted file systems would be included, which would result in a smaller kernel load module.

The final four directories were mm, ipc, init, and kernel. These small directories contained code that would usually be present in an implementation. The mm directory contained the memory management modules, which support paging and memory allocation and de-allocation. The ipc directory contained the inter-process communication support. The init directory contained the code that supports initialization of the Linux system, including the boot process. The kernel directory contained the rest of the Linux system; for example, the schedule module was in the kernel directory. This code connected all of the other parts to make the running system. Therefore, the central portion of the Linux kernel contained the smallest portion of the code and also the fewest flags per module. This certainly contributed to the stability associated with the Linux kernel. (See the detail data in Appendix A, Table A.1.)

This research could be extended by completing a detailed analysis of the drivers directory. When that extension is undertaken, a new parameter analysis module for printk functions should be written for ITS4. As noted earlier, the parameter analysis function for printf did not perform an adequate analysis of printk function calls. The new function would need to keep track of all string constants declared in a module so the concatenation of strings could be properly processed. In addition, in Linux there were many conditional expressions that selected one of two strings for inclusion in a print function, and these conditionals might be nested. To reduce the number of lines to be examined manually, conditionals would need to be processed by the parameter analysis function associated with printk. Once such a function exists, it might be of interest to the ITS4 user community for use with printf. This extension would require reprocessing the drivers directory; since it is one of the more frequently modified portions of the Linux distribution, a later version should also be obtained for such analysis.

The analysis described here could be a step in certifying an operating system under the United States Department of Defense Orange Book [5].
While the RedHat v7.3.1 version of Linux has been certified as compliant with the requirements in the Orange Book, those requirements did not address internal system calls of the kind this research examined. ITS4 identified security exposures in Linux not addressed by the Orange Book standard.

One exposure existed that was not clearly identified by ITS4 because it involved the interaction of multiple calls. Data was moved to the kernel using the memcpy function, which required the length of the data to be copied. The code used the length of the data in the application area to allocate space in the kernel area, and the length of the allocated space was used as the length for memcpy to move. ITS4 considered this memcpy move to be safe, since the length was that of the area receiving the data. The copied data then was used as a parameter to a function call, and the called program moved the data to its own buffer. If the length was more than the length of the buffer in the called program, the area beyond the buffer would be overlaid. This last move would be examined by ITS4, but it might not be obvious that a problem existed. This especially would be true if the parameter was declared in a data structure with the string at the end of the structure declaration.

A similar exposure existed in Multics. (See Multics: An Examination of Its Structure [10].) If a program operating in one of the more privileged rings copied a string from its user and then passed it as a parameter to another function at its level, the function could overlay data in its data segment, and the data overlay could cause program malfunctions. The intent of the addressing structure used by Multics on the GE 645 was to facilitate data sharing without copying data. However, if data was modified during its use, then it had to be copied to avoid interference between the various uses.

In conclusion, the Linux system has the possibility of security exposures through its use of variable length string parameters. Some directories, for example the drivers directory, were more exposed than others. This leads to the possibility of a compromised system. More security awareness training for the Linux developers could help reduce these exposures in future versions. In addition, if an analysis similar to this were performed on new versions prior to release, problems could be addressed before distribution.


Figure 4.1. Distribution of ITS4 Flags

APPENDIX

String Operations Statistics

Table A.1. String Operations in Linux Kernel

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/drivers/acorn/block 2 2 63
/drivers/acorn/char 11 3 12
/drivers/acorn/net 3 3 73
/drivers/acorn/scsi 11 10 270
/drivers/acpi 16 14 169
/drivers/acpi/debugger 10 1 2
/drivers/acpi/dispatcher 10 0 0
/drivers/acpi/events 8 0 0
/drivers/acpi/executer 23 0 0
/drivers/acpi/hardware 5 0 0
/drivers/acpi/kdb 1 0 0
/drivers/acpi/namespace 13 0 0
/drivers/acpi/parser 8 0 0
/drivers/acpi/resources 11 0 0
/drivers/acpi/tables 6 0 0
/drivers/acpi/utilities 11 0 0
/drivers/atm 17 16 798
/drivers/base 5 3 6
/drivers/block 22 17 676
/drivers/block/paride 22 22 137
/drivers/bluetooth 3 3 30
/drivers/cdrom 15 15 430
/drivers/char 100 94 1465
/drivers/char/agp 2 2 34
/drivers/char/rdm 17 1 9
/drivers/char/ftape/compressor 2 1 53
/drivers/char/ftape/lowlevel 18 16 486
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/drivers/char/ftape/zftape 9 8 186
/drivers/char/ip2 6 4 147
/drivers/char/mwave 4 4 41
/drivers/char/pcmcia 1 1 4
/drivers/char/rio 11 10 804
/drivers/dio 1 1 11
/drivers/fc4 4 3 48
/drivers/hotplug 13 5 14
/drivers/i2c 11 11 255
/drivers/ide 56 49 875
/drivers/ieee1394 15 13 268
/drivers/input 5 4 33
/drivers/input/gameport 6 6 22
/drivers/input/joystick 19 19 114
/drivers/input/serio 2 1 5
/drivers/isdn/act2000 3 3 135
/drivers/isdn/avmb1 14 13 372
/drivers/isdn/divert 3 3 24
/drivers/isdn/eicon 19 9 70
/drivers/isdn/hisax 66 61 733
/drivers/isdn/hysdn 8 8 70
/drivers/isdn/i4l 10 9 387
/drivers/isdn/icn 1 1 39
/drivers/isdn/isdnloop 1 1 36
/drivers/isdn/pcbit 6 6 89
/drivers/isdn/sc 10 7 23
/drivers/isdn/tpam 7 5 111
/drivers/macintosh 15 15 242
/drivers/md 10 9 343
/drivers/media/radio 16 16 55
/drivers/media/video 39 38 664
/drivers/message/fusion 7 4 146
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/drivers/message/i2o 7 7 473
/drivers/mtd 11 9 117
/drivers/mtd/chips 12 9 169
/drivers/mtd/devices 10 8 145
/drivers/mtd/maps 23 23 79
/drivers/mtd/nand 3 2 4
/drivers/net 141 130 4645
/drivers/net/appletalk 3 3 75
/drivers/net/arcnet 10 5 21
/drivers/net/e100 6 4 50
/drivers/net/e1000 5 3 56
/drivers/net/fc 2 1 174
/drivers/net/hamradio 11 11 135
/drivers/net/hamradio/soundmodem 11 4 155
/drivers/net/irda 17 13 170
/drivers/net/pcmcia 12 12 297
/drivers/net/sk89lin 15 3 82
/drivers/net/skfp 20 7 75
/drivers/net/tokenring 10 10 561
/drivers/net/tulip 14 14 365
/drivers/net/wan 40 37 191
/drivers/net/wan/lmc 4 3 63
/drivers/net/wireless 10 10 279
/drivers/nubus 3 2 65
/drivers/parport 16 15 103
/drivers/pci 11 5 50
/drivers/pcmcia 34 33 197
/drivers/pnp 5 4 56
/drivers/s390 6 3 46
/drivers/s390/block 9 6 166
/drivers/s390/char 21 10 41
/drivers/s390/misc 1 1 58
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/drivers/s390/net 5 5 184
/drivers/sbus 2 2 12
/drivers/sbus/audio 5 5 61
/drivers/sbus/char 24 22 429
/drivers/scsi 124 107 4850
/drivers/scsi/aic7xxx 8 5 217
/drivers/scsi/aic7xxx/aicasm 2 2 47
/drivers/scsi/aic7xxx_old 2 1 11
/drivers/scsi/pcmcia 6 6 48
/drivers/sgi/char 9 7 56
/drivers/tc 4 3 45
/drivers/telephony 3 3 172
/drivers/usb 1 1 14
/drivers/usb/class 4 3 108
/drivers/usb/core 8 5 87
/drivers/usb/host 14 2 33
/drivers/usb/image 4 2 44
/drivers/usb/input 6 5 19
/drivers/usb/media 14 8 162
/drivers/usb/misc 5 3 39
/drivers/usb/net 6 3 69
/drivers/usb/serial 18 18 713
/drivers/usb/storage 13 3 21
/drivers/video 98 62 566
/drivers/video/aty 5 2 23
/drivers/video/matrox 10 8 66
/drivers/video/riva 3 1 23
/drivers/video/sis 3 1 13
/drivers/zorro 4 3 13
/drivers Subtotal 1683 1275 0 28387 0 0 0 0
/fs 38 25 13 47 16 6 2 8
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/fs/adfs 7 4 1 12 3 3 0 0
/fs/affs 8 3 3 25 10 6 0 4
/fs/autofs 6 3 2 11 4 4 0 0
/fs/autofs4 6 3 1 7 1 0 0 1
/fs/bfs 3 3 2 13 9 9 0 0
/fs/ 11 10 4 63 4 3 0 1
/fs/cramfs 2 2 0 8 0 0 0 0
/fs/devfs 2 2 2 49 25 1 0 24
/fs/devp 2 1 0 4 0 0 0 0
/fs/driverfs 1 1 0 2 0 0 0 0
/fs/efs 6 4 0 25 0 0 0 0
/fs/exportfs 1 1 0 2 0 0 0 0
/fs/ 11 2 2 46 11 5 0 6
/fs/ 11 4 1 100 27 12 0 15
/fs/fat 8 4 3 48 10 7 0 3
/fs/freevxfs 8 5 0 32 0 0 0 0
/fs/hfs 27 18 3 96 3 1 2 0
/fs/hpfs 13 13 3 106 5 0 0 5
/fs/internezzo 24 14 11 170 30 7 8 15
/fs/isofs 7 3 1 47 1 1 0 0
/fs/jbd 6 3 1 30 3 0 2 1
/fs/jffs 4 3 2 308 16 14 0 2
/fs/jffs2 24 19 7 591 22 20 0 2
/fs/ 19 2 1 39 2 0 0 2
/fs/lockd 13 11 6 164 21 21 0 0
/fs/minix 8 5 2 32 5 5 0 0
/fs/msdos 2 1 1 3 2 2 0 0
/fs/ncpfs 8 3 0 7 0 0 0 0
/fs/nfs 15 10 7 160 48 40 1 7
/fs/nfsd 13 7 4 93 43 21 19 3
/fs/nls 47 1 1 3 3 0 0 3
/fs/ 9 5 2 26 6 1 0 5
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/fs/openpromfs 1 1 1 19 3 3 0 0
/fs/partitions 13 13 6 129 15 2 11 2
/fs/proc 10 7 5 40 10 7 0 3
/fs/qnx4 7 4 1 14 4 3 1 0
/fs/ramfs 1 0 0 0 0 0 0 0
/fs/ 20 10 7 153 36 12 10 14
/fs/romfs 1 1 1 5 2 2 0 0
/fs/smbfs 8 5 3 48 5 1 1 3
/fs/sysv 9 5 3 29 10 10 0 0
/fs/udf 17 10 4 108 13 7 0 6
/fs/ufs 11 3 1 31 6 0 0 6
/fs/umsdos 7 7 6 95 60 58 1 1
/fs/vfat 2 1 1 2 1 1 0 0
fs Subtotals 477 262 125 3042 495 295 58 142
/init 3 2 2 63 11 3 0 8
init Subtotals 3 2 2 63 11 3 0 8
/ipc 4 3 0 8 0 0 0 0
ipc Subtotals 4 3 0 8 0 0 0 0
/kernel 28 16 10 85 25 8 0 17
kernel Subtotals 28 16 10 85 25 8 0 17
/mm 24 17 5 81 23 16 0 7
mm Subtotals 24 17 5 81 23 16 0 7
/net 3 2 1 11 1 1 0 0
/net/802 12 7 3 13 3 3 0 0
/net/8021q 3 3 3 38 30 27 0 3
/net/appletalk 3 0 0 0 0 0 0 0
/net/atm 16 11 6 95 43 37 1 5
/net/ax25 18 12 3 47 12 12 0 0
/net/bluetooth 6 6 4 29 11 8 0 3
/net/bridge 12 5 3 17 10 10 0 0
/net/core 17 10 4 56 23 21 0 2
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/net/decnet 11 10 5 46 7 6 0 1
/net/econet 1 1 0 3 0 0 0 0
/net/ethernet 3 1 1 1 1 1 0 0
/net/ipv4 36 34 11 188 23 23 0 0
/net/ipv4/netfilter 52 44 16 277 47 23 2 22
/net/ipv6 22 18 4 92 7 7 0 0
/net/ipv6/netfilter 11 7 4 71 9 2 0 7
/net/ipx 3 2 1 18 1 0 1 0
/net/irda 22 14 7 97 17 15 0 2
/net/irda/ircomm 8 7 2 21 3 3 0 0
/net/irda/irlan 8 5 1 31 5 5 0 0
/net/irda/irnet 2 2 2 215 25 21 0 4
/net/khttpd 13 13 0 66 0 0 0 0
/net/lapb 5 2 0 2 0 0 0 0
/net/netlink 2 2 0 3 0 0 0 0
/net/netrom 9 5 1 27 3 1 0 2
/net/packet 1 1 0 1 0 0 0 0
/net/rose 10 7 1 34 2 0 2 0
/net/sched 23 10 3 48 6 4 0 2
/net/sunrpc 15 9 5 42 27 26 1 0
/net/unix 2 1 0 4 0 0 0 0
/net/wanrouter 3 2 2 49 23 22 0 1
/net/x25 10 8 1 43 2 2 0 0
net Subtotals 362 261 94 1685 341 280 7 54
/sound 3 3 3 17 7 1 0 6
/sound/core 21 12 3 45 7 4 0 3
/sound/core/ioctl32 6 0 0 0 0 0 0 0
/sound/core/oss 9 2 2 18 12 11 0 1
/sound/core/seq 18 10 5 25 12 10 0 2
/sound/core/seq/instr 4 0 0 0 0 0 0 0
/sound/core/seq/oss 11 7 1 58 1 1 0 0
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
/sound/drivers 4 3 1 12 3 3 0 0
/sound/drivers/mpu401 2 2 1 7 1 1 0 0
/sound/drivers/opl3 6 2 1 5 2 2 0 0
/sound/i2c 3 1 0 4 0 0 0 0
/sound/isa 7 7 5 54 6 6 0 0
/sound/isa/ad1816a 2 2 1 12 3 0 3 0
/sound/isa/ad1848 2 2 2 12 4 3 1 0
/sound/isa/cs423x 5 5 5 41 12 9 3 0
/sound/isa/es1688 2 2 2 12 4 2 2 0
/sound/isa/gus 23 15 2 49 2 2 0 0
/sound/isa/opti9xx 3 3 3 70 11 9 2 0
/sound/isa/sb 14 9 5 35 8 7 0 1
/sound/isa/wavefront 4 4 2 124 12 11 0 1
/sound/oss 80 70 29 1822 57 40 1 16
/sound/oss/cs4281 3 2 0 228 0 0 0 0
/sound/oss/emu10k1 14 2 2 25 7 7 0 0
/sound/oss/dmasound 5 3 1 18 1 1 0 0
/sound/pci 15 15 8 163 19 19 0 0
/sound/pci/ac97 2 1 1 9 5 4 1 0
/sound/pci/ali5451 1 1 0 23 0 0 0 0
/sound/pci/cs46xx 2 2 1 17 1 1 0 0
/sound/pci/emu10k1 14 6 2 15 8 1 0 7
/sound/pci/korg1212 1 1 1 59 39 39 0 0
/sound/pci/nm256 2 1 0 16 0 0 0 0
/sound/pci/rme9652 2 2 1 20 10 10 0 0
/sound/pci/trident 4 4 0 17 0 0 0 0
/sound/pci/ymfpci 2 2 0 10 0 0 0 0
/sound/ppc 7 4 2 11 2 2 0 0
/sound/synth 1 0 0 0 0 0 0 0
/sound/synth/emux 8 4 2 24 6 5 0 1
sound Subtotals 312 211 94 3077 262 211 13 38
continued on next page

Table A.1. continued

Directory | Source Files | Flagged Files | String Files | Flags | String Ops | Control Block | Return by Call | Others
Totals 2893 2047 330 36428 1157 813 78 266

BIBLIOGRAPHY

[1] J. L. Peterson and A. Silberschatz, Operating System Concepts. Computer Science, Reading, Massachusetts: Addison-Wesley Publishing Company, Inc., 1983.

[2] R. Spencer, S. Smalley, P. Loscocco, M. Hibler, D. Andersen, and J. Lepreau, “The Flask security architecture: System support for diverse security policies,” in Proceedings of the 8th USENIX Security Symposium, (Washington, D.C., USA), pp. 123–139, USENIX, Aug. 1999.

[3] Secure Computing Corporation, “Assurance in the Fluke Microkernel Formal Security Policy Model,” Feb. 1999. On web at http://citeseer.nj.nec.com 23 July 2003.

[4] A. Fox and D. Patterson, “Self-repairing computers,” Scientific American, vol. 288, pp. 54–61, June 2003.

[5] United States National Security Agency, Washington, District of Columbia, De- partment of Defense Trusted Computer System Evaluation Criteria, Dec. 1985.

[6] Federation of American Scientists, “FAS Intelligence Resource Program: The NSA Rainbow Series.” On web at http://www.fas.org/irp/nsa/rainbow.htm 23 July 2003.

[7] P. Koopman, “Software Robustness Testing: A Ballista Retrospective.” On web at http://www-2.cs.cmu.edu/afs/cs/project/edrc-ballista/www/02_06_ballista_retrospective.pdf 23 July 2003.

[8] A. K. Ghosh, V. Shah, and M. Schmid, “An approach for analyzing the robustness of windows NT software,” in Proceedings of 21st NIST-NCSC National Information Systems Security Conference, pp. 383–391, 1998.

[9] D. M. Ritchie and K. Thompson, “The UNIX Time-Sharing System,” Communications of the ACM, vol. 17, pp. 365–375, July 1974.

[10] E. I. Organick, Multics System: An Examination of Its Structure. United States of America: Alpine Press, Inc., 1972.

[11] T. Shankley and D. Anderson, ISA System Architecture. Reading, Massachusetts: Addison Wesley Longman, Inc., third ed., 1995.

[12] T. Shankley, Protected Mode Software Architecture. Reading, Massachusetts: Addison Wesley Longman, Inc., 1996.

[13] M. Beck, H. Böhme, M. Dziadzka, U. Kunitz, R. Magnus, and D. Verworner, Linux Kernel Internals. Harlow, England: Addison Wesley Longman Limited, second ed., 1998.

[14] D. P. Bovet and M. Cesati, Understanding the Linux Kernel. United States of America: O’Reilly & Associates, Inc., first ed., 2001.

[15] D. Gilly and the Staff of O’Reilly & Associates, UNIX in a Nutshell System V Edition. United States of America: O’Reilly & Associates, Inc., second ed., 1998.

[16] J. Viega, J. T. Bloch, T. Kohno, and G. McGraw, “ITS4: A static vulnerability scanner for C and C++ code,” in 16th Annual Computer Security Applications Conference, ACM, Dec. 2000.

[17] W. R. Stevens, UNIX Network Programming. Englewood Cliffs, New Jersey: Prentice-Hall, Inc, 1990.

[18] J. H. Wensley, L. Lamport, J. Goldberg, M. W. Green, K. N. Levitt, P. M. Melliar-Smith, R. E. Shostak, and C. B. Weinstock, “SIFT: Design and analysis of a fault-tolerant computer for aircraft control,” Proceedings of the IEEE, vol. 66, pp. 1240–1255, Oct. 1978.

VITA

Dayle Glenn Majors, born 16 July 1941, received his Bachelor of Arts in Mathematics from Texas A&M University in 1965. While working at Ling-Temco-Vought (LTV), he received a Master of Science in Mathematics from Texas Christian University in 1974. At LTV he worked as an applications programmer and as a systems programmer on the IBM MVT and SVS operating systems. He was the programmer responsible for installation and configuration of an IBM VM operating system environment. In 1976 he moved to McDonnell-Douglas’s McAuto systems programming department, maintaining IBM’s MVS and MVS XA operating systems. He was the primary systems programmer on the installation and operation of an IBM 3850 Mass Storage System. He also was responsible for performance and capacity planning of their large direct access storage facilities. From 1993 until 2000 he worked at Union Pacific maintaining their interactive database system internals and performing database reconfigurations. Mr. Majors has been an active member of the Association for Computing Machinery (ACM) since 1965 and the Mathematical Association of America since 1970.