REVERSE ENGINEERING A MICROCOMPUTER-BASED CONTROL UNIT

John R. Bork

A Thesis Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of

MASTER OF INDUSTRIAL TECHNOLOGY

August 2005

Committee:

David Border, Advisor

Sri Kolla

Sub Ramakrishnan

© 2005

John R. Bork

All Rights Reserved

ABSTRACT

David Border, Advisor

This study demonstrated that complex process control solutions can be reverse engineered using the Linux 2.6 kernel without employing any external or real-time enhancements like RTLinux and RTAI. Reverse engineering creates knowledge through research, observation, and disassembly of a system part in order to discern elements of its design, manufacture, and use, often with the goal of producing a substitute. For this study, Intel x86-compatible computer hardware running custom programs on a Fedora Core 2 GNU/Linux operating system replaced the failure-prone microcomputer-based control unit used in over 300,000 Bally electronic pinball machines manufactured from 1977 to 1985. A pinball machine embodies a degree of complexity on par with the problems encountered in a capstone undergraduate course in electronics and is fair game for reverse engineering because its patents have expired, although copyrighted program code is still protected.

A black box technique for data development analyzed the microprocessor unit in terms of a closed-loop process control model. Knowledge of real-time computing theory was leveraged to supplant legacy circuits and firmware with modern, general-purpose computer architecture.

The research design was based on iterative, quantitatively validated prototypes. The first iteration was a user program in which control of the solenoids was accomplished but the switch matrix failed to correctly detect switch closures. The second iteration introduced a kernel module to handle low-level control, while a supervisory user program managed game play, logging, and fault detection. In the third iteration an emulation of the digital displays was added to the user interface and the system was subjected to public testing.

Three variables were manipulated: the module process period, the system load, and the use of POSIX real-time scheduling for the supervisory process. Overall game play performance was acceptable when the workqueue process was repeated every two or three milliseconds; at four milliseconds considerable lamp flicker was evident. An economic realizability measure, 25% unit cost savings, was met by minimizing expense with free, open source software and recycled computer hardware. Project cost was reduced by casting the effort in an educational context and by distributing software development among the SourceForge community, boosting overall return on investment.

And would not a person with good reason call me a wise man, who from the time when I began to understand spoken words have never left off seeking after and learning every good thing that I could? ... Or for this, that while other men get their delicacies in the markets and pay a high price for them, I devise more pleasurable ones from the resources of my soul, with no expenditure of money?

Xenophon, Socrates' Defense

This work is dedicated to my grandfather, who put tools into my hands and encouraged me to take things apart.

ACKNOWLEDGEMENTS

My deepest gratitude goes to Sue for her patience and support through the years while I toiled with this project. I thank Dave Border and the rest of my committee for their guidance.

TABLE OF CONTENTS

Page

CHAPTER I. INTRODUCTION...... 1

Context of the Problem...... 2

Statement of the Problem...... 4

Objectives of the Study...... 4

Significance of the Study...... 5

Assumptions and Limitations...... 9

Definitions of Terms...... 10

CHAPTER II. REVIEW OF THE LITERATURE...... 14

Historical Context...... 16

Relevant Theory...... 22

Ingle's Four Stage Process...... 22

Prescreening Process...... 25

Stage 1: Evaluation and Verification...... 27

Stage 2: Technical Data Generation...... 29

Stage 3: Design Verification...... 31

Stage 4: Design Implementation...... 32

Legal Issues...... 32

Microcomputer Technology...... 35

Process Control...... 39

Real-time Computing...... 41

GNU/Linux...... 48

Current Literature...... 52

Linux 2.6 Kernel...... 53

CHAPTER III. METHODOLOGY...... 56

Restatement of the Problem...... 58

Research Design...... 58

Methods...... 61

Evaluation and Verification...... 62

A Socratic Method...... 64

Technical Data Generation...... 70

Controlling Continuous Solenoids...... 73

Controlling Momentary Solenoids...... 74

Detecting Switches...... 79

Controlling Feature Lamps...... 85

Controlling Digital Displays...... 89

Controlling Game Operation...... 91

Design Verification...... 93

Prototype Determination...... 94

Prototype Testing...... 97

Design Implementation...... 99

Apparatus...... 100

Electronic Pinball Machine...... 101

Data Recorder...... 102

Sourceforge.net...... 103

Statistical Techniques...... 104

CHAPTER IV. RESULTS...... 106

Process Models...... 107

Continuous Solenoid Control...... 107

Momentary Solenoid Control...... 108

Switch Matrix Control...... 113

Feature Lamp Control...... 117

Game Operation Control...... 121

Testbed Iterations...... 129

First Iteration: The User Space Program...... 134

Second Iteration: The Kernel Module...... 136

Third Iteration: The Public Interface...... 139

Project Return on Investment...... 142

Overall Performance...... 143

CHAPTER V. CONCLUSIONS...... 153

Discussion...... 155

REFERENCES...... 159

APPENDIX A: PARTS LISTS...... 166

8-Bit ISA I/O Board...... 166

Interface Circuit Between ISA Board and Pinball Machine...... 167

APPENDIX B: SOFTWARE PROGRAM CODE...... 168

APPENDIX C: OUTPUT OF ANALYTIC PROGRAMS...... 169

LIST OF FIGURES

Figure Page

1. Corrosion near the battery on an AS2518-17 MPU...... 3

2. Bally electronic pinball machine...... 21

3. Go/No-go decision matrix...... 23

4. Flow of technical data through the Four Stage process...... 30

5. Motorola MC6800 CPU block diagram...... 36

6. Closed-loop process control diagram...... 40

7. Real-time periodic process...... 43

8. Socratic reverse engineering method...... 65

9. Bally pinball control system block diagram...... 67

10. AS2518 MPU electrical connections...... 68

11. AS2518 electrical schematic...... 71

12. Continuous solenoids electrical connections...... 73

13. Solenoid control detail of MPU schematic...... 75

14. Momentary solenoids electrical connections...... 75

15. Pinball machine switch matrix...... 80

16. Switch matrix control detail of MPU schematic...... 83

17. Switch matrix electrical connections...... 83

18. Feature lamps control detail of MPU schematic...... 86

19. Feature lamps electrical connections...... 88

20. Digital displays control detail of MPU schematic...... 90

21. Digital displays electrical connections...... 90

22. Reverse engineered process model...... 91

23. Momentary solenoids control timing diagram...... 108

24. Manually firing a solenoid from the user interface...... 111

25. Switch matrix control timing diagram...... 113

26. Diagnostic display showing switch closures...... 115

27. Oscilloscope display of untriggered feature lamp SCR anode...... 118

28. Oscilloscope display of triggered feature lamp SCR anode...... 118

29. Feature lamp control timing diagram...... 119

30. Kernel workqueue process program flowchart...... 124

31. Supervisory process program flowchart...... 126

32. Pinball machine instruction card...... 127

33. 8-bit ISA I/O board schematic...... 131

34. Interface circuit between ISA board and pinball machine...... 132

35. Backboard score display by supervisory program...... 139

36. Prototype system being tested at 2005 Pinball At The Zoo...... 141

37. Test bed computer system block diagram...... 144

38. Ideal and actual SCR triggering...... 148

39. Comparison of cumulative percentage graphs...... 149

LIST OF TABLES

Table Page

1. Assembly code to create 1000 Hz waveform...... 44

2. Linux system calls used to increase process determinism...... 52

3. Reverse engineering economics based on traditional development model...... 64

4. Division of MPU electrical connections into major subsystems...... 72

5. Mapping of solenoids to decoder outputs...... 77

6. Mapping of switches to switch matrix positions...... 81

7. Mapping of feature lamps to decoders...... 87

8. Timing requirements of the original MPU...... 92

9. Real­time requirements for prototype testing...... 98

10. Summary of solenoid pulse duration error...... 111

11. Summary of switch sampling and detection latencies...... 116

12. Commands available to the kernel module control process...... 123

13. Key functions in the kernel module...... 125

14. Key functions in supervisory control program...... 128

15. Summary of supervisory process execution periods...... 129

16. Reverse engineering economics based on FOSS development model...... 142

17. Number of failed real-time requirements per game...... 146

18. Percentage of missed repeating switch detections...... 150

19. CPU duty cycle of kernel module process...... 151

20. CPU duty cycle of supervisory control process...... 152

PREFACE

My mission is to promote the philosophy of computing by combining the speculative, through scholarly study of ancient texts, with the practical, through hobbyist study of electronic technology. I was inspired to learn how computers work when I was challenged to defend an assertion that Plato's Symposium addressed the ethics of virtual reality. In reading this text I found hints that ancient thinkers pondered the implications of human beings' comportment towards technology. This notion was strengthened by reading Plato's Phaedrus, which contains fascinating critiques of rhetoric and writing. It took little imagination to conclude that writing was as much a means of computing to the ancients as electronic machinery is to us today. Being a philosopher, I understood the importance of knowing my subject matter, so I became a technologist. I lived according to the ethical imperative that, if I was going to philosophize about computing, I had better understand how computers work from the ground up. Along the way I struggled with the problem of finding interesting projects in which to integrate a comprehensive understanding where specialization seemed necessary. Ten years of effort are expressed through this thesis.

The selection as the frontispiece to this text of an utterance of the ancient Greek philosopher Socrates, written down by Xenophon over two thousand years ago, illustrates the idea that human beings take pride in their ability to learn things, and that from this knowledge they are able to build, rather than buy, particular items of interest. Wisdom decides which projects are worth pursuing "from the resources of the soul," and which are best satisfied with commercial, off-the-shelf items. Reverse engineering and philosophical investigation arise from the same desire to know. So when I read Kathryn Ingle's comment, "I do not believe that reverse engineering is truly anything new although its origins are vague. In all the research I have done on the subject since 1985 I have found no definitive discussion of where it came from or how it is conducted" (1994, p. ix), I felt that I had found both an answer to her question and a path connecting the ancient to the modern, towards a philosophy of computing: reverse engineering a microcomputer-based control unit.

CHAPTER I. INTRODUCTION

Reverse engineering creates knowledge through research, observation, and disassembly of a part in a system in order to discern elements of its design, manufacture, and use, often with the goal of producing a substitute. The failure-prone microcomputer-based control unit used in over 300,000 Bally electronic pinball machines manufactured from 1977 to 1985 was reverse engineered and replaced with Intel x86-compatible computer hardware running custom programs on a stock Fedora Core 2 GNU/Linux operating system. This chapter explains the context of the problem, the objectives of the study, its significance, and its assumptions and limitations.

Chapter II reviews the literature relevant to reverse engineering a microcomputer­based control unit. Chapter III develops the methodology used in the study and describes the apparatus. The results of the study are presented in Chapter IV. Chapter V gives the overall conclusions of the research and a discussion of future work. Appendices contain supplementary data, computer programs, and output from programmatic analysis of testbed performance.

Context of the Problem

Reverse engineering creates knowledge through research, observation, and disassembly of a part in a system in order to discern elements of its design, manufacture, and use. Often the goal is to procure an alternative for a costly replacement part or to build one from scratch if it is no longer available (Ingle, 1994; Schwartz, 2001; Cifuentes and Fitzgerald, 2000; Behrens and Levary, 1998; Lancaster, 1996). A formal reverse engineering project will not proceed beyond an initial screening phase unless the substitute part can be reproduced at a 25% savings over the next best alternative and yield a return on investment of 25:1 or better (Ingle, 1994). Most research literature in reverse engineering deals with computer software or machined parts (Ingle, 1994). Projects that have received the most attention (and documentation) are large-scale government and business undertakings where the substantial labor cost of doing the reverse engineering work is justified. The fact that only the largest and most profitable reverse engineering projects have been well documented implies that little scholarly attention has been given to reverse engineering items where cost savings and return on investment are not expected: reverse engineering complex components that combine machined parts and computer software.

The typical microcomputer control unit found in many consumer products ranging from automobiles to pinball machines consists of custom fabricated circuit boards, generic though possibly obsolete electronic components, and embedded computer programs. Cases often arise when a systemic failure of such a unit cannot be economically repaired, especially for old, first-generation devices. For example, the replacement cost for the electronic control unit used in 1970s and 1980s era pinball machines can be half the book value of the machine itself (Petit, 2002; Alltek Systems, 2004).

An unfortunate trait was built into many first-generation microcomputer-based control units, leading to their extinction: the placement of a rechargeable battery on the main circuit board to provide backup power to static memory chips. When these nickel-cadmium batteries leak, a common occurrence with twenty-year-old devices, extensive corrosion will be present at electrical junctions. The Bally Corporation used the AS2518-17 Microprocessor Unit (MPU) and a slightly revised AS2518-35 model from 1977 to 1985 in over 300,000 pinball machines. Games now malfunction or do not work at all because the MPU boards have suffered systemic battery acid damage. Note the bluish corrosion surrounding the battery and on the IC sockets in Figure 1.

Figure 1. Corrosion near the battery on an AS2518-17 MPU

Troubleshooting is very difficult since electrical blockages due to corrosion produce the same symptoms as failed components, many of which are expensive, obsolete items. The original manufacturer has long been out of business, so the supply of "new old stock" (NOS) units continues to decrease. Aftermarket replacements exist; however, they cost nearly one half the book value of most machines. The outcome of this original design fault is that a large number of games remain idle, are parted out for spares, or are destroyed.

Reverse engineering this control system involves replacing both hardware and software. It is expedient to utilize generic, "off the shelf" components wherever possible. One option is to use a secondhand personal computer rather than build a custom control unit. However, additional hardware is required to interface an x86-based personal computer to the approximately 40 digital I/O points that connect the AS2518-35 to the rest of the pinball machine. Furthermore, running a general-purpose, open operating system like GNU/Linux presents significant problems for satisfying timing requirements due to the nondeterminism inherent in program execution. Care must be taken when developing software programs to duplicate the operation of a single-tasking computer in a non-real-time, multi-tasking environment.

Statement of the Problem

The failure-prone microcomputer-based control unit used in over 300,000 Bally electronic pinball machines manufactured from 1977 to 1985 was reverse engineered and replaced with Intel x86-compatible computer hardware running custom programs on a stock Fedora Core 2 GNU/Linux operating system.

Objectives of the Study

1. Determine relevant specifications of the Bally AS2518-17 MPU. The scope of control operations included:

a. Energizing continuous and momentary solenoids,

b. Sensing switch closures,

c. Illuminating feature lamps, and

d. Providing rudimentary game play operations for evaluation.

2. Build a GNU/Linux, PC-based substitute controller for the MPU. This included:

a. Electrical interface to the pinball machine and

b. Control software framework design and implementation.

3. Evaluate project success. Criteria included:

a. Satisfying physical, electrical, and safety requirements,

b. Performing comparably to the original unit as judged by correctness over the scope of operation, including timeliness requirements, and

c. Meeting a cost target of $150 or less using recycled components, for a 25 percent cost savings over the $200 Alltek Systems Ultimate MPU unit.

Significance of the Study

Why reverse engineer a microcomputer-based control unit, specifically one from a pinball machine? Undertaking this study has practical, pedagogical, ethical, and ultimately philosophical significance. Reverse engineering scholarship addressing microcomputer-based control units has hitherto focused on non-commercial, low-production, high-cost military systems for the very good reason that reverse engineering is only justified when no commercial, off-the-shelf replacement is available for a given part and the system must be preserved at any cost. This sentiment is captured in Ingle's Go/No-Go Matrix, which is introduced in Chapter II.

If the proposed methodology succeeds, then a substantial cost savings can be realized for replacing defective Bally pinball MPUs. The fate of tens of thousands of pinball machines is at stake. Two solutions currently exist. The damaged boards can sometimes be repaired by a skilled electronics technician, although frequently the corrosion is so extensive that the effort is futile. Alltek Systems currently manufactures a proprietary replacement MPU board selling for $200 (Alltek Systems, 2004). Both of these solutions involve substantial costs that can cause the overall restoration of a typical pinball machine to exceed its book value. On the other hand, a well designed reverse engineering effort of moderate scale could produce a solution yielding considerable cost savings while actually raising consumer demand for electronic pinball machines in general. A project that the hobbyist community judges to be an "ultimate hack" will be eagerly repeated by others. Furthermore, if demand warrants it, a business opportunity will be created for a small manufacturing operation willing to produce and market reverse engineering kits. In the process more pinball machines will be saved from oblivion due to this reduced cost and renewed interest. Therefore, this study can influence the future history of pinball by slowing the demise of first-generation electronic games. Further practical significance will subsequently be derived when the methodology is successfully extended to other classes of microcomputer control units.

Reverse engineering can serve as an educational method as well as a practical tool (Honan, 1998; Lancaster, 1996; Farrell, Hesketh, Newell, et al., 2001). Reverse engineering microcomputer control units, because it seeks knowledge of both hardware and software requirements, touches upon real-time computer programming and electronic technology. Usually the most significant projects involve expensive, proprietary systems and are not suitable for student projects (Clark, 1997). Pinball machines, on the contrary, are relatively safe test beds for making the mistakes that accompany learning. With tens of thousands of units available, they can be used for teaching real-time programming or the hardware technologies that they embody, as well as for practicing reverse engineering techniques. Thoughtful analysis reveals that the key concepts of electronic microcomputer technologies can be made manifest through study of the operational parameters of the control unit and its related subsystems, including the basic concepts of a single-tasking computer and fixed automation. At the same time, solving the problems inherent in controlling a pinball machine with a nondeterministic computer could be an excellent way to learn about real-time operating systems and programming. Pinball machines are inherently easier to comprehend and better suited for experimentation than their state-of-the-art counterparts because they are less complex, less miniaturized, and designed with less exacting tolerances. Furthermore, the uncertain legal status surrounding reverse engineering products with vigorously enforced patents and copyrights confounds and misdirects pedagogical applications.

On the ethical front, a growing sense of environmental stewardship recognizes "high tech waste" as a global problem (Averett, 2003). Ethically minded corporate policies are beginning to drive "cradle to grave" product designs that minimize future waste. The same mentality ought to provide incentives to fund research into preserving certain classes of extant products that are presently disregarded because they are not deemed worth the expense to repair.

Besides those things that already have considerable value to hobbyists and collectors, such as vintage automobiles, it is possible to posit other products for which enhanced value may be derived in part from the very fact that they are amenable to reverse engineering. A carefully selected candidate for reverse engineering may harbor unexpectedly high latent value through the synergies created by actually executing the project. One example is using recycled personal computers for the replacement systems. Another is promoting low cost, hacker friendly, open source solutions like GNU/Linux. The hope is that these qualities can be leveraged to enhance other reverse engineering solutions.

As a technical thesis, this study embodies a critical step towards the development of a nascent discipline called the "philosophy of computing." Michael Heim, a well-known philosopher and technologist, made a provocative suggestion in his 1992 paper "The Computer as Component: Heidegger and McLuhan": "The Schreibstube [writing chamber] is giving way to the computer workstation, and scholarship requires a cybersage." How does one become a cybersage? The answer is self-evident: through a profound understanding of the computer workstation. The cybersage scholar builds, not just uses, computer technology. This ethical position is at the heart of the philosophy of computing. Not only is an educational program based on this study a way for technically minded scholars to learn how computers work from the ground up, but it also encourages comprehensive thinking in an intellectual climate where technology is once again forcing professionals to specialize due to the enormous complexity of the state of the art.

Assumptions and Limitations

1. The scope of this reverse engineering project was the Bally AS2518-17 MPU used in the game Evel Knievel. The solution replaced it and interfaced to the other control elements via the existing physical connectors.

2. It was beyond the scope of this project to implement control of the five seven-segment, vacuum fluorescent digital displays in the pinball machine head unit.

3. The reverse engineered solution, including the methods and means utilized to generate it, must not illegally infringe on identifiable copyrighted materials or patents, or violate any laws governing reverse engineering such as the Digital Millennium Copyright Act (DMCA).

Definitions of Terms

Black Box: The inner workings of the item under investigation are treated as inscrutable and must be inferred from documentation and observation of its behavior. This contrasts with a White Box, which is better understood as transparent. The reasons for not being able to take the item apart to look inside may be technical, legal, or practical.

C: A very popular computer programming language used for low-level systems programs, including the Linux kernel.

CPU: The Central Processing Unit is the core of a von Neumann architecture computer. A common first-generation example is the Motorola 6800, used in the Bally pinball MPU; more recent examples include the Intel Pentium series and AMD Athlon.

Determinism: In the context of computer-controlled operations, determinism refers to their degree of predictability: the time they take to execute, their frequency of execution in the case of a periodic process, and the latency before they begin to execute when triggered by some event in the case of a sporadic process. Real-time operating systems seek to increase the determinism of time-critical tasks.

Firmware: Computer programs and data embedded in electronic circuits such as read-only memory (ROM) and other types of programmable devices. From a legal standpoint, firmware is protected by the same copyrights that protect software and human-readable source code.

FOSS: Free Open Source Software, a designation given to computer software made available free of charge under licensing terms that permit the source code to be freely distributed as well.

GNU/Linux: An operating system package that contains a version of the Linux kernel accompanied by GNU operating system utility programs. Common examples are Debian, Mandrake, Red Hat, Slackware, and SuSE.

Memory Mapped I/O: An interfacing strategy in which input/output (I/O) lines appear to the computer system as addresses in memory.

MPU: The Microprocessor Unit is the main computer control board used in Bally electronic pinball machines. The specific models reverse engineered in this study are the AS2518-17 and AS2518-35.

PHP: PHP Hypertext Preprocessor, a popular FOSS scripting language that can be executed by a web server program to produce dynamic web page content.

PIA: The Motorola 6820 Peripheral Interface Adapter provides twenty-four bits of digital input or output to the Motorola 6800 CPU bus via memory mapping.

PPI: The Intel 8255 Programmable Peripheral Interface is similar to the Motorola 6820 PIA.

RTSC: The Real Time Stamp Counter built into Pentium-compatible x86 CPUs increments once each clock cycle. It can be read by programs and thus provides a means of very precise timing measurement. The assembly instruction RDTSC (Read Time Stamp Counter) retrieves its value.

ROI: Return on Investment is a measure of the economic benefit (or loss) of a reverse engineering project. It is defined as the overall cost savings minus the project cost, divided by the project cost.

RTAI: Real-Time Application Interface is a hard real-time extension to the Linux kernel originally developed by the Department of Aerospace Engineering of Politecnico di Milano. It is a sub-kernel that runs the Linux kernel as its idle task. The technology on which it is based has been the subject of a patent infringement claim (Laurich, 2004).

RTLinux: Another real-time package for the Linux kernel, originally developed by FSMLabs, with its own sub-kernel and scheduler for running real-time tasks. It adds a software layer beneath the Linux kernel with full control of interrupts and other hardware (Ripoll, et al., 2002).

RTOS: A Real-time Operating System has design features intended for handling time-critical operations, often to control physical machinery.

SCR: A Silicon Controlled Rectifier is an electronic device with three terminals. When a positive voltage is applied to the gate, the SCR conducts current from its anode to its cathode as long as the anode voltage remains positive. SCRs are used in the Bally pinball machine to turn feature lamps on and off.

CHAPTER II. REVIEW OF THE LITERATURE

A review of the literature included the history of reverse engineering scholarship, relevant theories, and current literature. Kathryn Ingle's pioneering 1994 work Reverse Engineering traces its origins and offers a generic, four-stage reverse engineering process. Suitable projects proceed through the stages of evaluation and verification, data generation, design verification, and finally implementation. Economic objectives and legal constraints determine which reverse engineering projects are typically pursued by professionals. Common economic objectives are 25% cost savings over commercial, off-the-shelf items and a 25:1 return on investment. Project cost and overall risk are positively correlated with system complexity and inversely correlated with data availability. Many articles dealt with the legal ramifications of reverse engineering (Behrens and Levary, 1998; Cifuentes and Fitzgerald, 2000; Duncan, 1989; Freeman, 2002; Godwin, 2002; Honan, 1998; Miller, 1993; Schwartz, 2001). Patents, copyrights, and trade secrets restrict legal data development, further limiting the domain of potential candidates. A clean room, also referred to as black box, method may be imposed to avoid even the appearance of infringement. The literature survey revealed that most scholarly research about reverse engineering features computer software or machined parts in large-scale government and business undertakings. Their research problems concern analyzing software components for porting legacy source code to state-of-the-art platforms and designing computer-integrated manufacturing systems for duplicating precision parts. Furthermore, the sole publication in the review that explicitly dealt with microcomputer-based control units focused on military rather than consumer applications (Welch, et al., 1996). While it provided a useful model for data development, it assumed a large budget and unrestricted access to program source code.

Educational applications suggested a means to generate return on investment for reverse engineering highly complex yet relatively inexpensive items such as the microcomputer-based control units found in consumer devices. Similar first-generation microprocessor systems are often used for undergraduate instruction in electronics and computer technology. An electronic pinball machine embodies a degree of complexity on par with the problems encountered in a capstone course, and it is fair game for reverse engineering because its patents have expired, although copyrighted program code is still protected. A black box technique for data development, utilizing an iterative method derived from ancient Greek philosophy, was used to analyze the device in terms of a closed-loop process control model (Bateson, 2002; Plato, 1973; Xenophon, 1992). Knowledge of real-time computing theory was leveraged to supplant legacy circuits with modern, general-purpose computer equipment (Dankwardt, 2002; Shaw, 2001; Silberschatz, Galvin and Gagne, 2000; Stankovic and Ramamritham, 1988). Current literature about the Linux 2.6 kernel and the free, open source software movement inspired the solution for creating a flexible, low-cost framework for controlling any Bally pinball machine based on the AS2518 MPU (Brosky, 2004; Heursch, et al., 2004; Laurich, 2004; Lindsley, 2003; Love, 2003; O'Reilly, 2004; Salzman, 2004; Tennis, 2004; von Krogh, 2003; Weinberg, 2004).

Historical Context

Reverse engineering creates knowledge through research, observation, and disassembly of a system part in order to discern elements of its design, manufacture, and use, often with the goal of producing a substitute. "The goal of reverse engineering is to enable systems engineers and automated tools to understand the important features of a legacy system's hardware, software, operating system, requirements, documentation and human elements" (Welch et al., 1996, p. 6). The United States Supreme Court and a District Court have defined it, respectively, as "a fair and honest means of starting with the known product and working backwards to divine the process which aided its development or manufacture," and "the process of starting with a finished product and working backwards to analyze how the product operates or how it was made" (Behrens and Levary, 1998, p. 27). While the basic concept of reverse engineering alludes to the sort of work done throughout history by technicians, hobbyists, and collectors attempting to preserve a precious object or system, as a formal technical operation bearing this name it is only a few decades old. The only scholarly work of substantial length devoted to reverse engineering is Kathryn Ingle's 1994 book Reverse Engineering. Subsequent publications exist that are devoted to particular reverse engineering projects, but few others address the concept of reverse engineering in general. The first appearance of the term in a formal, public document was in the United States government's Defense Federal Acquisition Regulation Section 217.720-2:

As a last alternative, a design specification may be developed by the Government through inspection and analysis of the product (i.e., reverse engineering) and used for competitive acquisition. Reverse engineering shall not be used unless significant cost savings can be reasonably demonstrated and the action is authorized by the Head of the Contracting Activity (Ingle, 1994, p. 28).

This puts the origin of the term at approximately 1985. Ingle's work examines the extensive program launched by the United States Navy and Department of Energy that processed over 150 reverse engineering candidates. However, Ingle claims that the work was conducted without a mature, clearly defined process suitable for handling large sets of candidates. She details the lessons learned from her years working in this program in a methodology called Prescreening and the Four-Stage Process. It begins by identifying a deficiency, whether an overly expensive part, one that fails prematurely, or one that otherwise degrades overall system performance. Its goal is to create a prioritized set of candidates from a pool of potential parts, each of which then proceeds through a four-stage process.

Reverse engineering should be differentiated from related terms like concurrent engineering, re-engineering, and value engineering, although they are often used interchangeably. Concurrent engineering is a manufacturing approach in which the design, prototype fabrication, production planning, packaging, distribution, and marketing aspects of a product all occur simultaneously within the context of a computer-integrated manufacturing system. Re-engineering is the activity of replacing an entire system, not just the discrete items within it that are faulty. Value engineering is reverse engineering aimed at adding new features to the existing system in order to increase the return on investment. Finally, reverse engineering is a significant undertaking, and should be contrasted with hobbyist activities focused on working on a single unit, which are better called hacking or tinkering. Ordinarily, reverse engineering projects take place in corporate or institutional settings, involving a team of participants and incurring tens of thousands of dollars in costs (Ingle, 1994).

The intellectual trends reflected in reverse engineering include comprehensive systems thinking, "cradle to grave" design processes, iterative design processes, analytical technical problem solving, and a freely collaborative mentality, which are epitomized today by the free, open source software movement (O'Reilly, 2004). Conjoined to this valuation of intellectual sharing are ethics aimed at recycling equipment to reduce high-tech waste, and the desire to rekindle interest in science and technology in education (Averett, 2003; Farell, et al., 2001; Godwin, 2002).

The sole publication found in the literature review that explicitly dealt with reverse engineering microcomputer-based control units focused on military rather than consumer applications. "Reverse Engineering of Computer-Based Control Systems" claims to have "spanned the discipline of reengineering, the systematic application of methodology and tools for managing the evolutionary transformation of existing computer-based system to encompass new or altered requirements and to transport such systems into new environments and onto new technology bases" (Welch et al., 1996). The study featured the United States Navy's AEGIS Weapon System, an expensive, non-commercial platform written in the Ada programming language. Its methodology depends on the availability of source code for the legacy software, which may be problematic in the case of purchased, commercial applications.

The literature survey also revealed pedagogical applications of reverse engineering. In "The Right to Tinker" Mike Godwin suggests that the discovery process of taking things apart to learn how they work "is central to how many Americans think about things they own, from cars to televisions to computers and software and digital content" (Godwin, 2002). He argues that American creativity is tied to our right to tinker, which in turn has led to the creation of countless technological innovations. The thrust of Godwin's argument, and many others, is to counter recent legal judgments against those who tinker in favor of those who own intellectual property.

Many of the allegedly infringing cases derive from scholarly research, such as the 2001 arrest of Dmitry Sklyarov after delivering a lecture on reverse engineering the security behind Adobe's eBook format and Bunnie Huang's 2003 book Hacking the X-Box: An Introduction to Reverse Engineering, and it is feared that the fount of American creativity is at stake. An ideal application of reverse engineering would skirt any serious legal issues, for example the 2001 work of Farrell, Hesketh, Newell, et al., "Introducing Freshmen to Reverse Process Engineering and Design through Investigation of the Brewing Process."

To date, nobody has published a paper on reverse engineering obsolete electronic amusement equipment, although there is a paper entitled "A Test for Real-time Programming in JAVA: The Pinball Player Project." It concentrates on controlling the pinball flipper circuits using a machine vision system to track the ball movement. The authors write that "[t]he system has a digital I/O board that has relays connected to the Flippers. Sending a 1 to a relay raises the flipper, a 0 lowers it. So controlling the pinball machine itself is fairly simple" (Clark and Goetz, 2001). It is implied that the remainder of the pinball machine's original control system functions normally and has not been replaced by another control system. No doubt this is because their research program is more concerned with creating an instructional example for learning real-time programming than for learning the basics of electronics and computer technology. In an earlier publication Clark suggests there is a pedagogical problem that can be addressed by a pinball machine-based project.

[T]he problem is the lack of access to computer controlled real-time systems brought about by the high cost of such systems. ... The goal of the Pinball Player project is to develop a complete package (control and development computer(s), the controlled device(s), sensors, actuators, interfaces, and basic software) that can be put together from commonly available items and without exceptional mechanical or electronic engineering for under US $10,000. The experiences gained from creating such a system will be shared with the academic and research communities at large so that many institutions can develop and expand on the initial system (Clark, 1998).

In a 2003 email communication with the author, however, Clark admitted that there had been little interest in his project from other institutions, and that it was essentially dead.

The impetus to resurrect a form of Clark's project is the fact that first-generation microcomputer-based systems, already far beyond their intended useful lives, are nearing the end of their serviceable lives as well; they represent an educational enterprise with a material payout.

In the context of this study pinball machines are coin-operated amusement devices on which a steel ball rolls on an inclined play field. The player controls solenoid-powered flippers (bats) to try to keep the ball from rolling down the middle of the play field. There are various features on the play field that can knock the ball away, capture it, or score points. Early pinball machines are referred to as electro-mechanical because they use electrical mechanisms such as relays and clockworks to control game play. Electronic pinball machines use a microcomputer-based control system. Figure 2 shows the major components of an electronic pinball machine; the MPU assembly is colored light blue.

Figure 2. Bally electronic pinball machine

The first electronic pinball machines were manufactured in the late 1970s. The Bally Corporation outproduced its competitors three to one, and built over 300,000 units each using the same basic control system (Flower and Kurtz, 1988; Petit, 2002). Placement of a battery on the main circuit board has led to systemic failures in a large percentage of these machines today. Therefore, this system has been selected as the candidate for reverse engineering.

Relevant Theory

Ingle's Four-Stage Process

Ingle points out that the United States military was only beginning in the mid 1980s to appreciate the dilemmas entailed by combining technology with a five-year life span (semiconductors) into assets with a thirty-year life span (missiles and other weapon systems) (1994, p. 139). Indeed, the reverse engineering program on which she bases her research arose from the need of the United States federal government to procure lower-cost replacement parts for aging military equipment following popular uproar over the procurement of $400 hammers and $800 toilet seats. Businesses were making similar discoveries a decade later. In some cases obsolescence and lack of supply can drive a project regardless of cost. In others, the commercially available replacements are judged to be too expensive. Today similar dilemmas exist for first-generation microcomputers found in countless devices, from industrial machinery to household appliances.

When reverse engineering is driven by economics, the common objectives are 25% cost savings over commercial, off-the-shelf (COTS) items and a 25:1 return on investment. Project cost and risk are positively correlated with system complexity. Ingle captures this relationship in a Go/No-go decision matrix that shows the relationships between technical complexity, reverse engineering cost, which is equated with risk, and the decision whether or not to use reverse engineering. It is reproduced in Figure 3.

Figure 3. Go/No­go decision matrix

Since the need to reverse engineer implies that knowledge of crucial data is lacking, Ingle summarizes the relationship between cost, risk, and data availability as follows:

More data, less risk, and lower cost are good signs pointing toward the use of reverse engineering. When the technical data appears complete but there is some doubt as to its usefulness, there is product verification. If there is some technical data available, the project requires data enhancement. Data enhancement is performed whenever there is any amount of missing data. If there is no technical data available, the project requires full data development (1994, p. 54).

Data availability determines project type, ranging from product verification to data enhancement to data development. Cost and risk tend to escalate dramatically when data must be developed by reverse engineering, because such projects often demand expensive labor and apparatus. It is also possible to have high-cost, low-complexity projects; that is, factors other than complexity can account for the high cost to reverse engineer. "[Data development] can become a nightmare when material compositions are not easy to determine, tolerances have to be guessed at, or the item provides a nasty surprise by possessing an internal cavity filled with an unknown lubricant or a circuit card which has three extra layers not previously mentioned" (p. 55). Likewise, projects involving hardware and software components often exhibit moderate to high complexity due to all the details that may be concealed in electronic circuitry. It should be obvious that data development entails the highest cost and risk. However, there can be overriding factors that force an "at all costs" attitude. The most common are obsolescence and lack of supply support in systems that are nonetheless deemed critical.

Prescreening Process

What classes of candidates will surface within the context of the Go/No-go decision matrix? In a large-scale reverse engineering program, the first step is for a cross-functional team consisting of engineers, draftspersons, and shop support to identify the parts within the targeted systems that are actually viable candidates for project work. Ingle defines a candidate as "a singular item, part, component, unit, or subassembly and may contain many smaller parts, but it is either purchased, sold, marketed, or otherwise described as a single entity" (1994, p. 43). A good candidate may be a part with a high failure rate, high annual usage, or simply a high cost.

It should not be overly complex or so critical to the system that its failure may cause loss of life or destruction of the whole system. Candidates are typically judged on economics, logistics, return on investment, and technical complexity and criticality. Particular attention must be given to what parts of a particular system are protected by patents. On the one hand, reverse engineering is only recommended for low-complexity parts when there is no COTS item available, due to either obsolescence or lack of supply support, unless project cost is low enough to generate a substantial return on investment. On the other hand, there is often a shortage of high-complexity parts, especially for older, out-of-production equipment. One way to reduce the cost of a reverse engineering project is to devise a rationale for discounting the time spent on it, since the bulk of the project cost is often labor. A basic premise of this work is that conducting a reverse engineering project in conjunction with an education in electronics and computer technology permits the cost of time spent to be spread between the two tasks, with the majority of the weight going to the educational objectives. Then the low cost of parts and other materials drives up the return on investment. Another way to reduce cost, discussed later, is the use of the open source development framework for distributing work worldwide among professional volunteers, hobbyists, and other enthusiasts.

The overall goal is to increase the effectiveness and productivity of the system by replacing troublesome items with an improved part. A great deal of work is done up front in the prescreening phase and Stage 1 in order to avoid cost overruns later. All available data is collected for the candidates in question, including drawings, technical manuals, usage data, maintenance logs, and performance specifications. It is important to identify what data is missing because the costs can increase exponentially when technical data must be developed from scratch. Large scale programs should have a cost tracking system and a comprehensive database.

An informed decision must be made whether the candidate is to pass forward into the four-stage process. Ingle presents a prescreening recommendation sheet used in government projects. It includes notes about technical data availability, economic factors, logistics factors, the project type, overriding factors, judgments about technical complexity, an overall engineering judgment, further recommendations, and project priority. Economics should focus not simply on the unit cost savings but include overall return on investment, taking into account the life cycle savings and the cost of conducting the reverse engineering work. The standard criterion is whether the substitute part can be successfully reproduced at a 25% savings over the next best alternative and yield a return on investment of 25:1 or better.

Stage 1: Evaluation and Verification

Candidates that have passed through the prescreening phase are prioritized according to criticality of need or likelihood of success, and each part proceeds through the Four-Stage Process. Stage 1 amounts to another data collection and evaluation activity to decide whether and how a particular candidate should be acted upon by the reverse engineering team. Ingle summarizes it as follows:

Stage 1 entails the complete characterization of a part using visual and dimensional inspection, material analysis, and identification. Comparisons to available data must be made. A failure analysis is conducted if failed sample parts have been obtained. Then, a quality report is generated. A stage 1 report must also be generated complete with the projected reverse engineering cost estimates. A final go/no-go decision must be reached by both the project leader and the approval body on the basis of the available information (1994, pp. 60-61).

The overall recommendation can take three directions. If it is found that the available technical data is adequate, no further work is required. If the project is not economical, it is terminated. Otherwise, the recommendation to proceed to Stage 2 is made.

Ingle provides a list of key design features to identify. Most deal with machined parts; however, many are relevant to a microcomputer-based control unit. In particular, dimensions and tolerances are important when it comes to circuit board size and interconnections with other system parts. Fortunately, mass-produced control units will likely utilize standard parts that can easily be identified. These include electrical and electronic components such as resistors, capacitors, semiconductors, and integrated circuits, standard wire types and connectors, and so on. Similarly, material composition is usually not a major issue because circuit boards tend to be made of generic materials and utilize generic components that are easily identified. Trouble arises when there are unidentifiable discrete components, custom integrated circuit chips, or programmable chips. Here the concept of disassembly takes on a different meaning than that associated with taking apart physical items: disassembly is the process of translating machine code instructions back into assembly language code that is intelligible to computer programmers. A related term, decompilation, refers to translating machine code, or object code as it is often called, into a higher-level language like C. Although Ingle does not address the problem posed by custom programmed circuit components, it can be likened to material composition analysis.

Operational testing is very relevant to electronic circuits. Without a working unit it may be impossible to determine exactly how the original unit is supposed to work in order to design a replacement. Ingle points out that "failure to either meet or exceed any known operating conditions could warrant a redesign, particularly if safety conditions are not met or are outdated" (1994, p. 67). Safety factors may also have to be reconsidered in light of standards that have evolved since the original manufacture of the device.

Stage 1 leads up to an overall recommendation made to the project sponsors. One outcome is that the reverse engineering work is complete, which may be the case in data verification. Another may be that the candidate's overall priority is lower than initially judged and it should be put on hold, or that the project should be terminated if legal issues or other unforeseen complications have arisen, wiping out any predicted return on investment. In rare cases such profound deficiencies may be uncovered that the system as a whole must be abandoned as soon as possible. Otherwise, a recommendation is made to proceed to Stage 2, Technical Data Generation. The Stage 1 costs should be considered sunk unless the project continues forward. In a well defined reverse engineering program a formal report summarizing all of this information would be created.

Stage 2: Technical Data Generation

Technical data generation has already been occurring throughout the preceding stages. Generating the missing data that has been identified in prescreening and Stage 1 is the essence of reverse engineering. The goal is a complete and unrestricted package suitable for fabrication and procurement. The Stage 2 deliverable is a Preliminary Technical Data Package in Ingle's jargon. It should change little in its basic content through the next two steps of the Four-Stage process. The flow of technical data development in a formal project management process is shown in Figure 4 (Ingle, 1994).

Figure 4. Flow of technical data through the Four-Stage process

The reference document for the generation of generic technical data packages, which include engineering drawings, performance specifications, and test, inspection, and quality assurance information, is MIL-T-31000, General Specification for Technical Data Packages. Although developed for military applications, it has been widely accepted for commercial applications as well.

Standards are very important when it comes to technical data of any sort. They help with communication not only within an organization, but also in the manufacturing and procurement sectors, which may be global. Ingle lists a number of ASME (American Society of Mechanical Engineers) standards commonly in use for engineering drawings. Of interest to a project focused on microcomputer-based control units would be the engineering drawing standards for generation of electrical and electronic diagrams, and special application drawings for wiring harness drawings, printed-board drawing sets, and computer program and software drawings.

Stage 3: Design Verification

The object of Design Verification is to build and test prototypes using specifications developed in the previous stage. Sometimes it is not deemed necessary to build a prototype; doing so reduces risk but increases cost. When substantial data development has been done, or part substitutions and deviations from the original design have been made, however, prototype testing is mandatory. Basic operation testing is often referred to as bench testing, in which a special apparatus is used to test a component outside of the system in which it operates. Full testing in a working system is preferred, although it may be difficult to justify unless a test system is available so that a production system is not disturbed. Following the testing, the prototype part should be inspected for unusual degradation or failure. Degradation may reveal hidden, proprietary design features that were not factored into the reverse engineered design. It is at this point that inspection criteria are added to the data package for the finished product. This is the measure by which the manufactured replacement part will be judged for compliance to the design. A quality assurance criterion such as ISO 9000 may also be incorporated at this stage.

Ingle provides a design verification checklist to be completed before moving to design implementation. The final question, which echoes a test of repeatability in experimental design, asks: "Is the information contained in the complete technical package sufficient to fabricate, test, inspect, and procure this part from another manufacturer?" (1994, p. 116).

Stage 4: Design Implementation

The aim of the final stage in Ingle's reverse engineering methodology is to produce a Final Technical Data Package. This includes procurement requirements and an engineering and economic report summarizing the activities that took place in the other three stages of the project. Prototypes should also be delivered at this point for presentation before the project sponsors, in the case of a formal program. After the presentation, the prototypes should become part of the working supply of parts for their intended system.

Legal Issues

Ingle makes it clear that a patented object must never be reverse engineered; "[i]f a system component proposed for reverse engineering is patented in any country, all reverse engineering efforts for that component must be discontinued" (1994, p. 33). Design theft, the other major legal concern, pertains to trade secrets and copyrighted material. Especially in the case of computer software and firmware, which is part of any microcomputer-based control unit, reading copyrighted source code is considered design theft. It is often argued that in order to translate machine-readable code into a human-readable format, known as disassembly or decompilation, a copy of the original is made in the process, albeit in a different form (Davidson, 1989; Miller, 1993; Cifuentes and Fitzgerald, 2000).

Applying copyright protections to embedded computer programs creates a type of information Ingle refers to as restricted. This means it cannot be legally used during technical data development or any later stage in the reverse engineering process. Legal precedents devolve from the 1976 Copyright Act, which protects the contents of firmware, including read-only memory chips (ROMs) and programmable read-only memory chips (PROMs), because intelligible source code can be mechanically derived from them (Davidson, 1989). This stance was strengthened by the 1984 Semiconductor Chip Protection Act, although it does make exceptions for teaching. IBM successfully sued rivals who had copied its Basic Input Output System (BIOS) in order to make clones of the Personal Computer. The Digital Millennium Copyright Act of 1998 (DMCA) endorses this interpretation of copyright law but makes exceptions for reverse engineering when the purpose is interoperability, encryption research, computer security testing, law enforcement, or protecting personal privacy (Cifuentes and Fitzgerald, 2000). Behrens and Levary add "teaching students to write code" and "for repairing malfunctioning software" (1998, p. 28). These purposes are often linked to the concept of fair use, which is commonly granted to artistic and critical appropriations of other kinds of copyrighted material. A fair use defense, however, is not sufficient protection from liability.

There may be licensing agreements with the owner of the code. There must be no other means to obtain the desired information, including obtaining an authorized copy. The problem is that in order to disassemble or decompile machine code, a copy of it must be made. To avoid even the semblance of impropriety, legal scholars recommend a clean room procedure to organizations wishing to create a compatible clone, and this concept has been applied to many electronic devices besides the personal computer (Davidson, 1989; Miller, 1993). Of the clean room Davidson writes, "[t]he programmers in the clean room must be fed only the design information of the original program (the ideas), not any specific elements of programming itself (the expression)" (1989, p. 159). Therefore, when the goal is to create a substitute for an existing, commercially available part, the preferred method of data development is to treat the item containing copyrighted content as a black box.

The distinction between white box and black box reverse engineering methods is borrowed from software testing methodologies. White box (structural) testing "analyzes the code and uses knowledge about the structure of a component," whereas black box (functional) testing "treats the system to be tested as a black box whose behavior can only be determined by studying its inputs and outputs" (Cifuentes and Fitzgerald, 2000, p. 338). Lawyers recommend that researchers generate a paper trail to demonstrate that data was derived from functional testing; Miller states that "the preferred (although not mandatory) practice is to have a knowledgeable witness corroborate a laboratory notebook, using the phrase 'read and understood'" (1993, p. 65).

Microcomputer Technology

Microcomputer technology poses an interesting dilemma. On the one hand, rapid advances in the state of the art continue to provide faster, cheaper, and smaller devices; on the other hand, this frenetic pace of progress can lead to shortages of spare parts for aging equipment that has an intended life span beyond three to five years (Ingle, 1994). Indeed, there remains a large installed base of equipment controlled by first-generation microcomputer systems. These same first-generation microprocessors are often used for undergraduate instruction in electronics and computer technology because they embody fundamental concepts in a package of manageable complexity. An electronic pinball machine embodies a degree of complexity on par with the problems encountered in a capstone course.

At the heart of many of these systems can be found representatives of the most common first-generation 8-bit microprocessors, which include the Intel 8085, Motorola 6800, Rockwell 6502 and Zilog Z-80. All of these devices are based on the von Neumann model of the stored program computer. "[D]espite advances in semiconductor technology and microprocessors, the basic architecture of the digital computer has remained unchanged for the last 35 years. This is the so-called von Neumann model of the stored program computer" (Uffenbeck, 1991, p. 2). Its basic components are the central processing unit (CPU), arithmetic logic unit (ALU), which is often integrated into the CPU, memory unit, and input/output (I/O) devices. The Motorola 6800 CPU architecture that is at the heart of the control unit that was reverse engineered in this study is illustrated in Figure 5 (Microprocessor Products Group, 1998).

Figure 5. Motorola MC6800 CPU block diagram

The key to understanding the activities of any von Neumann computer is the fetch-and-execute principle. This is the process by which the CPU fetches data from the memory addressed by its program counter into its instruction register, increments the program counter, executes the command, and then repeats the cycle. The CPU, memory, and I/O devices communicate with each other via three electrical buses. A bus is defined as a collection of lines that each carry a discrete voltage level (Uffenbeck, 1991). Each digital signal line is referred to as a bit. Thus a typical 8-bit microprocessor has an 8-bit data bus, an 8- or 16-bit address bus, and a control bus. Normally the I/O lines that link the computer to the rest of the world are not connected directly to the microprocessor data bus. Instead, special-purpose support devices designed to accompany a particular microprocessor provide a configurable set of I/O ports in excess of the 8 bits available on the data bus. Other support devices are designed to interface to a particular piece of hardware such as a disk drive or keyboard. An Intel x86 based control unit is likely to utilize such chips as the 8255 Programmable Peripheral Interface (PPI) for digital input and output; Motorola 6800 based systems often utilize the 6820 Peripheral Interface Adapter (PIA).

Both chips can provide 24 bits of buffered I/O to the microprocessor's 8-bit data bus via addressing. A memory mapped I/O scheme allows the digital lines on these chips to appear in system memory at three consecutive addresses, for example 0x280 through 0x282. Although it is now obsolete, the 8255 PPI integrated circuit remained relevant for over a decade, appearing in textbooks from 1980 (Artwick) through 1991 (Uffenbeck).

Accessing I/O is typically done via polling, interrupts, or direct memory access. The CPU can poll an I/O port to determine whether there is data available to be read in, or whether the port is free to accept data written out to it. The device itself can signal the CPU via an interrupt line that it is either ready to accept data for output, or that data is available for input. Interrupt based I/O is often claimed to be more efficient than polling, but it complicates program design because a running program is literally stopped in its tracks so that the CPU can process the interrupt request with a special routine. Direct memory access schemes are often used for transferring large amounts of data quickly; the downside is that for most DMA types the CPU must remain idle while the transfer takes place, or it must refrain from accessing the affected memory region.

John Uffenbeck proposes a curious perspective for the purpose of learning assembly language programming:

to consider the effects each computer instruction has on the electrical lines or 'buses' of the microprocessor chip itself ... [f]rom the standpoint of the three-bus architecture, there are only four unique instruction cycles possible. ... The instruction set of a computer can be thought of as a list of commands that cause unique sequences to occur on the three buses (1991, p. 2-5).

From this perspective the intended purpose of the computer software is irrelevant. What matters is the effect its operation has on the digital bus lines, viewed one tick of the CPU clock at a time.

The possible changes in state on the bus are limited in number. This largely functional, black box approach should be contrasted with the structural one. The ideal functional analysis would take Uffenbeck's perspective to its logical conclusion: monitor the state changes of the computer bus while putting the system through all known operational permutations. While the operation may sound fanciful and unmanageable to a human integrator due to the immense quantity of data it would generate, it may be well within the grasp of adaptive, automated analysis tools.

The only published, scholarly literature on reverse engineering computers involves very expensive systems using very expensive apparatus. Reverse engineering computers has not been explored by academicians beyond those who attached themselves to highly funded research programs. Welch et al. (1996) present a hypothesis and experiment not easily repeated by most students of electronics and computing technology. Their article "Reverse Engineering of Computer-Based Control Systems" takes as its quintessential example a rare, expensive device (the AEGIS weapon system) programmed in a computer language (Ada), both far from the generic, general purpose mainstream desktop and server world.

Process Control

The discussion of reverse engineering has progressed from a basic methodology that arose from military applications into the realm of devices with embedded microcomputer systems. When black box testing strategies are required, a thorough understanding of the purpose of a system is necessary in order to discern what secrets may be contained in the box. If the targeted item is a control unit of some kind, then familiarity with the basic types of computerized process control is beneficial for filling in missing information about it. Even when the white box testing method is permissible, knowledge of process control strategies may help to make sense of software algorithms reconstructed from the disassembly of program code that has been extracted from a read-only memory or other programmable devices. A typical closed loop process model is shown in Figure 6.

Figure 6. Closed­loop process control diagram

The loop is closed rather than open because feedback from the output is used to regulate the process. In continuous processes the inputs and outputs can vary continuously between low and high values; discrete processes, as the name implies, have inputs and outputs that always take discrete values, such as a low and a high voltage level.

The analysis of a microcomputer-based control unit like the Bally unit falls in line with advanced undergraduate courses in process control systems. Clark's research into using a pinball machine for real-time programming education in computer science pointed to the problem of a continuously improving state of the art rendering common problems insignificant. Nevertheless, non-deterministic, multi-tasking operating systems present substantial shortcomings for process control. Dayton Clark's Pinball Player Project is intended for real-time programming education (Clark, 1988). As of 1998 it used a general purpose digital computer to track the movement of the pinball and operate the flippers appropriately. Real-time processes were developed to control the system's operation. Clark's rationale for pursuing the project is that few good test beds are available for teaching real-time computing concepts due to the expense and high stakes involved in typical practical applications of real-time systems. The pinball machine is a relatively low-cost, safe test bed.

Clark's project makes no attempt to implement the degree of control intended in this work. His apparatus leaves the original pinball machine control system intact and only interfaces to the two flipper buttons. As a result, he finds that his framework is rapidly becoming obsolete due to continued improvements in processor speed. The computational problems that formerly carried timing requirements with respect to the operation of the control elements (flippers) disappear as the process execution time falls below a particular threshold. By contrast, forcing the system designer to take into account the comprehensive scope of timing requirements needed to operate the pinball machine itself, and not merely the two flipper buttons, generates a number of non-trivial real-time requirements that are not dependent on CPU speed alone.

Real­time Computing

Running a general purpose operating system like GNU/Linux presents significant problems for satisfying timing requirements due to the nondeterminism inherent in program execution. Care must be taken in developing software programs to duplicate the operation of a single-tasking computer in a multi-tasking environment that does not qualify as a true real-time operating system. A computer program with real-time requirements has temporal conditions that must be met in addition to the logical conditions required by conventional programs (Shaw, 2001; Stankovic and Ramamritham, 1988; Dankwardt, 2002). The simplest timing assertion is a deadline; this is the time by which a computation must occur. Two basic types of real-time processes are referred to as periodic and sporadic. A periodic process recurs once in a given period, or exactly so many time units apart; a sporadic process is triggered by events that may not exhibit a fixed pattern. Both types of processes must complete their computations within their deadline. The standard notation for representing the temporal requirements of a real-time process is a triple (c, p, d) (Shaw, 2001). This notation applies to both periodic and sporadic processes.

For a periodic (cyclical) process the quantity c is the worst case estimate for the total computation time of the process; p is the overall period between cycles of the process; and d is the deadline for the process to complete. The process meets its design specifications when the inequality c <= d <= p is satisfied. That is, the computation time must be less than the deadline for the process to complete, which in turn must be less than the period of each cycle. Often the deadline is equal to the period. This relationship is illustrated in Figure 7 (Shaw, 2002).

Figure 7. Real­time periodic process

A sporadic process is initiated by some event rather than repeating in a defined cycle. It may arise from the result of computation by a control algorithm or from a change detected in the physical state of a hardware input. Often the external event is communicated via a hardware interrupt. In this case p is the minimum time between events. The process deadline is often referred to as response time, with the interrupt as the stimulus. Since the time it takes to sense the event (t_e) is a factor as well, the timing specification for a sporadic process may be expressed by the inequality c <= t_e + d (Shaw, 2001).

By definition, the consequence of failing to meet a deadline in a hard real-time system is system failure. Soft real-time systems are by definition more tolerant: failure to meet deadlines may result in slow system response, missed events, or other outcomes that the user may deem unsatisfactory. Common examples are jittery motion of animated objects and moments of silence during digital music playback. Additionally, most timing constraints are deterministic, meaning both the minimum and maximum computation time must fall within a specific range, and for periodic processes repetitions of the cycle must occur at a fixed frequency. Jitter is the term used to describe the variance in response times to a stimulus (Dankwardt, 2002). Finally, real-time systems tend to emphasize reliability and fault tolerance in their design as well, because they control physical processes like automobile engines and missile guidance systems.

Writing a computer program that satisfies the timing requirements for a single control process when it is the only thing running on a system is relatively simple. For example, consider an application in which a brief pulse is generated on an output port once every thousandth of a second. On a single­tasking computer, this behavior can be achieved through a simple nested loop, as in the following assembly code presented in Table 1 along with computation times based on a 1 MHz Motorola 6800 CPU similar to the one used in the Bally 2518 MPU.

Motorola 6800 Assembly Code    CPU Cycles    Cumulative Execution Time
START  LDA #0                  2             2 microseconds
       STA PORT1               4             6 microseconds
LOOP1  LDA #82                 2             8 microseconds
LOOP2  DEC                     2             10 microseconds
       BNE LOOP2               4             14 microseconds
       LDA #1                  2             792 microseconds
       STA PORT1               4             794 microseconds
       LDA #20                 2             798 microseconds
LOOP3  DEC                     2             800 microseconds
       BNE LOOP3               4             804 microseconds
       JMP START               3             999 microseconds

Table 1. Assembly code to create 1000 Hz waveform

Assuming true single-tasking operation without interrupts, the output at PORT1 would be a square wave with a frequency of approximately 1000 Hz and a 25% duty cycle. There are no other tasks for the CPU to perform besides cycling endlessly through this program. Consequently, its timing behavior is not a matter of statistical averages; rather, it is deterministic. The time it takes to execute the machine instructions represented by the assembly code can be determined a priori based on the CPU clock frequency and the number of cycles consumed by each instruction.

The problem becomes more challenging when there are multiple processes meant to be running simultaneously on the same computer, each with its own timing requirements. On any single CPU multitasking system, only one process can actually execute at a time, and the semblance of simultaneity is the effect of the operating system scheduler rapidly switching from one process to the next. Special purpose computer systems can be designed in which multiple real-time tasks are programmed to meet their temporal requirements via a cyclic executive (Shaw, 2001). The cyclic executive is a control program designed to produce a feasible schedule by interleaving a predefined set of processes in a continuous cycle. However, Shaw notes that "[m]odern concurrent programming techniques are incompatible with the need to break code into predictable blocks and preschedule these blocks. The method is also inflexible and brittle with respect to changes" (2001, p. 24). What is desired is a convergence of the flexibility and conveniences offered by general purpose, multitasking operating environments with the predictability and reliability guaranteed by statically designed, special purpose, deterministic systems. Thus many real-time environments have been derived from general purpose operating systems by stripping down and optimizing the system kernel to have fast context switches, quick response to external interrupts, and no virtual memory (Stankovic and Ramamritham, 1988).

Virtual memory in particular can add substantial execution time to a process if a hard disk or other secondary storage device must be accessed to retrieve a region of memory that has been swapped out.

Is a multi-tasking environment controlled by a scheduler-based, time-sharing operating system like Linux amenable to real-time tasks? CPU time is divided into time segments, and each process in the system is given a 'time slice' in which to execute. If a process does not complete before its time expires, it is swapped out and replaced by another process so that the latter can run. This is the essence of a time sharing system. Not only must time-critical processes compete with ordinary processes for CPU time and other system resources, but they must also compete with the operating system kernel itself. The kernel may preempt a running process and replace it with another, more important task. Or a running process may block for an indeterminate amount of time waiting for a requested resource, such as a file stored on a remote file server, to be made available by the kernel. Additionally, modern operating systems employ techniques such as caching and virtual memory to enhance overall system capability and performance. Despite these measures, the kernel itself must be preemptible to guarantee real-time performance. Shaw writes, "[t]he effects of a general-purpose operating system are normally not deterministically predictable; OS functions may preempt applications processes at unpredictable times and their execution times are often not known precisely. For these reasons, modifications of the general architecture are made, but often in an ad hoc way" (2001, p. 27). In addition to the challenge of cooperating with the operating system and the other programs that may be running, there is the question of application development. What computer programming languages are well suited to creating real-time tasks? Besides having characteristics like control of interrupts, access to fine-grained timers, and so on, they should support reuse and adaptability. Changes in system state, configuration, or input specifications should not require fundamental redesign of a system. Such is the liability of strategies that rely on leveraging specific features of a static platform (Stankovic and Ramamritham, 1988).

A key element in software development for real-time computing is the selection of the programming language used. The programming language of historical significance for real-time computing is Ada. It was developed in the late 1970s through a design contest sponsored by the United States Department of Defense, the aim of which was to have all of its contractors adopt a standard programming language for command and control applications. The important features of any real-time language are time access and control, concurrency, and predictability; the Ada 95 specification addresses all of these. By contrast, popular languages like C and C++ require extensive modification or severely restricted use to meet real-time software needs (Shaw, 2001).

The programming language that is emerging as the favorite for real-time software today is Sun Microsystems' Java with real-time extensions, although there are many derivatives of Microsoft Windows and GNU/Linux that provide real-time extensions to standard languages like C and C++ (Shaw, 2001). Besides providing software primitives suitable for achieving temporal objectives, these languages facilitate adaptability and reuse of program code.

Obenland identifies timer jitter, response, and 'bintime' as three benchmarks designed to measure the determinism of an operating system. Bintime determines maximum kernel blocking time, that is, "time spent blocked in the kernel" (2001, p. 3). Obenland's latency benchmarks involve synchronization, message passing, and signaling. A common graphical presentation of the timing behavior of large sets of samples of repetitions of a periodic process is a cumulative percentage graph. It shows what percentage of the sample falls within particular minimum or maximum values. Often the 99.999% threshold is given special attention (Laurich, 2004).

GNU/Linux

Of the many general purpose operating systems commonly in use, GNU/Linux was selected for this study on account of its zero cost and unrestricted source code licensing. The Linux kernel and GNU (GNU's Not UNIX) system programs are distributed under the General Public License (GPL), which compels original and derived works to include program source code or make it available, freely modifiable by all users (Free Software Foundation, 1991; Open Source Initiative, 2004; O'Reilly, 2004; Stallman, 2002). Mission-critical process control systems typically are not entrusted to non-deterministic digital computers like a general-purpose Intel x86 architecture GNU/Linux machine. That is not to imply that no GNU/Linux applications contain real-time requirements. The distinction between interactive tasks and real-time tasks has become blurred due to the expansion of what is meant today by an interactive task.

Whereas formerly interactive tasks were those in which rapid response was desired for user interaction, the category now includes, according to Lindsley, "tasks that should receive high priority upon waking up from self-imposed sleeps" (2003, p. 22). For example, a digital audio player may perform unacceptably when a browser reloads a web page. Indeed, current GNU/Linux applications may try to employ built-in mechanisms that induce the operating system to behave more deterministically with respect to a particular running program. However, these mechanisms only adjust parameters; the 2.4 version of the Linux kernel is not preemptible. This means that the kernel cannot break out of a system call done on behalf of a user process, even if the process time slice expires or a higher priority user process enters the run queue (Dankwardt, 2002). The condition known as priority inversion occurs when a lower priority user process is granted a resource that subsequently holds up the execution of a higher priority process.

The challenge is to induce sufficient determinism from an off-the-shelf, generic computer running a generic GNU/Linux distribution to satisfy the timing requirements of the control system being replaced. Nondeterminism in a GNU/Linux environment usually manifests itself as considerable degradation in application performance from its average condition. The root cause is contention for resources, including synchronization primitives, main memory, the CPU, a bus, the CPU cache, and interrupt handling (Dankwardt, 2002). Basic strategies for increasing determinism include imposing external constraints on the computing environment, wise programming practices, and scheduler tuning. Advanced strategies include encapsulating real-time processes within kernel device drivers, and supplementing the stock kernel with custom modifications such as a real-time sub kernel.

The least invasive way to increase determinism is by controlling external constraints. This may mean imposing limitations on the number and type of processes run on the system, the size of files, the frequencies of refreshes, and so on. The key limitations of general purpose, time-sharing environments for use in real-time applications have already been given: imprecise and unreliable process scheduling, susceptibility to non-deterministic delays from blocking and preemption, temporary disabling of external interrupts, and the inherent unpredictability of the low-level routines used for memory management and other basic functions. Another level of unpredictability is introduced by the programming languages and techniques used to develop the real-time applications. For example, algorithms that do not run in constant time, which is expressed as O(1), can cause nondeterminism (Dankwardt, 2002). It is well known that the scheduler used in the Linux kernel version 2.4 does not run in constant time, either. Instead, its run time is proportional to the number of processes in the system; this behavior is expressed as O(n), for linear time. Dankwardt explains, "[w]ith an O(n) algorithm there is no upper bound on the time that the algorithm will take. If your response time depends upon your sleeping process to be awakened and selected to run, and the scheduler is O(n), then you will not be able to determine the worst-case time" (2002, p. 2 of 8). Indeed, the Linux 2.4 scheduler can consume more CPU time than the processes it is scheduling when the system is very busy or has more than four processors (Lindsley, 2003). A basic strategy for increasing determinism is therefore to limit the number of processes that users of the system may create. This is easily done in special purpose, embedded systems; it is another story on a general purpose desktop computer.

Other constraints on the operating environment include restricting the use of certain programs such as hdparm, and avoiding activities like scrolling the frame buffer and switching consoles (Dankwardt, 2002).

There are programming strategies for the control process that can help increase determinism in a GNU/Linux system. Writing compact code that can execute quickly is the first step in enhancing the real-time behavior of a program. As the example of the Linux scheduler illustrates, code that does not run in constant time, O(1), can yield surprisingly poor performance when pushed to its limit. Normally only portions of an overall control system demand deterministic behavior. Such portions of code are referred to as critical sections, and they must be protected from blocking, preemption, and other forms of interruption. Custom device drivers represent a rather invasive method of extracting more deterministic performance for a given process. At the ultimate extreme are custom kernels that include a real-time sub kernel. A common approach is to split the overall task between a kernel device driver for high precision, high frequency time-critical tasks, and ordinary user processes for everything else (Laurich, 2004).

In addition to these programming strategies, a number of system calls exist that can adjust kernel scheduling parameters. The Linux scheduler allots CPU time to user processes based on a dynamic, priority based method. If a new process enters the run queue with a higher priority than the process currently running, the scheduler will preempt the running process and insert the new one. It uses a single global run queue known as the task list, from which it selects the best candidate to run when a processor is idle. Each process is assigned a "goodness rating" based on the number of clock ticks it has left, its CPU affinity, its user-set priority (the so-called "nice" parameter), and whether it has been marked as a real-time task. Real-time processes are never blocked by lower-priority processes, have short response times, and have minimal response time variance (Silberschatz, Galvin and Gagne, 2000).

A number of GNU/Linux system calls have existed from the 2.4 version of the kernel onwards to adjust the scheduling behavior of a particular user process (Silberschatz, Galvin and Gagne, 2000; Love, 2003). These are listed in Table 2.

System Call            Effect
mlockall()             POSIX standard function to prevent dynamic memory paging for a process
sched_setaffinity()    Set a process's CPU affinity
sched_setparam()       Set a process's real-time priority
sched_setscheduler()   Set the scheduling policy, including the POSIX real-time policies SCHED_FIFO and SCHED_RR

Table 2. Linux system calls used to increase process determinism

Current Literature

Current literature for the sake of this work is material published from 2003 onwards. The general release of the Linux 2.6 kernel led to a dramatic revision of the overall strategy for prototype development in this work. Previously it had been assumed that some real-time Linux variant such as RTLinux or RTAI would be required to provide sufficient precision and determinacy to meet the timing requirements of the pinball machine control system. However, research findings indicate that enhancements made in the Linux 2.6 kernel obviate the need for a bona fide real-time operating system.

Linux 2.6 Kernel

Significant improvements in the state of the art found in the 2.6 Linux kernel include kernel preemption, a faster timer interrupt, and a better scheduler. Another improvement seldom mentioned in research papers but very important to programmers is the "work-queue bottom-half mechanism" (Weinberg, 2004, p. 40). The combination of improving the timer resolution by an order of magnitude, from 100 Hz to 1000 Hz, the new scheduler, and the introduction of kernel workqueues greatly simplifies the creation of high frequency cyclical processes. This in turn may inspire a shift in emphasis in real-time programming techniques away from methods that formerly depended on an external interrupt generator to provide the basic timing signals. The previous state of the art relied on generating external interrupts to maintain the real-time requirements of an application. As late as 2004, Laurich evaluated RTAI running on the 2.4 kernel, presumably because a version for the 2.6 kernel was not yet widely in use. One of the improvements found in the 2.6 kernel apparently missed by Laurich is this change in the atomic system timer. This is an implicit acknowledgment that the 2.4 kernel timer resolution is too coarse for servicing a typical hard real-time application, such as one having a 100 Hz process cycle in a hypothetical example, or for satisfying physical requirements in a soft real-time environment. Weinberg's article reflects this fact as well, stating "[p]rojects such as DOSEMU offer signal-based interrupt I/O with SIG (the Silly Interrupt Generator), but user-space interrupt processing is quite slow, millisecond latencies instead of tens of microseconds for a kernel-based ISR" (2004, p. 42). Weinberg continues, "user-context scheduling, even with the preemptible Linux kernel and real-time policies in place, cannot guarantee 100% timely execution of user-space I/O threads" (p. 42). Thus, what may be acceptable for playing encoded music is unacceptable for the sort of applications used to control devices that in previous states of the art contained a specialized, unique (rather than ubiquitous) embedded real-time operating system (RTOS).

Laurich's investigation of hard real-time Linux alternatives includes kernel modules (device drivers) and user-space processes to measure the performance of Linux 2.4, 2.6, 2.4 with RTAI, and 2.4 with RTAI and LXRT. This reflects the basic nature of state of the art electronic computing machinery used for process control: the application (task) is run in user-space with the help of a kernel module. He extols the virtues of a product (RTAI) with a history involving a "patent infringement claim" (2004, p. 3 of 22), though concedes that "[u]sing the worst-case latencies measured, Linux 2.6 is the next most suitable candidate" (2004, p. 20 of 22). His research sets the bar for "a hard real-time digital control system" at a 100 Hz cycle frequency with a maximum 0.5 millisecond latency. In each case an external interrupt generator triggers the response of the real-time process. To Laurich it appears to be unthinkable to use a stock 2.6 kernel timed internally, despite the improved timer precision, for hard real-time applications. Others, like Heursch et al., emphasize that "on the whole Linux 2.6 is better optimized to execute time-critical or soft-real time tasks than the Linux kernels before ever were" (2004, p. 1). This suggests that it may now be feasible to use a generic 2.6 kernel based GNU/Linux distribution for problems previously solved only with special real-time enhancements like RTAI and RTLinux.

CHAPTER III. METHODOLOGY

The methodology followed an overall research design based on an iterative, quantitatively validated prototype testbed apparatus, passing through the first three stages of Ingle's reverse engineering process. The project selection strategy sought to identify classes of microcomputer-based control units susceptible to the proposed black box reverse engineering method and capable of yielding a return on investment greater than 25:1. The Bally AS2518 pinball machine control unit had been selected in the prescreening phase due to the large number of extant pinball machines containing this failure-prone part, the relatively high cost of a COTS replacement, and the wide availability of data in the form of schematics, service manuals, and expired patents.

Following the proposed Socratic method, each electrical connection between the control unit and the rest of the machine was interpreted in terms of its function in a time- and event-driven, discrete, closed-loop process control system, thus dividing the connections into input, output, feedback input, feedback output, power, or ground. The Bally Electronic Pinball Games Theory of Operation provided insight for creating timing diagrams of the hardware circuits in each subsystem, as did US patents assigned to the Bally Manufacturing Corporation. Real-time requirements for the continuous and momentary solenoids, switch matrix, and feature lamp control actions derived from these representations guided iterative prototype development. The digital displays were deemed out of scope. The specific machine studied was the 1977 model Evel Knievel. Embedded programs on ROMs were treated as black boxes due to their unexpired copyrights, calling for a clean room approach to devising programs to control overall game operation. For design verification, programmatic analysis of logged run time data was selected in favor of a user survey instrument. The test plan called for placing the final version of the prototype in numerous public venues over the course of two months to collect data. Three variables were manipulated: the cyclic hardware control process period, the overall system load, and the presence or absence of POSIX real-time enhancements to the game play program. A comprehensive report was programmatically created for every game played. From these results, summary descriptive statistics were stored in a relational database to test hypotheses concerning the manipulated variables. The final stage in Ingle's process, implementation, was judged beyond the scope of the research. The apparatus consisted of the aforementioned pinball machine, the replacement computer system, and a data recorder. This measurement, logging, and analytical apparatus, leveraging the sub-microsecond resolution of the prototype system's own CPU Real Time Stamp Counter (RTSC), provided design verification. This minimized the need for expensive, external test equipment and bolstered the internal validity of the research.

Sourceforge.net was selected to host the source code and project web page to provide a public interface and springboard for future development. Statistical techniques included basic descriptive statistics and cumulative percentage graphs.

Restatement of the Problem

The failure-prone microcomputer-based control unit used in over 300,000 Bally electronic pinball machines manufactured from 1977 to 1985 was reverse engineered and replaced with Intel x86 compatible computer hardware running custom programs on a stock Fedora Core 2 GNU/Linux operating system.

Research Design

The overall structure of the research followed the four-stage process presented by Kathryn Ingle. The prescreening stage had been implicitly conducted in order to identify the objectives of the study. The bulk of the project work involved executing the next three stages.

Execution of the fourth stage, Design Implementation, will be discussed in Chapter V in terms of a hypothetical business venture to produce pinball machine reverse engineering kits.

It was hypothesized that a replacement pinball machine control system could be based on a personal computer. Although there was an ample supply of second-hand computers whose cost could be considered immaterial, additional hardware was required for interfacing to the pinball machine's digital inputs and outputs that are driven by the MPU board. At minimum, this included a device capable of performing the required digital I/O, plus wires and connectors to join it to the other boards in the pinball machine that were originally connected to the MPU board. The electrical specifications of this interface had to be determined: the number of input and output lines, the voltage and current handling requirements of each, and their timing requirements, that is, how rapidly each line is able to detect a change in state, in the case of the inputs, or change state, in the case of the outputs.

The reverse engineered microcomputer-based control unit running on a 2.6 Linux kernel on the x86 architecture was interpreted as a discrete process whose open loop set points included the workqueue process period for high speed control actions and the user process period for supervisory control actions. The machine's switch matrix was interpreted as the feedback loop in a closed loop control model; therefore, the only true inputs were the flipper buttons and the self test switch. Supervisory control handled operations whose real-time frequency requirements were multiple orders of magnitude lower than those of the basic control actions, permitting the operating system to execute as much activity as possible under far less demanding constraints; this included most game play control actions.
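The open loop set points named above are fixed process periods. One way a periodic control loop avoids drift is to compute absolute deadlines from the period rather than sleeping a relative amount each cycle; a minimal sketch follows (in Python for brevity, whereas the prototype's control programs were written in C):

```python
def deadlines(start_ns, period_ns, cycles):
    """Absolute wakeup times for a fixed-period discrete control process.

    Scheduling against start + k * period keeps the k-th cycle anchored
    to the original time base, so small overruns do not accumulate.
    """
    return [start_ns + k * period_ns for k in range(1, cycles + 1)]

# a hypothetical 10 ms supervisory period starting at t = 0
assert deadlines(0, 10_000_000, 3) == [10_000_000, 20_000_000, 30_000_000]
```

A real loop would sleep until each absolute deadline (for example with `clock_nanosleep` and `TIMER_ABSTIME` on Linux) before running its control actions.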

Three iterations of the test bed design were contemplated: first, a purely user-space program; next, a kernel workqueue process triggered by the kernel timer; finally, an interrupt-triggered workqueue process. If none of these solutions satisfied the specifications derived from the technical data development, then further iterations employing real-time Linux derivatives like RTAI and RTLinux could be tested.

Since the hypotheses to be tested ultimately pertain to the behavior of deterministic machinery, they lend themselves to quantitative measurement and analysis. In addition to facts about the project itself, such as cost information for the apparatus, the bulk of the data collected involved the operation of a pinball machine under the control of various iterations of the replacement system. Each method developed for technical data generation of the solenoids, switch matrix, lamps, and overall game operation explained how that aspect of the prototype would be verified, in addition to the actual process timing requirements.

It was envisioned that a state record of the values of all I/O points for each time unit during normal game operation would be recorded and stored in a database. From this history of measurements the derived data for the design verification stage could be generated by computer programs and statistically evaluated. The analysis substantiated judgments about the correctness and timeliness of the reverse engineered unit in order to support or disprove research hypotheses.

The statistical methods employed were standard tests of significance and cumulative percentage graphs.

Universality of the experiment was achieved by targeting a population of functionally equivalent subjects. Approximately 300,000 pinball machines based on the AS2518 MPU and the prospect of a mass produced Pinball Machine Reverse Engineering Kit set the stage for repeating the experiment. Universality and repeatability were further enhanced by using a stock version of the Fedora Core GNU/Linux platform. The fact that the early microcomputer-based control units were single-tasking digital computers devoid of an operating system makes them the preferred choice for basing a study of reverse engineering microcomputer-based devices in general. Put another way, the advantage of using a single-tasking computer is the "assumption of determinism": system behavior for the control subject can be derived a priori from a known state of the system rather than from statistical analysis of games played on sample sets drawn from the 300,000 members of the population. Thus experimental control was facilitated by the nature of the problem itself and the instruments used to collect data, all being electronic devices whose operation could be rigorously regulated. For this research, external validity meant generalizing beyond specific repetitions of the study using pinball machines to other microcomputer-based units. For all such devices that have a digital I/O interface, operation can be represented and recorded as a series of I/O bus state descriptions, or time slices, capturing the disposition of each I/O point during operation.

Methods

Information from technical documents and expired patents may be used to recreate knowledge of the design of the original device. However, in a clean room approach designers are barred access to copyrighted source materials, including disassembled read-only memory (ROM) contents, treating the object of study as a black box. The Socratic method divides the part to be reverse engineered into meaningful subparts, and then analyzes each one based on the function of its electrical connections to the rest of the system. The Bally electronic pinball machine was divided according to the manufacturer's block diagram into solenoid control, switch control, feature lamp control, and digital displays. Game operation as a whole was then considered to integrate these subsystems into the closed loop process model.

Evaluation and Verification

The purpose of this stage in Ingle's reverse engineering process is to completely characterize the targeted part, collect and assess all available information, and determine what data needs to be developed. A judgment is then made whether to proceed further in the process.

All published data relevant to the AS2518 MPU was collected and identified. This included a number of United States patents, the highly informative Bally Electronic Pinball Games Theory of Operation, as well as the operators' manual and schematic diagrams for the Evel Knievel pinball machine. These resources alone did not provide enough information to recreate the original part or a substitute for it. Visual and dimensional inspection was done to identify the interconnections between the MPU and the rest of the pinball machine.

Development of a comprehensive understanding of the process control system embodied in a pinball machine was required, both at the level of overall game operation and at the level of the individual electronic circuits controlling the various subassemblies.

The rationale for undertaking a reverse engineering project to replace the AS2518 MPU aligns loosely with economic considerations. A commercial, off-the-shelf replacement part was identified: the $200 Alltek Ultimate MPU (Alltek Systems, 2004). The overriding factor of obsolescence could come into play in the future if the Alltek part ceases to be available.

However, the primary motive behind the exercise was the educational objective served in the process. The intention is to produce a test bed suitable for learning about real time programming, basic electronic computer technology, discrete process control, and reverse engineering itself. Nonetheless, to follow Ingle's methodology the feasibility of the project must be verified once more for economic as well as logistic considerations, now that the full scope of the project has been evaluated. In order to simulate actual implementation of the reverse engineered design, a scenario was imagined in which a certain number of units were ordered by customers at a fixed price. This aggregate group represents the users of the reverse engineered part, collectively returning the investment in the project by purchasing this kit instead of an aftermarket replacement MPU. A PHP (PHP Hypertext Preprocessor) program was developed to perform these computations on the fly. Table 3 portrays the dismal failure of a traditional, corporate system development model to achieve the targeted cost savings and return on investment. In this scenario the replacement system is based on new computer equipment, a commercial operating system (Microsoft Windows), and licensed, proprietary control software developed by a third party.

COTS Replacement (Alltek Ultimate MPU)                            $200.00

Reverse Engineered Unit Cost:
    Computer Hardware (New PC)                                    $500.00
    Circuit Boards                                                 $25.00
    ICs and Other Components                                       $25.00
    Assembly (5 Hours @ $10/Hour)                                  $50.00
    Software (Microsoft Windows and custom proprietary
        control software)                                         $500.00
    Total                                                       $1,100.00

Cost Savings                                                     -450.00%
Life-Cycle Cost Savings (25 Models, 1000 Units)              -$900,000.00

Reverse Engineered Project Cost:
    Prototype Hardware                                          $1,100.00
    Programming Cost (100 Hours/Model @ $10/Hour)              $25,000.00

Return On Investment                                                -35:1

Table 3. Reverse engineering economics based on a traditional development model
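The figures in Table 3 follow from straightforward arithmetic. A sketch reproducing them (in Python here, whereas the study used a PHP program) is:

```python
def kit_economics(cots, unit_cost, units, prototype, models, hours_per_model, rate):
    """Recompute the Table 3 reverse engineering economics.

    Cost savings compare the reverse engineered unit against the COTS
    part; return on investment divides life-cycle savings by the total
    project cost (prototype hardware plus programming labor).
    """
    savings_pct = (cots - unit_cost) / cots * 100   # percent per unit
    life_cycle = (cots - unit_cost) * units         # across all units
    project = prototype + models * hours_per_model * rate
    return savings_pct, life_cycle, project, life_cycle / project

pct, life, project, roi = kit_economics(200, 1100, 1000, 1100, 25, 100, 10)
assert pct == -450.0          # Cost Savings line
assert life == -900_000       # Life-Cycle Cost Savings line
assert -35 < roi < -34        # roughly the -35:1 Return On Investment
```

The same function can be rerun with the Free Open Source scenario's inputs to contrast the two development models.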

A Socratic Method

In the Preface to Reverse Engineering Ingle writes, "I do not believe that reverse engineering is truly anything new although its origins are vague. In all the research I have done on the subject since 1985 I have found no definitive discussion of where it came from or how it is conducted" (1994, p. ix). This is no doubt because no philosophy of computing technology has been widely advertised. That does not mean that computer ethics appeared only recently in our intellectual history. From the texts of Plato and Xenophon, recorded over two thousand years ago, it was possible to trace a three part Socratic method, arriving at the typical closed-loop process model familiar to professional technologists. The first two passages considered were from Plato's Phaedrus, where Socrates says,

    ought we not to consider first whether that which we wish to learn and to teach is a simple or multiform thing, and if simple, then to enquire what power it has of acting or being acted upon in relation to other things, and if multiform, then to number the forms; and see first in the case of one of them, and then in the case of all of them, what is that power of acting or being acted upon which makes each and all of them to be what they are? (Plato, 270C-270D)

Being able to divide something functionally into parts acknowledges that it is a "multiform thing." This data development process is illustrated in Figure 8.

Figure 8. Socratic reverse engineering method

Its inventor claims that it can be used to reverse engineer anything. Nevertheless, an acknowledgment of economic constraints prescreening feasible candidates was evident in another passage of the same work, where Socrates says, "[t]o give the actual words would be too much of a business, but I don't mind telling you how one ought to write if one wants to be as scientific as possible" (Plato, 1973, 271C). That is, Socrates presents an overall methodology for data development, but would only use it for tasks that deserved the costs associated with the project. This method helped decide the iterative development of the test bed according to the cost of writing the computer software to conduct the prototype testing, judging that results achieved from computation over large sets of empirical data are preferred over the opinions of users. It goes hand in hand with a third fragment from antiquity, from words of Socrates penned by Xenophon: "And would not a person with good reason call me a wise man, ... for this, that while other men get their delicacies in the markets and pay a high price for them, I devise more pleasurable ones from the resources of my soul, with no expenditure of money?" (Xenophon, 1992, 15-19). Besides the iterative method and the acknowledgment of ROI considerations, the Socratic method also includes a preference for the Free Open Source Software development methodology over expensive commercial alternatives. In other words, it is often advantageous to use "home grown" processes to figure something out for oneself. This gives preference to a functional, black box approach, even when source code can be purchased. Patent concerns are eliminated due to the age of the device; copyright and presumably DMCA concerns are eliminated by the clean room (black box) reverse engineering methodology. This satisfaction of the assumption of legality is, oddly, a path of least resistance once it is properly understood.

Rather than attempting to disassemble the contents of copyrighted ROMs, their "closed source" contents are treated as part of a black box. Black boxes are analyzed functionally, which may prove less of a burden than recreating the original software source code on an obsolete platform and attempting to emulate or translate it.

This is how the proposed Socratic method was applied to the research problem. The operator's manual for the pinball machine provides a functional diagram of the major control system components. It is reproduced in Figure 9 with coloring added to the parts germane to the thesis work (Bally Manufacturing Corporation, 1977).

Figure 9. Bally pinball control system block diagram

It divides the pinball machine up functionally, identifying the MPU board within it, which in turn is examined by the iterative analysis. Following Uffenbeck's functional computer bus means of dividing into parts, the data development questions reduce to information about the MPU's external electrical connections. These electrical connections on the MPU are shown in Figure 10.

Figure 10. AS2518 MPU electrical connections

The functional interpretation repeatedly applied to the part being reverse engineered assigns each electrical connection to either power, output, input, feedback (switch matrix input), or ground. In the pinball machine outputs far outnumber feedback inputs, which far outnumber inputs. Other microcomputer-based control units may have more inputs and no switch matrix at all; others may have more inputs than outputs. The implied extensibility question will not be explored in this study of microcomputer-based control units.

The analysis of electrical connections actively involved in the control system model (that is, everything except power and ground lines) is done in sets connecting to each major final control element. On the pinball machine these were the continuous solenoids, momentary solenoids, switch matrix, and feature lamps. Those connecting to the digital displays were ignored. The next step was to define a set of logical and temporal requirements, including the physical constraints of the circuitry in the remaining final control elements, by asking the Socratic question, "what is that power of acting or being acted upon which makes each and all of them to be what they are?" These are tedious discoveries commonly dismissed by other real-time thinkers as uninformative of theory. Yet they are of monumental practical consequence and must be executed correctly. Representing 'shortlines' rather than deadlines, they also deliver the application from the apparent threat of insignificance posed by increasingly speedy processors and reduced I/O latencies.

Another way to divide the full set of MPU electrical connections is into computing objects and non-computing objects. The latter include power and ground connections and are easily solved; for the former, follow the rule of tracing back from the point of interconnection until the connection is solved or restricted, for example, by copyright in a programmed device like a ROM. Knowing when a connection is sufficiently analyzed requires a comprehensive and technically correct understanding of the problem. There are two things to be accomplished in this analysis: development of specifications for replacement hardware, and development of an understanding of the control process to be duplicated programmatically. A practical example encountered was the problem with the continuous solenoids and momentary solenoid enable during PC power up.

A few refinements were made to this Socratic reverse engineering method to tune it to microcomputer-based control units. Since all measurements on the microcomputer bus are of binary state changes, the instrumentation portion of the control system is essentially digital. Likewise, the control actions applied to the peripheral subsystems are "two-position" control operations, and therefore also essentially digital. The pinball machine switch inputs can be viewed as a feedback mechanism in a closed-loop control system. Rather than comparing their state to set points, they are evaluated by the computer in terms of the game program. The effect is the same: some control action will result. Despite the functional separation exhibited in the instrumentation system block diagram, the same bus is actually used by both the measurement and control elements. Hence the physical characteristics of the microprocessor bus define the absolute frequency response and amplitude limits of these two elements in the overall system. It seems unlikely, however, that the bus is utilized at its full potential; other, slower components in the MPU board circuitry reduce the frequency response limits even further.

Technical Data Generation

If the object of the study were strictly computer software, the technical data would consist of system specifications, flowcharts, pseudo code, and so on; if it were a machined part, it would consist of assembly drawings, manufacturing plans, and so on. Technical data for reverse engineering a microcomputer control unit encompasses both software and hardware specifications, and could entail an enormous amount of detail if it were to be fully drawn out according to Ingle's process. A solution fitted to the project had to be chosen. The strategy was to use the Socratic method beginning at the discrete points of intersection between the microcomputer unit and the rest of the system to which it belongs. These were points of physical, electrical contact, made up of wiring harnesses and connectors, as well as chassis ground connections. The closed loop process model and the hierarchical control scheme represented by arrows emanating outward from the MPU module provided the overall structure for the division into subparts. The AS2518-35 MPU board printed schematics supplied with the game operation manual, reproduced in Figure 11, were invaluable in identifying the connector pin terminations and their functions, nearly all of which trace back to the two 6820 PIAs (Bally Manufacturing Corporation, 1977).

Figure 11. AS2518 electrical schematic

The division of the I/O interface for the Bally MPU has been summarized in Table 4.

Electrical Connection Name    Connector/Pin     Number of Lines   I/O Direction
Play Field Switch Return      J2-8 to J2-15     8                 Input (Feedback)
Play Field Switch Strobes     J2-1 to J2-5      5                 Output (Feedback)
Self Test Switch              J3-1              1                 Input
Cabinet Switch Strobes        J3-2 to J3-3      2                 Output (Feedback)
Cabinet Switch Returns        J3-10 to J3-16    5                 Input (Feedback)
Lamp Address                  J1-15 to J1-12    4                 Output
Lamp Data                     J1-16 to J1-19    4                 Output
Lamp Strobes                  J1-11; J1-8       2                 Output
Display Segment BCD Data      J1-25 to J1-28    4                 Output
Display Latch Strobe          J1-20 to J1-24    5                 Output
Display Blanking              J1-10             1                 Output
Display Digit Enable          J1-1 to J1-7      7                 Output
Momentary Solenoid Data       J4-4 to J4-1      4                 Output
Continuous Solenoid Data      J4-5 to J4-8      4                 Output
Solenoid Bank Select          J4-10             1                 Output

Table 4. Division of MPU electrical connections into major subsystems

This simple enumeration produced a total of 14 input lines and 43 output lines. Each 6820 PIA has two 8-bit ports and two 4-bit ports, making a total of 48 bits available for use on the MPU board. From the schematic it was evident that some I/O lines are connected to more than one line going to other parts of the pinball machine, such as the play field switch returns and the cabinet switch returns. The Socratic method recommends each part be studied in order, from the simplest to the most complex.

Controlling Continuous Solenoids

The continuous solenoids represent the simplest operation performed by the AS2518-35 MPU. No mention is made of them in the Theory of Operation, so study of the circuit schematics and direct inspection of the circuit boards was required. Four 6820 PIA outputs are connected to transistor driver circuits. Evel Knievel uses the first two to control the Coin Lockout Relay and the Flipper Enable Relay; the other two continuous solenoid outputs are not used. Figure 12 shows only the Flipper Enable Relay implemented.

Figure 12. Continuous solenoids electrical connections

Comparison of electrical characteristics for the 6820 PIA and the 8255 PPI validates the substitution; they are both TTL compatible and have similar current driving and sinking characteristics. The fact that they are called "continuous" whereas the other solenoids are called "momentary" reveals something of their nature. Observation of their behavior during normal game operation revealed coarse process deadlines. The flippers are enabled shortly following the start of a new game, and disabled shortly after the game is over or tilted.

Controlling Momentary Solenoids

Momentary solenoids make up the bulk of pinball machine action. A subset of the MPU schematic depicting the portion associated with the continuous and momentary solenoids is shown in Figure 13.

Figure 13. Solenoid control detail of MPU schematic

Apart from the flippers, which are controlled directly by the player, all the other moving parts (pop bumpers, kickers, saucers, the out hole kicker, and the musical chimes) are actuated by momentary solenoids. A Bally pinball machine has up to 16 momentary solenoids controlled by the MPU via five output lines. These lines control a 74LS154 four-to-sixteen decoder on the Solenoid Driver and Voltage Regulator board. The electrical connections between the two boards are shown in Figure 14.

Figure 14. Momentary solenoids electrical connections

It was obvious from inspection of the schematics that the MPU produces a four-bit data word for the decoder inputs, and the selected solenoid is energized by the MPU pulling the normally high decoder select line low.

A 74LS154 data sheet provides information concerning the minimum time required for the inputs to be properly set up before turning on the select line (Fairchild Semiconductor, 2000).

Problems from time-critical processes failing to meet their deadlines have already been discussed thoroughly. Nondeterministic computers can also cause problems for time-critical operations by completing processes too quickly: a time-critical operation completed too quickly can also fail. Take the example of setting up the outputs for the momentary solenoid address. A logical requirement is that this must be done before enabling the output, but the enable cannot come too soon after output of the four bit address. The 74LS154 IC has its own timing requirements, and many of these must be treated as lower bounds for the time-critical process.

Another deduction based on the circuit was that only one solenoid can be energized at a time.
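The two timing constraints, address stable before enable and enable not too soon after the address, suggest representing each firing as an ordered sequence of operations. A minimal sketch follows (Python; the operation names and the 50 ns default setup time are illustrative assumptions, not values taken from the 74LS154 data sheet):

```python
def fire_sequence(solenoid, setup_ns=50):
    """Ordered steps for energizing one momentary solenoid via the decoder.

    A hypothetical driver would replay these: put the 4-bit address on
    the 74LS154 inputs with enable held high, wait at least the data
    sheet setup time, then pull the active-low enable down to energize
    exactly one coil.
    """
    if not 0 <= solenoid <= 15:
        raise ValueError("a 4-to-16 decoder selects only outputs 0-15")
    return [
        ("address_out", solenoid),   # enable still high, nothing fires
        ("wait_ns", setup_ns),       # a lower bound, not a deadline
        ("enable_low",),             # selected solenoid energized
    ]

seq = fire_sequence(7)
assert seq[0] == ("address_out", 7) and seq[-1] == ("enable_low",)
```

Encoding the wait as an explicit step makes the lower-bound requirement visible to both the control code and the verification logs.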

It was obvious from playing a game of pinball that many of the momentary solenoids are actuated by the MPU when the ball makes contact with a particular pop bumper or kicker; indeed, there are leaf switches built into these assemblies to sense the ball. Other momentary solenoids, such as saucer kickers, the out hole kicker, and drop target resets, are actuated after some delay. Fortunately, these behaviors are documented in the patents and the Bally Electronic Pinball Game Theory of Operation. The latter text gives some further insight into the operation of the momentary solenoids that was not apparent from direct observation. It was important to note the function of the zero-crossing interrupt generator built into the MPU board: "DC powered solenoids exhibit a far smaller back EMF if turned off near a zero crossing. This helps extend the life of the solenoid driver transistors and other circuit components by keeping large voltage spikes out of the system" (Bally Corporation, 1977, p. 7). A zero crossing occurs 120 times per second as the AC line input voltage alternates at 60 Hz. This fact played an important part in the software design because the replacement system was expected to respect this design consideration. The replacement unit must therefore also sense zero crossings in the AC power supply, suggesting the need for an external interrupt to keep the control process synchronized with the ripple in the DC power supply waveform. The same section also noted that most momentary solenoids are energized for three zero crossings (26 milliseconds), although the saucer kickers are kept on longer to make sure the ball clears the saucer. It was necessary to trace the individual solenoid lines back to the corresponding outputs on the decoder because the numbering scheme used in the operation manual is different; it was preferred to number the solenoids in accordance with their actual disposition in the circuit. This information is shown in Table 5, which is ordered by the decoder outputs rather than the original Bally solenoid numbers listed in the Operation Manual (Bally Pinball Division, 1977).

Solenoid Name            Number in          74LS154 Output   Prototype
                         Operation Manual   (Hex)            Solenoid Number
Chime 10                 4                  0x0              1
Chime 100                5                  0x1              2
Chime 1000               6                  0x2              3
Extra Chime              7                  0x3              4
Knocker                  2                  0x4              5
Outhole                  1                  0x5              6
Saucer                   3                  0x6              7
Top Thumper-Bumper       8                  0x7              8
Middle Thumper-Bumper    9                  0x8              9
Bottom Thumper-Bumper    10                 0x9              10
Left Sling Shot          11                 0xA              11
Drop Target Reset        13                 0xB              12
Right Sling Shot         12                 0xC              13

Table 5. Mapping of solenoids to decoder outputs

Observations of the select line on a working game should be performed to measure the pulse duration of each particular solenoid. Additionally, the conditions under which a particular solenoid is fired during game play must be determined. According to the Theory of Operation, the thumper-bumpers and sling shots are activated upon the first detection of their corresponding switch inputs. Other solenoid responses follow from the game program after the switch closure has been validated by the valid switch detection algorithm, which will be discussed in the next section.

An alternative to using external test equipment to measure the enable pulses produced by the prototype was to use an auxiliary input line physically connected to the 74LS154 enable line in parallel with the PPI output line controlling it. Readings were made at the Nyquist sampling rate, which was twice the reciprocal of the duration of the shortest enable pulse produced by the original system. This improved upon a program verification approach based on analyzing the unbroken sequence of every I/O operation performed during the run time of the experiment, accompanied by timestamps from the CPU time stamp counter. Either method also serves to verify overall game operation by confirming that solenoid actions follow switch detections, and after what delay.
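The sampling rate calculation is simple enough to state directly. A small sketch follows (Python; the 26 ms figure is the typical momentary pulse cited earlier and is used only as an illustration, since the actual shortest pulse had to be measured):

```python
def nyquist_rate_hz(shortest_pulse_s):
    """Minimum sampling rate for catching the shortest enable pulse:
    twice the reciprocal of its duration, so at least one sample is
    guaranteed to land inside every pulse."""
    return 2.0 / shortest_pulse_s

# sampling a 26 ms pulse requires about 77 samples per second
assert round(nyquist_rate_hz(0.026)) == 77
```

Shorter pulses raise the required rate proportionally, which is why the shortest pulse in the original system sets the sampling budget.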

Detecting Switches

There are many switches that communicate the instantaneous disposition of the pinball machine to the MPU, more than there are I/O ports on the two PIAs. Switch sensing is accomplished via a matrix of switches and steering diodes, much like a computer keypad. Figure 15 is a copy of the Bally switch matrix schematics (Bally Manufacturing Corporation, 1977). Note that the rows and columns are rotated 90 degrees.

Figure 15. Pinball machine switch matrix

Evel Knievel, and most other models based on the AS2518 MPU, multiplexes five rows of eight column inputs, yielding forty discrete inputs from thirteen lines connected to the MPU. The type of each switch was identified as either normally open or normally closed, and continuous or momentary. Momentary switches change state briefly, whereas continuous switches, when they change state, remain in their new state until an action is issued by the control system. A number of momentary switches trigger an immediate solenoid reaction, and these were identified. Cross-reference data in Table 6 was developed for this Socratic division of the overall system; the numbering scheme between the operations manual and the prototype was kept consistent.

Switch Name                                   Prototype       Matrix  Matrix  Switch Type          Solenoid
                                              Switch Number   Row     Column  (All Normally Open)  Response
Drop Target "A" (Top)                         1               0       0       Continuous
Drop Target "B"                               2               0       1       Continuous
Drop Target "C"                               3               0       2       Continuous
Drop Target "D"                               4               0       3       Continuous
Drop Target "E" (Bottom)                      5               0       4       Continuous
Credit Button                                 6               0       5       Momentary
Tilt                                          7               0       6       Momentary
Outhole                                       8               0       7       Continuous
Coin Shoot 3 (Right)                          9               1       0       Momentary
Coin Shoot 1 (Left)                           10              1       1       Momentary
Coin Shoot 2 (Center)                         11              1       2       Momentary
Slam                                          16              1       7       Momentary
"E" Target in CYCLE                           17              2       0       Momentary            4 (extra chime)
"L" Target in CYCLE                           18              2       1       Momentary            4 (extra chime)
Middle "C" Target in CYCLE                    19              2       2       Momentary            4 (extra chime)
"Y" Top Lane Roll-over in CYCLE               20              2       3       Momentary            4 (extra chime)
Top "C" Top Lane Roll-over in CYCLE           21              2       4       Momentary            4 (extra chime)
Center Target                                 22              2       5       Momentary
Right Outlane                                 23              2       6       Momentary            3 (1000 pts chime)
Left Outlane                                  24              2       7       Momentary            3 (1000 pts chime)
Left and Right "EK" Bumpers                   25              3       0       Momentary            1 (10 pts chime)
Top Red Targets; General Purpose 100 Points   26              3       1       Momentary            2 (100 pts chime)
Top Saucer                                    29              3       4       Continuous
Right Spinner                                 30              3       5       Momentary
Left Spinner                                  31              3       6       Momentary
Left and Right Flipper Feed Roll-overs        32              3       7       Momentary            4 (extra chime)
Right Slingshot                               36              4       3       Momentary            13 (right slingshot)
Left Slingshot                                37              4       4       Momentary            11 (left slingshot)
Bottom Thumper Bumper                         38              4       5       Momentary            10 (bottom thumper)
Middle Thumper Bumper                         39              4       6       Momentary            9 (middle thumper)
Top Thumper Bumper                            40              4       7       Momentary            8 (top thumper)

Table 6. Mapping of switches to switch matrix positions

One switch, the Self Test button mounted on the coin door, is wired directly to its own port. No doubt it was designed that way for fail-safe entry into the self test program, even when there is a problem with the switch matrix. The parts of the MPU schematic relevant to this subsystem are shown in Figure 16.

Figure 16. Switch matrix control detail of MPU schematic

The Socratic analysis of the electrical connections between the MPU and the switches is depicted in Figure 17.

Figure 17. Switch matrix electrical connections

The Bally Electronic Pinball Game Theory of Operation describes how the MPU senses valid switch closures by examining a history of three samples for each switch. A valid closure has a history of open, closed, closed. This is used to 'debounce' the switch and to detect switches that are stuck open or closed. Continuous switches may require more debounce than momentary switches since they are expected to remain in their new state once actuated.
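The three-sample rule described above can be expressed compactly. A minimal sketch (Python; 0 for open, 1 for closed):

```python
def valid_closure(history):
    """Bally valid switch detection: accept a closure only when the last
    three samples read open, closed, closed (0, 1, 1).

    A one-sample glitch from contact bounce never matches the pattern,
    and a switch stuck closed reads (1, 1, 1), which is also rejected.
    """
    return tuple(history[-3:]) == (0, 1, 1)

assert valid_closure([0, 1, 1])        # clean closure detected
assert not valid_closure([0, 1, 0])    # bounce rejected
assert not valid_closure([1, 1, 1])    # stuck-closed rejected
```

In the prototype this check would run once per switch per sampling cycle, with the per-switch histories maintained by the low level matrix scanning routine.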

Each row and column combination has a timing requirement roughly computed from the RC (resistor-capacitor) time constant of the circuit. This is the time it takes the high output on the row to raise the voltage of the closed switches in the particular column to a detectable high value. From the published specifications of these electronic devices, the critical RC charging times can be calculated for the circuits connecting the MPU to other parts of the system. Basic electrical circuit analysis of a single switch circuit from Figures 15 and 16 suggested at least two microseconds are required to charge the circuit to 63% of the applied voltage, the interval known as the RC time constant (Grob, 1992). Real time requirements expressing the minimum time between driving one of the five rows high and reading in the eight column inputs derive from these values. This had to be a critical timing requirement of the original MPU as well as the replacement system: the switch matrix will malfunction if the input (read) operation follows the output (strobe) too soon.
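The charging estimate follows from the standard capacitor charging equation. A sketch (Python; the resistor and capacitor values are hypothetical, chosen only so that one time constant equals the two-microsecond figure in the text):

```python
import math

def rc_charge_fraction(t, r, c):
    """Fraction of the applied voltage reached after charging for t
    seconds through resistance r (ohms) into capacitance c (farads):
    1 - e^(-t / RC)."""
    return 1.0 - math.exp(-t / (r * c))

# with assumed R = 10 kilohms and C = 200 pF, RC = 2 microseconds, so
# after one time constant the column reaches about 63% of the strobe voltage
assert abs(rc_charge_fraction(2e-6, 10_000, 200e-12) - 0.632) < 0.001
```

The minimum strobe-to-read delay can then be chosen as some multiple of the computed time constant, with margin for component tolerances.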

Multiple strategies had to be developed to handle switch bounce. The low level valid switch detection routines, potentially dynamically configurable (adaptive) from the user program, were to be complemented with "watchdog" functions that handle critical missed valid switch detections. Methods had to be developed to verify the operation of the switch matrix. Besides the fundamental multiplexing operation to correctly read the disposition of the switch matrix, real-time requirements for switch detection latency or dead time should be measured. This is the time interval between successive strobings of a particular row of switches.

It must be brief enough that the shortest duration switch closure on the pinball machine is detected. A related requirement is the latency of valid switch detection, due to the fact that more than one sample is required to detect a valid switch closure. The concern here is that this interval is not so long that the control program that responds to valid switch detections is late in delivering the expected response, when compared to the action of the original MPU. Perhaps the most demanding situation for the switch matrix control routine is when a switch is repeating rapidly, such as when the pinball is bouncing back and forth between targets or a spinning target is hit.

Controlling Feature Lamps

Like the momentary solenoids and the switches, there are more feature lamps than I/O ports, so a multiplexing scheme was employed to control them in the original design. All Bally pinball machines based on the AS2518 MPU utilized a Lamp Driver board; some also used an Auxiliary Lamp Driver board. The former can control up to 60 individual feature lamps using 8 bits for address and data, and one strobe bit. Figure 18 depicts the portion of the MPU schematic relevant to the feature lamps.

Figure 18. Feature lamps control detail of MPU schematic

How their control is accomplished was again thoroughly described in the Bally Electronic Pinball Game Theory of Operation and patents. The zero crossing detection circuit on the MPU creates an interrupt for the 6800 every 8.3 milliseconds (120 Hertz) when the bridge rectified AC power supply voltage from the transformer windings dips to zero volts. On the lamp driver board four 4-to-16 decoders (14514B) drive the gates of sixty silicon controlled rectifiers (SCRs), which turn the feature lamps on and off. These decoders share a four bit "address" bus, similar to the four bits used to select a momentary solenoid on the 74LS154. However, the enable lines of the four chips are independently controlled by another four bit "data" bus. A value is latched onto the address bus of all four chips via a common strobe line, and then up to four lamps are energized depending on which of the "data" bus lines are next enabled. Once the SCR gate and anode voltages rise to a sufficient level, the SCR conducts until the anode voltage falls to zero volts at the end of the AC half cycle. Thus the computer enables lamps at address 0x0, then 0x1, and so on through 0xE, on any of the four 14514Bs. There are no SCRs connected to the highest output line. Table 7 lists the feature lamps used in Evel Knievel and their relationship to the lamp driver board final control elements; because the prototype took the place of the head unit, the lamps in the head unit were not included.

Lamp Name                        14514B Decoder (Lamp Data)  Lamp Address  Prototype Number

Bonus 1000 Points                0                           0x0           1
Bonus 5000 Points                0                           0x1           2
Bonus 9000 Points                0                           0x2           3
Saucer "S" and "S" Arrow         0                           0x3           4
Saucer "R" and "R" Arrow         0                           0x4           5
"S" in SUPER                     0                           0x6           7
"R" in SUPER                     0                           0x7           8
First "C" in CYCLE               0                           0x8           9
Middle "C" in CYCLE              0                           0x9           10
Bonus 2000                       1                           0x0           17
Bonus 6000                       1                           0x1           18
Bonus 10000                      1                           0x2           19
Saucer "U" and "U" Arrow         1                           0x3           20
Drop Target Scores Special       1                           0x5           22
"U" in SUPER                     1                           0x6           23
"Y" in CYCLE                     1                           0x8           25
"L" in CYCLE                     1                           0x9           26
Bonus 3000                       2                           0x0           33
Bonus 7000                       2                           0x1           34
Saucer "P" and "P" Arrow         2                           0x3           36
Bottom Thumper Bumper            2                           0x4           37
Drop Target Scores Extra Ball    2                           0x5           38
"P" in SUPER                     2                           0x6           39
Right Spinner Scores 1000        2                           0x7           40
Right Outlane Scores Special     2                           0x8           41
"E" in CYCLE                     2                           0x9           42
Shoot Again                      2                           0xA           43
Bonus 4000                       3                           0x0           49
Bonus 8000                       3                           0x1           50
Double Bonus                     3                           0x2           51
Saucer "E" and "E" Arrow         3                           0x3           52
Top and Middle Thumper Bumpers   3                           0x4           53
Drop Target Scores Double Bonus  3                           0x5           54
"E" in SUPER                     3                           0x6           55
Left Outlane Scores Special      3                           0x8           57
Credit Indicator                 3                           0xA           59

Table 7. Mapping of feature lamps to decoders

The electrical connections between the MPU and the Lamp Driver board are shown in Figure 19.

Figure 19. Feature lamps electrical connections

The design called for an external interrupt to trigger the control process at the beginning of each pulse of the bridge rectified AC power supply; however, a "shotgun" method was developed instead in which the control process was simply executed at a higher frequency than 120 Hertz, without attempting to synchronize the triggering action.

Controlling Digital Displays

Control of the digital displays has been deemed beyond the scope of the prototype development for this project. Nonetheless, it is instructive to enumerate the electrical connections between the MPU board and the displays in order to round out the analysis of all the electrical connections. If the results of this study are favorable, future work will no doubt include developing a control strategy for this subsystem. The relevant portion of the MPU schematic is shown in Figure 20.

Figure 20. Digital displays control detail of MPU schematic

The electrical connections to the rest of the pinball machine are depicted in Figure 21.

Figure 21. Digital displays electrical connections

Controlling Game Operation

Recall the closed-loop process diagram portrayed in Figure 6. Figure 22 represents the overall process in terms of the electrical connections between the MPU and the rest of the pinball machine, analyzed functionally according to the proposed Socratic method.

Figure 22. Reverse engineered process model

Now that the constituent subsystems had been analyzed, it was time to put them all together and operate the pinball machine as a whole. The Theory of Operation was an invaluable source of information regarding the nature of the normal mode of operation. It was decided that examination of the program code on the game ROMs was prohibited due to copyright restrictions. Consequently, the details of game play for the individual model selected had to be discerned from reading instruction cards and advertisements, observing a working game, inference, and interviewing experienced pinball players who had played Evel Knievel.

The original system was interrupt driven. It used two hardware generated interrupts, one produced by a 555 timer circuit and the other by the zero crossing of the line voltage via the bridge rectifier. For the original 6800 microcomputer, 40% of available CPU time is used for normal program operation, and 60% is divided between the interrupt sub-systems (Bally Corporation, 1977, p. 6). Using the cyclic executive concept from real-time computing, it would be possible to develop a scheduler to interleave the execution of the two interrupt routines and the game control process. The key would be to ensure that the execution times of each part are deterministic, or at least bounded by maximum computation times that do not compromise the beginning of the next process. The original requirements are summarized in Table 8.

Interrupt Routine   Period   Frequency   Maximum Computation Time
display             2.7 ms   360 Hz      0.5 ms
zero crossing       8.3 ms   120 Hz      3.7 ms

Table 8. Timing requirements of the original MPU

Note that the maximum computation time for the zero crossing interrupt service must be decreased by three times the actual execution time of the display interrupt routine, since the display routine runs three times within each zero crossing period. To do the same thing on a non-deterministic, multi-tasking operating system, existing process scheduling mechanisms must be used to achieve the same effect. There may be a basic control process that accesses the hardware I/O, complemented with multiple levels of supervisory control. Typically the latter operate at significantly lower frequencies than the former, and are implemented as ordinary user processes, whereas the former are implemented as part of the operating system kernel (device driver, kernel module, etc.). For example, the process controlling hardware via memory-mapped digital I/O ports, executing hundreds of times per second, may itself be controlled by a supervisory process executing ten times per second, and this program in turn may be controlled by a supervisory program that executes once every hundred seconds or longer. This, in fact, is the sort of model used in Laurich's real-time research. Besides making adjustments to the output of the hardware control process and responding to inputs detected by it, the supervisory level may perform other tasks such as logging performance data to the hard drive, managing a user interface, detecting faults, and so on. One advantage of state of the art computers over the first generation systems constituting the original controller is this ability to leverage the additional resources made available by the operating system (although Ingle would refer to this as value engineering).

Design Verification

Reverse engineering has been depicted as a process of knowledge discovery that may be accomplished through disassembly of a working device, but also draws heavily on available documentation wherever possible. When the goals for technical data generation set out in the first stage of Ingle's process have been completed, the project proceeds to the third stage of design verification. Usually a prototype of the replacement part is built and tested in a working system. Building the prototype replacement system is usually considered the exciting part of a reverse engineering project. The hobbyist approach is typically to jump right into building the prototype. Without the careful attention to detail taken in the previous stages of prescreening, evaluation and verification, and the critical step of technical data generation, however, this can easily prove the most frustrating and time consuming part, especially for something as complex as a microcomputer-based control unit.

Prototype Determination

A machine based on generic Intel x86 PC architecture and a GNU/Linux operating system was contemplated from the prescreening phase onwards as the desired platform for the reverse engineered solution. Its key additional components were the digital I/O board to supplement the insufficient I/O capabilities of a generic, off the shelf system, the software framework from which to develop control programs to actually operate the pinball machine, and an interface circuit to take the place of the passive components connecting the digital I/O lines to the solenoid driver board, switch matrix, and lamp driver board. Intel produced an IC similar to the Motorola 6820 for interfacing I/O to its x86 family microprocessors. The widely used 8255 Programmable Peripheral Interface (PPI) offers three 8 bit ports, each of which can be set for input or output. Indeed, most basic digital I/O boards marketed for Intel x86 systems use a number of these chips. On both ISA bus cards and PCI bus cards the 8255 PPIs are accessed via memory mapped I/O. A brief survey of manufacturers and resellers revealed that a 48 port digital I/O board would cost more than the computer using it. Therefore, a board was built from scratch using a prototyping board and two 8255 PPIs. An ISA interface was selected due to the lower cost of the blank board and easier circuit construction. Instructions for building such a card were readily found on the Internet. A design based on a tutorial supplied by Boondog Automation was used (Boondog Automation, 1998).

For an exploratory prototype, the software control aspect was limited to design verification. Implementing the actual game play for any other AS2518 based games has been left to future repetitions of this study. Iterations of testbed development aimed at implementing control functions from the simplest to the most difficult, while introducing first a diagnostic interface and finally a scoreboard suitable for public exhibition of the project. If it had proven impossible to consistently create the enable pulses for the momentary solenoids and feature lamps through software according to their timing requirements, then a hardware solution could have been chosen instead of abandoning the selected operating environment in favor of a more deterministic one. For access to hardware interrupts a kernel device driver would be necessary. Control programs written in C do not benefit from the real-time programming features of languages like Ada, but are easy to develop, and C is a requirement for a Linux kernel module. Another option was to create a Java environment that programmatically implements and operationally supports real-time extensions. The preference shifted to a Linux kernel module and supervisory control program written in C, plus a number of PHP scripts to facilitate a browser based interface to the test bed results. Additionally, a MySQL database was created to store statistical data from each game played. This was due in part to the abundance of reference material available, such as the Linux Kernel Module Programming Guide and Welling and Thompson's PHP and MySQL Web Development.

The software framework had three major components. First, the low level interface to the I/O board was to be encapsulated in a software data structure representing the enumerated, information bearing electrical connections determined in the previous methods. Hence the purpose of the software framework was not full emulation of the original pinball machine MPU. Rather, its design was to derive from the functional analysis of the subsystems discovered following the Socratic method, and represent a fresh perspective on solving those control problems based on the operational qualities of the Intel x86 architecture, the GNU/Linux operating system, and the C programming language. Second, the C language supervisory program ran alongside the kernel module at a much lower frequency to manage the high level aspects of system operation, including the creation of a log file from the data recorder. Third, a fork of the supervisory program asynchronously parsed the data recorder log file and invoked a PHP program that performed automated, quantitative validation of the system.

In order to encourage others to repeat the study and develop full implementations of game control programs, the source code for the software framework was made available under the GNU General Public License (GPL) and hosted on Sourceforge.net as a project called the Pinball Machine Reverse Engineering Kit (PMREK).

Prototype Testing

Testing accompanied every step of prototype construction. By using an iterative approach to hardware and software development, the correct operation of each subsystem identified in the Socratic method was verified both individually and in terms of its overall integration into the growing system. Nonetheless, testing a reverse engineered control unit entails a degree of complexity not found in either wholly software or wholly hardware solutions.

Namely, when the system did not perform as expected, the root cause could have been a hardware design or assembly flaw, a software design or coding mistake, or a combination.

Evaluation of runtime performance consisted of verifying that requirements for safety, logical correctness, and timeliness were met. Particular attention had to be given to timeliness requirements that fell below the sensitivity of the experimenter and other pinball players. Any of the internal timing requirements for the momentary solenoid, switch, and feature lamp routines had to be measured using an instrument that can detect and record voltage changes at the Nyquist rate of the information being sampled.

One pivotal question was the determinism of the Linux kernel timer, since it took the place of an external hardware circuit generating interrupts to trigger the primary control process. Whereas Laurich defined the acceptable timer jitter at 0.5 milliseconds in his research, concerns with preemption latency should have empirical foundations based upon the particular circuitry and the performance expectations (Laurich, 2004). This was information sought in the actual reverse engineering work. For the Bally pinball machine control unit, a few hard real-time requirements were derived from documentation and circuit analysis. These involved the operation of the switch matrix and overload protection of the momentary solenoids. The rest were soft real-time requirements because game play was merely degraded when they were not met, not ruined altogether. They have been categorized in Table 9.

Subsystem            Real-Time Requirements

Momentary Solenoids  Delay between data setup and enable > RC time constant of circuit
                     and 74LS154 setup time;
                     maximum pulse duration < 2 x maximum nominal pulse duration

Switch Matrix        Row strobe to input > RC time constant of switch circuit;
                     sampling dead time < shortest switch action;
                     switch detection < fastest repeating switch

Feature Lamps        Delay between address setup and enable > RC time constant of circuit
                     and 14514B setup time;
                     enable duration > SCR gate triggering;
                     frequency of control action > 2 x SCR anode power pulse frequency

Game Play            Response to valid switch detection < noticeable delay by player;
                     consume module log before it fills (no I/O discontinuities)

Table 9. Real-time requirements for prototype testing

It was decided to forgo the use of electronics test equipment like a digital data analyzer and to build the data recorder into the prototype computer itself. The ability to time stamp points in code execution with the Real Time Stamp Counter (RTSC) meant that sub-microsecond precision was possible for making timing measurements.

Message passing between the kernel module and the user program already existed in order to send commands to the module and retrieve valid switch detection information. Adding additional log types for data recording was not difficult; however, the concern arose as to whether the amount of data being transferred due to the additional logging had undesirable side effects on overall performance. These findings will be detailed in the results.

Design Implementation

Design implementation was beyond the scope of this project. However, it was necessary to consider the project from "cradle to grave" when designing even a proof of concept prototype if it is hoped to meet economic expectations. A number of different versions of the replacement system were contemplated. The product was first evaluated as a kit including the ISA I/O board and a separate circuit board with header pins for the original wire harnesses leading to the solenoid board, lamp board, display boards, and the switch matrix. The additional circuit board would house the resistor and capacitor networks, and the pull-up resistor circuit supplying the self test switch input. Next an assembly that attached to the existing MPU board via a cable connected to the ISA I/O board was considered. This would have taken advantage of many preexisting arrays of passive components, reducing the cost when compared to an option replacing the MPU board. Active components like the microprocessor IC and memory chips would have been removed; the replacement system would attach to the MPU in place of the PIAs. Another version of the replacement system located the PC motherboard in place of the MPU in the pinball head, with the aforementioned passive components built directly onto the ISA I/O board.

Niche markets exist for short term, low volume circuit board manufacturing operations. Awareness of the hazardous chemicals involved in circuit board manufacture steered the design implementation away from the putatively optimal solution of outsourcing board construction to a manufacturer. Instead, it was preferred to use recycled hardware and a kit format so that the construction of the board could be accomplished in the context of electronics technology education. Advertisement of the product would occur through the Sourceforge.net developer community as interest in the open source programming project was built through magazine and journal articles about it. Demonstration units would be displayed at pinball conventions as well.

Apparatus

The prototype of the reverse engineered replacement for the Bally AS2518 microcomputer-based control unit was part of the overall experimental apparatus. This was attached to an Evel Knievel pinball machine in place of the original MPU. The apparatus also included common electrical test equipment such as a digital multimeter and an oscilloscope. The remainder of the apparatus consisted of additional computer software written for recording run time data and analyzing the testbed performance. Sourceforge.net was used to host the project on the Internet, exposing it to the open source developer community and to the world at large.

Electronic Pinball Machine

The primary apparatus of the experiment, which constituted the evolving iterations of data verification from bench setups used for operational testing to the final prototype suitable for installation in the field, was the 1977 model Evel Knievel electronic pinball machine produced by the Bally Manufacturing Corporation, playfield serial number 207593, of which 14,000 were produced (Petit, 2002). This was one of the first games converted from the older, electromechanical design to the new, solid state, microcomputer controlled design described in U.S. Patent Number 4,198,051, Computerized Pin Ball Machine. Evel Knievel and a number of other first generation electronic pinball machines used a solenoid based chime unit to produce sounds; the MPU designation was AS2518-17. Later models had a separate microcomputer-based unit that produced digitized sound through a loudspeaker. Together with the later AS2518-35 version of the MPU, Bally built over 300,000 pinball machines employing the same basic design. The design, shown previously in Figure 11, allowed each machine to have up to four continuous solenoids, sixteen momentary solenoids, forty matrix switches, sixty feature lamps, and four six- or seven-digit player score displays plus a six digit display for credits and ball in play. Additional solenoids and lamps were added to some models by using multiple select lines and additional circuit boards.

Data Recorder

The design verification method called for a data recorder that could be used to validate the correctness and the timeliness of system operations. Prototype testing consisted of subjective and objective methods. Judgments made by the experimenter were prominent during software development and debugging. Later, automated statistical analysis of quantitative measurements was performed by the data recorder apparatus, for which hundreds of games of pinball were played by the experimenter and members of the general public. Initially, the data recorder tracked the values the kernel module output to the memory mapped I/O lines. In the final version an auxiliary input line was connected directly to the 74LS154 enable pin on the solenoid driver board. A sample rate of at least 77 Hertz was required to detect the enable pulses; this is the Nyquist sampling rate based on a theoretical pulse width of 26 milliseconds. A higher sampling rate was needed to measure their duration. Taking a sample every cycle of the control process enqueued in the Linux kernel yielded a sampling frequency of 250 to 500 Hertz depending on the workqueue delay. To facilitate analysis of data logged from the kernel module, each record contained RTSC timestamps marking the beginning and end of the execution of the control process.

The data recorder tracked control actions performed by the replacement system, such as responding to valid switch detections by energizing solenoids and turning on feature lamps. However, these actions were inferred from the module log records showing that the commands had been processed, not from direct measurement of outputs as with the enable pulse. To actually have measured these events, a much higher sampling rate would have been required, and the data recorder itself would have consumed far more CPU resources than the control processes.

Sourceforge.net

Sourceforge.net (www.sourceforge.net) is an online project management system for open source software. It was chosen as part of the apparatus because it was used to host the project on the Internet, including the source code and documentation. The hope is that other people will become interested in repeating the pinball machine reverse engineering experiment, and eventually write software to play other games besides Evel Knievel using the replacement system. Rather than starting this exercise from scratch every time, building on a foundation of shared code will reduce development time and errors in addition to promoting the thesis topic. Using a third party hosting service such as Sourceforge has the additional benefit of inoculating the project against the "Slashdot Effect." The term alludes to the often disastrous effect of sudden, massive popularity on an online property, as often happens when a news story about it is posted on Slashdot (http://slashdot.org). While hosting the project locally on a home or university system works fine for the original author of the thesis and a couple of other readers to access it, should a large number of readers suddenly request the resource, its initial network host cannot or will not support the bandwidth required to service them. Besides slow response, the web site may even be taken off line by its hosting service. This is an aspect of good project management that Ingle probably never considered. For reverse engineering projects that exist in the virtual world of hobbyists rather than the real world of high dollar government and commercial enterprises, the project management system itself must be carefully engineered from the start. The attribute of sustained networked presence, critical to sustaining a collaborative environment, can be easily defeated by a project management system that cannot scale effectively. Sourceforge also offers project management tools such as a secure source code repository and revision control system, project web pages, and location within the catalog of tens of thousands of other free, open source projects.

Statistical Techniques

The statistical techniques employed in the prototype testing methods included basic descriptive statistics like the mean, standard deviation, and skewness of sample distributions of data collected from game operation using the data recorder apparatus. Standard tests of significance were used to test hypotheses concerning logical requirements. One unusual technique borrowed from Laurich's real-time research was the cumulative percentage graph for displaying the distribution of actual process cycle periods recorded during each game of pinball (Laurich, 2004). Functionally, it is an inverted distribution graph, plotting on the y-axis the percentage of all samples in a dataset that were either more than or less than the value along the x-axis. Its aim is to illustrate the overall timer jitter and the effect of preemption latency for a cyclical process. Computations were accomplished using custom PHP code and statistical functions built into the MySQL relational database that stored cumulative data of all games played on the system. Just as the prototype itself went through iterations of feature implementation, the statistical techniques employed in the analysis of each game did as well. For instance, it was discovered from the cumulative percentage graphs during the research that positive skewness of the process periods was correlated with the amount of flicker apparent in the feature lamps. Therefore, code was added to compute the actual skewness of each data set and store it in the database.

CHAPTER IV. RESULTS

Iterative prototype development was guided by real-time requirements for the continuous and momentary solenoids, switch matrix, and feature lamp control actions derived from the timing diagrams developed for the hardware circuits. Results for each of these subsystems are presented separately, and then the results for each major iteration of the prototype. The first version was a purely user space program in which the switch matrix and lamps failed to function correctly. The second major iteration introduced a kernel module to handle low level control, while a supervisory user process managed game play, logging, and fault detection. In the third and final iteration an emulation of the digital displays and other feature lamps on the pinball head back board was added to the user interface. Statistical analysis of logged data sets showed acceptable overall performance can be achieved when the workqueue process was repeated every two or three milliseconds; at four milliseconds considerable lamp flicker was evident although other functions performed adequately. Furthermore, lamp flicker was also pronounced when the user program was granted real-time priority scheduling. The economic criteria were satisfied as well. The 25% unit cost savings target was met by minimizing materials cost with free, open source software and recycled computer hardware. Future labor cost can be reduced by casting the effort in an educational context and by distributing software development among the SourceForge.net community, resulting in enhanced overall return on investment.

Process Models

Continuous Solenoid Control

The continuous solenoid process model was a straightforward, one to one relationship between outputs from the I/O board and inputs on the solenoid board. This was shown previously in Figure 13 and Figure 12. The control action was simple, too: a single I/O action to turn the solenoid on or off. The only continuous solenoid of interest for the prototype was the one controlling the flipper enable relay, although the coin lockout relay could have been implemented as well. A PPI output was connected to the original MPU connector J4-5 via 0.025" square post, 0.100" spacing header pins. Setting the output high turned the relay off; low turned it on. This action was verified by a test program that turned it on and off at one second intervals. Later iterations of the test bed included a diagnostic display that showed the current disposition of the continuous solenoid, and permitted the operator to change it using a keyboard command.

This inversion of ordinary digital logic is called negative logic. A problem with the initial hardware circuit was quickly discovered when the pinball machine was left powered on while the PC booted. During system start up the line output was low and therefore the flippers were enabled when they should not have been. Even after installing a pull-up resistor in the circuit, the flippers were enabled during system start up, and did not turn off until the control program was started to set the output high. The problem was solved by adding a 7404 hex inverter in line so that the initial power on state resulted in a high output applied to the continuous solenoid line.

Momentary Solenoid Control

The process model for the momentary solenoids consisted of five outputs from the I/O board going to the solenoid driver board. This was shown previously in Figure 13 and Figure 14. Four were used to set up the four bit input to the 74LS154 demultiplexer (decoder), and the fifth was used to enable or disable it, energizing the respective solenoid coil. Figure 23 presents the timing diagram that was developed for this action; in the diagram the Extra Chime (input address 0x3, see Table 5) is being actuated.

Figure 23. Momentary solenoid control timing diagram

The test bed program cycled through the sixteen solenoid addresses at a one second interval, similar to the self test routine built into the original MPU. Initially the enable followed immediately after the address set up output; however, this resulted in unreliable operation, as the expected solenoid was not always the one that was energized. It was concluded that the one microsecond delay between the two instructions was not sufficient for the input values to reach suitable voltage levels to be decoded correctly. Therefore, an additional microsecond delay was added between them by repeating the address output command. This proved effective, and in later iterations of the test bed program the delay was created by reading in the auxiliary input byte between the solenoid address set up and the enable.

Like the continuous solenoid circuit, the 74LS154 used negative logic for its enable line. Therefore, during initial system start up a solenoid was enabled if the pinball machine was left powered on. This behavior was unacceptable because the momentary solenoid circuits were designed for brief current pulses; prolonged current either blows the fuse or overheats components. The same solution applied to the continuous solenoid was applied to the momentary solenoid enable line: an in-line logic inverter.

The solenoid control action was developed, after the continuous solenoid action was verified, by modifying the user space program. To achieve the required 26 millisecond pulse, a usleep(26000) (26,000 microsecond sleep) was inserted between the commands that set the enable line high and low. It was noted that usleep() guarantees a minimum delay but may exceed the target by an unpredictable amount of time because of normal process scheduling activity and process preemption (Saikkonen, 2000). That is, the time slice allotted to the calling process may expire during the system call, resulting in the process being swapped out by the scheduler, or a higher priority process may enter the run queue and preempt it. This feature of the operating system did not bode well for the soft real-time requirement of producing the nominal pulse duration, nor the hard real-time requirement of never exceeding twice the nominal pulse duration. The standard Linux scheduling policy gave the user process a cyclic period of 5 Hertz, far too slow to effectively regulate the solenoid enable pulse duration. This result, in conjunction with problems experienced with the switch matrix control, led to the second major iteration of the test bed, in which the low level hardware control actions were handled by a higher frequency periodic process implemented in a kernel module.

During the development of the kernel module a diagnostic screen was added to the user space program (now referred to as the supervisory process). With it the operator could send a command to the kernel to enable a particular solenoid, for either its default duration or a custom one. The screen display showed the last solenoid that had been ordered up and its intended duration. Using this feature in conjunction with an oscilloscope, the experimenter could make a rough estimate of the pulse duration, and of course ascertain whether the correct solenoid was enabled. An image of this screen in operation is shown in Figure 24; the experimenter has just pressed the 'S' key and is being prompted to enter the solenoid number to fire.

Figure 24. Manually firing a solenoid from the user interface

A better method of verifying correct pulse duration was implemented in the second test bed iteration by feeding the enable pin back into an auxiliary input that was read on every process cycle of the kernel module. This yielded a sampling precision equal to the process period, which varied from two to four milliseconds, a frequency of 500 down to 250 Hertz. Analysis of the prototype results included computation of the solenoid duration error, that is, the difference between the expected pulse duration and the measured pulse duration. Table 10 summarizes the solenoid duration error, in milliseconds, for all games recorded by the data recorder, grouped by workqueue process frequency, average system load, and whether real-time enhancements were applied to the supervisory process. This format is repeated for other significant items.

Period   Load  RT   Games  Min    Mean  99.999%    Max   Std.
                    Count               Threshold        Dev.
------   ----  ---  -----  -----  ----  ---------  ----  ----
500 Hz   All   No     95    -24   2.8      11       19   2.8
500 Hz   Idle  No     67    -24   2.8      11       19   2.8
500 Hz   Mod   No     28    -2.3  2.8      11       19   2.9
500 Hz   Full  No      0    No Data
500 Hz   All   Yes     2    -2.1  3.9      18       19   3.6
333 Hz   All   No    142    -50   3.2      15       39   3.5
333 Hz   Idle  No     75    -50   3.3      17       31   3.8
333 Hz   Mod   No     53    -2.1  3.2      14       39   3.2
333 Hz   Full  No     14    -2.1  3.2      11       13   3.1
333 Hz   All   Yes    18    -2.1  3.2      14       23   3.5
250 Hz   All   No    120    -22   6.2      39       63   7.4
250 Hz   Idle  No     92    -22   6.2      40       63   7.5
250 Hz   Mod   No     27    -4.2  6.1      34       63   7.1
250 Hz   Full  No      1    -4.1  8.9      54       54   11
250 Hz   All   Yes     2    -4.1  5.0      23       23   5.8

(All error values in milliseconds.)

Table 10. Summary of solenoid pulse duration error

This statistic was also used to verify the hard real-time requirement that no pulse exceed twice the nominal value, since doing so was deemed likely to damage the hardware or blow a fuse. Failure was evident only in the 250 Hertz data sets at the 99.999% threshold. Curiously, pulse duration error decreased as system load increased. One factor influencing pulse duration, similar to the situation faced by the user space program, was apparent latency and jitter in the kernel's execution of the workqueue process. Those findings are revisited in the section on overall performance.

Switch Matrix Control

The process model for the switch matrix consisted of five outputs connected to the MPU switch strobe lines (connectors J2-1 through J2-5 and J3-2 and J3-3) and eight inputs connected to the MPU switch return lines (connectors J2-8 through J2-15 and J3-9 through J3-16). This was shown previously in Figure 16 and Figure 17. Attention had been given during data development to determining the RC time constant for each of the forty switch circuits. Just as the momentary solenoid control action required a brief delay between address set up and enable, the switch matrix presented a situation where the sequential processing of machine instructions on the x86 PC had to be slowed down to allow the voltage applied to the row output to rise from the low state to a level the column input would register as high for any switches closed at the time. The first iteration of the test bed user program created a two microsecond delay between the row strobe and the column input by performing another output to the current strobe row.

This method failed to correctly detect switch closures. Increasing the delay in one microsecond increments until detection succeeded resulted in the process consuming valuable CPU time waiting for I/O. This busy wait was a poor means of generating the brief but necessary delay between the two control actions. Thus, this failure of the user space control program led to the next iteration of the prototype system, in which a periodic process began each cycle by reading the switch returns and ended each cycle by strobing the next switch row. This created a delay equal to the process period, thousands of microseconds, for the RC set up time. The timing diagram of this method is shown in Figure 25.

Figure 25. Switch matrix control timing diagram

After separating the control functions between the original user space program and the new kernel module, the former became known as the supervisory process, and the latter the control process. A curses-based diagnostic screen, redrawn every cycle of the supervisory process, allowed instantaneous monitoring of the disposition of the switch matrix, as well as reporting of valid switch detections by the kernel module. An image of this screen taken in the midst of a one player game is shown in Figure 26; switch numbers 01 through 04 are closed, meaning the player has knocked down four of the five drop targets.

Figure 26. Diagnostic display showing switch closures

As actual games were played, more was learned about the idiosyncrasies of specific switches. A great number of false closures were believed to be occurring due to vibration (bounce), especially on the targets mounted on posts. One feature developed in the per-game analysis was to calculate the time between successive switch detections and, if the same switch was detected within 100 milliseconds, flag the entry as a possible result of bounce.

Rather than changing the valid switch detection algorithm, which examined the current and previous states of each switch and registered a valid closure on a high following a low, an additional parameter was added to the switch configuration for bounce dead time. That is, if a subsequent detection of the same switch occurred within this interval, it was rejected. The experimenter was able to fine tune switch detection on the fly by changing the values in the switch configuration data structure sent to the kernel module, eliminating most unwanted switch actuations resulting from vibration in the switches themselves.

Referring back to Table 9, the real-time requirements for the switch matrix were initially the circuit RC set up time, the effective switch sampling rate or latency, and the latency for valid switch detection. The results for the latter two varied depending on the process cycle frequency, and are summarized in Table 11. The valid switch detection latencies are these values multiplied by two for momentary switches, and by three for continuous switches.

Period   Load  RT   Games  Min   Mean  99.999%    Max    Std.
                    Count              Threshold         Dev.
------   ----  ---  -----  ----  ----  ---------  -----  ----
500 Hz   All   No     95   9.1   12       23         62  0.82
500 Hz   Idle  No     67   9.1   12       23         62  3.7
500 Hz   Mod   No     28   9.1   12       23         25  3.7
500 Hz   Full  No      0   No Data
500 Hz   All   Yes     2   9.9   12       29         29  4.0
333 Hz   All   No    142   9.9   17       35        338  5.5
333 Hz   Idle  No     75   9.9   17       35        338  5.6
333 Hz   Mod   No     53   14    17       36        287  5.5
333 Hz   Full  No     14   14    17       31         33  4.6
333 Hz   All   Yes    18   14    17       34         36  4.7
250 Hz   All   No    120   9.1   23       56       1346  8.1
250 Hz   Idle  No     92   9.1   23       61       1346  8.4
250 Hz   Mod   No     27   19    23       40         42  7.2
250 Hz   Full  No      1   20    22       40         40  6.6
250 Hz   All   Yes     2   19    22       46         46  6.4

(All latency values in milliseconds.)

Table 11. Summary of switch sampling and detection latencies

Although reasonable game play served to verify correct control of this subsystem, that verification relied on the opinion of the experimenter; anything stronger would have required a substantially more complex data recorder and analysis apparatus. The record of valid switch detections stored by the data recorder could have been evaluated against an algorithmic notion of typical game play, but even then it would be difficult to conclude from this record how many switch detections were missed. Therefore, a new method was created for prototype testing that sought to estimate the percentage of missed, repeating switches. The idea was that if the control process was failing to detect switch closures, that failure would be noticeable during times that a given switch was expected to repeat at a given frequency. On the pinball machine this meant the two playfield spinners. Code was added to the game analysis script to look for missed detections in the first half of large sets of consecutive spinner switch detections.

Feature Lamp Control

The feature lamps comprised the most complex pinball machine subsystem for which control by the prototype reverse engineered system was attempted. Bally designers chose to use a number of decoders with outputs driving silicon controlled rectifiers (SCRs), rather than some kind of matrix operation, to selectively illuminate feature lamps during game play. The process model consisted of four outputs connected to the MPU lamp address lines (connectors J1-12 through J1-15), four outputs connected to the MPU lamp data lines (connectors J1-16 through J1-19), and one output connected to the MPU lamp strobe line (J1-11). This was shown previously in Figure 18 and Figure 19. When an SCR is turned off, as illustrated in Figure 27, the oscilloscope display shows the bridge rectified power supply voltage of about ten volts DC at the SCR anode.

Figure 27. Oscilloscope display of untriggered feature lamp SCR anode

These waveforms are the alternating current in the transformer secondary winding. When the SCR is turned on, as illustrated in Figure 28, the waveforms are flattened, representing the saturation voltage of the SCR; when the power supply voltage falls to zero, the oscilloscope trace rises until the SCR turns on again.

Figure 28. Oscilloscope display of triggered feature lamp SCR anode

For each feature lamp intended to be illuminated, the corresponding SCR gate must have a positive voltage applied to it when the anode voltage begins to rise from zero. Rather than triggering the control process by an external interrupt when the anode voltage crossed zero and began to rise again, a "shotgun" method was developed: the process fired at better than twice the frequency of the 120 Hertz waveform from the bridge rectifier supplying power to the feature lamps, ensuring that each SCR would be triggered at least once during each pulse. Indeed, this requirement guided the variation of the process cycle workqueue delay from two to four milliseconds; five milliseconds would fail the requirement altogether. The timing diagram for this method is shown in Figure 29.

Figure 29. Feature lamp control timing diagram

Lamp control was first attempted in the second test bed iteration, which was developed in response to the inability of a user space process to adequately handle control of the switch matrix. The supervisory program communicated the intended disposition of the entire lamp matrix by writing a command data structure containing four sixteen-bit unsigned integers, whose bit fields encoded which of the sixty possible lamps were to be illuminated. This meant that the quickest possible rate of change among the lamps was equal to the frequency of the supervisory process, which averaged 5 Hertz. This was deemed adequate based on the black box analysis of game play for Evel Knievel. Before any game play was built into the supervisory program, a simple interface allowed the user to key in which lamps to turn on and off, in order to verify that the intended lamps were being illuminated.

A problem confounded the experimenter for some time, in which more than one lamp would light when only one was specified for certain lamp numbers. It was suspected that, as with the momentary solenoids, a delay greater than one microsecond was needed between the address setup and the latching into the 14514B IC, a step that was not part of the operation of the 74LS154. However, even with an additional microsecond of delay the extra lights continued to appear. The hardware circuit was inspected for faults, and a glob of solder shorting two of the data lines together was discovered and remedied. This event reinforced the need for careful work and testing at each step of the way, for a symptom like this could have been caused by bugs in the software, the hardware, or both.

Proper operation of the feature lamps was solely based on the judgment of the experimenter. Indeed, if the intended lamps did not alight as ordered by the supervisory program, development would never have progressed to control of the overall game operation.

The diagnostic user interface developed in the second prototype iteration made it possible to compare the intended disposition of the lamps against their actual disposition on the pinball machine playfield through a display of the lamps currently being energized. An image of this screen in the midst of a one player game was shown previously in Figure 26. The most perplexing phenomenon observed during prototype testing was an unexpected, pronounced flickering of the lamps under certain conditions. This topic will be discussed in detail in the section on overall performance.

Game Operation Control

Evidence that a feedback (closed-loop), discrete process model is well suited to pinball machines is the fact that documentation for electromechanical pinball machines featured complex ladder logic diagrams. Their operation is amenable to representation by process timing diagrams, although tedious to actually portray due to their complexity (Bateson, 2002). It was advantageous to depict certain portions of the process with timing diagrams, in particular the low-level hardware I/O actions performed by the kernel workqueue process developed in the second iteration. In this sense the activity on each electrical connection has been represented in timing diagrams. Figure 22 depicted the Bally pinball machine as a closed loop process.

Disturbance variables were the two flipper buttons; the controller was oblivious to their values because they were not connected to switch inputs in the original design. The controlled variable Game, which was the output of the reverse engineered process, fed back into the controller by way of the switch matrix. The remaining manipulating elements were not fed back into the original controller, with the exception of the solenoid enable line. An unused feature lamp SCR anode served as an input datum to detect game power. The only other input to the system was the self test switch. Everything else was evaluated in terms of the controller responding to changes in the forty digital switch matrix inputs, based on time and event conditions coded into the game control program. The game control program was a supervisory process that itself sent commands to, and retrieved data from, the pmrek kernel module. The strategy behind multi-level control is that some operations require a high speed, deterministic periodic process, whereas others can tolerate implementation as a lower frequency, less deterministic process.

The full header file (pmrek.h) and program source code for the kernel module (pmrek.c) and the supervisory program (testbed.c) are reproduced in Appendix B. Both the kernel module and the supervisory program shared the pmrek.h header file, which defined a number of enumerated types and data structures. The sole communication mechanism between the two processes was the character device file /dev/pmrek. Through it the supervisory process could send commands to the kernel module, and the kernel module could send log entries back. Table 12 lists the possible commands by their enumerated type names.

Enumerated Type (pmrek_commands)       Description
pmrek_CMD_CONFIGURE_IO                 Install mapping between pinball machine I/O lines and 8255 PPI ports
pmrek_CMD_SET_WORKQUEUE_DELAY          Set workqueue delay in milliseconds for the low level hardware control periodic process
pmrek_CMD_SET_LOG_TYPES                Define which events are logged by the kernel module
pmrek_CMD_CONFIGURE_SOLENOIDS          Install a mapping of solenoids and pulse durations used by the pinball machine
pmrek_CMD_CONFIGURE_SWITCHES           Install a mapping of switches, including type, bounce threshold, and solenoid response, used by the pinball machine
pmrek_CMD_SET_CONT_SOLENOIDS           Enable or disable continuous solenoids
pmrek_CMD_ENABLE_MOMENTARY_SOLENOID    Enable a given solenoid for its default duration or an override value
pmrek_CMD_SET_SINGLE_LAMP              Turn a single feature lamp on or off
pmrek_CMD_SET_LAMP_MATRIX              Set the disposition of all lamps at once
pmrek_CMD_RUN                          Begin normal game play operation
pmrek_CMD_IDLE                         Begin idle game operation
pmrek_CMD_STOP                         Stop game operation
pmrek_CMD_GET_MODULE_INFO              Request current state of the module information data structure

Table 12. Commands available to the kernel module control process

Upon executing a command the module created a log entry detailing its execution, and placed it into a log buffer that was read from the character device file by the supervisory process.

Log entries were also created by events such as valid switch detections during the execution of the workqueue process. Figure 30 portrays the basic design of the kernel workqueue process.

Figure 30. Kernel workqueue process program flowchart

Every log entry was timestamped with the CPU's time stamp counter (read via the RDTSC instruction), giving sub-microsecond precision to the data records. By reading this logged data, the supervisory process advanced through the states of game play. Only the quick response solenoid actions were performed directly by the module, in response to valid switch detections for switches configured with a solenoid response. This was meant to handle thumper bumpers and slingshots as quickly as possible, following the guidance of the Theory of Operations. The design of the kernel module was modeled on the samples found in the Linux Kernel Module Programming Guide for the customary operations (Salzmann, 2004). Additional functions specific to the process control objectives were then added to it. The function pmrek_process_io() was implemented as a workqueue process, which was scheduled at two, three, or four millisecond intervals when the game was in its running or idle states. The major functions are described in Table 13; note that all function names begin with the kernel module name because they are exported globally to the entire Linux kernel.

Function Name              Purpose
pmrek_device_open()        Called when a user process opens character device file /dev/pmrek
pmrek_device_release()     Called when a user process closes character device file /dev/pmrek
pmrek_device_read()        Called when the supervisory process reads from /dev/pmrek; transfers the log buffer
pmrek_device_write()       Called when the supervisory process writes to /dev/pmrek; enqueues commands
pmrek_cleanup()            Called when the kernel module is unloaded
pmrek_init()               Called when the kernel module is loaded
pmrek_configure_io()       Sets control words for the 8255 PPIs based on the desired configuration of ports A, B, C
pmrek_idle()               Puts the control process in an idle state; turns off all lamps and continuous solenoids
pmrek_inb()                Reads from I/O ports
pmrek_outb()               Writes to I/O ports
pmrek_log_io()             Logs read or write information to the module log buffer
pmrek_process_commands()   Processes any commands enqueued by pmrek_device_write() or pmrek_process_io()
pmrek_process_io()         Configured as a kernel workqueue process, triggered by a kernel timer after a workqueue delay of two, three, or four milliseconds; performs all low level hardware control operations

Table 13. Key functions in the kernel module

The supervisory control process, in which the game program was embedded, is illustrated in Figure 31.

Figure 31. Supervisory process program flowchart

Game play was also analyzed as a time- and event-driven sequential process; however, timing diagrams were not necessary, as narrative statements about the disposition of the current ball in play readily translated into program code. These were derived from inspection of the pinball machine playfield itself, which detailed the rewards for hitting various targets; the instruction card placed on the apron near the front of the machine, reproduced in Figure 32; and expert knowledge provided by experienced players of Evel Knievel when the prototype was showcased at the 2005 Pinball at the Zoo event in Kalamazoo, Michigan, on April 15 and 16.

Figure 32. Pinball machine instruction card

The majority of events driving game play operation were switch detections. Sequences of events, such as completing a bank of drop targets or spelling a word by hitting the switches that make up its letters, advanced game operation through various stages, as did the overall sequence in which each player was served the first ball, in order, then the second, and then the third, after which the game ended. A great deal of detail would be required to state the entire narrative according to which the game program was written. Indeed, as Socrates said of a full blown implementation of his method, "to give a complete account would be tiresome." Using the iterative program development method, as more detail was learned about the time and event driven sequence defining game operation, it was simply coded into the supervisory program. The key functions making up the flowchart presented in Figure 31 are listed in Table 14; refer to the actual source code for testbed.c in Appendix B.

Function Name             Purpose
game_add_player()         Called when the credit button is pressed (and there are credits) to start a new game or add more players
game_ball_end()           Called when the outhole switch is detected while a ball is in play to initiate the bonus count down and advance to the next ball, the next player, or the end of the game
game_collect_bonus()      Called after a ball ends to count down the current player's bonus
game_segment_display()    Emulates a seven-segment digital display on the computer screen for player scores, match count, credits, and ball in play
game_lamp_update()        Called after processing switch detections to update the disposition of all the feature lamps at once
game_play_tune()          Plays various tunes by firing the chime momentary solenoids in predefined sequences
game_switch_response()    Called for each valid switch detection retrieved from the kernel module; initiates all other events related to normal game operation
game_watchdog()           Called every second to detect game faults, including missed switch detections, and either reprocess the switch response or terminate the program
process_output_file()     Called by the forked child process after a game is completed to analyze the log file recorded during game play
termination_handler()     Signal handler for cleanly ending the program; closes the data log file and puts the kernel module into an idle state
main()                    Initializes kernel module data structures and the computer screen, then loops until a termination signal is caught; the main loop processes user keyboard input, reads events from the kernel module, calls game process functions, writes the log file to disk, and updates the computer screen display

Table 14. Key functions in the supervisory control program

Proper operation of the game control program implied consistency in the execution of the user process testbed.exe. The program was allowed to run free under the default Linux scheduler algorithm, with the exception of the tests in which it was granted the POSIX FIFO real-time scheduling policy. The results of the supervisory process cycle times for the various conditions studied are presented in Table 15.

Period   Load  RT   Games  Min   Mean  99.999%     Max    Std.
                    Count              Threshold          Dev.
------   ----  ---  -----  ----  ----  ---------  ------  ----
500 Hz   All   No     95    19   211     263         431    4
500 Hz   Idle  No     67    52   211     259         431    3
500 Hz   Mod   No     28    19   212     273         354    6
500 Hz   Full  No      0   No Data
500 Hz   All   Yes     2    48   209     333         335    8
333 Hz   All   No    142     8   209     444       23963    8
333 Hz   Idle  No     75    46   208     258         635    2
333 Hz   Mod   No     53     8   210     722       23963   16
333 Hz   Full  No     14   205   212     388         516   10
333 Hz   All   Yes    18    49   213    2216       29417   96
250 Hz   All   No    120    13   207     279        1647    4
250 Hz   Idle  No     92    13   207     280        1647    3
250 Hz   Mod   No     27    29   207     277         441    5
250 Hz   Full  No      1    70   208     260         260    7
250 Hz   All   Yes     2   204   205     267         270    3

(All period values in milliseconds.)

Table 15. Summary of supervisory process execution periods

Testbed Iterations

While the progression of methods for reverse engineering sub parts of the MPU occurred linearly, the actual prototype system was developed iteratively. This distinction may seem academic; however, it emphasizes the fact that a reverse engineering project involving hardware and software is best developed in stages rather than all at once. Moreover, it is in the story told by the evolving versions of program source code that the discovery process is revealed. This may be important if there is any suspicion that the experimenter illegally reproduced copyrighted material, such as through disassembly of machine code stored on read-only memory chips, rather than arriving at the solution using clean room techniques. Therefore, the highlights of each major iteration of prototype development are given in the following sections. As the source code was developed, time stamped entries were made in comment fields summarizing key insights, developments, and changes to the design. The source code repository stored each version by the day it was edited, allowing recreation of the set of executables used on any particular day of the experiment.

Before any programming work began, the I/O board was fabricated by the experimenter from a Radio Shack 8-bit, full length ISA prototype board (catalog number 276-1598) and components obtained from Jameco Electronics. The layout was adapted from a design offered online by the apparently defunct Boondog Automation, shown in Figure 33 (Boondog Automation, 1998).

Figure 33. 8-bit ISA I/O board schematic

The complete parts list is given in Appendix A. The jumper block shown in the schematic was eliminated, and the first 8255 PPI CS (chip select) pin was connected to the Y4 output of the 74LS138, giving it a base address of 0x280. A second 8255 PPI was wired in parallel with the first, with the exception of the CS pin, which was connected to the Y5 output of the 74LS138, giving it a base address of 0x2A0. This assembly was connected to a second prototype board by ten feet of 25 pair, Category 3 telephone wire. The second board was the interface board to which the pinball machine wire harnesses that originally mated to the AS2518 MPU were connected. It contained a duplicate of the resistor and capacitor networks of the original MPU, traced from the 6820 PIAs outward to the header pins. In the first test bed iteration everything except the lamp control lines was installed; Figure 34 shows the completed interface board, including the 7404 inverter that was added to remedy the power on problem and an LED power indicator.

Figure 34. Interface Circuit Between ISA Board and Pinball Machine

First Iteration: The User Space Program

Before any control operations specific to the pinball machine were written, a program was developed to verify that the I/O board was constructed correctly. It tested all 24 bits in both input and output states. The first test bed iteration evolved from this single, user space program. The control process was an endless loop whose cycles were regulated by the usleep() function to avoid a busy loop. Control of the continuous and momentary solenoids was successful, although there was concern that process swapping or preemption could extend a solenoid pulse beyond the maximum allowable duration of 100 milliseconds.

Next, the switch matrix was programmed like the original Bally unit, and it yielded very poor performance: it was unable to reliably detect switch closures, even with an added inb() operation inserted to lengthen the delay between the row strobe output and the column input operations. This adjustment was based on the knowledge that these I/O operations take approximately one microsecond to execute on the x86 architecture. The timing requirement of the switch matrix presented a design dilemma that ultimately led to the development of a separate kernel module. Even if the added delay gave the row strobe sufficient time to raise the voltage at the column inputs to a detectable level, the cycle had to be repeated every 8.3 milliseconds to mimic the strategy described in the Bally documentation. On the one hand, the inherent upper-bound indeterminism of usleep() hardly guaranteed this would happen: eventually the kernel scheduler would have to swap the process out to allow other processes to run, and a process that never yielded would consume all available CPU time. On the other hand, if the process was allowed to be swapped out on a regular basis, examination of the switch matrix would occur too infrequently to detect short-lived and repeating switch closures, nor could it respond quickly to switch closures for the thumper bumpers and slingshots. These suspicions were verified in repeated attempts to register a coin dropped through the coin chute: only one in four attempts was detected by the program.
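The row-strobe, column-read scanning pattern the user space program attempted can be sketched with a simulated port model. This is a hypothetical illustration (the real code issued inb()/outb() operations against the ISA interface card, and the timing between strobe and read is what actually failed); the class and function names are invented for the sketch.

```python
class SwitchMatrix:
    """Switch matrix scanned by strobing one row at a time.

    `closed` is a set of (row, col) pairs representing closed switches."""
    def __init__(self, rows, cols, closed):
        self.rows, self.cols, self.closed = rows, cols, set(closed)

    def read_columns(self, strobed_row):
        """Return the column byte seen while strobed_row is driven high."""
        value = 0
        for col in range(self.cols):
            if (strobed_row, col) in self.closed:
                value |= 1 << col
        return value

def scan(matrix):
    """One full scan: strobe each row in turn, read the columns, and
    collect the closures found.  In the real hardware the column voltage
    needs time to rise after the strobe -- the delay that proved so hard
    to guarantee from user space."""
    found = set()
    for row in range(matrix.rows):
        byte = matrix.read_columns(row)   # outb(row strobe); inb(columns)
        for col in range(matrix.cols):
            if byte & (1 << col):
                found.add((row, col))
    return found

m = SwitchMatrix(5, 8, closed={(0, 3), (4, 7)})
assert scan(m) == {(0, 3), (4, 7)}
```

The simulation reads back instantly, of course; the point of the sketch is the scan structure, not the analog settling time that defeated the first iteration.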

It is worth noting that the first version of the prototype system was built on a Red Hat Linux 9.0 platform, which uses the 2.4 version of the Linux kernel. Because this version of the kernel used a 100 Hertz timer by default, it was unable to honor the 8,300 microsecond wait. This was one of the weaknesses of standard Linux for real-time computing cited by a number of researchers: the lack of high-precision timers (Laurich, 2004; Dankwardt, 2002).

Another problem with the initial design was quickly discovered, this time in the hardware. The interface circuit between the 8255 output lines and the solenoid driver board was modeled after the original, consisting of a current limiting series resistor and a capacitor in parallel to ground. On initial boot-up of the PC these lines were in a low state, enabling the continuous solenoid and one momentary solenoid. For the former this meant the flippers were enabled when they should not have been; for the latter the consequence was more drastic: a momentary solenoid enabled beyond its hard requirement of under 100 milliseconds resulted in a driver transistor burning up before the machine was shut off by the experimenter. This led to the modified interface circuit used in subsequent iterations, in which a 7404 hex inverter changed the initial power-on state applied to the solenoid board from a low to a high.

Second Iteration: The Kernel Module

The results for this iteration of the test bed included recreation of the overall event history and the generation of descriptive statistics for a number of data items built from logged information read from the kernel module on each cycle of the supervisory user process. A character device file was used as the mechanism to pass data between the kernel module and the user space supervisory process. Data structures referred to as command packets contained the control operation to be performed by the kernel module; after being written through the character device file into a buffer in the kernel module, they were processed sequentially the next time the workqueue process was triggered. The basic operation of this pair of complementary processes was presented in the previous section on game operation. The supervisory process wrote every logged command to a binary file on the hard drive. When a game ended, the process forked, and the child parsed the binary file into a human readable text file and called the PHP script analyze_testbed_output.php, reproduced in Appendix B, to create a detailed history of the game actions, generate summary statistics for elements of interest, and deposit the results into a number of MySQL database tables. Sample output files are reproduced in Appendix C.
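The command packet mechanism can be sketched with a serialization example. The field layout below is entirely hypothetical (the text does not give the packet structure); the sketch only shows the pattern of packing a fixed-format command, writing it through a device file, and unpacking it on the kernel side.

```python
import struct

# Hypothetical command packet layout: an operation code, a target number
# (e.g. which solenoid), and a millisecond argument.  The real field
# layout used by the PMREK kernel module is not given in the text.
PACKET_FMT = "<BBH"   # opcode, target, duration_ms (little-endian)

OP_PULSE_SOLENOID = 1  # invented opcode value for illustration

def pack_command(opcode, target, duration_ms):
    """Serialize a command packet as the supervisory process would
    before writing it to the character device file."""
    return struct.pack(PACKET_FMT, opcode, target, duration_ms)

def unpack_command(data):
    """Deserialize a packet as the kernel module's workqueue handler
    would when draining its command buffer."""
    return struct.unpack(PACKET_FMT, data)

pkt = pack_command(OP_PULSE_SOLENOID, 7, 40)
assert len(pkt) == struct.calcsize(PACKET_FMT) == 4
assert unpack_command(pkt) == (OP_PULSE_SOLENOID, 7, 40)
```

A fixed-size binary format like this lets the kernel side buffer packets cheaply and process them in order on the next workqueue trigger, which matches the flow described above.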

Poor lamp performance during game play, especially when a large number of lamps were being illuminated, was not captured by the data recorder but was observed by the experimenter.

This led to consideration of implementing zero crossing detection with an additional hardware circuit fed into one of the ISA interrupts to time lamp control more precisely. An alternative method of zero detection, performed solely through auxiliary input lines under the existing timer driven work queue without introducing the complexity and overhead of interrupt processing, was also evaluated. This approach had the elegant side effect of allowing the module to perform lamp processing only when its execution coincided with the rising portion of the waveform. It was attempted by reading an unused SCR anode via the byte reserved for auxiliary input once every process cycle; theoretically, this would detect when the waveform was near zero volts. The problem with this approach was the sampling precision required to detect zero crossings; even at 500 Hertz the rate was too low. Furthermore, fluctuations in the process period newly revealed by the descriptive statistics and cumulative percentage graphs confounded the notion of synchronizing the process period with the lamp SCR power supply; a watchdog algorithm would have to monitor and resynchronize it. A separate hardware circuit to detect zero crossings and generate an interrupt, along with the software overhead required to service it, seemed to be unavoidable.
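The sampling-precision problem can be made concrete with a back-of-the-envelope check. Rectified 60 Hertz power passes near zero 120 times per second; the calculation below estimates how many 500 Hertz poll samples land inside the narrow window where the waveform reads as "near zero." The detect threshold and peak voltage are assumed values for illustration, not figures from the text.

```python
import math

def near_zero_window_fraction(tolerance_volts, peak_volts):
    """Fraction of each half-cycle during which |v| < tolerance_volts
    for a rectified sine |peak * sin(theta)|."""
    # |sin(theta)| < tol/peak near each crossing; window half-width in radians:
    half_width = math.asin(tolerance_volts / peak_volts)
    return (2 * half_width) / math.pi   # half-cycle spans pi radians

def samples_per_window(sample_hz, line_hz, tolerance_volts, peak_volts):
    """Expected number of poll samples landing inside each near-zero window."""
    crossings_per_s = 2 * line_hz        # 120 near-zero windows/s at 60 Hz
    window_s = (near_zero_window_fraction(tolerance_volts, peak_volts)
                / crossings_per_s)
    return sample_hz * window_s

# With a hypothetical 5 V "near zero" threshold on a ~155 V peak, a 500 Hz
# poll catches well under one sample per window, so most crossings are
# simply missed -- the rate is too low, as the text concludes.
assert samples_per_window(500, 60, 5.0, 155.0) < 1.0
```

Even before accounting for jitter in the process period, the arithmetic alone shows why polling could not replace a dedicated zero-crossing interrupt.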

It was thought that the data recording function was significantly interfering with system performance. The distribution of process period durations recorded over the course of game play was skewed to the right, with a maximum of 29 milliseconds. This latency in the kernel timer turned out to be related to the full logging of all I/O operations made by the kernel module. These records were being used to determine solenoid response, feature lamp response, switch input values, and the continuity of the log itself. What they showed was that the solenoid action was correct, in the sense that the kernel was firing the solenoids it was supposed to fire. While this information verified the correctness of the control actions based on the game control program, these records did not actually measure the pulse duration. The problem of lamp flicker that was assumed to be related to jitter in the process period was thus alleviated through a substantial reduction of the items logged by the kernel module. The logging function was adjusted to log input I/O operations only, which were fixed at two events every process cycle; the fifty or more output I/O operations were no longer logged. The kernel module had been creating too many log entries, and precious kernel CPU time was being spent writing these to user space via the character device file at the control program's request.
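The scale of the logging reduction is easy to quantify from the figures in the text (two input events per cycle, fifty or more output events per cycle); the exact per-cycle output count is approximate, so the numbers below are illustrative.

```python
def log_entries_per_second(process_hz, inputs_per_cycle, outputs_per_cycle):
    """Log entries generated per second by the kernel module."""
    return process_hz * (inputs_per_cycle + outputs_per_cycle)

# Full logging at the 500 Hz process frequency: two input reads plus
# roughly fifty output writes per cycle.
full = log_entries_per_second(500, 2, 50)
# Input-only logging, as adopted in this iteration:
reduced = log_entries_per_second(500, 2, 0)

assert full == 26_000    # ~26,000 entries/s under full logging
assert reduced == 1_000  # 1,000 entries/s after the reduction
assert full // reduced == 26
```

A twenty-six-fold drop in entries, each of which had been copied to user space through the character device, is consistent with the observed relief of the kernel timer latency.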

This version added features to the supervisory program, which also served as the user interface, such as player scores, but retained the diagnostic screen format shown in Figure 26. The bulk of the data involving variations in system load and the use of real-time enhancements to the user process came from games played running this version of the system. The experimenter played games on the machine while the host computer was subjected to varying loads, from an idle state to full CPU usage. Minor modifications to this iteration continued until the prototype was deemed ready for public display.

Third Iteration: The Public Interface

For the public interface the digital displays were simulated using block characters. The scores and other messages were displayed roughly where they appear on the original backglass; a screen shot of a game in progress is shown in Figure 35.

Figure 35. Backboard score display by supervisory program

Not logging output I/O operations improved lamp performance by alleviating the discernible flicker. Solenoid enable was instead detected through an auxiliary input. Using an auxiliary input line to monitor the solenoid enable verified that the control action was accomplished, measured the delay from the event initiating the control action, and measured the correctness of the control action in terms of the pulse duration error.

Enhancements were also made to the control and the analysis of the switch matrix operation. These pertained to continuous switches that, once closed, should not open again until some event has occurred. Improper switch detection (bounce) for the top saucer and the outhole caused incorrect scoring in the case of the former, and caused the watchdog process to fail the game on a number of occasions in the case of the latter. Fortunately, allowance for different types of switches had been made in the enumerated switch types defined in the header file, and this was leveraged to treat them. The valid switch detection algorithm in the kernel module was modified to inspect the current plus the two previous samples of continuous switches; momentary switches retained the previous algorithm, which defined a valid switch closure as a high present sample plus a low previous sample.
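The two detection rules described above can be sketched directly. This is a sketch of the rules as the text states them, not the kernel module source; samples are booleans with the newest value last.

```python
def momentary_valid(samples):
    """Valid closure for a momentary switch: current sample high,
    previous sample low (a single low-then-high transition)."""
    return len(samples) >= 2 and samples[-1] and not samples[-2]

def continuous_valid(samples):
    """Valid closure for a continuous switch: the current plus the two
    previous samples must all read closed, filtering out bounce."""
    return len(samples) >= 3 and all(samples[-3:])

# A bouncing contact (closed, open, closed) satisfies the one-sample
# rule but never the three-sample rule -- the source of the top saucer
# and outhole misdetections.
bounce = [False, True, False, True]
assert momentary_valid(bounce)
assert not continuous_valid(bounce)
assert continuous_valid([False, True, True, True])
```

The trade-off is responsiveness: the momentary rule fires one sample sooner, which is why it was kept for switches like the thumper bumpers that need a fast solenoid response.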

Testing done in the third iteration mainly exercised variations in workqueue frequency and system load; the use of real-time enhancements to the user process was deemed unnecessary and was abandoned. The prototype was installed at a number of public venues where it was "beat on" by players. For two months it resided in the Electronics Lab in the College of Technology at Bowling Green State University, where students and professors played games of pinball between classes. Using OpenSSH for remote login and file transfer, as well as VNC for remote X Windows desktop viewing, the experimenter was able to monitor and maintain the system from 30 miles away. To ensure that the system was always running the control programs, a special shell login profile was written to execute a script called start_testbed, which is listed in Appendix B. This script ensured that the kernel module was loaded and ran the supervisory program, restarting it if it terminated. An entry was made in the system cron table to restart the supervisory program daily at 2:00 AM, loading the most recent version of the executable in case a new one had been downloaded, and putting 40 credits into the machine so players did not have to drop quarters into it. Figure 36 shows the complete apparatus, including a safety cage built to protect the circuits from curious onlookers, being tested by participants at the 2005 Pinball at the Zoo convention on April 15, 2005, in Kalamazoo, Michigan. It was also featured at the Instrument Society of America Toledo Section meeting held at BGSU on April 20, 2005, and the BGSU College of Technology 2005 Picnic held on April 22, 2005.

Figure 36. Prototype system being tested at 2005 Pinball At The Zoo

The other aspect of the public interface was the institution of the Pinball Machine Reverse Engineering Project on Sourceforge.net (http://sourceforge.net/projects/pmrek). A software package containing the source code used in this third iteration was offered under the General Public License. Its contents were the workqueue process, the supervisory test bed game control program, the utility program for converting binary log files to text files, and the two PHP analysis scripts. It also included sample data sets, screen shots, and the output of the analysis.

Project Return on Investment

The project economics calculated in the evaluation stage (Table 3) were unacceptable according to Ingle's criteria. It was assumed that the project went forward for overriding reasons such as obsolescence or lack of supply support, despite the fact that a COTS option was available. The point was to encourage "out of the box" thinking to arrive at a cost effective solution. Table 16 presents a version of the economics that meets the cost savings target and delivers a return on investment. It hinges on the use of recycled hardware, free, open source software, and a discounting of labor cost.

COTS Replacement (Alltek Ultimate MPU)                $200.00

Reverse Engineered Unit Cost:
    Computer Hardware (Used PC)                        $50.00
    Circuit Boards                                     $25.00
    ICs and Other Components                           $25.00
    Assembly (10 Hours @ $5/Hour)                      $50.00
    Free, Open Source Software                          $0.00
    Total Unit Cost                                   $150.00

Cost Savings                                           25.00%
Life-Cycle Cost Savings (25 Models, 1000 Units)    $50,000.00

Reverse Engineered Project Cost:
    Prototype Hardware                                $150.00
    Programming Cost (100 Hours/Model @ $1/Hour)    $2,500.00

Return On Investment                                     18:1

Table 16. Reverse engineering economics based on FOSS development model
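The figures in Table 16 can be checked directly; the arithmetic below reproduces the table's own numbers from its stated inputs.

```python
cots_price = 200.00
unit_cost = 50.00 + 25.00 + 25.00 + 50.00 + 0.00   # parts + assembly + FOSS
assert unit_cost == 150.00

cost_savings = (cots_price - unit_cost) / cots_price
assert cost_savings == 0.25                         # the 25.00% figure

models, units = 25, 1000
life_cycle_savings = (cots_price - unit_cost) * units
assert life_cycle_savings == 50_000.00

# Project cost: prototype hardware plus 100 hours/model at $1/hour
# across 25 models.
project_cost = 150.00 + models * 100 * 1.00
roi = life_cycle_savings / project_cost
assert 18 <= roi < 19                               # the 18:1 figure
```

The 18:1 ratio is therefore entirely driven by the discounted $1/hour labor rate; at a market rate the programming line would dominate the project cost, which is the point taken up in the next paragraph.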

The discounted labor is surely the most controversial item. It is founded on two principles. The first is that the reverse engineering project is conducted in the context of an educational program for learning electronics and computer technology, process control technology, or real-time computer programming; given that the effort involved in developing and testing the control software for another pinball machine model is sunk in the course work, it can be discounted. The second is that a reduction in labor cost can be achieved by distributing the software development among the worldwide community of hobbyists via the Sourceforge.net open source software hub.

Overall Performance

The test bed model upon which the overall performance was judged consisted of the Evel Knievel pinball machine, with the head removed and a shelf installed in its place. On the shelf sat the computer monitor, the pinball machine transformer, the solenoid driver board, the lamp driver board, and the interface board shown in Figure 34. The circuit boards were then covered with a mesh screen for safety and security, as shown in Figure 36. The PC, attached to the shelf from below, was an 800 MHz AMD Athlon unit with 384 MB RAM and a 20 GB hard disk, running a stock installation of Red Hat Fedora Core 2 GNU/Linux. In addition to the custom PMREK programs, an Apache web server and a MySQL database server rounded out the data recorder apparatus. The XMAME arcade game program and the XMMS digital music player were occasionally run to create high system load conditions. This model is depicted in Figure 37.

Figure 37. Test bed computer system block diagram

Overall results were based on the second and third iterations, for which approximately 380 completed games were analyzed. The PHP script testbed_performance.php, listed in Appendix B, produced overall performance statistics; its final output appears in Appendix C. The completion of games was in itself a result expressing project success. This fact not only verified the efficacy of the particular reverse engineered unit for which the experiment was conducted; it also substantiated the viability of the Socratic method for reverse engineering a microcomputer-based control unit.

An example of a failed game was one in which a critical, valid switch detection was missed, for which there was no compensating supervisory action by the watchdog routine. In such a case the machine would become inoperable until the control program was reset. When such instances did arise during testing, they were usually the result of a programming glitch that was then debugged and corrected. No such failures occurred when the prototype debuted before the general public.

A surprising result was noted when POSIX FIFO real-time priority scheduling and memory locking were applied to the supervisory process. On the one hand, the determinism of the kernel process decreased more than it ever had previously, and the timer latency and jitter increased. The lamps flickered noticeably at every workqueue process frequency, and the cumulative percentage graphs for the process periods show a considerable percentage of cycles delayed by more than twice the nominal period. A comparison of cumulative percentage graphs of games played with and without the real-time scheduling enhancement is shown in Figure 39; graphs for specific games can be viewed in Appendix C. On the other hand, this enhancement had no effect on reducing the bintime affecting the user process. Thus this Linux 2.6 user process maintained its set point frequency with or without the benefit of real-time scheduling enhancements. A likely explanation of this outcome is that the kernel timer service was being blocked until the user process completed its computations. Further analysis of the data logs could confirm this suspicion by attempting to correlate the substantially delayed workqueue process cycles with executions of the supervisory process. To mitigate this effect, an external constraint must be imposed on the system that no other user programs, such as a digital music player, be granted real-time scheduling priority.
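The cumulative percentage graphs referred to throughout these results can be computed simply: for each threshold, count the fraction of process cycles whose measured period fell within it. The sketch below uses invented period data; the nominal 2 millisecond period corresponds to the 500 Hertz frequency discussed in the text.

```python
def cumulative_percentage(periods_ms, thresholds_ms):
    """For each threshold, the percentage of process cycles whose period
    was at or below it -- the quantity plotted in the cumulative
    percentage graphs."""
    n = len(periods_ms)
    return {t: 100.0 * sum(1 for p in periods_ms if p <= t) / n
            for t in thresholds_ms}

# Illustrative data: mostly near the 2 ms nominal period, with two
# delayed cycles of the kind that cause visible lamp flicker.
periods = [2.0, 2.1, 2.0, 4.5, 2.2, 9.0, 2.0, 2.1, 2.0, 2.3]
cp = cumulative_percentage(periods, [2.5, 5.0, 10.0])
assert cp[2.5] == 80.0    # 80% of cycles within 2.5 ms
assert cp[5.0] == 90.0    # one cycle beyond twice the nominal period
assert cp[10.0] == 100.0
```

A curve that climbs steeply to 100% near the nominal period indicates a deterministic process; a long tail, like the one introduced by the real-time supervisory process, shows up as cycles counted only at the larger thresholds.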

Having rejected the use of real-time enhancements to the supervisory, user space process, the question remained whether any real-time enhancements were required of the Linux kernel itself to improve the determinism of the low level, hardware control process. These would include modifications to the kernel configuration, such as low latency and preemption options, or the addition of ancillary subsystems such as RTAI or RTLinux. To allow a judgment to be made based on the experimental results, Table 17 summarizes the analysis of the real-time requirements identified in the methods for the individual subsystems in terms of the number of failed requirements per game.

Data Set (Period, Load, RT)   Games Count   Min       Mean      Max       Standard Deviation
500 Hz   All    No              95          1         1.1       2         0.29
500 Hz   Idle   No              67          1         1.1       2         0.29
500 Hz   Mod    No              28          1         1.1       2         0.31
500 Hz   Full   No               0          No Data   No Data   No Data   No Data
500 Hz   All    Yes              2          1         1         1         0
333 Hz   All    No             142          0         0.94      3         0.57
333 Hz   Idle   No              75          0         0.85      3         0.58
333 Hz   Mod    No              53          0         0.98      2         0.57
333 Hz   Full   No              14          1         1.2       2         0.41
333 Hz   All    Yes             18          1         1.56      3         0.6
250 Hz   All    No             120          1         1.8       4         0.83
250 Hz   Idle   No              92          1         1.7       4         0.73
250 Hz   Mod    No              27          1         2.1       4         1
250 Hz   Full   No               1          2         2         2         0
250 Hz   All    Yes              2          2         2.5       3         0.5

Table 17. Number of Failed Real-Time Requirements Per Game

It is clear from these results that the stock Linux kernel supplied with Fedora Core 2 was adequate. The determining factor for the basic control action process cycle period was clearly the SCR gate triggering: a workqueue delay of four milliseconds or more resulted in noticeable flicker in the feature lamps being illuminated. Figure 38 depicts the reason for this phenomenon, which has been discussed previously in the feature lamp results.

Figure 38. Ideal and actual SCR triggering

Lamp flicker was also noted whenever POSIX FIFO real-time scheduling priority and memory locking were granted to the supervisory process. The root cause was believed to be delay in the kernel timer itself, because the real-time priority process was allowed to complete before the timer interrupt was serviced. A comparison of the cumulative percentage graphs for three and four millisecond workqueue delays (333 Hertz and 250 Hertz process frequencies), shown in Figure 39, supports this hypothesis.

Figure 39. Comparison of cumulative percentage graphs

Closely related was the switch matrix operation. Research revealed that the original Bally switch detection process relies on one sample (low then high) at 120 Hertz for quick solenoid response and two samples (low then two consecutive highs) for valid switch detection (Bally Corporation, 1977). The latter entails an overall 60 Hertz sampling rate. Requiring only one sample for valid switch detection (low then high), the reverse engineered solution compared favorably despite the slower overall sampling rate resulting from the trans-period multiplexing strategy. With a three millisecond workqueue delay and five rows to multiplex between workqueue process executions, the overall sampling rate was 67 Hertz. Besides the anecdotal evidence of the experimenter and the compelling completeness of the valid switch detection histories, a test was developed to quantitatively measure the performance of the switch matrix by looking for missed detections of a rapidly repeating switch. Evel Knievel has two spinners that represent the fastest repeating switch action in the system, and a method was developed to analyze spinner sets for missed detections. Because not every game included a good, long spin, the percentage of missed detections out of analyzed detections within a single game was not necessarily instructive on its own. Therefore, Table 18 shows both the per game and cumulative percentage of missed detections out of all analyzed detections.

Data Set (Period, Load, RT)   Games Count   Percentage of Missed Repeating   Cumulative Percentage of Missed
                                            Switch Detections Per Game       Repeating Switch Detections
500 Hz   All    No              95           8.42%                            7.42%
500 Hz   Idle   No              67           7.46%                            7.23%
500 Hz   Mod    No              28          10.71%                            8.18%
500 Hz   Full   No               0          No Data                          No Data
500 Hz   All    Yes              2          No Data                          No Data
333 Hz   All    No             142           8.45%                            5.06%
333 Hz   Idle   No              75           2.67%                            3.65%
333 Hz   Mod    No              53          15.09%                            6.10%
333 Hz   Full   No              14          14.29%                            6.09%
333 Hz   All    Yes             18          33.33%                            7.86%
250 Hz   All    No             120           2.50%                            3.40%
250 Hz   Idle   No              92           1.09%                            3.15%
250 Hz   Mod    No              27           7.41%                            4.34%
250 Hz   Full   No               1          No Data                          No Data
250 Hz   All    Yes              2          50.00%                           15.85%

Table 18. Percentage of Missed Repeating Switch Detections
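The 67 Hertz per-switch sampling rate cited above follows directly from the multiplexing scheme: each workqueue execution strobes one row, so a given switch is revisited only once every five executions.

```python
def overall_sampling_hz(workqueue_delay_ms, rows):
    """Per-switch sampling rate when one row is strobed per workqueue
    execution and `rows` rows are multiplexed in rotation."""
    return 1000.0 / (workqueue_delay_ms * rows)

# 3 ms delay (333 Hz process frequency) across 5 rows -> ~67 Hz per
# switch, versus Bally's 60 Hz effective valid-detection rate.
assert round(overall_sampling_hz(3, 5)) == 67
# At a 2 ms delay (500 Hz) the per-switch rate rises to 100 Hz.
assert overall_sampling_hz(2, 5) == 100.0
```

This also clarifies the apparent paradox in Table 18: raising the process frequency raises the per-switch sampling rate, yet the faster frequencies showed more missed spinner detections, pointing at process-period jitter rather than the nominal rate as the limiting factor.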

As expected, the percentage of missed detections was higher for higher system loads. However, the slower, rather than the faster, process frequencies resulted in fewer missed repeating switch detections.

The final result to consider is the relative amount of CPU time consumed by the replacement system. As periodic processes running on a single CPU system, the execution time of the kernel module and the supervisory process together can be expressed in terms of duty cycle. Since the digital displays remain to be implemented, a sufficient cushion should be available for future growth. Tables 19 and 20 present the duty cycles calculated from their individual measured execution times; the actual amount must be higher due to the additional overhead entailed by the kernel mechanisms handling their scheduling.

Data Set (Period, Load, RT)   Games Count   Min     Mean   99.999% Threshold   Max    Standard Deviation
500 Hz   All    No              95          1.1%    3.0%   6.4%                11%    0.82
500 Hz   All    Yes              2          1.1%    2.8%   6.8%                9.2%   0.90
333 Hz   All    No             142          0.36%   1.9%   4.3%                7.8%   0.58
333 Hz   All    Yes             18          0.36%   1.7%   5.1%                8.1%   0.69
250 Hz   All    No             120          0.55%   1.5%   2.9%                5.9%   0.43
250 Hz   All    Yes              2          0.54%   1.4%   3.6%                4.3%   0.52

Table 19. CPU Duty Cycle of Kernel Module Process

Data Set (Period, Load, RT)   Games Count   Min     Mean   99.999% Threshold   Max      Standard Deviation
500 Hz   All    No              95          0.81%   4.7%   29%                 108%     1.2
500 Hz   All    Yes              2          1.3%    4.7%   64%                 65%      2.6
333 Hz   All    No             142          0.49%   3.8%   111%                10630%   3.1
333 Hz   All    Yes             18          1.7%    5.1%   761%                10453%   36
250 Hz   All    No             120          0.67%   2.7%   38%                 693%     1.5
250 Hz   All    Yes              2          2.4%    2.7%   33%                 34%      1.3

Table 20. CPU Duty Cycle of Supervisory Control Process

A value over 100% for many of the supervisory process statistics beyond the 99.999% threshold indicates that the operation failed to execute within its deadline. However, it can be assumed that implementation of the digital displays will mainly affect the kernel module execution time, and in the worst cases that value never exceeded 10%. It would seem advisable to use the 333 Hertz process frequency over 500 Hertz to minimize it.
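The duty cycle figure used throughout Tables 19 and 20 is simply execution time over period; the helper below shows the calculation with illustrative numbers, not logged data.

```python
def duty_cycle(execution_us, period_us):
    """Fraction of each period spent executing, as a percentage.
    Values over 100% mean the work missed its deadline."""
    return 100.0 * execution_us / period_us

# A 60 microsecond kernel module pass in a 2000 us (500 Hz) period:
assert duty_cycle(60, 2000) == 3.0
# An overrun: 2500 us of work in a 2000 us period exceeds 100%,
# the situation flagged for the supervisory process statistics.
assert duty_cycle(2500, 2000) > 100.0
```

Because the period shrinks as the process frequency rises, the same execution time costs proportionally more duty cycle at 500 Hertz than at 333 Hertz, which is the basis for preferring the slower frequency.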

CHAPTER V. CONCLUSIONS

The significance of this study arises from the realization that reverse engineering scholarship addressing microcomputer-based control units has hitherto focused on non-commercial, low-production, high-cost military systems, for the very good reason that reverse engineering is only justified when no commercial, off-the-shelf replacement is available for a given part and the system must be preserved at any cost. This study showed that valuable information can be gleaned from the United States Patent Office, even when a technology is decades old.

When data must be developed to recreate original design specifications, a black box methodology can capture the functional requirements of a microcomputer-based control unit by casting it in a process control model: every unit can be viewed in terms of the function of each of its electrical connections to the rest of the system in which it resides. The study demonstrated that complex process control solutions can be reverse engineered using the Linux 2.6 kernel without employing any external interrupts or kernel enhancements like RTLinux and RTAI. Results can be economically obtained using self-generated logs tied to the processor's Time Stamp Counter without additional apparatus. Correct game play can be intuited from the individual performance histories and summary statistics without recourse to a user survey.

The approach synthesized a number of future applications of reverse engineering proposed by Ingle a decade earlier: a systems approach using integrated systems to measure and test components, sharing information among a multidisciplinary, distributed community, and harnessing niche business opportunities based on flexible manufacturing of one-of-a-kind items (Ingle, 1994). Just as the advent of the Linux 2.6 kernel led to a new way of solving control problems using a high frequency kernel workqueue process, the growing popularity of open source software among hobbyists and professionals creates similar opportunities for niche operations that could not be sustained in the context of a traditional, for-profit business.

The selection of the Bally AS 2518 pinball MPU also presents a solution to a philosophical dilemma inherent in technology education. Educators struggle to find instructional examples that are intrinsically interesting embodiments of the concepts: state of the art equipment is too complicated, while older equipment, though comprehensible, has little relevance. Computer simulations of physical systems and temporary laboratory configurations likewise provide little payoff apart from their pedagogical function, and are quickly forgotten. A pinball machine reverse engineering project can be used to introduce the concepts of electricity and electronics, process control, real-time computing, design science, and project management. Its completion yields a return on investment by saving an otherwise doomed unit and returning it to daily use.

Discussion

This final section will defend the validity of the results against a number of challenges and suggest future applications of the thesis. The most serious criticism of the project is that the method of iterative prototype development, while achieving the objectives laid out in the first chapter, does not guarantee the feasibility of completing the task of reverse engineering the AS 2518 MPU. It is possible that controlling the digital displays is not feasible using the scheme of the high frequency kernel workqueue process, despite the results concerning its measured, worst case duty cycle. Table 8 showed that the original MPU timing requirement for the digital displays was 360 Hertz. If, for instance, a 500 Hertz or even a 1000 Hertz process frequency is required to service them, so much CPU time may be consumed by the kernel module, due to the increased number of output operations, that the system will slow to a crawl. The response to this charge is that a fourth iteration of the replacement system may indeed have to adopt an external interrupt to meet the timing requirements of the digital displays. This would weaken the result of demonstrating a complete process control solution without the use of external interrupts or real-time enhancements to the Linux kernel like RTLinux and RTAI; however, it would remain an empirical question whether the latter enhancements were needed. The project can also be criticized for ignoring the synchronization between the initiation of control actions, like triggering feature lamps and momentary solenoids, and the zero crossing of the power supply. The Bally Theory of Operations states that this was done to prolong the life of the components. Therefore, a solution should be sought that allows the control process to synchronize with the power supply.

Another type of criticism involves the methods for prototype testing. It is obviously very poor practice to design a product, whether through reverse engineering or an ordinary development cycle, without seeking feedback from the end users. A user survey instrument was originally contemplated during the prescreening stage of this thesis, namely the proposal; however, the logistical overhead of complying with the University's human subjects policy was deemed too costly in terms of the expected benefit that would have been derived. It was for this reason, moreover, that the data recorder and automated analysis of test bed performance were developed in its stead. The charge could still be made that the data recorder should have involved some form of external measurement, and not relied solely on the prototype system itself to collect data.

It has been argued already that the completion of games in itself conveyed validity to the solution. The tacit acknowledgment is that the experimenter's subjective judgment was a reliable arbiter of the fine grained details, and that a programmatic analysis of game events could confirm or deny that the anticipated control action was actually applied. The results concerning the relationship between lamp flicker and process frequency are not well grounded in the data. It was initially thought that a correlation could be made between flicker and skewness, but without an empirical measure of flicker any such test is useless. The statement was simply made that flicker was noted whenever the process frequency was 250 Hertz or the supervisory process was granted real-time scheduling priority. This criticism is deserved: instrumentation should be developed to quantitatively measure the intensity of lamp illumination or the degree of flicker. Only then would it be meaningful to conduct hypothesis tests to establish whether there is a correlation between process jitter and lamp performance.

A third type of criticism attacks the soundness of the economic justification for the project. While it may be possible to produce pinball machine reverse engineering kits for $150, it is mere speculation that a return on investment can be achieved using a distributed labor model based on the pro bono efforts of the open source developer community. It is likewise a matter of psychological speculation that anybody would undertake the many hours of work to replicate the experiment in order to learn about electronics and computer technology, and so discount the labor due to that parallel objective. The response to this objection is that the full experiment has yet to be tried: it may never ignite the interest of a large number of hobbyists and recede into oblivion like the Pinball Player Project, or it may be propelled into the spotlight by way of a posting on Slashdot news. This response also answers the final criticism, that the reverse engineering methodology based on functional analysis of the electrical connections between the microcomputer-based control unit and the rest of the system to which it belongs is not extensible to other types of devices besides Bally pinball machines.

Acting on these criticisms informs future directions for study. Clearly the completion of the reverse engineering project to implement control of the digital displays precedes attempts to popularize the results. If this can be achieved, then the project can move into the fourth stage of Ingle's reverse engineering process, implementation. Well-placed publications can attract hobbyists and educators to repeat the experiment with different pinball machine models and other types of control units. A small business venture could be commissioned to produce and distribute kits containing all the necessary hardware components to complement the software already made freely available on Sourceforge.

REFERENCES

Alltek Systems. (2004). The universal Bally/Stern replacement MPU. Retrieved April 24, 2004, from http://www.allteksystems.com.

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th edition). Washington, DC: American Psychological Association.

Artwick, Bruce A. (1980). Microcomputer interfacing. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Averett, Steven. (2003). Jane Ammons: Reclaiming the digital dump. Industrial Engineer, 35(9), 32-33.

Bally Corporation. (1977). Bally electronic pinball games theory of operation. Chicago: Bally Corporation.

Bally Manufacturing Corporation. (1980). U.S. Patent No. 4,198,051. Washington, DC: U.S. Patent and Trademark Office.

Bally Manufacturing Corporation. (1983). U.S. Patent No. 4,408,762. Washington, DC: U.S. Patent and Trademark Office.

Bally Pinball Division. (1977). Evel Knievel installation and general game operation instructions, Game 1094-E. Chicago: Bally Manufacturing Corporation.

Bateson, Robert N. (2002). Introduction to control system technology (seventh edition). Columbus, OH: Prentice-Hall.

Behrens, Brian C. & Levary, Reuven R. (1998). Practical legal aspects of software reverse engineering. Communications of the ACM, 41(2), 27-29.

Boondog Automation. (1998). 8255 IBM PPI PC interface card. Retrieved May 1, 2003 from http://www.boondog.com/tutorials/8255/8255.htm.

Brainex. (2004). Compilation of pinball machine patents. Provo, UT: Braindex.com.

Brooks, Frederick P., Jr. (1995). The mythical man-month: Essays on software engineering. Boston: Addison Wesley Longman.

Brosky, Steve. (2004). Shielded CPUs: Real-time performance in standard Linux. Linux Journal, 121, 34-38.

Cifuentes, Cristina & Fitzgerald, Anne. (2000). The legal status of reverse engineering of computer software. Annals of Software Engineering, 9, 337-351.

Clark, Dayton. (1997). Progress toward an inexpensive real-time testbed: The pinball player project. In Real-Time Systems Education, II. Los Alamitos, CA: IEEE Computer Society Press.

Clark, Dayton and Goetz, Lawrence. (2001). A testbed for real-time programming in Java: The pinball player project. Retrieved May 1, 2003 from http://www.sci.brooklyn.cuny.edu/~pinball.

Dankwardt, Kevin. (2002). Real time and Linux. Embedded Linux Journal, 7. Retrieved January 13, 2004, from http://www.linuxjournal.com/article.php?sid=5407.

Davidson, Duncan M. (1989). Reverse engineering software under copyright law: The IBM PC BIOS. In Weil, Vivian and Snapper, John W. (Eds.), Owning scientific and technical information: Value and ethical issues (pp. 147-168). New Brunswick, NJ: Rutgers University Press.

Fairchild Semiconductor. (2000). DM74LS154 4-line to 16-line decoder/demultiplexer. Retrieved April 20, 2004 from http://www.fairchildsemi.com.

Farrell, S., Hesketh, R. P., Newell, J. A., et al. (2001). Introducing freshmen to reverse process engineering and design through investigation of the brewing process. International Journal of Engineering Education, 17(6), 588-592.

Flower, Gary & Kurtz, Bill. (1988). Pinball: The lure of the silver ball. Secaucus, NJ: Chartwell Books.

Free Software Foundation. (1991). GNU General Public License, version 2. Retrieved August 15, 2003 from http://www.fsf.org/licenses/gpl.html.

Freeman, Edward H. (2002). The Digital Millennium Copyright Act. Information Systems Security, 11(4), 4-8.

Freiberger, Paul and Swaine, Michael. (2000). Fire in the valley: The making of the personal computer (second edition). New York: McGraw-Hill.

Godwin, Mike. (2002). The right to tinker. IP Worldwide, (September 2002), 18.

Graduate College of Bowling Green State University. (2004). Thesis and dissertation handbook. Retrieved November 11, 2004 from http://www.bgsu.edu/colleges/gradcol/tdhandbook.

Grob, Bernard. (1992). Basic electronics (seventh edition). Westerville, OH: Glencoe.

Heim, Michael. (1992). The computer as component: Heidegger and McLuhan. Philosophy and Literature, 16, 304-319.

Heursch, Arnd C., Grambow, Dirk, Roedel, Dirk, & Rzehak, Helmut. (2004). Time-critical tasks in Linux 2.6: Concepts to increase the preemptability of the Linux kernel. 2004 Linux Automation Conference, University of Hanover, Germany. Retrieved December 12, 2004 from http://www.linux-automation.de/konferenz_2004/papers/Arnd_Christian_Heursch-Zeitkritische_Aufgaben_in_Linux_2.6.pdf.

Honan, Peter. (1998). Reverse engineering. Tech Directions, 57(9), 36.

Ingle, Kathryn A. (1994). Reverse engineering. New York: McGraw-Hill.

Keller, Gerald and Warrack, Brian. (2000). Statistics for management and economics (fifth edition). Pacific Grove, CA: Duxbury.

Lancaster, Don. (1996). Reverse engineering. Electronics Now, 67(2), 51-57.

Laurich, Peter. (2004). A comparison of hard real-time Linux alternatives. LinuxDevices.com. Retrieved November 30, 2004 from http://www.linuxdevices.com/articles/AT3479098230.html.

Lewis, Bruce and McConnell, David J. (1996). Reengineering real-time embedded software onto a parallel processing platform. In Proceedings of the Third Working Conference on Reverse Engineering, November 8-10, 1996. Los Alamitos, CA: IEEE Computer Society Press.

Lindsley, Rick. (2003). What's new in the 2.6 scheduler? Linux Journal, 199 (March 2004), 20-24.

Love, Robert. (2003). The Linux process scheduler. InformIT, November 13, 2003. Retrieved January 13, 2004 from http://www.informit.com.

Microprocessor Products Group. (1988). Motorola microprocessor data, volume 1. Austin, TX: Motorola, Inc.

Miller, Joel. (1993). Reverse engineering: Fair game or foul? IEEE Spectrum, 30 (April 1993), 64-65.

Open Source Initiative. (2004). The open source definition, version 1.9. Retrieved March 25, 2004 from http://opensource.org/docs/definition.php.

O'Reilly, Tim. (2004). The open source paradigm shift. Retrieved June 28, 2004 from http://tim.oreilly.com/opensource/paradigmshift_0504.html.

Petit, Daina. (2002). Mr. Pinball pinball list and price guide. Salt Lake City: Mr. Pinball, a division of RRS, Inc.

Plato. (1973). Phaedrus (Walter Hamilton, Trans.). Harmondsworth, Middlesex, England: Penguin Books, Ltd.

Plato. (1999). Phaedrus (Harold North Fowler, Trans.). Cambridge, MA: Harvard University Press.

Ripoll, Ismael, et al. (2002). RTOS state of the art analysis. Retrieved May 4, 2004 from http://www.mnis.fr/opensource/ocera/rtos/book1.html.

Saikkonen, Riku. (2000). Linux I/O port programming mini-HOWTO, version 3.0, 2000-12-13. Retrieved August 1, 2003 from http://www.tldp.org/HOWTO/IO-Port-Programming.html.

Salzman, Peter Jay. (2004). The Linux kernel module programming guide, 2004-05-16, version 2.6.0. Retrieved August 17, 2004 from http://www.tldp.org/LDP/lkmpg/2.6/lkmpg.html.

Schwartz, Mathew. (2001). Reverse engineering. Computerworld, 35(46), 62.

Schweber, William L. (1993). Data communications. Columbus, OH: Macmillan/McGraw-Hill.

Shaw, Alan C. (2001). Real-time systems and software. New York: John Wiley & Sons, Inc.

Silberschatz, Avi, Galvin, Peter, & Gagne, Greg. (2000). Applied operating system concepts. New York: John Wiley & Sons, Inc.

Stallman, Richard. (2002). Linux and the GNU Project. Retrieved April 20, 2004 from http://www.gnu.org/linux-and-gnu.html.

Stankovic, John A. and Ramamritham, Krithi. (1988). Tutorial: Hard real-time systems. New York: Institute of Electrical and Electronics Engineers, Inc.

Tennis, Caleb. (2004). Data acquisition with Comedi. Linux Journal, 124, 80-84.

Uffenbeck, John. (1991). Microcomputers and microprocessors: The 8080, 8085, and Z-80: Programming, interfacing, and troubleshooting. Englewood Cliffs, NJ: Prentice-Hall, Inc.

von Krogh, Georg. (2003). Open-source software development. MIT Sloan Management Review, (Spring 2003), 14-18.

W., E. C. (1982). Bally electronic pinball games theory of operation, F.O. 601-2. Chicago, IL: Bally Corporation.

Weil, Vivian and Snapper, John W. (Eds.). (1989). Owning scientific and technical information: Value and ethical issues. New Brunswick, NJ: Rutgers University Press.

Weinberg, Bill. (2004). Porting RTOS device drivers to embedded Linux. Linux Journal, 126, 40-44.

Welch, Lonnie R., Yu, Guohui, Ravindran, Binoy, Kurfess, Franz and Henriques, Jorge. (1996). Reverse engineering of computer-based control systems. International Journal of Software Engineering and Knowledge Engineering, 6(4), 531-547.

Welling, Luke and Thomson, Laura. (2001). PHP and MySQL web development. Indianapolis: Sams Publishing.

Xenophon. (1992). Socrates' defense (O. J. Todd, Trans.). Cambridge, MA: Harvard University Press.

APPENDIX A. PARTS LISTS

8-Bit ISA I/O Board

Quantity   Part
1          8-Bit ISA Prototype Board (Radio Shack Cat. No. 276-1598)
2          Intel 8255 Programmable Peripheral Interface (PPI)
1          74LS138 3-to-8 Line Decoder
10 feet    25-Pair Category 3 Unshielded Twisted Pair Cable

Interface Circuit Between ISA Board and Pinball Machine

Quantity   Part
1          Breadboard 4.5” x 6.25” (Radio Shack)
1          Solderless Breadboard 2.0” x 3.5” (Radio Shack)
23         330 pF Capacitor
9          470 pF Capacitor
9          820 pF Capacitor
5          1N4148 Diode
1          330 Ω 1/8 W Resistor
5          470 Ω 1/8 W Resistor
40         1.0 KΩ 1/8 W Resistor
8          3.3 KΩ 1/8 W Resistor
1          10 KΩ 1/8 W Resistor
13         47 KΩ 1/8 W Resistor
4          100 KΩ 1/8 W Resistor
1          Red LED
1          7404 Hex Inverter
1          15 Conductor Male Header Pin, 0.025” Square Posts, 0.100” Spacing
1          16 Conductor Male Header Pin, 0.025” Square Posts, 0.100” Spacing
5          8 Conductor Screw Terminal Block, 0.197” Spacing
1          2 Conductor Screw Terminal Block, 0.197” Spacing
2 feet     40-pin IDE Ribbon Cable

APPENDIX B. SOFTWARE PROGRAM CODE

Due to space considerations, the program source code has been linked rather than incorporated into the PDF text. Use the following program name hyperlinks to view the source code.

Program Name: Function

analyze_testbed_output.php: Analyzes a game using the parsed text file output of user_pmrek.exe and the saved system activity records
common_functions.php: Functions shared by the PHP programs
Makefile_pmrek: GNU Make command file to compile the kernel module and executables
pmrek_bash_profile: Appended to the auto-login user's bash profile; calls start_testbed
pmrek.c: Linux 2.6 kernel module for the hardware control process
pmrek.h: Header file containing definitions and data structures
pmrek.sql: MySQL script to create the database, tables, and access permissions
start_testbed: BASH script for running the standalone testbed system; runs testbed.exe and restarts it if terminated for upgrade
testbed.c: Supervisory process for controlling the kernel module, playing Evel Knievel, and logging and analyzing process data; compiles into the executable testbed.exe
testbed_performance.php: Creates summary statistics of all games analyzed
user_pmrek.c: Utility program for parsing the output of testbed.exe, displaying data structure sizes, and simulating operation of the kernel module; compiles into the executable user_pmrek.exe
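The start_testbed entry above describes a relaunch-on-exit wrapper: the supervised binary is restarted whenever it terminates for an upgrade. The following is a hypothetical sketch of that pattern only, not the thesis's actual script; the function name run_until_clean_exit and the demonstration stub are illustrative inventions, and the real script supervises testbed.exe.

```shell
#!/bin/bash
# Sketch of the restart-on-exit supervision pattern described for
# start_testbed. A nonzero exit status is treated as a termination for
# upgrade (or a crash), and the binary is relaunched, so a freshly
# compiled testbed.exe replacing the old file is picked up automatically.
run_until_clean_exit() {
  while ! "$@"; do
    sleep 1   # brief pause before relaunching the (possibly upgraded) binary
  done
}

# The real script would run something like: run_until_clean_exit ./testbed.exe
# Demonstration with a stub that "crashes" twice before exiting cleanly:
launches=0
stub() {
  launches=$((launches + 1))
  [ "$launches" -ge 3 ]   # nonzero exit (forcing a restart) until third launch
}
run_until_clean_exit stub
echo "launches: $launches"   # prints "launches: 3"
```

Because the loop only tests the exit status, the same wrapper works for any supervised process; a clean (zero) exit is the one signal that shuts the testbed down deliberately.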

APPENDIX C: OUTPUT OF ANALYTIC PROGRAMS

Due to space considerations, the following data files have been linked rather than included in this PDF document. Use the file name hyperlinks to view them.

Summary Data: testbed_performance_20050614.html

Dump of MySQL Database: pmrek_dump_20050622.sql

Period   Load   RT    Games Count   Representative Individual Game Analysis

500 Hz Idle No 67 analyze_testbed_output_20050415_163817.html

500 Hz Mod No 28 analyze_testbed_output_20050410_160149.html

500 Hz Full No 0 N/A

500 Hz All Yes 2 analyze_testbed_output_20050203_205645.html

333 Hz Idle No 75 analyze_testbed_output_20050324_155726.html

333 Hz Mod No 53 analyze_testbed_output_20050416_141240.html

333 Hz Full No 14 analyze_testbed_output_20050120_200549.html

333 Hz All Yes 18 analyze_testbed_output_20050120_204313.html

250 Hz Idle No 92 analyze_testbed_output_20050416_155108.html

testbed_out_20050416_155108.txt (first 8.5 seconds of data log)

250 Hz Mod No 27 analyze_testbed_output_20050205_101005.html

250 Hz Full No 1 analyze_testbed_output_20050210_155111.html

250 Hz All Yes 2 analyze_testbed_output_20050203_221126.html