INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

University Microfilms International, A Bell & Howell Information Company, 300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA, 313/761-4700, 800/521-0600

Order Number 9120721

Visual data acquisition and computer interpretation

Russ, Keith Mitchell, Ph.D.

The Ohio State University, 1991

UMI, 300 N. Zeeb Rd., Ann Arbor, MI 48106

NOTE TO USERS

THE ORIGINAL DOCUMENT RECEIVED BY U.M.I. CONTAINED PAGES WITH PHOTOGRAPHS WHICH MAY NOT REPRODUCE PROPERLY.

THIS REPRODUCTION IS THE BEST AVAILABLE COPY.

VISUAL DATA ACQUISITION

AND COMPUTER INTERPRETATION

A Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Keith Mitchell Russ

* * * * *

The Ohio State University

1991

Dissertation Committee:
Dr. Robert S. Brodkey
Dr. Umit S. Ozkan
Dr. Manjula B. Waldron

Approved by: Adviser, Department of Chemical Engineering

To my family

ACKNOWLEDGEMENTS

I would like to express my appreciation to the Department of Chemical Engineering at OSU for their support during this work, and especially to Dr. Robert S. Brodkey for his guidance and friendship throughout my graduate experience, and to Dr. Jacques L. Zakin for the financial support he always managed to find for me.

I would also like to give thanks to everybody in the department, which has been an excellent family over the years, and to those outside the department who have been excellent comrades in whatever odd moments were left over from my time on this work: Loyde Hales, S.V.S. Jagannadh, Lavinia Hales, Rob Carrierre, Fritz Rauschenberg, Mike Richmann, and the whole gang at OSUMGA.

VITA

January 27, 1964 ...... Born, Athens, Ohio
1985 ...... B.S.Ch.E., Ohio University, Athens, OH
1988 ...... M.S., Chemical Engineering, Department of Chemical Engineering, The Ohio State University, Columbus, OH
1988- ...... Graduate Research Associate, Department of Chemical Engineering, The Ohio State University, Columbus, OH

FIELDS OF STUDY

Major Field: Chemical Engineering

Minor Fields: Fluid Mechanics, Computer Imaging, Computer Graphics

TABLE OF CONTENTS

ACKNOWLEDGEMENTS ...... iii

VITA ...... iv

LIST OF TABLES ...... viii

LIST OF FIGURES ...... ix

LIST OF PLATES ...... xii

ABSTRACT ...... xiv

CHAPTER PAGE

I INTRODUCTION ...... 1

II BACKGROUND REVIEW ...... 4

2.1 Introduction ...... 4
2.2 Imaging ...... 4
2.2.1 Identification ...... 5
2.2.2 Visualization ...... 6
2.2.3 Image Acquisition ...... 7
2.2.4 Digitization ...... 8
2.2.5 Image Processing ...... 10
2.2.6 Image Analysis ...... 13
2.3 Fluid Mechanics ...... 17
2.3.1 Introduction ...... 17
2.3.2 Coherent Structures ...... 17
2.3.3 Probes and Data Acquisition ...... 18
2.3.4 Laser Doppler Anemometry ...... 18
2.3.5 Flow Visualization ...... 19
2.4 Catalysis ...... 23
2.4.1 Introduction ...... 23
2.4.2 Quantitative Analysis ...... 23

III EXPERIMENTAL ...... 28

3.1 Introduction ...... 28
3.2 Imaging Facilities ...... 28
3.2.1 Eikonix 78/99 ...... 29
3.2.2 Computer Hardware ...... 29
3.2.3 Dipix Aries-III ...... 30
3.3 Laminar Flow ...... 31
3.3.1 Identification ...... 31
3.3.2 Visualization ...... 31
3.3.3 Acquisition ...... 33
3.3.4 Digitization ...... 33
3.3.5 Image Processing ...... 33
3.4 Turbulent Flow ...... 36
3.4.1 Identification ...... 36
3.4.2 Visualization ...... 41
3.4.3 Acquisition ...... 42
3.4.4 Digitization ...... 42
3.4.5 Image Processing ...... 42
3.5 Catalysis ...... 48
3.5.1 Identification ...... 48
3.5.2 Visualization ...... 48
3.5.3 Acquisition ...... 48
3.5.4 Digitization ...... 49
3.5.5 Image Processing ...... 49

IV RESULTS AND DISCUSSION ...... 52

4.1 Introduction ...... 52
4.2 Laminar Flow ...... 52
4.2.1 Particle Identification ...... 54
4.2.2 Particle Tracking ...... 61
4.2.3 3-D Flow Analysis ...... 87
4.2.4 Discussion ...... 94
4.3 Turbulent Flow ...... 95
4.3.1 Particle Identification ...... 95
4.3.2 Particle Tracking ...... 99
4.3.3 3-D Flow Analysis ...... 99
4.3.4 Discussion ...... 108
4.4 Catalysis ...... 109
4.4.1 Initial Work ...... 109
4.4.2 Graphics Display ...... 110
4.4.3 General Program ...... 110
4.4.4 Depth Determination ...... 112
4.4.5 Area Determination ...... 117
4.4.6 Results and Discussion ...... 122
4.5 General Results ...... 123

V CONCLUSIONS ...... 127

VI RECOMMENDATIONS ...... 132

REFERENCES ...... 134

APPENDICES

A. PART_ID.FOR ...... 137

B. DET_VECT.FOR ...... 148

C. CATA.FOR ...... 164

D. IMAGE_HEADER.FOR ...... 223

E. TEXT_FOR_BUTTONS.DAT ...... 225

F. COLORS.DAT ...... 227

LIST OF TABLES

TABLE PAGE
1. Image header file format for Dipix files. 11
2. Results of the tracker on the simulated image, Figure 11. 65
3. Results of the tracker on Figure 9. 84
4. Tracker parameters used for Figure 16(a-b). 85
5. Tracker parameters used for Figure 17(a-b). 86
6. Tracker parameters used for Figure 18(a-b). 88
7. Tracker parameters used for turbulence work [Figures 25(a-b) and 28(a-b)]. 101
8. Areas, normals and ratios for crystals indicated in Plate XXI. 123

LIST OF FIGURES

FIGURE PAGE
1. 3-D visual data acquisition methods. (a) Stereo (small angle) acquisition. (b) Orthogonal (large angle; 90° separation) acquisition. 9
2. 8-connectedness of pixel I(i,j). 16
3. MoO3 crystal faces and directions. 24
4. Photographic simulation of stereo vision by image separation. 26
5. Experimental setup for turbulence work. 43
6. (a) Ideal particle image, infinite domain. (b) Real particle image, digital (discrete) domain. 57
7. Pixel checking order for particle identification. Pixels are checked in the order 1, 2, 3, and 4; if no match is made, pixels A and B are checked simultaneously. 56
8. Particles identified for Plate XV(a) (• first frame; * second frame; + third frame). 59
9. Flowchart for particle identification. 58
10. 3-point simplistic tracker. 62
11. Simulated particle image (• first frame; * second frame; + third frame). 64
12. (a) Particle tracks identified for the simulated image, Figure 11, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the simulated image, Figure 11, vectors only. 66
13. Flowchart for particle tracking. 68
14. (a) Tracker performance for a single pass, for MAXDIS=0.29. (b) Tracker performance for a single pass, for MAXDIS=0.58. (c) Tracker performance for a single pass, for MAXDIS=0.87. (d) Tracker performance for a single pass, for MAXDIS=1.16. 69


15. Effect of varying the number of tracks and particle displacement in simulated images, using multi-pass tracker parameters (correct; not found; wrong). (a) 25 vectors. (b) 50 vectors. (c) 75 vectors. (d) 100 vectors. (e) 125 vectors. (f) 150 vectors. (g) 175 vectors. (h) 200 vectors. (i) 225 vectors. (j) 250 vectors. (k) 275 vectors. (l) 300 vectors. (m) 325 vectors. (n) 350 vectors. (o) 375 vectors. (p) 400 vectors.
16. Effect of varying multi-pass tracker parameters (Table 4), with maximum Δy_max of 15, on simulated images with varying number of tracks and particle displacement (correct; not found; wrong). (a) 25 vectors. (b) 400 vectors.
17. Effect of varying multi-pass tracker parameters (Table 5), with maximum Δy_max of 30, on simulated images with varying number of tracks and particle displacement (correct; not found; wrong). (a) 25 vectors. (b) 400 vectors.
18. Effect of varying multi-pass tracker parameters (Table 6), with maximum Δy_max of 45, on simulated images with varying number of tracks and particle displacement (correct; not found; wrong). (a) 25 vectors. (b) 400 vectors.
19. (a) Particle tracks identified for the particles in Figure 9, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 9, vectors only.
20. Particles identified for Plate XV(b) (• first frame; * second frame; + third frame).

21. (a) Particle tracks identified for the particles in Figure 20, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 20, vectors only.
22. Particles identified for Plate XVII(a) (• first frame; * second frame; + third frame).
23. Rough grid point locations identified for Plate XVII(a).
24. Result of simple processing on rough grid point locations identified in Figure 23.
25. (a) Particle tracks identified for the particles in Figure 22, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 22, vectors only.
26. Particles identified for Plate XVII(b) (• first frame; * second frame; + third frame).
27. Grid points, after processing, for Plate XVII(b).
28. (a) Particle tracks identified for the particles in Figure 26, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 26, vectors only.
29. Geometry notation for the catalysis problem.
30. 3-D area projection to a 2-D area in the xy plane.
31. Concavity possibilities. (a) Horizontally concave. (b) Vertically concave. (c) Both horizontally and vertically concave.
32. Brute-force filling of a simple 2-D concave area [see Figure 31(c)]. (a) Result of horizontal linescanning and filling. (b) Result of vertical linescanning and filling. (c) Summation of (a) and (b), with the common area being the desired fill.

LIST OF PLATES

PLATE
I. (a) Left SEM catalyst image. (b) Right SEM catalyst image.
II. Crystallization furnace.
III. (a) Left flow image, first frame (t = 0), of particles in laminar flow in the crystallization furnace. (b) Right flow image, corresponding to (a).
IV. Digital image of Plate III(a).
V. Binary image of Plate IV.
VI. Binary image of the second frame of the time sequence, t = Δt, for the laminar flow.
VII. Binary image of the third frame of the time sequence, t = 2Δt, for the laminar flow.
VIII. (a) Summation of Plates V, VI, and VII, i.e. left view flow, using Eq. 3.1. (b) Equivalent image for right view flow.
IX. 16mm stereo film frame for turbulent flow, with subarea used for analysis marked.
X. (a) Digital image of Plate IX, left side. (b) Digital image of Plate IX, right side.
XI. Binary image of Plate X(a).
XII. (a) Summation of left hand view, turbulent flow. (b) Summation of right hand view, turbulent flow.
XIII. Digital image of Plate I(a).
XIV. Binary image of Plate XIII. Arrow marks an open-end area of the type that made automatic processing impossible.
XV. (a) Pseudo-color representation of Plate VIII(a). (b) Pseudo-color representation of Plate VIII(b).
XVI. Zoom of Plate XV(a), with identified particles in view.
XVII. (a) Pseudo-color representation of Plate XII(a). (b) Pseudo-color representation of Plate XII(b).
XVIII. Base graphics setup for SEM micrograph analysis. The digital image of Plate I(a) is on the left, and the digital image of Plate I(b) is on the right.

XIX. Area in left image after its corners have been marked, showing fill and guides to the right image.
XX. Right hand image of Plate XIX has now been marked, and the depth output.
XXI. Stereo SEM catalyst image with areas analyzed for Table 8 marked.

VISUAL DATA ACQUISITION AND COMPUTER INTERPRETATION

By

Keith Mitchell Russ, Ph.D.

The Ohio State University, 1991

Professor Robert S. Brodkey, Adviser

ABSTRACT

Simple imaging, or image processing and analysis, was used to determine quantitative information from visualizations of experimental variables. Three-dimensional visual images of three experiments (laminar flow, measuring velocity; turbulent flow, measuring velocity; and catalyst surface structure, measuring 3-D area) were acquired using several camera and lens setups. These images were then digitized for computer access and analysis.

In the laminar and turbulent work, particle identification and tracking algorithms were developed for fast analysis of the images with a fair degree of accuracy (potentially as high as 90% for up to 400 vectors in under a second). 3-D analysis was hampered, however, by view blockage and optical problems in the experiments.

For the catalyst surface structure work, computer graphics were ultimately used to obtain surface edge data, as the images were found to be inappropriate for simple automated analysis. Surface areas and orientations in three dimensions were successfully calculated.

It was concluded that imaging is a good tool for determining the desired experimental quantities (variables) in these simple engineering problems. In addition, it has potential application in many research areas where otherwise unobtainable data can be recorded in a visual medium. Careful attention must be paid in the acquisition of the original images, however, to ensure that programming the computer to calculate the desired quantity is feasible and, if possible, simple.

CHAPTER I

INTRODUCTION

Computers are becoming common tools in both the home and the workplace, and computer processing power and capabilities are advancing by leaps and bounds. Yet even with all this processing power, the ability of the computer to see, to visually respond to its environment, is limited by how well its operator can mathematically decode the visual scene and by the degree to which that model can be programmed into it.

The human brain, as a computer, is immensely powerful. With it a researcher analyzes visual data such as color changes or motion, often by integrating information from the other senses (sound, smell, touch, and taste). The computer can be given the capability of seeing the same scene, often in much greater detail, but it cannot necessarily be programmed to understand what its camera provides.

By recognizing the computer’s limitations, the human researcher can modify the scene so that the computer can more easily understand exactly what that scene represents (i.e., provide more visual clues). This is usually a matter of simplifying the visual data to the minimum necessary to extract the information.

The computer does enjoy the advantage, however, of having a very precise view of the world. Its vision is discrete: a simple 2-D array of numbers, in which any viewed feature exists as a unique pattern. Understandably, it is easier to program the computer to identify simple shapes and features than complex objects, and once programmed it can identify those simple shapes and features tirelessly. From these features can come dimensions, morphology, position, size, brightness, and perhaps color.

The researcher is in a position to take advantage of the computer's ability to extract this information. A simple video digitizer camera could easily have 512 x 512, or about 256,000, data points sampled at one instant in time, in either an area or the 2-D equivalent of a volume. By defining an experiment such that the final desired data is visual, the computer can be programmed to extract that visual data and output the results. If the visual data is instantaneously continuous within the view, such as the color variation of a liquid crystal sheet where every pixel corresponds to a physical-world data point, then the highest realizable data density can be obtained (effectively 256,000 points). If the visual data is instantaneously discrete (a set of distinct features) within the view, such as tracer particle locations in flow, then a significantly lower data density is obtained (two to three orders of magnitude fewer points).
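The density gap between the two regimes can be made concrete with a short calculation; the particle count below is an assumed, illustrative figure, not a measurement from this work:

```python
# Data density of a 512 x 512 digitizer frame under the two regimes
# discussed above.
PIXELS = 512 * 512                  # ~256,000 sample points per frame

# Continuous field (e.g. a liquid crystal sheet): one data point per pixel.
continuous_density = PIXELS

# Discrete features (e.g. tracer particles): one data point per particle.
tracer_count = 300                  # assumed, typical-order particle count
discrete_density = tracer_count

ratio = continuous_density / discrete_density
print(f"{ratio:.0f}x fewer points") # on the order of 10^2 to 10^3, as stated
```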

Whatever the circumstances, the computer can be programmed to analyze that 2-D array and extract the features of interest. The researcher must be prepared to think visually, and to do so at the level of the computer. Simplicity is the key: reduce the visual data to its most fundamental element.

This project involves outlining the entire process of applying visual methods to quantitative data acquisition, from the visual acquisition to the computer interpretation. The specific tasks of this project are:

1. to fully outline the steps of image processing and analysis, as applied to engineering problems.

2. to obtain 3-D visual data for static and dynamic processes, for orthogonal and stereo viewing, for three experiments:

(a) laminar fluid flow (dynamic/orthogonal): the determination of flow velocity, with a time constraint

(b) turbulent fluid flow (dynamic/stereo): the determination of flow velocity, preliminary to rigorous tracking calculations

(c) catalyst surface structure (static/stereo): the determination of 3-D planar area and orientation.

3. to use simple image processing to prepare the images for analysis.

4. to develop the algorithms necessary to reduce the viewable qualitative information into the desired quantitative data. These include

(a) particle identification

(b) particle tracking

(c) 3-D area and orientation determination.

5. to develop the programs necessary to implement the algorithms.

6. to outline directions for further study.

CHAPTER II

BACKGROUND REVIEW

2.1 Introduction

The application of image processing and analysis to chemical engineering science has made limited inroads to date. Visualization of various quantities, on the other hand, has long been used to obtain qualitative information about a particular process, material, or system. Image processing and analysis provides the tools with which to convert that qualitative information into a more useful quantitative form. The entire process, from visualization to the final numerical output of the analysis, is here referred to as imaging, representing all aspects of the technique.

The following background material is divided into three main sections. Section 2.2 deals with common imaging concepts. Sections 2.3 and 2.4 contain information on the two application areas, fluid mechanics and catalysis, respectively.

2.2 Imaging

Fundamentally, imaging can be broken down into six basic steps.


• Identification

• Visualization

• Acquisition

• Digitization

• Image Processing

• Image Analysis

The first three steps are the experimental steps. The fourth and fifth steps, digitization and image processing, are routine data-handling steps. The sixth step is the determination of the actual quantitative data, and to a large degree is different for every application. The six steps are outlined in more detail below.

2.2.1 Identification

Identification is simply the act of identifying the quantity of interest (‘variable’) to be visualized and measured. Both time-varying and static variables are possible.

Static measurements can include particle size, morphology, or assembly-line part inspection. Dynamic measurements can include velocity, temperature, and reaction state. Whatever the quantity of interest, at this step the researcher must have some idea of how the final analysis will be conducted in order to proceed adequately to the next step (visualization).

The particular experiment being set up can range in scale from the very small (scanning electron micrographs) to the very large (space probe photographs).

2.2.2 Visualization

Visualization is the process of making the desired information in the field of view visible, or at least distinguishable from the background. There are several ways of doing this.

The first, and simplest, situation is where the desired information is already visible without further work. Particular examples include particle size analysis, where particles are directly viewable under a microscope, and inspection tasks, where the inspected part usually has different optical patterns than its surroundings.

A second situation is the addition of a reactant. The reactant can directly reflect the process (phenolphthalein in an acid-base titration) or be triggered externally (laser dyes activated by lasers focused in a grid pattern, allowing 2-D velocity data to be obtained). A particularly interesting reactant can be found in biochemical work, where Trypan Blue dye can be added to a colony of E. coli cells. The dye darkens organic matter, and can penetrate the cell membrane when the cell dies and its membrane is no longer able to filter it out, so that live cells appear clear while dead cells are dark blue. Reactants generally reflect a specific instant in the experiment, and are thus not well suited to time-varying quantities.

Third, it is possible to utilize an external sensor. A prime example of this is the use of a liquid crystal sheet to monitor the temperature distribution on the wall of a reactor. The sheet reflects the temperature in an area by translating temperature into a continuous color pattern.

Fourth, non-reactive additives may be a feasible addition to the experiment.

Fluid flow researchers have long used particles, hydrogen bubbles, smoke, and dyes to record flow patterns. In the static problem of floc fiber entanglements, it is possible to do index-of-refraction matching to provide a better view into the center of an immobilized paper floc.

Fifth, it is possible to combine two or more of the above concepts into one visualization technique. Of particular note here is the possibility of embedding liquid crystals (external sensor) in flow particles (non-reactive additive) to simultaneously measure temperature and velocity distributions in fluid flow. This particular experiment was conducted by Wilcox et al. (1986).

Sixth, it is possible to use an energy source outside of the visible light spectrum

- for example, UV light and fluorescent dyes, infrared photography for heat, and x-rays to view interior structure. This category of visualization tends to require specialized equipment, and as it doesn't operate in the normal visible light range it may obscure some qualitative information.

2.2.3 Image Acquisition

Once a quantity is visualized, it remains to acquire that image using film, video, or direct digitization. Each format has inherent advantages and disadvantages.

Film provides the best resolution (dependent on grain size), and time-varying images can be taken using 16mm movies. Film cannot be used in real-time analysis, however, due to the photographic processing required. In addition, film is not kind to optical mistakes and per-run costs are the highest of the three.

Video provides simple, fast, and reasonably accurate recording of visual images.

Most video is limited to the standard video rate, i.e. 30 Hz, and video quality is significantly poorer than that of film. However, the reusability of videotape makes video an ideal system for preliminary work.

Direct digitization bypasses recording media and takes an image directly into the computer. Such techniques tend to be video-camera based, and as such operate at resolutions similar to video. They are dependent on the host computer speed and the access time to the computer RAM for image storage, and as such these systems are usually limited to a small number of images (16, 4, or even only 1) in a usefully short period of time. Direct digitization is perhaps best used in a real-time system where only a small number of images are required to adequately represent the quantity being measured.

In certain experiments, and with certain visualization techniques, it is possible to acquire depth information. As shown in Figure 1, this can be done using two cameras (or two views angled by mirrors to a single camera, i.e. a stereoscopic lens), with the separation of the cameras determining the complexity (and accuracy) of the resultant depth information. Small-angle separation, approximating human stereo vision, allows the resultant image to be viewed through appropriate projectors. This provides excellent qualitative information which may aid in image analysis. Large-angle separation, approximately 90°, offers the best accuracy in the determination of depth, but does not lend itself to operator analysis in 3-D.

Regardless of angle separation, the computer can be programmed to calculate the depth information (via 3-D matching and ray tracing).
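For the small-angle (stereo) case, the depth calculation can be sketched with a parallel-camera pinhole model, where depth follows from the horizontal disparity of a matched feature. This is a simplified stand-in for the matching and ray-tracing procedure mentioned above; the focal length and baseline values are assumed for illustration:

```python
def stereo_depth(x_left, x_right, focal_px, baseline):
    """Depth from horizontal disparity for two parallel cameras:
    Z = f * B / d, where f is the focal length in pixels, B the camera
    separation (baseline), and d the disparity of a matched feature."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline / disparity

# A matched tracer particle seen at column 260 in the left view and 250
# in the right view, with an assumed 1000 px focal length and 0.1 m baseline:
z = stereo_depth(260, 250, focal_px=1000, baseline=0.1)
print(z)   # 10.0 metres in this made-up geometry
```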

2.2.4 Digitization

Digitization is the "routine" act of translating the visual (continuous) image into a computer (discrete) image. There are usually two quantities of interest in digitization: image resolution, which is the density of data points (or, more specifically, pixels, i.e. picture elements, the matrix of which makes up the digitized image); and depth resolution, which is the range of specific (integer) values each pixel can take, usually 8 bits (allowing for 256 distinct grey levels). A variation on the depth is color digitization, which usually specifies three color constituents (red, green, and blue, or generally RGB), each of which exists at the standard depth (usually 8 bit).

Figure 1: 3-D visual data acquisition methods. (a) Stereo (small angle) acquisition. (b) Orthogonal (large angle; 90° separation) acquisition.
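The mapping from a continuous intensity to integer grey levels at a chosen depth resolution can be sketched as follows (a minimal illustration, not tied to any particular digitizer):

```python
import numpy as np

def digitize(field, bits=8):
    """Quantize a continuous field (values in [0, 1]) to integer grey
    levels at the given depth resolution (8 bits -> 256 levels)."""
    levels = 2 ** bits
    return np.clip((field * (levels - 1)).round(), 0, levels - 1).astype(int)

# A smooth 0..1 ramp digitized at 8-bit depth spans grey levels 0..255.
ramp = np.linspace(0.0, 1.0, 512)
grey = digitize(ramp)
print(grey.min(), grey.max())   # 0 255
```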

Video is generally fast, but the image resolution is commonly about 512 x 512 pixels. Slow-scan cameras trade speed (several minutes to digitize a single black and white image is not unusual) for image resolution (slow-scan cameras are often found at resolutions of 2048 x 2048, and 4096 x 4096 is becoming more common).

The resultant images can have a number of different file formats. The file format is usually chosen to mate with the software to be used for digitization and image processing. For this work, the Files-11 format supported by DEC is used by the camera software. The file header, containing information about the image parameters, is usually unique to the software and hardware performing the digitization or display. The header format used is given in Table 1.

2.2.5 Image Processing

Image processing represents a number of algorithms that can be used as tools to modify or otherwise segment the image into the desired classifications for image analysis. These operations can operate on individual pixels uniquely (pixel operations; binary processing, for example), as a group (group operations; most filtering), or on the image as a whole (frame operations; image addition/subtraction).

Table 1: Image header file format for Dipix files.

      STRUCTURE /IMAGE_HEADER/
        UNION
          MAP
            INTEGER*4 UNKNOWN1
            INTEGER*2 UNKNOWN2
            INTEGER*2 BYTES_PER_PIXEL
            INTEGER*4 NUM_LINES
            INTEGER*4 NUM_PIXELS
            INTEGER*4 UNKNOWN3
            INTEGER*4 CCT_START_PIXEL
            INTEGER*4 CCT_START_LINE
            INTEGER*4 UNKNOWN4(2)
            REAL*4 UTM_PIXEL_WIDTH
            REAL*4 UTM_PIXEL_HEIGHT
            REAL*4 UTM_EASTING
            REAL*4 UTM_NORTHING
            INTEGER*2 UTM_ZONE
            INTEGER*2 UNKNOWN5(5)
            REAL*4 MIN_PIXEL
            REAL*4 MAX_PIXEL
            REAL*4 MEAN_PIXEL
            REAL*4 STD_DEV
            INTEGER*2 UNKNOWN6(34)
            INTEGER*4 DATA_BLOCK_COUNT
            INTEGER*4 HEADER_BLOCK_COUNT
            INTEGER*4 HISTORY_BLOCK_COUNT
          END MAP
C
          MAP
            BYTE RAW(512)
          END MAP
        END UNION
      END STRUCTURE
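For readers on a modern machine, the same header layout can be unpacked from the 512-byte block directly. Two assumptions are made for illustration: little-endian byte order and IEEE REAL*4 floats (the original VAX files stored F-floating values, which would need an extra conversion step):

```python
import struct

# Field layout transcribed from the Dipix IMAGE_HEADER declaration in
# Table 1; the first 160 bytes of the 512-byte header block are meaningful.
HEADER_FMT = "<i 2h 5i 2i 4f h 5h 4f 34h 3i"

# Index of each named field in the unpacked tuple (UNKNOWN* fields and
# padding arrays are skipped).
FIELD_INDEX = {
    "BYTES_PER_PIXEL": 2, "NUM_LINES": 3, "NUM_PIXELS": 4,
    "CCT_START_PIXEL": 6, "CCT_START_LINE": 7,
    "UTM_PIXEL_WIDTH": 10, "UTM_PIXEL_HEIGHT": 11,
    "UTM_EASTING": 12, "UTM_NORTHING": 13, "UTM_ZONE": 14,
    "MIN_PIXEL": 20, "MAX_PIXEL": 21, "MEAN_PIXEL": 22, "STD_DEV": 23,
    "DATA_BLOCK_COUNT": 58, "HEADER_BLOCK_COUNT": 59,
    "HISTORY_BLOCK_COUNT": 60,
}

def parse_dipix_header(raw):
    """Unpack the named fields from a 512-byte Dipix image header."""
    values = struct.unpack_from(HEADER_FMT, raw)
    return {name: values[i] for name, i in FIELD_INDEX.items()}
```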

Pixel operations

A binary image, which consists of pure black and pure white (usually defined as 0 and 1, respectively), is the result of a pixel operation in which each pixel is compared to a cut-off value; everything below is set to one extreme (i.e., black) and everything above is set to the opposite extreme (i.e., white). A variation on this operation sets everything below a certain cut-off value to one extreme (usually black) while leaving all other pixels intact; this is thresholding, which can be a very powerful (and easily misused) tool.

In both of these operations, the cut-off or threshold value must be specified. If the image is well segmented, the cut-off may be obtainable from the histogram of the image. The histogram is simply a frequency-of-occurrence versus pixel grey level plot for the image, and the cut-off can be found by determining which pixel levels belong to the quantities of interest in the image.
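The two pixel operations just described can be sketched as follows; the midpoint cut-off rule at the end is an assumed stand-in for reading the cut-off off the histogram by eye, not a method from this work:

```python
import numpy as np

def binarize(image, cutoff):
    """Pixel operation: pixels below the cutoff become 0 (black), the
    rest become 1 (white), yielding a binary image."""
    return (image >= cutoff).astype(np.uint8)

def threshold(image, cutoff):
    """The thresholding variant: pixels below the cutoff go black, while
    all other pixels are left intact."""
    out = image.copy()
    out[out < cutoff] = 0
    return out

def midpoint_cutoff(image):
    """Crude automatic cutoff: midway between the darkest and brightest
    occupied grey levels (assumes a well-segmented image)."""
    return (int(image.min()) + int(image.max())) // 2
```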

Group operations

Group operations are more complex than pixel operations; generally they are discrete functions of a limited neighborhood surrounding the pixel being operated on. Filtering (sometimes referred to as masking) algorithms represent a majority of the group operations. Filters can simulate, in a discrete domain, common mathematical operators. For example, the Laplace filter is a discrete implementation of the Laplacian operator ∇² = ∂²/∂x² + ∂²/∂y². Other common filters are the Gaussian, high-pass, and low-pass. Details on filtering can be found in almost any book on image processing or computer vision (e.g., Levine (1985); Horn (1986); Young and Fu (1986); Schalkoff (1989)).
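A minimal sketch of the Laplace filter as a 3x3 discrete convolution (one common kernel choice; borders are simply left at zero here):

```python
import numpy as np

# A standard 3x3 discrete Laplacian kernel.
LAPLACE = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]])

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to every interior pixel; border pixels are
    left at zero for simplicity (real filters pad or mirror instead)."""
    out = np.zeros_like(image, dtype=float)
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(patch * kernel)
    return out
```

On a flat region the Laplace response is zero; an isolated bright pixel produces a strong negative response at its center, which is what makes the filter useful for edge and spot detection.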

A second group operation is median filtering. Median filtering is a group equivalent to thresholding, except that the cut-off is dictated by the pixels within the group undergoing the operation. A center pixel below the group median is set to black; above the group median, the center pixel remains unchanged.
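The median-based pixel test described above can be sketched as follows. Note this follows the text's thresholding-style definition, which differs from the classical median filter (the classical form replaces the center pixel with the neighborhood median itself):

```python
import numpy as np

def median_threshold_filter(image):
    """Set each interior pixel to black if it falls below the median of
    its 3x3 neighborhood; otherwise leave it unchanged (the variant
    described in the text, not the classical median replacement)."""
    out = image.copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            med = np.median(image[i - 1:i + 2, j - 1:j + 2])
            if image[i, j] < med:
                out[i, j] = 0
    return out
```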

Frame operations

Frame operations are perhaps the easiest image processing operations to visualize. Image addition is either the addition of a constant to every pixel, or the addition of two equal-sized images. Image multiplication is the multiplication of every pixel by a constant, or the multiplication of each pixel in one image by the corresponding pixel in a second image. Image inversion is simply subtraction of the image from the maximum pixel value (often 255). Operations can be combined to highlight whatever segment of the image is of interest.
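These frame operations can be sketched for 8-bit images as follows; saturating (clipping) the results to the 0-255 range is one common implementation choice, assumed here:

```python
import numpy as np

def add_constant(image, c):
    """Frame operation: add a constant to every pixel, clipped to 8 bits."""
    return np.clip(image.astype(int) + c, 0, 255).astype(np.uint8)

def add_images(a, b):
    """Frame operation: pixel-by-pixel addition of two equal-sized images."""
    return np.clip(a.astype(int) + b.astype(int), 0, 255).astype(np.uint8)

def invert(image, max_value=255):
    """Image inversion: subtract the image from the maximum pixel value."""
    return (max_value - image.astype(int)).astype(np.uint8)
```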

2.2.6 Image Analysis

Image analysis represents the brunt of the work necessary to obtain real quantitative data. Image analysis starts with the visualization of the experiment; proper visualization simplifies analysis.

Each individual experiment contends with its own algorithms for quantization; however, three specific concepts or areas can be covered here. These are particle identification, 8-connectedness, and knowledge-based systems.

Particle Identification

Particle identification is essentially programmed feature extraction in the image. The basic algorithm is dependent on boundary determination in the image.

The threshold at which the boundary occurs can be determined by histogramming or by trial and error. In either case a distinct boundary should be available around each object in the image.

In his Ph.D. dissertation, Chang (1983) presented a moderately simple particle identification algorithm. The algorithm utilizes the knowledge that the viewed particles (in this case, tracer particles in a mixing vessel) will appear as solid, bright images against a dark background. Two adjacent rows of the image are retained for analysis at any given moment. Within those two rows, three possible cases for particle identification were characterized:

• particle appearance: a boundary is encountered that cannot be part of an existing particle.

• particle continuance: a boundary is encountered that is directly connectable to a known boundary/particle.

• particle disappearance: no boundary is encountered to correspond with known particles.
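A toy sketch of the two-row bookkeeping behind these three cases, with runs of bright pixels standing in for particle cross-sections. The run representation, the threshold value, and the one-column connectability rule are assumptions for illustration, not Chang's actual algorithm.

```python
def runs(row, thresh=128):
    """Extract bright runs (start, end inclusive) from one image row."""
    out, start = [], None
    for i, v in enumerate(row):
        if v >= thresh and start is None:
            start = i
        elif v < thresh and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(row) - 1))
    return out

def classify(prev_runs, cur_runs):
    """Label each current run as 'appearance' or 'continuance', and each
    unmatched previous run as 'disappearance'. Runs touching within one
    column are treated as connectable (an 8-connectivity-style rule)."""
    events, matched = [], set()
    for s, e in cur_runs:
        hit = [i for i, (ps, pe) in enumerate(prev_runs)
               if s <= pe + 1 and e >= ps - 1]
        if hit:
            events.append(("continuance", (s, e)))
            matched.update(hit)
        else:
            events.append(("appearance", (s, e)))
    for i, r in enumerate(prev_runs):
        if i not in matched:
            events.append(("disappearance", r))
    return events
```
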

Chang’s procedure is significantly more detailed than that used in this work, and the reader is referred to Chang’s dissertation for the complete algorithm (Chang (1983)).

8-connectedness

8-connectedness is simply the recognition that a boundary element is connected to the next boundary element in sequence as one of the eight adjacent pixels. 8-connectedness has roots in region growing/splitting and merging algorithms, which are used to transform an unclassified input image into an output image composed of disjoint regions (pixel sets). For a given pixel I(i,j) as shown in Figure 2, the 8-connected neighbors are shown by “X”.

8-connectedness allows the programmer to develop the boundary edge by effectively walking around the object; Racca and Dewey (1988) utilized a form of 8-connectedness for particle identification. 8-connectedness requires significantly more rows of the image to be maintained in the computer’s memory as arrays, however, and sometimes cannot be adequately programmed on operating systems with insufficient RAM (virtual or otherwise) or array size limitations (stack/pointer restrictions or RAM segmentation).
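The eight neighbor offsets, and a simple use of them to collect candidate boundary pixels, can be sketched as follows. This is a simplified illustration under the stated assumptions, not the cited boundary-walking algorithms.

```python
# The eight offsets that define 8-connectedness around a pixel (i, j).
NEIGHBORS_8 = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]

def boundary_pixels(img):
    """Return foreground pixels that touch the background (or the image
    border) through any of their eight neighbors -- the candidate
    boundary elements a tracing routine would walk."""
    rows, cols = len(img), len(img[0])
    edge = []
    for i in range(rows):
        for j in range(cols):
            if not img[i][j]:
                continue
            for di, dj in NEIGHBORS_8:
                ni, nj = i + di, j + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or not img[ni][nj]:
                    edge.append((i, j))
                    break
    return edge
```
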

Knowledge Based Systems

Knowledge based imaging systems, commonly found in robotic vision research, are systems that utilize object models (usually in the form of LISP frame representations) to match and identify viewable objects in an image. Boyer et al. (1984) looked at the implementation of such systems for parts manufacturing, with the intent of recognizing and locating three dimensional objects. The processing techniques used are intensive Laplacian of Gaussian masks, providing reasonable edge (binary) images [Sotak and Boyer (1989)]. These systems use 8-connectedness for determining potential object edges.

Such systems hold some potential for the catalyst problem, since the object data base is very simple (a single three dimensional object). The recognition algorithms are a large piece of work in and of themselves, however, which makes them difficult to use in the catalysis problem within the limits of this project.

Figure 2: 8-connectedness of a pixel.

A more general knowledge based system, which also has roots in connectivity, is neural networking. Such systems are being researched for pattern recognition software, among other things, and have highly advanced connectivity patterns (beyond 8-connectedness).

2.3 Fluid Mechanics

2.3.1 Introduction

Work in quantitative image analysis of coherent structures in turbulent shear flow has been prompted by recent advances in imaging technology and computer hardware. Image analysis of visualized flow is seen as an advance over traditional intrusive probing methods, ultimately providing full-field quantitative information about the entire test flow field. Optimization of this technique requires careful experimental design to provide adequate images for computer manipulation.

2.3.2 Coherent Structures

Turbulence research has been divided into three main periods. Each period has provided some level of insight into the problem, but no complete description of turbulence has resulted. The phenomenological theories of turbulence provide information on the mean velocity profile of the flow, but do not establish the underlying mechanisms of turbulent motion required for problems in complex geometries.

The statistical theory of turbulence attempts to solve the indeterminate Reynolds equations by evaluating randomly fluctuating terms in the equations, but provides very little insight into the actual physical mechanisms of the fluid motion.

The structural theory of turbulence provides the basis of the third period. Research indicated that turbulent motion consisted of events, often sequential and periodic. The occurrence of these sequences is apparently random. The emphasis of this stage of research has been on the separation and analysis of recognized patterns. The average properties of each event need to be obtained and analyzed [Brodkey et al. (1984)].

2.3.3 Probes and Data Acquisition

The traditional method of obtaining flow information has been the insertion of a flow sensor into the flow field. Hot wire and hot film anemometry has been used in a great deal of research, providing velocity measurements within the flow field. These probes provide adequate information at a given point, but are unable to provide large amounts of multi-point data. Additionally, they need to be placed in the flow field, thereby affecting the flow at that point and in the region behind it.

Probes also suffer from a resolution problem. In many cases the structures being measured are of a smaller scale than the probe, so that the probe is unable to provide a true reading on that structure, let alone multi-point data. When probes are of an adequate scale to measure a phenomenon, the problem then becomes one of positioning the probe in a visible event or sequence of events.

2.3.4 Laser Doppler Anemometry

Alternative measurement methods are available to normal hot wire or hot film anemometers. These methods generally measure light scattered from particles seeded in the flow field. LDA (laser doppler anemometer) techniques utilize crossed laser beams to create an interference pattern at a point in the flow field, the disturbance of which by a particle can be measured and the velocity determined.

LDA’s resolution is limited only by the size of the particle used to seed the flow. It suffers from being single-point, and two- and three-component measurements are prohibitively expensive; even in a single component configuration LDA is far more expensive than hot-wire methods, thereby eliminating multi-point variations.

2.3.5 Flow Visualization

Qualitative work

Visualization has long been a tool in the area of fluid mechanics. Early work used hydrogen bubbles or injected dyes to follow water flow, while gas flows used smoke injection or Schlieren and shadowgraph photography, the latter for the most part utilizing refractive index changes with density.

Incense smoke has been used as a flow marker by some experimenters in gaseous experiments of boundary layer flow. Others have utilized TiO2 particles in propane to investigate combustion: the reaction of TiCl4 with H2O forms the visible TiO2 flow markers at the point of combustion. Water-oil emulsions such as mixtures of benzene and carbon tetrachloride can also be used to produce markers for visualization [Caffyn and Underwood (1952)], while kerosene and water have been used in preliminary work for crude oil sampling [Hanzevack (1986)].

For simple flow structure visualization, dye injection has been used for imaging [Gad-el-Hak (1985, 1986)]. Falco and Chu (1988) have used laser-excited photochromic dyes in the flow. The laser light source is split into a grid pattern, which excites the otherwise non-visible dye in the flow in the same grid pattern. This pattern then distorts with the flow in time, and can be tracked for as long as the dyes remain excited. Smith (1982) has used dyes to visualize flow structures for high speed videography. Yang (1989) provides an excellent review of visualization techniques.

These techniques were used to yield primarily qualitative information. More recently, neutrally buoyant particles have been used in a multitude of techniques, creating point images from which quantitative information can be derived.

Quantitative work

Image analysis and the availability of the computer as a willing slave have prompted the reduction of qualitative images into quantitative information. Various researchers have developed means of recording the flow and interrogating the resultant images. The large number of seeded particles in the flow field provides the multi-point information needed for coherent structures, thereby overcoming one of the major limitations of traditional probes.

Weinstein et al. (1985) have developed a holocinematographic velocimeter (HCV) which uses 40 μm hollow glass spheres illuminated by an oven heated copper vapor laser expanded to a 5 cm diameter. Adrian (1986a, 1986b) and Landreth et al. (1988) have developed the pulsed laser velocimeter (PLV) using 10 μm plastic spheres illuminated by a double-pulsed ruby laser expanded as a laser sheet.

Other experimenters have utilized particles in a variety of variations on the basic principle [Wilcox et al. (1986), Racca and Dewey (1988), Canaan and Hassan (1990)], or have used other visualization and analysis techniques [Smith (1982), Toy and Wisby (1988)]. Kiritsis (1989) studied the effect of noise on particle identification in flow images, while Agüí and Jiménez (1987) analyzed error sources in particle tracking. Hesselink (1988) offers a moderately complete review of imaging techniques applied to fluid flow.

Lakshmanan (1986) conducted preliminary work on a full-field, multi-point quantitative image analysis system. Colored particles were used to mark positions in a flow field, and a full color digitizer was used to obtain digital images. Color was used to increase the number of particles that could be used in the image, for the goal was to use the computer to track particles recorded on 16 mm film through sequential frames, thereby providing velocities.

In his PhD dissertation, Economikos (1988) expanded on the work of Lakshmanan. In his work a small section of Lakshmanan’s film was digitized and the particles tracked using a predictor-corrector algorithm [Economikos et al. (1990)].

The predictor-corrector algorithm is a rigorous tracking algorithm incorporating the physics of the particle motion (path, velocity and acceleration coherence). The algorithm requires at least 3 known points to work, and therefore requires a “startup” algorithm. It was this need that prompted part of this current effort.
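The kind of physics such a predictor step encodes can be illustrated with a constant-acceleration extrapolation from three known points. This is a minimal sketch only, not Economikos's actual predictor-corrector implementation.

```python
def predict_next(p1, p2, p3):
    """Given three successive particle positions at equal time steps,
    extrapolate the fourth assuming smoothly varying velocity and
    acceleration (finite-difference estimates). A sketch of why at
    least 3 known points are needed before prediction can begin."""
    # velocity from the last pair of positions
    v = (p3[0] - p2[0], p3[1] - p2[1])
    # acceleration from the second difference of the three positions
    a = (p3[0] - 2 * p2[0] + p1[0], p3[1] - 2 * p2[1] + p1[1])
    return (p3[0] + v[0] + a[0], p3[1] + v[1] + a[1])
```

A corrector step would then compare candidate particles in the next frame against this prediction and keep the closest match.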

3-D data acquisition

The majority of techniques given above visualize a 2-D plane of the test area, or a 2-D projection of a 3-D volume. Full field 3-D information can be obtained if illumination is in a volume. There are two relatively simple methods of obtaining 3-D images, stereoscopic and orthographic views, as shown in Figure 1. A more expensive alternative is holography, and a less accurate method is to acquire a sequence of 2-D images through the volume. Some 3-D particle data may also be obtained using a single camera and defocusing techniques.

Stereoscopic views simulate human vision. Two images separated by a small angle are taken simultaneously of the flow, and can be recombined by a projector or in the computer to obtain depth information. Praturi and Brodkey (1978) investigated the feasibility of such a technique in fluid mechanics, and other researchers have also utilized the concept [Sheu et al. (1982), Chang (1983), Chang et al. (1985a, 1985b)].

Orthogonal views provide the best depth information, but the resultant images cannot be viewed in 3-D using a film projector. Racca and Dewey (1988) used a mirror system to provide orthogonal views to a single video camera, and implemented a low-resolution particle identification and tracking system with some success.

Weinstein et al. (1985, 1986) developed the HCV (holocinematographic velocimeter) at NASA’s Langley Research Center. High speed single-exposure movie holograms of tracer particles are used to acquire the 3-D images. It was found that the third dimension’s accuracy was insufficient, and so orthogonal holographic cameras were utilized. The resultant holograms were interrogated using an argon ion laser and a video camera mounted on a 3-D positioner. HCV is a complex velocimeter, poorly suited to many research areas and groups.

3-D data can also be obtained by acquiring a sequence of 2-D slices through a volume. Utami and Ueno (1984, 1987) illuminated a cross-sectional view of a flow volume using a slit to create a light sheet. The resultant sheet was bounced off a mirror to illuminate a 2-D cross-section, and the mirror was moved to illuminate subsequent cross-sections. A pair of synched cameras was used to record alternate cross-sectional views.

Finally, Willert and Gharib (1990) have developed a single camera 3-D acquisition method. By placing a triangularly positioned trio of pinholes in the optics, they defocus particle images. These defocused images form a triad of points on the film, with the depth relative to the triad size. Videos made with this technique provide an amazing amount of 3-D information to a human viewer, but its accuracy is not yet known.

2.4 Catalysis

2.4.1 Introduction

The determination of catalyst surface area is usually based on the physical adsorption of a gas on the solid surface of the catalyst (BET analysis). This method is not as convenient when the catalyst exhibits some structural specificity. In particular, MoO3 crystals can be grown to enhance the ratio of basal (010) to side (100) planes (defined in Figure 3), but BET analysis can not determine this ratio.

The surface of the catalyst can be viewed using a scanning electron microscope (SEM), as given in Plate I(a-b). Such images inform the researcher as to which plane is in predominance, but not by what degree.

2.4.2 Quantitative Analysis

Hernandez (1987, 1990) obtained 3-D SEM photographs of sample MoO3 crystals, which are given in Plate I(a-b). Hernandez slightly tilted the sample plane in the SEM to simulate stereo separation (Figure 4), as recommended by Goldstein

et al. (1981) and Watt (1985). The resultant images are viewable using stereo viewers. Mirror stereometers can be used to analyze parallax in stereo images. To determine the ratio of basal to side planes, however, Hernandez manually matched 3-D points and calculated depth from these points, a tedious job at best.

Figure 3: MoO3 crystal faces and directions: (101),[101]; (010),[100]; (100),[001]; (001),[100].

Plate I: (a) Left SEM catalyst image. (b) Right SEM catalyst image.

Figure 4: Photographic simulation of stereo vision by image separation.

CHAPTER III

EXPERIMENTAL

3.1 Introduction

This project involves the use of a general-purpose imaging facility in the areas of laminar and turbulent flow, and catalyst surface structure. The facility is outlined in section 3.2. In addition, this chapter covers the first five steps of imaging as outlined in section 2.2 for each of the three areas looked at in this project. The sixth step of imaging, Image Analysis, is the subject of the next chapter.

3.2 Imaging Facilities

The imaging facilities used are those operated by the Koffolt Computer Graphics Laboratory (KCGL) at The Ohio State University. A high resolution Eikonix 78/99 color digitizer is used for digitization. This in turn is connected to a MicroVAX II host computer, which also supports a Dipix Aries-III Image Analysis Workstation. The MicroVAX II is networked to KCGL’s VAX 8550, which is the central server for a small cluster of VAXstation 3100 color workstations.


3.2.1 Eikonix 78/99

The Eikonix digitizer camera is capable of 2048 x 2048 maximum resolution with 16.7 million colors ((2^8)^3). The scanner head contains a linear photodiode array, which is stepped across the image plane by a stepper motor during scanning. The head contains a 4 position filter wheel for color digitization (one position for each of the three color filters, Red, Green and Blue, and one non-filtered position). The image must be digitized three times for a color image, once through each of the color filters. A 2048 x 2048 color image can easily take 15 minutes to digitize and store.

The scanner software is operated through the Dipix Aries-III software, and allows the user to manipulate the size of the digitized image (anything up to 2048 x 2048 may be chosen), the location of the digitized image (within the 2048 x 2048 maximum area) and the integration time (sampling time for the photodiodes at each pixel location). The files are stored in the Files-11 format on the MicroVAX II’s 400 MB hard disk, and can be accessed by the Dipix Aries-III software or by any VAX program on the MicroVAX or a connected computer.

3.2.2 Computer Hardware

The MicroVAX II used in this work operates VMS 5.1 on a 400 MB disk primarily devoted to imaging work. A monochrome monitor is used for terminal access, but a converted Megatek 1650 graphics terminal is used as a color display monitor for the digital images by the Dipix Aries-III in conjunction with the monochrome terminal.

The VAX 8550, operating with 32 MB of main memory, was sometimes used for the programming work as it is a much faster machine than the MicroVAX II, which operates with only 5 MB of main memory. The VAXstation 3100s, with 16 MB of main memory as well as high resolution color displays, were used for the catalysis work as it required a level of operator interaction that was impractical on a monochrome monitor. The VAXstation 3100s operate DECwindows, with graphics programming done in X Windows.

3.2.3 Dipix Aries-III

The Dipix Aries-III consists of an array processor, graphics display (currently connected to a Megatek 1650 color terminal), 8 MB of system memory for holding images, and a mouse digitizer pad for inputting commands and cursor movement on the graphics display. The Dipix Aries-III operates under its own operating system (OIS), using the Files-11 format for its images stored on a standard VAX disk.

The Dipix Aries-III Image Analysis Workstation has had subroutines written for it to operate the Eikonix 78/99 camera, as well as to undertake specialized image processing operations. In this work only the routine image processing software available for the Dipix was used, which in the case of the Dipix is a very strong core of software.

Built into the Dipix software are all of the fundamental image processing routines. Look-up tables can be created to form binary images, pseudo-colored images, and stretched or enhanced images; operators are available to add, subtract, multiply or otherwise combine two or more images with ease. Filtering and warping operations are also available.

3.3 Laminar Flow

3.3.1 Identification

For this particular project, the variable of interest is velocity. Researchers at NASA-Lewis would like to be able to monitor flow velocities (if any) in a microgravity molten salt crystallization experiment. This experiment is conducted in a small directional crystallization furnace as shown in Plate II. Due to the high temperature gradients, the sudden application of a force (probably by space shuttle course corrections) could induce unwanted flow in the experiment, at which time it is desired to stop the experiment, refreeze the crystal, and restart. The existence of this flow was observed in crystallization experiments wherein visible contaminants were seen to move in the molten salt, with estimated velocities of 1 cm/sec. One run of the experiment could take upwards of 30 hours to complete.

3.3.2 Visualization

Velocity can be visualized in a flow system with the addition of tracer particles. These particles should be neutrally buoyant, have good optical properties, and otherwise not contaminate the system. For this experiment, choosing the optimum tracer particle is no easy task. The particles need to survive the harsh environment of a molten salt, be highly visible, and have a density not only matching that of the salt at its solidification temperature but approximately matching it over the entire temperature range the salt can be expected to be exposed to. Tracer particle selection is currently being researched by NASA-Lewis.

For this work, however, it is sufficient to simulate the microgravity test conditions. Small (160 μm) plastic particles were placed in a test tube inserted into the crystallization furnace. These particles were made of Pliolite (Goodyear’s trademark for poly(vinyl toluene butadiene)), which has a specific gravity of 1.024. A little stirring in the test tube provided reasonable particle motion, approximating that seen in the crystallization experiment.

Plate II: Crystallization furnace

3.3.3 Acquisition

Two 35 mm cameras using orthogonal views and motorized film drives were used to acquire the test images. A sample pair of images is given in Plate III(a-b). Subsequent image pairs were acquired at 0.5 sec intervals. In the final project, two synched video cameras connected directly into the host computer via a pair of digitizer boards will be used.

3.3.4 Digitization

Each of the sets of image pairs was digitized as a 512 x 512 neutral image, to further simulate the final system. Plate IV shows the digitized image of Plate III(a). Digitization was done from 35 mm slides.

3.3.5 Image Processing

The function of image processing is to enhance the information you wish to extract in image analysis. In this case that information is particle location, tracked through multiple frames. Image processing should result in an image with particles easily identified.

The simplest such operation is the transformation of the digitized image into a binary image. If the original image is easily segmented, this operation is very effective and can be done using simple thresholding techniques. The result of a binary image operation using simple thresholding¹ on Plate IV is given in Plate V. All particle locations are easily identified. The excessively large white areas are the heater elements still in view; these can be simply removed during particle identification due to their gross size.

Plate III: (a) Left flow image, first frame (t = 0), of particles in laminar flow in the crystallization furnace. (b) Right flow image corresponding to (a).

Plate IV: Digital image of Plate III(a).

To sufficiently track the particles through 3 frames would require a significant amount of processing time on three 512 x 512 digitized frames. To reduce this computational load, the multiple frames can be added together using

    I_sum(j,k) = SUM over i = 1 to n of I_i(j,k)        (3.1)

for n = 3, 4, 5, 6, 7, or 8, and for every (j,k)th point in the original images (I_i). Three of the NASA slides (Plates V, VI and VII), digitized and added together by the above equation, are given in Plate VIII(a). A fourth frame would provide a potential check in particle tracking, but is not used in the analysis. The summed image represents the final output of image processing. The equivalent summed image for the right hand view is given in Plate VIII(b).
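The frame summation just described is straightforward to express. In this NumPy sketch, clipping the accumulated result back to 8 bits is an assumption for display purposes, not stated in the text.

```python
import numpy as np

def sum_frames(frames):
    """Add n thresholded frames together pixel by pixel, for every
    (j, k) point. Frames must be equal-sized arrays; the result is
    clipped to the 8-bit display range (a display-policy assumption)."""
    acc = np.zeros_like(frames[0], dtype=np.int32)
    for f in frames:
        acc += f
    return np.clip(acc, 0, 255).astype(np.uint8)
```

With three binary frames, each particle position from each frame appears in the single summed image, which is what makes a multi-frame track visible at once.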

3.4 Turbulent Flow

3.4.1 Identification

For this particular project the ultimate variable of interest is vorticity. This can be derived from velocity, given enough data. Unlike the laminar flow case, a

1 There are multiple methods and algorithms by which an image can be segmented into a binary image. Levine (1985) and Horn (1986) both offer good background information in this area, while Kiritsis (1989) offers a detailed look at segmentation specifically in noisy particle images, as a precursor to particle identification.

Plate V: Binary image of Plate IV.

Plate VI: Binary image of the second frame of the time sequence, t = Δt, for the laminar flow.

three-frame particle track is not considered sufficient, and seven or more frames are desired. Economikos et al. (1990) presented the predictor-corrector algorithm, which accurately tracks particles given initial start-up information. Therefore this work is intended to get the visual data as well as provide the start-up track for the more rigorous predictor-corrector analysis.

Plate VII: Binary image of the third frame of the time sequence, t = 2Δt, for the laminar flow.

Plate VIII: (a) Summation of Plates V, VI, and VII, i.e. left view flow, using Eq. 3.1. (b) Equivalent image for right view flow.

To provide a simple platform from which to obtain the visual data, a small mixing vessel was set up. This equipment is easy to modify and change. Ultimately images will be obtained in a turbulent flow channel, which is not as convenient to modify.

3.4.2 Visualization

Velocity is in this case visualized by adding small neutrally buoyant tracer particles to the flow. 160 μm Pliolite [Goodyear’s trademark for poly(vinyl toluene butadiene)] particles were dyed various colors (red, green, blue, yellow, and pink) using oil soluble dyes. The details of the particles and dyes are given in Russ (1988).

The use of dyed particles was originally needed to enhance particle matching, and seven distinct colors were considered to be optimum [Economikos et al. (1990)]. Color has since been relegated to secondary matching information, but is illustrative of the need to match one’s visualization to the final image analysis.

Praturi and Brodkey (1978) estimated that 3000 particles in the field of view are the maximum that can be easily distinguished by a human observer. In this work this same particle density has been maintained.

3.4.3 Acquisition

The flow system set up is given in Figure 5. A high-speed 16 mm Milliken film camera running at 128 fps was used to record the motion. A Bolex stereo lens was used to acquire depth information, as side-by-side stereo views on the 16 mm frame. Six GE DVY quartz bulbs were used to illuminate the mixing vessel, providing 3900 watts of light. A plastic grid point plate, with registration points drilled every one inch, was attached to the rear wall, and approximately equal portions of red, green, blue, white, pink and yellow particles were added to the vessel. A small mixer was added and provided simple particle motion.

A resultant 16 mm film frame is given in Plate IX. A subarea is outlined which is used for the analysis.

3.4.4 Digitization

The subarea outlined in Plate IX was digitized at 512 x 512 resolution as a neutral image [Plate X(a-b)]. This size allowed for faster analysis on the VAX, and since this is a test image the information from the entire frame is not of interest. For this start-up level of work, color information was not needed and as such the color digitization was not done.

3.4.5 Image Processing

The image processing done on the turbulent flow is essentially identical to that done on the laminar flow, since the desired information is effectively the same. The binary image of Plate X(a) is given in Plate XI, and the summed image in Plate

XII(a). The grid points can be removed during particle identification, as they are significantly larger than the particle images. Plate XII(b) is the equivalent summed image for the right hand view.

Figure 5: Experimental setup for turbulence work. (Black background with fluorescent holes drilled every 1 inch; six GE DVY 650 watt bulbs; variable speed mixer to provide motion; Milliken 16 mm camera set at 128 fps, with Bolex stereo lens; mirror under tank to improve lighting.)

Plate IX: 16 mm stereo film frame for turbulent flow, with subarea used for analysis marked.

Plate X: (a) Digital image of Plate IX, left side. (b) Digital image of Plate IX, right side.

Plate XI: Binary image of Plate X(a).

Plate XII: (a) Summation of left hand view, turbulent flow. (b) Summation of right hand view, turbulent flow.

3.5 Catalysis

3.5.1 Identification

In section 2.4, the background of the catalysis research was given. MoO3 crystals can be grown to enhance the ratio of basal (010) to side (100) planes, but this ratio cannot be easily measured. The ratio of the areas, determined from 3-D SEM photographs, is the desired quantity.

3.5.2 Visualization

The catalyst surfaces are directly visible given the right equipment, in this case a scanning electron microscope (SEM). Catalyst samples had to be prepared for the SEM. A small amount of catalyst was sprinkled on double-sided adhesive tape attached to a 1/2 inch diameter carbon disk, with the excess catalyst blown off. The carbon disk was attached to a sample holder, and the sample was gold coated by a gold sputtering apparatus to a thickness of approximately 200 angstroms to improve electron scattering [Hernandez (1987)].

3.5.3 Acquisition

The images were acquired using the built-in photographic equipment of the SEM. Some SEM machines are equipped to provide digital images directly, but the particular machine used to acquire the images (a Hitachi S-510) does not have this capability. As shown in Figure 4 in section 2.4.2, the sample plane in the SEM can be slightly tilted to provide a simulated stereo view; tilts (α/2 in Figure 4, and later referred to as α in this work) of ±5° were used. The resultant images are given in Plate I(a-b).

3.5.4 Digitization

The image pairs given in Plate I(a-b) were carefully viewed under a stereo viewer and taped together when they were correctly matched. The combined images were digitized at 1024 x 512 resolution, and the digitized image was then broken into two 512 x 512 images for ease of handling. The 512 x 512 images were individually rotated and shifted to align the y values on the display. The digitized images are given in Plate XIII, as viewed on the VAXstation display.

3.5.5 Image Processing

Initially, the images were transformed into binary images, using multiple thresholds to adequately outline the edges. A large amount of time was consumed in trying to program the computer to use 8-connectedness to trace each area’s outline, determine corner points, and otherwise automate the area identification process.

It was then decided to provide all corner points manually, which in turn eliminated the need for binary images. The binary image used is given in Plate XIV, with the disclaimer that it effectively represents a failed attempt at image analysis. Note that as with the flow images more complex binarization algorithms could have been used, but they would not have made the images more amenable to the area outline tracing.

Plate XIII: Digital image of Plate I(a).

Plate XIV: Binary image of Plate XIII. Arrow marks an open-end area of the type that made automatic processing impossible.

CHAPTER IV

RESULTS AND DISCUSSION

4.1 Introduction

This chapter discusses the sixth step of imaging, Image Analysis, as applied to each of the three experiments. Due to the similarity of the analysis for the laminar and turbulent flows (sections 4.2 and 4.3, respectively), some of the fluid flow developmental material is covered only in the laminar flow experiment. Section 4.4 covers the catalyst surface structure analysis, while section 4.5 deals with some general results.

4.2 Laminar Flow

A pseudo-color operation on the particle paths seen in Plate VIII(a) is given in Plate XV(a), which is the left hand view of the laminar flow experiment. Red represents first frame data, green is second frame, and blue is third frame. Plate XV(b) shows the equivalent right hand view. The 3D particle paths, clearly visible in Plate XV(a), represent three operations: particle identification, whereby the particles are located by their centers, grey levels, and size; particle tracking, whereby the particles are tracked within the image; and 3D track matching, which matches particle tracks within the two views to determine the 3D information. The first two tasks are undertaken in this work.

Plate XV: (a) Pseudo-color representation of Plate VIII(a). (b) Pseudo-color representation of Plate VIII(b).

4.2.1 Particle Identification

The particle identification algorithm, a modification of that by Chang et al. (1985), identifies the particles from top to bottom, left to right, row by row. An ideal particle would have a digital image similar to that shown in Figure 6. As each non-zero pixel is encountered, it is broken down into its constituent image values, each of which is compared to 4 nearby pixels (in the order given in Figure 7: left, left and above, above, right and above) to determine if it belongs to already identified particles. If no match is found, the pixel labelled A is checked, along with B; if a three-way match exists between the pixel, A, and B, then the pixel is added to the equivalent grey particle already identified at A. The need for this final check is obvious from the idealized digital image in Figure 6.
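The row-by-row labeling just described can be sketched in Python (the actual implementation, PART_ID.FOR, is Fortran and is given in Appendix A). This is a minimal illustrative version: it keeps the Figure 7 neighbour order but omits the grey-level decomposition and the three-way A/B check, so it can split a particle whose halves only join lower in the image, which is exactly the case the A/B check exists to handle.

```python
def identify_particles(img):
    """Label same-grey connected particles in a single row-major pass,
    in the spirit of the modified Chang et al. (1985) scheme: each
    non-zero pixel is compared against the four already-visited
    neighbours (left, above-left, above, above-right) for the same
    grey level; a match joins that particle, otherwise a new particle
    is started.  img is a list of rows of integer grey levels."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    stats = {}                 # label -> [xmin, xmax, ymin, ymax, npix, grey]
    next_label = 1
    for y in range(rows):
        for x in range(cols):
            g = img[y][x]
            if g == 0:
                continue
            lbl = 0
            # neighbour order from Figure 7: left, above-left, above, above-right
            for dy, dx in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < rows and 0 <= nx < cols
                        and labels[ny][nx] and img[ny][nx] == g):
                    lbl = labels[ny][nx]
                    break
            if lbl == 0:       # no match: start a new particle
                lbl = next_label
                next_label += 1
                stats[lbl] = [x, x, y, y, 0, g]
            s = stats[lbl]
            s[0] = min(s[0], x)
            s[1] = max(s[1], x)
            s[2] = min(s[2], y)
            s[3] = max(s[3], y)
            s[4] += 1
            labels[y][x] = lbl
    return labels, stats
```

The six bookkeeping quantities per particle (x/y extents, pixel count, grey level) match the list given in the next paragraph of the text.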

For this simple particle identification, only six pieces of information for each particle are kept during processing: minimum and maximum x extents, minimum and maximum y extents, number of pixels, and grey level. After all the particles are identified, some simple screening is conducted to eliminate spurious anomalies left over from image processing. First, the number of pixels in the particle is checked against a preset minimum and maximum, usually 8 and 200, which represent a minimum and maximum diameter for the expected particles. Second, the x and y area extents are compared. This is done by retaining areas satisfying

    MAX(x extent, y extent) / MIN(x extent, y extent) <= TOL_ext

Figure 6: (a) Ideal particle image, infinite domain. (b) Real particle image, digital (discrete) domain.

Figure 7: Pixel checking order for particle identification. Pixels are checked in the order 1, 2, 3, and 4; if no match is made, pixels A and B are checked simultaneously.

Third, the number of pixels (or area; AREA_true) is checked against a "pseudo-area" (AREA_pseudo), namely the area of a box bounded by the particle's mins and maxes. These two values are checked using

    AREA_true / AREA_pseudo >= TOL_area

where TOL_area is a parameter used to roughly characterize the expected shape.

A perfect square would have a TOL_area of 1, whereas a perfect circle would have a TOL_area of 0.785 [(pi d^2/4)/d^2]. Considering this comparison is being done in a finite domain, a less restrictive TOL_area is required. For this work, a TOL_area of 0.5 is used, which would allow a 3 x 3 particle in the shape of a "+" to pass (i.e. 5/9; note that such a particle would not meet the minimum particle area defined previously, however). Extremely long and thin shapes that do not lie horizontally or vertically, however, will not pass such a test. This could include the outer edge of the heating wire, which is often sloped.
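The three screening tests can be collected into a single predicate, sketched below. The 8/200 pixel window and TOL_area = 0.5 are from the text; the extent-ratio tolerance tol_ext is a hypothetical value, since the exact figure used for the extent comparison is not legible in this copy.

```python
def keep_particle(xmin, xmax, ymin, ymax, npix,
                  min_pix=8, max_pix=200, tol_ext=4.0, tol_area=0.5):
    """Post-identification screening from section 4.2.1.
    tol_ext (allowed bounding-box elongation) is an assumed value;
    min_pix/max_pix and tol_area are the values quoted in the text."""
    x_ext = xmax - xmin + 1
    y_ext = ymax - ymin + 1
    # 1. pixel-count window (expected particle diameters)
    if not (min_pix <= npix <= max_pix):
        return False
    # 2. extent ratio: reject long thin axis-aligned shapes
    if max(x_ext, y_ext) > tol_ext * min(x_ext, y_ext):
        return False
    # 3. pseudo-area: true area vs. bounding-box area
    area_pseudo = x_ext * y_ext
    return npix / area_pseudo >= tol_area
```

As the text notes, a 3 x 3 "+" (5 of 9 pixels) would pass the shape tests but fail the 8-pixel minimum, while a sloped thin edge fails the pseudo-area test.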

A flowchart of the completed identification code is given in Figure 8. The actual Fortran code, PART_ID.FOR, is given in Appendix A. The code is moderately slow to run, taking about 30 seconds on the VAX 8550 to analyze a single 512 x 512 image.

The results of the particle identification code on the particles in Plate XV(a) are given in Figure 9. Plate XVI is a zoom of a subarea located approximately in the center of Plate XV(a), showing how well the identification program has done for this image. The algorithm has done an excellent job of identifying the visible particles, and a reasonable job of eliminating non-particles identified within the image.

Figure 8: Flowchart for particle identification.

Figure 9: Particles identified for Plate XV(a) (• first frame; * second frame; + third frame).

Plate XVI: Zoom of Plate XV(a), with identified particles in view.

4.2.2 Particle Tracking

The particle tracking algorithm utilizes a rudimentary form of path and velocity coherence, which is just the concept that motion will be smooth if the framing rate is high enough. The particles found from the previous step are sorted into three lists, one for each frame. Due to the nature of the particle ID algorithm, these particles are already approximately sorted in the lists from low to high y values. Starting from the beginning, each 1st frame particle is compared with candidate 2nd frame particles, and each 2nd frame with candidate 3rd frame particles, until one of the following conditions (see Figure 10) is met:

    Δy > Δy_max                                                          (4.3a)

    |v_1-2 - v_2-3| / [(v_1-2 + v_2-3)/2] < TOL_v  and  |θ_1-2 - θ_2-3| < TOL_θ    (4.3b)

where 1, 2, and 3 refer to frames, and θ is the angle as if the particle pair were in polar coordinates (of which the magnitude of the velocity is the other coordinate). Δy_max is the maximum expected frame-to-frame particle movement in y, TOL_v is the maximum change in average velocity between each pair of frames, and TOL_θ is the maximum angular motion of the particle. This represents the basic tracking algorithm; its implementation was modified by using a simulated image.
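The coherence conditions of Eq. 4.3 amount to a short test on a candidate three-point track. The sketch below assumes, per the definitions of TOL_v and TOL_θ in the text, that the velocity condition bounds the relative change in speed and the angle condition bounds the change in direction; the exact normalization of Eq. 4.3(b) is not fully legible in this copy.

```python
import math

def coherent(p1, p2, p3, dy_max, tol_v, tol_theta):
    """Path/velocity coherence test for a candidate track
    p1 -> p2 -> p3, each an (x, y) tuple.  The candidate is rejected
    when either frame-to-frame y motion exceeds dy_max [Eq. 4.3(a)],
    and accepted when both the relative speed change and the
    direction-angle change are within tolerance [Eq. 4.3(b)]."""
    v12 = (p2[0] - p1[0], p2[1] - p1[1])
    v23 = (p3[0] - p2[0], p3[1] - p2[1])
    if abs(v12[1]) > dy_max or abs(v23[1]) > dy_max:
        return False                                   # Eq. 4.3(a)
    s12 = math.hypot(v12[0], v12[1])
    s23 = math.hypot(v23[0], v23[1])
    mean_speed = (s12 + s23) / 2
    if mean_speed == 0:
        return False                                   # degenerate (no motion)
    dv_ok = abs(s12 - s23) / mean_speed < tol_v
    dtheta = abs(math.atan2(v12[1], v12[0]) - math.atan2(v23[1], v23[0]))
    dtheta = min(dtheta, 2 * math.pi - dtheta)         # wrap to [0, pi]
    return dv_ok and dtheta < tol_theta                # Eq. 4.3(b)
```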

Some of the particle paths in Figure 9 are clearly visible; others are not. It is not easy to quantify the particle tracking algorithm in such a case. To help determine the accuracy of the tracking algorithm, a test image was created to approximate the velocity vectors seen in the real image. For 200 vectors, a random direction, initial velocity, curvature and acceleration were calculated. The initial x and y components of velocity were roughly normally distributed around 5 pixels, ranging from 0 to 10. The curvature was roughly normally distributed around 0 radians, ranging from -0.52 to +0.52 radians. The acceleration was roughly normally distributed around 0, ranging from -30% to +30% of the initial velocity.

Figure 10: 3-point simplistic tracker.

The results are given in Figure 11.
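The simulated-track construction can be sketched as follows. The clipped Gaussian draws and the choice of standard deviations are assumptions; the text says only "roughly normally distributed" within the stated ranges.

```python
import math
import random

def simulated_tracks(n_vectors=200, mean_v=5.0, max_v=10.0,
                     max_curv=0.52, max_accel=0.30, size=512, seed=1):
    """Generate 3-frame test tracks as described in the text: per
    track, a random direction, an initial speed around mean_v
    (clipped to [0, max_v]), a curvature in [-max_curv, +max_curv]
    radians, and an acceleration of up to +/-max_accel of the
    initial speed.  Distribution details are assumed."""
    rng = random.Random(seed)

    def clipped_gauss(mu, lo, hi):
        # sigma chosen (assumption) so the stated range is ~3 sigma
        return min(max(rng.gauss(mu, (hi - lo) / 6.0), lo), hi)

    tracks = []
    for _ in range(n_vectors):
        x, y = rng.uniform(0, size), rng.uniform(0, size)
        theta = rng.uniform(0, 2 * math.pi)
        speed = clipped_gauss(mean_v, 0.0, max_v)
        curv = clipped_gauss(0.0, -max_curv, max_curv)
        accel = clipped_gauss(0.0, -max_accel, max_accel)
        pts = [(x, y)]
        for _ in range(2):                 # two further frames
            x += speed * math.cos(theta)
            y += speed * math.sin(theta)
            pts.append((x, y))
            theta += curv                  # curvature bends the path
            speed *= 1.0 + accel           # acceleration scales speed
        tracks.append(pts)
    return tracks
```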

It was found that the tracking algorithm [Eq. 4.3(a-b)] ran exceedingly fast. To increase the tracker's accuracy, multiple passes (varying Δy_max, TOL_v, and TOL_θ) were made over the image; the results of one such multi-pass run are given in Table 2, with the corresponding tracks shown in Figure 12(a-b).

The choice of optimum tracker parameters (TOL_v, TOL_θ, Δy_max) requires some knowledge of the particles being tracked. The effect of varying the tracking parameters for a single pass on the simulated image is given in Figure 14(a-d), for varying TOL_v, TOL_θ, and MAXDIS, which is basically Δy_max over the expected maximum displacement, or effectively a dimensionless displacement. All three of these parameters are dependent on the actual particle track characteristics. From Figures 14(c) and 14(d) it is obvious that both TOL_v and TOL_θ have some optimum value (about 0.25 and 0.3, respectively), at least when there is some heavily weighted average value as is the case for the simulated image. Comparing Figures 14(a), 14(b), 14(c) and 14(d) it is also apparent that MAXDIS has some optimum value near 1.0, as expected. This would also be true for flows with some characterizable bulk velocity, with minimal local fluctuations.

Figure 11: Simulated particle image (• first frame; * second frame; + third frame).

Table 2: Results of the tracker on the simulated image, Figure 11.

    Iteration   Δy_max   TOL_v   TOL_θ   % Not Found   % Wrong   % Correct
        1          5      1.0     1.0       81.00        1.00      18.00
        2         10      0.8     1.0       12.00        4.50      83.50
        3         15      0.5     1.0        5.00        5.50      89.50

Figure 12: (a) Particle tracks identified for the simulated image, Figure 11, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the simulated image, Figure 11, vectors only.

Figure 13: Flowchart for particle tracking.

Figure 14: (a) Tracker performance for a single pass, for MAXDIS=0.29. (b) Tracker performance for a single pass, for MAXDIS=0.58. (c) Tracker performance for a single pass, for MAXDIS=0.87. (d) Tracker performance for a single pass, for MAXDIS=1.16.

The accuracy of the tracker is highly dependent on two image parameters as well. Both the maximum expected displacement of the particles and the number of particle tracks affect the results. To study the interaction of the two image parameters, a series of simulated images was created. The number of vectors in the images was varied from 25 to 400 in increments of 25, and the maximum initial x and y displacement of those vectors was then varied from 1 to 30 pixels in increments of 1 pixel. The average x and y displacement in each image was set to half the maximum. This results in 480 separate simulated images; to smooth out the results, ten unique images were made for each set of conditions and the individual results averaged. Figure 11 is representative of these types of images, with the number of vectors equal to 200 and the max initial displacement set to 10 pixels with an average displacement of 5 pixels.

The results of passing the tracker using the parameters given in Table 2 over the series of simulated images just described are given in Figure 15(a-p). As max initial displacement (initial frame-to-frame particle motion) increases, accuracy decreases.

The optimum displacement appears to be about 5 pixels for this particular set of tracking parameters. Not surprisingly, accuracy is degraded rather significantly by increasing the number of particle tracks, as the potential for track overlap is detrimental. For initial particle displacements of 10 pixels or less, however, the tracker does a good job up to 400 vectors [Figure 15(p)].

The interaction of varying tracking parameters and varying imaging parameters is given in Figures 16(a-b), 17(a-b) and 18(a-b). Figure 16's parameters are given in Table 4, and are chosen so that the maximum Δy_max (15) will ultimately be half the maximum initial displacement (30). Figure 17's parameters, given in Table 5, are chosen so that the maximum Δy_max (30) will ultimately equal the maximum initial displacement (30). Figure 18's parameters, given in Table 6, are chosen so that the maximum Δy_max (45) will ultimately be 1.5 times the maximum initial displacement (30). From these studies it is obvious that the tracker displacement parameter (Δy_max) should be at least the maximum expected displacement in the flow, and preferably close to that maximum. If Δy_max is below the maximum, the percentage found correctly will drop off significantly, although the error rate will not noticeably suffer [Figure 16(b)]. If Δy_max is above the maximum, the error rate is slightly larger and the percent found rate is slightly lower [Figure 18(b)] than for when Δy_max is essentially equal to the maximum expected displacement in the flow [Figure 17(b)], for large numbers of particles in the view. For lesser numbers of particles the reverse is true [Figures 17(a) and 18(a)], due to the reduction in track overlap.

Figure 15: Effect of varying the number of tracks and particle displacement in simulated images, using multi-pass tracker parameters (Table 2) (solid line: correct; dotted: not found; dashed: wrong). (a) 25 vectors, (b) 50 vectors, (c) 75 vectors, (d) 100 vectors, (e) 125 vectors, (f) 150 vectors, (g) 175 vectors, (h) 200 vectors, (i) 225 vectors, (j) 250 vectors, (k) 275 vectors, (l) 300 vectors, (m) 325 vectors, (n) 350 vectors, (o) 375 vectors, (p) 400 vectors.

Figure 16: Effect of varying multi-pass tracker parameters (Table 4), with maximum Δy_max of 15, on simulated images with varying number of tracks and particle displacement (solid line: correct; dotted: not found; dashed: wrong). (a) 25 vectors, (b) 400 vectors.

Figure 17: Effect of varying multi-pass tracker parameters (Table 5), with maximum Δy_max of 30, on simulated images with varying number of tracks and particle displacement (solid line: correct; dotted: not found; dashed: wrong). (a) 25 vectors, (b) 400 vectors.

Figure 18: Effect of varying multi-pass tracker parameters (Table 6), with maximum Δy_max of 45, on simulated images with varying number of tracks and particle displacement (solid line: correct; dotted: not found; dashed: wrong). (a) 25 vectors, (b) 400 vectors.

From this it is best to choose tracking parameters that ultimately meet the maximum displacement requirement, and then choose additional passes with lesser Δy_max values. TOL_θ and TOL_v are chosen to start restrictive and then relax within each Δy_max (for multiple passes at constant Δy_max); for very small Δy_max values (below 5), restrictive TOL_θ and TOL_v values are actually counterproductive as data resolution in that range is very rough. Slowly increasing Δy_max allows slower particle tracks to be matched and removed before the faster (and therefore more spread out) tracks are matched. Slowly restricting TOL_v values as Δy_max is increased is also beneficial, since the longer particle tracks are more forgiving in the TOL_v calculations.

Table 3: Results of the tracker on Figure 9.

    Iteration   Δy_max   TOL_v   TOL_θ   % Not Found   % Wrong   % Correct
        1          5      1.0     1.0
        2         10      0.8     1.0
        3         15      0.5     1.0        9.57        4.25      86.17

Table 4: Tracker parameters used for Figure 16(a-b).

    Iteration   Δy_max   TOL_v   TOL_θ
        1          5      0.3     0.5
        2         10      0.3     0.5
        3         15      0.3     1.0
        4          5      0.5     0.5
        5         10      0.5     0.5
        6         15      0.5     1.0

Table 5: Tracker parameters used for Figure 17(a-b).

    Iteration   Δy_max   TOL_v   TOL_θ
        1         10      0.3     0.5
        2         20      0.3     0.5
        3         30      0.3     1.0
        4         10      0.5     0.5
        5         20      0.5     0.5
        6         30      0.5     1.0

The final parameters chosen for the NASA work are given in Table 3, with an eye to keeping the number of tracker iterations to a minimum.
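The multi-pass strategy (restrictive early passes, relaxed later ones, with matched particles removed between passes) can be sketched as a driver loop. Here try_match is a hypothetical stand-in for the single-pass tracker (e.g. the Eq. 4.3 coherence test applied to the frame 1/2/3 candidate lists); the driver itself is the part being illustrated.

```python
def multi_pass_track(particles, passes, try_match):
    """Multi-pass tracking driver.  `particles` is a list of three
    per-frame point lists; `passes` is a sequence of
    (dy_max, tol_v, tol_theta) parameter triples, ordered from
    restrictive to relaxed.  Each pass runs the single-pass tracker
    `try_match`, and its matched tracks are removed from the field
    before the next pass, so slower tracks are claimed early and the
    longer, more spread-out tracks are matched against fewer
    remaining candidates."""
    remaining = [list(frame) for frame in particles]   # one list per frame
    tracks = []
    for dy_max, tol_v, tol_theta in passes:
        found = try_match(remaining, dy_max, tol_v, tol_theta)
        tracks.extend(found)
        for track in found:                # remove matched particles
            for frame, pt in zip(remaining, track):
                frame.remove(pt)
    return tracks
```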

The result of the particle tracking algorithm on the identified particles (Figure 9) is given in Figure 19(a-b) and Table 3. The tracker has done a reasonable job of identifying the major tracks, but has some problems in areas of high noise. In the field of view there are approximately 94 particle tracks. Of these, 81 (86.17%) are found correctly. There are 4 (4.25%) mistracks, and 9 (9.57%) tracks not found. This lower than expected success rate is directly attributable to two factors: (1) noise present in the image which was not included in the simulated images, including the partial obscurement of view by the heating elements; and (2) the particle positions are only approximate, as the individual images were digitized from separate slides and had to be rotated and scaled without accurate reference points.

4.2.3 3-D Flow Analysis

Similar operations may be performed on the right hand view of the laminar flow, given in Plate XV(b). The identified particles are given in Figure 20, while the particle tracks are given in Figure 21(a-b). Again, the tracker has done a good job of identifying the vast majority of the tracks.

Comparing Figures 19(b) and 21(b), or more obviously Plates VIII(a) and VIII(b), it is evident that at least half the view is obstructed in one or the other of the two views. This severely hinders 3-D matching, which is best implemented by comparing the 2-D tracks observed in each view. In addition, the need to individually scale and rotate the views and the lack of clear registration points makes the task significantly harder. As a result of these circumstances, it was decided to forgo the 3-D matching for this experiment.

Table 6: Tracker parameters used for Figure 18(a-b).

    Iteration   Δy_max   TOL_v   TOL_θ
        1         15      0.3     0.5
        2         30      0.3     0.5
        3         45      0.3     1.0
        4         15      0.5     0.5
        5         30      0.5     0.5
        6         45      0.5     1.0

Figure 19: (a) Particle tracks identified for the particles in Figure 9, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 9, vectors only.

Figure 20: Particles identified for Plate XV(b) (• first frame; * second frame; + third frame).

Figure 21: (a) Particle tracks identified for the particles in Figure 20, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 20, vectors only.

4.2.4 Discussion

It is clear from the vector tracks [Figures 12(b), 19(b) and 21(b)] that the tracker performs adequately for these low seeding density flows. Figures 16(b), 17(b) and 18(b) indicate that for relatively low maximum initial displacements, the tracker is good to at least 400 vectors. For this particular experiment, the problems lie more in the images themselves than in the method used to analyze them.

To improve the 2-D matching algorithm, a fourth frame of data could be added to the image. A given track in three frames would only be accepted if it could be extended to fit a fourth frame point in the image. This is a highly simplistic version of the predictor-corrector algorithm to be used in the turbulent flows. Its implementation would cut the percentage of tracks found, but also reduce the percentage of mistracks.
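The fourth-frame acceptance test just described can be sketched as a predict-and-verify step. The constant-acceleration extrapolation and the distance tolerance are assumptions; the text specifies only that a three-frame track must be extendable to fit a fourth-frame point.

```python
def confirm_track(track, frame4, tol=3.0):
    """Simplified predictor-corrector acceptance: extrapolate a
    3-frame track ((x1,y1),(x2,y2),(x3,y3)) one frame further and
    accept it only if some frame-4 particle lies within tol pixels
    of the predicted position.  A constant-acceleration prediction
    is assumed: p4 = p3 + (p3 - p2) + [(p3 - p2) - (p2 - p1)]."""
    (x1, y1), (x2, y2), (x3, y3) = track
    px = x3 + 2 * (x3 - x2) - (x2 - x1)
    py = y3 + 2 * (y3 - y2) - (y2 - y1)
    return any((px - x) ** 2 + (py - y) ** 2 <= tol ** 2
               for x, y in frame4)
```

Rejecting three-frame tracks with no fourth-frame confirmation is what cuts the found percentage while also cutting mistracks, as noted above.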

The obstruction of view created by the heating elements is not easily surmounted. In Plate III(a) and Plate IV, particles are viewable behind the heater element. Simple thresholding did not work in this area, however. A locally adaptive thresholding technique such as mean filtering might be able to eliminate some of the visual noise caused by the heating element, but at a significant increase in computational cost over simple thresholding.

The heating element obstruction between the two views can be reduced by switching the 3-D viewing method. To better obtain 3-D data, the camera view should probably be switched to stereo viewing. Alternatively, the single camera 3-D image acquisition developed by Willert and Gharib (1990) might prove attractive. In either case, however, depth accuracy will diminish significantly.

4.3 Turbulent Flow

4.3.1 Particle Identification

The identification algorithm introduced in section 4.2.1 is equally suited to the turbulent flow problem. The minimum and maximum number of pixels allowed was set at 8 and 200, respectively. The output of the particle identification program on Plate XVII(a), which is a pseudo-color enhanced Plate XII(a), is given in Figure 22. Clearly the seeding density is much higher than that seen in the laminar flow experiment. The higher seeding density is necessary in order to characterize the more complex flow.

It is also necessary to identify the grid point locations in the view. The grid is used to register the two stereo views, and to provide a sense of scale to the image. The grid consists of very small holes drilled in a sheet of plastic and filled with fluorescent paint. The grid is attached to the rear of the tank where it does not obstruct the view of the flow (see Figure 5). The grid points are simply large circles in the image, and the identification algorithm will simply dismiss them as being too large. Therefore for this experiment the algorithm was modified to output these consistently oversized identified particles to a disk file. The result of retaining particles greater than 200 pixels is given in Figure 23. There are a number of non-grid points identified in the image. These can be removed by requiring an identified grid point to have three frame components of approximately equal size.

Plate XVII: (a) Pseudo-color representation of Plate XII(a). (b) Pseudo-color representation of Plate XII(b).

Figure 22: Particles identified for Plate XVII(a) (• first frame; * second frame; + third frame).

Figure 23: Rough grid point locations identified for Plate XVII(a).

These requirements will also eliminate some grid points, but will leave enough to reconstruct the grid, knowing that the original points are equally spaced one inch apart (Figure 24). These data will eventually be used with the particle (vector) tracks to compute depth information.
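The grid-point screening can be sketched as below. The equal-size requirement across the three frame components is from the text; the additional requirement that the components lie at nearly the same location (reasonable, since the grid is stationary) and both tolerance values are assumptions.

```python
def grid_points(oversized, size_tol=0.25, dist_tol=5.0):
    """Filter rough grid-point candidates: a true grid point is
    stationary, so it should appear in all three frames with
    components of approximately equal size at nearly the same
    location.  `oversized` is a list of (x, y, npix, frame)
    candidates with npix > 200; returns averaged (x, y) centers.
    size_tol and dist_tol are hypothetical tolerances."""
    by_frame = {1: [], 2: [], 3: []}
    for x, y, npix, frame in oversized:
        by_frame[frame].append((x, y, npix))
    kept = []
    for x1, y1, n1 in by_frame[1]:
        for x2, y2, n2 in by_frame[2]:
            for x3, y3, n3 in by_frame[3]:
                close = (abs(x1 - x2) <= dist_tol and abs(y1 - y2) <= dist_tol
                         and abs(x2 - x3) <= dist_tol and abs(y2 - y3) <= dist_tol)
                lo, _, hi = sorted((n1, n2, n3))
                similar = hi - lo <= size_tol * lo
                if close and similar:
                    kept.append(((x1 + x2 + x3) / 3, (y1 + y2 + y3) / 3))
    return kept
```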

4.3.2 Particle Tracking

The particle tracker algorithm presented in section 4.2.2 was shown to have good performance on simulated images containing as many as 400 vectors with a maximum initial frame-to-frame displacement of between 5 and 10 pixels. These image requirements are satisfied by the particle tracks shown in Figure 22. The average frame-to-frame displacement is less than that for the laminar flow, verging on the minimum allowed. For the tracker parameters given in Table 7, the vectors found by the tracker are given in Figure 25(a-b).

There is a general sense of flow to the left and towards the bottom in Figure 25(b). As the area digitized lies below the plane of the mixing impeller, it would seem the reverse flow must exist in the area above that being viewed. This general flow pattern is consistent with an impeller that pushes down and away from its central axis.

4.3.3 3-D Flow Analysis

For the right hand view of the turbulent flow given in Plate XVII(b), the identified particles are given in Figure 26. The grid points are given in Figure 27 (cleaned up in a similar way to Figure 24). The vector tracks, using the same tracker parameters given in Table 7, are given in Figure 28(a-b).

Figure 24: Result of simple processing on the rough grid point locations identified in Figure 23.

Table 7: Tracker parameters used for turbulence work [Figures 25(a-b) and 28(a-b)].

    Iteration   Δy_max   TOL_v   TOL_θ
        1          5      1.0     1.0
        2         10      0.8     1.0
        3          5      1.0     1.3
        4         10      1.0     1.3

Figure 25: (a) Particle tracks identified for the particles in Figure 22, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 22, vectors only.

Figure 26: Particles identified for Plate XVII(b) (• first frame; * second frame; + third frame).

Figure 27: Grid points, after processing, for Plate XVII(b).

Figure 28: (a) Particle tracks identified for the particles in Figure 26, including particle locations (• first frame; * second frame; + third frame). (b) Particle tracks identified for the particles in Figure 26, vectors only.

In Figure 28(b) there are several areas noticeably devoid of tracks. This is due, in part, to the simplistic binarization operation performed on the images. Thresholding was done without in-depth analysis of local intensities. The lighting within the experiment was not perfectly uniform, and the thresholding used could easily have been biased towards areas with direct lighting. It is also possible that the flow in these areas moves the particles out of direct lighting, but the overall flow pattern does not seem to suggest this. Finally, the right hand view is slightly out of focus; this is a problem in the stereo lens itself, but it would blur particle images and enhance the errors induced by the simple thresholding used.

It was decided not to proceed with the 3-D matching, as the general methodology was developed by Economikos (1988), and accuracy would be considerably improved with the longer particle tracks that the rigorous predictor-corrector tracker Economikos developed would provide.

4.3.4 Discussion

It is harder to quantify the tracker's accuracy in the turbulent flow. From Figures 25(b) and 28(b) the tracker clearly identifies a number of vectors, and generally these vectors tend to validate each other by following a bulk flow sort of behavior. The tracker does not, however, seem capable of providing irrefutable particle tracks with only the three frames of data provided. To enhance its abilities, the frame-to-frame displacement of the particles needs to be increased slightly. Furthermore, additional frame data is needed to correct these paths, by predicting the path's next frame location. This is simply the implementation of the predictor-corrector algorithm as proposed by Economikos (1988), albeit simplified to handle the three points of data available.

4.4 Catalysis

4.4.1 Initial Work

The initial effort for image analysis of the stereo SEM photographs given in Plate I(a-b) was an attempt to write an unassisted analysis program. It was envisioned that the program would take a binary image such as given in Plate XIV, track area borders using the concept of 8-connectedness, and output a list of planar areas and normals with some reference identification back to the original image. It would then remain to match sides of the same crystal to determine the ratio of the basal to side planes, which were indistinguishable by the computer.

It quickly became obvious that the task was overwhelming. Breaks in the edge of an area stopped the tracker. The computer could not be easily programmed to detect natural open-ended areas, such as shown by the arrow in Plate XIV, and would instead track around the exterior of the area once the interior was complete. Enough such areas existed in the binary image that it was decided that the experimenter would have to guide the computer not only to the crystal surface, but to the major points defining the boundaries of the crystal surface as well. To do so required the use of a graphics-capable display, some tool for pointing and inputting location data, and the ability to do so conveniently.

The project at this stage became less an application of image analysis and more applied computer graphics. Image analysis of the catalyst images, at least in the sense as has been defined herein, is thus not particularly practical given the aims of this phase of the project.

4.4.2 Graphics Display

The computer hardware chosen for displaying and analyzing the catalyst images was the DEC VAXstation 3100s currently operated by KCGL. These machines have color monitors capable of displaying two simultaneous digitized images, using 256 out of 16 million possible colors, and are mouse-equipped for pointing and inputting data. Programming was in Fortran, using X-Windows for the graphics routines and handling.

4.4.3 General Program

The program displays two stereo SEM photographs side by side on the computer display. The user has several tools available, including zooming, area clearing, and area filling, and several colors are provided to aid in detailing the process. The program centers on an event-handler loop, wherein mouse calls are interpreted as locations or commands and acted upon accordingly. The program (CATA.FOR) is given in Appendix C; due to its complex nature, no simple flowchart is feasible. The graphics display is shown in Plate XVIII.

The analysis of the stereo SEM photographs requires that the plane be identified in each of the two stereo views, and that this be done consistently. The user must input a series of points defining the area in one view, complemented by the equivalent points defining the same area as seen in the opposite view. The points must be entered in the same order in each view, but it makes no difference which view is detailed first.

With one side detailed, the user can request a fill operation. This has two effects. First, the area outlined by the chosen points will be shown. Second, guidelines from the chosen points will be drawn in the opposite view to aid in choosing the corresponding points on the stereo area. An example of this is given in Plate XIX. Two filled areas, representing a stereo pair, are shown in Plate XX.

Plate XVIII: Base graphics setup for SEM micrograph analysis. The digital image of Plate I(a) is on the left, and the digital image of Plate I(b) is on the right.

4.4.4 Depth Determination

Given an optical center (0,0,0), denoted by point O in Figure 29, and knowing the effective view angle (tilt, α), the depth of a point P relative to the depth of the optical center can be easily determined. The SEM enjoys the feature that a 3-D image is effectively collapsed onto a 2-D plane without significant perspective distortion, due to its relatively long focal length. With this, depth determination becomes a matter of simple trigonometry.

For the notation given in Figure 29, the following equations hold:

\[ \sin\beta = \frac{z}{d} \tag{4.4} \]

\[ x_l = d\cos(\beta + \alpha) \tag{4.5} \]

\[ x_r = d\cos(\beta - \alpha) \tag{4.6} \]

Combining 4.5 and 4.6 yields

\[ \Delta x = x_r - x_l = d\left(\cos(\beta - \alpha) - \cos(\beta + \alpha)\right) \tag{4.7} \]

From trigonometric relations,

\[ \cos(\beta - \alpha) = \cos\beta\cos\alpha + \sin\beta\sin\alpha \tag{4.8} \]

\[ \cos(\beta + \alpha) = \cos\beta\cos\alpha - \sin\beta\sin\alpha \tag{4.9} \]

Plate XIX: Area in left image after its corners have been marked, showing fill and guides to the right image.

Plate XX: Right-hand image of Plate XIX has now been marked, and depth output.

Figure 29: Geometry notation for the catalysis problem.

Substituting in Eq. 4.7 and reducing,

\[ \Delta x = 2d\sin\beta\sin\alpha \tag{4.10} \]

Finally, substituting Eq. 4.4 into Eq. 4.10 and rearranging yields

\[ z = \frac{\Delta x}{2\sin\alpha} \tag{4.11} \]

Equation 4.11 indicates that depth is simply a function of the parallax between points in the two images. For digital data, however, Δx may take on only 10 to 15 unique values for a constant α.
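The depth relation of Eq. (4.11) and the quantization just described can be illustrated with a short sketch. The code is Python rather than the Fortran of the original programs, and the tilt angle used is an arbitrary example, not a value from the experiments.

```python
import math

def depth(dx_pixels, tilt_deg):
    """Depth relative to the optical center from stereo parallax,
    z = dx / (2 sin(alpha)), per Eq. (4.11).  dx is the measured
    disparity in pixels; alpha is the tilt angle."""
    return dx_pixels / (2.0 * math.sin(math.radians(tilt_deg)))

# With integer pixel disparities, depth is quantized: a range of 11
# possible disparities yields only 11 distinct depth values, however
# finely the scene itself varies.  (5-degree tilt is illustrative.)
depths = sorted({depth(dx, 5.0) for dx in range(0, 11)})
print(len(depths))  # 11
```

Increasing the tilt α shrinks each depth step, which is why a larger tilt angle improves the depth resolution.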

To smooth the depth data obtained using Equation 4.11, the 3-D points representing the plane were fit by linear least squares (LLS) to the planar equation

\[ z = Ax + By + C \tag{4.12} \]

The LLS representation for the data set \((x_i, y_i, z_i)\) for an equation of the type given in Eq. (4.12) is

\[ LLS = \sum_{i=1}^{n}\left(z_i - \hat z_i\right)^2 = \sum_{i=1}^{n}\left[z_i - (A x_i + B y_i + C)\right]^2 \tag{4.13} \]

A, B, and C in Equation 4.12 can be determined by taking the partial derivatives of Equation 4.13 with respect to A, B, and C and setting them to zero. This yields the system of equations

\[ A\sum_{i=1}^{n} x_i^2 + B\sum_{i=1}^{n} x_i y_i + C\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} x_i z_i \tag{4.14} \]

\[ A\sum_{i=1}^{n} x_i y_i + B\sum_{i=1}^{n} y_i^2 + C\sum_{i=1}^{n} y_i = \sum_{i=1}^{n} y_i z_i \tag{4.15} \]

\[ A\sum_{i=1}^{n} x_i + B\sum_{i=1}^{n} y_i + nC = \sum_{i=1}^{n} z_i \tag{4.16} \]

or

\[ \begin{bmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{bmatrix} \tag{4.17} \]

from which A, B, and C can be determined using matrix mathematics. Once the plane equation is found, its unit normal vector \(\hat n\) can be computed as

\[ \hat n = \frac{A\,\hat\imath + B\,\hat\jmath - \hat k}{\sqrt{A^2 + B^2 + 1}} \tag{4.18} \]
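The fit of Eqs. (4.13)-(4.18) can be sketched as follows. This is a Python illustration, not the original Fortran (CATA.FOR); it forms the normal-equation sums, solves the 3x3 system by Gaussian elimination, and normalizes the plane's normal vector. The sample points are invented and lie exactly on a known plane so the answer can be checked.

```python
import math

def fit_plane(pts):
    """Least-squares fit of z = A*x + B*y + C to 3-D points via the
    normal equations; returns (A, B, C) and the unit normal."""
    n = len(pts)
    sx  = sum(x for x, y, z in pts);   sy  = sum(y for x, y, z in pts)
    sz  = sum(z for x, y, z in pts)
    sxx = sum(x*x for x, y, z in pts); syy = sum(y*y for x, y, z in pts)
    sxy = sum(x*y for x, y, z in pts)
    sxz = sum(x*z for x, y, z in pts); syz = sum(y*z for x, y, z in pts)
    # Augmented 3x3 system, one row per normal equation.
    M = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz ]]
    # Gauss-Jordan elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    A, B, C = (M[i][3] / M[i][i] for i in range(3))
    norm = math.sqrt(A*A + B*B + 1.0)
    return (A, B, C), (A / norm, B / norm, -1.0 / norm)

# Points chosen to lie exactly on z = 2x + 3y + 1 (illustrative data):
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
coeffs, normal = fit_plane(pts)
print(coeffs)  # ~ (2.0, 3.0, 1.0)
```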

4.4.5 Area Determination

The nature of a given plane is not known a priori; in particular, the area can have some degree of concavity, which makes triangular dissection to determine the area impractical. To allow for this possibility, a more brute-force area determination method was used.

After the plane equation as given in Eq. (4.12) is found, the angle of the plane with a second plane \(z = A'x + B'y + C'\) can be found using

\[ \cos\theta = \frac{AA' + BB' + 1}{\sqrt{A^2 + B^2 + 1}\,\sqrt{A'^2 + B'^2 + 1}} \tag{4.19} \]

The 3-D area of interest has a 2-D projection onto the xy plane, as shown in Figure 30. This 2-D projection is effectively the area bounded by points which are the average of the xy positions given in the left and right views of the stereo pair. These points can be drawn to a "ghost" image, a temporary store in the computer's memory used to hold the 2-D projection in the xy plane. Line-drawing algorithms can be used to fully outline the area, and the pixels in the interior of the now-bounded area counted.

Figure 30: 3-D area projection to a 2-D area in the xy plane.

Line scanning, which is simply determining the boundaries of the area by looking for its edges along each line, is a simple method of filling, or counting, an area. Line scanning can be done in the horizontal or vertical direction, but by convention is usually done horizontally. If the area is concave at any point, however, line-scanning algorithms alone are insufficient. Figure 31(a-c) shows three simple concave situations. Figure 31(a) is horizontally concave, and would be properly counted if line scanning were done horizontally. Figure 31(b) is vertically concave, and would be properly counted if line scanning were done vertically. Figure 31(c) is both horizontally and vertically concave, and its area would not be properly counted.

Counting an area bounded by a curve such as that given in Figure 31(c) can be done using complex area-filling algorithms. The concavity represented by Figure 31(c) is about as complex as can be expected in the catalyst images, however, and a brute-force counting method using line scanning can be employed in this case. Figure 32(a-c) represents this approach. First, the area shown in Figure 31(c) is temporarily filled by horizontal line scanning [Figure 32(a)]. Second, the area is again filled, using vertical line scanning [Figure 32(b)]. Finally, those pixels filled by both the horizontal and the vertical line scanning are retained and counted [Figure 32(c)]; unfilled pixels are not within the area's bounding limits, and pixels filled by only one of the two scans lie within the concave region and are likewise excluded.
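This dual-scan counting can be sketched as follows, in Python rather than the original Fortran. The helper names and the demonstration region (a square with a slot cut into one side) are invented for illustration; only per-line extremes matter, so either the outline or the full pixel set may be passed.

```python
def scan_fill(pts, horizontal=True):
    """Fill between the extreme pixels on each scan line (or column),
    i.e. simple line scanning.  `pts` is a set of (row, col) pixels:
    either the area's outline or its full pixel set."""
    key, other = (0, 1) if horizontal else (1, 0)
    lines = {}
    for p in pts:
        lines.setdefault(p[key], []).append(p[other])
    filled = set()
    for k, vals in lines.items():
        for v in range(min(vals), max(vals) + 1):
            filled.add((k, v) if horizontal else (v, k))
    return filled

def count_concave_area(pts):
    """Brute-force fill of a possibly concave area: keep only pixels
    filled by BOTH the horizontal and the vertical scan."""
    return scan_fill(pts, True) & scan_fill(pts, False)

# Illustrative region: a 5x5 square minus a slot entering from the
# right edge.  True pixel count is 25 - 3 = 22.
region = {(r, c) for r in range(5) for c in range(5)} - {(2, 2), (2, 3), (2, 4)}
print(len(scan_fill(region, horizontal=False)))  # 25 -- one scan overcounts
print(len(count_concave_area(region)))           # 22 -- intersection is correct
```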

Figure 31: Concavity possibilities. (a) Horizontally concave. (b) Vertically concave. (c) Both horizontally and vertically concave.

Figure 32: Brute-force filling of a simple 2-D concave area [see Figure 31(c)]. (a) Result of horizontal line scanning and filling. (b) Result of vertical line scanning and filling. (c) Summation of (a) and (b), with the common area being the desired fill.

To determine the 3-D area given the 2-D projection, the angle between the plane containing the 3-D area and the plane containing the 2-D projection must be determined. The 2-D projection is contained within the xy plane, which has the equation z = 0, or A' = 0 and B' = 0. Substituting this in Eq. (4.19) yields

\[ \cos\theta = \frac{1}{\sqrt{A^2 + B^2 + 1}} \tag{4.20} \]

The determination of the 3-D area from the 2-D projection is then simply a matter of dividing by the cosine of the angle between the 3-D area's plane and the 2-D projection's plane, or

\[ \text{Area}_{3D} = \frac{\text{Area}_{2D}}{\cos\theta} \tag{4.21} \]
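Eqs. (4.20)-(4.21) reduce to a one-line correction of the pixel count, sketched below in Python (illustrative only; the plane coefficients are made up).

```python
import math

def area_3d(pixel_count, A, B):
    """Correct a 2-D projected pixel count to the true 3-D plane area:
    divide by cos(theta) = 1 / sqrt(A^2 + B^2 + 1), per Eqs. (4.20)-(4.21)."""
    return pixel_count * math.sqrt(A*A + B*B + 1.0)

# A plane with A = 1, B = 0 makes a 45-degree angle with the xy plane,
# so the true area is sqrt(2) times the projected pixel count:
print(area_3d(100, 1.0, 0.0))  # ~141.42
```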

4.4.6 Results

Hernandez (1987) studied five crystals in the image pair given in Plate I(a-b), and reported an average basal to side plane area ratio of 3.8. Hernandez' depth equation, Eq. (11) in his original work, is incorrect, as it assumes the tilt occurs around the viewer's observation point. Instead, as shown in Figure 29, the tilt is measured from a fixed optical center in the image. However, the error is constant and does not affect his ratio results.

A more serious error in Hernandez' work, however, was the underlying geometric assumptions upon which he based his derivation. Hernandez corrected the observable 2-D area of a plane by determining the depth difference between opposite ends of a crystal. He did this by overlapping one end of the crystal and reading the disparity at the opposite end. Since overlap will consistently occur at the same y displacement, this effectively limits the normal of the 3-D crystal plane to be parallel to the plane of the optical axis. Hernandez did not document which planes he analyzed, however, so it cannot be estimated how large an effect this error had on his calculations.

For this work, three areas were analyzed. The results are given in Table 8 for the areas indicated in Plate XXI.

Table 8: Areas, normals, and ratios for crystals indicated in Plate XXI.

Area #  Side type  Number      3-D area          Unit normal vector          Ratio
                   on crystal  (square pixels)
1       Basal      2            9163.8           -0.571i + 0.423j + 0.704k
        Side       2             903.4           -0.405i + 0.876j - 0.262k
        Side       2             513.0           +0.629i + 0.105j - 0.770k
        Side       2             389.4           +0.013i + 0.443j - 0.896k
        Side       2            1355.1           -0.619i - 0.172j - 0.766k   2.89
2       Basal      2            9471.2           -0.918i + 0.317j - 0.238k
        Side       2            4547.2           -0.220i - 0.824j - 0.523k
        Side       2            1441.8           +0.209i - 0.075j - 0.975k   1.58
3       Basal      2           12451.2           -0.838i + 0.234j + 0.492k
        Side       4             585.6           -0.282i + 0.839j - 0.465k
        Side       4             715.2           +0.306i - 0.269j + 0.913k   4.79

The wide variance in the calculated ratio can be attributed to a number of factors. First, estimates had to be made of the extent of the various planes, since several were partially hidden. This problem can be alleviated by obtaining images with significantly fewer crystals in view.

Second, a number of the side planes in the chosen crystals were at extreme angles. Although this leads to the highest accuracy for the points in the depth equation, the 2-D observable area is also very small and highly prone to error.

Third, even though the images were carefully digitized, scaled, and rotated so as to be on approximately the same y-level, there is still some degree of mismatch. This is less noticeable for areas completely contained within a small Δy, but could be more significant for areas spread out over a larger Δy. In either case, however, it makes it more difficult to accurately identify corner points in the areas.

Further work needs to be done to improve the accuracy. As mentioned above, images with fewer crystals in view will make a large improvement. Individual crystals can be digitized at the maximum viewable resolution, providing the largest possible (digital) disparity, Δx, and thereby improving the depth resolution.

4.5 General Results

Each of the three projects analyzed herein utilizes multiple frames to determine its information. Accuracy in two of the projects, the laminar flow and the catalysis surface structure, is hampered by the lack of proper registration marks in the original images. This requires the operator to individually scale and rotate the images until a reasonable match is made. This process will almost always introduce error.

Plate XXI: Stereo SEM catalyst image with areas analyzed for Table 8 marked.

The laminar and turbulent flow problems are visually and analytically simple, and the computer can be programmed to work essentially unattended on the images. The catalysis images, however, are visually much more complex. Once the corner points of the areas are determined, the analysis is very simple (effectively post-processing of the point data). Getting the corner points, however, is a much more difficult task to program. Complex images tend to have this analytical barrier, and if they can be simplified the programming load can be decreased. The easiest way to simplify the catalyst images would be to significantly reduce the number of crystals in the view, as suggested in section 4.4.6.

CHAPTER V

CONCLUSIONS

In this project, the technique of imaging was successfully applied to a trio of three-dimensional experiments: laminar flow viewed orthogonally, turbulent flow viewed stereoscopically, and catalyst surface structure viewed stereoscopically. For these three experiments, both the acquisition methods (35 mm still film, 16 mm movie film, SEM prints; orthogonal and stereoscopic views) and the studied extractable quantity (velocity, a dynamic variable; surface area, a static variable) were varied.

For the fluid flow problems, the following conclusions can be made:

1. Image summation of binary images provides two valuable advantages. First, it reduces the image storage requirements by a factor of 3, and could be used to reduce those requirements by a factor of 8, without loss of data. This reduction applies both in permanent (disk) storage and in the more critical temporary (RAM) storage during analysis. Second, an operator viewing a summed image can easily spot particle tracks, and can quickly verify the accuracy of the tracking output.

2. A modified particle identification algorithm was developed that works well with the summed images and takes into account the expected digital representation of the viewed particles.

3. A simplistic tracker was developed, using the basic concepts of the predictor-corrector algorithm, which does an adequate job of matching up potential particle tracks. It is applicable for frame-to-frame displacements of between 5 and 10 pixels and up to 400 vectors. For this range the tracker is capable of a 90% success rate within a very short time frame (about 1 second).

For laminar flow, the following conclusions can be made:

1. The summation algorithm works well for this application, especially as it can be implemented entirely in hardware at significant time savings.

2. The general identification and tracking algorithms performed well for this application, capable of providing results about once per minute, or effectively in real time (within the time frame of this experiment, which is 30 hours).

3. Orthogonal viewing has too much obstruction (heater coils), preventing good view-to-view vector matching for determining true 3-D particle tracks. Small to medium angle viewing (stereo) should be used instead to improve the matching ability. Note that locally adaptive thresholding will decrease the obstruction problem, but will increase computational overhead.

4. Accuracy is limited due to a lack of proper registration points in the two views of the image. Each view's time sequence of frames had to be individually rotated and scaled, introducing some uncertainty in position as there were no points to match upon. With a true video-based system this will still be required to match the two views, but within each view the frames will be automatically registered.

For turbulent flow, the following conclusions can be made:

1. The summation and identification algorithms worked well for this application.

2. The tracker performed adequately, providing a reasonable view of the flow. The output is sufficient as a starting point for the rigorous predictor-corrector tracking. As a starting point, the need is to provide likely paths for the predictor-corrector to check and verify. Once the predictor-corrector confirms tracks, they can be removed from the lists and the tracker passed over the remaining points. This is effectively an iterative combination of the simple tracker and the predictor-corrector algorithm, and should increase the number of particles that can be tracked in an individual view.

3. Stereo viewing performed well, except that one view is still slightly out of focus due to a lens problem.

4. Stereo matching should be done after implementation of the predictor-corrector tracking, which will extend the vector tracks to 7 or more frames. The longer tracks will be easier to match between the views.

For catalysis, the following conclusions can be made:

1. Unassisted analysis of the catalyst images is not feasible given current computer technology and imaging concepts.

2. Operator-assisted computer analysis of the catalyst images, including automatic area outline and corner point identification, is not impossible but significantly more involved than first thought.

3. Computer-assisted operator analysis of the catalyst images, being effectively a computer graphics application, was implemented as an alternative to (1) and (2), above. Even this solution pushes the limits of relatively common computer hardware.

4. Given that an area's corner points can be adequately determined in each view, the computer can efficiently and easily determine both the area and the orientation of the 3-D plane described by those points.

5. The accuracy of the 3-D area determination is dependent on the accuracy of the depth (z) determination. As the maximum disparity (Δx) is rather limited, any depths calculated can only be considered rough estimates. The maximum disparity can be increased by increasing the view angle, α. Partially because of this limited disparity, the calculated ratio of basal to side plane area varies widely.

6. The accuracy of the 3-D area determination is highly dependent on the viewable area of the crystal. Care must be taken to obtain SEM photographs with more isolated crystals.

7. As expected, the areas calculated using the program do not show good agreement with Hernandez's (1987) results, which are in error.

In general, the following conclusions can be made:

1. Imaging has the potential of obtaining reasonable quantitative data for an experiment, but to do so the experiment must be carefully constructed to fully show the necessary visual information.

2. Imaging is highly dependent on an operator's input for all but the simplest of analysis techniques.

In summary, this project has detailed the steps in applying imaging to any experiment, and has done so with some success in three specific cases. Three specific algorithms were developed and programmed: particle identification geared to summed images, particle tracking, and interactive area and depth calculation for stereo static images. It also provides insight on improvements to be made for each of the experiments, and outlines potential future work that could take advantage of computer imaging.

CHAPTER VI

RECOMMENDATIONS

The following recommendations can be made as a result of this work:

1. For the specific crystallization flow work of NASA, the 3-D views should be obtained with a small to medium angle separation to eliminate view obstruction.

2. Registration marks should be placed in the common field of view, so as to be able to register sequential frames and the two views.

3. Alternative viewing techniques should also be investigated for the NASA work, potentially even eliminating the need for stereo imaging, as these could also reduce computational overhead.

4. For the turbulence work, which does not suffer from a computational time constraint, the 3-point vectors determined by the tracker just developed should be extended to 5 or more points, using the predictor-corrector algorithm, in an iterative scheme. This is best implemented by a) summing 8 images; b) running the simple tracker to identify 3-point vectors; c) extending the 3-point vectors to 5-point vectors, using an upgraded simple tracker; d) extending the 5-point vectors to 8-point vectors using the full predictor-corrector algorithm; e) removing from the lists all particles successfully tracked over the 8 frames; and f) starting over at b), until no further tracks can be found.

5. For both the laminar and the turbulent flow systems, locally adaptive thresholding techniques should be studied as a means of obtaining binary images.

6. Efforts should be made to obtain direct digital images of the catalyst SEM images, to eliminate digitization errors and the need to individually rotate and translate images.

7. Efforts should also be made to obtain catalyst SEM images with fewer crystals in view, so that full crystals can be easily seen and identified.

8. A larger tilt angle (α) should be tested in the catalysis work for its ability to improve results. Note that the larger tilt angle almost certainly will require that point (7) be implemented.

9. Algorithms for the automatic corner determination of the catalyst areas would greatly improve the usefulness of this tool. In particular, knowledge-based systems might be employed to better segment the viewed image into object descriptions. These algorithms would also benefit from a study of locally adaptive thresholding techniques.

REFERENCES

Adrian, Ronald J., "Image shifting technique to resolve directional ambiguity in double-pulsed velocimetry", Applied Optics, 25 (21), 3855-3858 (1986a)

Adrian, R.J., "Multi-point optical measurements of simultaneous vectors in unsteady flow - a review", Int. J. Heat & Fluid Flow, 7 (2), 127-145 (1986b)

Agüí, Juan C., and J. Jimenez, "On the performance of particle tracking", J. Fluid Mech., 185, 447-468 (1987)

Berger, Marc, Computer Graphics with Pascal, The Benjamin/Cummings Publishing Company, Inc. (1986)

Brodkey, R.S., Wallace, J.M., and J. Lewalle, "The Delta conferences: a discussion of coherent structures in bounded shear flows", The Ohio State University Research Foundation Report, August (1984)

Boyer, K.L., Safranek, R.J., and A.C. Kak, "A Knowledge Based Robotic Vision System", IEEE 1984 Proceedings of the First Conference on Artificial Intelligence Applications, IEEE Computer Society (1984)

Caffyn, J.E., and R.M. Underwood, "An improved method for the measurement of the velocity profiles in liquids", Nature, 169 (4293), 239-240 (1952)

Canaan, R.E., and Y.A. Hassan, "Full-field bubbly flow velocity measurements near a heated cylindrical conductor using digital pulsed laser velocimetry", in Proceedings of the Twelfth Symposium on Turbulence, University of Missouri-Rolla (1990)

Chang, T.P.K., "The development of an automated measurement method for three-dimensional flow fields", Ph.D. Dissertation, Texas A&M University (1983)

Chang, T.P.K., Watson, A.T., and G.B. Tatterson, "Image processing of tracer particle motions as applied to mixing and turbulent flow - I. The technique", Chem. Eng. Sci., 40 (2), 269-275 (1985a)

Chang, T.P.K., Watson, A.T., and G.B. Tatterson, "Image processing of tracer particle motions as applied to mixing and turbulent flow - II. Results and discussion", Chem. Eng. Sci., 40 (2), 277-285 (1985b)

Economikos, L., "Image processing and analysis of colored particle motions in turbulent flow", Ph.D. Dissertation, The Ohio State University (1988)

Economikos, L., Shoemaker, C., Russ, K., Brodkey, R.S., and D. Jones, "Toward Full-Field Measurements of Instantaneous Visualizations of Coherent Structures in Turbulent Shear Flows", Experimental Thermal and Fluid Science, 3, 74-86 (1990)


Falco, R.E., and C.C. Chu, "Measurement of two-dimensional fluid dynamic quantities using photochromic grid tracing technique", internal report reprinted in Report on Image Processing and Analysis Workshop, The Ohio State University (1988)

Gad-el-Hak, Mohamed, "The use of the dye-laser technique for unsteady flow visualization", Journal of Fluids Engineering, 108 (1), 34-38 (1986)

Goldstein, Joseph I., Newbury, Dale E., Echlin, Patrick, Joy, David C., Fiori, Charles, and Eric Lifshin, Scanning Electron Microscopy and X-Ray Microanalysis, Plenum Press (1981)

Hanzevack, Emil, "Concentration by laser image processing", Chem. Eng. Prog., 82 (1), 47-50 (1986)

Hernandez, Reinaldo A., "The effect of structural specificity of MoO3 catalyst in selective and complete oxidation of 1-butene, 1,3-butadiene, furan and maleic anhydride", MS Thesis, The Ohio State University (1987)

Hernandez, Reinaldo A., and Umit S. Ozkan, "Structural specificity of molybdenum trioxide in C4 hydrocarbon oxidation", I&EC Research, 29, 1454-1459 (1990)

Hesselink, Lambertus, "Digital image processing in flow visualization", Annual Review of Fluid Mechanics, 20, 421-485 (1988)

Horn, Berthold Klaus Paul, Robot Vision, The MIT Press, Cambridge, MA (1986)

Kiritsis, Nikolaos, "Statistical investigation of errors in particle image velocimetry", MS Thesis, The Ohio State University (1989)

Lakshmanan, Kris, "Quantitative computer image processing of color particle markers in flow visualization", Ph.D. Dissertation, The Ohio State University (1986)

Landreth, C.C., Adrian, R.J., and C.S. Yao, "Double pulsed particle image velocimeter with directional resolution for complex flows", Experiments in Fluids, 6 (1), 119-128 (1988)

Levine, Martin D., Vision in Man and Machine, McGraw-Hill (1985)

Praturi, A.K., and R.S. Brodkey, "A stereoscopic visual study of coherent structures in turbulent shear flow", Journal of Fluid Mechanics, 89, 251-272 (1978)

Racca, R.G., and J.M. Dewey, "A method for automatic particle tracking in a three-dimensional flow field", Experiments in Fluids, 6, 25-32 (1988)

Russ, Keith M., "Particle color considerations in flow system image acquisition", MS Thesis, The Ohio State University (1988)

Schalkoff, Robert J., Digital Image Processing and Computer Vision, John Wiley & Sons (1989)

Sheu, Y.-H. E., T.P.K. Chang, and G.B. Tatterson, "A three-dimensional measurement technique for turbulent flows", Chem. Eng. Commun., 17, 67-83 (1982)

Smith, Charles L., "Application of high speed videography for study of complex three-dimensional water flows", SPIE Vol. 318 High Speed Photography, San Diego (1982)

Sotak, G.E., and K.L. Boyer, "Comments on 'Fast Convolution with Laplacian-of-Gaussian Masks'", IEEE Transactions on Pattern Analysis and Machine Intelligence, 11 (12), 1329-1332 (1989)

Toy, N., and C. Wisby, "Real-time image analysis of visualized turbulent flows", in Proceedings of the Eleventh Symposium on Turbulence, University of Missouri-Rolla (1988)

Utami, T., and T. Ueno, "Visualization and picture processing of turbulent flow", Experiments in Fluids, 2, 25-32 (1984)

Utami, T., and T. Ueno, "Experimental study on the coherent structure of turbulent open-channel flow using visualization and picture processing", J. Fluid Mech., 174, 399-440 (1987)

Watt, Ian M., The Principles and Practice of Electron Microscopy, Cambridge University Press (1985)

Weinstein, L.M., Beeler, G.B., and A.M. Lindemann, "High-speed holocinematographic velocimeter for studying turbulent flow control physics", AIAA Shear Flow Control Conference, March 12-24 (1985)

Weinstein, Leonard M., and George B. Beeler, "Flow measurements in a water tunnel using a holocinematographic velocimeter", Fluid Dynamics Panel Symposium on Aerodynamics and Related Hydrodynamic Studies Using Water Facilities, October 20-23 (1986)

Wilcox, Neal A., Watson, A. Ted, and Gary Tatterson, "Multispectral image processing of temperature sensitive tracer particles", Chem. Eng. Sci., 41 (8), 2137-2152 (1986)

Willert, C., and M. Gharib, "Three-dimensional Particle Tracking by a Single Camera", paper presented at the 43rd Annual Meeting of the APS/DFD, Ithaca, NY, November 18-20 (1990)

Yang, Wen-Jei (ed.), Handbook of Flow Visualization, Hemisphere Publishing Corporation (1989)

Young, Tzay Y., and King-Sun Fu, Handbook of Pattern Recognition and Image Processing, Academic Press, Orlando, FL (1986)

Appendix A

PART_ID.FOR


C C PART.ID.FOR C C This program takes an ASCII image (formatted output C from program XF11_T0_ASCII.FOR) and id e n tifie s and C outputs p a rtic le locations by grey. I t is meant to work C with a 4-binary summed image, using 1, 2, 4, and 8 as C the binary image multipliers. Output is directed to C F0R008.DAT- C C------C C The structure /AREA.TYPE/ is used to hold the particle C information. Particle centers are determined by presuming them C to be reasonably shaped, ie average between min & max C extensions are used. NUM.PIX is used to count actual particle C size. COLOR is actually grey level, ie 1, 2, 3, or 4. C STRUCTURE /AREA.TYPE/ INTEGER MINHOR.MAXHOR INTEGER MINVER,MAXVER INTEGER NUM.PIX INTEGER COLOR END STRUCTURE RECORD /AREA.TYPE/ AREA(2000) STRUCTURE /SORTED.AREA.TYPE/ REAL XC,YC INTEGER NUM.PIX INTEGER NEXT,AVAIL,LINK,ACT END STRUCTURE RECORD /SORTED.AREA.TYPE/ SA(4,1000) C C IMAGE holds the 512x512 image array. Because of the size of C th is array, th is program works sig n ifican tly b etter on the C VAX 8550 than on the MicroVAX. IMTE is temporary storage; C this always contains the currect line and the line directly C above i t . Makes id en tificatio n a lo t easier. C INTEGER IMAGE(512,512),X,Y,JUNK(54),IMTE(2,512,5) CHARACTER*20 FILENAME INTEGER AREA.COUNT,JUMP,COUNT(4 ),XEXTENT,YEXTENT INTEGER I , J ,12,J2,13,J3,PSEUD0.GREY INTEGER 0RIG(8),0RIG2(8),0RIG3(8),0RIG4(8) INTEGER PSEUDO.AREA,MIN.PIX,MAX.PIX,COL,NUM C C Set some arb itrary tolerances on p a rtic le sizes, where C MIN.PIX represents the minimum number of pixels to define a 139

C p a rtic le , and MAX_PIX defines the maximum number of pixels. C MIN_PIX=4 MAX_PIX=150 T0L=0.5 TYPE*,'Enter ASCII image Filename' READ(5,1) FILENAME 1 F0RMAT(A20) □PEN ( UNIT=3, STATUS='OLD', READONLY, 1 FILE = FILENAME, DEFAULTFILE=' . I ' ) READ(3,*) LINES,PIXELS DO 10 1=1,LINES TYPE*, I READ(3,5) (IMAGE(I.J), J=l,PIXELS) READ(3,5) READ(3,5) 5 FORMAT(1814) 10 CONTINUE AREA_C0UNT=0 C C II and 12 are switching parameters for IMTE, allowing us C to only refill the row needed to be refilled. C 11=1 C C Main iteration loop. Since we check the line above the C current line, start at line 2. C DO 100 1=2,LINES IF (I1.EQ.2) GOTO 220 11=2 12=1 GOTO 230 220 11=1 12=2 230 CONTINUE C C Row iteration loop. Since we check to left and right of C current particle position, loop so that these positions are C occupied. C DO 200 J=2,(PIXELS-l) C C Set IMTE storage. IMTE at each point contains the original C grey level (IMTE(any,any,5)), and particle id for each grey C found in that pixel and identified (IMTE(any,any,1—4)). C 140

      IMTE(I2,J,5)=IMAGE(I,J)
      IMTE(I2,J,1)=0
      IMTE(I2,J,2)=0
      IMTE(I2,J,3)=0
      IMTE(I2,J,4)=0
C
C     Check if background.
C
      IF (IMAGE(I,J).EQ.0) GOTO 200
C
C     DECOM_GREY takes the grey level and returns, in ORIG, a
C     1D matrix representing the original image flags.
C
      CALL DECOM_GREY(IMAGE(I,J),ORIG)
C
C     PSEUDO_GREY is a buildup of identified greys; once it
C     equals IMAGE(I,J), a full match has been found. This
C     is needed since we could match in more than one direction
C     correctly. Continue matching until PSEUDO_GREY is satisfied
C     or all directions checked.
C
      PSEUDO_GREY=0
C
C     Check left, same row...
C
      IF (IMTE(I2,J-1,5).NE.0) GOTO 140
C
C     Check left, row above...
C
  110 IF (IMTE(I1,J-1,5).NE.0) GOTO 150
C
C     Check above, row above...
C
  120 IF (IMTE(I1,J,5).NE.0) GOTO 160
C
C     Check right, row above...
C
  130 IF (IMTE(I1,J+1,5).NE.0) GOTO 170
      GOTO 400
C
C     The following set JUMP parameters from the above checks,
C     so that the next position can be checked if PSEUDO_GREY is
C     unfulfilled.
C
  140 I3=I2
      J3=J-1
      JUMP=1
      GOTO 180
  150 I3=I1
      J3=J-1
      JUMP=2
      GOTO 180
  160 I3=I1
      J3=J
      JUMP=3
      GOTO 180
  170 I3=I1
      J3=J+1
      JUMP=4
  180 CONTINUE
C
C     Check, quick and dirty, if a complete match is made from
C     checked pixel to the current one. If so, ignore all previous
C     matches and substitute, on a one-for-one basis, the matches
C     in IMTE(I3,J3,x).
C
      IF (IMTE(I3,J3,5).EQ.IMAGE(I,J)) GOTO 190
C
C     No quick and dirty. Decompose the grey in the appropriate
C     pixel...
C
      CALL DECOM_GREY(IMTE(I3,J3,5),ORIG2)
C
C     Now check the grey, matrix element by matrix element, against
C     ORIG (ie the pixel greys to be added to some particle,
C     somewhere).
C
      DO 300 ICNT=1,4
      IF (IMTE(I2,J,ICNT).NE.0) GOTO 300
      IF ((ORIG(ICNT).EQ.0).OR.(ORIG2(ICNT).EQ.0)) GOTO 300
C
C     A match has been made; update PSEUDO_GREY, and set
C     IMTE(I2,J,ICNT) equal to the new grey, IMTE(I3,J3,ICNT).
C
      PSEUDO_GREY=PSEUDO_GREY+2**(ICNT-1)
      IMTE(I2,J,ICNT)=IMTE(I3,J3,ICNT)
  300 CONTINUE
C
C     Quick; has PSEUDO_GREY been completed?
C
      IF (PSEUDO_GREY.EQ.IMAGE(I,J)) GOTO 320
C
C     Otherwise, jump back and check rest of directions.
C
      IF (JUMP.EQ.1) GOTO 110
      IF (JUMP.EQ.2) GOTO 120
      IF (JUMP.EQ.3) GOTO 130
C
C     OK, PSEUDO_GREY has not been completely filled, ie a
C     new particle area must be created. Find out colors/areas
C     that need to be created.
C
  400 CALL DECOM_GREY(PSEUDO_GREY,ORIG2)
      DO 330 ICNT=1,4
      IF (ORIG(ICNT).EQ.0) GOTO 330
      IF (ORIG(ICNT).EQ.ORIG2(ICNT)) GOTO 340
C
C     One last check; two to right and above. A perfect sphere,
C     when digitized, will have this sort of structure. If
C     the grey being created already exists at (I1,J+2), then
C     check at (I2,J+1) for the same grey - ie a continuous path.
C     Otherwise, ignore and create a new particle.
C
      CALL DECOM_GREY(IMTE(I1,J+2,5),ORIG3)
      IF (ORIG(ICNT).EQ.ORIG3(ICNT)) GOTO 301
  302 CALL START_NEW_AREA(I,J,AREA_COUNT,ICNT,AREA)
C
C     Particle identification number...
C
      IMTE(I2,J,ICNT)=AREA_COUNT
      GOTO 330
  301 CALL DECOM_GREY(IMTE(I2,J+1,5),ORIG4)
      IF (ORIG4(ICNT).NE.ORIG3(ICNT)) GOTO 302
      IMTE(I2,J,ICNT)=IMTE(I1,J+2,ICNT)
C
C     Matched that guy, so UPDATE that area.
C
  340 CALL UPDATE_AREA(I,J,IMTE(I2,J,ICNT),AREA)
  330 CONTINUE
      GOTO 200
C
C     Update IMTE storage.
C
  190 DO 310 ICNT=1,4
      IMTE(I2,J,ICNT)=IMTE(I3,J3,ICNT)
  310 CONTINUE
  320 CONTINUE
C
C     Complete set of grey level matches, ie no new particles
C     found for a pixel. Update appropriate areas.
C
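The bookkeeping above leans on each composite pixel value being a sum of the component greys 1, 2, 4 and 8, so a value can be split into flags (as DECOM_GREY does) and rebuilt component by component (as the PSEUDO_GREY sum is). A minimal Python sketch of that arithmetic, with function names that are mine rather than the thesis code's:

```python
def decom_grey(grey_level):
    """Split a composite grey value (0-15) into four 0/1 flags,
    one per component grey 1, 2, 4 and 8 (index 0 -> grey 1)."""
    return [(grey_level >> bit) & 1 for bit in range(4)]

def accumulate(orig_flags, matched):
    """PSEUDO_GREY-style running sum: add 2**i for every component
    present in the pixel that has found a neighbour so far."""
    return sum(2 ** i for i in range(4) if orig_flags[i] and matched[i])

# A pixel of grey 11 overlaps the 1, 2 and 8 images (11 = 1 + 2 + 8).
flags = decom_grey(11)
# Once every component has been matched, the sum equals the original
# value and the direction search can stop.
full = accumulate(flags, [1, 1, 1, 1])
```

When only some components are matched the sum falls short of the pixel value, which is exactly the condition that sends the FORTRAN back to check the remaining directions.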

      DO 350 ICNT=1,4
      IF (ORIG(ICNT).EQ.0) GOTO 350
      CALL UPDATE_AREA(I,J,IMTE(I2,J,ICNT),AREA)
  350 CONTINUE
  200 CONTINUE
      WRITE(6,500)I,AREA_COUNT
  500 FORMAT(' After ',I4,' lines, ',I4,' areas have been ',
     1       ' found')
  100 CONTINUE
C
C     Image has now been reduced to identified areas. Check against
C     some preliminary idea of what is being identified, ie against
C     MIN_PIX and MAX_PIX (set at beginning of program), and for
C     a roughly spherical shape...
C
      DO 910 I=1,AREA_COUNT
      IF ((AREA(I).NUM_PIX).LT.(MIN_PIX)) GOTO 911
      IF ((AREA(I).NUM_PIX).GT.(MAX_PIX)) GOTO 911
C
C     PSEUDO_AREA represents the area of the square bounded by
C     AREA's min's and max's. This is comparable to NUM_PIX, knowing
C     roughly what the expected shape is.
C     TOL is used to set the value of the shape parameter;
C     a perfect circle, for example, would be a TOL of
C     0.785 ((pi*d**2/4)/d**2, or (pi/4)). Since this is a
C     finite world, a less restrictive TOL is required
C     (I generally use 0.5, which would allow a 3x3 particle,
C     identified as a '+', to pass (ie 5/9)).
C
      PSEUDO_AREA=(AREA(I).MAXHOR-AREA(I).MINHOR)
      PSEUDO_AREA=PSEUDO_AREA*(AREA(I).MAXVER-AREA(I).MINVER)
      IF ((TOL*PSEUDO_AREA).GT.(AREA(I).NUM_PIX)) GOTO 911
      XEXTENT=(AREA(I).MAXHOR-AREA(I).MINHOR+1)
      YEXTENT=(AREA(I).MAXVER-AREA(I).MINVER+1)
      IF ((MAX(XEXTENT,YEXTENT)/MIN(XEXTENT,YEXTENT)).GT.6)
     1     GOTO 911
C
C     OK, passes all tests... ready for output to FOR008.DAT
C
      COUNT(AREA(I).COLOR)=COUNT(AREA(I).COLOR)+1
      XC=FLOAT(AREA(I).MAXHOR+AREA(I).MINHOR)/2
      IF (AREA(I).COLOR.NE.4) GOTO 202
  202 YC=FLOAT(AREA(I).MAXVER+AREA(I).MINVER)/2
      X=AREA(I).COLOR
      Y=COUNT(AREA(I).COLOR)
      SA(X,Y).XC=XC
      SA(X,Y).YC=YC

      SA(X,Y).NUM_PIX=AREA(I).NUM_PIX
      IF (X.EQ.1) GOTO 921
      SA(X,Y).NEXT=COUNT((AREA(I).COLOR)-1)
  921 SA(X,Y).AVAIL=1
      SA(X,Y).LINK=0
      SA(X,Y).ACT=I
      MIH=AREA(I).MINHOR
      MXH=AREA(I).MAXHOR
      MIV=AREA(I).MINVER
      MXV=AREA(I).MAXVER
      COL=AREA(I).COLOR
      NUM=AREA(I).NUM_PIX
      WRITE(8,945) I,COL,NUM,XC,YC,MIH,MXH,MIV,MXV
  945 FORMAT(' ',3I6,2F10.1,4I6)
      GOTO 910
  911 XC=FLOAT(AREA(I).MAXHOR+AREA(I).MINHOR)/2
      YC=FLOAT(AREA(I).MAXVER+AREA(I).MINVER)/2
      MIH=AREA(I).MINHOR
      MXH=AREA(I).MAXHOR
      MIV=AREA(I).MINVER
      MXV=AREA(I).MAXVER
      COL=AREA(I).COLOR
      NUM=AREA(I).NUM_PIX
      WRITE(9,945) I,COL,NUM,XC,YC,MIH,MXH,MIV,MXV
  920 FORMAT(' ',2I6,2I8)
  930 FORMAT(' ',2F8.1)
  940 FORMAT(' ',2I6)
  950 FORMAT(' ',2I6,/)
  910 CONTINUE
      WRITE(8,945) 0,0,0,0.0,0.0,0,0,0,0
      WRITE(8,920)0,0,0
      WRITE(8,961)NUMA
      WRITE(9,920)0,0,0
      WRITE(9,961)NUMB
  961 FORMAT(' ',I6)
      STOP
      END
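The acceptance tests above reduce to three cheap checks on each candidate blob. A hedged Python sketch of the same logic (function name and default thresholds are illustrative, not the thesis's values; TOL follows the comment's intent that a blob sparser than TOL of its bounding box is rejected):

```python
def passes_shape_tests(num_pix, minh, maxh, minv, maxv,
                       min_pix=4, max_pix=400, tol=0.5, max_ratio=6.0):
    """Sketch of the PART_ID acceptance tests: a pixel-count window,
    a bounding-box 'fullness' test (a perfect digitised circle gives
    pi/4 ~ 0.785, so TOL = 0.5 is deliberately loose), and a cap on
    bounding-box elongation."""
    if not (min_pix <= num_pix <= max_pix):
        return False
    # Spans (max - min), as in the listing's PSEUDO_AREA.
    pseudo_area = (maxh - minh) * (maxv - minv)
    if num_pix < tol * pseudo_area:     # too sparse to be a blob
        return False
    xext, yext = maxh - minh + 1, maxv - minv + 1
    return max(xext, yext) / min(xext, yext) <= max_ratio
```

For example, a 3x3 '+' (5 pixels, fullness 5/9) passes, while a 1x20 streak fails the elongation cap.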
C
C     Subroutine to decompose the pixel value into its constituent
C     grey level components, namely 1, 2, 4 and 8. These are
C     returned as flags in the ORIG matrix.
C
      SUBROUTINE DECOM_GREY(GREY_LEVEL,ORIG)
      INTEGER GREY_LEVEL,ORIG(8),GTEMP
      GTEMP=GREY_LEVEL
      DO 10 I=1,4
      IF (GTEMP.LT.(2**(4-I))) GOTO 20
      ORIG(5-I)=1
      GTEMP=GTEMP-(2**(4-I))
      GOTO 10
   20 ORIG(5-I)=0
   10 CONTINUE
      END
C
C     Subroutine to start a new area, and initialize its
C     values.
C
      SUBROUTINE START_NEW_AREA(I,J,AREA_COUNT,GREY,AREA)
      STRUCTURE /AREA_TYPE/
        INTEGER NUM_PIX
        INTEGER MINVER,MAXVER
        INTEGER MINHOR,MAXHOR
        INTEGER COLOR
      END STRUCTURE
      RECORD /AREA_TYPE/ AREA(7000)
      INTEGER AREA_COUNT,GREY
      AREA(AREA_COUNT).MAXVER=I
      AREA(AREA_COUNT).MAXHOR=J
      AREA(AREA_COUNT).MINVER=I
      AREA(AREA_COUNT).MINHOR=J
      AREA(AREA_COUNT).NUM_PIX=1
      AREA(AREA_COUNT).COLOR=GREY
      AREA_COUNT=AREA_COUNT+1
      END
C
C     Subroutine to update an area and its parameters.
C
      SUBROUTINE UPDATE_AREA(I,J,AREA_NUM,AREA)
      STRUCTURE /AREA_TYPE/
        INTEGER NUM_PIX
        INTEGER MINVER,MAXVER
        INTEGER MINHOR,MAXHOR
        INTEGER COLOR
      END STRUCTURE
      RECORD /AREA_TYPE/ AREA(7000)
      INTEGER AREA_NUM,IVAL
      IVAL=AREA_NUM
      IF ((AREA(IVAL).MINHOR).GT.J) AREA(IVAL).MINHOR=J
      IF ((AREA(IVAL).MAXHOR).LT.J) AREA(IVAL).MAXHOR=J
      IF ((AREA(IVAL).MAXVER).LT.I) AREA(IVAL).MAXVER=I
      AREA(IVAL).NUM_PIX=AREA(IVAL).NUM_PIX+1
      END

Appendix B

DET_VECT.FOR


C
C     DET_VECT.FOR
C
C     This program takes particle location data (formatted output
C     from PART_ID.FOR) and compiles likely vector matches.
C     It reads sequential matching parameters from a file
C     VECTCONTROL.DAT, and will attempt a 3 point match directly
C     (1->2->3). Output is directed to FOR090.DAT (vectors) and
C     FOR099.DAT (summary).
C
C---------------------------------------------------------------
C
C     The structure /SORTED_AREA_TYPE/ is used to hold the particle
C     information found from PART_ID.FOR. The array SA(4,1000)
C     is of /SORTED_AREA_TYPE/. The first element in the array (1-4)
C     indicates the 'grey' of the particle, while the second is that
C     particle's relative location within its own sorted list.
C     Because of the nature of the particle identification, the
C     particles are already roughly sorted from top to bottom by Yc,
C     where Xc and Yc are located particle centers.
C
C     NEXT is an integer pointer to the element in the next grey
C     that would be the particle immediately above the current Yc of
C     the current color. AVAIL is a flag to indicate the particle
C     has or has not been matched. LINK is an integer pointer at
C     the next particle in a vector, ie to the next color. Because
C     the algorithm is checking 1 -> 2 -> 3, if AVAIL is false
C     (ie 0) for SA(1,any), then SA(1,any).LINK has some non-zero
C     value, and SA(2,SA(1,any).LINK).LINK also has some non-zero
C     value (since a three-point match is required).
C
      STRUCTURE /SORTED_AREA_TYPE/
        REAL XC,YC
        INTEGER NUM_PIX
        INTEGER NEXT,AVAIL,LINK,ACT
        INTEGER ICNTVC
      END STRUCTURE
      RECORD /SORTED_AREA_TYPE/ SA(4,1000),SA_TWO(4,1000),
     1    SA_CORRECT(4,1000)
      CHARACTER*20 FILENAME
      CHARACTER*30 FILEVC
      INTEGER I,J,I2,J2,I3,J3,COUNT(4),MAXDIS,DIRECT,ICNTVC
      REAL TOLV,TOLR,BEST_TOLR,BEST_TOLV,BEST_TEST,BEST_XPTP,WF
      INTEGER BEST_MAXDIS,SUPERPASS
C
C     To prevent data corruption, data file outputs from PARTICLE_ID
C     were renumbered to whatever seemed appropriate at the time
C     (changing output file names correspondingly).
C     At one point, the various iterations were being sent to
C     separate IO values. These IO values were recorded in
C     FOR001.DAT, so later consolidation could be done. This
C     separation of values is almost useless without consolidation
C     of the data files being done, but is required for the
C     consolidation operation (ie VECTORT.FOR). FOR008, FOR031,
C     FOR041 and FOR051 are all commonly used; the program is not
C     limited to these values.
C
C     Read in particle data.
C
      WRITE(6,671)
      WRITE(6,672)
  671 FORMAT(' Enter number for particle data')
  672 FORMAT(' (8 for particle data, 51 for NASA, 31 for TURB,',
     1       ' 10 for SIMULL and 41 for SIMULR)')
      READ(5,*)IO2
      WRITE(1,1010)IO2
C
C     Note the secondary arrays, SA_TWO and SA_CORRECT. SA_TWO is
C     reserved for backwards (in the list) matching, and SA_CORRECT
C     is reserved for matches between SA and SA_TWO.
C
      DO I=1,4
        DO J=1,1000
          SA(I,J).LINK=0
          SA_TWO(I,J).LINK=0
          SA_CORRECT(I,J).LINK=0
          SA(I,J).ACT=0
          SA_TWO(I,J).ACT=0
          SA_CORRECT(I,J).ACT=0
        END DO
      END DO
C
C     Read in data point (format given in PARTICLE_ID).
C
   40 READ(IO2,*)I,COL,NUM,XC,YC
      IF (I.EQ.0) GOTO 30
      IF (IO2.EQ.31) COL=COL-1
      COUNT(COL)=COUNT(COL)+1
      X=COL
      Y=COUNT(COL)
      SA(X,Y).XC=XC
      SA(X,Y).YC=YC
      SA(X,Y).NUM_PIX=NUM
      SA(X,Y).LINK=0
      SA(X,Y).ACT=I
      SA(X,Y).AVAIL=1
      IF (X.EQ.4) GOTO 20
      SA(X,Y).NEXT=COUNT(COL+1)
   20 SA_TWO(X,Y).XC=SA(X,Y).XC
      SA_TWO(X,Y).YC=SA(X,Y).YC
      SA_TWO(X,Y).NUM_PIX=SA(X,Y).NUM_PIX
      SA_TWO(X,Y).NEXT=SA(X,Y).NEXT
      SA_TWO(X,Y).AVAIL=SA(X,Y).AVAIL
      SA_TWO(X,Y).LINK=SA(X,Y).LINK
      SA_TWO(X,Y).ACT=SA(X,Y).ACT
      GOTO 40
   30 CONTINUE
C
C     Read in the file VECTCONTROL, which contains the parameters
C     needed for particle vector matching. Each line in the file
C     is executed sequentially. The order of parameters is TolV,
C     TolR, and the MAXDIS. The file must end with the line
C     "0.0 0.0 0".
C
C     Sample VECTCONTROL.DAT file:
C
C     0.5 0.5 10
C     0.5 0.5 20
C     1.0 1.5 30
C     0.0 0.0 0
C
C     Vector determination. Tolerances are set by the file
C     VECTCONTROL, which has one line per iteration.
C
      TYPE*,'Do you want preset info? (1/0)'
      READ(5,*)IVC
      IF (IVC.NE.1) THEN
        TYPE*,'Please enter control file data name',
     1        ' (usually VECTCONTROL.DAT or FOR003.DAT)'
        READ(5,5050)FILEVC
      ELSE
        FILEVC='VECTCONTROL.DAT'
      END IF
 5050 FORMAT(A30)
      ITER=0
      IO=90
      OPEN(UNIT=2,FILE=FILEVC,STATUS='OLD')
      SUPERPASS=0
      ICNTVC=0
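The control file drives one matching pass per line, with the later (looser) lines only seeing particles that earlier passes failed to claim. A sketch of reading such a schedule (the function is illustrative; the sample values echo the sample file above):

```python
def read_schedule(lines):
    """Parse VECTCONTROL-style lines 'TolV TolR MAXDIS', stopping at
    the 0.0 0.0 0 sentinel; each entry drives one matching pass."""
    schedule = []
    for line in lines:
        tolv, tolr, maxdis = line.split()
        tolv, tolr, maxdis = float(tolv), float(tolr), int(maxdis)
        if tolv == 0.0:          # sentinel line ends the schedule
            break
        schedule.append((tolv, tolr, maxdis))
    return schedule

passes = read_schedule(["0.5 0.5 10", "0.5 0.5 20", "1.0 1.5 30",
                        "0.0 0.0 0"])
```

Running the tight tolerances first keeps the easy, unambiguous tracks from being stolen by looser later passes.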
GOTO (X.EQ.4) 20 IF SA_TWO(X,Y).XC=SA(X,Y).XCSA(X,Y).NUM_PIX=NUMSA(X,Y).YC=YC SA(X,Y).XC=XC END IF ED555) FILEVCREAD(5,5050) YE,Pes etr oto fil aa name', data le i f control enter TYPE*,'Please 151 C be done. For non-simulated images, you can only gather gather only can can you matching images, incorrect non-simulated percentages. and For matched/unmatched accurate correct C images, done. of be simulated C determination for C keeping; Record C C 7 CONTINUE 777 SUPERPASS=SUPERPASS+1734 55 YE, ' TYPE*,' 5555 oooooooo FORMAT(14)1010 ooooooooooo Main routine fo r matching. Every ite ra tio n reduces the the reduces n tio ra ite Every matching. r fo routine Main DIRECT=1 for forward match (SA), DIRECT=-1 for backward backward (SA_TW0). DIRECT=-1 for match (SA), match forward DIRECT=1 for es n ls r rctve parameters. VECTCONTROL e so tiv tric s matching, use re for can s less le and rtic a p less le ib lig e ec err n te ed o err orcin i diminished. is correction) error for need the and error hence point, the number of tracks are so few that overlap (and (and overlap that few so are tracks of number the point, a fte r the error correction, under the premise that at that that at that premise implemented the the mode, is under superpass Superpass the In correction, error disabled. is the place. VECTC0NTR0L.DAT r in fte a is correction basically, error correction counter; error integer an is Superpass 0.0 is opened twice; in the regular mode, the forward/backwards forward/backwards the mode, regular the in twice; opened is

CALL D0.VECT(COUNT, SA,T0LV, TOLR, MAXDIS, DIRECT=1 E D 2, TOLV,READ(2 TOLR, ) ,* MAXDIS YE,Ieain '.ICNTVC+l TYPE*,'Iteration ITER=ITER+1 DIRECT,ICNTVC) 1 F TL.Q(.) THEN (TOLV.EQ.(0.0)) IF I C N T V C = I C N T V C + 1 0PEN(UNIT=2, FILE=FILEVC, STATUS=' OLD') SUPERPASS=0 ICNTVC=0 END IF 0.0 F (SUPERPASS.IF END IF ELSE GOTO 778

GOTO 734 CLOSE(UNIT=2) 0

Eq.l) THEN 152 oooo ooo o o o o o o o o o o o ooo n n o o o o o o Check if Ith p a rticle s are eq u iv alen t... if so, update update so, if t... alen iv u eq are s rticle a p Ith if Check SA.CORRECT. matched... is le rtic a p Ith at th Check hc t t h mth a md i te ot eet t aton n tio ra ite recent most the in made was match the at th Check matching. do not agree, the p a rtic le s are made available for further further for available made are s le rtic a p the comparing by agree, done not is do SA.CORRECT. This g illin f rt ta s to Time vector matches made in SA and SA.TWO; i f they agree, they are are they agree, they SA.TWO; f SA and i in made matches vector found number up Tally considered correct and the entry made in SA.CORRECT. If they they SA.CORRECT. If in made entry the and correct considered A L V C otu i sn t te prpit fie. ile f appropriate the to sent TALLYVECT is output akad mthn (SA.TWO, DIRECT=-1). matching Backwards All percentages are based on the number of grey=l p a rticle s s rticle a p grey=l of number the on based are percentages All ohrie led acutd ... ) r o f accounted already (otherwise nii l present. lly itia in 2 1 1 CALL TALLYVECT(SA.TWO,COUNT(l), ICNTVC,2 TOLV,TOLR,MAXDIS) , 1 F (SUPERPASS.EQ.1) IF THEN CALL DO.VECT(COUNT, DIRECT=-1 SA.TWO, TOLV, TOLR, MAXDIS, CALL TALLYVECTCSA,COUNT( .ICNTVC,1 ) TOLV,TOLR,MAXDIS) 1 , DIRECT,ICNTVC) 1 DO 1=1,C0UNT(1) F S(,)INV.qINV) THEN (SA(1,I).ICNTVC.Eq.ICNTVC) IF F S(,I).LINK.NE.O).AND.(SA_TW0(1,I).LINK.NE.O)) (SA(1, ( IF F (A(,)LN.qS.W( 1).LINK).AND. ,1 (1,I).LINK.Eq.SA.TWO(1 ((SA IF THEN SA.CORRECT(2 ) LINK).LINK=SA(2 . ,I ,SA(1 ) ,SA(1,I SA.CORRECT( ICNTVC=SA( . ) ICNTVC 1 . , ) 1 1 SA.CORRECT(1 , 1 .ACT ).ACT=SA(1 ) ,1 ,I SA.CORRECT( .LINK .LINK=SA(1, ) ) 1 ,1 1 SA.TWO(2 THEN ,SA.TWO(1 ).LINK).LINK)) ,1 S(,A11).LINK).LINK.Eq. ) (SA(2,SA(1,1 .LINK).LINK 153 ooo OOO OOO hc t t t pari e i matched... is le rtic a p Ith at th Check pae ATO ra fr a matches... 
m bad for SA.TWO Update array pae A ra fr a matches... bad for SA array Update .LINK).LINK 1 .EQ.ICNTVC).AND.(SA_CORRECT(1 THEN ).LINK.EQ.O)) ,1 1 .AND.(SA.CORRECT(1 THEN ).LINK.EQ.O)) ,1 1 S(,A2S(, .LINK).LINK).LINK ) SA(3,SA(2,SA(l,1 1 SA(3,SA(2,SA(1,I).LINK).LINK).ACT .LINK).ACT 1 1 ELSE DO 1=1,COUNT(l) END DO F ((SA.TWO(1 ).LINK.NE.O).AND.(SA.TWO(1 ,1 IF ,I).ICNTVC F S( 1).LINK.NE.O).AND.(SA(1,I).ICNTVC.EQ.ICNTVC) ,1 (SA(1 ( IF END IF F (SA(1,I).ICNTVC.EQ.ICNTVC) THENIF END IF END IF SA.TWO( AVAIL=1 . ) 1 , 1 SA.TWO( ICNTVC=0 . ) 1 , 1 SA.TWO(1 ,I).LINK=0 SA.TWO( ,SA.TWO( .LINK).AVAIL=1 2 ) ,1 1 SA.TWO(2 ,SA.TWO( .LINK).LINK=0 ) ,1 1 SA.TWO( ,SA.TWO( 3 ,SA.TWO( .LINK).LINK).AVAIL=1 2 ) ,1 1 A11).AVAIL=1SA(1,1 A11).ICNTVC=0 ) SA(1,1 SA(1,I).LINK=0 I).LINK).AVAIL=1 SA(2,SA(l, SA(2,SA(1,I).LINK).LINK=0 A3S(,Al I).LINK).LINK).AVAIL=1 SA(3,SA(2,SA(l, F (SA.TWO(1 THEN ).LINK.NE.O)IF ,1 END IF SA_C0RRECT(2,SA(1, I).LINK).ACT=SA(2,SA(1,I).LINK) SA_C0RRECT(2,SA(1, I) I).LINK).LINK=SA(2,SA(1, SA_CORRECT(1 .ICNTVC ) SA.CORRECT( ,I).ICNTVC=SA(1 1 , .ACT=SA(1,I).ACT ) ,1 1 SA.CORRECT(1 I).LINK ).LINK=SA(l, ,1 END IF SA.CORRECT(3 ).LINK).LINK).LINK= ,1 ,SA(1 ,SA(2 SA_C0RRECT(3,SA(2,SA(1,1 .LINK).LINK).ACT= ) SA.CORRECT(2,SA(1 LINK).ACT=SA(2 ) . ,I ) ,SA(1 ,I 154 oo o pae ATO ra fr a matches... m bad for SA.TWO Update array .AVAIL .LINK 1 1 S(,A2S(,I).LINK).LINK).AVAIL SA(3,SA(2,SA(1, 1 .QINV)AD(A1I.IKN.) THEN .EQ.ICNTVC).AND.(SA(1,I).LINK.NE.O)) 1 .LINK).AVAIL 1 .LINK).LINK 2 .LINK SA.TWO(2,SA.TWO(1 ).LINK).LINK).AVAIL ,1 1 1 .AND.(SA.TWO(1 THEN ).LINK.NE.O)) ,1 1 .LINK).ACT 2 .LINK=SA_TWO(3 ,SA.TWO(2 LINK) . ,SA.TWO(1 ) ,I 1 .ACT=SA_TWO(3 ,SA.TWO(2 SA.TWO(1 ).LINK).ACT ,SA.TWO(1 ,1 ).LINK) ,1 1 1 SA.TWO(1 ).LINK).LINK ,1 1 SA(3,SA(2,SA(1,I).LINK).LINK).LINK 1 S(,A2S(,I).LINK).LINK).ACT SA(3,SA(2,SA(l, .ACT 1 1 F ((SA.TWO(1 ).LINK.EQ.O).AND.(SA.TWO(1 ,1 ).ICNTVC IF ,1 F S( I).IKE. .N.(A1 ICNTVC.EQ.ICNTVC) . ) I ).AND. 
.LINK.EQ.0 (SA(1, ) ,I (SA(1 ( IF SA.TWO( .AVAIL=SA(1,I).AVAIL ) ,1 1 SA.TWO(1 ).ICNTVC=SA(1 ,1 ).ICNTVC ,1 A T O 1, IKS( I).LINK . LINK=SA(1SA.TWO(1 ) . ) ,I ,I SA.TWO(2 ,SA.TWO(1 ).LINK).AVAIL=SA(2,SA(1, I).LINK) ,1 SA.TWO(2,SA.TWO(1 ).LINK).LINK=SA(2,SA(1,I).LINK) ,1 SA.TWO( ,SA.TWO( 3 ,SA.TWO( .LINK).LINK).AVAIL= ) 2 ,1 1 END IF END IF A 1, AVAIL=SA_TWO(1 . ) ,I SA(1 ICNTVC=SA.TWO(1 ).AVAIL . ,1 ) ,I SA(1 ).ICNTVC .LINK=SA_TWO( ,1 ) I SA(1, ).LINK ,1 1 A 2,A1, LINK).AVAIL=SA_TWO(2 . ) ,I ,SA(1 SA(2 ,SA.TWO(1,1) .LINK).LINK=SA_TWO(2,SA.TWO(1 ) SA(2,SA(1,1 ).LINK) ,1 SA(3,SA(2,SA(1,I).LINK).LINK).AVAIL=SA_TW0(3, END IF L E F (SA.TWO(1 THEN ).LINK.NE.O)ELSE ,1 IF SA_C0RRECT(3,SA_TW0(2,SA.TWO(1 ,I).LINK).LINK) SA_C0RRECT(3,SA.TWO( ,SA.TWO( .LINK).LINK) ) 2 1 , 1 SA.CORRECT(2 ,SA.TWO(1 ).LINK).ACT=SA_TW0(2, ,1 SA.CORRECT( ,SA.TWO( .LINK).LINK=SA_TW0(2, ) 2 SA.CORRECT( 1 , ICNTVC=SA_TWO( 1 . ) 1 , 1 SA.CORRECT( ACT=SA_TWO(l, .ICNTVC . ) ) 1 1 , , 1 1 .ACT ) 1 SA.CORRECT( .LINK=SA_TW0(1,I).LINK ) ,1 1 SA_C0RRECT(3,SA(2 ,I).LINK).LINK).LINK= ,SA(1 SA_C0RRECT(3, S A .LINK).LINK).ACT= (2, SA 1) , 1 ( 155 ooooo ooo TALLYVECT. Final piece of information: determine '/.'a based on most recent recent most on based '/.'a determine information: of piece Final tracking/combination. cumulative on based '/,’s Determine iteration/com bination, non-cumul. Algorithm sim ilar to to ilar sim Algorithm non-cumul. bination, iteration/com F (ICNTNEW.NE.O) IF THEN DO 1=1,C0UNT(1) SA(2,IX).XC,SA(2,IX).YC,SA(3,IY).XC,SA(3,IY).YC 1 .LINK).ACT 1 IWR0NG=0 CALL TALLYVECT(SA_C0RRECT,C0UNT(1).ICNTVC,3, ICNTNEW=0ITHREE_PT=0 THEN 1 IN0T_F0UND=0 TOLV,TOLR,MAXDIS)1 END IF END DO END IF WRITE( ITER.MAXDIS, ) 1 6 ,5 6 TOLV, TOLR, XPNF, XPTP, XPWR, ICNTNEW WRITE( )ITER, MAXDIS, 1 6 ,5 9 9 TOLV,XPWR=FLOAT( TOLR, XPNF, XPTP, XPWR, IWRONG)* ICNTNEWXPNF=FLOAT( O/FLOAT(ICNTNEW) . 
0 0 1 INOT.FOUND)* .O/FLOAT( 0 0 1 ICNTNEW) XPTP=FLOAT( ITHREE.PT)*1 .O/FLOAT(ICNTNEW) 0 0 F S( 1).ICNTVC.EQ.ICNTVC).OR.(SA(1,I).ICNTVC.EQ.O)) ,1 (SA(1 ( IF ENDIF END DO F ((SA.CORRECT(1 THEN ).LINK).EQ.O) IF ,1 I C N T N E W = I C N T N E W + 1 END IF ELSE END IF RT(010 II,BI,A11)X,A11).YC, ).XC,SA(1,1 I,IA,IB,IC,SA(1,1 WRITE(I0,120) F I.QI)AD I.QI) THEN (IB.EQ.IC)) (IA.EQ.IB).AND. ( IF IX=SA.CORRECT( .LINK ) 1 , 1 IC=SA_C0RRECT(3, SA.CORRECT(IB=SA_C0RRECT(2, ,SA.CORRECT( LINK) 2 . SA.CORRECT(IA=SA_CORRECT( ) .LINK).ACT ,I ) 1 1 , 1 .ACT ) 1 , 1 IY=SA_C0RRECT(2,IX).LINK IN0T_F0UND=IN0T_F0UND+1 ENDIF ELSE I W R 0 N G = I W R 0 N G + 1 ITHREE_PT=ITHREE_PT+1 156 2 FRA( '218,2F10.2,18) 8 FORMAT(' 1 ',2 320 2 FRA( ',4I8,6F10.2) FORMAT(' ' 120 7 WIE111) 0 WRITE(1,1010) 778 o o o o o ooo , .2 5 TOLV \F ,' - 4 MAXDIS ',1 - ,13,' ’ n FORMAT( tio Itera ’ 561 Final record-keeping is to output a ll non-matched p a rticle s to to s rticle a p non-matched ll a output to is record-keeping Final n upt e (000DT s sd here). used (F0R020.DAT is le i f output an end. and output to values l a fin Write loop. of Out STOP ,0,0.0,0.0 0 WRITE(IO.NF,320) END DO 1=1,C0UNT(3) DO 1=1,C0UNT(2) WRITE(I0,120)0,0,0,0,0.0,0.0,0.0,0.0,0.0,0.0 WRITE(I0,120)0,0,0,0,0.0,0.0,0.0,0.0,0.0,0.0 4 SA(3,SA(2,SA(1,I).LINK).LINK).YC SA(3,SA(2,SA(1,I).LINK).LINK).YC 4 .IK.CS(,A2S(, ).LINK).LINK).XC, .LINK).YC,SA(3,SA(2,SA(1,1 3 DO 1=1,C0UNT(1) 1 SA(3 ,1) . ,1) SA(3 YC,. ,1) SA(3 ACT 1 SA(1,I).YC,SA(2,SA(1,I).LINK).XC,SA(2,SA(1,I) 2 SA(2,I).YC,SA(2,I).ACT 1 I0_NF=20 s’) k c tra remaining ' ',14, for ’ ', , ) k c a r t (Combined ' 3 GOTO 777 S(,A2S(, ).LINK).LINK).ACT,SA(1,I).XC, SA(3,SA(2,SA(1,1 1 ’,F6.2, Wrong - ,/,' .2 6 ',F - Three ’ 2 ’ OR ',5 Nt on - F6. ,/, .2 6 ,F ’ - Found Not ' , / , ,F5. 
2 ' TOLR ’ - 1 END DO END DO END DO IF (SA (3,I).AVAIL.EQ.l) WRITE(IO.NF,320) I,3,SA(3,I).XC, I,3,SA(3,I).XC, WRITE(IO.NF,320) (3,I).AVAIL.EQ.l) (SA IF I,2,SA(2,I).XC, WRITE(IO.NF,320) (2,I).AVAIL.Eq.l) (SA IF F S(,)AALE.) THEN (SA(1,I).AVAIL.EQ.1) IF END IF ELSE R T (010 I,SA(l,I).ACT,SA(2,SA(1,I).LINK).ACT, WRITE(10,120) WRITE(IO.NF,320) I,3,SA(l,I).XC,SA(1,I).YC,SA(l,I).ACT I,3,SA(l,I).XC,SA(1,I).YC,SA(l,I).ACT WRITE(IO.NF,320) 157 blw rci ( ar a S(,)NX+ ad nrmn up increment and found. SA(1,I).NEXT+1 the is to at match no rt switches ta if (s and max)C n to 1), C to irectio d back below decrement C and SA(1,I).NEXT C C in itia l search is in the above direction (ie s ta r t at at t r ta s (ie direction used; above is the in search is search double-switching A l itia in parameters. C Switching C C

5 DJ=-1 250 OOQOOOQOOOOO OOOO D o not match grey=2 or above if grey=l p a rtic le is not not is end. le to go rtic a and p 1 grey=l at if tart S above or grey=2 grey=4. match on Do not loop, main Begin s le rtic a p of frame-to-frame MAXDIS maximum displacement is TO L V is maximal magnitude change in vector (as fractio n of of n fractio (as radians) (in vector in vector change in magnitude change maximal TOLV ial is rad maximal TOLR is valble. ailab av e tlrne ad i t (asd variables) (passed its lim and tolerances Set sn te aaees asd rm h mi program. main the from passed parameters the using Subroutine th at does the grunt work of p a rtic le matching, matching, le rtic a p of work grunt the does at th Subroutine SUBROUTINE DO.VECT(COUNT, SA, TOLV, TOLR, MAXDIS, DIRECT,ICNTVC) 1 O 1 I=ID1,ID2,DI DO 110 F DRC.Ql THEN (DIRECT.EQ.l) IF RECORD /SORTED.AREA.TYPE/ SA(4,1000) ISWITCH=0 vrg vco magnitude) vector average INTEGER COUNT( DJ,DK,DL,DIRECT,DI,ICNTCV,ISWITCH , ) 4 STRUCTURE /SORTED_AREA_TYPE/ END IF ELSE E NST D R U C T U R E F DRC.El GOTO (DIRECT.NE.l) 50 IF DI=-1 DI=1 F S( I).AVAIL.EQ.0) GOTO . ) 110 ,I (SA(1 IF ID2=1 ID1=C0UNT(1) ID2=C0UNT(1) ID1=1 REAL XC,YC INTEGER ICNTVC INTEGER NEXT,AVAIL,LINK,ACT INTEGER NUM.PIX 158 159

      I3=1
      I2=SA(1,I).NEXT+1
      IF (I2.GT.COUNT(2)) I2=COUNT(2)
      GOTO 60
   50 DJ=1
      I3=COUNT(2)
      I2=SA(1,I).NEXT
      IF (I2.LT.1) I2=1
C
C     Main grey=2 loop.
C
   60 DO 20 J=I2,I3,DJ
C
C     Check availability. Also check MAXDIS (quickest elimination).
C
      IF (SA(2,J).AVAIL.EQ.0) GOTO 20
      DY1=(SA(1,I).YC-SA(2,J).YC)
      IF (ABS(DY1).GT.MAXDIS) GOTO 10
      DX1=(SA(1,I).XC-SA(2,J).XC)
      IF (ABS(DX1).GT.MAXDIS) GOTO 20
      DV1=SQRT(DX1*DX1+DY1*DY1)
      IF (DV1.EQ.0) GOTO 20
C
C     DV1 rep. 1->2 vector
C
      IF (DJ.EQ.1) GOTO 80
C
C     Switching parameters for the grey=3 loop, following the
C     same procedure as for grey=2. It is likely that the
C     grey=3 particle is in the same direction as grey=2, and
C     this direction is checked first. ISWITCH is used to
C     reverse the grey=3 track locally.
C
  280 DK=-1
      J3=1
      J2=SA(2,J).NEXT+1
      IF (J2.GT.COUNT(3)) J2=COUNT(3)
      GOTO 70
   80 DK=1
      J3=COUNT(3)
      J2=SA(2,J).NEXT
      IF (J2.LT.1) J2=1
C
C     Main grey=3 loop.
C
   70 DO 30 K=J2,J3,DK
C
C     Check availability. Check MAXDIS for quick elimination.
C
      IF (SA(3,K).AVAIL.EQ.0) GOTO 30
      DY2=(SA(2,J).YC-SA(3,K).YC)
      IF (ABS(DY2).GT.MAXDIS) GOTO 31
      DX2=(SA(2,J).XC-SA(3,K).XC)
      IF (ABS(DX2).GT.MAXDIS) GOTO 30
      DV2=SQRT(DX2*DX2+DY2*DY2)
C
C     DV2 rep. 2->3 vector
C
      IF (DV2.EQ.0) GOTO 30
C
C     Check TolV...
C
      IF (ABS((DV2-DV1)/((DV2+DV1)/2)).GT.TOLV) GOTO 30
C
C     Check TolR (note 'heavy' calculations).
C
      IF (DY1.GT.0) THEN
        DR1=ACOS(DX1/DV1)
      ELSE
        DR1=2*3.14159-ACOS(DX1/DV1)
      ENDIF
      IF (DY2.GT.0) THEN
        DR2=ACOS(DX2/DV2)
      ELSE
        DR2=2*3.14159-ACOS(DX2/DV2)
      ENDIF
      IF (ABS(DR2-DR1).GT.TOLR) GOTO 30
C
C     At this point a three-point match exists, 1 -> 2 -> 3.
C     Set the appropriate linkages and availability flags, and
C     check whether a fourth-point match is required. 4 point
C     matching tends to significantly reduce matching, and is
C     not used.
C
      SA(3,K).AVAIL=0
      SA(2,J).LINK=K
      SA(2,J).AVAIL=0
      SA(1,I).LINK=J
      SA(1,I).AVAIL=0
      SA(1,I).ICNTVC=ICNTVC
      ISWITCH=0
      GOTO 11
   30 CONTINUE
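The TolV and TolR gates above are plain vector comparisons on the two displacement steps of a candidate track. A hedged Python sketch of the same test (names are mine; the listing derives the heading with ACOS folded by the sign of DY, which atan2 reproduces without the branch, up to wrap-around near the branch cut):

```python
import math

def vectors_agree(dx1, dy1, dx2, dy2, tolv, tolr):
    """Sketch of DO_VECT's acceptance test for a candidate
    1->2->3 track: the 2->3 step must match the 1->2 step both in
    magnitude (fractional change vs. the mean, TolV) and in heading
    (absolute angle change in radians, TolR)."""
    dv1 = math.hypot(dx1, dy1)
    dv2 = math.hypot(dx2, dy2)
    if dv1 == 0 or dv2 == 0:
        return False                      # degenerate displacement
    if abs((dv2 - dv1) / ((dv2 + dv1) / 2)) > tolv:
        return False                      # magnitude changed too much
    dr1 = math.atan2(dy1, dx1)
    dr2 = math.atan2(dy2, dx2)
    return abs(dr2 - dr1) <= tolr         # heading changed too much?
```

So a nearly straight, nearly constant-speed pair of steps passes, while a reversal or a large speed change is rejected.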

   31 IF (ISWITCH.EQ.1) THEN
      ISWITCH=0
      GOTO 20
      ELSE
      ISWITCH=1
      IF (DJ.EQ.1) THEN
        GOTO 280
      ELSE
        GOTO 80
      END IF
      END IF
   20 CONTINUE
   10 IF (DIRECT.EQ.DJ) GOTO 11
      IF (DJ.EQ.1) GOTO 250
      GOTO 50
   11 CONTINUE
  110 CONTINUE
      RETURN
      END
C
C     Subroutine to output some simple stats on matched particles.
C
      SUBROUTINE TALLYVECT(SA,IEND,ICNTVC,NUMB,TV,TR,MD)
      STRUCTURE /SORTED_AREA_TYPE/
        REAL XC,YC
        INTEGER NUM_PIX
        INTEGER NEXT,AVAIL,LINK,ACT
        INTEGER ICNTVC
      END STRUCTURE
      RECORD /SORTED_AREA_TYPE/ SA(4,1000)
      INTEGER IEND,ICNTVC,NUMB,MD,IO
      REAL TV,TR
      INOT_FOUND=0
      ITHREE_PT=0
      IFOUR_PT=0
      IWRONG=0
      DO I=1,IEND
      IF ((SA(1,I).LINK).EQ.0) THEN
        INOT_FOUND=INOT_FOUND+1
      ELSE
        IA=SA(1,I).ACT
        IB=SA(2,SA(1,I).LINK).ACT
        IC=SA(3,SA(2,SA(1,I).LINK).LINK).ACT
        IX=SA(1,I).LINK
        IY=SA(2,IX).LINK
        IF ((IA.EQ.IB).AND.(IB.EQ.IC)) THEN
          ITHREE_PT=ITHREE_PT+1
        ELSE
          IWRONG=IWRONG+1
        END IF
      END IF
      END DO
      XPNF=FLOAT(INOT_FOUND)*100.0/FLOAT(IEND)
      XPTP=FLOAT(ITHREE_PT)*100.0/FLOAT(IEND)
      XPWR=FLOAT(IWRONG)*100.0/FLOAT(IEND)
      IF (NUMB.EQ.1) THEN
        WRITE(6,571)ITER,MD,TV,TR,XPNF,XPTP,XPWR,IEND
        WRITE(99,571)ITER,MD,TV,TR,XPNF,XPTP,XPWR,IEND
      END IF
      IF (NUMB.EQ.2) THEN
        WRITE(6,572)ITER,MD,TV,TR,XPNF,XPTP,XPWR,IEND
        WRITE(99,572)ITER,MD,TV,TR,XPNF,XPTP,XPWR,IEND
      END IF
      IF (NUMB.EQ.3) THEN
        WRITE(99,573)ITER,MD,TV,TR,XPNF,XPTP,XPWR,IEND

        WRITE(6,573)ITER,MD,TV,TR,XPNF,XPTP,XPWR,IEND
      END IF
  571 FORMAT(' Iteration ',I3,' MAXDIS - ',I4,' TOLV - ',F5.2,
     1       ' TOLR - ',F5.2,/,' Not Found - ',F6.2,/,
     2       ' Three - ',F6.2,/,' Wrong - ',F6.2,
     3       ' (Forward track),',' all ',I4,' tracks')
  572 FORMAT(' Iteration ',I3,' MAXDIS - ',I4,' TOLV - ',F5.2,
     1       ' TOLR - ',F5.2,/,' Not Found - ',F6.2,/,
     2       ' Three - ',F6.2,/,' Wrong - ',F6.2,
     3       ' (Backward track),',' all ',I4,' tracks')
  573 FORMAT(' Iteration ',I3,' MAXDIS - ',I4,' TOLV - ',F5.2,
     1       ' TOLR - ',F5.2,/,' Not Found - ',F6.2,/,
     2       ' Three - ',F6.2,/,' Wrong - ',F6.2,
     3       ' (Combined track),',' all ',I4,' tracks')
      RETURN
      END

Appendix C

CATA.FOR


C
C     Program CATA.FOR
C
C     This program allows the user to call up two stereo images, show
C     them simultaneously on the VAXStation 3100, and analyze them for
C     3D area. The user must outline the area under analysis by
C     providing a series of points which, when connected, outline the
C     area. The points must be entered in their correct order. Once
C     an area has been designated by these corner points, it can be
C     filled to check its extent and to provide guides to the opposite
C     stereo image. After providing the stereo points, the computer
C     determines the depth of each point, the plane equation for all
C     points, the area of the 2D projection of the 3D area, and the
C     area of the 3D area.
C
C     A major portion of the program deals with the X-Windows calls
C     and event handling.
C
      IMPLICIT NONE
      INCLUDE 'SYS$SHARE:DECW$XLIBDEF'
      INCLUDE 'IMAGE.HEADER'
      STRUCTURE /IMAGE_TYPE/
        INTEGER A(512,512)
      END STRUCTURE
      RECORD /X$IMAGE/ IMAGE
      RECORD /X$EVENT/ EVENT
      RECORD /X$VISUAL/ SAVED_VISUAL,VISUAL
      RECORD /IMAGE_HEADER/ HDR
      RECORD /X$COLOR/ COLORS(128)
      RECORD /IMAGE_TYPE/ IMAGEC(2),OVERLAY(2)
      COMMON /BIG_AREA/ IMAGEC,OVERLAY
      CHARACTER*200 FILENAME,FILENAME2,FILEOUT
      CHARACTER*20 ANS
      REAL CONTRAST
      INTEGER DISP,SCREEN,ROOT,WINDOW
      INTEGER COLORMAP,I,STATUS,FLEN,NAM_LEN
      INTEGER PENDING,X,Y,L,P,B,FC,IO,X_MOUSE,Y_MOUSE
      INTEGER LIB$GET_FOREIGN,STSVAL,RESTORE_DRAWABLE
      EXTERNAL RESTORE_DRAWABLE_EVENT_NOTIFY
      PARAMETER ERROR = .FALSE.
C
C     Get L & R stereo filenames (usually SIA3:[IMAGES]xxxFFyyyy.I,
C     where xxx represents the DIPIX area name, yyyy the 4 character
C     DIPIX filename, and SIA3:[IMAGES] the directory on the image
C     disk where the files are stored).
C
   10 IF (.NOT.LIB$GET_FOREIGN(FILENAME,'Image file #1: 

     1    ',NAM_LEN)) CALL EXIT
      IF (NAM_LEN.EQ.0) GOTO 10
   20 IF (.NOT.LIB$GET_FOREIGN(FILENAME2,'Image file #2: ',
     1    NAM_LEN)) CALL EXIT
      IF (NAM_LEN.EQ.0) GOTO 20
C
C     Load X-style image, after appropriate stretching and other
C     manipulations. Load Background image (commands, colors, and
C     so forth).
C
      WRITE(6,50)
   50 FORMAT(' Please enter output filename')
      READ(5,60)FILEOUT
   60 FORMAT(A200)
      IF ((FILEOUT.EQ.'NONE').OR.(FILEOUT.EQ.'none')) THEN
        IO=6
      ELSE
        OPEN(UNIT=98,STATUS='NEW',FILE=FILEOUT)
        IO=98
      END IF
      CALL X_IMAGE(FILENAME,FILENAME2,DISP,EVENT,WINDOW,COLORMAP,
     1    SCREEN,IMAGE,COLORS,FC)
C
C     Call the Event Handler routine.
C
      CALL GET_EVENT(DISP,EVENT,WINDOW,COLORMAP,SCREEN,IMAGE,
     1    COLORS,FC,IO)
C
C     Done...
C
      STOP
      END
C
C     Subroutine to form the display image, using the pre-defined
C     background (commands) and the two image files specified.
C
      SUBROUTINE X_IMAGE(FILENAME,FILENAME2,DISP,EVENT,WINDOW,
     1    COLORMAP,SCREEN,IMAGE,COLORS,FC)
      IMPLICIT NONE
      INCLUDE 'SYS$SHARE:DECW$XLIBDEF'
      INCLUDE 'IMAGE.HEADER'
      STRUCTURE /IMAGE_TYPE/
        INTEGER A(512,512)
      END STRUCTURE
      RECORD /X$IMAGE/ IMAGE
      RECORD /X$EVENT/ EVENT
      RECORD /X$VISUAL/ SAVED_VISUAL,VISUAL
      RECORD /IMAGE_HEADER/ HDR,HDR3
      RECORD /X$COLOR/ COLORS(128)
      RECORD /IMAGE_TYPE/ IMAGEC(2),OVERLAY(2)
      COMMON /BIG_AREA/ IMAGEC,OVERLAY
      CHARACTER*200 FILENAME,FILENAME2
      REAL CONTRAST
      INTEGER DISP,SCREEN,ROOT,WINDOW,X$CREATE_IMAGE
      INTEGER COLORMAP,I,STATUS,FLEN,NAM_LEN
      INTEGER PENDING,X,Y,X_MOUSE,Y_MOUSE
      INTEGER L,B,P,GC,TEMP
      INTEGER FC,TRAN(0:255)
      INTEGER LIB$GET_FOREIGN,STSVAL,RESTORE_DRAWABLE
      EXTERNAL RESTORE_DRAWABLE_EVENT_NOTIFY
      PARAMETER ERROR = .FALSE.
C
C     Open image files as unit 3 (left) and 4 (right) and
C     read header information (discarded).
C
      OPEN(UNIT=3,STATUS='OLD',READONLY,FORM='UNFORMATTED',
     1    RECORDTYPE='FIXED',RECL=128,BLOCKSIZE=8192,
     2    FILE=FILENAME,DEFAULTFILE='.I')
      OPEN(UNIT=4,STATUS='OLD',READONLY,FORM='UNFORMATTED',
     1    RECORDTYPE='FIXED',RECL=128,BLOCKSIZE=8192,
     2    FILE=FILENAME2,DEFAULTFILE='.I')
      READ(3)HDR
      READ(4)HDR3
      IF (HDR.BYTES_PER_PIXEL.NE.1)
     1    STOP 'Cannot deal with pixel size'
      IF (HDR3.BYTES_PER_PIXEL.NE.1)
     1    STOP 'Cannot deal with pixel size'
COMMON /BIG.AREA/ IMAGEC,OVERLAY RECORD /IMAGE.HEADER/RECORD HDR.HDR3 /X$VISUAL/ RECORD SAVED.VISUAL, /X$EVENT/ VISUALRECORD EVENT /X$IMAGE/ IMAGE SO 'ant el ih ie size' pixel with deal e' siz STOP 'Cannot pixel with 1 deal STOP 'Cannot 1 INTEGER FC,TRAN(0:255) INTEGER L,B,P,GC,TEMP INTEGER LIB$GET_FOREIGN, RESTORE.DRAWABLE STSVAL, Y, X, INTEGER PENDING INTEGER COLORMAP, STATUS,INTEGER SCREEN, FLEN, NAM.LEN , DISP, I ROOT, WINDOW, X$CREATE_IMAGE STRUCTURE /IMAGE.TYPE/ RECORDTYPE='FIXED',RECL=128,BL0CKSIZE=8192,1 INTEGER X.MOUSE, Y.MOUSE INCLUDE 'IMAGE.HEADER' RECORDTYPE='FIXED',RECL=128,BL0CKSIZE=8192,1 INCLUDE SYS$SHARE: ' IMPLICIT NONE DECW$XLIBDEF’ E NST D R U C T U R E INTEGER A(512,512) 167 0 CONTINUE 500 OOO OOO ooooo ooo ooo oo s Xwnos o rae mg a seiid y , , n B and P, L, by specified as image create to X-windowsUse rae n st mg wno na te otm f h screen the of bottom the window near image set end Create image. displayed in place image, COLORMAP. actual Load calculacted the neutral to presumed are according images All slated tran values, stretching. r fo colormap Build and are stretched fo r display. TRAN contains the stretching stretching the TRAN contains display. r fo stretched are and pn oncin o sre ad e dfut colormap default get and server X to connection Open e iae ie Iae s 0x04 1 ye deep. byte 1 600x1024, is Image size. image Set RETURN END WINDOW X$CREATE_SIMPLE_WIND0W(DISP,Y=250 » ) ,0 ROOT, ,0 L,5 , P X,Y, C A LX$ L M ACALL P _ X$ST0RE_NAME W I N D O(D W I S P (DISP,WINDOW,'ARIES , W I N D O W image') ) X=0 CALL LOAD.IMAGE(3 ,P,L,'/.VAL(IMAGE.x$a_imag_data) ,TRAN) ,4 ,9 ,8 CALL X$DEFAULT_VISUAL(DISP, SCREEN, VISUAL) CALL BUILD.C0L0RMAP(DISP, COLORMAP, TRAN) CALL LIB$INIT_TIMER() R00T=X$DEFAULT_R00T_WIND0W(DISP) GC=X$DEFAULT_GC(DISP,C0L0RMAP=X$DEFAULT_C0L0RMAP(DISP,SCREEN) SCREEN) DISP=X$OPEN_DISPLAY() L=600 F IAExaia.aaE.) 
THEN (IMAGE.x$a_imag.data.EQ.O) IF STATUS=X$CREATE_IMAGE (DISP,VISUAL,8 ,X$C_Z_PIXMAP, SCREEN=X$DEFAULT_SCREEN(DISP) B=1 P=1024 0 ,,, ,B*P,IMAGE ) ,P,L,8 0, 1 END IF CALL LIB$GET_VM(B*L*P+511,IMAGE.x$a_imag_data) 168 Cl o o o oto o te rga, n cn e bi hr t follow. to hard it b a be can and program, the of portion Event handling subroutine. This is the heart of the display display the of heart the is This subroutine. handling Event SUBROUTINE GET.EVENT(DISP, EVENT, WINDOW, COLORMAP, SCREEN IMAGE,COLORS,FC,10) 1 P A R A M E T ECU R R S O RP A _ F R O A N M T E _PARAMETER N T E AFO R M N E = T _ N ERROR A M .FALSE. E = = C H A R A C T E R * 2FI 0 0 L E N A M E E X T E R N ARE L S T O R E . D R A W A B L E . E V E N T . N O T I F Y R E ACO L N T R ACOMMON S T , P L A N ERECORD . M /BIG.AREA/ U IMAGEC, L T /CORNERS/ RECORD OVERLAY XY(2) /X$C0L0R/ COLORS(1 RECORD ),FG,BG 8 2 /IMAGE.TYPE/ IMAGEC(2),0VERLAY(2) RECORD /IMAGE.HEADER/ RECORD /X$VISUAL/ HDR SAVED.VISUAL,VISUAL INTEGER FONT, GC,INTEGER XM, LCT,LCM,LC,FIX,FIY,STX,STY,AREA YM, CURSORFONT, GC2, IPLANE.TYPE RECORD /TEXT.TYPE/ ALL_TEXT(50) RECORD /X$EVENT/ RECORD EVENT /X$IMAGE/ IMAGE 1 '-Bigelow '-Bigelow 1 INTEGER ST(2),SF(2),SIDE,XMA(100),YMA(100),VPT,VPTL,CURSOR I N T E G EXO R NI E N ,Y T O E G N EXM E R ,X T CINTEGER W , O Y ,Y X1 M WID, ,Y1 Y2, ,X2, T LOCAL.EVENT, HEI, C W ,INTEGER O X M X.MOUSE, R , LCA LINEAR.VAL, FC, , YINTEGER J M , Y.MOUSE, R I , ,LLY(2).MODE,TEMP LLX( ) X 2 MINTEGER LINES, PIXELS, O JLV, ZOOMVAL(2) ILV, N LIB$GET_FOREIGN, E , XI N M T E G TPE E W R STSVAL, N D O IINTEGER N ,Y G M RESTORE.DRAWABLE Y, X, , O A COLORMAP, N N S E , , 10, M Y F M A C T , STATUS, W X M O T FLEN, , R T E U NAM.LEN S E T , Y M T R U E , Z V , W I D T H INTEGER DISP,SCREEN,ROOT,WINDOW STRUCTURE /TEXT.TYPE/ STRUCTURE /CORNERS/ STRUCTURE /IMAGE.TYPE/ INCLUDE 'IMAGE.HEADER' INCLUDE SYS$SHARE:DECW$XLIBDEF' ' IMPLICIT NONE E NST D R U C T U R E E NST D R U C T U R E E NST D R U C T U R E C H A R A C T E RTE * 2 0 X T . 
C O M INTEGER X,Y INTEGER X(100),Y(100) INTEGER A(512,512) k Holmes-Menu-Medium-R-Normal—12-120-*' 169 170

1 'DECW$CURSOR'

      CURSORFONT=X$LOAD_FONT(DISP,CURSOR_FONT_NAME)
      GC=X$DEFAULT_GC(DISP,SCREEN)
      CALL X$SET_FONT(DISP,GC,CURSORFONT)
      CURSOR=X$CREATE_GLYPH_CURSOR(DISP,CURSORFONT,CURSORFONT,
     1    DECW$C_WAIT_CURSOR,DECW$C_WAIT_CURSOR+1,1,0)
C
C     Open Unit 3 to get the button texts.  This is done separately
C     from loading the image as text is an overlay and occasionally
C     needs to be updated.  Text is stored in the ALL_TEXT record
C     array, which contains the character strings and starting
C     locations.
C
C     See the file 'TEXT_FOR_BUTTONS.DAT' to see the format for
C     the text data.
C
      OPEN(UNIT=3,STATUS='OLD',READONLY,
     1    FILE='TEXT_FOR_BUTTONS.DAT')
      I=0
101   I=I+1
      READ(3,*)ALL_TEXT(I).X,ALL_TEXT(I).Y,ALL_TEXT(I).TEXT_COM
      IF (ALL_TEXT(I).X.GT.0) GOTO 101
      CALL X$SELECT_INPUT(DISP,WINDOW,'001FFFFFF'X)
C
C     Pending is the loop exit condition.  If set to '-1', exit.
C
333   PENDING=1
C
C     Set initial parameters for each side of the image (left and
C     right).
C
C     ZOOMVAL  = current magnification factor, never less than 1
C     LLX, LLY = 'true' upper left hand coordinates of displayed
C                image (=1,1 for ZOOMVAL of 1, but could be almost
C                anything for larger ZOOMVALs)
C     ST       = starting location in the image for the left and
C                right subimages (=0 for left, =512 for right)
C     SF       = shift factor for the X-windows image (which
C                doesn't segment the image into left and right)
C
      ZOOMVAL(1)=1
      LLX(1)=1
      LLY(1)=1
      ST(1)=0
      SF(1)=-1
      ZOOMVAL(2)=1
      LLX(2)=1
      LLY(2)=1
      ST(2)=512
      SF(2)=0
C
C     Colors for the original display system.  LCA and LCM are
C     holdbacks of the original intent of providing automatic
C     identification (A) or manual identification (M).
C
      LCA=8
      LCM=9
C
C     Load font for displayed text.
C
      FONT=X$LOAD_FONT(DISP,FONT_NAME)
      CALL X$SET_FONT(DISP,GC,FONT)
C
C     Start of main loop.  MODE is again a holdback to the
C     automated attempts.  =0 is MANUAL, =1 is AUTOMATIC.  SIDE is
C     used to differentiate left and right; =1 is left, =2 is
C     right.
C
      SIDE=1
      DO WHILE (PENDING.GT.0)
        MODE=0
C
C       Get event and process.
C
        CALL X$NEXT_EVENT(DISP,EVENT)
C
C       See if EVENT is a movement or exposure event and act
C       accordingly.
C
        IF (EVENT.evnt_type.EQ.x$c_map_notify) THEN
          CALL X$SET_WINDOW_COLORMAP(DISP,WINDOW,COLORMAP)
        ELSE IF (EVENT.evnt_type.EQ.x$c_expose) THEN
          IF (EVENT.evnt_expose.x$l_exev_count.NE.-1) THEN
            CALL X$PUT_IMAGE(DISP,WINDOW,
     1          X$DEFAULT_GC(DISP,SCREEN),
     2          IMAGE,EVENT.evnt_expose.x$l_exev_x,
     3          EVENT.evnt_expose.x$l_exev_y,
     4          EVENT.evnt_expose.x$l_exev_x,
     5          EVENT.evnt_expose.x$l_exev_y,
     6          EVENT.evnt_expose.x$l_exev_width,
     7          EVENT.evnt_expose.x$l_exev_height)
            CALL DO_TEXT(DISP,WINDOW,GC,ALL_TEXT)
          END IF
C
C       If EVENT is a button press, however, use button number and
C       location to determine action.
C
        ELSE IF (EVENT.evnt_type.EQ.x$c_button_press) THEN
C
C         Initialize SIDE and X_MOUSE and Y_MOUSE.
C
          x_mouse=EVENT.evnt_button.x$l_btev_x
          y_mouse=EVENT.evnt_button.x$l_btev_y
C
C         (MAIN BUTTON) See what event has been called and act
C         accordingly if Button 1 has been pushed.
C
          IF (EVENT.evnt_button.x$l_btev_button.EQ.1) THEN
            SIDE=1
            IF (EVENT.evnt_button.x$l_btev_x.GT.512) SIDE=2
C
C           See if within command lines (first 88 lines) and within
C           command button extents (first 616 pixels).
C
            IF (Y_MOUSE.LE.88) THEN
              IF (X_MOUSE.LE.616) THEN
                IF (Y_MOUSE.LE.44) THEN
                  LOCAL_EVENT=INT(X_MOUSE/88)+1
                ELSE
                  LOCAL_EVENT=INT(X_MOUSE/88)+7
                END IF
                IF (LOCAL_EVENT.EQ.1) THEN
C
C                 Switch modes from fill to outline.
C
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  IF (MODE.EQ.1) THEN
                    MODE=0
                    CALL MANAUT(%VAL(IMAGE.X$A_IMAG_DATA))
                    CALL X$PUT_IMAGE(DISP,WINDOW,
     1                  X$DEFAULT_GC(DISP,SCREEN),IMAGE,
     2                  560,0,560,0,100,88)
                    CALL X$DRAW_STRING(DISP,WINDOW,X$DEFAULT_GC
     1                  (DISP,SCREEN),560,49,'OUTLINE')
                  ELSE
                    MODE=1
                    CALL MANAUT(%VAL(IMAGE.X$A_IMAG_DATA))
                    CALL X$PUT_IMAGE(DISP,WINDOW,
     1                  X$DEFAULT_GC(DISP,SCREEN),IMAGE,
     2                  560,0,560,0,100,88)
                    CALL X$DRAW_STRING(DISP,WINDOW,X$DEFAULT_GC
     1                  (DISP,SCREEN),560,49,'FILL')
                    CALL X$DRAW_STRING(DISP,WINDOW,X$DEFAULT_GC
     1                  (DISP,SCREEN),645,27,'G->')
                    CALL X$DRAW_STRING(DISP,WINDOW,X$DEFAULT_GC
     1                  (DISP,SCREEN),645,71,'F->')
                  END IF
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)

C
C               This clears the chosen side of all accumulated
C               overlay information, and clears the accumulated
C               corner points as well.
C
                ELSE IF (LOCAL_EVENT.EQ.2) THEN
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  SIDE=1
                  IF (Y_MOUSE.GT.22) SIDE=2
                  DO I=1,100
                    XY(SIDE).X(I)=0
                    XY(SIDE).Y(I)=0
                  END DO
                  VPT=0
                  VPTL=0
                  DO I=1,512
                    DO J=1,512
                      OVERLAY(SIDE).A(I,J)=0
                    END DO
                  END DO
                  CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),

     2                LCA,LCM,MODE,ZOOMVAL(SIDE),
     3                LLX(SIDE),LLY(SIDE),SIDE)
                  CALL X$PUT_IMAGE(DISP,WINDOW,
     1                X$DEFAULT_GC(DISP,SCREEN),
     2                IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                ELSE IF (LOCAL_EVENT.EQ.3) THEN
C
C                 This clears a subarea of overlay information, but
C                 does not affect corner point information.  To get
C                 the subarea, the next two button pushes (on any
C                 button) are used to define the opposite corners
C                 of a box.  The overlay is cleared within that
C                 box.  Button pushes are checked to ensure they
C                 are in the correct side.
C
                  SIDE=1
                  IF (Y_MOUSE.GT.22) SIDE=2
C
C                 Get first point.
C
904               CALL X$NEXT_EVENT(DISP,EVENT)
C
C                 See if it's a motion or exposure event.
C
                  IF (EVENT.evnt_type.EQ.x$c_map_notify) THEN
                    CALL X$SET_WINDOW_COLORMAP(DISP,WINDOW,
     1                  COLORMAP)
                  ELSE IF (EVENT.evnt_type.EQ.x$c_expose) THEN
                    IF (EVENT.evnt_expose.x$l_exev_count.NE.-1)
     1                  THEN
                      CALL X$PUT_IMAGE(DISP,WINDOW,
     1                    X$DEFAULT_GC(DISP,SCREEN),
     2                    IMAGE,EVENT.evnt_expose.x$l_exev_x,
     3                    EVENT.evnt_expose.x$l_exev_y,
     4                    EVENT.evnt_expose.x$l_exev_x,
     5                    EVENT.evnt_expose.x$l_exev_y,
     6                    EVENT.evnt_expose.x$l_exev_width,
     7                    EVENT.evnt_expose.x$l_exev_height)
                      CALL X$SET_FOREGROUND(DISP,GC,0)
                      CALL X$SET_BACKGROUND(DISP,GC,1)
                      CALL DO_TEXT(DISP,WINDOW,GC,ALL_TEXT)
                      CALL X$DRAW_STRING(DISP,WINDOW,GC,560,49,
     1                    'AUTOMATIC')
C
C                     Check next event, looking for a button push.
C
                      GOTO 901

                    END IF
                  ELSE IF (EVENT.evnt_type.EQ.x$c_button_press)
     1                THEN
                    X_MOUSE=EVENT.evnt_button.x$l_btev_x-
     1                  ST(SIDE)
                    Y_MOUSE=EVENT.evnt_button.x$l_btev_y-88
C
C                   If in command line, loop.  If in wrong side,
C                   loop.
C
                    IF (Y_MOUSE.LT.0) GOTO 904
                    IF ((X_MOUSE.LT.0).OR.(X_MOUSE.GT.512)) GOTO
     1                  904
C
C                   Store first coordinates in real space.
C
                    XMC=LLX(SIDE)+(x_mouse-1)/
     1                  (2**(ZOOMVAL(SIDE)-1))
                    YMC=LLY(SIDE)+(y_mouse-1)/
     1                  (2**(ZOOMVAL(SIDE)-1))
C
C                   Get next event (looking for 2nd coordinate)
C                   and process.
C
901                 CALL X$NEXT_EVENT(DISP,EVENT)
C
C                   See if it's a motion or exposure event.
C
                    IF (EVENT.evnt_type.EQ.x$c_map_notify) THEN
                      CALL X$SET_WINDOW_COLORMAP(DISP,WINDOW,
     1                    COLORMAP)
                    ELSE IF (EVENT.evnt_type.EQ.x$c_expose) THEN
                      IF (EVENT.evnt_expose.x$l_exev_count.NE.
     1                    -1) THEN
                        CALL X$PUT_IMAGE(DISP,WINDOW,
     1                      X$DEFAULT_GC(DISP,SCREEN),IMAGE,
     2                      EVENT.evnt_expose.x$l_exev_x,
     3                      EVENT.evnt_expose.x$l_exev_y,
     4                      EVENT.evnt_expose.x$l_exev_x,
     5                      EVENT.evnt_expose.x$l_exev_y,
     6                      EVENT.evnt_expose.x$l_exev_width,
     7                      EVENT.evnt_expose.x$l_exev_height)
                        CALL X$SET_FOREGROUND(DISP,GC,0)
                        CALL X$SET_BACKGROUND(DISP,GC,1)
                        CALL DO_TEXT(DISP,WINDOW,GC,ALL_TEXT)
                        CALL X$DRAW_STRING(DISP,WINDOW,GC,560,
     1                      49,'AUTOMATIC')
                        GOTO 901
                      END IF
                    ELSE IF (EVENT.evnt_type.EQ.
     1                  x$c_button_press) THEN
                      X_MOUSE=EVENT.evnt_button.x$l_btev_x-
     1                    ST(SIDE)
                      Y_MOUSE=EVENT.evnt_button.x$l_btev_y-88
C
C                     If in command line, loop.  If in wrong side,
C                     loop.
C
                      IF (Y_MOUSE.LT.0) GOTO 901
                      IF ((X_MOUSE.LT.0).OR.(X_MOUSE.GT.512))
     1                    GOTO 901
C
C                     Store second coordinates in real space.
C
                      XMR=LLX(SIDE)+(x_mouse-1)/
     1                    (2**(ZOOMVAL(SIDE)-1))
                      YMR=LLY(SIDE)+(y_mouse-1)/
     1                    (2**(ZOOMVAL(SIDE)-1))
C
C                     Put coordinates in proper order for looping.
C
                      IF (XMC.GT.XMR) THEN
                        TEMP=XMC
                        XMC=XMR
                        XMR=TEMP
                      END IF
                      IF (YMC.GT.YMR) THEN
                        TEMP=YMC
                        YMC=YMR
                        YMR=TEMP
                      END IF
C
C                     Loop through coordinates and set overlay
C                     to 0.
C
                      DO I=XMC,XMR
                        DO J=YMC,YMR
                          OVERLAY(SIDE).A(I,J)=0
                        END DO
                      END DO
                      CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                      CALL X$FLUSH(DISP)
                      CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                    IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,
     2                    SF(SIDE),LCA,LCM,MODE,ZOOMVAL(SIDE),
     3                    LLX(SIDE),LLY(SIDE),SIDE)
                      CALL X$PUT_IMAGE(DISP,WINDOW,
     1                    X$DEFAULT_GC(DISP,SCREEN),
     2                    IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
                      CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                    ELSE
                      GOTO 901
                    END IF
                  ELSE
                    GOTO 904
                  END IF
                ELSE IF (LOCAL_EVENT.EQ.4) THEN
C
C                 Fill the area bounded by the points already
C                 given, for the side chosen.
C
                  SIDE=1
                  IF (Y_MOUSE.GT.22) SIDE=2
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  CALL FILLAREA(XY,VPT,VPTL,LINES,PIXELS,
     1                IMAGEC(SIDE).A,SIDE,MODE)
C
C                 Modify left and right images, since the
C                 guidelines extend to both halves.
C
                  SIDE=1
                  CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),
     2                LCA,LCM,MODE,ZOOMVAL(SIDE),
     3                LLX(SIDE),LLY(SIDE),SIDE)
                  SIDE=2
                  CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),
     2                LCA,LCM,MODE,ZOOMVAL(SIDE),
     3                LLX(SIDE),LLY(SIDE),SIDE)
                  CALL X$PUT_IMAGE(DISP,WINDOW,
     1                X$DEFAULT_GC(DISP,SCREEN),
     2                IMAGE,ST(1),88,ST(1),88,1024,512)
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                ELSE IF ((LOCAL_EVENT.EQ.5).OR.
     1              (LOCAL_EVENT.EQ.6).OR.
     2              (LOCAL_EVENT.EQ.11).OR.
     3              (LOCAL_EVENT.EQ.12)) THEN
C
C                 Determine depth information, area, and so forth.
C                 Will only work if VPT=VPTL.
C
                  IF (VPT.NE.VPTL) THEN
                    TYPE*,' Number of corner points do not match'
                    GOTO 987
                  END IF
                  IF (Y_MOUSE.LT.22) THEN
                    IPLANE_TYPE=1
                    PLANE_MULT=1.0
                  ELSE IF (Y_MOUSE.LT.44) THEN
                    IPLANE_TYPE=2
                    PLANE_MULT=2.0
                  ELSE IF (X_MOUSE.LT.465) THEN
                    IPLANE_TYPE=3
                    TYPE*,' Please enter side multiple '
                    READ(5,*)PLANE_MULT
                  END IF
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  CALL RUNXY(XY,VPT,VPTL,IO,IPLANE_TYPE,
     1                PLANE_MULT)
C
C                 Done with points, clear point memory only (not
C                 overlay).
C
                  DO I=1,100
                    DO J=1,2
                      XY(J).X(I)=0
                      XY(J).Y(I)=0
                    END DO
                  END DO
                  VPT=0
                  VPTL=0
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                ELSE IF (LOCAL_EVENT.EQ.7) THEN
C
C                 Zoom in on chosen side, using the center to zoom
C                 in on.
C
                  SIDE=1
                  IF (Y_MOUSE.GT.66) SIDE=2
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  XM=256
                  YM=256
                  ZOOMVAL(SIDE)=ZOOMVAL(SIDE)+1
                  CALL ZOOM(%VAL(IMAGE.X$A_IMAG_DATA),
     1                IMAGEC(SIDE).A,XM,YM,SF(SIDE),
     2                ZOOMVAL(SIDE),LLX(SIDE),LLY(SIDE),
     3                LCA,LCM,MODE,1,SIDE)
                  CALL X$PUT_IMAGE(DISP,WINDOW,
     1                X$DEFAULT_GC(DISP,SCREEN),
     2                IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                ELSE IF (LOCAL_EVENT.EQ.8) THEN
C
C                 Zoom out on chosen side, using the center to
C                 zoom out on.
C
                  SIDE=1
                  IF (Y_MOUSE.GT.66) SIDE=2
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  IF (ZOOMVAL(SIDE).LT.2) THEN
                    ZOOMVAL(SIDE)=1
                    LLX(SIDE)=1
                    LLY(SIDE)=1
                    CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                  IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),
     2                  LCA,LCM,MODE,ZOOMVAL(SIDE),LLX(SIDE),
     3                  LLY(SIDE),SIDE)
                  ELSE
                    XM=256
                    YM=256
                    ZOOMVAL(SIDE)=ZOOMVAL(SIDE)-1
                    CALL ZOOM(%VAL(IMAGE.X$A_IMAG_DATA),
     1                  IMAGEC(SIDE).A,XM,YM,SF(SIDE),
     2                  ZOOMVAL(SIDE),LLX(SIDE),LLY(SIDE),
     3                  LCA,LCM,MODE,-1,SIDE)
                  END IF
                  CALL X$PUT_IMAGE(DISP,WINDOW,
     1                X$DEFAULT_GC(DISP,SCREEN),
     2                IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                ELSE IF (LOCAL_EVENT.EQ.9) THEN
C
C                 Set image to magnification 1 (ie original state)
C
                  SIDE=1
                  IF (Y_MOUSE.GT.66) SIDE=2
                  CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                  CALL X$FLUSH(DISP)
                  ZOOMVAL(SIDE)=1
                  LLX(SIDE)=1
                  LLY(SIDE)=1
                  CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),
     2                LCA,LCM,MODE,ZOOMVAL(SIDE),LLX(SIDE),
     3                LLY(SIDE),SIDE)
                  CALL X$PUT_IMAGE(DISP,WINDOW,
     1                X$DEFAULT_GC(DISP,SCREEN),
     2                IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
                  CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                ELSE IF (LOCAL_EVENT.EQ.10) THEN
C
C                 Exit request, use PENDING to leave event
C                 handler.
C
                  PENDING=-1
                  X_MOUSE=-1
                  Y_MOUSE=-1
                END IF
              ELSE IF (X_MOUSE.GT.760) THEN
C
C               Not a command request, but potentially a color
C               change request.  If AUTOMATIC mode, color changes
C               LCA, otherwise changes LCM.
C
                IF (Y_MOUSE.LE.44) THEN
                  LCT=8+INT((X_MOUSE-760)/44)
                  LC=LCM
                  IF (MODE.EQ.1) LC=LCA
                  IF (LCT.NE.LC) THEN
                    CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
                    CALL X$FLUSH(DISP)
                    IF (MODE.EQ.0) THEN
                      LCM=LCT
                      CALL MODLC(%VAL(IMAGE.X$A_IMAG_DATA),
     1                    0,LCM)
                      CALL X$PUT_IMAGE(DISP,WINDOW,X$DEFAULT_GC
     1                    (DISP,SCREEN),IMAGE,670,0,
     2                    670,0,60,90)
                      CALL X$DRAW_STRING(DISP,WINDOW,
     1                    X$DEFAULT_GC(DISP,SCREEN),
     2                    645,27,'G->')
                    ELSE
                      LCA=LCT
                      CALL MODLC(%VAL(IMAGE.X$A_IMAG_DATA),
     1                    88,LCA)
                      CALL X$PUT_IMAGE(DISP,WINDOW,X$DEFAULT_GC
     1                    (DISP,SCREEN),IMAGE,670,0,
     2                    670,0,60,90)
                      CALL X$DRAW_STRING(DISP,WINDOW,
     1                    X$DEFAULT_GC(DISP,SCREEN),
     2                    645,71,'F->')
                    END IF
                    DO SIDE=1,2
                      CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1                    IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,
     2                    SF(SIDE),LCA,LCM,MODE,ZOOMVAL(SIDE),
     3                    LLX(SIDE),LLY(SIDE),SIDE)
                      CALL X$PUT_IMAGE(DISP,WINDOW,
     1                    X$DEFAULT_GC(DISP,SCREEN),IMAGE,
     2                    ST(SIDE),88,ST(SIDE),88,512,512)
                    END DO
                    CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
                  END IF
                END IF
              END IF
            ELSE
C
C             At this point, button press has occurred within the
C             currently existing image.  The designated point is
C             added to the array of corner points.
C
              x_mouse=EVENT.evnt_button.x$l_btev_x-ST(SIDE)
              y_mouse=EVENT.evnt_button.x$l_btev_y-88
              IF (Y_MOUSE.GT.0) THEN
C
C               Translate to real coordinates.
C
                XM=LLX(SIDE)+(x_mouse-1)/(2**(ZOOMVAL(SIDE)-1))
                YM=LLY(SIDE)+(y_mouse-1)/(2**(ZOOMVAL(SIDE)-1))
                IF (SIDE.EQ.1) THEN
                  VPT=VPT+1
                  XY(SIDE).X(VPT)=XM
                  XY(SIDE).Y(VPT)=YM
                ELSE
                  VPTL=VPTL+1
                  XY(SIDE).X(VPTL)=XM
                  XY(SIDE).Y(VPTL)=YM
                END IF
C
C               Put point on overlay, place on screen (due to
C               zoom, may have to set multiple points on overlay).
C
                OVERLAY(SIDE).A(XM,YM)=2
                WIDTH=2**(ZOOMVAL(SIDE)-1)
                X_MOUSE=(XM-LLX(SIDE))*(2**(ZOOMVAL(SIDE)-1))
                Y_MOUSE=(YM-LLY(SIDE))*(2**(ZOOMVAL(SIDE)-1))+88
                DO I=1,WIDTH+1
                  DO J=1,WIDTH+1
                    CALL MOD_IMA(%VAL(IMAGE.X$A_IMAG_DATA),
     1                  Y_MOUSE,J,SF(SIDE),X_MOUSE,I,LCA)
                  END DO
                END DO
                CALL X$PUT_IMAGE(DISP,WINDOW,X$DEFAULT_GC(DISP,
     1              SCREEN),IMAGE,X_MOUSE+ST(SIDE),Y_MOUSE,
     2              X_MOUSE+ST(SIDE),Y_MOUSE,WIDTH+1,WIDTH+1)
              END IF
            END IF
          ELSE IF (EVENT.evnt_button.x$l_btev_button.EQ.2) THEN
C
C           Button 2 is used to call for a zoom in at the mouse
C           point.
C
            SIDE=1
            IF ((EVENT.evnt_button.x$l_btev_x).GT.512) SIDE=2
            x_mouse=EVENT.evnt_button.x$l_btev_x-ST(SIDE)
            y_mouse=EVENT.evnt_button.x$l_btev_y-88
            IF (Y_MOUSE.GT.0) THEN
              CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
              CALL X$FLUSH(DISP)
              ZOOMVAL(SIDE)=ZOOMVAL(SIDE)+1
              CALL ZOOM(%VAL(IMAGE.X$A_IMAG_DATA),IMAGEC(SIDE).A,
     1            X_MOUSE,Y_MOUSE,SF(SIDE),ZOOMVAL(SIDE),
     2            LLX(SIDE),LLY(SIDE),LCA,LCM,MODE,1,SIDE)
              CALL X$PUT_IMAGE(DISP,WINDOW,
     1            X$DEFAULT_GC(DISP,SCREEN),
     2            IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
              CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
            END IF
          ELSE IF (EVENT.evnt_button.x$l_btev_button.EQ.3) THEN
C
C           Button 3 is used for a zoom out at the mouse point.
C
            SIDE=1
            IF ((EVENT.evnt_button.x$l_btev_x).GT.512) SIDE=2
            x_mouse=EVENT.evnt_button.x$l_btev_x-ST(SIDE)
            y_mouse=EVENT.evnt_button.x$l_btev_y-88
            IF (Y_MOUSE.GT.0) THEN
              CALL X$DEFINE_CURSOR(DISP,WINDOW,CURSOR)
              CALL X$FLUSH(DISP)
              ZOOMVAL(SIDE)=ZOOMVAL(SIDE)-1
              IF (ZOOMVAL(SIDE).LT.2) THEN
                ZOOMVAL(SIDE)=1
                LLX(SIDE)=1
                LLY(SIDE)=1
                CALL MOD_IMAGE(%VAL(IMAGE.X$A_IMAG_DATA),
     1              IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),
     2              LCA,LCM,MODE,ZOOMVAL(SIDE),LLX(SIDE),
     3              LLY(SIDE),SIDE)
              ELSE
                CALL ZOOM(%VAL(IMAGE.X$A_IMAG_DATA),
     1              IMAGEC(SIDE).A,X_MOUSE,Y_MOUSE,SF(SIDE),
     2              ZOOMVAL(SIDE),LLX(SIDE),LLY(SIDE),
     3              LCA,LCM,MODE,-1,SIDE)
              END IF
              CALL X$PUT_IMAGE(DISP,WINDOW,
     1            X$DEFAULT_GC(DISP,SCREEN),
     2            IMAGE,ST(SIDE),88,ST(SIDE),88,512,512)
              CALL X$UNDEFINE_CURSOR(DISP,WINDOW)
            END IF
          END IF
C
C       Event is nothing in particular...
C
        END IF
500     CONTINUE
987     CONTINUE
      END DO
C
C     Clean up display
C
      CALL X$DESTROY_WINDOW(DISP,WINDOW)
      CALL X$CLOSE_DISPLAY(DISP)
      RETURN
      END

C
C     Subroutine to display the text according to the record array
C     values of ALL_TEXT.
C
      SUBROUTINE DO_TEXT(DISP,WINDOW,GC,ALL_TEXT)
      STRUCTURE /TEXT_TYPE/
        INTEGER X,Y
        CHARACTER*20 TEXT_COM
      END STRUCTURE
      RECORD /TEXT_TYPE/ ALL_TEXT(50)
      INTEGER I,XVAL,YVAL
      CHARACTER*10 TVAL
      I=1
      DO WHILE (ALL_TEXT(I).X.GT.0)
        XVAL=ALL_TEXT(I).X
        YVAL=ALL_TEXT(I).Y
        TVAL=ALL_TEXT(I).TEXT_COM
        CALL X$DRAW_STRING(DISP,WINDOW,GC,XVAL,YVAL,TVAL)
        I=I+1
      END DO
      RETURN
      END
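Throughout GET_EVENT above, a screen pixel is converted to a "real" image pixel with XM = LLX(SIDE) + (x_mouse-1)/(2**(ZOOMVAL(SIDE)-1)) and back with (XM-LLX(SIDE))*(2**(ZOOMVAL(SIDE)-1)). A minimal C sketch of that pair of mappings, assuming the listing's 1-based, power-of-two-zoom conventions (the function names are ours, not part of the listing):

```c
#include <assert.h>

/* Screen coordinate (1-based, within the 512-wide side) to "real"
 * image coordinate, given the side's upper-left corner ll and its
 * zoom value (magnification 2**(zoomval-1)). */
int screen_to_image(int screen_coord, int ll, int zoomval)
{
    int zfac = 1 << (zoomval - 1);       /* 2**(ZOOMVAL-1) */
    return ll + (screen_coord - 1) / zfac;
}

/* Inverse mapping, as in X_MOUSE = (XM-LLX)*(2**(ZOOMVAL-1)). */
int image_to_screen(int image_coord, int ll, int zoomval)
{
    int zfac = 1 << (zoomval - 1);
    return (image_coord - ll) * zfac;
}
```

The integer division means a zoomed-in pixel block of width 2**(ZOOMVAL-1) collapses to one image pixel, which is why the listing paints WIDTH+1 overlay pixels per corner point.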

C
C     Subroutine to modify a single image point; this is done to
C     be able to access the IMAGE structure correctly.
C
      SUBROUTINE MOD_IMA(IMAGE,YM,J,MFAC,XM,I,LCA)
      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000)
      INTEGER YM,J,MFAC,XM,I,LCA,X,Y
      X=XM+I-1
      Y=(YM+J-1)*2+MFAC
      IMAGE(Y).BLK(X)=LCA
      RETURN
      END

C
C     Subroutine to actually load the desired images.
C
      SUBROUTINE LOAD_IMAGE(LUN2,LUN3,PIXELS,LINES,IMAGE,
     1    LCA,LCM,TRAN)
      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      STRUCTURE /IMAGE_TYPE/
        INTEGER A(512,512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000),TEMP2,TEMP3
      RECORD /IMAGE_TYPE/ IMAGEC(2),OVERLAY(2)
      COMMON /BIG_AREA/ IMAGEC,OVERLAY
      INTEGER LUN2,LUN3,PIXELS,LINES,I,J,I2,I3,LCA,LCM,ICOLOR
      INTEGER TRAN(0:255),TRANS(0:255)
      BYTE TRANS_B(4,0:255)
      EQUIVALENCE(TRANS,TRANS_B)
C
C     Zero out the memory
C
      DO I=1,600
        DO J=1,512
          IMAGE(2*I-1).BLK(J)=0
          IMAGE(2*I).BLK(J)=0
        END DO
      END DO
C
C     Set command line background to white (=1)
C
      DO I=1,88
        DO J=1,512
          IMAGE(I*2-1).BLK(J)=1
          IMAGE(I*2).BLK(J)=1
        END DO
      END DO
C
C     Set color choices in upper right hand corner
C
      DO J=1,6
        ICOLOR=7+J
        IF (ICOLOR.EQ.14) ICOLOR=1
        IF (ICOLOR.EQ.19) ICOLOR=0
        DO I2=1,44
          DO I3=1,44
            IMAGE((I2)*2-1).BLK((J-1)*44+760+I3)=ICOLOR
          END DO
        END DO
      END DO
C
C     Display colorchart in upper right hand corner... the space
C     was there, so it makes a good check.  Serves no other useful
C     purpose.
C
      DO I=1,265
        DO J=1,44
          DO I2=1,4
            IMAGE(J*2).BLK((512-254-6)+(I-1)*4+I2)=I
          END DO
        END DO
      END DO
C
C     Horizontal black line, center of color structure
C
      DO I=2,88,2
        IMAGE(88).BLK(513-I)=0
      END DO
C
C     Vertical black lines, segment off the 6 color choices.
C
      DO I=1,6
        DO J=45,88
          IMAGE(J*2).BLK(512-I*44)=0
          IF (I.EQ.6) IMAGE(J*4).BLK(512-I*44)=0
        END DO
      END DO
C
C     Current colors for LCM and LCA are displayed in center-right
C     location.
C
      DO I2=1,121
        DO J=675,718
          I=(I2+6)*2
          IMAGE(I).BLK(J-512)=LCM
          IMAGE(I+88).BLK(J-512)=LCA
        END DO
      END DO
C
C     Horizontal black line down center of command structure.
C
      DO I=1,352
        IMAGE(87).BLK(I)=0
        IMAGE(88).BLK(I)=0
        IMAGE(89).BLK(I)=0
      END DO
C
C     Vertical black lines segmenting off command buttons.
C
      DO J=1,88
        DO I=1,4
          IMAGE(2*J-1).BLK(88*I)=0
          IMAGE(2*J-1).BLK(88*I+1)=0
          IMAGE(2*J-1).BLK(22*I+398)=0
        END DO
        IMAGE(2*J-1).BLK(353)=0
        IMAGE(2*J-1).BLK(354)=0
        IMAGE(2*J).BLK(25)=0
        IMAGE(2*J).BLK(26)=0
        IMAGE(2*J).BLK(27)=0
      END DO
C
C     Black lines subdividing some command buttons.
C     Individualized.
C
      DO J=2,88,2
        IMAGE(J-1).BLK(50)=0
        IMAGE(J-1).BLK(158)=0
        IMAGE(J-1).BLK(246)=0
        IMAGE(J-1).BLK(334)=0
        IMAGE(J-1+88).BLK(70)=0
        IMAGE(J-1+88).BLK(158)=0
        IMAGE(J-1+88).BLK(246)=0
      END DO
      DO I=1,25
        IMAGE(43).BLK(I)=0
        IMAGE(43).BLK(I+88)=0
        IMAGE(43).BLK(I+176)=0
        IMAGE(43).BLK(I+264)=0
        IMAGE(131).BLK(I)=0
        IMAGE(131).BLK(I+88)=0
        IMAGE(131).BLK(I+176)=0
      END DO
C
C     Store rest of image (512x1024 display space).
C
      DO I=89,600
        I2=2*I-1
        I3=2*I
        READ(LUN2) IMAGE(I2).BLK
        READ(LUN3) IMAGE(I3).BLK
        DO J=1,512
          IMAGEC(1).A(J,I-88)=ZEXT(IMAGE(I2).BLK(J))
          IMAGEC(2).A(J,I-88)=ZEXT(IMAGE(I3).BLK(J))
        END DO
      END DO
C
C     Modify image based on translation due to colortable and
C     stretching.
C
      CALL ADJUST_IMAGE(IMAGE,TRAN)
      RETURN
      END

C
C     Subroutine to convert image from non-stretched original data
C     to stretched data displayable on the screen in some
C     semblance of its original self.
C
      SUBROUTINE ADJUST_IMAGE(IMAGE,TRAN)
      IMPLICIT NONE
      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      STRUCTURE /IMAGE_TYPE/
        INTEGER A(512,512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000),TEMP2,TEMP3
      RECORD /IMAGE_TYPE/ IMAGEC(2),OVERLAY(2)
      COMMON /BIG_AREA/ IMAGEC,OVERLAY
      INTEGER PIXELS,LINES,I,J,I2,I3,HIST(2,0:255)
      INTEGER TRANSL(0:255),TRANSR(0:255),TRAN(0:255)
      INTEGER MINL,MAXL,MINR,MAXR
      BYTE TRANS_BL(4,0:255),TRANS_BR(4,0:255)
      EQUIVALENCE(TRANSL,TRANS_BL)
      EQUIVALENCE(TRANSR,TRANS_BR)
C
C     Set histogram to 0 for each of the two images.  HIST is used
C     to calculate stretching parameters for the two images.
C
      DO I=0,255
        HIST(1,I)=0
        HIST(2,I)=0
      END DO
C
C     Calculate histogram on each image.
C
      DO I=89,600
        I2=2*I-1
        I3=2*I
        DO J=1,512
          IMAGEC(1).A(J,I-88)=ZEXT(IMAGE(I2).BLK(J))
          IMAGEC(2).A(J,I-88)=ZEXT(IMAGE(I3).BLK(J))
          HIST(1,ZEXT(IMAGE(I2).BLK(J)))=
     1        HIST(1,ZEXT(IMAGE(I2).BLK(J)))+1
          HIST(2,ZEXT(IMAGE(I3).BLK(J)))=
     1        HIST(2,ZEXT(IMAGE(I3).BLK(J)))+1
        END DO
      END DO
C
C     Find minimum and maximum in each image (left and right).
C
      MINL=255
      MAXL=0
      MINR=255
      MAXR=0
      DO I=0,255
        IF (HIST(1,I).NE.0) THEN
          MINL=MIN(MINL,I)
          MAXL=MAX(MAXL,I)
        END IF
        IF (HIST(2,I).NE.0) THEN
          MINR=MIN(MINR,I)
          MAXR=MAX(MAXR,I)
        END IF
      END DO
C
C     Translate the left HIST into current colormap, as defined by
C     TRAN, placed in TRANSL.
C
      IF (MAXL.NE.MINL) THEN
        DO I=MINL,MAXL
          TRANSL(I)=TRAN(INT(((I-MINL)*(120)/(MAXL-MINL))+7))
        END DO
      END IF
C
C     Translate the right HIST into current colormap, as defined
C     by TRAN, placed in TRANSR.
C
      IF (MAXR.NE.MINR) THEN
        DO I=MINR,MAXR
          TRANSR(I)=TRAN(INT(((I-MINR)*(120)/(MAXR-MINR))+7))
        END DO
      END IF
C
C     Use TRANS_BL and TRANS_BR to translate the left and right
C     sections of the image.
C
      DO I=89,600
        I2=2*I-1
        I3=2*I
        DO J=1,512
          IMAGE(I2).BLK(J)=TRANS_BL(1,ZEXT(IMAGE(I2).BLK(J)))
          IMAGE(I3).BLK(J)=TRANS_BR(1,ZEXT(IMAGE(I3).BLK(J)))
        END DO
      END DO
      RETURN
      END
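ADJUST_IMAGE's stretch maps the occupied grey range [min,max] of each half onto the 120-level grey ramp that BUILD_COLORMAP places at colormap indices 7..127, via a 256-entry lookup table (the TRAN(INT(((I-MIN)*120/(MAX-MIN))+7)) expression above). A C sketch of that table construction, with the ramp base 7 and span 120 taken from the listing (the function name is ours):

```c
#include <assert.h>

/* Build a 256-entry lookup table mapping grey levels minv..maxv
 * linearly onto indices 7..127; out-of-range and flat-image
 * entries stay at the darkest ramp entry, 7. */
void build_stretch_lut(int minv, int maxv, int lut[256])
{
    for (int i = 0; i < 256; i++)
        lut[i] = 7;                      /* default: ramp bottom */
    if (maxv == minv)
        return;                          /* flat image: no stretch */
    for (int i = minv; i <= maxv; i++)
        lut[i] = (i - minv) * 120 / (maxv - minv) + 7;
}
```

Translating every pixel through such a table is a single indexed load per pixel, which is why the listing equivalences the INTEGER table to a BYTE array and copies byte 1 of each entry.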

      SUBROUTINE MANAUT(IMAGE)
C
C     Subroutine to switch from manual to automatic, as far as the
C     display shows (basically clears image behind the
C     MANUAL/AUTOMATIC text, since the text overlay would
C     otherwise still be visible).
C
      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000)
      INTEGER I,J

      DO I=2,88,2
        DO J=550,570
          IMAGE(I).BLK(J-512)=1
        END DO
      END DO

      RETURN
      END

C
C     Subroutine to modify the local color (LC) as shown in the
C     display.
C
      SUBROUTINE MODLC(IMAGE,FAC,LC)
      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000)
      INTEGER FAC,LC,I,J
      DO I=2,88,2
        DO J=675,718
          IMAGE(I+FAC).BLK(J-512)=LC
        END DO
      END DO
      RETURN
      END
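MOD_IMAGE, which follows, writes a side-relative pixel row j into block J*2+MFAC+176: the +176 skips the 88-row command strip (two blocks per row), and MFAC is the side's shift factor (SF = -1 for the left side, 0 for the right), selecting the odd or even block of each row's pair. A C sketch of that index computation (the helper name is ours, not from the listing):

```c
#include <assert.h>

/* Block index used for side-relative image row j (1-based): two
 * 512-byte blocks per display row, 88 command-strip rows (176
 * blocks) on top, and mfac = -1 (left side) or 0 (right side)
 * picking the odd or even block of the pair. */
int side_block(int j, int mfac)
{
    return j * 2 + mfac + 176;
}
```

This is consistent with LOAD_IMAGE above, which reads image data into blocks 177..1200 (display rows 89..600).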

      SUBROUTINE MOD_IMAGE(IMAGE,IMAGE2,X_MOUSE,Y_MOUSE,MFAC,
     1    LCA,LCM,MODE,ZV,LLX,LLY,SIDE)
C
C     Main image modification subroutine.  This subroutine will
C     take the components of the displayed (512x1024) image,
C     namely IMAGE, IMAGE2 and OVERLAY, and combine them as
C     appropriate (given zooms and other events).
C

      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000)
      STRUCTURE /IMAGE_TYPE/
        INTEGER A(512,512)
      END STRUCTURE
      RECORD /IMAGE_TYPE/ IMAGEC(2),OVERLAY(2)
      COMMON /BIG_AREA/ IMAGEC,OVERLAY
      BYTE IVALB
      INTEGER IMAGE2(512,512),IVAL,SIDE
      INTEGER LUN,PIXELS,LINES,I,J,I2,LCA,LCM
      INTEGER COUNT_PIX,K,X_MOUSE,Y_MOUSE,MFAC,MODE,ZV,LLX,LLY
      EQUIVALENCE (IVAL,IVALB)

C
C     If ZV (ZOOMVAL) is one, some calculation time can be
C     saved...
C

      IF (ZV.EQ.1) THEN
        DO 10 I=1,512
          DO 10 J=1,512
            IVAL=IMAGE2(I,J)
            IMAGE(J*2+MFAC+176).BLK(I)=IVALB
            IF (OVERLAY(SIDE).A(I,J).NE.0) THEN
              IF (OVERLAY(SIDE).A(I,J).EQ.1) THEN
                IMAGE(J*2+MFAC+176).BLK(I)=LCA
              ELSE
                IMAGE(J*2+MFAC+176).BLK(I)=LCM
              END IF
            END IF
10      CONTINUE
      ELSE

C
C     Need to convert to reality due to zooming.
C

        ZFAC=(2**(ZV-1))
        XM=LLX+(256/ZFAC)
        YM=LLY+(256/ZFAC)
        DO 20 I=1,512

          DO 20 J=1,512
            I_DBL=LLX+(I-1)/ZFAC
            J_DBL=LLY+(J-1)/ZFAC
            IVAL=IMAGE2(I_DBL,J_DBL)
            IMAGE(J*2+MFAC+176).BLK(I)=IVALB
            IF (OVERLAY(SIDE).A(I_DBL,J_DBL).NE.0) THEN
              IF (OVERLAY(SIDE).A(I_DBL,J_DBL).EQ.1) THEN
                IMAGE(J*2+MFAC+176).BLK(I)=LCA
              ELSE
                IMAGE(J*2+MFAC+176).BLK(I)=LCM
              END IF
            END IF
20      CONTINUE
      END IF
      RETURN
      END

C
C     Subroutine to provide ZOOM capability.  Calculates
C     appropriate zoom characteristics, such as new LLX and LLY,
C     as well as the zoom center location if zooming near an edge.
C     ZV (ZOOMVAL) has already been changed, so to translate the
C     mouse position to real space, need to use older ZV... which
C     is found using DIR (direction current zoom is in).
C
      SUBROUTINE ZOOM(IMAGE,IMAGE2,X_MOUSE,Y_MOUSE,MFAC,ZV,
     1    LLX,LLY,LCA,LCM,MODE,DIR,SIDE)
      STRUCTURE /IMAGE_BLOCK/
        BYTE BLK(512)
      END STRUCTURE
      STRUCTURE /IMAGE_TYPE/
        INTEGER A(512,512)
      END STRUCTURE
      RECORD /IMAGE_BLOCK/ IMAGE(2000)
      RECORD /IMAGE_TYPE/ IMAGEC(2),OVERLAY(2)
      COMMON /BIG_AREA/ IMAGEC,OVERLAY
      BYTE IVALB
      INTEGER IMAGE2(512,512),IVAL,SIDE
      INTEGER LUN,PIXELS,LINES,I,J,I2,ZV
      INTEGER COUNT_PIX,K,X_MOUSE,Y_MOUSE,MFAC,ZFAC,DIR
      INTEGER X_MIN,X_MAX,Y_MIN,Y_MAX,X_FAC,Y_FAC
      INTEGER LLX,LLY,XM,YM
      EQUIVALENCE (IVAL,IVALB)
C
C     Translate the requested zoom center to real space, using the
C     older ZV implied by DIR.
C
      ZFAC=2**(ZV-1)
      XM=X_MOUSE
      YM=Y_MOUSE
      IF (DIR.EQ.1) THEN
        XM=LLX+XM/(ZFAC/2)
        YM=LLY+YM/(ZFAC/2)
      ELSE
        XM=LLX+XM/(ZFAC*2)
        YM=LLY+YM/(ZFAC*2)
      END IF
C
C     Check requested zoom display min's and max's.  If outside
C     display area, readjust so as to use full screen.
C
      X_MIN=XM-(256/ZFAC)
      X_MAX=XM+(256/ZFAC)
      Y_MIN=YM-(256/ZFAC)
      Y_MAX=YM+(256/ZFAC)
      X_FAC=0
      Y_FAC=0
      IF (X_MIN.LT.1) X_FAC=-(X_MIN-1)
      IF (X_MAX.GT.512) X_FAC=512-X_MAX
      IF (Y_MIN.LT.1) Y_FAC=-(Y_MIN-1)
      IF (Y_MAX.GT.512) Y_FAC=512-Y_MAX
      XM=XM+X_FAC
      YM=YM+Y_FAC
C
C     Recalculate LLX and LLY using new zoom center and ZFAC
C     (which uses ZV).
C
      LLX=XM-(256/ZFAC)
      LLY=YM-(256/ZFAC)
      IF (LLX.LT.1) THEN
        XM=XM+(1-LLX)
        LLX=1
      END IF
      IF (LLY.LT.1) THEN
        YM=YM+(1-LLY)
        LLY=1
      END IF
C
C     Modify the image for the zoom.
C
      DO 20 I=1,512
        DO 20 J=1,512
          I_DBL=LLX+((I-1)/ZFAC)
          J_DBL=LLY+((J-1)/ZFAC)
          IVAL=IMAGE2(I_DBL,J_DBL)
          IMAGE(J*2+MFAC+176).BLK(I)=IVALB
          IF (OVERLAY(SIDE).A(I_DBL,J_DBL).NE.0) THEN
            IF (OVERLAY(SIDE).A(I_DBL,J_DBL).EQ.1) THEN
              IMAGE(J*2+MFAC+176).BLK(I)=LCA
            ELSE
              IMAGE(J*2+MFAC+176).BLK(I)=LCM
            END IF
          END IF
20    CONTINUE
      RETURN
      END

C
C     Simplistic X-windows call to ease the main event handler.
C
      SUBROUTINE RESTORE_DRAWABLE_EVENT_NOTIFY(DISP)
      IMPLICIT NONE
      INTEGER EVENT(50),PENDING,DISP,X$PENDING,COUNT/0/
      CALL SYS$WAKE(,)
      RETURN
      END

C
C     Subroutine to build the colormap for the display.  Returns
C     the colormap, and a translation table from the desired
C     colors to the colormap (TRAN).
C
      SUBROUTINE BUILD_COLORMAP(DISP,COLORMAP,TRAN)
      INCLUDE 'SYS$SHARE:DECW$XLIBDEF'
      RECORD /X$COLOR/ COLOR
      INTEGER COLORMAP,DISP,WIDTH,HEIGHT,N,VALUE
      INTEGER THRESHOLD,STATUS
      INTEGER STRETCH_LOW,STRETCH_HIGH
      INTEGER NONZERO,LOW,HIGH,HIST(0:255),TRANS(0:255)
      INTEGER TRAN(0:255)
      INTEGER R,G,B
      INTEGER*2 J(2)
      INTEGER*4 C
      REAL*8 STRETCH_OFFSET,STRETCH_FACTOR,OUT_SUM
      REAL*8 OUT_SUM_SQUARED,VARIANCE,STD_DEV,AVERAGE
      BYTE TRANS_B(4,0:255)
      EQUIVALENCE (TRANS,TRANS_B)
      EQUIVALENCE (C,J(1))
C
C     Read in color data.  The file 'COLORS.DAT' includes the RGB
C     values for the 6 basic colors (red, green, blue, cyan,
C     magenta, and yellow).  These colors get set in the colormap
C     (hopefully) for 8, 9, 10, 11, 12, and 13.
C
      OPEN(UNIT=18,STATUS='OLD',READONLY,
     1    FILE='COLORS.DAT')
      DO I=1,6
        READ(18,*)R,G,B
        C=R
        COLOR.x$w_colr_red=J(1)
        C=G
        COLOR.x$w_colr_green=J(1)
        C=B
        COLOR.x$w_colr_blue=J(1)
        COLOR.x$b_colr_flags=x$m_do_red+x$m_do_green+
     1      x$m_do_blue
        STATUS=X$ALLOC_COLOR(DISP,COLORMAP,COLOR)
        IF (.NOT.STATUS) THEN
          TYPE*,'Defining own colormap'
          COLORMAP=X$COPY_COLORMAP_AND_FREE(DISP,COLORMAP)
          STATUS=X$ALLOC_COLOR(DISP,COLORMAP,COLOR)
          IF (.NOT.STATUS) TYPE*,'Error allocating color:',
     1        STATUS
        END IF
C
C       Update translation table.
C
        TRAN(I)=COLOR.X$L_COLR_PIXEL
      END DO
C
C     Set 120 of the remaining colors to a grey level, from black
C     to white.
C
      DO I=7,127
        C=INT((FLOAT(I-7)/(120))*FLOAT(65535))
        COLOR.x$w_colr_red=J(1)
        COLOR.x$w_colr_green=J(1)
        COLOR.x$w_colr_blue=J(1)
        COLOR.x$b_colr_flags=x$m_do_red+x$m_do_green+
     1      x$m_do_blue
        STATUS=X$ALLOC_COLOR(DISP,COLORMAP,COLOR)
        IF (.NOT.STATUS) THEN
          TYPE*,'Defining own colormap'
          COLORMAP=X$COPY_COLORMAP_AND_FREE(DISP,COLORMAP)
          STATUS=X$ALLOC_COLOR(DISP,COLORMAP,COLOR)
          IF (.NOT.STATUS) TYPE*,'Error allocating color:',
     1        STATUS
        END IF
C
C       Update translation table.
C
        TRAN(I)=COLOR.X$L_COLR_PIXEL
      END DO
      RETURN
      END
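BUILD_COLORMAP fills slots 7..127 with a linear grey ramp, setting each color's red, green, and blue channels to INT((FLOAT(I-7)/120)*FLOAT(65535)). A C sketch of that ramp value (the function name is ours; the 7-base, 120-step, 16-bit-channel constants are from the listing):

```c
#include <assert.h>

/* 16-bit grey intensity for colormap slot i, i in 7..127, black
 * at slot 7 up to full white at slot 127. */
int grey_value(int i)
{
    return (int)(((double)(i - 7) / 120.0) * 65535.0);
}
```

All three channels get the same value, so the 121 allocated cells form a neutral grey wedge that ADJUST_IMAGE's stretch table indexes into.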
C
C     Subroutine to determine the area given the points.
C
      SUBROUTINE FIND_AREA(PT, NORM, VPT, AREA_TRUE)
      STRUCTURE /POINT_TYPE/
         REAL X,Y,Z
         INTEGER XL,YL,ZL,XR,YR,ZR
      END STRUCTURE
      STRUCTURE /VECT_TYPE/
         REAL X,Y,Z
      END STRUCTURE
      RECORD /POINT_TYPE/ PT(100),NORM
      RECORD /VECT_TYPE/ PQ,PR,PCR,QR,QP,PCR_ONE
      INTEGER VPT
      INTEGER AREA
      INTEGER RC(3,2)
      REAL MAT(3,3),MATINV(3,3),MAT2(3),ANS(3)
      REAL DET,LENGTH,AREA_TRUE
C
C     Perform LLS on XYZ data, to smooth out Z values.
C     Set matrix for LLS analysis.
C
      SUMX=0.0
      SUMY=0.0
      SUMZ=0.0
      SUMX2=0.0
      SUMY2=0.0
      SUMXY=0.0
      SUMXZ=0.0
      SUMYZ=0.0
      DO I=1,VPT
         SUMX=SUMX+(PT(I).X)
         SUMY=SUMY+(PT(I).Y)
         SUMZ=SUMZ+(PT(I).Z)
         SUMX2=SUMX2+((PT(I).X)**2)
         SUMY2=SUMY2+((PT(I).Y)**2)
         SUMXY=SUMXY+(PT(I).X*PT(I).Y)
         SUMXZ=SUMXZ+(PT(I).X*PT(I).Z)
         SUMYZ=SUMYZ+(PT(I).Y*PT(I).Z)
      END DO
      MAT(1,1)=SUMX2
      MAT(1,2)=SUMXY
      MAT(1,3)=SUMX
      MAT(2,1)=SUMXY
      MAT(2,2)=SUMY2
      MAT(2,3)=SUMY
      MAT(3,1)=SUMX
      MAT(3,2)=SUMY
      MAT(3,3)=FLOAT(VPT)
C
C     Set RHS matrix.
C
      MAT2(1)=SUMXZ
      MAT2(2)=SUMYZ
      MAT2(3)=SUMZ
C
C     Calculate MAT's determinant; does a solution exist?
C
      DET=MAT(1,1)*MAT(2,2)*MAT(3,3)+MAT(1,2)*MAT(2,3)*MAT(3,1)+
     1    MAT(1,3)*MAT(2,1)*MAT(3,2)-MAT(1,3)*MAT(2,2)*MAT(3,1)-
     2    MAT(1,2)*MAT(2,1)*MAT(3,3)-MAT(1,1)*MAT(2,3)*MAT(3,2)
      IF (DET.EQ.0) THEN
         TYPE*,'Matrix non-invertible'
         RETURN
      END IF
C
C     Intermediate array to help in matrix inversion...
C
      RC(1,1)=2
      RC(2,1)=1
      RC(3,1)=1
      RC(1,2)=3
      RC(2,2)=3
      RC(3,2)=2
C
C     Calculate MAT's inverse using RC and store in MATINV.
C
      DO I=1,3
         DO J=1,3
            MATINV(I,J)=((-1)**(I+J))*(MAT(RC(I,1),RC(J,1))*
     1         MAT(RC(I,2),RC(J,2))-MAT(RC(I,2),RC(J,1))*
     2         MAT(RC(I,1),RC(J,2)))
         END DO
      END DO
C
C     Calculate the solution matrix by matrix multiplication.
C
      ANS(1)=0.0
      ANS(2)=0.0
      ANS(3)=0.0
      DO I=1,3
         DO J=1,3
            ANS(I)=ANS(I)+MATINV(I,J)*MAT2(J)
         END DO
      END DO
C
C     Divide solution matrix by determinant to get actual
C     values.
C
      DO I=1,3
         ANS(I)=ANS(I)/DET
      END DO
C
C     Compute the normal to the plane using the first three
C     points.  PCR is the cross product of QR and QP.
C
      QP.X=(PT(1).X-PT(2).X)
      QP.Y=(PT(1).Y-PT(2).Y)
      QP.Z=(ANS(1)*(PT(1).X)+ANS(2)*(PT(1).Y)+ANS(3))-
     1     (ANS(1)*(PT(2).X)+ANS(2)*(PT(2).Y)+ANS(3))
      QR.X=(PT(3).X-PT(2).X)
      QR.Y=(PT(3).Y-PT(2).Y)
      QR.Z=(ANS(1)*(PT(3).X)+ANS(2)*(PT(3).Y)+ANS(3))-
     1     (ANS(1)*(PT(2).X)+ANS(2)*(PT(2).Y)+ANS(3))
      CALL CROSS(QP,QR,PCR)
C
C     Normalize the normal vector.  Use negative of PCR.Z,
C     because the Left and Right images as viewed are the
C     reverse of the images of the equation.
C
      LENGTH=SQRT(PCR.X**2+PCR.Y**2+PCR.Z**2)
      PCR.X=PCR.X/LENGTH
      PCR.Y=PCR.Y/LENGTH
      PCR.Z=-PCR.Z/LENGTH
C
C     Compute angle between determined plane
C        C=A*x+B*y+z
C     and the xy plane
C        z=0
C     cos(theta)=(A*0+B*0+1*1)/
C        [(A**2+B**2+1)**0.5 * (0**2+0**2+1**2)**0.5]
C
      THETA=ACOS(1/(SQRT((ANS(1)**2)+(ANS(2)**2)+1)))
C
C     Determine xy planar area.
C
      CALL XYAREA(PT, VPT, AREA)
C
C     Divide xy planar area by cos(theta) to find true area.
C
      AREA_TRUE=FLOAT(AREA)/COS(THETA)
      NORM.X=PCR.X
      NORM.Y=PCR.Y
      NORM.Z=PCR.Z
      RETURN
      END
C
C     Subroutine to evaluate the cross product of two vectors
C     A and B, and return the result in CRP.
C
      SUBROUTINE CROSS(A,B,CRP)
      STRUCTURE /VECT_TYPE/
         REAL X,Y,Z
      END STRUCTURE
      RECORD /VECT_TYPE/ A,B,CRP
      CRP.X=A.Y*B.Z-A.Z*B.Y
      CRP.Y=B.X*A.Z-A.X*B.Z
      CRP.Z=A.X*B.Y-B.X*A.Y
      RETURN
      END
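FIND_AREA's pipeline is: fit the plane z = A*x + B*y + C to the points by least squares (a 3x3 normal-equations system), take the tilt angle from cos(theta) = 1/sqrt(A**2 + B**2 + 1), and divide the xy-planar area by cos(theta) to recover the true area. A minimal Python sketch of that pipeline, using Cramer's rule as a stand-in for the listing's cofactor inversion (function names are ours):

```python
import math

def fit_plane(pts):
    """Least-squares fit of z = A*x + B*y + C via the same 3x3
    normal equations FIND_AREA assembles; solved by Cramer's rule."""
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sz = sum(p[2] for p in pts)
    sxx = sum(p[0]**2 for p in pts); syy = sum(p[1]**2 for p in pts)
    sxy = sum(p[0]*p[1] for p in pts)
    sxz = sum(p[0]*p[2] for p in pts); syz = sum(p[1]*p[2] for p in pts)
    n = float(len(pts))
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    d = det3(M)
    if d == 0:
        raise ValueError("matrix non-invertible")
    coeffs = []
    for j in range(3):           # replace column j with the RHS
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = r[i]
        coeffs.append(det3(Mj) / d)
    return coeffs                # [A, B, C]

def true_area(planar_area, A, B):
    """Tilt correction: divide the xy-projected area by the cosine
    of the angle between the fitted plane and the xy plane."""
    cos_t = 1.0 / math.sqrt(A*A + B*B + 1.0)
    return planar_area / cos_t
```

For points lying exactly on z = 2x + 1, the fit recovers A = 2, B = 0, C = 1, and a projected area of 10 corrects to 10*sqrt(5).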
c
c     Subroutine to get the xy (2D projection) area.  Very
c     similar to FILLAREA, except it uses the /temp_overlay/
c     common block (effectively SIDE=3) for the area.  Note
c     that this is a very brute force area determination (in
c     square pixels), but it avoids having to check for
c     concavity or convexity.
c
      subroutine xyarea(xy,vpt,area)
      implicit none
      structure /point_type/
         real x,y,z
         integer xl,yl,zl,xr,yr,zr
      end structure
      structure /image_type/
         integer a(512,512)
      end structure
      record /point_type/ xy(100)
      record /image_type/ overlay
      common /temp_overlay/ overlay
      integer side,vpt
      integer image(512,512)
      integer icrossed,k,j,i,area
      integer ymin,ymax,xmin,xmax,xlow,ylow
      area=0
      ymin=512
      ymax=1
      xmin=512
      xmax=1
c
c     Outline area using lines in overlay, get mins and
c     maxes for analysis.
c
      do i=1,vpt
         xmin=min(xmin,int(xy(i).x))
         xmax=max(xmax,int(xy(i).x))
         ymin=min(ymin,int(xy(i).y))
         ymax=max(ymax,int(xy(i).y))
      end do
      do i=2,vpt
         call line(int(xy(i).x),int(xy(i).y),int(xy(i-1).x),
     1      int(xy(i-1).y),3)
      end do
      call line(int(xy(vpt).x),int(xy(vpt).y),int(xy(1).x),
     1   int(xy(1).y),3)
c
c     Loop on j within the min and max, with some leeway.
c
      do j=ymin-1,ymax+1
         icrossed=0
         xlow=512
c
c     Loop on i within the min and max, with some leeway.
c
         do i=xmin-1,xmax+1
            if ((i.gt.0).and.(i.le.512).and.(j.gt.0).and.
     1          (j.le.512)) then
c
c     if (-) slope found, begin counting again.
c
               if ((overlay.a(i,j).ne.1).and.
     1             (overlay.a(i-1,j).eq.1)) then
                  icrossed=1
                  xlow=i
c
c     if (+) slope found, stop counting.
c
               else if ((overlay.a(i,j).eq.1).and.
     1                  (overlay.a(i-1,j).ne.1)) then
                  if (icrossed.eq.1) then
                     do k=xlow,i-1
                        if (overlay.a(k,j).ne.1) then
                           overlay.a(k,j)=overlay.a(k,j)+2
                        end if
                     end do
                  else
                     icrossed=0
                  end if
               end if
            end if
         end do
      end do
c
c     Loop on i within the min and max, with some leeway.
c
      do i=xmin-1,xmax+1
         icrossed=0
         ylow=512
c
c     Loop on j within the min and max, with some leeway.
c
         do j=ymin-1,ymax+1
            if ((i.gt.0).and.(i.le.512).and.(j.gt.0).and.
     1          (j.le.512)) then
c
c     if (-) slope found, begin counting again.
c
               if ((overlay.a(i,j).ne.1).and.
     1             (overlay.a(i,j-1).eq.1)) then
                  icrossed=1
                  ylow=j
c
c     if (+) slope found, stop counting.
c
               else if ((overlay.a(i,j).eq.1).and.
     1                  (overlay.a(i,j-1).ne.1)) then
                  if (icrossed.eq.1) then
                     do k=ylow,j-1
                        if (overlay.a(i,k).ne.1) then
                           overlay.a(i,k)=overlay.a(i,k)+4
                        end if
                     end do
                  else
                     icrossed=0
                  end if
               end if
            end if
         end do
      end do
c
c     Count everything between the crossovers, then zero out
c     the overlay (just in case).
c
      do i=xmin-1,xmax+1
         do j=ymin-1,ymax+1
            if ((overlay.a(i,j).eq.1).or.(overlay.a(i,j).eq.6))
     1         then
               area=area+1
            end if
            overlay.a(i,j)=0
         end do
      end do
      return
      end
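XYAREA's brute-force idea: rasterize the outline, sweep every row marking cells between outline crossings with +2, sweep every column marking with +4, then count cells that are outline (1) or marked by both sweeps (6). A small Python sketch on a toy grid, with a set of outline cells standing in for the overlay (the grid size and all names here are ours, not the original's):

```python
def xy_projection_area(outline_cells, size=16):
    """Brute-force pixel area in the spirit of XYAREA: row sweep
    adds 2 between crossings, column sweep adds 4; cells equal to
    1 (outline) or 6 (marked by both sweeps) are counted."""
    g = [[0] * size for _ in range(size)]
    for (x, y) in outline_cells:
        g[x][y] = 1
    # Row sweep: open a span on leaving the outline, fill on re-entry.
    for y in range(size):
        crossed, xlow = False, 0
        for x in range(1, size):
            if g[x][y] != 1 and g[x-1][y] == 1:    # (-) edge: span opens
                crossed, xlow = True, x
            elif g[x][y] == 1 and g[x-1][y] != 1:  # (+) edge: span closes
                if crossed:
                    for k in range(xlow, x):
                        if g[k][y] != 1:
                            g[k][y] += 2
                crossed = False
    # Column sweep, same idea, marking +4.
    for x in range(size):
        crossed, ylow = False, 0
        for y in range(1, size):
            if g[x][y] != 1 and g[x][y-1] == 1:
                crossed, ylow = True, y
            elif g[x][y] == 1 and g[x][y-1] != 1:
                if crossed:
                    for k in range(ylow, y):
                        if g[x][k] != 1:
                            g[x][k] += 4
                crossed = False
    return sum(1 for row in g for v in row if v in (1, 6))
```

A 5x4 rectangle outline (14 border cells, 6 interior cells) yields an area of 20 square pixels, as expected.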
c
c     Subroutine to take the left and right side designated
c     points and determine point depth, and 2D/3D areas.
c
      subroutine runxy(xy,vpt,io,vptl,plane_id,plane_mult)
      implicit none
      structure /corner/
         integer x(100),y(100)
      end structure
      structure /point_type/
         real x,y,z
         integer xl,yl,zl,xr,yr,zr
      end structure
      record /point_type/ points(100),norm
      record /corner/ xy(2)
      integer vpt,io,vptl,i,plane_id
      real alpha,beta,plane_mult,area
      character*20 plane_type(3)
      plane_type(1)='Basal (010)'
      plane_type(2)='Side (100)'
      plane_type(3)='Apical (001) & (101)'
c
c     Alpha is the rotation angle (+/- in radians) for the
c     SEM photographs.  Usually 5.0 degrees.
c
      alpha=5.0*acos(-1.0)/180.0
c
c     Need to have equal number of points in the left and
c     right image; if not, warn user and exit.
c
      if (vpt.ne.vptl) then
         TYPE*, 'WARNING - corner point mismatch ',vpt,vptl
         RETURN
      end if
c
c     Take left and right points and insert into single array,
c     which will also contain the 3D points once determined.
c
      do i=1,vpt
         points(i).xl=xy(1).x(i)
         points(i).yl=xy(1).y(i)
         points(i).xr=xy(2).x(i)
         points(i).yr=xy(2).y(i)
      end do
Subroutine to calculate the depth of a point given its its given point a of depth the calculate to Subroutine uruie _dept poi ,alpha) t, in o (p th p e d t_ e g subroutine point.z=z_r point.y=y_r point.x=x_r z_r=del_x/(2.0*sin(alpha)) .0 l)/2 y_r=float(point.yr) x_r=float(point.xr+point.x end return del_x=float(point.xr-point.xl) el del_x,x_r,y_r,z_r real alpha,beta point real /point_type/ record tutr /point_type/ structure none implicit end structure structure end real x,y,z x,y,z real integer x l,y l,zl,x r,y r,zr r,zr r,y l,zl,x l,y x integer 212 ooo ooo oooo op o uln ae (o fill), sn lines. using , ) l l i f (not area outline Loop to area. an l l i f to points 3 least Need at Subroutine to f i l l the area bounded by the points in the the in points the bounded by area the side. l chosen l i f to Subroutine uruie larea(xy, vptllp,mage, de, ) e d o ,m e id ,s e g a ,im l,l,p t p ,v t p ,v y x ( a e r illa f subroutine o i=xmin,xmaxdo xmax=l xmin=512 ymax=l ymin=512 o =l,ifinal i= do co mm on /big.area/ imagec,overlay imagec,overlay xy(2)common /big.area/ /corner/ record imagec(2),overlay(2) /image_type/ record if (side.eq.2) ifinal=vptl ifinal=vptl (side.eq.2) if /image_type/ structure /corner/ structure if (side.eq.l) then then (side.eq.l) if ifinal=vpt ,xmid xmin,xmax,ymin,ymax,ifinal integer image(512,512),ix,iy,lea,mode last,step,x,y,i integer last,y dx,dy,x integer vptl,1,p,vpt,side,xl,x2,y1,y2,istop,j integer none implicit n do end else n doend if end structure end end structure structure end xmax=max(xmax,xy(side).x(i)) xmin=min(xmin,xy(side).x(i)) ymin=min(ymin,xy(side).y(i)) o j=ymin,ymaxdo ymax=max(ymax,xy(side).y(i)) if (v p t.It.3) return return t.It.3) p (v if if (v p tl.It.3) return return tl.It.3) p (v if a(512,512) integer x(100),y(100) integer n doend overlay(side).a(i,j)=0 overlay(side).a(i,j)=0 213 214

c
c     Subroutine to fill the area bounded by the points in
c     the chosen side.
c
      subroutine fillarea(xy,vpt,vptl,l,p,image,side,mode)
      implicit none
      structure /image_type/
         integer a(512,512)
      end structure
      structure /corner/
         integer x(100),y(100)
      end structure
      record /image_type/ imagec(2),overlay(2)
      record /corner/ xy(2)
      common /big_area/ imagec,overlay
      integer vptl,l,p,vpt,side,x1,x2,y1,y2,istop,j
      integer dx,dy,xlast,ylast,step,x,y,i,xmid
      integer image(512,512),ix,iy,mode
      integer xmin,xmax,ymin,ymax,ifinal
c
c     Need at least 3 points to fill an area.
c
      if (vpt.lt.3) return
      if (vptl.lt.3) return
      ifinal=vpt
      if (side.eq.2) ifinal=vptl
      ymin=512
      ymax=1
      xmin=512
      xmax=1
      do i=1,ifinal
         xmin=min(xmin,xy(side).x(i))
         xmax=max(xmax,xy(side).x(i))
         ymin=min(ymin,xy(side).y(i))
         ymax=max(ymax,xy(side).y(i))
      end do
      do i=xmin,xmax
         do j=ymin,ymax
            overlay(side).a(i,j)=0
         end do
      end do
c
c     Loop to outline area (not fill), using lines.
c
      i=0
      istop=0
  987 i=i+1
      x1=xy(side).x(i)
      y1=xy(side).y(i)
      if ((xy(side).x(i+1).eq.0).and.(xy(side).y(i+1).eq.0))
     1   then
         x2=xy(side).x(1)
         y2=xy(side).y(1)
         istop=1
      else
         x2=xy(side).x(i+1)
         y2=xy(side).y(i+1)
      end if
      call line(x1,y1,x2,y2,side)
      if (istop.eq.0) goto 987
c
c     Fill the area, if mode is 'FILL' (1)
c
      if (mode.eq.1) then
         call fill(xy,vpt,vptl,l,p,image,side)
      end if
c
c     Draw guidelines from each point, extending into
c     the opposite side.  Set overlay to 2 for guidelines
c     (1 is used for lines).
c
      if (side.eq.1) then
         do i=1,vpt
            iy=xy(side).y(i)
            do ix=xy(side).x(i),512
               overlay(side).a(ix,iy)=2
            end do
         end do
         do i=1,vpt
            iy=xy(side).y(i)
            do ix=1,512
               overlay(2).a(ix,iy)=2
            end do
         end do
      else
         do i=1,vptl
            iy=xy(side).y(i)
            do ix=xy(side).x(i),1,-1
               overlay(side).a(ix,iy)=2
            end do
         end do
         do i=1,vptl
            iy=xy(side).y(i)
            do ix=512,1,-1
               overlay(1).a(ix,iy)=2
            end do
         end do
      end if
      return
      end
c
c     Subroutine to fill the area.
c
      subroutine fill(xy,vpt,vptl,l,p,image,side)
      implicit none
      structure /image_type/
         integer a(512,512)
      end structure
      structure /point_type/
         real x,y,z
         integer xl,yl,zl,xr,yr,zr
      end structure
      structure /corner/
         integer x(100),y(100)
      end structure
      record /point_type/ points(100)
      record /image_type/ imagec(2),overlay(2)
      record /corner/ xy(2)
      common /big_area/ imagec,overlay
      integer side,vpt,vptl,l,p
      integer dx,dy,xlast,ylast,step,x,y,i,xmid
      integer image(512,512)
      integer ymin,ymax,xmin,xmax,xlow,xhigh,radval,ifinal
      integer icrossed,k,j,icnt,icnt2,ylow,yhigh
      real alpha,beta
      character*50 filedesc,filename
c
c     Need to scan the image for crossovers, and find the
c     mins and maxes of the points.
c
      ifinal=vpt
      if (side.eq.2) ifinal=vptl
      ymin=512
      ymax=1
      xmin=512
      xmax=1
      do i=1,ifinal
         xmin=min(xmin,xy(side).x(i))
         xmax=max(xmax,xy(side).x(i))
         ymin=min(ymin,xy(side).y(i))
         ymax=max(ymax,xy(side).y(i))
      end do
c
c     Scan j between ymin and ymax (with a little leeway).
c     Scan everything between the crossovers.
c
      do j=ymin-1,ymax+1
         icrossed=0

         xlow=512
c
c     Scan i between xmin and xmax (with a little leeway)
c
         do i=xmin-1,xmax+1
            if ((i.gt.0).and.(i.le.512).and.(j.gt.0).and.
     1          (j.le.512)) then
c
c     See if encounter a line (- slope)
c
               if ((overlay(side).a(i,j).ne.1).and.
     1             (overlay(side).a(i-1,j).eq.1)) then
                  icrossed=1
                  xlow=i
c
c     See if encounter a line (+ slope)
c
               else if ((overlay(side).a(i,j).eq.1).and.
     1                  (overlay(side).a(i-1,j).ne.1)) then
                  if (icrossed.eq.1) then
                     do k=xlow,i-1
                        if (overlay(side).a(k,j).ne.1) then
                           overlay(side).a(k,j)=
     1                        overlay(side).a(k,j)+2
                        end if
                     end do
                  else
                     icrossed=0
                  end if
               end if
            end if
         end do
         icrossed=0
      end do
      do i=xmin-1,xmax+1
         icrossed=0
         ylow=512
         yhigh=0
c
c     Scan j between ymin and ymax (with a little leeway)
c
         do j=ymin-1,ymax+1
            if ((j.gt.0).and.(j.le.512).and.(i.gt.0)
     1          .and.(i.le.512)) then
c
c     See if encounter a line (- slope)
c
               if ((overlay(side).a(i,j).ne.1).and.
     1             (overlay(side).a(i,j-1).eq.1)) then
                  icrossed=1
                  ylow=j
c
c     See if encounter a line (+ slope)
c
               else if ((overlay(side).a(i,j).eq.1).and.
     1                  (overlay(side).a(i,j-1).ne.1)) then
                  if (icrossed.eq.1) then
                     do k=ylow,j-1
                        if (overlay(side).a(i,k).ne.1) then
                           overlay(side).a(i,k)=
     1                        overlay(side).a(i,k)+4
                        end if
                     end do
                  else
                     icrossed=0
                  end if
               end if
            end if
         end do
      end do
c
c     Pixels marked by both scans (6) and line pixels (1)
c     are inside the area; clear everything else.
c
      do i=xmin,xmax
         do j=ymin,ymax
            if ((overlay(side).a(i,j).eq.0).or.
     1          (overlay(side).a(i,j).eq.2).or.
     2          (overlay(side).a(i,j).eq.4)) then
               overlay(side).a(i,j)=0
            else
               overlay(side).a(i,j)=1
            end if
         end do
      end do
      return
      end
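The LINE subroutine that follows draws by accumulating an integer error term, the familiar Bresenham idea. A compact, general-purpose sketch of that technique in Python; this is not a transcription of the VMS routine (which writes into the overlays and splits into four sign cases), and the function name is ours:

```python
def raster_line(x0, y0, x1, y1):
    """Integer (Bresenham-style) line rasterization: step along
    the major axis, and let an accumulated error term decide when
    to also step along the minor axis.  Returns the pixel list."""
    pts = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        pts.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:      # error favors a step in x
            err -= dy
            x += sx
        if e2 < dx:       # error favors a step in y
            err += dx
            y += sy
    return pts
```

All arithmetic is integer add/subtract/compare, which is the speed advantage over real-valued slope stepping that the listing's comment credits to Berger (1986).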

      subroutine line(xm1,ym1,xm2,ym2,side)
c
c     Subroutine to draw a line between two points; data
c     is output directly to OVERLAY, dependent on the
c     value of SIDE.  If side is 1 or 2, then the standard
c     overlay is used.  Otherwise, if side is 3 the analysis
c     overlay in /temp_overlay/ is used (this is used in
c     the 2D projection area analysis routine).
c
      structure /image_type/
         integer a(512,512)
      end structure
      record /image_type/ imagec(2),overlay(2)
      record /image_type/ tempol
      common /big_area/ imagec,overlay
      common /temp_overlay/ tempol
      integer xstart,ystart,xend,yend
      integer xm1,ym1,xm2,ym2,side
      integer x,y,delta_x,delta_y,error
      integer count
c
c     Line drawing algorithm is adapted from Berger (1986).
c     It is an integer-based algorithm, which runs faster
c     than real-based algorithms.  It depends on determining
c     accumulated error to increment in a given direction.
c
      xstart=xm1
      ystart=ym1
      xend=xm2
      yend=ym2
      if (side.ne.3) then
         overlay(side).a(xstart,ystart)=1
         overlay(side).a(xend,yend)=1
      else
         tempol.a(xstart,ystart)=1
         tempol.a(xend,yend)=1
      end if
      error=0
      delta_x=xend-xstart
      delta_y=yend-ystart
      if (delta_y.lt.0) then
         call swap(xstart,ystart,xend,yend)
         delta_y=-delta_y
         delta_x=-delta_x
      end if
      x=xstart
      y=ystart
      if (delta_x.ge.0) then
         if (delta_x.ge.delta_y) then
            do count=1,(abs(delta_x)-1)
               if (error.lt.0) then
                  x=x+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_y
               else
                  x=x+1
                  y=y+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_y-delta_x
               end if
            end do
         else
            do count=1,(delta_y-1)
               if (error.lt.0) then
                  x=x+1
                  y=y+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_y-delta_x
               else
                  y=y+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error-delta_x
               end if
            end do
         end if
      else
         if (abs(delta_x).ge.delta_y) then
            do count=1,(abs(delta_x)-1)
               if (error.lt.0) then
                  x=x-1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_y
               else
                  x=x-1
                  y=y+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_x+delta_y
               end if
            end do
         else
            do count=1,(delta_y-1)
               if (error.lt.0) then
                  x=x-1
                  y=y+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_x+delta_y
               else
                  y=y+1
                  if (side.ne.3) then
                     overlay(side).a(x,y)=1
                  else
                     tempol.a(x,y)=1
                  end if
                  error=error+delta_x
               end if
            end do
         end if
      end if
      return
      end
c
c     Subroutine to swap two point values.  Used by
c     subroutine line to ensure proper direction of its
c     algorithm.
c
      subroutine swap(x1,y1,x2,y2)
      integer x1,y1,x2,y2
      integer temp
      temp=x2
      x2=x1
      x1=temp
      temp=y2
      y2=y1
      y1=temp
      return
      end

Appendix D

IMAGE_HEADER.FOR

C
C     Define structure for image header blocks in Aries
C     image files.
C
      STRUCTURE /IMAGE_HEADER/
         UNION
            MAP
               BYTE RAW(512)
            END MAP
            MAP
               INTEGER*4 UNKNOWN1
               INTEGER*4 NUM_LINES       !number of lines in image
               INTEGER*4 NUM_PIXELS      !number of pixels/line
               INTEGER*2 BYTES_PER_PIXEL
               INTEGER*2 UNKNOWN2
               INTEGER*4 UNKNOWN3
               INTEGER*4 CCT_START_PIXEL
               INTEGER*4 CCT_START_LINE
               INTEGER*4 UNKNOWN4(2)
               INTEGER*2 UTM_ZONE
               INTEGER*2 UNKNOWN5(5)
               REAL*4 UTM_NORTHING
               REAL*4 UTM_EASTING
               REAL*4 UTM_PIXEL_WIDTH
               REAL*4 UTM_PIXEL_HEIGHT
               REAL*4 MIN_PIXEL
               REAL*4 MAX_PIXEL
               REAL*4 MEAN
               REAL*4 STD_DEV
               INTEGER*4 HISTORY_BLOCK
               INTEGER*4 HEADER_BLOCK_COUNT
               INTEGER*4 DATA_BLOCK_COUNT
               INTEGER*2 UNKNOWN6(34)
            END MAP
         END UNION
      END STRUCTURE

Appendix E

TEXT_FOR_BUTTONS.DAT

6 27 'MODE'
98 17 'CLEAR'
98 35 'ALL'
182 17 'CLEAR'
182 35 'SUBAREA'
274 27 'FILL'
365 49 'DEPTH'
10 71 'ZOOM +'
98 71 ' ZOOM -'
186 71 'ZOOM 1'
274 71 'EXIT'
55 15 'FILL'
55 37 ' OUT'
75 59 'L'
75 81 ' R'
163 15 'L'
163 37 'R'
251 15 'L'
251 37 'R'
339 15 'L'
339 37 'R'
163 59 'L'
163 81 'R'
251 59 'L'
251 81 'R'
427 15 '1'
427 37 '1'
427 59 '1'
449 15 '2'
449 37 '2'
449 59 '2'
471 15 'X'
471 37 'X'
471 59 'X'
493 15 'Basal '
493 37 'Side '
493 59 'Apical'
645 27 'G->'
645 71 'F->'
0 0 'NONE'

Appendix F

COLORS.DAT

65535 0 0
0 65535 0
0 0 65535
65535 65535 0
65535 0 65535
0 65535 65535
65000 65000 65000
50000 50000 50000
37500 37500 37500
25000 25000 25000
12500 12500 12500
100 100 100