<<

A TECHNIQUE FOR THE AUTOMATED

DISSEMINATION OF WEATHER

DATA TO AIRCRAFT

A Thesis Presented to

The Faculty of the College of Engineering and Technology

Ohio University

In Partial Fulfillment

of the Requirements for the Degree

Master of Science

by

Craig B. Parker

June, 1989

TABLE OF CONTENTS

Page

LIST OF FIGURES .... iv

LIST OF TABLES .... vi

LIST OF ABBREVIATIONS .... vii

CHAPTER

I. INTRODUCTION 1

II. WEATHER DATA GATHERING AND PROCESSING CAPABILITIES .... 6

2.1 Ground Systems .... 8

2.2 Airborne Systems .... 10

2.3 Future Capabilities .... 11

III. DISSEMINATION OF WEATHER DATA TO AIRCRAFT BY DATA UPLINK .... 16

3.1 Cockpit Weather Data Requirements .... 17

3.2 Data Uplink System Requirements 20

3.3 Communication Systems with Data Uplink Capabilities .... 21

3.3.1 Current Systems .... 22

3.3.2 Mode S Beacon System .... 27

3.3.3 Future Systems .... 31

IV. ADDITION OF A DATA UPLINK CAPABILITY TO VHF COMMUNICATION SYSTEMS .... 33

4.1 Data Transmission on VHF Voice Communication Systems .... 33

4.2 Spectrally Efficient Phase Modulation Techniques .... 35

4.2.1 Quadrature Phase Shift Keying (QPSK) .... 37


4.2.2 Minimum Shift Keying (MSK) .... 45

4.3 Hybrid Amplitude/Phase Modulation Technique .... 54

4.3.1 Ideal Characteristics .... 54

4.3.2 Practical Limitations .... 55

4.4 Hybrid System Performance .... 59

4.4.1 Computer Simulation .... 59

4.4.2 Performance Evaluation .... 65

4.4.3 Spectral Occupancy .... 70

V. IMAGE DATA COMPRESSION TECHNIQUES .... 73

5.1 Error-free Techniques .... 74

5.1.1 Compact Codes .... 74

5.1.2 Differential Encoding .... 76

5.1.3 Contour Encoding .... 76

5.1.4 Run Length Encoding .... 77

5.2 Predictive Encoding .... 77

5.3 Transform Encoding .... 79

5.4 Fractal Image Compression .... 81

5.5 Optimum Techniques for Graphic Weather Data .... 86

VI. AN EXPERIMENTAL WEATHER DATA UPLINK SYSTEM .... 92

6.1 Gathering and Processing of Ground-based Weather Data .... 94

6.2 Image Data Compression Technique .... 101


6.3 Modulation Technique .... 104

6.4 Airborne Processing and Display .... 106

6.5 Flight Evaluation Results .... 111

VII. CONCLUSIONS AND RECOMMENDATIONS .... 114

7.1 Summary and Concluding Remarks .... 114

7.2 Recommendations for Future Research .... 119

REFERENCES 121

APPENDIX

A. Derivation of an Expression for Eb/No of a Hybrid Modulated Carrier .... 129

B. Derivation of an Expression for Power Spectral Density of a Hybrid Modulated Carrier 137

C. Kavouras RADAC Color Weather Receiver Modifications 145

D. Micromint ImageWise Video Digitizer Modifications 150

E. Image Processing Program Listings 158

F. Other Program Listings .... 202

ACKNOWLEDGEMENTS .... 211

LIST OF FIGURES

Figure Page

2.1 Functional Diagram of the Aviation Weather System .... 7

2.2 Illustration of a Typical Microburst .... 13

4.1 QPSK Signal Waveforms .... 42

4.2 QPSK Receiver Block Diagram .... 43

4.3 QPSK Signal Space Diagram .... 44

4.4 MSK Signal Waveforms .... 48

4.5 MSK Receiver Block Diagram .... 49

4.6 MSK Signal Space Diagram .... 50

4.7 Calculated Bit Error Probability for QPSK and MSK .... 51

4.8 Calculated Baseband Power Spectral Density for QPSK and MSK .... 53

4.9 Hybrid Modulation Transmitter and Receiver .... 56

4.10 Simulated Voice Modulation Waveform .... 63

4.11 Power Spectral Density for Simulated Voice Modulation .... 64

4.12 Output S/N Results for Simulated Voice Signal .... 66

4.13 Output S/N Results for Actual Voice Signal .... 67

4.14 Eb/No Results for Data Signal .... 68

4.15 Normalized Power Spectral Density as Calculated for Hybrid Modulated Carrier .... 72

5.1 Euclidean Geometry Example .... 83

5.2 Fractal Geometry Example .... 84

5.3 NWS Color Weather Radar Display .... 89


5.4 Pixel Level Histogram for NWS Color Weather Radar Display .... 90

6.1 Experimental Weather Data Uplink System .... 93

6.2 Kavouras RADAC Color Weather Radar Receiving System .... 96

6.3 Micromint ImageWise Video Digitizer .... 98

6.4 Engineering Center Stocker Flying Laboratory .... 107

6.5 King KX 175B VHF Navigation/Communication Transceiver .... 108

6.6 Airborne Color Weather Radar Display .... 110

C.1 Kavouras RADAC Video Board Schematic 1 of 2 .... 147

C.2 Kavouras RADAC Video Board Schematic 2 of 2 .... 148

C.3 Kavouras RADAC Video Board Modifications .... 149

D.1 ImageWise Digitizer Schematic 1 of 3 .... 154

D.2 ImageWise Digitizer Schematic 2 of 3 .... 155

D.3 ImageWise Digitizer Schematic 3 of 3 .... 156

D.4 ImageWise Digitizer Modifications .... 157

LIST OF TABLES

Table Page

1.1 NTSB Broad Cause/Factor Assignments for U.S. General Aviation Accidents with Fatalities 1981 - 1985 .... 2

1.2 NTSB Broad Cause/Factor Assignments for U.S. Air Carrier Accidents with Fatalities 1981 - 1985 .... 3

3.1 Some Recommended Weather Data Products for Uplink to Aircraft .... 18

4.1 QPSK Symbol Assignments .... 38

4.2 Simulated Voice Modulation Parameters .... 62

5.1 Huffman Encoding Example .... 75

5.2 Comparison of Image Data Compression Techniques .... 88

6.1 Kavouras RADAC Color Weather Radar Intensity Levels .... 95

6.2 Data Format for Digital Weather Images .... 103

LIST OF ABBREVIATIONS

AC alternating current

ACARS ARINC Communication Addressing and Reporting System

AIRMET airman's meteorological information

ALPA Air Line Pilots Association

AM amplitude modulation

AOPA Aircraft Owners and Pilots Association

ARINC Aeronautical Radio, Incorporated

ARSR Air Route Surveillance Radar

ARTCC Air Route Traffic Control Center

ASOS Automated Surface Observing System

ATA Air Transport Association

ATC Air Traffic Control

ATCRBS Air Traffic Control Radar Beacon System

ATCT Air Traffic Control Tower

ATIS Automatic Terminal Information Service

AWGN additive white Gaussian noise

AWOS Automated Weather Observing System

BPSK binary phase shift keying

CMOS complementary metal oxide semiconductor

CWSU Center Weather Service Unit

DABS Discrete Address Beacon System

dB decibel

DC direct current

DME Distance Measuring Equipment

DOD Department of Defense

DPCM differential pulse code modulation

DPSK differential phase shift keying

EGA enhanced graphics adapter

ELM Extended-length message

FA area forecast

FAA Federal Aviation Administration

FCC Federal Communications Commission

FFT fast Fourier transform

FT terminal forecast

GA general aviation

GHz gigahertz

GMT Greenwich mean time

GOES Geostationary Operational Environmental Satellite

Hz Hertz

IC integrated circuit

IF intermediate frequency

ILS Instrument Landing System

kHz kilohertz

KLT Karhunen-Loeve transform

LLWAS Low Level Wind Shear Alert System

LOS line of sight

MHz megahertz

MLS Microwave Landing System

MOPS minimum operational performance standards

MSK minimum shift keying

msl mean sea level

MUX multiplexer

NASA National Aeronautics and Space Administration

NBAA National Business Aircraft Association

NDB Non-directional Beacon

NEXRAD Next Generation Weather Radar

nm nautical miles

NOAA National Oceanic and Atmospheric Administration

NOTAM notice to airmen

NTSB National Transportation Safety Board

NTSC National Television System Committee

NWS National Weather Service

PIREP pilot report

PPM pulse position modulation

QPSK quadrature phase shift keying

RAM random access memory

RTCA Radio Technical Commission for Aeronautics

S/N signal-to-noise power ratio

SA sequence weather report

SIGMET significant meteorological information

TACAN Tactical Air Navigation

TDWR Terminal Doppler Weather Radar

TTL transistor-transistor logic

VHF very high frequency

VOR VHF Omnidirectional Range

UHF ultra high frequency

USAF United States Air Force

CHAPTER I

INTRODUCTION

The National Transportation Safety Board (NTSB) data for general aviation (GA) accidents in the United States [1] show weather to be either a factor in, or the overall cause of, 39.6 percent of general aviation accidents with fatalities occurring from 1981 to 1985 (table 1.1). According to these data, only pilot error outranks weather as the most significant overall factor in GA accidents with fatalities.

Similarly, the data for United States air carriers over the same time period [2] show this statistic to be 35.0 percent (table 1.2). Included in the data for air carriers is the August 1985 Delta L-1011 crash at Dallas/Fort Worth International Airport. One hundred thirty-five lives were lost in this crash when the pilot attempted to penetrate a severe thunderstorm containing a microburst on final approach. This crash led the NTSB to express concern that although the Federal Aviation Administration (FAA) had addressed nearly all of the actions proposed by the board since 1973, one important item was not adequately being addressed. In the aircraft accident report for the Delta accident, the NTSB identifies that problem as "the communication of hazardous weather information available from ground sensors to the flightcrew in time for the information to be useful in go/no-go decision making." The NTSB continues: "Current

Table 1.1 NTSB Broad Cause/Factor Assignments for U.S. General Aviation Accidents with Fatalities 1981 - 1985

Broad Cause/Factor Percent

Pilot 88.7

Weather 39.6

Miscellaneous 27.6

Terrain 18.8

Personnel 11.4

Powerplant 10.3

Undetermined 8.9

Airframe 6.8

Systems 1.8

Instruments/Equipment/Accessories 1.5

Rotorcraft 1.3

Airport/Airways/Facilities 1.3

Landing Gear 0.2

From reference [1]

Table 1.2 NTSB Broad Cause/Factor Assignments for U.S. Air Carrier Accidents with Fatalities 1981 - 1985

Broad Cause/Factor Percent

Personnel 60.0

Pilot 55.0

Miscellaneous 45.0

Weather 35.0

Airport/Airways/Facilities 20.0

Systems 15.0

Terrain 10.0

Airframe 10.0

Instruments/Equipment/Accessories 10.0

From reference [2] (not including commuter airlines)

procedures to relay NWS [National Weather Service] information through the ATC [Air Traffic Control] system are not and will never be adequate for dynamic weather conditions [3]."

The aircraft pilot is the principal decision maker when in flight, and has the ultimate responsibility for the safe operation of the aircraft. When the aircraft is being piloted in and around areas of thunderstorms, this responsibility requires that the pilot make effective go/no-go decisions in order to navigate the aircraft around individual hazardous weather cells. Penetration of such cells poses a threat to aircraft safety, and when aircraft are navigated into such a cell, penetration is usually attempted only because the pilot is unaware of the cell intensity.

Such an occurrence is the result of an incorrect decision on the part of the pilot due to a lack of data about the weather cell. In order to make life-critical go/no-go decisions effectively, the pilot must have a priori knowledge of current weather conditions in the area. Further, these data must be timely and accurate. Although such data are increasingly available on the ground, the pilot does not have sufficient access to them. Consequently, the pilot must make decisions based on weather data which are often outdated and inaccurate.

It is the purpose of this paper to: (1) examine this problem in further detail, (2) define some of the more significant requirements which should be addressed by a proposed solution to the problem, (3) explore and evaluate some aeronautical systems which offer the capability to meet at least some of these requirements, and (4) report on the results of research aimed at solving the problem through the utilization of an existing aeronautical system. The proposed technique involves adapting existing VHF communication equipment to provide both analog voice and digital weather data transmission. Such a system could be developed to meet the defined requirements for weather data while occupying no frequency spectrum beyond that already utilized for analog voice transmission. Additionally, the system modifications could be accomplished at a cost to the aircraft operator substantially below that of integrating a new system. Such a system would similarly minimize equipment, facilities, and maintenance cost to the FAA.

CHAPTER II

WEATHER DATA GATHERING AND PROCESSING CAPABILITIES

Weather information is provided to the users of the

National Airspace System by a weather system which has been developed and is operated through a joint venture involving the FAA, the Department of Defense (DOD), the National

Oceanic and Atmospheric Administration (NOAA), and the civil aviation community. Many of the weather data products utilized in the system are provided by the NWS. The system

is known as the Aviation Weather System and is represented

functionally in figure 2.1.

In spite of significant advances in the ability to gather and disseminate weather data on the ground, the

Aviation Weather System continues to rely heavily upon voice communication for the dissemination of weather data to the aircraft. Aeronautical Radio, Incorporated (ARINC) provides a data link service known as the ARINC Communication Addressing and Reporting System (ACARS). Upon demand, this system can provide weather data text to the aircraft. Except for this private service, weather data dissemination within the

National Airspace System is accomplished by voice communi­ cation. As a result, the system is severely limited in its ability to disseminate current weather data which is opera­

tionally significant to the individual user.

Figure 2.1 Functional Diagram of the Aviation Weather System (data acquisition systems, FAA and NWS weather data products, pilot reports, and voice and data communication paths to airspace users)

2.1 Ground Systems

One of the most valuable weather data gathering systems in use today is weather radar. Most weather radar systems transmit pulsed-carrier radio frequency energy at either S­

Band (2 to 4 GHz) or C-Band (4 to 8 GHz). This transmitted radar energy is reflected by precipitation, and is then re­ ceived and processed to provide both precipitation location and intensity information. The resulting display is known as a precipitation reflectivity pattern. Because most hazardous weather in the eastern united States is accompa­ nied by heavy precipitation intensities, precipitation reflectivity patterns provide useful weather data for air­ space users.

The NWS operates a network of 56 S-Band and 61 C-Band weather . The S-Band radars form the primary network.

These radars produce radiated powers from 400,000 to 500,000 watts providing an effective range of up to 240 nautical miles (nm). The C-Band radars comprise the secondary net­ work. These radars produce from 250,000 to 300,000 watts of radiated power [4]. Because of the smaller wavelength of the C-Band energy, these radars have a lesser range than do

S-Band systems. Also, energy from smaller wavelength radars is more readily absorbed by precipitation. Unfortunately, this makes C-Band radars more susceptible to rapid attenuation in areas of intense precipitation. This phenomenon tends to mask areas of precipitation beyond intense precipitation areas. The C-Band radars are less costly than S-Band radars, and are used primarily to fill gaps in the S-Band coverage zones.

Air Route Surveillance Radar (ARSR) operates at L-Band

(1 to 2 GHz), and is used by Air Route Traffic Control

Centers (ARTCC) for surveillance of the National Air Space.

These radars operate at a longer wavelength and with a wider vertical beamwidth antenna than either C-Band or S-Band weather radars [5]. These characteristics are desirable for the detection of aircraft, but are not optimal for weather detection. These radars can, however, provide limited pre­ cipitation information in areas without NWS weather radar coverage.

Satellite images of cloud tops are provided to the Aviation Weather System by the Geostationary Operational Environmental Satellite (GOES) system [6]. The GOES system records both photographic and infrared images. Infrared images provide information on the temperatures of cloud tops. Cloud tops which extend higher into the atmosphere are associated with stronger storms, and have colder tops. Consequently, photographic images can be computer-enhanced using the infrared sensor data to provide images which show the heights of the cloud tops. The resulting composite images contain information which allows the detection of both strong storms and also of more generalized weather disturbances.

Current weather data is also provided to the aviation weather system through NWS and FAA surface observations, and by airspace users themselves through pilot reports (PIREPS).

PIREPS are used for the reporting of routine as well as hazardous weather information by voice communication between the aircraft pilot or aircrew and controllers on the ground.

2.2 Airborne Systems

Many larger aircraft are equipped with on-board weather radar which provides precipitation reflectivity data similar to that produced by ground systems. These airborne systems transmit and receive radio frequency pulses from an antenna located in the nose of the aircraft. Due largely to equipment weight, cost, and size limitations, most airborne systems operate at X-Band (8 to 12 GHz) and with comparatively low radiated power. Since the X-Band units have wavelengths even smaller than C-Band radar systems, the energy is even more readily absorbed by precipitation than is C-Band energy. Consequently, airborne radar systems have a shorter range and are even more subject to precipitation masking effects than ground weather radar. Additionally, due to airborne antenna limitations, airborne units cannot provide 360 degree azimuthal coverage or resolution equal to ground systems. Although airborne weather radar does provide valuable weather data to the pilot and aircrew, it has limitations which make it less effective than ground weather radar.

Because smaller, single engine aircraft are limited as to locations in which to install an airborne radar antenna, and due to the relatively high cost of airborne weather radar systems, equipment has been developed which locates thunderstorms in another way. This equipment senses the electromagnetic field radiated by lightning discharges. The frequency of lightning occurrence has been shown to be proportional to thunderstorm intensity, and storms can be categorized in this way [7]. The airborne system processes both intensity and direction information and displays an individual lightning strike as a dot on the display in the proper relative location [8]. Such a capability is valuable for the location of severe thunderstorms when no other information is available, but is extremely limited when compared to ground weather sensing and processing capabilities.

2.3 Future Capabilities

The past ten years have brought an increased awareness of the hazardous effects of wind shear in the aircraft terminal area. Wind shear is the sudden change of surface wind velocities, and can be extremely dangerous when encountered by aircraft operating at low airspeeds and in close proximity to the ground. Such conditions occur during aircraft take-off and final approach. Wind shear is most commonly caused by gust fronts which occur when warm and cold air masses collide. Wind shear of this type tends to cover a large area, and to be fairly predictable. Conversely, wind shear caused by microbursts can be very localized, and difficult to predict. Microbursts are caused by the sudden and violent downdraft which occurs when a column of cooled air rapidly descends from a thunderstorm base. If the microburst contacts the ground, it spreads laterally resulting in a very localized and intense wind shear. A typical microburst is illustrated in figure 2.2 [9]. The

Dallas/Fort Worth crash discussed in the introduction is a classic example of the potentially disastrous outcome of microburst encounters. Many of the future developments in weather data gathering systems have been driven by the need to predict and sense the occurrence of microbursts near the terminal area.

In the eastern United States, microbursts are usually accompanied by heavy precipitation and are thus detected by existing radars. The western United States frequently encounters dry microbursts, which are not accompanied by heavy precipitation. It has been demonstrated that Doppler radar systems can detect this occurrence. For this reason, new weather radar equipment is capable of Doppler detection and processing techniques. The new radar systems are thus

Figure 2.2 Illustration of a Typical Microburst (from reference [9])

capable of detecting precipitation intensity as earlier radars were, but are also capable of detecting wind shear conditions through Doppler techniques. In addition, advanced signal processing by these radars will allow the display of many useful weather data products which were unavailable on previous radars.

The first wind shear detection system to be implemented was the Low Level Wind Shear Alert System (LLWAS). This system is currently being installed at airports with vulnerability to wind shear, and its capability is being expanded through upgrades. It is anticipated that all systems will be upgraded by 1992. The LLWAS system monitors wind fields at the terminal area through a series of wind sensors. The system triggers an alert and provides a display for the Air

Traffic Control Tower (ATCT) crew if the wind velocities vary significantly between any of the sensors [10]. LLWAS has limited ability to detect microbursts since their occur­ rence is usually quite localized.

Further improvements in terminal area weather monitor­ ing will be realized upon installation of the Terminal

Doppler Weather Radar (TDWR) system. The system will provide graphic indications of both precipitation and wind shear conditions including microbursts. Testing of TDWR in a dry microburst environment took place in Denver during the summer of 1988, and the results were favorable [11]. During the summer of 1989, TDWR will be evaluated in Kansas City to determine its effectiveness in detecting microbursts which are accompanied by heavier precipitation intensities.

Similar improvements to en route weather surveillance capabilities will be realized when the Next Generation Weather Radar (NEXRAD) is operational. This system is being developed jointly by the FAA, the United States Air Force

(USAF), and the NWS. The NEXRAD system will replace the existing NWS weather radar network, and is expected to be in place by 1995 [12].

The Automated Surface Observing System (ASOS) and the

Automated Weather Observing System (AWOS) will together provide a much improved weather observing capability. The AWOS system will automatically observe and report altimeter setting, wind, temperature, dew point temperature, density altitude, visibility, and sky conditions to 10,000 feet. Similarly, the ASOS system will automatically observe and report current surface weather conditions [13].

In spite of the significant advances in ground weather data gathering and processing capabilities, future airborne weather gathering and processing capabilities will be limited by airborne cost, size, and weight considerations. Also, development of airborne equipment is not funded by the FAA, and is market-driven. Consequently, airborne capabilities will be slower to develop than will ground capabilities.

CHAPTER III

DISSEMINATION OF WEATHER DATA TO AIRCRAFT BY DATA UPLINK

Voice transmission of ground weather data products to aircraft is inadequate even for currently available weather data products. Air traffic controllers, many of whom are working near capacity, are unable to describe verbally weather data products in sufficient detail, and at short enough intervals, to provide the pilot and aircrew with accurate and current weather data. With continuing advances in ground weather data gathering and processing capabilities, greater numbers of more detailed weather data products are becoming available. If voice transmission of weather data is currently inadequate, it will be completely incapable of disseminating future weather data products.

The transmission of weather data products from the ground to the aircraft is widely recognized as a feasible solution to the weather data dissemination problem. The FAA has the responsibility to determine what meteorological services are required for aircraft efficiency and safety, and has in fact committed to providing a data uplink service. The Aviation Weather System Plan states that at some future time, weather information will be available to the pilot and aircrew "automatically and by request via data link [14]."

3.1 Cockpit Weather Data Requirements

Before discussing the requirements for a system capable of providing a data uplink service, it is useful to examine the requirements for cockpit weather data. Airspace user organizations and several U.S. Government agencies have expressed recommendations for the improvement of the Aviation Weather System. Recommendations relating to the uplink of weather data products are shown in table 3.1 [15].

Work at Ohio University Avionics Engineering Center has resulted in the identification of certain existing weather data products which would have the capability of improving aircraft safety and efficiency if they were available to the pilot [16],[17],[18]. These weather data products are presented here in order of importance:

1. Hazardous weather conditions

2. Radar precipitation reflectivity patterns

3. Sequence weather report (SA)

4. Terminal and area forecast (FT and FA)

5. Critical weather maps

6. Text information to include:

Pilot reports (PIREPS)

Significant meteorological information (SIGMETS)

Airman's meteorological information (AIRMETS)

Notice to airmen (NOTAMS)

7. Satellite images

Table 3.1 Some Recommended Weather Data Products for Uplink to Aircraft

Organization     Expressed User Needs

ALPA             * NWS weather radar display

AOPA             * Timely critical forecasts
                 * Self-briefing capability
                 * Direct access to weather data

ATA              * Satellite images
                 * NWS/FAA weather radar display

NBAA             * Pre-flight and in-flight weather data/briefings
                 * Mass dissemination of Aviation Weather System data

NTSB             * Real time display and classification of precipitation and turbulence

NOAA/FAA/NASA(1) * Automated transmission of hazardous weather areas

ALPA - Air Line Pilots Association
AOPA - Aircraft Owners and Pilots Association
ATA - Air Transport Association
NBAA - National Business Aircraft Association
NASA - National Aeronautics and Space Administration
(1) 1981 jointly sponsored workshop on meteorological and environmental inputs to aviation systems

From reference [15]

Wind shear detection systems such as LLWAS have been shown by simulation to be as much as 94 percent effective in allowing pilot avoidance of microburst areas if displayed graphically to the pilot. This compares to only 43 percent if the data is provided to the pilot by voice alone [19].

For wind shear detection systems to provide the level of safety of which they are capable, the pilot or aircrew must have the information available in graphic form.

Although a weather data uplink system providing text information would certainly provide a greater dissemination capability than the present system, it is apparent that any system developed for the uplink of weather data products should provide for some level of graphics transmission.

Some of the more important graphic weather data products would consist of real time radar precipitation reflectivity patterns and graphic wind shear display information. Such data would provide the pilot useful and current information about areas of operationally significant weather both at flight altitudes and at approach and take-off.

Ground radar information is currently available to the

Center Weather Service Unit (CWSU) within the Aviation Weather System [20]. Improved radar systems such as NEXRAD and TDWR will provide information which can be processed to provide additional weather data products. The uplink of such information is technically feasible now, and would greatly enhance the safety of the National Airspace System.

3.2 Data Uplink System Requirements

The system utilized for the uplink of weather data to the aircraft must be selected in such a way that certain transmission requirements can be satisfied. One of the most important requirements is that the transmission of weather data not require the use of any additional portion of the radio frequency spectrum. The radio frequency spectrum is a precious natural resource. Since no more of this resource is or ever will be created, it is important to develop new systems such that they maximize spectral efficiency. For this reason, it is beneficial to consider a means for accom­ plishing weather data transmission using aeronautical radio frequency bands already in existence.

Further, the uplink of weather data to aircraft must provide an adequate temporal update rate for the display of weather data in near-real time. The provision of outdated weather information could lead the pilot to believe that severe weather is being avoided, when in fact it has moved or developed into the chosen flight path. The availability of outdated weather information could in this way place the aircraft at greater risk than if no weather data had been available at all, and the pilot had deviated completely around the storm area.

Another issue of concern for the transmission of weather data to aircraft is the ease with which the data can be obtained. Pilot and aircrew workloads are already quite heavy, and are becoming even more so. This condition is especially true during aircraft take-off and landing. For this reason, the pilot or aircrew must be able to access the provided weather data without expending a great deal of additional effort. Although not considered in this paper, the presentation of the weather data in the cockpit must similarly be accomplished in such a way as to minimize the additional crew workload required for interpretation of the available weather data [21].

Finally, weather data should be available to all airspace users in the National Airspace, and for all alti­ tudes of flight. Provision of weather data for all National

Airspace users can best be accomplished by the FAA. Only in this way can commonality of equipment be maintained, and transmission be accomplished without competition for trans­ mission rights from commercial interests. Altitude coverage sufficient for all airspace users would need to extend from the highest jet airways down to final approach and take-off altitudes. Coverage on the ground at the terminal would be very useful for updated pilot weather briefings after take­ off delays.

3.3 Communication Systems with Data Uplink Capabilities

There exist several aeronautical radio frequency systems which could be modified to accomplish the transmission of weather data to the aircraft. Additionally, there are systems under development which will provide a data link capability that might be utilized for the same purpose.

Finally, future systems in the early development phase could be designed specifically for weather data uplink applica­ tions. These systems are examined in this section.

3.3.1 Current Systems

In order that an existing aeronautical system be con­ sidered for the addition of a data uplink capability, it is important that the system be universally available within the National Airspace System. Several existing aeronautical systems offer this universality, and it is these systems which are considered. This section provides an objective overview of some of these systems with an emphasis on those characteristics which might affect the addition of a data transmission capability.

The first system considered is the Instrument Landing

System (ILS). Although scheduled for replacement by the

Microwave Landing System (MLS), the ILS system is scheduled to be completely operational until 1995 [22]. Due to delays in MLS implementation, however, ILS will probably be in use long past this deadline. The system is composed of three major subsystems: the localizer, the glideslope, and the marker beacons. The localizer provides approach guidance in the azimuthal plane relative to the runway centerline by radiating tone modulated carriers within the VHF navigation frequency band from 108.0 to 111.975 MHz. In addition, the localizer carrier is amplitude modulated by a tone for identification, and some localizers provide an AM voice transmission capability. The glideslope provides approach guidance in the elevation plane relative to a three-degree approach centerline by similarly modulating carriers in the

UHF frequency band from 328.6 to 335.4 MHz. No identifica­ tion or voice capability is provided on the glideslope system. The marker beacons provide information about the distance from the threshold along the ILS path, and produce tone modulated carriers in the VHF frequency band at 75 MHz.

Spatial coverage of the localizer signal is +/- 10 degrees of runway center in azimuth and line of sight (LOS) from 0 to 7 degrees in elevation to a range of 25 nm [23]. Spatial coverage of the glideslope signal is +/- 8 degrees of runway center in azimuth and from 1.35 to 5.25 degrees in elevation to a range of 10 nm [24]. Spatial coverage of the marker beacon is similarly limited to the final approach path. The

ILS system is thus not capable of transmission to users anywhere other than in the terminal final approach area.

The Non-directional Radio Beacon (NDB) is used as a navigational aid. This system transmits in the AM broadcast frequency band from 190 to 1750 kHz. Propagation at these frequencies is by ground wave and coverage is thus dependent upon both the radiated power of the transmitter and the current propagation properties. The NDB utilizes amplitude modulation or on/off carrier keying for transmission of identification tones [25].

Distance Measuring Equipment (DME) is also a navigational aid. DME ground stations serve as transponders, and provide a measurement of the distance from the aircraft to the station when interrogated by an aircraft. The military system known as Tactical Air Navigation (TACAN) operates in much the same manner and also incorporates an omnibearing capability. These systems transmit and receive pulses in the UHF frequency band from 960 to 1,215 MHz [26]. Coverage of DME transponders is LOS, and the range is dependent upon the specific site.

The VHF Omnidirectional Range (VOR) system provides civil aircraft with omnibearing information. The VOR signal is radiated within the VHF navigation frequency bands from

108.0 to 117.975 MHz. In addition to the navigation infor­ mation, the system provides a voice transmission capability.

Coverage is LOS from approximately 0 to 40 degrees in eleva­ tion [27], and the range is dependent upon transmitter output power, local terrain, and aircraft altitude. DME or

TACAN transponders are commonly installed at VOR transmitter sites. In these cases, the range of the DME transponder is usually chosen to be equivalent to the VOR range. The combined VOR and DME systems provide a bearing and distance (rho-theta) navigation capability, and form the primary civil navigation system currently in use in the United States.

Civil and most military aircraft utilize VHF AM for air-to-ground communication. These systems utilize amplitude modulation of VHF carriers in the frequency range of

117.975 to 136.0 MHz. Coverage is LOS and range is depen­ dent upon transmitter output power, local terrain, and aircraft altitude. En route ATC facilities utilize VHF AM and are typically capable of communication over a range of

60 nm between altitudes of 1,000 feet above ground level and

18,000 feet above mean sea level (msl), and 150 nm between altitudes of 18,000 and 45,000 feet msl. Approach and departure communication in the terminal area also utilizes

VHF AM and has a typical LOS range of 60 nm from the ground to 25,000 feet msl [28].

Although any of these systems has the potential for an added data uplink capability, use of the systems offering the greatest spatial coverage would minimize the number of uplink stations necessary to achieve the required overall coverage. Because the VOR/DME system is the primary naviga­ tion system, coverage is maintained throughout the National

Airspace System. Similarly, VHF AM communication is the sole means of communication between airspace users and the controllers on the ground, and as for VOR/DME, coverage is provided throughout the National Airspace System. This universal coverage along with a high user equipage percentage makes the use of either the VOR/DME equipment or the VHF

AM communication equipment very desirable for the uplink of weather data.

For VHF AM communication systems, the VHF carrier is amplitude modulated by voice information. Since voice is the only transmitted information for this equipment, simultaneous transmission of data could be accomplished if the received voice information were not degraded below an intelligible level. Conversely, the voice information could not degrade the data transmission below a level required to provide sufficient uplink performance. Assuming the entire voice bandwidth of these systems could be utilized for data transmission, some measure of possible data rate performance can be obtained by applying the Shannon bound for theoretical channel capacity [29]. The Shannon bound predicts the maximum bit rate obtainable for the case of a signal transmitted through a bandpass channel corrupted by additive white Gaussian noise (AWGN):

R = (BW)·log2[1 + (S/N)c]    (3.1)

where: (S/N)c = channel signal-to-noise power ratio
       R = bit rate (bits per second)
       BW = channel bandwidth at receiver input (Hz)

For a voice bandwidth of 2,500 Hz (which corresponds to a double-sideband AM channel bandwidth of roughly 5,000 Hz at the receiver input) and a worst-case (S/N)c of one, a theoretical maximum bit rate of 5,000 bits per second could be obtainable.
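The arithmetic behind this figure can be checked with a few lines of code. The sketch below simply evaluates equation 3.1; the 5,000 Hz channel bandwidth figure and the function name are illustrative assumptions, not part of the thesis.

```python
import math

def shannon_capacity_bps(channel_bw_hz: float, snr_linear: float) -> float:
    """Evaluate equation 3.1: R = BW * log2(1 + (S/N)c)."""
    return channel_bw_hz * math.log2(1.0 + snr_linear)

# A 2,500 Hz voice band occupies roughly 5,000 Hz of double-sideband AM
# channel bandwidth at the receiver input; worst-case (S/N)c = 1.
print(shannon_capacity_bps(5000.0, 1.0))    # -> 5000.0 bits per second
print(shannon_capacity_bps(5000.0, 10.0))   # a higher S/N raises the bound
```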

VOR systems provide a voice transmission capability, but the carrier is modulated with many other signals used to provide the omnibearing information. Although data transmission could replace VOR voice transmission, the complexity of the VOR signal greatly complicates adding further modulations without interfering with the existing signals or reducing required VOR capabilities.

The addition of a data uplink capability to DME and

TACAN equipment has been studied by Ross and Kennedy [30].

The results of that study show that data transmission at a maximum useful data rate on the order of 600 bits/second could be achieved by applying additional pulses to the

DME/TACAN pulse format. The additional pulses would be modulated using either pulse position modulation (PPM) or differential phase shift keying (DPSK) to accomplish data transmission. The benefits would be partially offset by a greatly increased interrogator and transponder hardware complexity, but would nevertheless provide a means for the ground-to-air transmission of weather data to the aircraft.
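Reference [30] is not reproduced here, but the differential encoding idea it relies on is simple to illustrate. The following is a minimal sketch of binary DPSK only (the pulse-position variant is omitted), with arbitrary parameters; it is not the specific format proposed for DME/TACAN.

```python
import numpy as np

def dpsk_encode(bits):
    """Binary DPSK: a 1 toggles the carrier phase by pi, a 0 leaves it unchanged."""
    phase = 0.0                              # arbitrary reference phase
    phases = []
    for b in bits:
        if b:
            phase = (phase + np.pi) % (2 * np.pi)
        phases.append(phase)
    return np.array(phases)

def dpsk_decode(phases, ref_phase=0.0):
    """Recover each bit from the phase change relative to the previous pulse."""
    prev = ref_phase
    bits = []
    for p in phases:
        bits.append(0 if np.cos(p - prev) > 0 else 1)
        prev = p
    return bits

data = [0, 1, 1, 0, 1, 0, 0, 0]
assert dpsk_decode(dpsk_encode(data)) == data
```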

3.3.2 Mode S Beacon System

The Mode S (Mode Select) Beacon System is the planned 28 replacement for the current Air Traffic Control Radar Beacon

System (ATCRBS). The Mode S system has evolved from earlier development work on the Discrete Address Beacon System

(DABS). Utilizing 1030 and 1090 MHz for interrogations and replies respectively, this system is capable of providing radar surveillance with selective interrogation of aircraft, and also of air-to-ground and ground-to-air data exchange.

The data exchange capability was incorporated to allow the exchange of messages between the aircraft and ATC, thereby replacing some of the required voice communication. The current Aviation Weather System Plan relies upon this capa­ bility for the uplink of weather data from the ground to the aircraft [31].

The Mode S interrogator is capable of transmitting three classes of messages to the aircraft. These are as follows:

1. Surveillance data

2. Standard message

3. Extended-length message (ELM)

These messages consist of a 56 bit or 112 bit data block.

Included in the data block is a 24 bit discrete address overlaid with parity check bits which provides for both selective interrogation and error detection. The messages are transmitted at a 4 megabit per second data rate using

DPSK. Effective utilization of the parity check bits will allow for an overall undetected bit error rate of less than 10^-7. The interrogator is capable of servicing up to 700 users within the specified service volume, and so the overall data exchange rate is quite high [32],[33].

The standard message and the ELM are capable of providing general purpose data transmission. The standard message provides surveillance data along with a 56-bit data link message. This message type is intended primarily for messages not requiring large numbers of consecutive bits, and replaces the surveillance message while still providing the surveillance data. The ELM does not contain the surveillance data, and thus cannot substitute for the surveillance message. It is designed to convey large amounts of data, and consists of up to sixteen 112-bit data blocks of which 80 bits are available for the message. This results in the transfer of up to 1,280 bits per ELM. Because graphic weather data products will consist of relatively large amounts of data, the ELM is the most efficient message format for the uplink of this information.

The ELM data rate available to the individual user is limited by the ability of the interrogator to service all of the users within the service volume. For terminal interrogators, the antenna scan time is approximately 4 seconds [34]. Allowing only one 16-segment ELM for an individual user in any one antenna scan limits the individual user to 1,280 bits in the 4 second scan time. The Radio Technical Commission for Aeronautics (RTCA) minimum operational performance standards (MOPS) for the airborne Mode S equipment require the aircraft transponder data link interface to be capable of handling this same amount of data in the same

4 second time period [35]. Both performance criteria result

in an overall effective ELM data rate of 320 bits per second

for the individual user. This number is considerably lower than the overall system data handling capability because each individual user is being selectively addressed.
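The 320 bit-per-second figure follows directly from the numbers quoted above; a quick check is shown below (the constant names are ours, not taken from the Mode S specification).

```python
# Quick check of the effective per-user ELM rate quoted above.
BITS_PER_SEGMENT = 80      # message bits available in each 112-bit ELM block
SEGMENTS_PER_ELM = 16
SCAN_TIME_S = 4.0          # approximate terminal interrogator antenna scan time

bits_per_elm = BITS_PER_SEGMENT * SEGMENTS_PER_ELM   # 1,280 bits
print(bits_per_elm / SCAN_TIME_S)                    # -> 320.0 bits per second
```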

The Mode S selective communication capability is very

important for the transfer of data used for ATC functions, and could be useful for the transmission of smaller amounts of user-specific weather data while en route. It is not, however, required for the transmission of weather data products that affect all users within a specified area. In fact, the data rate limitation of a fully loaded Mode S

interrogator in the terminal area could slow the transmis­ sion time of graphic weather data products at a time when access to current weather data is critical.

One other very important Mode S data link limitation

for the uplink of weather data is the system spatial cover­ age. The National Airspace System Plan calls for the first

137 Mode S systems to provide coverage down to 12,500 feet msl, and to the ground at high-density terminals. These

systems are to be in place in the early 1990s. The second

contract for an additional 60 systems would then provide

coverage down to 6,000 feet msl by 1993 [36]. As a result, any airspace user at altitudes of less than 12,500 feet msl would not have weather uplink capability unless in the vicinity of a high density terminal equipped with Mode S surveillance. After 1993, this coverage would be lowered to

6,000 feet msl, but this still leaves GA aircraft without weather data uplink capability over a large portion of the

National Airspace System which is frequently used by these aircraft. This is a critical situation since GA is in general more affected by weather due to lower flight altitudes.

For these two important reasons, the Mode S data link capability alone cannot meet the weather data uplink requirements for all airspace users. Although the Mode S data link capability is invaluable for the transmission of ATC data, and might be useful for the transmission of limited amounts of user-specific weather data to aircraft en route, it does not have a sufficient capability for use as a universal weather data uplink system. Supplemental systems would be necessary at lower density airports, and would greatly enhance the individual user data rate capability elsewhere.

3.3.3 Future Systems

In the future, systems for the uplink of weather data will utilize communication satellites for the relay of ground-transmitted data to aircraft [37]. This mode of uplink offers several distinct advantages. The first advantage is the potential for unlimited range of communication.

If the satellites utilized for relay of data offer continuous global coverage, the aircraft would be able to receive weather data continuously anywhere in the world. Another important advantage of satellite communication is the high data rates which can be achieved. Because these systems are allocated very high carrier frequencies, the bandwidth of individual channels is large enough to support high data rates. As a result, large amounts of graphic weather data could be transmitted in very little time. Unfortunately, worldwide agreement on data communication protocols, costs of leasing satellite relay channels, and cost and complexity of airborne equipment have slowed development of such systems. Although satellite systems will eventually be able to provide the weather data uplink service, it is important to consider other systems for use in the interim.

CHAPTER IV

ADDITION OF A DATA UPLINK CAPABILITY TO VHF COMMUNICATION

SYSTEMS

The determination has been made that if the entire voice bandwidth of a VHF AM communication channel could be utilized for data transmission without degrading the voice transmission capability, a data rate as high as 5,000 bits per second could be achieved simultaneously. In order to arrive at this conclusion, it was assumed that the entire voice bandwidth of a VHF voice communication channel could be utilized simultaneously for data and voice transmission, and that these two transmissions could be made not to inter­ fere with one another. The validity of this assumption may seem a bit dubious, but due to the relatively high predicted data rate capacity to be gained, the technique is worth pursuing. This chapter will address this issue.

4.1 Data Transmission on VHF Voice Communication Systems

As described earlier in this paper, voice communication between aircraft and the ground is accomplished using ampli­ tude modulation of radio frequency carriers in the VHF band.

The specific frequencies utilized are 720 channels spaced 25 kHz apart from 118.0 to 135.975 MHz. Additional frequency spectrum from 135.975 to 137.0 MHz will be available for aviation services in the 1990s. Special Committee 140 (SC-140) of the RTCA provided recommendations for the use of these additional channels. Notable recommendations were to continue to use AM, and to consider reducing channel bandwidths from 25 kHz to 12.5 kHz so that more channels could be placed within this spectrum [38].

SC-140 also investigated VHF data link applications

"based on its understanding that there will be continuing applications for air-ground data link using VHF channels even though the current FAA planning is proceeding in the direction of using the DABS [Mode S] data link for most or all air traffic control applications [39]." The approach taken by SC-140 for VHF data links is based upon the work of another RTCA committee, SC-110/111, which made recommen­ dations for universal air-ground digital communication system standards [40]. That approach is the gradual phasing in of digital communications by accomplishing both on the same channels using time diversity. Through the use of this technique, either voice or data communication is taking place over the channel, but not both simultaneously.

VHF communication channels are used by the ACARS system to provide information dissemination to users by data link at a data rate of 2,400 bits per second. On the VHF fre­ quencies used for the ACARS service, voice audio is replaced by audio signals used for data transmission [41].

In both of the RTCA recommendations, and in the ARINC implementation, data transmission is accomplished over existing VHF voice communication channels. Although neither of these techniques provides simultaneous voice and data transmission on one channel, they do demonstrate the feasibility of using VHF communication channels for data uplink to aircraft.

4.2 Spectrally Efficient Phase Modulation Techniques

To avoid interference with radio communication systems operating on adjacent channels, it is important to contain transmitted power within specified channel bandwidths. Due to limited availability of frequency spectrum, there is increasing pressure to reduce these channel bandwidths so that more channels can be placed in a specific frequency band. These issues are largely responsible for the develop­ ment of spectrally efficient phase modulation techniques to accomplish data transmission. Spectral efficiency is de­ fined as "the ratio of data rate to channel bandwidth (in units of bits per second per hertz) for a specified proba­ bility of symbol error [42]."

Two such spectrally efficient phase modulation techniques are quadrature phase shift keying (QPSK) and minimum shift keying (MSK). These modulations utilize four phase states of the carrier to transmit data bits. A particular phase state represents one transmitted symbol. Because there are four phase states, each symbol transmits two data bits. QPSK and MSK are thus able to transmit twice as much data per transmitted symbol as techniques which utilize only two carrier phase states. Binary phase shift keying (BPSK) is an example of the latter, and is only capable of transmitting one bit of data for each of its two phase states.

In general, a phase modulated carrier can be expressed as:

g(t) = Ac·cos[2πfct + θ(t)]    (4.1)

where: Ac is an amplitude coefficient (volts)
       fc is the carrier frequency (Hz)
       θ(t) is the phase modulation created by a modulating signal

The signal can be analyzed more readily in the quadrature form:

g(t) = gi(t)·cos(2πfct) + gq(t)·sin(2πfct)    (4.2)

where: gi(t) is the in-phase component
       gq(t) is the quadrature component

Use of a common trigonometric identity, cos(A + B) = cos(A)·cos(B) - sin(A)·sin(B), allows equation 4.1 to be put in the form of equation 4.2:

g(t) = Ac·cos[θ(t)]·cos(2πfct) - Ac·sin[θ(t)]·sin(2πfct)    (4.3)

where: gi(t) = Ac·cos[θ(t)]
       gq(t) = -Ac·sin[θ(t)]

4.2.1 Quadrature Phase Shift Keying (QPSK) [43]

For QPSK, the modulated phase is given by:

θ(t) = (2i - 1)·π/4    (4.4)

where: i = 1, 2, 3, or 4 depending on the symbol transmitted by the data stream during the symbol period T
       The carrier frequency fc is chosen to be an integer multiple of 1/T

Equation 4.3 shows that for QPSK, gi(t) and gq(t) take on values of ±Ac/√2 depending on the input data stream. The values of i for each symbol, the corresponding QPSK phase from equation 4.4, and the resulting in-phase and quadrature components from equation 4.3 are shown as a function of input data stream bit pairs in table 4.1. Notice that the in-phase and quadrature components take on the sign of the even and odd numbered bits in the input data stream respectively. Since the two quadrature components of the carrier are coherently orthogonal signals, the bit streams that modulate each of these components can be demodulated independently. The in-phase and quadrature carrier components

Table 4.1 QPSK Symbol Assignments

Input Data    i    Phase        Message Coordinates
Bit Pairs          (radians)    gi(t)        gq(t)

1 0           1    π/4          +Ac/√2       -Ac/√2

0 0           2    3π/4         -Ac/√2       -Ac/√2

0 1           3    5π/4         -Ac/√2       +Ac/√2

1 1           4    7π/4         +Ac/√2       +Ac/√2

along with the resulting QPSK carrier formed by their sum are shown in figure 4.1 for the input binary data sequence of 01101000, and for fc = 2/T.

The best error performance for QPSK can be obtained using a coherent correlative receiver. A coherent receiver converts the incoming signal to baseband by product detection. This process requires the multiplication of the received signal by a replica of the transmitted carrier.

Correlative receivers process this baseband signal by pass-

ing it through a linear filter specifically designed to minimize the noise and maximize the signal. Such filters are known as matched filters, and the filtering process is sometimes referred to as an integrate and dump process. A

QPSK coherent correlative receiver is shown in figure 4.2.

Notice that the in-phase and quadrature carrier components are processed independently.
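A compact way to see how the table 4.1 mapping and the independent in-phase/quadrature decisions of figure 4.2 fit together is a baseband simulation. The following sketch is not from the thesis: it assumes unit symbol energy, a complex-baseband AWGN channel with an arbitrary noise level, and it skips carrier and bit synchronization entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpsk_map(bits):
    """Map bit pairs to message coordinates per table 4.1 (1 -> +Ac/sqrt(2), 0 -> -Ac/sqrt(2))."""
    pairs = np.asarray(bits).reshape(-1, 2)
    gi = np.where(pairs[:, 0] == 1, 1.0, -1.0) / np.sqrt(2.0)  # sign from first bit of pair
    gq = np.where(pairs[:, 1] == 1, 1.0, -1.0) / np.sqrt(2.0)  # sign from second bit of pair
    return gi + 1j * gq                                        # unit-energy symbols (Ac = 1)

def qpsk_decide(received):
    """Independent threshold decisions on each quadrature component (figure 4.3 regions)."""
    first = (received.real > 0).astype(int)
    second = (received.imag > 0).astype(int)
    return np.column_stack((first, second)).reshape(-1)

bits = rng.integers(0, 2, 2000)
symbols = qpsk_map(bits)
noise = np.sqrt(0.05) * (rng.normal(size=symbols.shape) + 1j * rng.normal(size=symbols.shape))
decided = qpsk_decide(symbols + noise)
print("bit errors:", int(np.count_nonzero(decided != bits)), "out of", bits.size)
```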

A mathematical analysis can provide a predicted error performance for a QPSK receiver of this type assuming that the input signal is perturbed by AWGN. The output of the in-phase and quadrature correlators can then be shown to be:

xi = √E·cos[(2i - 1)·π/4] + wi    (4.5)

xq = -√E·sin[(2i - 1)·π/4] + wq    (4.6)

where: xi is the in-phase output with AWGN of wi
       xq is the quadrature output with AWGN of wq
       E = Ac²·T/2 is the energy per symbol (joules)

Notice that xi and xq are sample values of independent Gaussian random variables Xi and Xq with means given by √E·cos[(2i - 1)·π/4] and -√E·sin[(2i - 1)·π/4] respectively, and with variances equal to the channel noise power spectral density of N0/2. The resulting received signal space diagram for QPSK is shown in figure 4.3. By inspection of the signal space diagram, it can be seen that the probability of an error occurring is equal to the probability of one of the message points falling into an adjacent decision region. Since the signal space diagram is symmetrical, the error probabilities for each message point are the same. Consider message point four for the analysis. In this case, i is equal to four, and the mean values of both Xi and Xq are √(E/2). The probability of a correct decision is given by the probability of both xi > 0 and xq > 0. Since the random variables Xi and Xq are independent, this probability is given by:

Pc = [ ∫ from 0 to ∞ of (πN0)^(-1/2) · exp[-(x - √(E/2))²/N0] dx ]²    (4.7)

where: Pc is the probability of a correct decision
       x = xi = xq
       √(E/2) is the mean
       N0/2 is the variance

From the definition of the error function [44]:

erfc(x) = (2π)^(-1/2) · ∫ from x to ∞ of exp(-z²/2) dz    (4.8)

By making the substitution z = (x - √(E/2)) / √(N0/2), equation 4.7 can be written as:

Pc = [ (2π)^(-1/2) · ∫ from -√(E/N0) to ∞ of exp(-z²/2) dz ]²    (4.9)

or alternately as:

Pc = [ 1 - (2π)^(-1/2) · ∫ from √(E/N0) to ∞ of exp(-z²/2) dz ]²    (4.10)

By use of equation 4.8:

Pc = [1 - erfc(√(E/N0))]²
   = 1 - 2·erfc(√(E/N0)) + erfc²(√(E/N0))    (4.11)

Figure 4.1 QPSK Signal Waveforms (in-phase component, quadrature component, and resulting QPSK carrier for the input data sequence 01101000)

Figure 4.2 QPSK Receiver Block Diagram

Figure 4.3 QPSK Signal Space Diagram

but since the probability of error Pe = 1 - Pc:

Pe = 2·erfc(√(E/N0)) - erfc²(√(E/N0))    (4.12)

4.2.2 Minimum Shift Keying (MSK) [45],[46],[47]

Consider another type of phase modulation where θ(t) is defined as follows:

θ(t) = θ(0) ± [π/(2·Tb)]·t    (4.13)

where: Tb = T/2 is the bit period
       The carrier frequency fc is chosen to be an integer multiple of 1/(4·Tb)
       The + corresponds to the case where gi(t) and gq(t) have the same signs, and the - corresponds to the case where the signs are opposite
       θ(0) is 0 or π depending upon the past history of the modulating process

Notice that \theta(t) changes linearly by \pm\pi/2 over the bit period T_b. As for QPSK, the in-phase and quadrature components are modulated by the even and odd data bits of the input data stream respectively. Similarly, since the two quadrature components of the carrier are coherently orthogonal signals, the bit streams modulating each of these components can be demodulated independently. For MSK, however, a time offset equal to the bit period T_b is introduced between these two bit streams. Equation 4.3 then shows that the in-phase and quadrature components g_i(t) and g_q(t) are sinusoidally weighted, and since the symbol period is T = 2T_b, they only change phase at the zero crossings. The in-phase and the quadrature carrier components along with the resulting MSK carrier formed by their sum are shown in figure 4.4 for the input binary data sequence of 01101000, and for f_c = 1/T_b. Two properties of MSK not shared by QPSK are apparent by observation of figure 4.4. These properties are: (1) the MSK carrier has a constant amplitude envelope, and (2) the phase of the MSK carrier is at all points continuous. As is the case for QPSK, superior MSK noise performance can be achieved with a coherent correlative receiver. Such a receiver for MSK is shown in figure 4.5. The output of the in-phase and quadrature correlators can be shown to be:

x_i = \sqrt{E_b} \cos[\theta(T)] + w_i        (4.14)

x_q = -\sqrt{E_b} \sin[\theta(T + T_b)] + w_q        (4.15)

where:  x_i is the in-phase output with AWGN of w_i
        x_q is the quadrature output with AWGN of w_q
        E_b = E/2 is the energy per bit (joules)
        \theta(T) and \theta(T + T_b) provide the phase at the data stream bit transition intervals

x_i and x_q are sample values of independent Gaussian random variables X_i and X_q with means given by \sqrt{E_b}\cos[\theta(T)] and -\sqrt{E_b}\sin[\theta(T + T_b)] respectively, and with variances equal to the channel noise power spectral density of N_0/2. The resulting received signal space diagram for MSK is shown in figure 4.6. By comparison with the QPSK signal space diagram shown in figure 4.3, it can be seen that the MSK signal space is exactly the same as the QPSK signal space with the exception that for MSK, the transmitted bit energy E_b is used in place of the transmitted symbol energy E used for QPSK. Since E_b = E/2, the error analysis for MSK proceeds exactly as for QPSK and yields the identical result as shown in equation 4.12. The bit error probability for MSK and QPSK is expressed again here as a function of E_b:

P_e = 2\,\mathrm{erfc}(\sqrt{2 E_b / N_0}) - \mathrm{erfc}^2(\sqrt{2 E_b / N_0})        (4.16)

Equation 4.16 is shown graphically in figure 4.7.
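The bit error probabilities plotted in figure 4.7 can be computed directly from equation 4.16 as reconstructed above. In the short C sketch below, the erfc of equation 4.8 (which carries a (2π)^(-1/2) factor and is therefore the Gaussian Q-function) is evaluated from the standard library complementary error function; this is an illustrative check only, not part of the thesis software.

/* Evaluate equation 4.16 over a range of Eb/No values (cf. figure 4.7).
   The erfc of equation 4.8 is the Gaussian Q-function, computed here
   from the standard library complementary error function. */
#include <math.h>
#include <stdio.h>

static double gauss_q(double x)
{
    return 0.5 * erfc(x / sqrt(2.0));
}

int main(void)
{
    double ebno_db;

    for (ebno_db = 1.0; ebno_db <= 11.0; ebno_db += 1.0) {
        double ebno = pow(10.0, ebno_db / 10.0);
        double q    = gauss_q(sqrt(2.0 * ebno));
        double pe   = 2.0 * q - q * q;               /* equation 4.16 */
        printf("Eb/No = %5.1f dB   Pe = %.2e\n", ebno_db, pe);
    }
    return 0;
}

At an E_b/N_0 of about 9.8 dB this sketch returns a probability near 1 x 10^-5, which agrees with the worst-case figure quoted later in section 4.4.2.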

The power spectral density of any bandpass signal g(t) (denoted S_g(f)) is an amplitude-scaled and frequency-shifted version of the baseband power spectral density S_b(f).

Figure 4.4 MSK Signal Waveforms

Figure 4.5 MSK Receiver Block Diagram

Figure 4.6 MSK Signal Space Diagram

Figure 4.7 Calculated Bit Error Probability for QPSK and MSK

The power spectral density of the baseband signal for both MSK and QPSK is given by the power spectral density of the sum g_i(t) + g_q(t). Because these two components are statistically independent, the total power spectral density is given by the sum of the power spectral densities of g_i(t) and g_q(t). For a data sequence of equally probable ones and zeroes, the baseband power spectral density of g(t) for the case of QPSK is given by [48]:

S_b(f) = 4 E_b \left[ \frac{\sin(2\pi T_b f)}{2\pi T_b f} \right]^2        (4.17)

The baseband power spectral density for the case of MSK is given by [49]:

S_b(f) = \frac{32 E_b}{\pi^2} \left[ \frac{\cos(2\pi T_b f)}{16 T_b^2 f^2 - 1} \right]^2        (4.18)

Equations 4.17 and 4.18 are normalized with respect to 4E_b and plotted as a function of T_b f in figure 4.8. Comparing the power spectral densities of MSK and QPSK, it can be seen that the MSK carrier contains much less high frequency energy than the QPSK carrier. This is because the MSK carrier does not contain the phase discontinuities which are present in the QPSK carrier.

Figure 4.8 Calculated Baseband Power Spectral Density for QPSK and MSK

4.3 Hybrid Amplitude/Phase Modulation Technique

This section addresses the addition of phase modulation to the existing amplitude modulation utilized for VHF communication channels. As discussed at the beginning of this chapter, if these two modulations could be made not to interfere with one another, the simultaneous transmission of both voice and digital weather data could be accomplished.

4.3.1 Ideal Characteristics

An amplitude modulated carrier can be expressed as:

A_c [1 + k_v m(t)] \cos(2\pi f_c t)        (4.19)

where:  A_c is an amplitude coefficient (volts)
        k_v is the amplitude sensitivity of the modulator
        f_c is the carrier frequency (Hz)
        m(t) is the modulating signal

As shown by equation 4.1, a phase modulated carrier can be expressed as:

A_c \cos[2\pi f_c t + \theta(t)]        (4.20)

where:  \theta(t) is the phase modulation created by a modulating signal

Combining amplitude and phase modulations on the same carrier for simultaneous transmission of voice and weather data yields:

A_c [1 + k_v m(t)] \cos[2\pi f_c t + \theta(t)]        (4.21)

where:  m(t) is the voice modulation
        \theta(t) is the phase modulation caused by a digital stream of weather data

The hybrid modulated carrier can be generated and received as shown in figure 4.9. Under ideal conditions, the two modulations remain completely independent of one another. Amplitude limiting can be used to remove the amplitude modulation from the carrier before input to the phase detector. Also, the amplitude detector is not sensitive to phase information, and does not detect the phase modulation. Therefore, only the phase modulated weather data is detected by the phase detector, and only the amplitude modulated voice is detected by the amplitude detector.

4.3.2 Practical Limitations

By inspection of equation 4.21, the phase modulation is only retrievable if the amplitude modulation index \mu is limited to values less than one:

\mu = | k_v m(t) | < 1.0,  for all time t        (4.22)

Figure 4.9 Hybrid Modulation Transmitter and Receiver (A. Transmitter; B. Receiver)

Existing AM systems are subject to constraints on \mu which will determine if such a limitation is feasible. An AM envelope detector is utilized by most receivers for amplitude detection. For a single tone modulating signal, and a received signal much larger than the noise level, the detector degrades the output S/N by the factor \mu^2/(2 + \mu^2) [50]. This effect requires that \mu have a lower limit to prevent excessive degradation of output S/N. There exists an upper limit on \mu in order to prevent the carrier envelope from changing polarity and causing crossover distortion. The specifications governing ground-based aeronautical transmitters in the VHF frequency band require that \mu normally be greater than 0.7, but not larger than 1.0 [51]. Therefore, the requirement that \mu be less than 1.0 for the hybrid modulation is feasible.

In an actual radio communication system, the presence of bandwidth and amplitude limiting introduces an interference mechanism between the two modulations. Filtering performs bandwidth limiting at the transmitter power amplifier output for frequency spectrum control, and at the receiver for selectivity. Amplitude limiting is used at the input to the phase detector for removal of the amplitude modulation. The performance of both voice and data transmission is affected.

Voice transmission performance is degraded by carrier amplitude envelope variations not directly caused by the voice signal amplitude modulation. By inspection of figure 4.8, it can be seen that both MSK and QPSK contain significant energy at frequencies up to f = 6/T_b. Some spectral energy extends to infinitely high frequencies. Removal of part of this high frequency energy by bandwidth limiting causes amplitude envelope variations [52]. Therefore, a hybrid modulated carrier subject to bandwidth limiting will exhibit amplitude envelope variations not caused by the voice modulation. A degraded output S/N for the voice signal is the result. Since MSK contains significantly less energy at higher frequencies, this effect can be minimized by the use of MSK to accomplish the phase modulation.

Data transmission performance is affected by both bandwidth and amplitude limiting. Bandwidth limiting alone tends to distort the modulating symbol shapes of the in-phase and quadrature carrier components. The result is intersymbol interference. The combination of bandwidth and amplitude limiting which occurs in an actual system results in the potential for both intersymbol interference and interphasor crosstalk [53]. These interference mechanisms have the effect of degrading the bit error rate for data transmission.

Amplifier non-linearities in the radio communication system can also degrade predicted performance. Significant manifestations of this effect are harmonic distortion, intermodulation distortion, cross modulation, and frequency spectrum spreading. Unlike many data communication systems which use phase modulation, AM communication system performance is directly degraded by non-linearities. For this reason, emphasis is placed on the linear operation of amplifiers in AM equipment design. Therefore, non-linearities are not expected to be a significant problem for the hybrid modulated carrier operating with VHF AM communication equipment.

4.4 Hybrid System Performance

This section will address the predicted performance of the hybrid modulation technique. Special emphasis is placed on those parameters which impact voice and data transmission performance, and also on frequency spectrum requirements.

4.4.1 Computer Simulation

Benelli and Fantacci have suggested utilization of a hybrid modulation technique at VHF for the general purpose transmission of ATC data from the ground to the aircraft. In their work, a computer simulation is undertaken for the evaluation of the hybrid modulation performance when subjected to amplitude and bandwidth limiting. The computer simulation was performed using discrete numerical techniques on baseband signal samples. The signal was sampled at 19,200 samples per second, and was processed in blocks of 2,048 samples. Filtering was performed in the frequency domain, and all non-linear operations such as modulation and demodulation were performed in the time domain. A fast Fourier transform (FFT) algorithm was used for transformation between time and frequency domains [54],[55].

The system parameters were chosen to simulate the transmission of voice and data by hybrid modulation on a VHF voice communication channel. Amplitude limiting was used for removing the amplitude information from the carrier before data demodulation is accomplished. Bandpass filtering was performed both at the output of the transmitter and at the receiver intermediate frequency (IF). The filter characteristics were as follows:

Transmitter filter - 4th order Butterworth, 3 dB bandwidth of ± 7.5 kHz
Receiver IF filter - 8th order Butterworth, 3 dB bandwidth of ± 5.0 kHz

The signal carrier is phase modulated at various data rates using MSK. The same carrier is then amplitude modulated using either a four-second sample of actual voice data, or a simulated voice signal consisting of the sum of five tones as given by:

m(t) = \sum_{i=1}^{5} a_i \cos(2\pi f_i t)        (4.23)

where:  m(t) is the voice information as shown in equation 4.21
        a_i are amplitudes (volts), and f_i are frequencies (Hz) as shown in table 4.2

The simulated voice signal m(t) from equation 4.23 is shown in figure 4.10. The power spectral density for the simulated voice signal is shown in appendix B to be:

S_m(f) = \sum_{i=1}^{5} \frac{a_i^2}{4} \left[ \delta(f - f_i) + \delta(f + f_i) \right]        (4.24)

where:  S_m(f) is the power spectral density (watts/Hz)
        \delta denotes the Dirac delta function

The voice signal power spectral density is shown graphically in figure 4.11.

The computer simulation resulted in the determination of output S/N for the voice signal and the bit error rate for the data signal. The voice channel is assumed noiseless such that the noise in the detected signal is caused entirely by the effects of the data signal. For the data signal, AWGN is introduced in the channel, and the overall bit error rate is determined including the effects of the hybrid modulation. By comparison of the predicted and the determined bit error rate, the degradation to the data signal caused by amplitude and bandwidth limiting and also by the voice signal can be determined.

Table 4.2 Simulated Voice Modulation Parameters

        Amplitudes (volts)        Frequencies (Hz)

        a_1 = 2/9                 f_1 =  468.75
        a_2 = 1/3                 f_2 =  937.50
        a_3 = 2/9                 f_3 = 1406.25
        a_4 = 1/9                 f_4 = 1875.00
        a_5 = 1/9                 f_5 = 2343.75
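The simulated voice waveform of equation 4.23 is simple to regenerate for checking purposes. The short C sketch below evaluates m(t) using the amplitudes and frequencies of table 4.2; the 19,200 sample-per-second rate follows the simulation description above, and the sample count is chosen only to cover the interval plotted in figure 4.10.

/* Generate the five-tone simulated voice signal of equation 4.23
   using the table 4.2 parameters; sketch for illustration only. */
#include <math.h>
#include <stdio.h>

#define PI 3.141593

int main(void)
{
    static const double a[5] = { 2.0/9.0, 1.0/3.0, 2.0/9.0, 1.0/9.0, 1.0/9.0 };
    static const double f[5] = { 468.75, 937.50, 1406.25, 1875.00, 2343.75 };
    double fs = 19200.0;    /* samples per second, as in the simulation */
    int n, i;

    for (n = 0; n < 82; n++) {          /* roughly the 4.267 ms span of figure 4.10 */
        double t = (double)n / fs;
        double m = 0.0;
        for (i = 0; i < 5; i++)
            m += a[i] * cos(2.0 * PI * f[i] * t);
        printf("%8.5f  %8.5f\n", t * 1000.0, m);   /* time in ms, amplitude in volts */
    }
    return 0;
}

The amplitudes sum to one, so the waveform peaks at 1.000 volt at t = 0, matching the peak shown in figure 4.10.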

Figure 4.10 Simulated Voice Modulation Waveform

Figure 4.11 Power Spectral Density for Simulated Voice Modulation (mirror image for negative frequencies)

The output S/N results for the simulated voice signal with \mu = 0.8 are shown as a function of data rate in figure 4.12. The equivalent data for the actual voice signal with \mu = 0.9 are shown in figure 4.13. Figure 4.13 shows a rapid decrease in (S/N)_o for data rates approaching 2,400 bits per second. In both cases, however, the predicted (S/N)_o for the voice channel is greater than 30 dB for all data rates considered. These results show very little dependency upon k_v for values between 0.2 and 0.8.

The results for the data signal are shown in figure 4.14. The degradation is a function of data rate R, with higher data rates causing increased degradation. A degradation to E_b/N_0 of approximately 2 to 4 dB is indicated for a data rate of 2,400 bits per second.

4.4.2 Performance Evaluation

In order that the hybrid modulation technique accomplish the desired communication of both voice and data, acceptable performance of both transmissions must be maintained under worst-case conditions and at all points within the existing communication coverage area.

Voice signal performance criteria for existing VHF AM aeronautical communication systems are available in the MOPS for these systems, and can be applied for the hybrid modulation signal also.

Figure 4.12 Output S/N Results for Simulated Voice Signal (\mu = 0.8) (from reference [54])

Figure 4.13 Output S/N Results for Actual Voice Signal (\mu = 0.9) (from reference [55])

Figure 4.14 E_b/N_0 Results for Data Signal
a) R = 300 bits per second
b) R = 600 bits per second
c) R = 1,200 bits per second
d) R = 2,400 bits per second
e) Ideal MSK (equation 4.16)
(from reference [55])

According to the MOPS, when the receiver is receiving signals of sufficient power to be well above the receiver noise floor, a signal+noise-to-noise ratio of at least 25 dB shall be maintained [56]. Equivalently, the output (S + N)/N = 316.23. From this, the output S/N = 315.23 = 24.99 dB. Using this as the criterion for an acceptable (S/N)_o for voice reception, the hybrid system can provide more than acceptable performance with a (S/N)_o of at least 30 dB for data rates up to and including 2,400 bits per second. Data transmission performance is expressed as a bit error rate which can be found as a function of E_b/N_0 for QPSK and MSK from figure 4.7. The value of E_b/N_0 must be sufficient at the extreme limits of radio coverage to provide an acceptable bit error rate. A mathematical analysis is presented in appendix A which predicts E_b/N_0 as a function of the existing IF S/N at the minimum received power level allowed within a defined coverage area. The results predict a minimum E_b/N_0 ratio of 13.8 dB if \mu is limited to 0.7, the data rate is 2,400 bits per second, and the IF bandwidth is 10 kHz. If this is degraded by the 4 dB maximum determined by the computer simulation, a ratio of 9.8 dB is available. From figure 4.7, the resulting bit error rate would be approximately 1 x 10^-5. This is an acceptable worst-case bit error rate.

The computer simulation and further analysis show that acceptable performance can be expected for a hybrid modulated carrier which is: (1) modulated by data at 2,400 bits per second using MSK, (2) modulated by voice using AM with \mu limited to 0.7, (3) subject to AWGN, and (4) transmitted and received by VHF AM communication equipment.

4.4.3 Spectral Occupancy

It is important that the hybrid modulated carrier meet the spectral emission requirements for VHF aeronautical communication equipment. The amplitude and phase modulations are statistically independent wide sense stationary processes. Therefore, the power spectral density for the combined process can be found from the convolution of the power spectral density for the voice process, and an amplitude-scaled and frequency-translated version of the baseband power spectral density for MSK. This process is demonstrated mathematically in appendix B. The baseband power spectral density of the MSK phase modulation is given by equation 4.18 and shown in figure 4.8. If the simulated voice signal given by equation 4.23 and shown in figure 4.10 is chosen to model the voice modulation process, then the baseband power spectral density of the voice process is given by equation 4.24 and shown in figure 4.11. The resulting normalized power spectral density for the hybrid modulated carrier, as derived in appendix B, takes the form:

S_s(f) \propto S_b(f - f_c) + \frac{k_v^2}{4} \sum_{i=1}^{5} a_i^2 \left[ S_b(f - f_c - f_i) + S_b(f - f_c + f_i) \right]        (4.25)

where:  S_b(f) is the MSK baseband power spectral density of equation 4.18
        k_v = 0.7
        a_i are amplitudes (volts), and f_i are frequencies (Hz) as shown in table 4.2
        f_c is the carrier frequency (Hz)
        T_b = 1/R, and R is 2,400 bits per second

S_s(f) is shown in figure 4.15. Also shown are the Federal Communications Commission (FCC) spectrum limits for the VHF aeronautical communication channels [57]. Notice that the hybrid modulated carrier conforms to this requirement.

Figure 4.15 Normalized Power Spectral Density as Calculated for Hybrid Modulated Carrier (shown with FCC spectrum limits)

CHAPTER V

IMAGE DATA COMPRESSION TECHNIQUES

When cockpit weather data requirements were discussed in section 3.1, it was found that some type of graphics weather display was required. Further, as discussed in section 3.2, the uplink of weather data to the aircraft must be accomplished in such a way as to provide an adequate temporal update rate for the near-real time dissemination of weather data. Graphics images require large amounts of data to adequately describe them, and a system transmitting these data to the aircraft at a fixed data rate can achieve a higher temporal update rate if the amount of data required for each image can be reduced. Such a reduction is possible through the utilization of image data compression techniques. Due to an interdisciplinary interest in reducing data handling requirements, data compression of images is an area of much research, and can be accomplished using one of many different techniques [58],[59],[60].

This chapter provides an overview of some commonly applied image data compression techniques. Such processes allow an encoded image to be described using a reduced number of data bits, and then reconstructed from the encoded form to provide an accurate replica of the original image. The treatment presented here is not exhaustive or rigorous, but is sufficient to allow an elemental understanding of certain techniques with desirable properties for the compression of graphic weather data products. The chapter concludes with a discussion of optimum image data compression techniques for graphic weather data.

5.1 Error-free Techniques [61]

Certain data compression techniques can be categorized as error-free. Using error-free techniques, the encoded image can be reconstructed to yield an exact replica of the original image. Four such techniques are examined here.

5.1.1 Compact Codes

A compact code is one which represents a unique image "with an average word length less than or equal to the average length of all other uniquely decodable codes for the same set of input probabilities [62]." The Huffman code is an example of such a coding process. Using this method, the most frequently occurring image picture elements (commonly referred to as pixels or pels) are assigned a smaller length code than are the less frequently occurring elements. An image with a pixel assignment vector X_n and with an associated probability vector P_n would be encoded using Huffman encoding as shown in table 5.1 [63]. A data compression factor of 3.0/2.2, or 1.36, is achieved in this example.

Table 5.1 Huffman Encoding Example

        Inputs    Probability of    Natural Code    Huffman Code
        X_n       Occurrence P_n

        X_0       0.40              000             1
        X_1       0.30              001             00
        X_2       0.10              010             011
        X_3       0.10              011             0100
        X_4       0.06              100             01010
        X_5       0.04              101             01011

        Average Word Length         3.0             2.2

From reference [63]
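The average word lengths in table 5.1 are easy to verify. The short C sketch below computes the expected code length for the natural and Huffman codes from the listed probabilities and code lengths; it is a simple check, not part of the thesis software.

/* Verify the average word lengths and compression factor of table 5.1. */
#include <stdio.h>

int main(void)
{
    static const double p[6]        = { 0.40, 0.30, 0.10, 0.10, 0.06, 0.04 };
    static const int natural_len[6] = { 3, 3, 3, 3, 3, 3 };
    static const int huffman_len[6] = { 1, 2, 3, 4, 5, 5 };
    double nat = 0.0, huf = 0.0;
    int i;

    for (i = 0; i < 6; i++) {
        nat += p[i] * natural_len[i];
        huf += p[i] * huffman_len[i];
    }
    printf("natural code:  %.1f bits/pixel\n", nat);   /* 3.0 */
    printf("Huffman code:  %.1f bits/pixel\n", huf);   /* 2.2 */
    printf("compression factor: %.2f\n", nat / huf);   /* 1.36 */
    return 0;
}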

5.1.2 Differential Encoding

Differential encoding is useful for images in which adjacent pixels are frequently the same, or differ by a small amount. These images can be described by encoding only the changes between adjacent pixels. If the code length necessary to represent the changes is significantly smaller than the code length needed to uniquely specify each pixel using standard pixel mapping, this technique can achieve data compression.

5.1.3 Contour Encoding

Frequently, the objects within an image can be more efficiently described by their outline, or contour, than by description of individual pixels. In these cases, contour encoding can be used to describe the image using fewer data bits than standard pixel mapping would require. The contour encoding process maps each image cluster with the same pixel values into a unique two-dimensional descriptor set. Knowledge of the descriptor set and of the associated pixel value for the set allows the cluster to be reconstructed. Complex images can be viewed as a finite assembly of diverse clusters, allowing contour encoding to describe even the most detailed images. Contour encoding is typically applied for less complex images, however, since it is more efficient to describe an extremely complex image by standard pixel mapping.

5.1.4 Run Length Encoding

The run length encoding process maps a series of like pixels X_1, X_2, X_3, ..., X_n into a set (P_x, l_x) where P_x is the pixel value, and l_x is the length of the pixel run. For pixel run lengths greater than a determined threshold value, it becomes more efficient to describe the pixels by the run length encoded sets than by pixel mapping. The threshold value is the run length at which the (P_x, l_x) set descriptor is of the same length as the description of the individual pixels. The run length encoding technique works well for images with frequently repeated pixel runs of length greater than the threshold value. Although the process described is for the one-dimensional case of pixel runs along one image scan line, it can also be extended to two dimensions by encoding pixels from adjacent scan lines as well.
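As a generic illustration of the one-dimensional scheme just described (not the specific nibble format developed in chapter VI), the C sketch below emits (pixel value, run length) pairs for runs at or above a threshold and passes shorter runs through as literal pixels; the textual output representation is hypothetical.

/* Generic 1-D run length encoder: runs of THRESH or more pixels are
   emitted as a (value, length) pair, shorter runs as literal pixels.
   Output encoding is illustrative only. */
#include <stdio.h>

#define THRESH 3

static void rle_line(const unsigned char *pix, int n)
{
    int i = 0;

    while (i < n) {
        int run = 1;
        while (i + run < n && pix[i + run] == pix[i])
            run++;
        if (run >= THRESH) {
            printf("R(%u,%d) ", pix[i], run);   /* run-encoded set (Px, lx) */
        } else {
            int k;
            for (k = 0; k < run; k++)
                printf("P(%u) ", pix[i]);       /* literal pixels */
        }
        i += run;
    }
    printf("\n");
}

int main(void)
{
    static const unsigned char line[] = { 0,0,0,0,0,3,3,1,0,0,0,0,6,6,6,0,0 };
    rle_line(line, (int)(sizeof(line) / sizeof(line[0])));
    return 0;
}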

5.2 Predictive Encoding [64]

Using the statistics of the image data, it is possible to make a prediction of the current pixel value from the previous pixel values. Letting X_{i-1} be the current pixel value, the next pixel value X_i can be estimated to be X̂_i. Subtracting the predicted value from the actual value yields a difference d_i = X_i - X̂_i. If the predicted values are accurate, the differences d_i will be smaller in magnitude than the actual pixel values X_i. Therefore, the image can be represented by a collection of these encoded differences, and data compression is achieved. This technique is referred to as differential pulse code modulation (DPCM).

The DPCM predictor can be classified as linear or non-linear depending upon whether the predicted pixel value is a linear or a non-linear function of the previous pixel values. Interframe predictors use previous pixel values from earlier image frames, whereas intraframe predictors use pixel values from within one image frame. Intraframe predictors can be one-dimensional, utilizing only previous pixels in one scan line, or can be two-dimensional, and make use of previous pixels in the entire image frame. Further predictor distinctions can be made based on whether the predictor changes characteristics based on input image statistics, or remains fixed. The former are termed adaptive predictors and the latter are termed fixed predictors. The design of the predictor must be optimized for the statistics of the particular images of interest.

Ideally, DPCM would be an error-free data compression technique. Since the encoder and the decoder use the same type of predictor, they both make identical predictions based on the same image statistics. The differences d_i between these predictions and the actual image are available at the decoder and are thus summed back into the decoder predictions to recreate the original image. However, due to the quantization error of these differences, the decoder is unable to recreate the original image exactly. For this reason, DPCM is not an error-free technique. Nevertheless, the technique is fairly widely used. One common use is for the encoding of medical diagnostic images [65].
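A minimal sketch of a one-dimensional, fixed, previous-pixel DPCM encoder/decoder pair is shown below. The uniform quantizer step and the previous-pixel predictor are assumptions chosen for illustration, not the predictor designs discussed in the references; the sketch does show the quantization error that keeps DPCM from being error-free.

/* 1-D DPCM with a fixed previous-pixel predictor and a uniform quantizer.
   The step size Q and the sample data are illustrative assumptions. */
#include <stdio.h>

#define Q 4   /* quantizer step for the differences */

int main(void)
{
    static const int x[10] = { 12, 13, 15, 20, 28, 27, 27, 26, 30, 31 };
    int recon[10];           /* decoder output */
    int pred = 0;            /* encoder and decoder both start from 0 */
    int i;

    for (i = 0; i < 10; i++) {
        int d  = x[i] - pred;                                   /* difference d_i   */
        int dq = (d >= 0 ? (d + Q / 2) : (d - Q / 2)) / Q * Q;  /* quantized d_i    */
        recon[i] = pred + dq;                                   /* decoder rebuild  */
        pred = recon[i];          /* next prediction = last reconstructed pixel     */
        printf("x=%2d  d=%3d  dq=%3d  recon=%2d\n", x[i], d, dq, recon[i]);
    }
    return 0;
}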

5.3 Transform Encoding [66],[67]

In one-dimensional transform encoding, an N x N image is separated into smaller 1 x n subarrays where n < N. These subarrays are then transformed into a set of coefficients which are more statistically independent than the pixels within that subarray. This process is written as:

Y_{n,1} = A_{n,n} X_{n,1}        (5.1)

where:  X_{n,1} is the subarray of pixel samples
        A_{n,n} is a linear transformation matrix
        Y_{n,1} is the resulting array of coefficients

The transform process can be performed in two dimensions by extending the image subarray size to an n x n square. Data compression is accomplished by eliminating smaller coefficients, and by more coarsely quantizing the coefficients which do not significantly affect image quality.

An ideal transform would result in coefficients which are completely statistically independent. Unfortunately, this would require knowledge of higher order statistics of the image. Since these statistics are not available, a transform which results in uncorrelated coefficients is the best which can be realized. Additionally, the ideal transform would concentrate the image information into as few coefficients as possible. A transform which satisfies both of these requirements exists, and is known as the Karhunen-Loeve transform (KLT). This transform is derived from the covariance matrices of the image subarrays X_{n,1}.

Although such an ideal transformation exists, there are limitations which prevent its use. The first limitation is the varying image statistics among the subarrays X_{n,1}. This effect is caused by differences in the image composition as a function of position. The problem can only be solved by using different, or adaptive, transforms for each image subarray. Another limitation is that the transform cannot be uniquely defined for every input case. Finally, the computations necessary to find the KLT are very complex.

These limitations usually are overcome by using a suboptimum transform technique. Popular examples of these transforms are the discrete Fourier transform, the discrete cosine transform, and the Hadamard transform.

Although transform encoding is not an error-free technique, very good results have been obtained. In fact, transform encoding using suboptimum transforms has been utilized successfully for data compression in the storage and transmission of diagnostic medical images [68],[69].
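To make equation 5.1 concrete, the sketch below builds the n x n matrix of the discrete cosine transform (one of the suboptimum transforms named above) and applies it to a 1 x n subarray of pixel samples; the subarray length n = 8 and the sample values are arbitrary choices for the example.

/* Equation 5.1, Y = A X, with A chosen as the n x n DCT-II matrix
   and X a 1 x n subarray of pixel samples (n = 8 here for illustration). */
#include <math.h>
#include <stdio.h>

#define N  8
#define PI 3.141593

int main(void)
{
    static const double x[N] = { 52, 55, 61, 66, 70, 61, 64, 73 };
    double a[N][N], y[N];
    int k, j;

    /* Orthonormal DCT-II transformation matrix A(n,n). */
    for (k = 0; k < N; k++) {
        double scale = (k == 0) ? sqrt(1.0 / N) : sqrt(2.0 / N);
        for (j = 0; j < N; j++)
            a[k][j] = scale * cos(PI * (2.0 * j + 1.0) * k / (2.0 * N));
    }

    /* Y(n,1) = A(n,n) X(n,1): most of the subarray energy collects in the
       low-order coefficients, which is what allows the remaining
       coefficients to be coarsely quantized or discarded. */
    for (k = 0; k < N; k++) {
        y[k] = 0.0;
        for (j = 0; j < N; j++)
            y[k] += a[k][j] * x[j];
        printf("y[%d] = %8.2f\n", k, y[k]);
    }
    return 0;
}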

5.4 Fractal Image Compression

Fractal image compression is not an image data compres­ sion technique in the classical sense. This is because the image pixel data is not compressed, but is completely re­ placed by data which is used for the reconstruction of a representation of the original image. still, no contempo­ rary discussion of image compression techniques would be complete without including the concept of fractal image compression. Before discussing this data compression tech­ nique, a brief introduction to the concept of fractals is necessary.

The term fractal was coined in 1975 by French mathema­ tician Benoit Mandelbrot. He defines the term as "a set for which the Hausdorff Besicovitch dimension strictly exceeds the topological dimension [70]." The topological dimension, denoted hereafter as Ott is the integer dimension of a set in n-dimensional Euclidean space. Some examples of sets and their corresponding topological dimension are shown as:

D_t = 0 for a point
D_t = 1 for a line
D_t = 2 for a plane
D_t = 3 for space

The Hausdorff Besicovitch dimension, denoted as D, is also a descriptor of the dimension of a set in n-dimensional Euclidean space, but is not necessarily an integer. D is now more commonly called the fractal dimension. An example of a plane curve in Euclidean space is shown as a square in figure 5.1 [71]. Since this curve is a plane curve, it has a topological dimension given by D_t = 2. Consider the number of fundamental units of this curve contained inside a circle of radius R centered on the curve. The number of fundamental units enclosed is proportional to the circle radius R raised to the power of D_t. This can be illustrated easily by noting that the area of a circle is given by:

A = \pi R^2

where:  A is the circle area (in square units)
        R is the circle radius
        \pi is the constant 3.141593

Now consider the fractal geometry example shown in figure 5.2 [72]. For this case, the curve of interest is a fractal curve with fractal dimension D = 1.46. Notice that this curve is self-similar at any scale. Such is an identifying trait of fractals.

Figure 5.1 Euclidean Geometry Example (plane curve, D_t = 2; units enclosed ∝ radius^D_t: for radius = 1, 1^2 = 1; for radius = 3, 3^2 = 9)

Figure 5.2 Fractal Geometry Example (fractal curve, D = 1.46; units enclosed ∝ radius^D: for radius = 1, 1^1.46 = 1; for radius = 3, 3^1.46 = 5)

In this case, the number of fundamental units enclosed in the same circle is proportional to the circle radius R raised to the power of D. The fractal dimension of 1.46 indicates that the fractal curve grows faster than a line, but not as fast as a solid plane curve.

There exist two primary types of fractals. The first class is known as geometric fractals, and was described above. This fractal class consists of fractals which are: (1) strictly self-similar, (2) scale invariant, and (3) generated according to exact geometric rules. Random fractals are the second fractal class. This group of fractals is: (1) statistically self-similar, (2) largely scale invariant, and (3) generated by using the fractal as the domain of attraction for a random process [73]. The random fractals closely resemble natural processes.

Fractal image compression utilizes random fractals to represent an image [74]. The process of image compression using fractals proceeds by first separating the image into similar element groups. Next, a transform descriptor for each element group is found which will result in a fractal domain of attraction that best represents the element group visually. These transform descriptions, along with a probability of the occurrence of each transform, make up the image description. The image can then be reconstructed by random iteration using the transform descriptions to yield the desired fractal domains of attraction. This technique has been applied with very impressive results for images which are composed of elements easily represented by random fractals. In this case, image compression ratios as high as 10,000:1 have been achieved.

The current disadvantages of the fractal image compression technique are the large amounts of processing time required both to encode and to decode the image, and the difficulty of finding a fractal which realistically represents the original image element. The times required to encode and decode a complex color image using current-technology processing systems are 100 hours and 30 minutes respectively. Further, these are images for which adequate fractal descriptions have been found. It should be noted, however, that as research in this area continues and processing times become less, fractal image compression could provide a much improved method for image compression.

5.5 Optimum Techniques for Graphic Weather Data

Any of the techniques for image data compression could be applied for the compression of graphic weather data products. The selection of an optimal technique for all types of graphic weather data products is not possible. All of the techniques described in the last section offer certain advantages and disadvantages. The technique utilized must provide a compromise of these to best suit the characteristics of a particular image. Many of these techniques are being successfully combined to form hybrid techniques in order to accomplish this [75]. A tabular comparison of some of the image data compression techniques which have been considered is shown in table 5.2 [76],[77]. Due to the single-frame nature of graphic weather images, only intraframe techniques are considered.

In section 3.1 it was concluded that one very important graphic weather data product for uplink to aircraft was ground radar information. A typical NWS radar display is shown in figure 5.3 with a geographical map overlay. The precipitation intensity is displayed as one of six color levels on a black background. Each color level corresponds to a particular precipitation intensity. Figure 5.4 shows a pixel level histogram for figure 5.3 without the map overlay.

The image data compression techniques which have been successfully implemented in the medical community are readily adaptable for the compression of graphic weather data products due to similarities in requirements. Both applications require the preservation of critical details within complex images. In the last section, it was found that both DPCM and transform encoding were successfully being used in the medical community.

Fractal image compression is very applicable for graphic weather image compression. Lovejoy has demonstrated an area-perimeter relation for both GOES satellite images and weather radar reflectivity patterns [78]. This relation demonstrates that the boundaries of these cloud and rain areas are indeed described by fractals.

Table 5.2 Comparison of Image Data Compression Techniques

        Method                      Typical Compression Ratios    Implementation

        Error-Free Methods          Variable                      Simple - Moderate

        Predictive Encoding
          Intraframe                2 - 4                         Moderate
          Intraframe Adaptive       4 - 8                         Moderate

        Transform Encoding
          Intraframe                5 - 8                         Complex
          Intraframe Adaptive       8 - 16                        Complex

        Hybrid Encoding
          Intraframe                4 - 8                         Moderate - Complex
          Intraframe Adaptive       5 - 16                        Moderate - Complex

        Fractal Compression         up to 10,000                  Very Complex

Data from references [76] and [77]

Figure 5.3 NWS Color Weather Radar Display (NWS Radar site at Cape Hatteras (HAT), North Carolina with map overlay)

Figure 5.4 Pixel Level Histogram for NWS Color Weather Radar Display (from figure 5.3 without map overlay)
0 = black   1 = light green   2 = dark green   3 = light yellow   4 = dark yellow   5 = light red   6 = dark red

Both transform and DPCM encoding and also fractal image compression appear to be applicable for graphic weather data products. Before further considering these techniques for radar images, notice from figures 5.3 and 5.4 that the radar image is composed of only 7 pixel levels, and that these levels tend to be grouped in clusters. Further, there exist large areas of only one level such as for the background. The relative simplicity of these images suggests that one of the error-free encoding techniques might be efficient. Although any of the error-free techniques are applicable, a combination of compact codes and run-length encoding is: (1) easily implemented, (2) accomplished with minimal processing time, and (3) suited for providing a baseline for the future evaluation of more sophisticated image compression techniques.

CHAPTER VI

AN EXPERIMENTAL WEATHER DATA UPLINK SYSTEM

Chapter IV provided an analysis for the performance evaluation of a hybrid modulation technique. This technique uses both MSK and AM to accomplish the simultaneous transmission of weather data and voice information from the ground to the aircraft. In order to provide for the evaluation of graphic weather image processing techniques and cockpit weather displays, an experimental weather data uplink system has been developed. The experimental system provides a dedicated uplink capability using an existing 25 kHz VHF AM voice communication channel. This uplink does not provide for the simultaneous transmission of voice and data, but does allow the uplink of weather data at a data rate equivalent to that which could be provided by the hybrid technique.

The experimental weather data uplink system is depicted in figure 6.1. This system digitizes and compresses weather radar data as gathered on the ground by NWS radar. The resulting compressed image data is transmitted to the aircraft at 2,400 bits per second over the VHF AM communication channel. On board the aircraft, the image data is expanded and displayed in the standard NWS color format with geographical map overlays. This chapter provides a detailed description of this system.

Figure 6.1 Experimental Weather Data Uplink System (NWS radar site linked by telephone lines to the Ohio University Avionics Engineering Center ground-based image processing system, with a VHF communication channel to the airborne image processing system aboard the Ohio University Stocker Flying Laboratory)

6.1 Gathering and Processing of Ground-based Weather Data

Precipitation reflectivity patterns from NWS weather radars are available to private users through equipment such as that made available by Kavouras, Incorporated [79]. The equipment consists of encoders at each radar site, and an integrated decoder and display processor unit at each user site. Encoded reflectivity patterns with geographical map overlays are transmitted from the radar site to the user site by telephone. The decoder and display processor unit, known as the RADAC Color Weather Radar Receiver System, provides a color video output using standard National Television System Committee (NTSC) composite video.

The RADAC display output provides user-selectable colors for the background, for each precipitation intensity level, and for the optionally displayed map overlays. The intensity levels correspond to the strength of the returned radar echo, and are thus related to the detected rainfall rate. These intensity levels, the corresponding rainfall rates, and the associated NWS color identification are as shown in table 6.1 [80]. The Kavouras RADAC equipment at the Ohio University Avionics Engineering Center is configured in this way, and this equipment is shown in figure 6.2.

The video output of this equipment with the geographical maps enabled was shown in figure 5.3. These maps are not transmitted to the aircraft with the radar images since they convey no new information. Instead, the maps are stored within the airborne processing system and overlaid for airborne display.

Table 6.1 Kavouras RADAC Color Weather Radar Precipitation Intensity Levels

        Level    Precipitation Intensity    NWS Color       Convective Rainfall Rate (inches/hour)

        1        Light                      Light Green     0.0 - 0.2
        2        Moderate                   Dark Green      0.2 - 1.1
        3        Heavy                      Light Yellow    1.1 - 2.2
        4        Very Heavy                 Dark Yellow     2.2 - 4.5
        5        Intense                    Light Red       4.5 - 7.1
        6        Extreme                    Dark Red        7.1 and above

From reference [80]

Figure 6.2 Kavouras RADAC Color Weather Radar Receiving System

The RADAC equipment was modified to provide a gray-level video output and a pixel clock output so that the radar image could be digitized for image processing using a low-cost video digitizer. In this configuration, precipitation levels one through six are displayed as incrementally brighter gray levels on a black background. The geographical map overlays are white. A description of these changes, and the supporting technical description of the RADAC equipment, are provided in appendix C.

The RADAC gray-level video output is digitized for image processing using a Micromint ImageWise Video Digitizer [81],[82]. The unit is pictured in figure 6.3. This low-cost system accepts standard NTSC composite video, and digitizes one frame (244 lines x 256 pixels) of this video within 1/30 of a second. The pixel values are stored in the lower six bits of an eight bit word, allowing for 64 gray levels. The image frame is stored in static random access memory (RAM) along with line and field synchronizing characters. One complete image frame requires 62,710 bytes of storage. Upon command, the image frame is downloaded by RS-232 serial communication at a selectable data rate from 300 to 57,600 bits per second.

The digitizer was modified to allow operation with high-resolution graphics.

Figure 6.3 Micromint ImageWise Video Digitizer

An external pixel clock from the Kavouras RADAC system provides for synchronous pixel detection, and the analog front-end of the digitizer was rebuilt to allow an improved transient response for graphics images. These modifications and the supporting technical details of the digitizer are described in appendix D.

Image processing on the ground is performed by an IBM PC/XT-compatible personal computer. All image acquisition and processing software was written in the C programming language, and compiled using Microsoft C version 5.0 [83],[84],[85]. Software listings are provided in appendix E. The software consists of five main programs. These are: (1) cal, (2) mapgen, (3) upcwx, (4) rxcwx, and (5) rxtest. Functions used by these programs are organized by task in include files. The programs cal, mapgen, and upcwx are run by the ground processor, and are described here.

The program cal is used to calibrate the digitizer levels used for quantization relative to the Kavouras RADAC video output. The Kavouras test bar pattern is used for this program. This pattern consists of a test bar corresponding to each output video gray level. Digitized samples are taken from each test bar, and are used to determine threshold levels which are equidistant from the computed mean for each level. These levels are written to the output file cal.dat, and are used in the main programs.
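A minimal sketch of the threshold calculation just described is given below: it averages the digitized samples taken from each test bar and places each quantization threshold midway between adjacent level means. The sample counts and values are hypothetical, and this is not the listing of the actual cal program, which appears in appendix E.

/* Place quantization thresholds equidistant from adjacent level means.
   Sample data are hypothetical; the real cal program is in appendix E. */
#include <stdio.h>

#define LEVELS  7    /* black background plus six precipitation levels */
#define SAMPLES 4

int main(void)
{
    static const int bar[LEVELS][SAMPLES] = {
        {  0,  1,  0,  1 }, {  9, 10, 10, 11 }, { 19, 20, 21, 20 },
        { 30, 29, 31, 30 }, { 40, 41, 39, 40 }, { 50, 51, 49, 50 },
        { 60, 61, 59, 60 }
    };
    double mean[LEVELS];
    int i, j;

    for (i = 0; i < LEVELS; i++) {
        int sum = 0;
        for (j = 0; j < SAMPLES; j++)
            sum += bar[i][j];
        mean[i] = (double)sum / SAMPLES;
    }
    for (i = 0; i < LEVELS - 1; i++) {
        double threshold = (mean[i] + mean[i + 1]) / 2.0;   /* equidistant */
        printf("threshold %d/%d = %.1f\n", i, i + 1, threshold);
    }
    return 0;
}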

Map generation for storage on board the aircraft is accomplished by the program mapgen. The program digitizes the geographical map overlays from the RADAC system, thresholds them to eliminate any noise, and stores them in files which are then manually transferred to the airborne processing system. The user is prompted to enter a file name for these files. They must be named according to the format: ABCxxx.map, where ABC is the three-letter radar site identifier, and xxx is the corresponding radar range (nm).

The program upcwx performs a serial data download from the digitizer, checks the data for correct placement of all synchronization control codes, quantizes the data to six levels, compresses the image, appends a title block for identification, and transmits the image through a second serial port to a modem. The title block contains the radar site identifier and range which are entered on the command line when the program upcwx is called. In addition, the title block contains date and time information in Greenwich mean time (GMT) which is derived from the computer clock.

The serial data from the digitizer is downloaded through the serial port at a data rate of 28,800 bits per second. This high data rate is supported by directly accessing the 8250 asynchronous communications element within the computer by system port input/output. Data transmission to the modem is accomplished serially at 2,400 bits per second. The modem output is a 600 ohm audio carrier which is DPSK modulated by the data. The audio carrier from the modem is transformer-coupled to the microphone input of a VHF communication transmitter. The transmitter carrier is thus modulated by the remote audio carrier from the modem.

The transmitter radiates from a dipole antenna mounted on the roof of Stocker Center on the Ohio University campus.
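As background on the direct 8250 access mentioned above, the fragment below shows one common way of programming the 8250 divisor latch for 28,800 bits per second by port input/output under Microsoft C. The COM1 base address of 0x3F8 and the 8-data-bit, no-parity, 1-stop-bit line format are assumptions made for this sketch; the actual routines used are those listed in appendix E.

/* Sketch: program an 8250 UART for 28,800 bits per second, 8N1.
   Base address 0x3F8 (COM1) is an assumption for this example. */
#include <conio.h>   /* outp() in Microsoft C */

#define COM_BASE 0x3F8
#define LCR      (COM_BASE + 3)   /* line control register    */
#define DLL      (COM_BASE + 0)   /* divisor latch, low byte  */
#define DLM      (COM_BASE + 1)   /* divisor latch, high byte */

void set_28800(void)
{
    unsigned divisor = 115200U / 28800U;   /* = 4 */

    outp(LCR, 0x80);                  /* set DLAB to reach the divisor latch  */
    outp(DLL, divisor & 0xFF);
    outp(DLM, (divisor >> 8) & 0xFF);
    outp(LCR, 0x03);                  /* clear DLAB; 8 data bits, no parity,
                                         1 stop bit                           */
}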

6.2 Image Data Compression Technique

Precipitation reflectivity patterns as digitized from the Kavouras RADAC system are compressed using a specifically designed encoding technique utilizing compact codes and run length encoding. Each pixel of the digitized image is first quantized into one of six intensity levels using the calibration data from the file cal.dat. In this way, each pixel can be represented in three bits instead of the six bits which are output by the digitizer. The image compression software then further compresses the image before transmission.

The image data compression technique used is a one-dimensional run length encoding technique which also uses compact codes to integrate line synchronizing control characters with run length control characters where possible.

The technique also utilizes a reduced-length code to represent an entire line of black background pixels. Individual pixels are not run length encoded if the run length is less than a three-pixel threshold length. In this case, it is more efficient to represent the pixels directly. These individual pixels are distinguished from control characters by a zero in the most significant bit position. Control codes have a one in this position. All control and pixel characters are thus 4 bits long. Elementary error detection is provided for each image line by a checksum code for each line. This checksum is formed by summing all pixels along one line before compression, and then truncating this sum to four bits. One pixel bit error within any one line can thus be detected.
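The four-bit line checksum just described reduces to a pixel sum truncated to its low nibble. A minimal sketch follows; the 256-pixel line length mirrors the digitizer frame width, while the function name and test values are hypothetical.

/* Four-bit line checksum: sum the pixels of one line before
   compression and keep only the low four bits. */
#include <stdio.h>

static unsigned char line_checksum(const unsigned char *pix, int n)
{
    unsigned sum = 0;
    int i;

    for (i = 0; i < n; i++)
        sum += pix[i];
    return (unsigned char)(sum & 0x0F);   /* truncate to four bits */
}

int main(void)
{
    unsigned char line[256] = { 0 };      /* one 256-pixel scan line */

    line[10] = 3;
    line[11] = 3;
    line[12] = 6;
    printf("checksum = %u\n", line_checksum(line, 256));   /* (3+3+6) & 0xF = 12 */
    return 0;
}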

The resulting data format for an image is as shown in table 6.2. The four-bit control codes and pixels are shown along with their usage within the image. Also shown is the 10-byte title block format.

Compression ratios achieved by the image data compression technique vary depending upon the characteristics of a particular image. The largest compression ratio is achieved when a solid black background is compressed. Although such an image conveys no information, it provides an upper limit for compression efficiency. The compression ratio in this case is 68:1. More complex radar reflectivity patterns which occur when thunderstorms are being detected are typically compressed with a compression ratio between 6:1 and 9:1. As a result, these images can be encoded using 2,000 to 3,000 data bytes.

Table 6.2 Data Format for Digital Weather Images

Title Block

10 Bytes: /s1/s2/s3/s4/range/minute/hour/day/month/year   (radar site identifier in s1 - s3)

Data Nibble Usage

        Hex    Binary    Usage

        0-7    0XXX      Pixel Value
        8      1000      Field Sync
        9      1001      Line Sync
        A      1010      End Field
        B      1011      Unused
        C      1100      Compression (Run Encode)
        D      1101      Compression/Line Sync (Start Encode)
        E      1110      Unused
        F      1111      Full Line Compression/Line Sync (Line Encode)

Data Nibble Placement

        a,b,c,d - 4 bit pixel values;  y - 8 bit run count

        9 - Line Sync
        C - Compression (Run Encode)
            Even Pixel Bound: 1001/CHKSUM/a3a2a1a0/b3b2b1b0/1100/c3c2c1c0/y7y6y5y4/y3y2y1y0
            Odd Pixel Bound:  1001/CHKSUM/a3a2a1a0/1100/b3b2b1b0/y7y6y5y4/y3y2y1y0
        D - Compression/Line Sync (Start Encode)
        F - Full Line Compression/Line Sync (Line Encode)

6.3 Modulation Technique

As discussed early in this chapter, an existing 25 kHz VHF AM voice communication channel was utilized for the experimental uplink system. For this uplink system, data transmission is accomplished by replacing the existing voice transmission capability with a data transmission capability. DPSK (in the form of differential QPSK) modulation of a carrier in the audio band accomplishes this data transmission. Differential QPSK offers two advantages over ordinary QPSK: (1) detection can be accomplished differentially without requiring a coherent receiver, and (2) receiver synchronization for establishment of a reference phase is not required.

For the experimental uplink, an 1,800 Hz audio carrier is modulated by the digital data using differential QPSK. This carrier is in turn used to amplitude modulate the VHF carrier in place of voice information. At the receiver, the data carrier is AM detected, and then the data is extracted by coherent correlative detection. Although the data could also be differentially detected, coherent correlative detection offers improved noise performance.

Bit error performance for differential QPSK is the same as for QPSK when the receivers use coherent detection. The predicted bit error probability for a QPSK modulated carrier transmitted through an AWGN bandpass channel can be found from the received bit energy-to-noise ratio E_b/N_0 using equation 4.16. E_b/N_0 in this case can be found from appendix A by noting that: (1) since only the phase modulation is present, no limiting is required, and (2) the phase modulated carrier must be AM detected by the envelope detector before the data is recovered. Thus, we proceed by expressing E_b/N_0 as a function of the minimum output S/N. With reference to appendix A, substitution of equations A.3, A.7 and A.8 into A.9 yields:

\left[ \frac{S}{N} \right]_o = \frac{(A_c \mu)^2}{2 k T_s B}        (6.1)

where:  (A_c \mu)^2 / 2 is the signal output (watts)
        k T_s B is the noise output (watts)

The signal output in this case is the 1,800 Hz phase modulated audio carrier. The bit energy is thus given by:

E_b = \frac{(A_c \mu)^2 T_b}{2}        (6.2)

Proceeding as in appendix A, it is found that for output S/N expressed in dB:

\frac{E_b}{N_0} = 10 \log(B/R) + \left[ \frac{S}{N} \right]_o        (6.3)

For a bandwidth B of 10 kHz, a data rate R of 2,400 bits per second, and a minimum output S/N of 4.7 dB, the minimum expected E_b/N_0 for the experimental uplink is 10.9 dB. Equation 4.16 then gives a predicted worst-case bit error rate of approximately 0.7 x 10^-6.
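The numbers quoted above can be reproduced with a few lines of C. The sketch below evaluates equation 6.3 and then equation 4.16 (as reconstructed in chapter IV, with that chapter's erfc taken as the Gaussian Q-function) for the stated bandwidth, data rate, and minimum output S/N; it is a check of the arithmetic, not part of the thesis software.

/* Check the worst-case link numbers: equation 6.3, then equation 4.16. */
#include <math.h>
#include <stdio.h>

static double gauss_q(double x)            /* erfc as used in chapter IV */
{
    return 0.5 * erfc(x / sqrt(2.0));
}

int main(void)
{
    double B = 10000.0;      /* IF bandwidth (Hz)       */
    double R = 2400.0;       /* data rate (bits/second) */
    double sn_db = 4.7;      /* minimum output S/N (dB) */

    double ebno_db = 10.0 * log10(B / R) + sn_db;            /* equation 6.3  */
    double ebno    = pow(10.0, ebno_db / 10.0);
    double q       = gauss_q(sqrt(2.0 * ebno));
    double pe      = 2.0 * q - q * q;                        /* equation 4.16 */

    printf("Eb/No = %.1f dB\n", ebno_db);    /* about 10.9 dB     */
    printf("Pe    = %.1e\n",   pe);          /* about 0.7 x 10^-6 */
    return 0;
}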

The modulator and demodulator design are implemented using the Motorola MC6172 and MC6173 chip set. This chip set digitally synthesizes a differential QPSK waveform at a 2,400 bit per second data rate. A description of the modulator and demodulator design is presented in reference [86].

6.4 Airborne Processing and Display

The airborne equipment is installed in the Stocker Flying Laboratory. This aircraft is a 1980 Piper Saratoga PA-32-301, and is identified by tail number N8238C. The aircraft, pictured in figure 6.4, is owned by Ohio University Avionics Engineering Center, and is equipped as a flying laboratory.

The DPSK weather data is received on board the aircraft by a King KX 175B VHF Navigation/Communication Transceiver [87]. This receiver is pictured in figure 6.5. The 500 ohm headphone audio output of this radio is transformer-coupled into the demodulator. The demodulator extracts the DPSK data from the audio carrier, and provides a serial output at 2,400 bits per second to the airborne image processing equipment.

Figure 6.4 Avionics Engineering Center Stocker Flying Laboratory

Figure 6.5 King KX 175B VHF Navigation/Communication Transceiver

An IBM PC/AT-compatible personal computer is used for airborne image processing. The program rxcwx loads the serial weather data from the serial port at 2,400 bits per second, and checks for the necessary control characters. If the control characters are present, the program expands the received image. Once expanded, the checksum characters are compared to the pixel sum within each video line. If an error is detected, the program will exit. The program reads the title block to determine the radar site and range, and uses this data to open and read the corresponding geographical map overlay file. Next, the gray levels are converted to colors corresponding to the NWS precipitation intensity colors. Finally, the map is overlaid, and the image is displayed in 16-color enhanced graphics adapter (EGA) format with a resolution of 640 pixels x 350 lines. The displayed image title contains the three-letter radar site identifier, the radar range, the date, and the time (GMT) at which the transmitted radar reflectivity pattern was digitized. A typical airborne display of a color weather radar precipitation reflectivity pattern is shown in figure 6.6.

Figure 6.6 Airborne Color Weather Radar Display (NWS Radar site at Columbus (CMH), Ohio)

The program rxtest is used as an overall system test. This program exercises all routines used by upcwx for ground processing, and also all routines used by rxcwx for airborne processing. Rxtest does not, however, serially transmit or receive the compressed image. Instead, a copy of the compressed image in computer memory is expanded and displayed. Using this program, all routines except the serial transmit routine used in upcwx and the serial receive routine used in rxcwx can be tested without the necessity of two computers being linked serially. The program is also useful for testing of the digitizer system.

6.5 Flight Evaluation Results

On May 18 of 1989, the experimental weather data uplink

system was evaluated by flight test. The test flight origi­

nated at the Ohio University Airport in Albany, Ohio at

approximately 12:45 PM local time. Upon take-off, the

airborne receiving and processing equipment was turned on,

and the first image was received when the aircraft was

approximately two miles south and east of the airport. All

SUbsequent images were received within a five nautical mile

radius of the transmitter on the Ohio University Campus in

Athens, Ohio. The measured transmitter output was approxi­ mately 6 watts.

Color weather radar reflectivity patterns were successfully transmitted from the ground and displayed in the aircraft cockpit without errors. This does not provide

quantitative performance data, but does support the low predicted bit error rate for the experimental uplink system.

Ground gathering of images from the Kavouras weather

radar receiver was accomplished manually. As a result, an updated image could only be transmitted at two- to five-minute intervals. This limitation could be overcome if the

operation of the Kavouras weather radar receiving equipment were automated and controlled by the ground-based computer.

The first radar images to be transmitted and received

originated from the NWS radar site at Port Columbus International Airport, Ohio. No thunderstorms or precipitation

areas were located in the state of Ohio during the flight,

and so these images showed only ground clutter patterns.

The NWS radar summary for the United States was transmitted next. This image showed a severe thunderstorm watch

area in eastern Kansas. Also, severe thunderstorms were

indicated near Wichita, Kansas. After several NWS radar

summaries were transmitted, a current reflectivity pattern

from the NWS radar at Wichita Mid-Continent Airport, Kansas was transmitted and received. A display of this image

revealed scattered thunderstorm cells of moderate and occasionally heavy to severe intensity. A data compression

ratio of 6.5:1 reduced this image to under 3000 data bytes.

This image was transmitted in approximately 12 seconds.
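As a simple consistency check (assuming, since the framing is not restated here, that each byte on the asynchronous serial link carries one start and one stop bit), the implied transmission time is (3,000 bytes x 10 bits/byte)/(2,400 bits per second), or approximately 12.5 seconds as an upper bound, which agrees with the observed transmission time.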

The aircraft returned to the Ohio University Airport at

approximately 1:45 PM. Although no severe weather was encountered in Ohio during this flight, the weather data from the Wichita radar could have proven invaluable if the flight had taken place in Kansas. One of the thunderstorm cells shown on the Wichita radar reportedly produced a funnel within minutes of the time of the received radar image.

CHAPTER VII

CONCLUSIONS AND RECOMMENDATIONS

7.1 Summary and Concluding Remarks

This paper has addressed an issue which is critical for the safety of the National Airspace System. That issue is the provision of weather data products to the aircraft cockpit. The availability of these data in the cockpit is necessary to allow the pilot to make effective go/no-go decisions when flying in weather conditions which could threaten safe operation of the aircraft. Currently, except

for the limited text capabilities of the ACARS system, weather data are disseminated by voice communication between aircraft and ATC personnel. This process severely limits the ability of the pilot and aircrew to obtain timely and accurate weather data.

Due to advances in ground weather data gathering and processing capabilities, greater numbers of more detailed weather data products are becoming available. It has been shown that certain of these weather data products need to be presented in a graphical format to allow effective interpretation of the data. The transmission of such weather data products to the aircraft by data uplink is widely recognized as the solution to the weather data dissemination problem.

The current FAA plans for providing this data uplink service rely upon the Mode S beacon system. Although important for ATC functions, this system has been shown to be inadequate for the provision of weather data to all users of the National Airspace System.

Although future data uplink services will be adequately provided by satellite communication, the need for a weather data uplink system is immediate. Therefore, this paper has examined some existing aeronautical radio frequency systems with the goal of adding a data uplink capability while not unacceptably degrading the performance of the particular system. Providing a data uplink service in this way would have the advantages of conserving the rapidly diminishing radio frequency spectrum and also of providing weather data dissemination without the necessity of developing a dedicated system for this service. A dedicated system would have a higher cost to both the aircraft operator and the

FAA.

Analysis has shown that existing VHF communication systems can simultaneously transmit both analog voice and digital weather data. According to this analysis, both transmissions can take place on the same radio frequency carrier without unacceptable degradation to either communication mode. This can be accomplished by using amplitude modulation for voice transmission as in current equipment, and also using MSK phase modulation of the same carrier for uplink of weather data products.

The analysis builds upon an existing computer simulation of such a hybrid modulation technique. The computer simulation and further analysis show that acceptable performance can be expected for a hybrid modulated carrier which is: (1) modulated by data at 2,400 bits per second using MSK, (2) modulated by voice using AM with the modulation index μ limited to 0.7, (3) subject to AWGN, and (4) transmitted and received by VHF AM communication equipment. The generation of the MSK data modulation requires synchronization of the carrier frequency f_c to the bit period T_b such that f_c = n/(4T_b), where n is an integer. This requirement would necessitate a slight adjustment of the 2,400 bit per second data rate to allow use with the VHF communication channels, which are separated by 25 kHz. Also, current voice communication requirements allow the modulation index μ to vary between 0.7 and 1.0. In order to achieve acceptable performance using the hybrid modulation, μ would have to be limited to 0.7.
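To illustrate the size of this adjustment (the channel frequency used here is arbitrary and chosen only for the example), consider a carrier at f_c = 120.025 MHz. The nearest integer satisfying the constraint for a nominal 2,400 bit per second rate is n = round(4f_c/2400) = round(200,041.67) = 200,042, which gives an adjusted data rate of 4f_c/n ≈ 2,399.995 bits per second, a shift of only about 0.005 bits per second from the nominal rate.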

The Automated Terminal Information Service (ATIS) would be an ideal candidate for transmission of the hybrid modulated carrier. This service is provided at most major airports to inform pilots of current weather conditions and other local airport information. The information, in the form of recorded voice, is transmitted continuously on a discrete VHF radio frequency at each facility [88]. Because the ATIS transmitter is continuously transmitting, the hybrid modulated carrier could replace the current voice transmission with continuous combined voice and weather data transmission.

Currently, the coverage volume for ATIS transmitters is omnidirectional with an LOS range of approximately 60 nm

from ground level to 25,000 feet msl [89]. As discussed within this paper, coverage on the ground at the airport offers the advantage of allowing updated pilot weather briefings immediately before take-off. Above 25,000 feet msl, Mode S coverage is adequate and could provide en route data at a limited data rate. In areas where the 60 nm range

is not sufficient to provide continuous coverage between

ATIS facilities, the transmitter power could be increased, or additional channels provided.

Initially, the system could transmit compressed local

NWS weather radar reflectivity patterns. As improved weather data gathering systems become available, the uplink system would have the capability of transmitting processed graphical images from these systems as well. In addition, a text capability would allow uplink of text weather messages.

The resulting weather data dissemination system would have the advantage of greatly reducing congestion on existing ATC voice channels by eliminating the requirement for voice communication of weather data. Also, the hybrid modulation technique would allow the system to accomplish the dissemination of both voice data and text or graphic weather data products at an adequate temporal update rate while requiring no additional radio frequency spectrum.

An experimental system for the uplink of graphical weather data products has been developed and demonstrated, and is described in this paper. This system was developed in order to provide for the evaluation of graphic weather image processing techniques and cockpit weather displays.

The experimental system provides a dedicated data uplink capability using an existing 25 kHz VHF AM voice communication channel. This uplink does not provide for simultaneous transmission of voice and data, but does allow the uplink of weather data at a data rate equivalent to that which could be provided by the hybrid technique.

The experimental uplink system digitizes and compresses

NWS weather radar reflectivity patterns. The compressed image is then transmitted by DPSK on an audio carrier. Upon receipt of the data in the aircraft, the image is expanded and displayed on a high-resolution color display.

Typical compression ratios from 6:1 to 9:1 have been demonstrated by the image compression algorithm when radar precipitation reflectivity patterns with thunderstorms are encoded. These images have been successfully transmitted in

8 to 12 seconds at 2,400 bits per second.

7.2 Recommendations for Future Research

Based on the predicted acceptable performance of a hybrid modulated carrier for the simultaneous transmission of both voice and weather data, it is recommended that the evaluation continue to the hardware development phase.

Acceptable performance of the system can be quantitatively determined by the S/N ratio for the voice communication, and by bit error rate performance for the digital data communication.

An experimental weather data uplink system is now in place, and will allow for the evaluation of weather data products, image processing techniques, and cockpit displays.

It is recommended that continued research be conducted in all of these areas.

Particularly important to the pilot and aircrew is an effective display format for the cockpit presentation of particular weather data products. Some amount of processing of these data on the ground can significantly reduce both uplink data handling requirements and also aircrew and pilot effort needed to interpret this data. Integration with aircraft navigation systems would provide a useful indication of the location of the aircraft with respect to weather areas. The ineffective presentation of weather data to the aircrew and pilot would make their availability in the cockpit of little use. For this reason, research in the human factors area of cockpit presentation of weather data is extremely important.

Although image compression techniques will depend upon each particular graphical image, acceptable performance for radar precipitation reflectivity patterns has been demonstrated using a hybrid technique. The technique utilizes both run-length encoding and compact codes. It appears that contour, DPCM, and transform encoding will also provide good performance for radar reflectivity patterns. Fractal data compression can provide very high compression ratios for all types of images. With current research being undertaken in this area, and with digital computing speeds rapidly increasing, this technique offers much promise. Further research in the area of image compression is highly recommended.
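For illustration, a minimal run-length encoder of the kind referred to above is sketched here. The packing format, names, and limits are assumptions introduced for the example; the encoder used in the experimental system (Appendix E) differs in its details and also applies compact codes.

/* Minimal run-length encoder for quantized radar levels (0-7).
   Illustrative only; the packing format and limits are assumptions
   and do not reproduce the encoder used in the experimental system. */
#include <stdio.h>

#define MAX_RUN 32   /* assumed: run length limited to 5 bits */

/* Encode n pixel levels into (level, run) pairs packed one per byte:
   upper 3 bits = level, lower 5 bits = run length minus one (1..32).
   Returns the number of output bytes written to out. */
int rle_encode(const unsigned char *levels, int n, unsigned char *out)
{
    int i = 0, nout = 0;

    while (i < n) {
        unsigned char level = levels[i];
        int run = 1;

        while (i + run < n && levels[i + run] == level && run < MAX_RUN)
            run++;

        out[nout++] = (unsigned char)((level << 5) | (run - 1));
        i += run;
    }
    return nout;
}

int main(void)
{
    unsigned char line[16] = {0,0,0,0,2,2,2,5,5,5,5,5,0,0,0,0};
    unsigned char packed[16];
    int n = rle_encode(line, 16, packed);

    printf("16 pixels encoded into %d bytes\n", n);   /* prints 4 */
    return 0;
}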

It is particularly important that research in the broader area of weather data dissemination to aircraft be undertaken expeditiously, and with the goal of quickly developing a working system. Although our National Airspace

System has an impressive overall safety record, we should not tolerate unnecessary aircraft encounters with severe weather which could be avoided. The uplink of weather data to aircraft has the potential for significantly improving the safety of the National Airspace System in this area.

REFERENCES

1. National Transportation Safety Board, Annual Review of Aircraft Accident Data, U.S. General Aviation, Calendar Year 1986, NTSB/ARG-88/01, Washington, D.C., October 25, 1988, 31.
2. National Transportation Safety Board, Annual Review of Aircraft Accident Data, U.S. Air Carrier Operations, Calendar Year 1986, NTSB/ARC-89/01, Washington, D.C., February 3, 1989, 19.
3. National Transportation Safety Board, Aircraft Accident Report, Delta Airlines, Inc., Lockheed L-1011-385-1, N726DA, Dallas/Fort Worth International Airport, Texas, August 2, 1985, NTSB/AAR-86/05, Washington, D.C., August 15, 1986, 84.
4. Kavouras, Inc., "Introduction to Weather Radar," RADAC User's Manual, Minneapolis, Minnesota, 5:1-50.
5. Ibid.
6. U.S. Department of Transportation, Federal Aviation Administration, Aviation Weather System Plan, Washington, D.C., 1984, 2:6.
7. Zittel, D. W., Echo Interpretation of Severe Storms on Airport Surveillance Radars, FAA-RD-78-60, U.S. Department of Transportation, Federal Aviation Administration, Systems Research and Development Service, Washington, D.C., April 1978, 54-58.
8. Garvey, W., "Stormscope, the only alternative," reprint from The AOPA Pilot, August 1976.
9. Op. cit., National Transportation Safety Board, Aircraft Accident Report, Delta Airlines, Inc., Lockheed L-1011-385-1, N726DA, Dallas/Fort Worth International Airport, Texas, August 2, 1985, 32.
10. Mandel, E., "Severe Weather: Impact on Aviation and FAA Programs in Response," Paper presented at 27th American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting, Reno, Nevada, AIAA-89-0794, January 1989, 2.
11. Sand, W., and C. Biter, "TDWR Display Experiences," Paper presented at 27th American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting, Reno, Nevada, AIAA-89-0807, January 1989.
12. Op. cit., Mandel, "Severe Weather: Impact on Aviation and FAA Programs in Response," 3-4.
13. Ibid., 1-2.
14. Op. cit., U.S. Department of Transportation, Federal Aviation Administration, Aviation Weather System Plan, 5:2.
15. Ibid., 3:1-12.
16. Parker, C. B., "Aircraft Weather Data Dissemination System," Proceedings of the Joint University Program for Air Transportation Research - 1987, Atlantic City, New Jersey, January 1988, NASA CP-3028, National Aeronautics and Space Administration, Washington, D.C., April 1989, 69-76.
17. McFarland, R. H., A Delineation of Critical Weather Factors Concerning General Aviation, OU/AEC/EER 53-3, Ohio University, Avionics Engineering Center, Athens, Ohio, November 1981.
18. McFarland, R. H., An Experimental Investigation of the Efficacy of Automated Weather Data Dissemination to Aircraft in Flight, DOT/FAA/PM-83/11, U.S. Department of Transportation, Federal Aviation Administration, Program Engineering and Management Service, Washington, D.C., December 1982.
19. Hansman, R. J., and C. Wanke, "Cockpit Display of Hazardous Weather Information," Paper presented at 27th American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting, Reno, Nevada, AIAA-89-0808, January 1989.
20. Op. cit., U.S. Department of Transportation, Federal Aviation Administration, Aviation Weather System Plan, 2:6.
21. Clyne, P. W., "Cockpit Requirements for Weather Information and Data Link Messages," Proceedings of the 1983 RTCA Assembly, Washington, D.C., November 1983, Radio Technical Commission for Aeronautics, Washington, D.C., 107-115.
22. U.S. Department of Transportation, Federal Aviation Administration, National Airspace System Plan, Facilities, Equipment and Associated Development, Washington, D.C., April 1984, I:25.
23. International Civil Aviation Organization, International Standards, Recommended Practices and Procedures for Air Navigation Services, Aeronautical Telecommunications, Annex 10 to the Convention on International Civil Aviation, Volume I, Fourth Edition, Montreal, Quebec, Canada, April 1985, 9.
24. Ibid., 15.
25. Ibid., 26.
26. Ibid., 31.
27. Ibid., 23.
28. Department of Transportation, Federal Aviation Administration, Frequency Management Engineering Principles: Criteria and Procedures for Assigning VHF/UHF Air/Ground Communication Frequencies, FAA Order 6050.4B, Washington, D.C., October 19, 1981, Appendix 5, 6.
29. Couch, L. W., Digital and Analog Communication Systems, Macmillan Publishing Co., New York, 1983, 11.
30. Ross, I., and J. B. Kennedy, System Analysis of Tacan and DME for Addition of Digital Data Broadcast, FAA-RD-73-2, Department of Transportation, Federal Aviation Administration, Systems Research and Development Service, Washington, D.C., April 1973.
31. Op. cit., U.S. Department of Transportation, Federal Aviation Administration, Aviation Weather System Plan, 5:15-17.
32. Radio Technical Commission for Aeronautics, Minimum Operational Performance Standards for Air Traffic Control Radar Beacon System/Mode Select (ATCRBS/Mode S) Airborne Equipment, RTCA/DO-181, Washington, D.C., March 25, 1983.
33. Orlando, V. A., and P. R. Drouihet, Mode S Beacon System: Functional Description, DOT/FAA/RD-82/52, Department of Transportation, Federal Aviation Administration, Systems Research and Development Service, Washington, D.C., October 27, 1982.
34. Ibid., 12.
35. Op. cit., Radio Technical Commission for Aeronautics, Minimum Operational Performance Standards for Air Traffic Control Radar Beacon System/Mode Select (ATCRBS/Mode S) Airborne Equipment, 42.
36. Op. cit., U.S. Department of Transportation, Federal Aviation Administration, National Airspace System Plan, Facilities, Equipment and Associated Development, IV:78-79.
37. Rucker, R. A., "Integrating Digital Communication Systems," Proceedings of the 1988 RTCA Assembly, Washington, D.C., November 1988, Radio Technical Commission for Aeronautics, Washington, D.C., 141-158.
38. Radio Technical Commission for Aeronautics, VHF Air-Ground Communication Technology and Spectrum Utilization, RTCA/DO-169, Washington, D.C., July 20, 1979.
39. Ibid., 29.
40. Radio Technical Commission for Aeronautics, Universal Air-Ground Digital Communication System Standards, RTCA/DO-136, Washington, D.C., August 15, 1978.
41. Op. cit., Rucker, "Integrating Digital Communication Systems," Proceedings of the 1988 RTCA Assembly, Washington, D.C., November 1988, 144.
42. Haykin, S., Communication Systems, 2d ed., John Wiley and Sons Inc., New York, 1983, 553.
43. Ibid., 553-561.
44. Helstrom, C. W., Probability and Stochastic Processes for Engineers, Macmillan Publishing Co., New York, 1984, 69.
45. Op. cit., Haykin, Communication Systems, 561-572.
46. Pasupathy, S., "Minimum Shift Keying: A Spectrally Efficient Modulation," IEEE Communications Magazine, vol. 17, no. 7, July 1979, 14-22.
47. Doelz, M. L., and E. T. Heald, "Minimum-Shift Data Communication System," U.S. Patent 2,977,417, March 28, 1961.
48. Op. cit., Haykin, Communication Systems, 573.
49. Ibid., 574.
50. Ibid., 330.
51. Federal Communications Commission, Rules and Regulations, Part 87 - Aviation Services, Washington, D.C., September 1987, 15.
52. Greenstein, L. J., and P. J. Fitzgerald, "Envelope Fluctuation Statistics of Filtered PSK and Other Digital Modulations," IEEE Transactions on Communications, vol. COM-27, no. 4, April 1979, 750-760.
53. Mathwich, H. R., J. F. Balcewicz, and M. Hecht, "The Effect of Tandem Band and Amplitude Limiting on the Eb/No Performance of Minimum (Frequency) Shift Keying (MSK)," IEEE Transactions on Communications, vol. COM-22, no. 10, October 1974, 1525-1540.
54. Benelli, G., and R. Fantacci, "An Integrated Voice-Data Communication System for VHF Links," IEEE Transactions on Communications, vol. COM-31, no. 12, December 1983, 1304-1308.
55. Benelli, G., "VHF Radio Link for Ground-Air-Ground Communications Using an Integrated Voice-Data Modulation," Electronics Letters, vol. 18, no. 13, June 24, 1982, 555-556.
56. Radio Technical Commission for Aeronautics, Minimum Operational Performance Standards for Airborne Radio Communications Equipment Operating Within the Radio Frequency Range 117.975-137.000 MHz, RTCA/DO-186, Washington, D.C., January 20, 1984, 10.
57. Op. cit., Federal Communications Commission, Rules and Regulations, Part 87 - Aviation Services, 14.
58. Gonzalez, R. C., and P. Wintz, Digital Image Processing, 2d ed., Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1987.
59. Jain, A. K., "Image Data Compression: A Review," Proceedings of the IEEE, vol. 69, no. 3, March 1981, 349-389.
60. Netravali, A. N., and J. O. Limb, "Picture Coding: A Review," Proceedings of the IEEE, vol. 68, no. 3, March 1980, 366-406.
61. Op. cit., Gonzalez and Wintz, Digital Image Processing, 265-292.
62. Ibid., 265.
63. Ibid., 268.
64. Ibid., 293-299.
65. Lehmann, L. A., and A. Macovski, "Data Compression of X-ray Images by Differential Pulse Code Modulation (DPCM) Coding," Digital Radiography, Proceedings of SPIE, vol. 314, 1981, 396-404.
66. Op. cit., Netravali and Limb, "Picture Coding: A Review," 390-396.
67. Op. cit., Gonzalez and Wintz, Digital Image Processing, 300-317.
68. Lo, Shih-Chung, and H. K. Huang, "Compression of Radiological Images with 512, 1,024, and 2,048 Matrices," Radiology, vol. 161, no. 2, November 1986, 519-525.
69. Elnahas, S. E., et al., "Progressive Coding and Transmission of Digital Diagnostic Pictures," IEEE Transactions on Medical Imaging, vol. MI-5, no. 2, June 1986, 73-83.
70. Mandelbrot, B. B., The Fractal Geometry of Nature, W. H. Freeman and Co., New York, 1983, 15.
71. Sander, L. M., "Fractal Growth," Scientific American, January 1987, 94-100.
72. Ibid.
73. Sorenson, P. R., "Fractals," BYTE, September 1984, 157-172.
74. Barnsley, M. F., and A. D. Sloan, "A Better Way to Compress Images," BYTE, January 1988, 215-223.
75. Op. cit., Jain, "Image Data Compression: A Review," Proceedings of the IEEE, 374-376.
76. Ibid., 383.
77. Op. cit., Barnsley and Sloan, "A Better Way to Compress Images," BYTE, 216.
78. Lovejoy, S., "Area-Perimeter Relation for Rain and Cloud Areas," Science, vol. 216, no. 9, April 1982, 185-187.
79. Kavouras, Inc., RADAC User's Manual, Minneapolis, Minnesota.
80. Ibid., 3:9-10.
81. Micromint, Inc., ImageWise Digitizer/Transmitter Technical Manual, Release 1.0, Vernon, Connecticut, June 23, 1987.
82. Ciarcia, S. A., "Part 2: Digitizer/Transmitter, Building a Gray-Scale Video Digitizer," BYTE, June 1987, 129-138.
83. Microsoft Corporation, Microsoft C 5.0 Optimizing Compiler, Language Reference, Microsoft CodeView, and Utilities, Redmond, Washington, 1987.
84. LaFore, R., Microsoft C Programming for the IBM, The Waite Group, Mill Valley, California, 1988.
85. Barkakati, N., Microsoft C Bible, The Waite Group, Mill Valley, California, 1988.
86. Parker, C. B., 2400 Bit/Second Modem for Audio Channels, 30-89TM NASA TRI-U/122, Ohio University, Avionics Engineering Center, Athens, Ohio, to be released.
87. King Radio Corporation, KX 170B/KX 175B Navigation Receiver/Communications Transceiver Installation Manual, 006-0085-01, Olathe, Kansas, January 1976.
88. Op. cit., U.S. Department of Transportation, Federal Aviation Administration, Aviation Weather System Plan, 2:10.
89. Op. cit., Department of Transportation, Federal Aviation Administration, Frequency Management Engineering Principles: Criteria and Procedures for Assigning VHF/UHF Air/Ground Communication Frequencies, Appendix 5, 6.
90. Department of Transportation, Federal Aviation Administration, U.S. National Aviation Standard for the VHF Air-Ground Communications System, FAA Order 6510.6, Washington, D.C., November 11, 1977, 10.
91. Op. cit., Department of Transportation, Federal Aviation Administration, Frequency Management Engineering Principles: Criteria and Procedures for Assigning VHF/UHF Air/Ground Communication Frequencies, 15.
92. Op. cit., Radio Technical Commission for Aeronautics, Minimum Operational Performance Standards for Airborne Radio Communications Equipment Operating Within the Radio Frequency Range 117.975-137.000 MHz, 9.
93. Hayward, W. H., Introduction to Radio Frequency Design, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1982, 209-210.
94. Op. cit., Haykin, Communication Systems, 330.
95. Op. cit., Couch, Digital and Analog Communication Systems, 282.
96. Op. cit., Helstrom, Probability and Stochastic Processes for Engineers, 244-254.
97. Op. cit., Haykin, Communication Systems, 550.
98. Op. cit., Kavouras, Inc., RADAC User's Manual.
99. Op. cit., Micromint, Inc., ImageWise Digitizer/Transmitter Technical Manual.
100. National Semiconductor, 1982 Linear Databook, Santa Clara, California, 1982, 3:165-171.
101. Ibid.

APPENDIX A

Derivation of an Expression for Eb/No of a Hybrid Modulated Carrier

The FAA specifies that at all points in a VHF communication system service volume, and at all frequencies of operation, a carrier power level of -87 dBm (dB relative to 1 milliwatt) shall be available at the receive antenna when terminated into a 50 ohm impedance [90],[91]. The minimum signal-to-noise (S/N) performance of an airborne VHF receiver is specified based on this received power level. The minimum operational performance standards (MOPS) for VHF airborne equipment require a minimum output signal+noise-to-noise ratio of 6 dB when a -87 dBm radio frequency signal modulated 30% at 1000 Hz is received [92]. The received signal in this case is of the form of equation 4.19, and can be written:

A_r[1 + k_v A_m cos(2πf_m t)]cos(2πf_c t)     (A.1)

where: A_r and A_m are amplitude coefficients (volts)
       A_m cos(2πf_m t) is the modulating signal m(t)
       f_m and f_c are the modulation and carrier frequencies respectively (Hz)
       k_v is the amplitude sensitivity of the modulator

By using equation 4.22, equation A.1 can be written:

A_r[1 + μ cos(2πf_m t)]cos(2πf_c t)     (A.2)

where: μ = k_v A_m is the amplitude modulation index

For the case of 30 per cent modulation by a 1000 Hz modulating tone, μ = 0.30 and f_m = 1000. The power in equation A.2 is given by:

P_r = (A_r²/2)[1 + μ²/2]     (A.3)

where: P_r is the received power (watts)
       μ is the amplitude modulation index

The noise power available at the receiver input is equal to the channel additive white Gaussian noise (AWGN) power spectral density multiplied by the noise bandwidth over which the AWGN power is observed. Since this noise will be observed over a bandwidth given approximately by the intermediate frequency (IF) bandwidth of the receiver, the noise bandwidth can be considered to be the same as the IF bandwidth [93]. So, the noise power at the receiver input is:

N = (N_0/2)(2B) = N_0 B     (A.4)

where: N is the noise power at the receiver input (watts)
       N_0/2 is the two-sided channel noise power spectral density (watts/Hz)
       B is the receiver IF bandwidth (Hz)
       2B accounts for the bandwidth image in the negative frequency domain

For this case, N_0 = kT_0, and equation A.4 can be written:

N = kT_0 B     (A.5)

where: k = 1.38 x 10^-23 is Boltzmann's constant (joules/kelvin)
       T_0 = 290 is the standard thermal noise temperature (kelvin)

Finally, using equations A.3 and A.5, the channel signal-to-noise power ratio S/N can be written:

[S/N]_c = P_r/(kT_0 B)     (A.6)

In an ideal AM receiving system, only linear operations would be performed on the input signal, and no additional noise would be added. Under these conditions, the receiver output S/N would be the same as the channel S/N. Unfortunately, these ideal conditions do not exist. The receiver adds noise to the signal, and consequently degrades the input S/N. The amount that the input S/N is degraded by additive noise within the receiver is called the receiver noise figure. For an AM receiver, the input S/N is further degraded by signal loss in non-linear envelope detection.

The S/N degradation due to added noise occurs mainly in the receiver front-end. Beyond the receiver IF, very little degradation occurs due to added noise. Therefore, with reference to figure 4.9, another S/N is defined at the end of the receiver IF and immediately before detection:

[S/N]_if = P_r/(kT_0 BF) = P_r/(kT_s B)     (A.7)

where: F is the receiver noise figure
       T_s = T_0 F is the effective system noise temperature (kelvin)

The further degradation of the IF S/N due to the envelope detector for the case of a single tone modulating signal, and for an IF S/N much larger than one, is given by [94]:

S_o/S_if = μ²/(2 + μ²)     (A.8)

where: S_o is the detector output power (watts)
       S_if is the IF signal power (watts)
       μ is the amplitude modulation index

The receiver output S/N is the product of the IF S/N given by equation A.7, and the detector degradation given by equation A.8:

[S/N]_o = [S/N]_if [μ²/(2 + μ²)]     (A.9)

Rearranging and expressing all ratios in dB yields:

[S/N]_if = [S/N]_o + 10log[(2 + μ²)/μ²]     (A.10)

Recalling that the specified input signal was 30 per cent modulated, equation A.8 gives a demodulator degradation of 13.6 dB. Since this signal produces a required output (S + N)/N of 6 dB or 3.98, the required output S/N is found to be 2.98, or 4.7 dB. Substitution of these values in equation A.10 gives a required IF S/N of 4.7 + 13.6 = 18.3 dB. Now that the required IF S/N is known for the minimum input signal power, the expected Eb/No for the digital data with the same input signal power can be found.

The received hybrid modulated carrier will be in the form of equation 4.21, and can be written:

A_r[1 + μ cos(2πf_m t)]cos[2πf_c t + θ(t)]     (A.11)

where: [1 + μ cos(2πf_m t)] is the amplitude modulation component
       θ(t) is the phase modulation component

The amplitude modulation can be removed by hard limiting before data demodulation. The limiter must be chosen to hard limit at the minimum carrier amplitude. By inspection of equation A.11, this minimum would be given by:

A_m = A_r(1 - μ_max)     (A.12)

where: A_m is the limited carrier peak amplitude
       μ_max is the maximum anticipated value of μ for the amplitude modulation process

After hard limiting, filtering would be performed to restore the carrier to a sinusoidal signal. The power in this signal would thus be given (in watts) by:

A_m²/2 = A_r²(1 - μ_max)²/2     (A.13)

The bit energy is then given by:

E_b = (A_m²/2)T_b = [A_r²(1 - μ_max)²/2]T_b     (A.14)

where: E_b is the bit energy (joules)
       T_b is the bit period (seconds)

And finally, recalling that N_0 = kT_0, Eb/No at the receiver IF is found to be:

E_b/N_0 = A_r²(1 - μ_max)²T_b/(2kT_s)     (A.15)

where: F is the receiver noise figure
       T_s = T_0 F is the effective system noise temperature (kelvin)

Letting the data rate R = 1/T_b, and substituting equation A.3, equation A.15 can be written:

E_b/N_0 = P_r(1 - μ_max)²/[kT_s R(1 + μ²/2)]     (A.16)

By substituting equation A.7, Eb/No can be written as a function of the IF S/N:

E_b/N_0 = [S/N]_if B(1 - μ_max)²/[R(1 + μ²/2)]     (A.17)

If the IF S/N is expressed in dB, the result in dB is:

E_b/N_0 = [S/N]_if + 10log[B(1 - μ_max)²/(R(1 + μ²/2))]     (A.18)

This result can be used to find the minimum Eb/No ratio to be expected for the minimum power input signal. Recall that the minimum IF S/N for this input signal was found from equation A.10 to be 18.3 dB. Since the minimum power input signal is modulated at 30 per cent, μ = 0.3. Choosing a maximum expected bit rate R of 2,400 bits per second, an IF bandwidth B of 10 kHz (as was used in the computer simulation), and limiting μ for the amplitude modulation to a maximum of 0.7, equation A.18 gives a minimum expected Eb/No of -4.5 + 18.3 = 13.8 dB.
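As a numerical check of this last step, using only the values already stated (B = 10 kHz, R = 2,400 bits per second, μ = 0.3, μ_max = 0.7), the bracketed term of equation A.18 evaluates to

10log[(10,000)(1 - 0.7)²/((2,400)(1 + 0.3²/2))] = 10log(900/2,508) ≈ -4.5 dB,

which, added to the required IF S/N of 18.3 dB, reproduces the 13.8 dB minimum expected Eb/No quoted above.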

APPENDIX B

Derivation of an Expression for Power Spectral Density of a

Hybrid Modulated Carrier

Recall the expression for a hybrid modulated carrier as in equation 4.21:

A_c[1 + k_v m(t)]cos[2πf_c t + θ(t)]     (B.1)

where: A_c is an amplitude coefficient (volts)
       k_v is the amplitude sensitivity of the modulator
       f_c is the carrier frequency (Hz)
       m(t) is the voice modulation
       θ(t) is phase modulation caused by a digital stream of weather data

A statistical analysis can be used to find the power spectral density of this signal. For this purpose, the hybrid modulated carrier can be considered to be the product shown here:

s(t) = [1 + k_v m(t)]g(t)     (B.2)

where: s(t) is the hybrid modulated carrier
       m(t) is the voice modulation process
       g(t) is the phase modulation process

m(t) as given by equation 4.23 can be written as:

m(t) = Σ_{i=1}^{5} a_i cos[2πif_m t + φ]     (B.3)

where: a_i are amplitude coefficients given by table 4.2
       f_m = 468.75 is the fundamental voice modulation frequency (Hz)
       φ is the phase and is considered to be a random variable with a uniform distribution from 0 to 2π as shown:

f_φ(φ) = 1/(2π),   0 ≤ φ ≤ 2π
       = 0,        elsewhere     (B.4)

In order to consider m(t) a wide sense stationary process, two conditions must be met: (1) the mean cannot vary with time, and (2) the autocorrelation function R_m(t_1,t_2) must be a function only of the time difference τ = t_1 - t_2 [95]. First, the mean is evaluated using equations B.3 and B.4:

E[m(t)] = Σ_{i=1}^{5} a_i E[cos(2πif_m t + φ)]
        = Σ_{i=1}^{5} a_i ∫ f_φ(φ)cos(2πif_m t + φ)dφ
        = Σ_{i=1}^{5} a_i (1/2π)∫_{0}^{2π} cos(2πif_m t + φ)dφ = 0     (B.5)

where: E[m(t)] is the mean of m(t)

And so the mean is zero, and is not a function of time. Now consider the autocorrelation function of m(t):

R_m(t_1,t_2) = E[m(t_1)m(t_2)]
             = Σ_{i=1}^{5} a_i² E[cos(2πif_m t_1 + φ)cos(2πif_m t_2 + φ)]
             = Σ_{i=1}^{5} a_i² ∫ f_φ(φ)cos(2πif_m t_1 + φ)cos(2πif_m t_2 + φ)dφ
             = Σ_{i=1}^{5} a_i² (1/2π)∫_{0}^{2π} cos(2πif_m t_1 + φ)cos(2πif_m t_2 + φ)dφ
             = Σ_{i=1}^{5} a_i² (1/4π)∫_{0}^{2π} cos[2πif_m(t_1 + t_2) + 2φ]dφ
               + Σ_{i=1}^{5} a_i² (1/4π)∫_{0}^{2π} cos[2πif_m(t_1 - t_2)]dφ
             = Σ_{i=1}^{5} (a_i²/2)cos[2πif_m(t_1 - t_2)]     (B.6)

But note that equation B.6 can be written:

R_m(τ) = Σ_{i=1}^{5} (a_i²/2)cos(2πf_i τ)     (B.7)

where: R_m(τ) = E[m(t)m(t + τ)] is the autocorrelation function of the process m(t)
       τ = t_1 - t_2
       f_i are harmonically related frequencies

This proves wide sense stationarity for m(t). Now consider the term g(t) in equation B.2:

g(t) = cos[2πf_c t + θ(t)]     (B.8)

where: θ(t) is phase modulated using MSK

For a carrier modulated by data such as g(t) is, the autocorrelation function is dependent only upon whether the two instants of time t_1 and t_2 span a data bit period T_b, or both fall in the same data bit period. The autocorrelation function is thus a function only of the time difference τ. For this reason, a process such as this is considered to be wide sense stationary [96]. Thus we can express the autocorrelation function for g(t) as:

R_g(τ) = E[g(t)g(t + τ)]     (B.9)

Since both m(t) and g(t) are wide sense stationary, and are statistically independent, the combined process s(t) shown in equation B.2 has an autocorrelation function given by:

R_s(τ) = E[s(t)s(t + τ)]
       = E[[g(t) + k_v m(t)g(t)][g(t + τ) + k_v m(t + τ)g(t + τ)]]
       = E[g(t)g(t + τ) + k_v m(t)g(t)g(t + τ) + k_v m(t + τ)g(t)g(t + τ) + k_v² m(t)m(t + τ)g(t)g(t + τ)]
       = E[g(t)g(t + τ)] + k_v² E[m(t)m(t + τ)]E[g(t)g(t + τ)]

R_s(τ) = R_g(τ) + k_v² R_m(τ)R_g(τ)     (B.10)

Because s(t) is wide sense stationary, equation B.10 can be transformed into the frequency domain to yield the power spectral density. Taking the Fourier transform:

S_s(f) = S_g(f) + k_v² S_m(f) * S_g(f)     (B.11)

where: S_s(f) is the power spectral density of the hybrid modulated carrier (watts/Hz)
       S_g(f) is the power spectral density of the MSK phase modulation (watts/Hz)
       S_m(f) is the power spectral density of the voice modulation (watts/Hz)
       * denotes the convolution process

Recall the expression for R_m(τ) given by equation B.7. Taking the Fourier transform yields:

S_m(f) = Σ_{i=1}^{5} (a_i²/4)[δ(f + f_i) + δ(f - f_i)]     (B.12)

where: δ denotes the Dirac delta function

The remaining task is to find the power spectral density S_g(f) for the MSK phase modulation. The power spectral density of the bandpass signal g(t) is related to the power spectral density of the same signal at baseband by the relation [97]:

S_g(f) = (1/4)[S_b(f + f_c) + S_b(f - f_c)]     (B.13)

where: S_b(f) is the power spectral density of the baseband signal g(t) (watts/Hz)
       f_c is the carrier frequency for the bandpass signal g(t) (Hz)

S_b(f) has been presented for the MSK process, and is given by equation 4.18. Substituting into equation B.13 yields:

S_g(f) = (8E_b/π²)[cos(2πT_b(f + f_c))/(16T_b²(f + f_c)² - 1)]²
       + (8E_b/π²)[cos(2πT_b(f - f_c))/(16T_b²(f - f_c)² - 1)]²     (B.14)

Since the result will be normalized, we need only consider the positive frequency domain:

S_g(f) = (8E_b/π²)[cos(2πT_b(f - f_c))/(16T_b²(f - f_c)² - 1)]²     (B.15)

Now that both S_g(f) and S_m(f) are known, equations B.12 and B.15 can be substituted into equation B.11 to yield the power spectral density of the hybrid modulated carrier:

S_s(f) = (8E_b/π²)[cos(2πT_b(f - f_c))/(16T_b²(f - f_c)² - 1)]²
       + (2E_b k_v²/π²) Σ_{i=1}^{5} a_i² [cos(2πT_b(f - f_c ± f_i))/(16T_b²(f - f_c ± f_i)² - 1)]²     (B.16)

Finally, the result is normalized:

S_s(f) = [cos(2πT_b(f - f_c))/(16T_b²(f - f_c)² - 1)]²
       + (k_v²/4) Σ_{i=1}^{5} a_i² [cos(2πT_b(f - f_c ± f_i))/(16T_b²(f - f_c ± f_i)² - 1)]²     (B.17)
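For reference, a short routine that evaluates the normalized power spectral density of equation B.17 is sketched below. The coefficient values a_i passed in the example are placeholders only; table 4.2 of the text supplies the actual values.

/* Evaluate the normalized hybrid-carrier PSD of equation B.17.
   The coefficients a[] used in main() are placeholders; table 4.2
   of the text supplies the actual values. */
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

/* One normalized MSK spectral lobe, centered df Hz from the carrier. */
static double msk_term(double df, double tb)
{
    double den = 16.0 * tb * tb * df * df - 1.0;
    double c   = cos(2.0 * PI * tb * df);
    return (c / den) * (c / den);
}

/* Normalized S_s(f): f and fc in Hz, tb = bit period (seconds),
   kv = amplitude sensitivity, a[] = five tone coefficients,
   fm = fundamental voice modulation frequency (Hz). */
double psd(double f, double fc, double tb, double kv,
           const double a[5], double fm)
{
    double s = msk_term(f - fc, tb);
    int i;

    for (i = 1; i <= 5; i++) {
        double fi = i * fm;
        s += (kv * kv / 4.0) * a[i - 1] * a[i - 1] *
             (msk_term(f - fc + fi, tb) + msk_term(f - fc - fi, tb));
    }
    return s;
}

int main(void)
{
    /* Placeholder coefficients summing to 1.0; kv limited to 0.7. */
    double a[5] = {0.4, 0.25, 0.15, 0.12, 0.08};
    double tb = 1.0 / 2400.0, fm = 468.75, fc = 0.0;
    double off;

    /* fc is set to zero so the table prints offsets from the carrier. */
    for (off = -2500.0; off <= 2500.0; off += 500.0)
        printf("%+7.1f Hz  %e\n", off, psd(fc + off, fc, tb, 0.7, a, fm));
    return 0;
}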

If μ_max is limited to 0.7, then from equation 4.22, |k_v m(t)|_max = 0.7 for all time t. At time t = 0, and for φ = 0, equation B.3 gives:

m(t)_max = Σ_{i=1}^{5} a_i     (B.18)

From table 4.2, the coefficients a_i are chosen such that they sum to 1. Therefore, at time t = 0, the maximum value of m(t) is 1.0. Since |k_v m(t)|_max = 0.7, this limits k_v to 0.7.

APPENDIX C

Kavouras RADAC Color Weather Radar Receiver Modifications

Modifications were made to the Kavouras RADAC radar receiver to allow use of a low-cost digitizing system. The modifications were made to provide a gray-level image output using National Television System Committee (NTSC) composite video, and to provide a pixel clock output for sample clock synchronization with the digitizer. This allowed the use of a gray-level digitizer with a sampling rate which would normally be too slow for high-resolution graphics.

All modifications were made to the Kavouras video board. The video board schematic is shown in figures C.1 and C.2 [98], and the modifications are shown in figure C.3.

The video board provides for user selection of display colors for each precipitation level. Selection is normally accomplished by hard-wiring the appropriate pins of the

"level/color select" strap (B5 in figure C.l). However, in order to provide the standard National Weather Service (NWS) color assignments, Kavouras has utilized a custom integrated circuit (IC) (A7 in figure C.1). By floating pin 11 of this

IC, it can be disabled, and user-selection of colors is again possible.

The desired gray-levels are obtained by selecting the colors which have the desired luminance information, and then killing the chrominance subcarrier. This occurs when a

"color kill" switch on the front panel of the RADAC system is pushed. The switch allows user selection of standard NWS color levels for display, or incremental gray-levels for the digitizer. The circuit modifications to accomplish this are now described.

The "color kill" switch on the RADAC front panel en­ ables one of two outputs of a mUltiplexer (MUX) (B5-A and B in figure C.3). The MUX functions as a solid state switch, and replaces the hard-wire strapping of the "level/color select" strap. The first set of MUX outputs assign each precipitation level to the associated NWS color. The second set of MUX outputs assign each level the necessary color to achieve the desired incremental gray-levels. with these outputs selected, pin 11 of the custom IC is floated, thus enabling non-NWS color selection. Also, the DC bias for the chrominance output (Pin 13 of U8 in figure C.2) is shorted.

This disables the chrominance subcarrier, creating a true gray-level output.

Figure C.1 Kavouras RADAC Video Board Schematic 1 of 2 (From reference [98])

Figure C.2 Kavouras RADAC Video Board Schematic 2 of 2 (From reference [98])

Figure C.3 Video Level Select Modification to Kavouras Video Board

APPENDIX D

Micromint ImageWise Video Digitizer Modifications

Modifications were made to the video digitizer in order to allow operation with high resolution graphics. The two modifications made were the provision for an external pixel clock, and the redesign of the analog front-end of the unit.

The schematic diagrams for the unit as supplied by Micromint are shown in figures D.1 through D.3 [99]. The schematic diagram of the required changes is shown in figure D.4.

An external pixel clock was necessary since the digitizer sampling frequency was too low to allow accurate sampling of individual pixels. Individual pixels must be accurately sampled for the adequate display of graphical images.

A standard National Television System Committee (NTSC) composite video image frame is provided every 1/60 of a second. For an image consisting of 244 lines and 256 pixels per line, this equates to a time of 267 nanoseconds between pixel values. The ImageWise digitizer is designed to sample at a rate of 5 MHz. This equates to a time between samples of 200 nanoseconds. The resulting sampled image resolution is approximately ± 1 pixel. Although this is adequate for most images, it is not acceptable for graphics images.
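These figures follow directly from the stated rates: (1/60 second)/(244 lines x 256 pixels) ≈ 267 nanoseconds per pixel, while a 5 MHz sampling rate gives 1/(5 x 10^6 Hz) = 200 nanoseconds per sample.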

The solution chosen was to provide synchronization between the Kavouras RADAC video output and the ImageWise digitizer. In this way, the individual pixels within an image are sampled at the same instant of time that they are written by the RADAC system. The changes consisted of: (1) removing the 20 MHz crystal sampling oscillator (U20 in figure D.3), (2) removing the D flip-flop which functions as a divide-by-four and a gating circuit, and provides both a true and inverted logic sampling clock output (U21 in figure

D.3), and (3) providing an external transistor-transistor logic (TTL) level sampling clock by means of a gating circuit (U8D in figure D.4), and an inverter (U8C in figure D.4).

The analog front-end of the unit was rebuilt to provide an improved transient response. This was necessary since the unit as supplied from Micromint was unable to properly sample the first pixels of a new frame. This effect was caused by the DC restoration circuitry used in the unit. DC restoration is necessary to provide a fixed DC level for the

NTSC composite video waveform so that it can be digitized.

In the supplied unit, the input is AC coupled through a .1 microfarad (µF) capacitor (C8 in figure D.3) to a DC clamping circuit (R6, R7, and D1 in figure D.3). The resulting

R-C time constant of this circuit is large enough that it cannot quickly respond to rapid changes in the effective DC level of the NTSC composite video. This waveform undergoes large changes in the effective DC level when the first image pixels are transmitted after a vertical retrace interval.

The existing circuit path from the video buffer (Q1 in figure D.3) to the flash analog-to-digital converter input (pin 11 of U19 in figure D.3) was broken, and the DC restoration circuit shown in figure D.4 was placed between the video input (J3 in figure D.3) and the flash analog-to-digital converter input. With reference to figure D.4, the new DC restoration circuit works as described here.

During each horizontal synchronizing (commonly abbreviated as sync) pulse, the NTSC composite video should have the same voltage level after DC restoration. The restoration circuit forces this condition by: (1) sampling the video circuit during each horizontal sync to determine the actual offset voltage level, (2) holding this offset voltage until the next sample, and (3) subtracting this offset voltage from the video signal.

The sampling is performed by a complementary metal oxide semiconductor (CMOS) switch (U23A). The control for this switch is the horizontal sync pulse, which must be converted from TTL (0 to 5 volt logic) to ± 5 volt logic by a comparator (U22). The sample voltage is held across a .22

µF capacitor (C45). The capacitor voltage is the desired offset voltage, and is input to an operational amplifier (op-amp) buffer with a high input impedance (U24) so the capacitor remains charged between samples. The buffer is overcompensated by C46 and R26 for maximum stability [100]. Another op-amp circuit (U25) acts as a summer and subtracts the offset voltage from the video signal. R29 and R30 divide the video signal by two so that it is restored to the correct amplitude after being multiplied by two in the summer circuit. The summer circuit is compensated for a fast transient response by feedback capacitor (C47) [101].

Figure D.1 ImageWise Digitizer Schematic 1 of 3 (From reference [99])

Figure D.2 ImageWise Digitizer Schematic 2 of 3 (From reference [99])

Figure D.3 ImageWise Digitizer Schematic 3 of 3 (From reference [99])

Figure D.4 Modifications to ImageWise Transmitter Board

APPENDIX E

Image Processing Program Listings

/* cal.c */ /**********************************************************************/ 1* Program to calibrate video threshold levels */ /* using data digitized from the Kavouras test bars. */ /* The program is designed to handshake with the ImageWise gray level */ /* digitizer, which digitizes a gray level test bar pattern */ 1* from the gray level video output of the Kavouras radar image */ /* receiver. */ /* The Kavouras must be in the following modes: */ 1* Color Kill - Active */ 1* Transmitter Maps - Off */ 1* Test - Active */ 1* Threshold values are written to file "cal.dat". */ 1* Program is based on one line of test bar data as shown: */ 1* level 0 pixels 0-22 */ /* level 1 pixels 24-54 */ /* level 2 pixels 56-86 */ /* level 3 pixels 88-118 */ /* level 4 pixels 120-150 */ /* level 5 pixels 152-182 */ /* level 6 pixels 184-214 */ /* level 7 pixels 216-246 */ /* Adjust ImageWise digitizer white level such that level 7 digitizes */ /* to near 63. */ /* Adjust ImageWise digitizer black level such that level 0 digitizes */ /* to near O. */ /* Craig B. Parker */ /* 3/20/89 */ /**********************************************************************/ Hinclude /* standard I/O library */ Hinclude "declares.h" /* global declarations */ Hinclude "serial.hlt /* serial I/O functions */ main( ) { FILE *fptr; /* pointer to file */ unsigned int i; /* index variable */ float thrsh[7]; 1* array of threshold values */ float avgO; /* average level 0 */ float avg1; /* average level 1 */ float avg2; /* average level 2 */ float avg3; /* average level 3 */ float avg4; 1* average level 4 */ float avg5; /* average level 5 */ float avg6; /* average level 6 */ float avg7; /* average level 7 */ float sum; /* sum of levels */ int pel; /* pixel index */ int line; /* line index */ ptr = ℑ /* pointer to image */ Loc com(); /* locate COMl-3 ports */ set-com1(dig baud); /* set up COMl for digitizer */ rdy=com1 ( ); - /* ready COM1 */ 159 delay(5000l); /* stabilize port */ xmit com1(FULLRES); /* set full resolution */ printf("Receiving data on COM1 port ••• \n"); xmit com1(XON); 1* XOD to digitizer */ for (i=10; idata arr[i] = rec_com1(); } - rst com1(); /* reset COM1 */ sum-= 0.0; 1* initialize sum count */ for (line=O; line<240; line+=10) 1* level 0 data */ { for (pel=6; pel<17; pel++) { sum +- ptr->pic arr.line arr[line].pel arr[pel]; } - - - } avgO = sum/264.0; /* average level 0 */ sum ==0.0; /* initialize sum count */ for (line=O; line<240; line+=10) /* get level 7 data */ { for (pel=29; pel<50; pel++) { sum += ptr->pic arr.line arr[120].pel arr[pel]; } -- - } avg7 == sum/504.0; /* average level 7 */ sum ==0.0; /* initialize sum count */ for (line=O; line<240; line+=10) /* get level 6 data */ { for (pel=61; pel<82; pel++) { sum +== ptr->pic arr.line arr[120].pel arr[pel]; } -- - } avg6 = sum/504.0; /* average level 6 */ sum =0.0; /* initialize sum count */ for (line=O; line<240; line+=lO) /* get levelS data */ { for (pel=93; pel<114; pel++) { sum += ptr->pic arr.line arr[120].pel arr[pel]; } -- - } avg5 = sum/504.0; /* average levelS */ sum =0.0; /* initialize sum count */ for (line==O; line<240; line+=lO) /* get level 4 data */ { for (pel=125; pel<146; pel++) { sum += ptr->pic arr.line arr[120).pel arr[pel]; } -- - } avg4 = sum/504.0; /* average level 4 */ sum =0.0; /* initialize sum count */ for (line==O; line<240; line+=lO) /* get level 3 data */ 160

{ for (pel~157; pel<178; pel++) { sum +~ ptr->pic arr.line arr[120].pel arr[pel]; } - - - } avg3 ~ sum/504.0; /* average level 3 */ sum ~O.O; 1* initialize sum count */ for (line~O; line<240; line+~lO) 1* get level 2 data */ { for (pel~189; pel<210; pel++) { sum +~ ptr->pic arr.line arr[120].pel arr[pel]; } -- - } avg2 ~ sum/504.0; 1* average level 2 */ sum ~O.O; 1* initialize sum count */ for (line=O; line<240; line+=lO) 1* get level 1 data */ { for (pel~221; pel<242; pel++) { sum +~ ptr->pic arr.line arr[120].pel arr[pel]; } - - - } avgl = sum/504.0; 1* average level 1 */ thrsh[O] «avgO + avgl)/2.0); 1* threshold b/w 0 and 1 */ thrsh[l] «avgl + avg2)/2.0); 1* threshold b/w 1 and 2 */ thrsh[2] «avg2 + avg3) /2.0); 1* threshold b/w 2 and 3 */ thrsh[3] «avg3 + avg4)/2.0); 1* threshold b/w 3 and 4 */ thrsh[4] «avg4 + avg5)/2.0); 1* threshold b/w 4 and 5 */ thrsh[5] «avg5 + avg6)/2.0); 1* threshold b/w 5 and 6 */ thrsh[6] «avg6 + avg7)/2.0); 1* threshold b/w 6 and 7 */ printf("Average level 0 is %f\n",avgO); printf("Average level 1 is %f\n",avgl); printf("Average level 2 is %f\n",avg2); printf("Average level 3 is %f\n",avg3); printf("Average level 4 is %f\n",avg4); printf("Average level 5 is %f\n",avg5); printf("Average level 6 is %f\n",avg6); printf("Average level 7 is %f\n",avg7); fptr ~ fopen("cal.dat","w"); 1* open calibrate file */ for (i=O; i<7; i++) /* write thresholds */ { printf("Threshold %u is %f\n",i,thrsh[i]); fprintf(fptr,"%f\n",thrsh[i]); } fclose(fptr); 1* close calibrate file */ } 161

/* mapgen.c */ /**********************************************************************/ /* Main program to generate and store current digitized */ /* radar map overlays. */ /* The program is designed to handshake with the ImageWise gray level */ /* digitizer, which digitizes current geographical radar map overlays */ 1* from the gray level video output of the Kavouras radar image */ 1* receiver. */ 1* The Kavouras must be set as follows: */ /* Color Kill - Active */ /* Transmitter Maps - Active */ /* Precipitation Levels (All) - Off */ /* Test - Off */ /* The maps are thresholded, and assigned levels of 0 (black), and 7 */ /* (white). */ /* Maps should be stored in files with filenames as follows: */ /* ABCxxx.MAP */ /* where ABC is the radar site identifier, and xxx is the radar range */ 1* (060, 120, 180, 240). */ /* Craig B. Parker */ 1* 3/20/89 */ 1**********************************************************************/ #include /* standard I/O library */ #include /* console I/O library */ Hinclude /* time/date library */ #include /* standard functions library */ #include /* string functions library */ linclude /* DOS functions library */ linclude "declares.h" /* global declarations */ linclude "serial.h" /* serial I/O functions */ linclude "process.h" /* image processing functions */ linclude "pcio.h" /* I/O functions */ maine ) { unsigned int i; /* data array index */ unsigned int line; /* line index */ unsigned int pel; /* pixel index */

ptr = ℑ /* pointer to image */ cptr = &cimage; /* pointer to cimage */ tptr = &timage; /* pointer to timage */ lac com(); /* locate COMl-3 ports */ set-coml(dig baud); /* set up COMl for digitizer */ rdy- coml ( ); - /* ready COMl */ delay(5000l); /* stabilize port */ xmit coml(FULLRES); /* set full resolution */ printf("Receiving data on COMI port ••• \n"); xmit coml (XON) ; /* XON to digitizer */ for (i=lO; i«BYTEMAX); i++) /* load data to image array */ { ptr->data arr[i] = rec coml(); } - - rst coml(); /* reset COMl */ printf("Checking image ••• \n"); check() ; /* check for cntrl characters */ printf("Thresholding image ••• \n"); 162

maps (); /* threshold and build array */ wmap() ; /* write map array to disk */ for (i~O; i

/* upcwx.c */ /**********************************************************************/ /* Main program to load, compress, and uplink current digitized */ /* radar reflectivity patterns. */ /* The program is designed to handshake with the ImageWise gray level */ /* digitizer, which digitizes current radar reflectivity patterns */ /* from the gray level video output of the Kavouras radar image */ /* receiver. */ /* Geographical maps are stored in the aircraft, and are not */ /* digitized from the Kavouras using this program. */ /* The Kavouras must be in the following modes: */ /* Color Kill - Active */ /* Transmitter Maps - Off */ /* Precipitation Levels (All) - On */ /* Test - Off */ /* The program is called as follows: */ 1* >UPCWX ABC xxx */ 1* where ABC is the current radar site, and xxx is the current radar */ 1* range (060, 120, 180, 240). This data is appended to current day/ */ /* date data from the PC, and is transmitted to the receiver in the */ 1* title block. */ 1* Craig B. Parker */ /* 3/20/89 */ /**********************************************************************/ linclude /* standard I/O library */ 'include /* console I/O library */ #include /* time/date library */ linclude /* standard functions library */ linclude /* string functions library */ #include /* DOS functions library */ linclude "declares.h" /* global declarations */ linclude "serial.h" /* serial I/O functions */ #include "process.h" /* image processing functions */ #include "pcio.h" /* PC I/O functions */ main(argc,argv) /* get site & range from call */ int argc; /* number of call arguments */ char *argv[]; /* actual call arguments */ { struct tm *gmt ptr; /* pointer to time structure */ long ltime; - /* time variable */ unsigned int i; /* data array index */ int range; /* radar range */ int pel; /* pixel index variable */ int line; /* line index variable */ char rdr_site[4]; /* radar site identifier */

ptr = ℑ /* pointer to image */ cptr = &cimage; /* pointer to cimage */ tptr = &timage; /* pointer to timage */ if (argc 1= 3) /* proper program call? */ { printf("EXAMPLE USAGE:\n"); printf(n>upcwx cmh 180\n"); exit(O) ; /* exit program */ } strcpy(rdr_site,strupr(argv[l]»; /* copy radar ID string */ 164 range = atoi(argv[2]); /* copy radar range */ Loc com(); /* locate COMl-3 ports */ set-coml(dig baud); /* set up COMI for digitizer */ set-com3(modem baud); /* set up COM3 for transmit */ rdy-coml ();- /* ready COMI */ delay(SOOOl); /* stabilize port */ xmit coml(FULLRES); /* set full resolution */ printf ("Receiving data on COMI port••• \n") ; xmit coml(XON); /* XON to digitizer */ for (i=10; i«BYTEMAX); i++) /* load data to image array */ { ptr->data arr[i] = rec_coml(); } - rst coml(); /* reset COMI */ putenv( "TZ=ESTSEDT") ; /* set time zone correction */ tzset(); /* time zone set */ time (&Itime) ; /* get time */ gmt ptr - gmtime(<ime); /* convert to GMT */ /* till title structure */ strcpy(ptr->pic arr.title blk.site,rdr site); ptr->pic arr.title blk.rng - (unsigned-char)range; ptr->pic-arr.title-blk.min = (unsigned char)gmt ptr->tm min; ptr->pic-arr.title-blk.hr = (unsigned char)gmt ptr->tm hour; ptr->pic-arr.title-blk.day = (unsigned char)gmt ptr->tm mday; ptr->pic-arr.title-blk.mo = (unsigned char)gmt ptr->tm mon; ptr->pic-arr.title-blk.yr = (unsigned char)gmt-ptr->tm-year; printf("Checking image ••• \n"); -- check(); /* chk & place cntrl char */ printf("Quantizing image ••• \n"); quant(); /* quantize 64 to 7 levels */ printf("Compressing image ••• \n"); compress(); /* run length encode */ printf("Creating transmit array••• \n"); tran arr(); /* truncate to transmit array */ printf("Transmitting image on COM3 port••• \n"); rdy com3(); /* ready COM3 */ delay(lOOOOOI); /* stabilize port & txmtr */ for (i=O; i<=240; i++) /* transmit preamble of ones */ { xmit com3(OxFF); } - for (i=O; itdata arr[i]); } - - for (i=O; i<=240; i++) /* transmit ones */ { xmit com3(OxFF); } - rst com3(); /* reset COM3 */ printf("Expanding image which was transmitted••• \n"); bytes(); /* expand to bytes */ expand(); /* expand to original image */ i=O; /* initialize index */ for (line=O; line

{ for (pel=O; pelrpic_arr.rline_arr[line].rpel_arr[pel]; i++; } } display(); /* EGA display */ } 166

/* rxcwx.c */ /**********************************************************************/ /* Main program to receive and display current radar reflectivity */ /* patterns. */ /* Program expects MARK characters to preceed and follow */ /* the block of image data. */ 1* The image is loaded through the serial port, expanded, and */ /* overlayed with the correct geographical map. */ 1* The image is displayed in NWS color format using enhanced EGA */ 1* (16 colors, 640 x 350 resolution). */ /* Craig B. Parker */ /* 5/18/89 */ /**********************************************************************/ Hinclude /* standard I/O library */ Hinclude /* console I/O library */ linclude /* time/date library */ 'include /* standard functions library */ linclude /* string functions library */ linclude /* DOS functions library */ 'include "declares.h" /* global declarations */ linclude "serial.htl /* serial I/O functions */ linclude "process.h" /* image processing functions */ 'include "peio.h" /* PC I/O routines */ maine ) { unsigned int i; /* index variable */ unsigned int j; /* index variable */ int line; /* line index varaiable */ int pel; /* pixel index variable */ int error; /* received errors */ int ones; /* received ones */ int data fIg; /* receive data flag */ int test; unsigned char tempI; unsigned char temp2; tptr == &timage; /* pointer to timage */ loc com(); /* locate COMl-3 */ set-coml(modem baud); /* set up COMI for receive */ rdy- coml ();- /* ready COMI */ delay(5000l); /* stabilize port */ printf("Polling COMI port for data••• \n"); rec coml(); /* discard current byte */ /* loop until decide ones block preamble being sent */ ones == 0; /* initialize ones count */ error == 0; /* initialize errors count */ data fIg == 0; /* data received flag clear */ while (onestdata_arr[O] temp2; /* place temp2 into data array */ tptr->tdata_arr[l] tempI; /* place tempI into data array */ i = 1; /* initialize index */ do /* receive compressed image */ { tptr->tdata arr[i+1] i++; - } while «tptr->tdata_arr[i-1] & tptr->tdata arr[i]) !== OxFF); /* untIl receive two bytes of */ /* ones */ rst coml(); /* reset COMI */ prlntf(ltNo. ones in preamble = %d\n",ones); printf(ltNo. errors in preamble = %d\n",error); printf(ltExpanding image ••• \n"); bytes(); /* expand to bytes */ expand( ); /* expand to original image */ rxchk() ; /* check for errors */ rmap() ; /* read map file */ i=O; /* initialize index */ for (line=O; linerpic arr.rline arr[line].rpel arr[pel] I map[i]); i++; - - - } } display() ; /* EGA display */ } 168

/* rxtest.c */
/**********************************************************************/
/* Test program to simulate receiving, expanding, and displaying      */
/* current digitized radar reflectivity patterns.                     */
/* Since RXCWX.C loads image data into original transmit array,       */
/* this program evaluates all routines except serial I/O              */
/* by simply expanding the transmit array as if it had been           */
/* received.                                                          */
/* The program is called as follows:                                  */
/*     >RXTEST ABC xxx                                                */
/* where ABC is the current radar site, and xxx is the current radar  */
/* range (060, 120, 180, 240).                                        */
/* Craig B. Parker                                                    */
/* 3/20/89                                                            */
/**********************************************************************/
#include <stdio.h>       /* standard I/O library */
#include <conio.h>       /* console I/O library */
#include <time.h>        /* time/date library */
#include <stdlib.h>      /* standard functions library */
#include <string.h>      /* string functions library */
#include <dos.h>         /* DOS functions library */
#include "declares.h"    /* global declarations */
#include "serial.h"      /* serial I/O functions */
#include "process.h"     /* image processing functions */
#include "pcio.h"        /* PC I/O functions */

main(argc,argv)
int argc;
char *argv[];
{
    struct tm *gmt_ptr;  /* pointer to time structure */
    long ltime;          /* time variable */
    unsigned int i;      /* index variable */
    int range;           /* radar range */
    int pel;             /* pixel index variable */
    int line;            /* line index variable */
    char rdr_site[4];    /* radar site identifier */

ptr = ℑ /* pointer to image */ cptr = &cimage; /* pointer to cimage */ tptr = &timage; /* pointer to timage */ if (argc != 3) /* proper program call? */ { printf("EXAMPLE USAGE: \n") ; printf(">rxtest cmh 180\n"); exit(O); /* exit program */ } strcpy(rdr site,strupr(argv[l]»; /* copy radar ID string */ range = atoi(argv[2]); /* copy radar range */ loc com(); /* locate COMl-3 ports */ set-com1(dig baud); /* set up COM1 for digitizer */ rdy- com1 ( ); - 1* ready COM1 */ delay(50001); 1* stabilize port */ xmit com1(FULLRES); /* set full resolution */ printf ("Receiving data on COM1 port••• \n") ; xmit_com1(XON); /* XON to digitizer */ 169

for (i=lO; i«BYTEMAX); i++) /* load data to image array */ { ptr->data arr[i] ~ rec_coml(); } - rst coml(); /* reset COMI */ putenv("TZ=EST5EDTtI) ; /* set time zone correction */ tzset(); /* time zone set */ time(<ime); /* get time */ gmt ptr = gmtime(<ime); /* convert to GMT */ /* lill title structure */ strcpy(ptr->pic arr.title blk.site,rdr site); ptr->pic arr.title blk.rng = (unsigned-char)range; ptr->pic-arr.title-blk.min - (unsigned char)gmt ptr->tm min; ptr->pic-arr.title-blk.hr - (unsigned char)gmt ptr->tm hour; ptr->pic-arr.title-blk.day - (unsigned char)gmt ptr->tm mday; ptr->pic-arr.title-blk.mo - (unsigned char)gmt ptr->tm mon; ptr->pic-arr.title-blk.yr = (unsigned char)gmt-ptr->tm-year; printf("Checking image ••• \n"); -- check(); /* chk & place cntrl char */ printf("Quantizing image ••• \n"); quant(); /* quantize 64 to 7 levels */ tl printf("Compressing image ••• \n ); compress(); /* run length encode */ for (i=O; i<8; i++) /* write histogram info */ printf("No. of level %u = %u\n",i,hist[i]); printf("Creating transmit array••• \n"); tran arr(); /* truncate to transmit array */ printf("Expanding image ••• \n") ; bytes(); /* expand to bytes */ expand(); /* expand to original image */ rxchk() ; /* check for errors */ rmap() ; /* read map */ i=O; /* initialize index */ for (line=O; linerpic_arr.rline_arr[line].rpel_arr[pel] map[i); i++; } } display() ; /* EGA display */ } 170

/* declares.h */
/* global constants */
#define XOFF        0x13      /* ASCII XOFF */
#define XON         0x11      /* ASCII XON */
#define OSC         1.8432e6  /* 8250 clock frequency */
#define PELMAX      256       /* pixels in a line */
#define LINEMAX     244       /* lines in a field */
#define BYTEMAX     62720     /* image array size */
#define FULLRES     0x80      /* full res control to digitizer */
#define FIELD_SYNC  0x40      /* field sync control from digitizer */
#define LINE_SYNC   0x41      /* line sync control from digitizer */
#define FIELD_END   0x42      /* field end control from digitizer */
#define CPELMAX     212       /* pixels in compressed line */
#define CPELBLKS    106       /* compressed 2-pixel blocks/line */
#define CPEL_STRT   25        /* first pixel in compressed image */
#define CPEL_END    236       /* last pixel in compressed image */
#define CLINEMAX    175       /* lines in a compressed field */
#define CLINE_STRT  24        /* first line in compressed image */
#define CLINE_END   198       /* last line in compressed image */
#define CBYTEMAX    18727     /* max compressed image array size */
#define CFIELD_SYNC 0x8       /* compressed image field sync */
#define CLINE_SYNC  0x9       /* compressed image line sync */
#define CFIELD_END  0xA       /* compressed image end field */
#define COMPRESS    0xC       /* compression identifier */
#define COMP_SYNC   0xD       /* compression identifier/line sync */
#define CSTRT_SYNC  0xF       /* line compression ident/line sync */
#define RBYTEMAX    37462     /* max received image array size */
#define DBYTEMAX    37100     /* display array size */

/* global variables */
int dig_baud   = 28800;       /* digitizer baud rate */
int modem_baud = 2400;        /* modem baud rate */

struct title_strc             /* image title structure */
{
    char site[4];             /* radar site */
    unsigned char rng;        /* range */
    unsigned char min;        /* minutes */
    unsigned char hr;         /* hours */
    unsigned char day;        /* day */
    unsigned char mo;         /* month */
    unsigned char yr;         /* year */
};

/* image structure */
struct pel_strc               /* formatted line structure */
{
    unsigned char syncl;              /* line sync */
    unsigned char pel_arr[PELMAX];    /* array of pixels */
};

struct line_strc              /* formatted field structure */
{
    struct title_strc title_blk;         /* image title structure */
    unsigned char syncf;                 /* field sync */
    struct pel_strc line_arr[LINEMAX];   /* array of lines */
    unsigned char endfld;                /* field end */
};

union pic_strc                /* access image array */
{                             /* by format or by data */

    struct line_strc pic_arr;            /* formatted structure */
    unsigned char data_arr[BYTEMAX];     /* data array */
};
union pic_strc image;
union pic_strc *ptr;                     /* pointer to image */

/* compressed image structure */
struct cpel_fld                          /* pixel field */
{
    unsigned char cpel_lo : 4;           /* pixel values */
    unsigned char cpel_hi : 4;           /* pixel values */
};

struct cpel_strc                         /* formatted line structure */
{
    unsigned char chksum : 4;            /* line count check sum */
    unsigned char csyncl : 4;            /* line sync/1st pixel */
                                         /* compression control */
    struct cpel_fld cpel_arr[CPELBLKS];  /* array of 2 pixel blocks */
};

struct cline_strc                        /* formatted field struct */
{
    unsigned char csyncf;                 /* compressed field sync */
    struct cpel_strc cline_arr[CLINEMAX]; /* array of lines */
    unsigned char cendfld;                /* compressed field end */
};

union cpic_strc                          /* access compressed image */
{                                        /* by format or by data */
    struct cline_strc cpic_arr;          /* formatted structure */
    unsigned char cdata_arr[CBYTEMAX];   /* data array */
};
union cpic_strc cimage;
union cpic_strc *cptr;                   /* pointer to cimage */

/* transmitted image structure */
struct tnbl_strc                         /* xmitted pixel data struct */
{
    unsigned char tnbl_lo : 4;           /* lo nibble */
    unsigned char tnbl_hi : 4;           /* hi nibble */
};

struct tblk_strc                         /* xmitted block data struct */
{
    struct title_strc title_blk;         /* title block */
    struct tnbl_strc tnbl_arr[CBYTEMAX]; /* array of 2 nibble blocks */
};

union tpic_strc                          /* access transmit image by */
{                                        /* format or by data */
    struct tblk_strc tpic_arr;            /* formatted structure */
    unsigned char tdata_arr[CBYTEMAX+10]; /* data array */
};
union tpic_strc timage;
union tpic_strc *tptr;                   /* pointer to timage */

/* received image structure */
struct rpel_strc                         /* formatted line structure */
{
    unsigned char syncl;                 /* line sync */
    unsigned char chksum;                /* line count check sum */
    unsigned char rpel_arr[CPELMAX];     /* array of pixels */
};

struct rline_strc                        /* formatted field structure */
{
    struct title_strc title_blk;          /* image title structure */
    unsigned char syncf;                  /* field sync */
    struct rpel_strc rline_arr[CLINEMAX]; /* array of lines */
    unsigned char endfld;                 /* field end */
};

union rpic_strc                          /* access image array */
{                                        /* by format or by data */
    struct rline_strc rpic_arr;          /* formatted structure */
    unsigned char rdata_arr[RBYTEMAX];   /* data array */
};
union rpic_strc rimage;
union rpic_strc *rptr;                   /* pointer to rimage */

unsigned int tran_byte;                  /* # bytes in xmitted image */
unsigned int line_sum[CLINEMAX];         /* line sum for chksum */
unsigned char picture[DBYTEMAX];         /* display array */
unsigned char map[DBYTEMAX];             /* map array */
unsigned int hist[8];                    /* histogram for levels */

/* global functions */
/* serial.h */
loc_com();
set_com1(int);
set_com2(int);
set_com3(int);
rdy_com1();
rdy_com2();
rdy_com3();
rst_com1();
rst_com2();
rst_com3();
xmit_com1(unsigned char);
xmit_com2(unsigned char);
xmit_com3(unsigned char);
unsigned char rec_com1();
delay(long int);

/* process.h */
check();
quant();
unsigned char level(unsigned char);
maps();
unsigned char thresh(unsigned char);
compress();
line_comp();
run_comp();
start_comp();
tran_arr();
bytes();
expand();
rxchk();

/* io.h */
wmap();
rmap();
display();
unsigned char assign(unsigned char);
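The unions above allow each image to be addressed either field by field through its formatted structure or as a flat byte array for serial transfer. The short stand-alone sketch below is illustrative only and is not part of the thesis software; the names in it (hdr_demo, img_demo, and so on) are hypothetical stand-ins chosen to show the same aliasing pattern used by pic_strc, cpic_strc, tpic_strc, and rpic_strc.

/* union_demo.c -- illustrative only: structure view versus raw byte view */
#include <stdio.h>

struct hdr_demo                   /* stand-in for a title block */
{
    char site[4];                 /* radar site, e.g. "CMH" */
    unsigned char rng;            /* range in nmi */
};

union img_demo                    /* access by format or by data */
{
    struct hdr_demo hdr;                          /* formatted view */
    unsigned char raw[sizeof(struct hdr_demo)];   /* byte view for serial I/O */
};

int main(void)
{
    union img_demo d;
    unsigned int i;

    d.hdr.site[0] = 'C'; d.hdr.site[1] = 'M';
    d.hdr.site[2] = 'H'; d.hdr.site[3] = '\0';
    d.hdr.rng = 180;

    for (i = 0; i < sizeof(d.raw); i++)   /* the bytes a serial routine would send */
        printf("%02X ", (unsigned)d.raw[i]);
    printf("\n");
    return 0;
}

Because every member here is a character type, the byte view lines up with the formatted view without padding, which the thesis structures appear to rely on when the transmit array is streamed out a byte at a time.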

/* process.h */ /* image processing functions */ unsigned char tbyte_arr[2*CBYTEMAX]; /* temp storage for */ /* received array */ float thrsh[7]; /* threshold values for */ /* quantization */ check( ) /* checks for control characters in received image */ /* if present, places control characters in compressed image */ { unsigned int i; /* line loop counter */ ptr == ℑ /* pointer to image */ cptr == &cimage; /* pointer to cimage */ if (ptr->pic arr.syncf--FIELD SYNC) /* field sync? */ cptr->cpic arr.csyncf - CFIELD SYNC; /* compressed sync */ else - - /* no field sync */ { printf("ERROR: Missing field sync from digitizerl\n"); exit(O); } if (ptr->pic arr.endfld -- FIELD END) /* field end? */ cptr->cpic arr.cendfld == CFIELD END; /* compressed field end */ else - - /* no field end */ { printf("ERROR: Missing field end from digitizerl\n"); exit(O); } for (i==CLINE STRT; i<-CLINE END; i++) /* ignore unused lines */ { - if (ptr->pic arr.line arr[i].syncl LINE SYNC) /* line sync? */ cptr->cpic arr.cline arr[i-CLINE STRT].csyncl == CLINE SYNC; - - - /* compressed line sync */ else /* no line sync */ { printf("ERROR: Missing line sync from digitizerl\n"); exit(O); } } } quant() /* quantizes 64 levels to 7 levels and places in compressed image format */ 1* level() is called to find quantized level */ 1* thrsh[i] is read from calibration file "cal.dat" */ { FILE *fptr; 1* FILE pointer */ unsigned int i; /* line loop counter */ unsigned int j; /* pixel loop counter */ unsigned char pixel 10; /* 10 pixel in block */ unsigned char pixel=hi; /* hi pixel in block */

ptr == ℑ /* pointer to image */ cptr == &cimage; /* pointer to cimage */ 174

for (i=O; i<8; i++) /* initialize histogram */ hist[i] == 0; /* read threshold array */ if«fptr=fopen("cal.dat","r"»===NULL) /* open file */ { printf("ERROR: Cannot open file CAL.DATI\n"); exit(O) ; } for (i=O; i<7; i++) /* read file */ fscanf(fptr,"%f\n",&thrsh[i]); fclose(fptr); /* close file */ /* transfer quantized data from pic arr to cpic arr */ for (i=CLINE STRT; i<==CLINE END; i++) /* ignore unused lines */ {- - for (j=CPEL STRT; j<=CPEL END; j+=2) /* ignore unused pixels */ {- - pixel hi == ptr->pic arr.line arr[i].pel arr[j]; pixel-lo = ptr->pic-arr.line-arr[i].pel-arr[j+l]; cptr->cpic arr.cline arr[i-CLINE STRT].cpel arr [(j-CPEL STRT)72].cpel hi ~ level(pixel hi); cptr->cpic arr:cline arr[i-CLINE STRT].cpel arr [(j-CPEL STRT)72].cpel 10 level(pixel 10); } - - = - } } unsigned char level(pixel) unsigned char pixel; /* input pixel (8 bits) */ /* places pixel (64 levels) into proper quantized level qpixel (7 levels) */ /* also, computes histogram for use elsewhere */ { unsigned char qpixel; /* output pixel (4 bits) */ if (pixel > Ox3F) { printf("ERROR: Invalid pixel datal\n"); exit(O); } if (pixel >= thrsh[6]) { qpixel = Ox07; /* assign level 7 */ hist[7] += 1; /* increment count */ } if «pixel < thrsh[6]) && (pixel >= thrsh[S]» { qpixel = Ox06; /* assign level 6 */ hist[6] += 1; /* increment count */ } if «pixel < thrsh[S]) && (pixel >= thrsh[4]» { qpixel = OxOS; /* assign levelS */ hist[S] +== 1; /* increment count */ } if «pixel < thrsh[4]) && (pixel >= thrsh[3]» { 175

qpixel == Ox04; /* assign level 4 */ hist[4] +== 1; /* increment count */ } if «pixel < thrsh[3]) && (pixel >== thrsh[2]» { qpixel == Ox03; /* assign level 3 */ hist[3] +== 1; /* increment count */ } if «pixel < thrsh[2]) && (pixel >== thrsh[l]» { qpixel == Ox02; /* assign level 2 */ hist[2] +== 1; /* increment count */ } if «pixel < thrsh[l]) && (pixel >== thrsh[O]» { qpixel == Ox01; /* assign level 1 */ hist[l] += 1; /* increment count */ } if (pixel < thrsh[O]) { qpixel = OxOO; /* assign level 0 */ hist[O] +== 1; /* increment count */ } return (qpixel); } maps() /* copy and threshold maps from image to map[i] for storage & display */ /* thesh() is called to threshold map */ { FILE *fptr; /* FILE pointer */ int line; /* line counter */ int pel; /* pixel counter */ unsigned int i; /* index */ ptr == ℑ /* pointer to image */ rptr = &rimage; /* pointer to rimage */ if«fptr=fopen("cal.dat","r"»==NULL) /* open file */ { printf("ERROR: Cannot open file CAL.DATI\n"); exit(0) ; } for (i=O; i<7; i++) /* read thrsh[i] */ { fscanf(fptr,"%f\n",&thrsh[i]); } fclose(fptr); /* close calibrate file */ /* transfer thresholded data from pic arr to map[i] */ i=O; - for (line=CLINE STRT; line<=CLINE END; line++) /* lines used */ {- - for (pel=CPEL STRT; pel<=CPEL END; pel++) /* pels used*/ {- - map[i] = thresh(ptr->pic_arr.line_arr[line].pel_arr[pel]); i++; } 176

} } unsigned char thresh(mpixel) unsigned char mpixel; /* input pixel */ 1* thresholds map image for clean up */ { unsigned char tpixel; /* output pixel */ if (mpixel > Ox3F) { printf("ERROR: Invalid pixel datal\n"); exit(O); } tpixel ~ OxOO; /* assign level 0 */ if (mpixel> thrsh[2]) tpixel ~ Ox07; /* assign level 7 */ return (tpixel); } compress() 1* compress image using run length coding */ /* calls line code() for entire-line coding */ 1* calls start code() for run coding starting at first pixel */ 1* calls run code() for generalized run coding */ { - int i; 1* line index */ int j; /* pixel index */ int index; /* run start index */ int run; /* run counter */ struct bits /* 1 bit flags */ { unsigned char start : 1; /* start flag */ unsigned char cont : 1; /* continue flag */ unsigned char hi : 1; /* hi flag */ unsigned char unused : 5; /* not used */ } struct bits flag; unsigned char cpixel hi; /* hi pixel in pixel block */ unsigned char cpixel-lo; /* 10 pixel in pixel block */ unsigned char cpixel-old; /* previous block hi pixel */ cptr ~ &cimage; - /* pointer to cimage */ for (i=O; icpic arr.cline arr[i].cpel arr[j].cpel hi; cpixel-lo == cptr->cpic-arr.cline-arr[i].cpel-arr[j].cpel-lo; line sum[i] +- (cpixel-lo + cpixel hi); /*-add to line-sum */ if (J == 0) - - /* first pixel block? */ { 177

if (cpixel 10 ~~ cpixel_hi) /* 1st two pixels alike? */ { - flag. start ~ 1; /* set start encode flag */ flag.cont ~ 1; /* set run encode flag */ flag.hi ~ 1; /* set hi pixel flag */ index == j; /* cpic arr index */ run +~ 1; /* incr-run count */ } } else /* not first pixel block */ { if (cpixel hi ~= cpixel_old) /* hi - last block lo? */ { - run +- 1; /* incr run count */ if (flag.cont ~== 0) /* start new run? */ { index ~ j - 1; /* point to run start */ flag.cont ~ 1; /* set run encode flag */ flag.hi ~ 0; /* clear hi pixel flag */ } if (cpixel 10 == cpixel_hi) /* current hi ~ lo? */ { - run +~ 1; /* incr run count */ if (flag.cont == 0) /* start new run? */ { index = j; /* point to run start */ flag.cont = 1; /* set run encode flag */ flag.hi = 1; /* set hi pixel flag */ } } else /* end of run */ { if (flag.start == 1) /* start encode flag? */ { if (run> 3) /* run length> 3? */ { if (run < CPELMAX) /* run length < CPELMAX? */ { start code(i,cpixel hi,run); } - - else /* run length >= CPELMAX */ { printf ("ERROR: Run length >= CPELMAXI\n"); exit(O); } } flag. start = 0; /* rst start encode flag */ } else /* not start encoded */ { if (flag.cont == 1) /* run encode flag set? */ { if (run> 3) /* run length> 3? */ { if (run < CPELMAX) /* length < CPELMAX? */ 178

{ run code(i,index,flag.hi,cpixel hi,run); } - - else 1* run length >= CPELMAX */ { printf ("ERROR: Run length >- CPELMAXI\n"); exit(O); } } } } flag.cont = 0; 1* reset run encode flag */ run == 1; 1* reset run counter */ } } else 1* hi not equal 10 */ { if (flag.start == 1) /* start encode flag? */ { if (run> 3) /* run length > 3? */ { if (run < CPELMAX) /* run length < CPELMAX? */ { start code(i,cpixel old,run); } - - else 1* run length >= CPELMAX */ { printf ("ERROR: Run length >= CPELMAXI \n") ; exit (0) ; } } flag. start = 0; /* rst start encode flag */ } else /* not start encoded */ { if (flag.cont == 1) /* start new run? */ { if (run> 3) /* run> 3? */ { if (run < CPELMAX) 1* run < CPELMAX? */ { run code(i,index,flag.hi,cpixel old,run); } - - else /* run length >= CPELMAX */ { printf ("ERROR: Run length >= CPELMAXI\n"); exit(0) ; } } } } flag.cont 0; /* reset run encode flag */ run = 1; /* reset run count */ 179

if (cpixel 10 == cpixel_hi) /* current blck lo-hi? */ { - run += 1; /* incr run count */ if (flag.cont == 0) /* start new run? */ { index - j; /* point to run start */ flag.cont - 1; /* set run encode flag */ flag.hi - 1; /* set hi pixel flag */ } } } } if (j (CPELBLKS-l» /* end of line? */ { if (flag.start == 1) /* start encode flag? */ { if (run> 3) /* run length> 3? */ { if (run < CPELMAX) /* length < CPELMAX? */ { start code(i,cpixel hi,run); } - - else /* run length >- CPELMAX */ { if (run == CPELMAX) /* run length = CPELMAX? */ { line code(i,cpixel hi); } - - else /* run length> CPELMAX */ { printf ("ERROR: Run length exceeds one linel\n"); exit(0) ; } } } flag. start 0; /* rst start encode flag */ } else /* not start encoded */ { if (flag.cont == 1) /* run encode flag set? */ { if (run> 3) /* run length> 3? */ { if (run < CPELMAX) /* run length < CPELMAX? */ { run code(i,index,flag.hi,cpixel hi,run); } - - else /* run length >= CPELMAX */ { printf ("ERROR: Run length >= CPELMAXI\n"); exit(O); } } } 180

} flag.cont = 0; /* reset run encode flag */ run = 1; /* reset run counter */ } cpixel old = cpixel_lo; /* set old = current 10 */ } - cptr->cpic_arr.cline_arr[i).chksum (line sum[i) & OxOF); /* set chksum */ } }

start_code(i,cpixel,run)
int i;                                   /* line index */
int run;                                 /* run length */
unsigned char cpixel;                    /* pixel value */
{
    cptr = &cimage;                      /* pointer to cimage */
    cptr->cpic_arr.cline_arr[i].csyncl = COMP_SYNC;
    cptr->cpic_arr.cline_arr[i].cpel_arr[0].cpel_hi = cpixel;
    cptr->cpic_arr.cline_arr[i].cpel_arr[0].cpel_lo = (run & 240)>>4;
    cptr->cpic_arr.cline_arr[i].cpel_arr[1].cpel_hi = (run & 15);
}
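Because each run length must fit in the two 4-bit pixel fields that follow the control nibble, start_code() and run_code() split the count into a high and a low nibble, and expand() reassembles it. The sketch below is illustrative only and simply repeats that split and its inverse, assuming nothing beyond the run length staying below CPELMAX (212), which the encoder enforces.

/* nibble_demo.c -- illustrative only: the run-length nibble split */
#include <stdio.h>

int main(void)
{
    unsigned int run = 175;                   /* example run length */
    unsigned char hi = (run & 240) >> 4;      /* high nibble: 0xA */
    unsigned char lo = (run & 15);            /* low nibble:  0xF */
    unsigned int back = (hi << 4) + lo;       /* decoder side, as in expand() */

    printf("run=%u -> nibbles %X %X -> %u\n",
           run, (unsigned)hi, (unsigned)lo, back);
    return 0;
}

For example, a run of 175 identical pixels is carried as the nibble pair 0xA, 0xF.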

line_code(i,cpixel)
int i;                                   /* line index */
unsigned char cpixel;                    /* pixel value */
{
    cptr = &cimage;                      /* pointer to cimage */
    cptr->cpic_arr.cline_arr[i].csyncl = CSTRT_SYNC;
    cptr->cpic_arr.cline_arr[i].cpel_arr[0].cpel_hi = cpixel;
}

run code(i,index,hi,cpixel,run) int-i; /* line index */ int index; /* run start index */ int run; /* run length */ char hi; /* hi pixel flag */ unsigned char cpixel; /* pixel value */ { cptr = &cimage; /* pointer to cimage */ switch(hi) { case 1: /* start at hi pixel */ { cptr->cpic arr.cline arr[i].cpel arr[index].cpel hi COMPREss; - - - cptr->cpic arr.cline arr[i).cpel arr[index].cpel 10 cpixel; - - - cptr->cpic arr.cline arr[i).cpel arr[index+l).cpel hi (run-& 240»>4; - - cptr->cpic arr.cline arr[i].cpel arr[index+l].cpel 10 (run-& 15); - - - break; } case 0: /* start at 10 pixel */ { 181

cptr->cpic arr.cline arr[i].cpel arr[index].cpel 10 ~ COMPRESS; - - - cptr->cpic arr.cline arr[i].cpel arr[index+l].cpel hi cpixel; - - - cptr->cpic arr.cline arr[i].cpel arr[index+l].cpel 10 ~ (run-& 240»>4; - - cptr->cpic arr.cline arr[i].cpel arr[index+2].cpel hi (run-& 15); - - - break; } } } tran arr() /* performs scan of compressed image array (cimage) and removes redundant pixels. Data is written to final transmit image (timage)*/ { int i; /* cimage index */ int j; /* timage index */ unsigned char hi; 1* hi pixel in block */ unsigned char 10; /* 10 pixel in block */ unsigned char next hi; /* hi pixel in next block */ unsigned char next-lo; /* 10 pixel in next block */ unsigned char last-hi; /* hi pixel in last block */ unsigned char last=lo; /* 10 pixel in last block */ unsigned char count; /* data index */ float comp; /* compression ratio */ struct bits /* bit flags */ { unsigned char odd c · 1; /* cimage odd boundary flag */ unsigned char odd-t · 1 ; /* timage odd boundary flag */ unsigned char unused·: 6; /* unused */ } ; struct bits flag; ptr = ℑ /* pointer to image */ cptr = &cimage; /* pointer to cimage */ tptr = &timage; /* pointer to timage */ tptr->tpic arr.title blk = ptr->pic arr.title blk; /* cpy ttl blk */ tptr->tpic-arr.tnbl arr[O].tnbl hi ~ 0; /* copy first byte */ tptr->tpic-arr.tnbl-arr[O].tnbl-lo cptr->cdata arr[O]; flag. odd c- 0; - - /* initialize odd c flag */ flag.odd-t = 0; /* initialize odd-t flag */ i = 1; - /* init cimage index */ j = 1; /* init timage index */ do /* main do loop */ { hi = (cptr->cdata arr[i] & 240»>4; /* get hi nibble */ 10 = (cptr->cdata=arr[i] & 15); /* get 10 nibble */ next hi = (cptr->cdata arr[i+l] & 240»>4; /* next hi nibble */ next-lo = (cptr->cdata-arr[i+l] & 15); /* next 10 nibble */ last-hi ~ (cptr->cdata-arr[i+2] & 240»>4; /* last hi nibble */ last-lo = (cptr->cdata-arr[i+2] & 15); /* last 10 nibble */ if «(hi & 8) ~= 8) &&-(flag.odd c == 0» /* hi = control & */ { - /* cimage even bound? */ switch (hi & 7) /* get control type */ 182

{ case 1: /* line sync */ { if (flag.odd t == 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnb1 hi ~ hi; /* control */ tptr->tpic-arr.tnbl-arr[j].tnb1-lo ~ 10; /* chksum */ } - - - else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 - hi; /* control */ tptr->tpic-arr.tnbl-arr[j+1].tnDl_hi = 10;/* chksum */ } - i += 1; /* incr cimage index */ j +- 1; /* incr timage index */ break; } case 4: /* run encode */ { if (flag.odd t == 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi - hi; /* control */ tptr->tpic-arr.tnbl-arr[j].tnbl-lo = 10; /* pixel */ tptr->tpic=arr.tnbl=arr[j+1].tnDl_hi = next hi; /* count hi */ tptr->tpic_arr.tnbl_arr[j+1].tnbl_lo next 10; /* count 10 */ } else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnb1 10 - hi; /* control */ tptr->tpic-arr.tnbl-arr[j+1].tnDl hi = 10; /* pixel */ tptr->tpic=arr.tnbl=arr[j+1].tnbl=10 next hi; /* count hi */ tptr->tpic_arr.tnbl_arr[j+2].tnbl_hi next 10; /* count 10 */ } count = (next hi«4) + next 10; /* get count */ i += (unsigned int) (count/2); /* incr cimage index */ j += 2; /* incr timage index */ if «count%2) > 0) /* odd i incr? */ flag.odd c 1; /* set odd c flag */ break; - } case 5: /* start encode */ { if (flag. odd t == 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi = hi; /* control */ tptr->tpic-arr.tnbl-arr[j].tnbl-lo = 10; /* chksum */ tptr->tpic=arr.tnbl=arr[j+1].tnD1_hi next hi; 7* pixel */ tptr->tpic_arr.tnbl_arr[j+1].tnbl_lo next 10; /* count hi */ tptr->tpic_arr.tnbl_arr[j+2].tnbl_hi last_hi; 183

/* count 10 */ j +== 2; /* incr timage index */ } else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 - hi; /* control */ tptr->tpic-arr.tnbl-arr[j+1].tnbl hi - 10;/* chksum */ tptr->tpic=arr.tnbl=arr[j+1].tnbl=10 == next hi; 7* pixel */ tptr->tpic_arr.tnbl_arr[j+2].tnbl_hi next 10; /* count 10 */ tptr->tpic_arr.tnbl_arr[j+2].tnbl_lo == last hi; /* count hi */ j +== 3; /* incr timage index */ } count == (next 10«4) + last hi; /* get count */ i +- (unsigned int) (count/2); /* incr cimage index */ i +- 1; if «count%2) > 0) /* odd i incr? */ flag.odd c == 1; /* set odd_c flag */ flag. odd t =-flag.odd t; /* toggle odd t flag */ break; - - }

case 7: /* line encode */ { if (flag.odd t ==== 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi == hi; /* control */ tptr->tpic-arr.tnbl-arr[j].tnbl-lo == 10; /* chksum */ tptr->tpic-arr.tnbl-arr[j+1].tnbl hi == next hi; -- - 7* pixel */ j +== 1; /* incr timage index */ } else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 == hi; /* control */ tptr->tpic-arr.tnbl-arr[j+1].tnbl hi == 10;/* chksum */ tptr->tpic=arr.tnbl=arr[j+1].tnbl=lo == next hi; 7* pixel */ j +== 2; /* incr timage index */ } i +== (CPELBLKS+1); /* incr cimage index */ flag.odd t == -flag.odd t; /* toggle odd t flag */ break; - - } } } else /* hi nibble not */ { /* control */ if (flag.odd c === 0) /* cimage even bound? */ { - if (flag. odd t ==== 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi == hi; /* hi nbl */ } - - - 184

else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 - hi; /* hi nbl */ j += 1; - - -/ * incr timage index */ } flag. odd t = -flag.odd t; /* toggle odd t flag */ flag.odd-c - 1; - /* set odd_c flag */ } - if «10 & 8) == 8) /* 10 = control? */ { switch (10 & 7) /* get control type */ { case 1: /* line sync */ { if (flag.odd t == 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi - 10;/* cntrl */ tptr->tpic-arr.tnbl-arr[j].tnbl-lo - next hi; }-- - /*-chksum */ else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 = 10;/* cntrl */ tptr->tpic-arr.tnbl-arr[j+l].tnbl hi = next hi; }-- - /* chksum */ i += 1; /* incr cimage index */ j += 1; /* incr timage index */ break; }

case 2: /* field end */ { if (flag.odd t == 0) /* timage even bound? */ tptr->tpic_arr.tnbl_arr[j].tnbl hi = 10; /* CFIELD END */ else /* timage-odd bound */ tptr->tpic arr.tnbl arr[j].tnbl 10 = 10; -- /* CFIELD END */ break; - }

case 4: /* run encode */ { if (flag.odd t == 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi = 10;/* cntrl */ tptr->tpic=arr.tnbl=arr[j].tnbl=lo - next hi; /* pixel */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_hi = next 10; /* count 10 */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_lo = last hi; /* count hi */ } else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 = 10;/* cntrl */ tptr->tpic=arr.tnbl-arr[j+l].tnbl hi next_hi; 185

/* pixel */ tptr->tpic_arr.tnb1_arr[j+l].tnb1_10 ~ next 10; /* count hi */ tptr->tpic_arr.tnb1_arr[j+2].tnb1_hi ~ last hi; /* count 10 */ } count ~ (next 10«4) + last hi; /* get count */ i +~ (unsigned int) (count/2); /* incr cimage index */ j +~ 2; /* incr timage index */ if «count%2) > 0) /* odd i incr? */ { i +== 1; /* incr cimage index */ f1ag.odd c == 0; /* clear odd c flag */ } - break; } case 5: /* start encode */ { if (f1ag.odd t ==== 0) /* timage even bound? */ { - tptr->tpic arr.tnb1 arr[j].tnb1 hi == 10;/* cntrl */ tptr->tpic=arr.tnbl=arr[j].tnb1=10 == next hi; /*-chksum */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_hi ~ next 10; /* pixel */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_lo == last hi; /* count hi */ tptr->tpic_arr.tnbl_arr[j+2].tnb1_hi == last 10; /* count 10 */ j +== 2; /* incr timage index */ } else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 == 10;/* cntrl */ tptr->tpic=arr.tnbl-arr[j+l].tnbl hi == next hi; /* chksum */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_lo == next 10; /* pixel */ tptr->tpic_arr.tnbl_arr[j+2].tnbl_hi ~ last hi; /* count hi */ tptr->tpic_arr.tnb1_arr[j+2].tnbl_lo == last 10; /* count 10 */ j +== 3; /* incr timage index */ } count == (last hi«4) + last 10; /* get count */ i +== (unsigned int) (count/2); /* incr cimage index */ i +== 1; if «count%2) > 0) /* odd i incr? */ { flag.odd c 0; /* clear odd c flag */ i +== 1; - /* incr cimage index */ } flag.odd t == -flag.odd_t; /* toggle odd t flag */ break; - } 186

case 7: /* line encode */ { if (flag. odd t ~= 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi -= 10;/* cntrl */ tptr->tpic=arr.tnbl=arr[j].tnbl=lo = next hi; /*-chksum */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_hi = next 10; /* pixel */ j += 1; /* incr timage index */ } else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 = 10;/* cntrl */ tptr->tpic=arr.tnbl=arr[j+l].tnbl_hi = next hi; /* chksum */ tptr->tpic_arr.tnbl_arr[j+l].tnbl_lo = next 10; /* pixel */ j +~ 2; /* incr timage index */ } i += (CPELBLKS+l); /* incr cimage index */ flag.odd t ~ -flag. odd t; /* toggle odd t flag */ break; - - } } } else /* 10 1= control */ { if (flag. odd t == 0) /* timage even bound? */ { - tptr->tpic arr.tnbl arr[j].tnbl hi = 10; /* 10 nibble */ } - - - else /* timage odd bound */ { tptr->tpic arr.tnbl arr[j].tnbl 10 = 10; /* 10 nibble */ j += 1; - - - /* incr timage index */ } flag.odd t -flag.odd t; /* toggle odd t flag */ flag.odd-c 0; - /* clear odd-c flag */ i += 1; ­ /* incr cimage index */ } } } while «(hi«4)+10) 1= CFIELD_END); /* loop exit test */ tran byte = j + 11; 1* total I bytes */ comp-= «float)(i+l»/«float)tran byte); /* compression */ printf("Digitizer output image = 62710 bytes\n"); printf(tlTruncated and quantized image = %d bytes\n",i+l); printf("Run-length coded image = %u bytes\n",tran byte); printf("Compression ratio due to run-length coding -= %5.2f\n",comp); } bytes() /* expands received image nibbles(timage.tnbl arr[CBYTEMAX]) to bytes */ /* (tbyte arr[2*CBYTEMAX]) */ - { - 187

unsigned int i; /* index */ unsigned int j; /* index */ tptr == &timage; /* pointer to timage */ for (i==O; i<10; i++) /* copy title block */ { tbyte arr[i] == tptr->tdata_arr[i]; } - j == 10; for (i==10; i«CBYTEMAX+IO); i++) /* expand and copy */ { tbyte arr[j] == «tptr->tdata arr[i] & OxFO»>4); tbyte-arr[j+l] == (tptr->tdata_arr[i] & OxOF); j +==2;- } } expand () /* expands received image bytes (tbyte_arr) to received picture */ /* stucture (rimage) */ { unsigned int i; /* array index */ unsigned int j; /* run index */ int pel; /* pixel index */ int line; /* line index */ int run; /* run length */ rptr == &rimage; /* pointer to rimage */ for (i==O; i<10; i++) /* copy title block */ { rptr->rdata arr[i] == tbyte_arr[i]; } - if «tbyte_arr[10] I tbyte_arr[ll) !== CFIELD_SYNC) /* field sync */ { printf(lfERROR: No field sync found!\nlf); exit(O); } rptr->rpic arr.syncf FIELD_SYNC; /* place field sync */ /* expand Image */ i == 12; /* start at 12th byte */ pel == 0; /* init pel counter */ line == -1; /* init line counter */ while «tbyte arr[i] tbyte_arr[i+l]) !== CFIELD_END) /* fld end? */ { - if «tbyte arr[i] & 8) ==== 8) /* control byte? */ { - switch (tbyte arr[i) & 7) /* get control type */ { - case 1: /* line sync */ { pel == 0; /* pixel index */ line++; /* start new line */ rptr->rpic arr.rline arr[line).syncl == LINE SYNC; rptr->rpic-arr.rline-arr[line].chksum == tbyte arr[i+l); i+==2; - - /* increment array index */ break; 188

} case 4: /* run encode */ { run ~ (tbyte arr[i+2]«4) + tbyte arr[i+3];/* rn lngth */ if (run < CPELMAX) /* run length < CPELMAX? */ { for (j=O; jrpic arr.rline arr[line].rpel arr[pel+j] tbyte arr [1+1] ;- - } - pel += run; /* increment pel index */ i+=4; /* increment array index */ break; } else /* run length >= CPELMAX */ { printf(tlERROR: Run length >= CPELMAXI\ntl); exit(O); } } case 5: /* start encode */ { pel = 0; /* pixel index */ line++; /* start new line */ rptr->rpic arr.rline arr[line].syncl = LINE SYNC; rptr->rpic-arr.rline-arr[line].chksum = tbyte arr[i+1]; run = (tbyte arr[i+3T«4) + tbyte arr[i+4];/*-rn lngth */ if (run < CPELMAX) /* run length < CPELMAX? */ { for (j=O; jrpic arr.rline arr[line].rpel arr[pel+j] tbyte arr[1+2]; - - } - pel += run; /* increment pel index */ i+=5; /* increment array index */ break; } else /* run length >= CPELMAX */ { printf(tlERROR: Run length >= CPELMAXI\ntl); exit(O); } } case 7: { pel = 0; /* pel index */ line++; /* start new line */ rptr->rpic arr.rline arr[line].syncl - LINE SYNC; rptr->rpic-arr.rline-arr[line].chksum = tbyte arr[i+l]; run = CPELMAx; - - for (j=O; jrpic arr.rline arr[line].rpel arr[pel+j] = tbyte_arr[!+2]; - - 189

} pel +== run; /* increment pel index */ i+==3; /* increment array index */ break; } } } else /* copy current pixel */ { rptr->rpic arr.rline arr[line).rpel arr[pel) == tbyte arr[i); i++; - - -/* increment array index */ pel++; /* increment pel index */ } if (i >= CBYTEMAX) /* exceeded max size? */ { printf("ERROR: No field end found!"); exit(O); } } rptr->rpic_arr.endfld /* place end field */ } rxchk( ) /* checks transmitted chksum against received pixel line sum */ /* and checks for control characters */ { unsigned char chksUtn; /* transmitted chksum */ int line; /* line index variable */ int pel; /* pixel index variable */ rptr = &rimage; /* pointer to rimage */ if (rptr->rpic arr.syncf != FIELD_SYNC) /* field sync check */ { - printf("ERROR: No field sync found!\n"); exit(O); } for (line=O; linerpic arr.rline arr[line].syncl 1== LINE SYNC) {-- /* line sync check */ printf("ERROR: Missing sync at line %d!\n",line); exit(O); } chksum = rptr->rpic arr.rline arr[line].chksUtn; /* get chksum */ line sum[line] = 0; - - /* initialize line sum */ for (pel=O; pelrpic arr.rline arr[line].rpel arr[pel); } - - - - if (chksum != (line sum[line) & OxOF» /* chksum I==line sum? */ { - printf("ERROR: Checksum error at line %d!\n",line); exit(O); } } if (rptr->rpic_arr.endfld 1== FIELD_END) /* field end check */ 190

{ printf("ERROR: No field end found!\n"); exit(O); } } 191

/* serial.h */
/* functions for serial port communication */
/* with digitizer (COM1) and transmitter modem (COM2 or COM3) */

struct com                          /* port register address structure */
{
    unsigned rcv_reg;               /* receiver buffer register */
    unsigned xmit_reg;              /* transmitter holding register */
    unsigned divlo_ltch;            /* divisor latch (LSB) */
    unsigned divhi_ltch;            /* divisor latch (MSB) */
    unsigned ien_reg;               /* interrupt enable register */
    unsigned iid_reg;               /* interrupt identification register */
    unsigned lctl_reg;              /* line control register */
    unsigned mctl_reg;              /* modem control register */
    unsigned lst_reg;               /* line status register */
    unsigned mst_reg;               /* modem status register */
};
struct com com1;
struct com com2;
struct com com3;
unsigned com1_reg = 0x03F8;         /* COM1 location */
unsigned com2_reg = 0x02F8;         /* COM2 location */
unsigned com3_reg = 0x0338;         /* COM3 location */

loc_com()
/* assigns addresses of 8250 registers for COM1-3 ports */
{
    /* COM1 */
    com1.rcv_reg = com1_reg;
    com1.xmit_reg = com1_reg;
    com1.divlo_ltch = com1_reg;
    com1.divhi_ltch = com1_reg + 1;
    com1.ien_reg = com1_reg + 1;
    com1.iid_reg = com1_reg + 2;
    com1.lctl_reg = com1_reg + 3;
    com1.mctl_reg = com1_reg + 4;
    com1.lst_reg = com1_reg + 5;
    com1.mst_reg = com1_reg + 6;
    /* COM2 */
    com2.rcv_reg = com2_reg;
    com2.xmit_reg = com2_reg;
    com2.divlo_ltch = com2_reg;
    com2.divhi_ltch = com2_reg + 1;
    com2.ien_reg = com2_reg + 1;
    com2.iid_reg = com2_reg + 2;
    com2.lctl_reg = com2_reg + 3;
    com2.mctl_reg = com2_reg + 4;
    com2.lst_reg = com2_reg + 5;
    com2.mst_reg = com2_reg + 6;
    /* COM3 */
    com3.rcv_reg = com3_reg;
    com3.xmit_reg = com3_reg;
    com3.divlo_ltch = com3_reg;
    com3.divhi_ltch = com3_reg + 1;
    com3.ien_reg = com3_reg + 1;
    com3.iid_reg = com3_reg + 2;
    com3.lctl_reg = com3_reg + 3;
    com3.mctl_reg = com3_reg + 4;
    com3.lst_reg = com3_reg + 5;
    com3.mst_reg = com3_reg + 6;
}

set_com1(baud1)
int baud1;
/* sets COM1 port for baud1 */
{
    int divisor;
    float rate;

    rate = (float)baud1;
    divisor = (int)(OSC/(rate * 16.0));      /* 8250 divisor value */
    outp(com1.lctl_reg,0x80);                /* access divisor latches */
    outp(com1.divlo_ltch,divisor);           /* divisor LSB */
    outp(com1.divhi_ltch,(divisor >> 8));    /* divisor MSB */
    outp(com1.lctl_reg,0x03);                /* no parity, 1 stop bit, 8 data bits */
}

set_com2(baud2)
int baud2;
/* sets COM2 port for baud2 */
{
    int divisor;
    float rate;

    rate = (float)baud2;
    divisor = (int)(OSC/(rate * 16.0));      /* 8250 divisor value */
    outp(com2.lctl_reg,0x80);                /* access divisor latches */
    outp(com2.divlo_ltch,divisor);           /* divisor LSB */
    outp(com2.divhi_ltch,(divisor >> 8));    /* divisor MSB */
    outp(com2.lctl_reg,0x03);                /* no parity, 1 stop bit, 8 data bits */
}

set_com3(baud3)
int baud3;
/* sets COM3 port for baud3 */
{
    int divisor;
    float rate;

    rate = (float)baud3;
    divisor = (int)(OSC/(rate * 16.0));      /* 8250 divisor value */
    outp(com3.lctl_reg,0x80);                /* access divisor latches */
    outp(com3.divlo_ltch,divisor);           /* divisor LSB */
    outp(com3.divhi_ltch,(divisor >> 8));    /* divisor MSB */
    outp(com3.lctl_reg,0x03);                /* no parity, 1 stop bit, 8 data bits */
}

rdy_com1()
/* -DTR and -RTS on COM1 */
{
    outp(com1.mctl_reg,0x03);                /* assert -DTR and -RTS */
}

rdy_com2()
/* -DTR and -RTS on COM2 */
{
    outp(com2.mctl_reg,0x03);                /* assert -DTR and -RTS */
}

rdy_com3()
/* -DTR and -RTS on COM3 */
{
    outp(com3.mctl_reg,0x03);                /* assert -DTR and -RTS */
}

rst_com1()
/* DTR and RTS on COM1 */
{
    outp(com1.mctl_reg,0x00);                /* reset -DTR and -RTS */
}

rst_com2()
/* DTR and RTS on COM2 */
{
    outp(com2.mctl_reg,0x00);                /* reset -DTR and -RTS */
}

rst_com3()
/* DTR and RTS on COM3 */
{
    outp(com3.mctl_reg,0x00);                /* reset -DTR and -RTS */
}

xmit_com1(byte)
unsigned char byte;
/* transmits bytes on COM1 */
{
    while ((inp(com1.lst_reg) & 0x20) == 0)  /* wait until transmitter */
        ;                                    /* holding register empty */
    while (inp(com1.rcv_reg) == XOFF)        /* wait if XOFF is sent */
        ;                                    /* from digitizer */
    outp(com1.xmit_reg,byte);                /* else output data byte */
}

xmit_com2(byte)
unsigned char byte;
/* transmits bytes on COM2 */
{
    while ((inp(com2.lst_reg) & 0x20) == 0)  /* wait until transmitter */
        ;                                    /* holding register empty */
    outp(com2.xmit_reg,byte);                /* else output data byte */
}

xmit_com3(byte)
unsigned char byte;
/* transmits bytes on COM3 */
{
    while ((inp(com3.lst_reg) & 0x20) == 0)  /* wait until transmitter */
        ;                                    /* holding register empty */
    outp(com3.xmit_reg,byte);                /* else output data byte */
}

unsigned char rec_com1()
/* receives data bytes from COM1 */
{
    while ((inp(com1.lst_reg) & 0x01) == 0)  /* wait if receive buffer */
        ;                                    /* register is empty */
    return(inp(com1.rcv_reg));               /* else input data byte */
}

delay(incr)
long int incr;
/* general purpose delay loop */
{
    long int k;

    for (k=1; k<incr; k++)                   /* software timing loop */
        ;
}
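set_com1() through set_com3() derive the 8250 divisor from the 1.8432 MHz UART clock as OSC/(16 x baud), so the two rates used by the system work out to 48 (0x30) for the 2400 bit/s modem link and 4 for the 28800 bit/s digitizer link. The sketch below is illustrative only; it repeats that arithmetic outside the port I/O code so the latch values can be checked by hand.

/* divisor_demo.c -- illustrative only: 8250 baud-rate divisor arithmetic */
#include <stdio.h>

#define OSC 1.8432e6                 /* UART clock, as in declares.h */

int main(void)
{
    int rates[2] = { 2400, 28800 };  /* modem_baud and dig_baud */
    int i;

    for (i = 0; i < 2; i++)
    {
        int divisor = (int)(OSC / (rates[i] * 16.0));
        printf("%5d baud: divisor = %d (LSB 0x%02X, MSB 0x%02X)\n",
               rates[i], divisor, divisor & 0xFF, (divisor >> 8) & 0xFF);
    }
    return 0;
}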

/* pcio.h */ /* functions for read/write to disk files and write to screen */ Idefine MAXR 350 /* rows */ 'define MAXC 640 /* columns */ Idefine PIX 8 /* pixels in one service element */ Idefine VIDIO OxlO /* video service - interrupt 10 (hex) */ 'define MAXB (MAXC/PIX) /* pixels in one service block */ wmap( ) /* routine to write map array to disk file */ { FILE *fptr; /* FILE pointer */ unsigned int i; /* array index */ char file_name[30]; /* file name */

lt printf(ltEnter desired map image file name:\n ); /* file should be of form: ABCxxx.map */ /* where ABC is radar site, xxx is range in nmi */ scanf("%s",file name); /* read file name */ fptr==fopen(file-name,"wb"); /* open file */ /* write file *7 fwrite«void *)map,sizeo£(unsigned char),DBYTEMAX, fptr); fclose(fptr); /* close file */ } rmap( ) /* routine to identify and read map array from disk */ { FILE *fptr; /* FILE pointer */ unsigned int i; /* array index */ char read name[13]; /* file name */ char file-suff[8]; /* filename suffix */ int range; /* radar range */ rptr == &rimage; /* pointer to rimage */ range == (int)rptr->rpic_arr.title_blk.rng; /* get range */ strcpy(read name,rptr->rpic arr.title blk.site); /* get site */ switch (range) - - /* select proper suffix */ { case 60: /* 60 nmi */ { strcpy(file_suff,"060.MAP"); /* 60 nmi map */ break; } case 120: /* 120 nmi */ { strcpy(file suff,"120.MAP"); /* 120 nmi map */ break; - } case 180: /* 180 nmi */ { strcpy(file_suff,"180.MAP"); /* 180 nmi map */ break; } case 240: /* 240 nmi */ { 196

strcpy(file suff,"240.MAP"); /* 240 nmi map */ break; - } default: /* no map */ { printf("ERROR: No map overlay for %d nmi range!\n", range) ; exit(O); /* terminate */ } } strcat(read name,file suff); /* append filename suffix */ printf("Reading map from file %s ••• \n",read name); if«fptr = fopen(read name,"rb"» == NULL) - 1* open file */ { - printf("ERROR: Cannot open file %sl\n",read_name); exit(O); } /* read file */ fread«void *)map,sizeof(unsigned char),DBYTEMAX,fptr); fclose(fptr); /* close file */ } display( ) /* writes image to EGA memory for display */ /* one weather pixel element = 3 pixels x 2 lines */ { union REGS regs; /* 8086 registers */ char far *farptr; /* pointer to memory */ unsigned int i,n,idx; /* index variables */ int row, col; /* display row & column */ int addr; /* pointer address */ int mode; /* graphics mode */ unsigned char color; /* color */ unsigned char temp /* dummy variable */ int range; /* radar range */ char rdr site[4]; /* radar site */ int hour; /* hour */ int minute; 1* minute */ int day; /* day */ int month; /* month */ int year; /* year */ char space[Sl]; /* space for title */ char kbrd[4]; /* keyboard return */ printf("Press space bar to display weather••• \n"); while ( getch() 1= , ') /* wait for space */ ; mode = 16; /* 640x350 EGA 16 color */ regs.h.al = (char)mode; /* display mode */ regs.h.ah = 0; /* set video mode srvc */ int86 (VIDIO, ®s, ®s); /* 8086 software intrpt */ idx = 0; /* initialize index */ farptr = (char far *) OxAOOOOOOO; /* pointer to EGA memory */ for(row=O; row<350; row+=2) /* 175 lines x 2 */ { 197 for(col=O; col«208*3); col+=24) /* do block of 24 pixels */ { color - assign(picture[idx]); /* assign color */ outp(Ox3C4,2);outp(Ox3CS,color); 1* color in map mask reg */ outp(Ox3CE,8);outp(Ox3CF,OxEO); 1* bits in bit mask reg */ for (i-O; i<2; i++) 1* write two lines */ { addr - (row+i)*MAXB + col/PIX; /* byte address */ temp - *(farptr+addr); /* place byte in latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color - assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,OxlC); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr ~ (row+i)*MAXB + col/PIX; /* byte address */ temp = *(farptr+addr); /* place byte in latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(ox3CF,ox03); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + col/PIX; /* byte address */ temp = *(farptr+addr); /* place byte in latches */ *(farptr + addr) - oxFF; /* select all bits */ }

outp(Ox3CE,8); outp(ox3CF,ox80); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + col/PIX + 1; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,ox70); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + col/PIX + 1; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3C5,color); /* color in map mask reg */ 198

outp(Ox3CE,8); outp(Ox3CF,OxOE); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (raw+i)*MAXB + col/PIX + 1; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */

color = assign(picture[idx); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,OxOl); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (raw+i)*MAXB + col/PIX + 1; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) - OxFF; /* select all bits */ }

outp(Ox3CE,8); outp(Ox3CF,OxCO); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr - (raw+i)*MAXB + col/PIX + 2; /* byte address */ temp - *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,Ox38); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + col/PIX + 2; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,Ox07); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (raw+i)*MAXB + col/PIX + 2; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ } color = assign(picture[idx); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,OxEO); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { 199

addr = (raw+i)*MAXB + 78; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) - OxFF; /* select all bits */ } idx++; /* increment index */ color - assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,Ox!c); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + 78; /* byte address */ temp = *(farptr+addr); /* place byte in latches */ *(farptr + addr) - OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,Ox03); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + 78; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } outp(Ox3CE,8); outp(Ox3CF,Ox80); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (raw+i)*MAXB + 79; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ color = assign(picture[idx]); /* assign color */ outp(Ox3C4,2); outp(Ox3CS,color); /* color in map mask reg */ outp(Ox3CE,8); outp(Ox3CF,Ox70); /* bits in bit mask reg */ for (i=O; i<2; i++) /* write two lines */ { addr = (row+i)*MAXB + 79; /* byte address */ temp = *(farptr+addr); /* byte to latches */ *(farptr + addr) = OxFF; /* select all bits */ } idx++; /* increment index */ } /* restore settings */ outp(Ox3CE,8); /* bit mask register */ outp(Ox3CF,OxFF); /* restore all bits mode */ /* display title block */ rptr = &rimage; /* pointer to rimage */ strcpy(rdr site,rptr->rpic arr.title blk.site); 1* get site */ range = rptr->rpic arr.title blk.rng; /* get range */ minute = rptr->rpic arr.title blk.min; /* get minutes */ hour = rptr->rpic_arr.title_bTk.hr; /* get hour */ 200

day == rptr->rpic arr.title blk.day; /* get day */ month == (rptr->rpic arr.title blk.mo + 1); /* get month */ year == rptr->rpic arr.title blk.yr; /* get year */ /* fill space array */ - strcpy(space, " "); printf("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"); printf (" NWS %s %03d NAUTICAL MILE RANGE%s%02d-%02d-%02d %02d:%02d GMT", rdr site,range,space,month,day,year,hour,minute); while( Ikbhit() ) /* wait until key is */ ; /* pressed to continue */ /* return to text mode */ mode == 3; /* 80x25 text mode */ regs.h.al == (char)mode; /* display mode */ regs.h.ah == 0; /* set video mode srvc */ int86 (VIDIO, ®s, ®s); /* 8086 software intrpt */

}

unsigned char assign(level)
unsigned char level;
/* assigns simulated NWS colors to supplied gray levels */
{
    unsigned char clor;              /* NWS color */

    switch(level)                    /* which gray level? */
    {
        case 0:                      /* level 0 */
        {
            clor = 0;                /* color = black */
            break;
        }
        case 1:                      /* level 1 */
        {
            clor = 10;               /* color = light green */
            break;
        }
        case 2:                      /* level 2 */
        {
            clor = 2;                /* color = green */
            break;
        }
        case 3:                      /* level 3 */
        {
            clor = 14;               /* color = yellow */
            break;
        }
        case 4:                      /* level 4 */
        {
            clor = 6;                /* color = brown */
            break;
        }
        case 5:                      /* level 5 */
        {
            clor = 12;               /* color = light red */
            break;
        }
        case 6:                      /* level 6 */
        {
            clor = 4;                /* color = red */
            break;
        }
        case 7:                      /* level 7 */
        {
            clor = 15;               /* color = intense white */
            break;
        }
        default:                     /* else */
        {
            printf("ERROR: Invalid pixel data - assigning black!");
            clor = 0;                /* color = black */
            break;
        }
    }
    return (clor);
}

APPENDIX F

Other Program Listings

C ********************************************************************** C THSQPSK.FOR C Program to compute and output for plotting QPSK in-phase, quadra­ C ture, and resulting carrier for input binary sequence 01101000. C C Thesis - Hybrid Modulation C C Craig B. Parker 4/89 C C ********************************************************************** real *8 g(1000),gi(1000),gq(1000),arg(1000),pi real *8 m(1000),n(1000) integer i pi = dacos(-1.0) C ********************************************************************** C Do for 8 cycles C arg(i) = carrier argument C m(i) = in-phase envelope C neil = quadrature envelope C gi(i) = in-phase component C gq(i) = quadrature component C g(i) = qpsk carrier C ********************************************************************** do 10 i=1,801 arg(i) = 2.0*pi*dble(i-l)/100 if«i.ge.20l).and.(i.lt.60l»then m(i) = 1.0 else m(i) -1.0 endif gi(i) = m(i)*dcos(arg(i» if(i.ge.201)then neil -1.0 else neil = 1.0 endi£ gq(i) = n(i)*dsin(arg(i» g(i) ~ gi(i) - gq(i) 10 continue C ********************************************************************** C Output in-phase, quadrature, and resultant to separate output files C ********************************************************************** open(11,£ile='c:\plot\qgi.dat') open(12,£ile='c:\plot\qgq.dat') open(13,£ile='c:\plot\qg.dat') open(14,£ile='c:\plot\qenvi.dat') open(IS,£ile='c:\plot\qenvq.dat') do 20 i=1,801 write(11,100)arg(i),gi(i) write(12,100)arg(i),gq(i) write(13,100)arg(i),g(i) 20 continue 203

do 30 i==I,801,10 write(14,100)arg(i),m(i) write(15,100)arg(i),n(i) 30 continue 100 format(lx,el0.4,2x,el0.4) close(ll) close(12) close(13) close(14) close(15) stop end 204

C ********************************************************************** C THSMSK.FOR C Program to compute and output for plotting MSK in-phase, quadrature, C and resulting carrier for input binary sequence 01101000. C C Thesis - Hybrid Modulation C C Craig B. Parker 4/89 C ********************************************************************** real*8 g(1000),gi(1000),gq(1000),arg(1000),pi real*8 wtarg(1000),m(1000),n(1000) integer i pi = dacos(-1.0) open(11,file='c:\plot\mgi.dat') open(12,file='c:\plot\mgq.dat') C ********************************************************************** C Do for 7 cycles C arg(i) = carrier argument C wtarg(i) = sinusoidally weighted pulse envelope argument C m(i) = in-phase envelope C n(i) = quadrature envelope C gq(i) = quadrature component C gi(i) = in-phase component C g(i) = msk carrier C values are multiplied by sqrt(2.0) to match qpsk levels C ********************************************************************** do 10 i=1,701 arg(i) = 2.0*pi*dble(i-1)/100 wtarg(i) = 2.0*pi*dble(i-1)/400 if«i.ge.101).and.(i.lt.501»then m(i) dsqrt(2.0)*dcos(wtarg(i» else m(i) = -dsqrt(2.0)*dcos(wtarg(i» endif gi(i) = m(i)*dcos(arg(i» if(i.lt.201)then n(i) dsqrt(2.0)*dsin(wtarg(i» else n(i) -dsqrt(2.0)*dsin(wtarg(i» endif gq(i) = n(i)*dsin(arg(i» write(11,100)arg(i),gi(i) write(12,100)arg(i),gq(i) 10 continue open(13,file='c:\plot\mg.dat') do 20 i=1,701 g(i) = gi(i) - gq(i) write(13,100)arg(i),g(i) 20 continue open(14,file='c:\plot\menvi.dat') open(1S,file='c:\plot\menvq.dat') do 30 i=1,701,10 write(14,100)arg(i),m(i) write(1S,100)arg(i),n(i) 30 continue 100 format(1x,e10.4,2x,elO.4) 205 close(ll) close(12) close(13) close(14) close(15) stop end 206

C **********************************************************************
C THSEBNO.FOR
C Computes P(error) as a function of Eb/No for MSK & QPSK
C using erfc(x) from Helstrom, and P(error) from Haykin
C
C Thesis - Hybrid Modulation
C
C Craig B. Parker                                                4/89
C **********************************************************************
      real*8 out(100),erfc,snr(100),pe,f,t,p,x,a1,a2,a3,a4,a5
      integer i
      a1 = 0.127414796
      a2 = -0.142248368
      a3 = 0.710706871
      a4 = -0.726576013
      a5 = 0.530702714
      p  = 0.2316419
C **********************************************************************
C Compute P(error) for Eb/No from 0 to 12 dB (snr = 1 to 15.84)
C **********************************************************************
      do 10 i=1,61
         snr(i) = 10.0**(dble(i-1)/50.0)
         x = dsqrt(2.0*snr(i))
         t = 1.0/(1.0 + p*x)
         f = ((((a5*t + a4)*t + a3)*t + a2)*t + a1)*t
         erfc = f*dexp(-(x**2)/2.0)
         out(i) = 2.0*erfc - erfc**2
10    continue
C **********************************************************************
C Output curve for plotting
C **********************************************************************
      open(11,file='ebno.dat')
      do 20 i=1,61
         write(11,100)dble(i-1)/5.0,out(i)
20    continue
100   format(2x,f6.2,2x,e10.3)
      close(11)
      stop
      end
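For reference, the quantity tabulated by THSEBNO.FOR appears to be the coherent QPSK/MSK symbol error probability from Haykin, with the Gaussian Q-function evaluated through a rational approximation whose coefficients a1 through a5 look like the Abramowitz and Stegun constants with the 1/sqrt(2*pi) factor of the normal density absorbed. In that notation, the program evaluates

\[
P_e = 2\,Q\!\left(\sqrt{2E_b/N_0}\right) - Q^2\!\left(\sqrt{2E_b/N_0}\right),
\qquad
Q(x) \approx \left(a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5\right) e^{-x^2/2},
\qquad
t = \frac{1}{1 + p\,x},
\]

with x = sqrt(2 Eb/N0) and p = 0.2316419.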

C ********************************************************************** C THSPSD.FOR C Computes and outputs for plotting the power spectral density C for the MSK and QPSK waveforms given by Haykin. C Normalized with respect to 4Eb. C C Thesis - Hybrid Modulation C C Craig B. Parker 4/89 C ********************************************************************** real*8 tbf(1000),pi,msk(1000),qpsk(1000),eq integer i pi ~ dacos(-1.0) eq ~ 8.0/pi**2 C ********************************************************************** C 600 samples over Tbf - 0 to 6 C ********************************************************************** do 10 i~I,601 tbf(i) dble(i-l.0000l)/lOO.00001 msk(i) ~ (dcos(2.0*pi*tbf(i»/«16.0*tbf(i)**2)-1.0»**2 msk(i) ~ 10.0*dlogl0(eq*msk(i» qpsk(i) ~ (sin(2.0*pi*tbf(i»/(2.0*pi*tbf(i»)**2 qpsk(i) ~ 10.0*dlogl0(qpsk(i» 10 continue C ********************************************************************** C Output for plotting C ********************************************************************** open(11,file~tpsd.datt) do 20 i=1,601 write(11,100)tbf(i),msk(i),qpsk(i) 20 continue 100 format(2x,f8.4,2x,f8.2,2x,f8.2) close(11) stop end 208 c ********************************************************************** c THSMT.FOR c Program to compute and output for plotting the 5-signal c representation of met). c c Thesis - Hybrid Modulation c c Craig B. Parker 4/89 c ********************************************************************** real*8 a1,a2,a3,a4,a5 real*8 £1,£2,£3,f4,£5 real*8 sl,s2,s3,s4,s5 real*8 t(4096),pi,m(4096) integer i C ********************************************************************** C Define constants C ********************************************************************** pi dacos(-1.0) a1 2.0/9.0 a2 1.0/3.0 a3 2.0/9.0 a4 1.0/9.0 a5 1.0/9.0 £1 468.750 £2 937.50 £3 == 1406.250 £4 1875.0 £5 == 2343.75 C ********************************************************************** C 4096 samples o£ waveform over 16 cycles C ********************************************************************** do 10 i==1,4096 t(i) == (dble(i-1»/(256.0*£1) sl a1*dcos(2.0*pi*£1*t(i» s2 a2*dcos(2.0*pi*£2*t(i» s3 a3*dcos(2.0*pi*f3*t(i» s4 a4*dcos(2.0*pi*f4*t(i» s5 a5*dcos(2.0*pi*f5*t(i» m(i) == sl+s2+s3+s4+s5 10 continue C ********************************************************************** C Output met) C ********************************************************************** open(ll,£ile=='c:\plot\time.dat') do 20 i==1,512 write(11,100)t(i),m(i) 20 continue 100 £ormat(2x,elO.4,2x,e10.4) close(ll) stop end 209
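THSPSD.FOR evaluates what appear to be the baseband power spectral densities of QPSK and MSK given by Haykin, normalized to 4Eb, with the program variable tbf standing for the product Tb*f. In that notation,

\[
\frac{S_{\mathrm{QPSK}}(f)}{4E_b} = \left[\frac{\sin(2\pi T_b f)}{2\pi T_b f}\right]^2,
\qquad
\frac{S_{\mathrm{MSK}}(f)}{4E_b} = \frac{8}{\pi^2}\left[\frac{\cos(2\pi T_b f)}{16\,T_b^2 f^2 - 1}\right]^2,
\]

which is consistent with the factor eq = 8/pi**2 applied to the MSK term before conversion to decibels.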

C **********************************************************************
C THSCONV.FOR
C Program to compute and output for plotting the power
C spectral density of the hybrid modulated signal using
C the 5-tone representation of m(t).  A frequency domain convolution
C is performed.
C Subroutine MSKPSD.FOR is called to compute the power spectral
C density of the MSK signal.
C
C Thesis - Hybrid Modulation
C
C Craig B. Parker          4/89
C
C **********************************************************************
      real*8 a1,a2,a3,a4,a5
      real*8 f1,f2,f3,f4,f5
      real*8 mskp1,mskm1,mskp2,mskm2,mskp3,mskm3,mskp4,mskm4
      real*8 mskp5,mskm5
      real*8 pi,kv,psd
      real*8 tbf(3000),msk(3000)
      integer i
      kv = 0.7
      pi = dacos(-1.0)
C **********************************************************************
C Compute over range of Tbf = -12.5 to +12.5 (-/+30kHz for Tb = 1/2400)
C by reflecting values obtained from -12.5 to 0.
C **********************************************************************
      do 10 i=1,1251
         tbf(i) = (dble(i)-1251.0001)/100.0
         call mskpsd(tbf(i),psd)
         msk(i) = psd
   10 continue
      do 11 i=1252,2501
         tbf(i) = 0.0 - tbf(2502-i)
         call mskpsd(tbf(i),psd)
         msk(i) = psd
   11 continue
      a1 = 2.0/9.0
      a2 = 1.0/3.0
      a3 = 2.0/9.0
      a4 = 1.0/9.0
      a5 = 1.0/9.0
C **********************************************************************
C Read values for F1-F5
C **********************************************************************
      write(6,*) 'ENTER DESIRED VALUES FOR F1-F5'
      read(5,*)f1,f2,f3,f4,f5
C **********************************************************************
C Normalize to Tb
C **********************************************************************
      f1 = f1/2400.0
      f2 = f2/2400.0
      f3 = f3/2400.0
      f4 = f4/2400.0
      f5 = f5/2400.0
C **********************************************************************

C Sum effects of each of five convolutions at each point on -Tb axis
C **********************************************************************
      do 15 i=1,1251
         call mskpsd(tbf(i)+f1,psd)
         mskp1 = (kv**2/4.0*a1**2)*psd
         call mskpsd(tbf(i)-f1,psd)
         mskm1 = (kv**2/4.0*a1**2)*psd
         call mskpsd(tbf(i)+f2,psd)
         mskp2 = (kv**2/4.0*a2**2)*psd
         call mskpsd(tbf(i)-f2,psd)
         mskm2 = (kv**2/4.0*a2**2)*psd
         call mskpsd(tbf(i)+f3,psd)
         mskp3 = (kv**2/4.0*a3**2)*psd
         call mskpsd(tbf(i)-f3,psd)
         mskm3 = (kv**2/4.0*a3**2)*psd
         call mskpsd(tbf(i)+f4,psd)
         mskp4 = (kv**2/4.0*a4**2)*psd
         call mskpsd(tbf(i)-f4,psd)
         mskm4 = (kv**2/4.0*a4**2)*psd
         call mskpsd(tbf(i)+f5,psd)
         mskp5 = (kv**2/4.0*a5**2)*psd
         call mskpsd(tbf(i)-f5,psd)
         mskm5 = (kv**2/4.0*a5**2)*psd
         msk(i) = msk(i)+mskp1+mskm1+mskp2+mskm2+mskp3+mskm3
     &            +mskp4+mskm4+mskp5+mskm5
   15 continue
C **********************************************************************
C Reflect to positive Tb axis
C **********************************************************************
      do 16 i=1252,2501
         msk(i) = msk(2502-i)
   16 continue
C **********************************************************************
C Output data values for plotting
C **********************************************************************
      open(11,file='c:\plot\conv.dat')
      do 20 i=1,2501
         write(11,100)tbf(i)*2.40,10.0*dlog10(msk(i))
   20 continue
  100 format(2x,e10.4,2x,e10.4)
      close(11)
      stop
      end

C **********************************************************************
C Subroutine to calculate the power spectral density of an MSK signal
C **********************************************************************
      subroutine mskpsd(tbf,psd)
      real*8 tbf,psd,pi
      pi = dacos(-1.0)
      psd = (dcos(2.0*pi*tbf)/((16.0*tbf**2)-1.0))**2
      return
      end
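For reference, the loop at statement 15 above appears to carry out the frequency-domain convolution named in the program header: with the modulating signal represented by five tones of amplitude a_i at frequency f_i, and with kv (set to 0.7 in the listing, presumably the amplitude-modulation index), each tone contributes a pair of scaled, frequency-shifted copies of the MSK spectrum, so that the plotted quantity is, in LaTeX notation,

    S_{\mathrm{hyb}}(f) \approx S_{\mathrm{MSK}}(f) + \sum_{i=1}^{5} \frac{k_v^2 a_i^2}{4}\left[S_{\mathrm{MSK}}(f - f_i) + S_{\mathrm{MSK}}(f + f_i)\right],

with all frequencies normalized to the bit period Tb = 1/2400 s. The factor of 2.40 applied to tbf on output converts the normalized frequency to kilohertz for plotting.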

ACKNOWLEDGEMENTS

This work was sponsored by the Federal Aviation Administration and by the National Aeronautics and Space Administration Langley Research Center under the Joint University Program for Air Transportation Research, grant NGR-36-009-017.

The author wishes to express gratitude to Dr. Robert W. Lilley, Director of the Avionics Engineering Center at Ohio University and advisor for this thesis, and to Dr. Richard H. McFarland, Russ Professor Emeritus of Electrical and Computer Engineering at Ohio University, whose active support for automated dissemination of weather data to aircraft made this work possible.

The author also wishes to acknowledge the following individuals: Dr. John A. Tague, Assistant Professor of Electrical and Computer Engineering, and Dr. Donald O. Norris, Professor of Mathematics, for their contributions and assistance and for serving as members of the thesis committee; Dr. Frank van Graas, Assistant Professor of Electrical and Computer Engineering, for his valuable guidance and contributions as project supervisor; the staff and students involved in the Joint University Program for Air Transportation Research at the Massachusetts Institute of Technology and Princeton University for their enthusiasm, contributions, and support for automated dissemination of weather data to aircraft; Mr. James D. Waid, undergraduate intern at the Avionics Engineering Center, for assistance in software development; Mr. Richard Zoulek, Chief of Airborne and Mobile Laboratories at the Avionics Engineering Center, and Mr. Michael S. Braasch, graduate intern at the Avionics Engineering Center, for assistance with the flight test; and Mr. Edgar Espinoza, research engineer at the Avionics Engineering Center, for assistance with the artwork contained herein.

The author also wishes to express appreciation to his parents for their loving support and encouragement of all of his endeavors.