AN HF MULTITONE MODEM

______

A Thesis

Presented to the

Faculty of

San Diego State University

______

In Partial Fulfillment of the Requirements for the Degree

Master of Science

in

Electrical Engineering

______

by

Louis A. Rey

Summer 2017


Copyright © 2017 by Louis A. Rey. All Rights Reserved.


DEDICATION

This thesis is dedicated to my wife, two daughters and all who have loved and supported me throughout this journey.


ABSTRACT OF THE THESIS

An HF Multitone Modem
by
Louis A. Rey
Master of Science in Electrical Engineering
San Diego State University, 2017

In today’s world, signals are transmitted on nearly every portion of the frequency spectrum, from a few hertz up to several gigahertz; frequencies upwards of 60 GHz are now common. Although operation at ever higher frequencies is routine, the high frequency (HF) band (3 to 30 MHz) enjoyed huge popularity during the advent of early long-range communication work. Before satellites were invented, it was the only means of transmitting information wirelessly over long distances. Initial work in the defense industry gave rise to multitone modems, which were later superseded by single tone modems for both data and voice communications at 2400 bps, 9600 bps, and 19400 bps. With the growing popularity of digital signal processing and software radio, these modems are now easily implemented. This thesis revisits multitone modems, also known as parallel tone modems, which use 16 and 39 tones for data and voice respectively. A comparison with modern communication systems is explored, and the modem sections are described and modeled using MATLAB. All information pertaining to the multitone modem can be found in the public domain and is fully declassified; any detailed classified information is excluded from this thesis.


TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER
1 INTRODUCTION
    1.1 Problem Statement
    1.2 Motivation
    1.3 Thesis Outline
2 TECHNICAL OVERVIEW
    2.1 Overview and History
        2.1.1 Modern HF Communications
    2.2 HF Propagation
    2.3 Skywave Propagation
        2.3.1 Channel Effects on HF Signals
    2.4 The Watterson HF Channel Model
    2.5 HF Modem Solutions
    2.6 Modulation Used
    2.7 Challenges
3 MODEM DESCRIPTION
    3.1 Modem Requirements
    3.2 OFDM Modulation
    3.3 Software Implementation
4 HF MODEM STANDARD AND IMPLEMENTATION
    4.1 The Transmitter
        4.1.1 The Channel Model
    4.2 Preamble Synchronization
    4.3 The Data Payload
5 MODERN COMMUNICATION SYNCHRONIZATION METHODS
    5.1 Timing Synchronization
    5.2 Frequency Synchronization
6 RESULTS
7 FUTURE WORK
REFERENCES


LIST OF TABLES

Table 2.1. Typical HF Fast Fading Periods
Table 3.1. Frequency and Initial Phases for the First and Second Part of the Modem Preamble
Table 3.2. Frequency and Initial Phases of the 16 Tones for the Parallel Tone Modem
Table 3.3. 15 Bit PN Tone Sequence for Each of the 16 Tones
Table 3.4. Dibit Grouping of the 32 Bit Data Frame


LIST OF FIGURES

Figure 2.1. Ionospheric regions throughout the day.
Figure 2.2. Signal path as a function of angle of incidence.
Figure 2.3. The Watterson HF Channel model.
Figure 2.4. Differential Quadrature Phase shift keying block diagram.
Figure 2.5. DQPSK Constellation Diagram.
Figure 3.1. Block diagram of Data Modem.
Figure 3.2. Modem preamble for the 16 tone parallel tone modem.
Figure 3.3. OFDM orthogonal subcarriers.
Figure 3.4. OFDM transmission structure.
Figure 4.1. Time and frequency domain representation of the first part of the preamble.
Figure 4.2. Time and frequency domain representation of the second part of the preamble.
Figure 4.3. Time and frequency domain representation of the third part of the preamble.
Figure 4.4. Time and frequency domain representation of the fourth part of the preamble.
Figure 4.5. Low pass Chebyshev2 Filter Response.
Figure 4.6. Signal Presence Detector.
Figure 4.7. Channel effects on the four tone preamble.
Figure 4.8. Signal Presence Detector Output.
Figure 4.9. Doppler Estimation Block.
Figure 4.10. Doppler Estimation of offset signal compared to original signal.
Figure 4.11. Autocorrelation of the three modulated tones.
Figure 4.12. Autocorrelation output of the three modulated tones.
Figure 4.13. Composite 16 tone signal of modem preamble.
Figure 4.14. Phase offset correction of composite 16 tone signal of modem preamble.
Figure 4.15. Time and frequency spectrum of PN sequence.
Figure 4.16. Correlation of PN sequence with match filter.
Figure 4.17. Data payload with no distortion or noise.
Figure 4.18. Data payload with multipath, noise, phase and frequency offsets.
Figure 5.1. Timing synchronization using cyclic prefix.
Figure 6.1. Comparison of Timing synchronization Metrics.
Figure 6.2. Comparison of Frequency synchronization Metrics.


ACKNOWLEDGEMENTS

I would like to dedicate this work to my wife and two daughters who have been very supportive of this effort. I would also like to thank my parents, friends, coworkers, instructors and everyone who has supported me along the way.


CHAPTER 1

INTRODUCTION

This chapter introduces the problem, provides the motivation, and summarizes the work done.

1.1 PROBLEM STATEMENT

This project is motivated by the popularity of signal spreading and Orthogonal Frequency Division Multiplexing (OFDM), which was used early on in the High Frequency (HF) communications field. This thesis explores that early work by implementing a parallel tone modem based on the military specification MIL-STD-188-110C. The parallel tone modem was superseded by a single tone modem, described in the same standard, which did not provide greater performance over its predecessor at the time [1] but was nevertheless selected as the standard used today. A comparison to modern modems is made in order to study the evolution of the technology. The goal of this comparison is to find where current technologies may be used to improve upon the original parallel tone modem design.

1.2 MOTIVATION

This thesis is based on work done at SPAWAR Systems Center, where an HF modem based on MIL-STD-188-110C was required to test modern voice encryption algorithms. The modem was to be implemented on a Universal Software Radio Peripheral (USRP) serving as the RF front end, with GnuRadio used for the signal processing implementation. The standard describes the requirements needed to implement the modem, but no clear implementation detail is given. To test the concepts to be implemented, MATLAB was used as a modeling tool. Since very little information is available in print, other standards such as IEEE 802.11 were used as a point of comparison and as reference. Once the concepts were proven in MATLAB, they were then implemented using GnuRadio.

1.3 THESIS OUTLINE

The organization of this document will now be described. Chapter 1 provides an overall introduction to the HF communications problem, motivation for the research, a summary of contributions, and a document outline. Chapter 2 provides a structured historical background of the HF channel and modem solutions. Chapter 3 provides a brief description of the parallel tone modem. Chapter 4 describes, in detail, the software implementation of a 16-tone data modem. Chapter 5 describes modern methods used for time and frequency synchronization in OFDM communications. Chapter 6 describes the results and limitations of the HF parallel tone requirements. Chapter 7 summarizes the accomplishments of the research and future work.


CHAPTER 2

TECHNICAL OVERVIEW

This chapter focuses on the history and importance of HF Communications.

2.1 OVERVIEW AND HISTORY

At the advent of radio communications, initial work was performed primarily on signals with long wavelengths for long distance communications. Marconi, Hertz and other radio pioneers worked with wavelengths of up to 1000 meters. These frequencies became a mainstay and were the primary focus of commercial use in the early years of the telegraph industry. Shorter wavelengths at the time were not considered to have much commercial value. For this reason, shorter wavelengths (200 meters or less) were used mainly by the radio amateur community, or ‘HAMs.’ The area of communications they explored came to be known as short wave or High Frequency (HF) communications. They found that long range communication was achievable with far lower power requirements than its long wave counterpart. Atmospheric noise was also found to be much lower in this range [2], and antennas were smaller and much easier to build. Thanks to the early work of these HAMs, the technology eventually found importance in areas such as the shipping industry, aircraft beyond-line-of-sight communications, military applications, and other systems that required long distance communications. HF came to represent the frequency bands ranging from 3 to 30 MHz.

The single-sideband mode of operation was adopted early in the HF communications industry to achieve spectrum efficiency. HF enjoyed early adoption and popularity in the radio telephone service industry and was originally allocated voice quality channels of 3 kHz, so spectrum efficiency was a necessary requirement. HF communication has gone through cycles of popularity and decline during the last few decades, mainly due to the availability of satellites for long distance transmission. Nevertheless, HF continues to have significant military applications, particularly in countries outside of the United States, where satellites may become vulnerable in times of war and where sovereignty is of utmost importance.

2.1.1 Modern HF Communications

HF radio technology is now in its fourth generation, with data rates significantly higher than previously possible. The early evolution of HF communications made use of the available 3 kHz wide voice bandwidth spectrum. Data rates of up to 240 kbps are now possible and well defined in the military standards MIL-STD-188-141C and MIL-STD-188-110C. The fourth generation (4G) technology is known as wideband HF (WBHF). The first generation consisted of both serial tone and parallel tone modems at data rates up to 2.4 kbps. The second generation adopted Automatic Link Establishment (ALE), used for digitally initiating and sustaining HF links through constantly changing channel conditions, and employed serial tone modems at rates up to 9.6 kbps. The third generation ALE system added aspects such as scalable burst modem technology and lower signal-to-noise ratio (SNR) requirements, at rates up to 24 kbps [3]. Today’s 4G WBHF can deliver rates up to 240 kbps on a 48 kHz wide channel. Although these rates are much lower than those of modern cellular 3G and 4G systems, HF continues to find importance in military applications requiring long distance transmission and reception.

2.2 HF PROPAGATION

HF signals behave differently depending on the medium used for transmission. Signals transmitted near the Earth’s surface are known as ground waves; these can travel at ranges up to about 100 kilometers. Signals transmitted over the surface of water are known as surface waves and can travel up to 300 kilometers. For much longer ranges, HF signals take advantage of atmospheric interactions to reflect signals between the ionosphere and the ground. These signals can continue to bounce, or hop, between ground and ionosphere to travel around the Earth, and may also bounce between ionospheric layers to travel longer distances. Such signals are known as sky waves. In this paper, we focus mainly on skywave propagation of HF waves, since we are concerned primarily with long distance communications.


2.3 SKYWAVE PROPAGATION

HF propagation depends on the path the signal takes. The possibilities can be divided into surface wave, ground wave, and sky wave propagation. Surface wave propagation refers to transmission over sea water, and ground wave refers to propagation over land. Surface and ground wave propagation are mainly limited to line-of-sight communications, where ~100 km has been recorded for ground waves, depending on the terrain, and ~300 km for surface waves [4]. In this thesis we are mainly interested in beyond-line-of-sight communication, or skywave propagation.

Skywave propagation is possible due to the composition of the ionosphere. The ionosphere is composed of various layers containing charged particles. These layers protect the earth from harmful electromagnetic waves from the sun, which are refracted back into space when they come into contact with the ionosphere. In much the same way, radio waves can be transmitted across long distances by taking advantage of the same mechanism. The ionosphere lies at an altitude ranging from about 85 to 1,000 kilometers. Ionization occurs in the different gas molecules found in the atmosphere. The heaviest gas molecules, with low electron density and high neutral gas density, are found at lower altitudes. This gas is mostly composed of nitric oxide, which is ionized on contact with gamma rays and X-rays. The lightest particles are found furthest from the earth, with correspondingly high electron density; these gas molecules are composed of hydrogen, helium and monatomic oxygen [4]. The center layer is composed mostly of oxygen molecules, which are ionized by solar rays. A region with free electron density N will reflect radio waves below a critical frequency denoted by f0, where

f0 = 9√N  (Hz)                                                        (2.1)
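To make Eq. (2.1) concrete, the sketch below evaluates the critical frequency for a representative electron density. This is a Python illustration only (the thesis models its modem in MATLAB), and the example density of 10^12 electrons/m^3 is an assumed, typical daytime F2-layer order of magnitude, not a figure taken from the text.

```python
import math

def critical_frequency(electron_density_m3):
    # Eq. (2.1): f0 = 9 * sqrt(N), with N in electrons per cubic meter
    # and the result in Hz.
    return 9.0 * math.sqrt(electron_density_m3)

# Assumed daytime F2-layer density on the order of 1e12 electrons/m^3.
f0 = critical_frequency(1e12)
print(f0 / 1e6)  # 9.0 (MHz)
```

A layer with this density would therefore reflect vertically incident waves below about 9 MHz.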

The ionosphere changes drastically with solar activity, time of day, latitude, geography, and season, and is very difficult to predict. Interactions with cosmic variations change ionization patterns within the ionosphere. In addition, the ionosphere varies from region to region. Due to the electromagnetic radiation from the sun, the ionospheric layers change between daytime and night time.


The ionosphere is composed of three main layers. These are the D, E and F layers. The F layer is in turn sub-divided into two layers known as F1 and F2. The layer closest to the earth is the D layer and the one furthest away is the F layer. These layers interact with radio waves from less than 1 MHz to about 30 MHz.

Figure 2.1. Ionospheric regions throughout the day. Source: [2].

The D layer is lightly ionized since it is the layer furthest from the sun’s electromagnetic rays, and it is present only in the daytime, when it becomes ionized by solar radiation. It does not reflect radio signals, but rather absorbs signals below 5 MHz. The E layer also exists only during the daytime. It is the first layer in the ionosphere that reflects radio waves. Frequencies from 5 MHz to 20 MHz can be reflected, although frequencies below 10 MHz can be partially absorbed by this layer. The E layer is mainly responsible for communications at shorter distances of less than 2,000 kilometers [4]. The E layer may contain patches of increased ionization known as Sporadic-E. These regions may scatter or reflect HF radio waves.

The F layer is subdivided into the F1 and F2 layers during the daytime. These two sublayers combine into one single layer during the night. The F1 layer does not have any significant effect on HF radio waves: it only reflects frequencies under 10 MHz, but these are the same frequencies absorbed by the D layer, so the F1 layer does not interact with radio wave propagation when present. Higher frequencies pass through the D, E and F1 layers in the daytime and interact with the F2 layer. The F2 layer is the highest layer of the ionosphere; it receives the most radiation from the sun and therefore becomes much more ionized. It is the most important layer for long distance communication. Every layer of the ionosphere has a maximum usable frequency, characterized by the maximum frequency that layer can reflect. Frequencies above each layer’s maximum usable frequency will typically pass through that layer. The maximum usable frequency may be as high as 50 MHz. Frequencies below 10 MHz are most useful during the evening; frequencies above 10 MHz perform best in the daytime.

Propagation depends not only on the level of ionization but also on the angle of transmission, or angle of incidence. At a certain angle, the signal may ‘punch through’ all the layers and go off into space. This is known as the critical angle of incidence. At certain angles, the signals may bounce between the E and F layers and travel even longer distances before returning to ground. These signals are known as Pedersen Rays [2]. Figure 2.2 displays various signals at different angles of elevation for a single signal hop. These signals may travel around the world by bouncing between the ground and ionosphere in multiple hop sequences. The region where no sky wave signal can be received is known as the skip zone. A part of the skip zone may be covered with the assistance of ground waves, which may however interfere with the first hop of the sky wave, causing multipath.

Figure 2.2. Signal path as a function of angle of incidence.


As previously stated, each layer has a maximum usable frequency. This frequency depends on the layer’s critical frequency and on the angle of incidence, and is given by:

MUF = k f0 sec φ                                                        (2.2)

where f0 is the critical frequency and φ is the angle of incidence on the reflecting layer. Since these signals travel over long distances, the curvature of the earth must be taken into account through a correction factor. This factor is incorporated as the value k and is found to be between 1.0 and 1.2 [4].
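A small sketch of Eq. (2.2), again in Python for illustration; the critical frequency and angle used in the example are assumed values, and k = 1.0 is used in the call so the sec φ behavior is easy to verify by hand.

```python
import math

def muf(f0_hz, incidence_angle_rad, k=1.1):
    # Eq. (2.2): MUF = k * f0 * sec(phi). The default k = 1.1 sits in
    # the 1.0-1.2 range of the curvature correction factor.
    return k * f0_hz / math.cos(incidence_angle_rad)

# With f0 = 9 MHz and phi = 60 degrees, sec(phi) = 2:
print(round(muf(9e6, math.radians(60), k=1.0) / 1e6, 3))  # 18.0 (MHz)
```

The oblique geometry thus lets a layer reflect frequencies well above its vertical-incidence critical frequency.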

2.3.1 Channel Effects on HF Signals

The ionosphere is a dispersive medium and affects waves travelling through it by spreading the radio signal. The different layers also reflect the signals at different times and intensities, creating multipath. Multi-hop propagation adds further multipath components, and if ground waves are present they may interact with the skywave signal to form yet another source of multipath. Dispersion is caused by the ionospheric layers having different refractive indexes. This type of dispersion is known as mode dispersion and may be on the order of ~200 microseconds [2]. Severe dispersion occurs mostly in the F layer, where large variations can cause delay spreads as high as several milliseconds. Time dispersion and multipath may be on the order of several milliseconds and may exceed 10 milliseconds [4]. Coherence bandwidth is proportional to the reciprocal of the multipath dispersion. In a system with a bandwidth smaller than the coherence bandwidth, the impact of multipath is observed as flat fading. Flat fading does not affect specific frequencies, but rather affects all frequencies equally. Systems whose bandwidth is greater than the coherence bandwidth may be affected by frequency selective fading caused by multipath components. Typical fast fading periods are seen in Table 2.1.

Table 2.1. Typical HF Fast Fading Periods

    Fading Type                 Fading Period
    High/low angle rays         0.5 – 2 s
    Sky waves                   1 – 5 s
    Groundwave/skywave          2 – 10 s
    Magneto-ionic splitting     10 – 40 s


Another effect caused by the HF channel is the Doppler effect. This is mainly due to movement of the ionospheric layers, whose characteristics are constantly varying, causing the path length to change. Assuming there is no movement between the transmitter and receiver, the Doppler shift is given by:

f_D ≈ −(2 f_c v_v / c) cos φ                                                        (2.3)

where v_v represents the vertical speed of the ionospheric layer, f_c is the carrier frequency, c is the speed of light, and φ is the angle of incidence. The factor of 2 indicates that the phase path is changing in both the upward and downward directions. Additional Doppler shifts may be caused when the transmitter is moving with respect to the receiver. HF systems are typically designed for a total frequency offset of ±75 Hz.

As mentioned above, ionospheric variations interact with the transmitted signal to cause attenuation, fading, Doppler frequency shifting, time dispersion, and delay distortion. Multipath is produced by time dispersion between ground wave and sky wave paths. These paths may consist of sky waves reflected from different layers of the ionosphere, different numbers of hops between the ionosphere and ground, high and low propagation angles of the skywave, and splitting of magneto-ionic components. Magneto-ionic components are components with different polarizations (the ordinary and the extraordinary) caused by a combination of the earth’s magnetic field and atmospheric ionization. The difference in transmission is small but noticeable; the two magneto-ionic components are circularly polarized. Each propagation path, or mode, has its own group delay. The delay for each path is the cause of time dispersion, or multipath delay, which affects the received signal in the form of intersymbol interference. The maximum transmission rate is limited to the reciprocal of the range of multipath propagation times. To optimize the signal, it is essential to work as close to the MUF as possible. The group delay is also a function of frequency and varies across the signal’s bandwidth, giving rise to delay distortion. Delay distortion describes the rate of change of delay as a function of frequency and time. Dispersion in the F layer is typically much larger than in the E layer [2].
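The Doppler relation of Eq. (2.3) can be sketched as follows. Python is used here for illustration; the carrier frequency and layer velocity in the example are assumed values, chosen to land inside the ±75 Hz design range mentioned above.

```python
import math

C = 3.0e8  # speed of light, m/s

def ionospheric_doppler(carrier_hz, v_vertical_ms, incidence_angle_rad):
    # Eq. (2.3): f_D ~= -(2 * f_c * v_v / c) * cos(phi). The factor of 2
    # reflects the change in both the upward and downward phase paths.
    return -(2.0 * carrier_hz * v_vertical_ms / C) * math.cos(incidence_angle_rad)

# A 10 MHz carrier with the layer rising at 30 m/s, vertical incidence:
print(ionospheric_doppler(10e6, 30.0, 0.0))  # -2.0 (Hz)
```

Even modest layer motion therefore produces shifts of a few hertz, well within the ±75 Hz budget.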


Frequency selective fading is caused by interference between two or more paths or modes. For certain periods of the day, 10–50 fades per minute are common due to the channel [2]. In addition, fades of less than 10 dB are more frequent. For each path there may be a shift in frequency caused by changes in the height of the ionospheric layers and changes in electron densities. This results in Doppler shifts which may vary throughout the day.

2.4 THE WATTERSON HF CHANNEL MODEL

Noise levels can be very high and may at times cause a loss of communications. Noise affecting this frequency range consists of manmade noise, terrestrial or atmospheric noise, and celestial or galactic noise. Noise is more intense at the lower end of the HF band. Noise in the HF band is mostly impulsive, or Laplacian, in nature; however, it is modeled as additive white Gaussian noise (AWGN) for simplicity.

There are four common channel simulation models typically used for testing HF systems: replay simulators, the Rayleigh fading (Watterson) model, wave propagation based models, and advanced parametric simulators [4]. These are mentioned for completeness; we will only describe the Watterson model, since it is the most widely used and the simplest to implement.

The Watterson model has been the most widely used model for many years due to its simplicity and its accuracy for bandwidths up to 12 kHz. The channel is modeled as an ideal tapped delay line where each tap corresponds to a resolvable propagation path. Two magneto-ionic components are present on each tap, each modeled as a complex Gaussian random process. Gain and Doppler frequency offsets can also be modeled on each path. This model is described in the ITU-R F.1487 recommendation. It is recommended for HF modems with bandwidths up to 12 kHz, but can be used for wider bandwidths.

Multipath and fading are caused by signal waves traveling over several paths, or modes, over single or multiple reflections off the E and F layers of the ionosphere. The received signal consists of the original signal along with multipath components which are spread in time and can be several milliseconds apart. The height of the ionosphere is constantly changing and introduces Doppler shifts. The Watterson model uses stationary Gaussian scatter to model the HF channel. The model consists of a delay line with several taps representing each delay path or mode. At each of these taps, the signal is phase and amplitude modulated by a complex function Gi(t). Each tap is defined by the function:

G_i(t) = G̃_ia(t) exp(j2πν_ia t) + G̃_ib(t) exp(j2πν_ib t)                    (2.4)

The delayed and modulated signals are added together and subsequently combined with additive noise to determine the SNR level.

Figure 2.3. The Watterson HF Channel model.

In the equation above, the subscripts a and b represent the two magneto-ionic components for each path or mode, and ν_ia and ν_ib are their respective Doppler shifts. The tildes represent sample functions of two independent complex Gaussian ergodic random processes, each with zero mean and with independent real and imaginary components of equal root-mean-square (RMS) value, which produce Rayleigh fading (Gaussian scatter functions) [5].
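A highly simplified sketch of one tap of Eq. (2.4) follows, in Python with NumPy for illustration. The two magneto-ionic components are generated as independent complex Gaussian sequences; a plain moving-average filter stands in for the Gaussian Doppler-spectrum shaping that ITU-R F.1487 specifies, and all rates, lengths, and Doppler values here are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def watterson_tap(n, fs, doppler_a_hz, doppler_b_hz, smooth=64):
    """One tap gain G_i(t) per Eq. (2.4): two independent complex
    Gaussian processes (the a/b magneto-ionic components), low-pass
    filtered to slow their fading, then offset by their Doppler shifts."""
    t = np.arange(n) / fs

    def component(doppler_hz):
        g = rng.normal(size=n + smooth) + 1j * rng.normal(size=n + smooth)
        # Moving average as a crude stand-in for Gaussian spectrum shaping.
        g = np.convolve(g, np.ones(smooth) / smooth, mode="valid")[:n]
        return g * np.exp(2j * np.pi * doppler_hz * t)

    return component(doppler_a_hz) + component(doppler_b_hz)

# One fading path: +1 Hz and -1 Hz Doppler on the two components.
gain = watterson_tap(n=1000, fs=8000.0, doppler_a_hz=1.0, doppler_b_hz=-1.0)
```

Summing several such taps at different delays, then adding noise to set the SNR, reproduces the structure of Figure 2.3.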

2.5 HF MODEM SOLUTIONS

Initially, there were two competing solutions for long distance transmission over HF radio links: the parallel tone modem and the single tone modem. The first generation of modems used multiple tones, in what is currently known as OFDM, to transmit voice and data signals. Popular implementations of these modems used 2, 16, 24, 39 and 52 tones. Ultimately, 16 tones were defined for data communications while 39 tones were selected for voice communications. These tones correspond to what we now know as an OFDM symbol.


A multi-tone system ensures that the data interval on each tone is longer than the multipath delay time. In essence, the multi-tone modem seeks to spread the transmitted signal. This is achieved by means of an IFFT, where the 16 or 39 tones are placed at their corresponding bins of the frequency domain signal. The design of the multi-tone modem is based on the Kineplex system [2], which consisted of a differentially encoded phase shift keying (DPSK) scheme transmitted on 20 tones. Guard bands were used to provide protection against multipath, and a synchronization tone located at 2915 Hz was used. In comparison, the 16 and 39 tone modems use four tones to achieve signal detection, frequency offset estimation, and Doppler correction. Frame synchronization is achieved using a 15-bit pseudorandom noise (PN) sequence. For voice communications, a vocoder is used to guarantee low bit rate transmission of speech. The vocoder used is the Mixed Excitation Linear Prediction enhanced (MELPe) codec, which is based on the LPC-10 linear predictive coder. Vocoders are intended for compression of speech; MELPe is capable of achieving a compression ratio of over 100 in comparison to more traditional methods such as µ-law. The resultant bit sequence is then encoded using a Golay(24,12) forward error correction (FEC) code to protect the most important information bits of the sequence. Since multiple tones are used to transmit each OFDM symbol, power efficiency is poor: when the tones align in phase, a large peak value is obtained. This is measured as the peak-to-average power ratio (PAPR) and is given as an RMS value. Since DQPSK is used, the modulation incurs a loss of 2.3 dB in comparison to QPSK [6], because each symbol decision depends on the previous symbol; an error in one symbol therefore also corrupts the decisions that depend on it.
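The PAPR penalty described above is easy to demonstrate: when 16 equal-amplitude tones are given identical phases, they add coherently once per symbol. The Python sketch below is illustrative only (the bin placement is arbitrary here) and shows the worst-case PAPR of 10·log10(16) ≈ 12 dB, which is exactly why the standard prescribes preamble phases that minimize the peak-to-RMS ratio.

```python
import numpy as np

def papr_db(x):
    # Peak-to-average power ratio of a complex baseband signal, in dB.
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

N = 64
spectrum = np.zeros(N, dtype=complex)
spectrum[8:24] = 1.0          # 16 adjacent tones, all with identical phase
symbol = np.fft.ifft(spectrum)
print(round(papr_db(symbol), 2))  # 12.04 dB: the coherent worst case
```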
A single tone modem employs a probe sequence to measure channel distortions and compensate for them. The modem uses a rate-1/2 convolutional code for error correction and 8PSK modulation.

Due to processing and technological limitations, the original design of the parallel tone modem was restricted to the research and methods of its time. Some papers have looked into bringing the standard up to modern practice using newer signal processing methods and devices. For example, Martin Gill investigated novel signal processing algorithms together with trellis-coded modulation designed to cope with fading and noise to obtain effective communication over an HF channel [1]. We will explore modern time and frequency synchronization techniques and compare them to the standard as it was proposed.

2.6 MODULATION USED

The primary modulation scheme used in this system is based on phase shift keying (PSK), where each data signal is represented by a change in phase of the transmitted signal. The demodulator must therefore have a reference signal against which to compare each incoming signal’s phase. For the receiver to recover the transmitted signal, it is assumed that there is no frequency or amplitude change in the transmitted waveform. For binary PSK, the separation between the two signal waveforms is 180 degrees. For quadrature PSK (QPSK), also known as Quadrature Amplitude Modulation (4-QAM), each symbol corresponds to a complex signal representing two bits, and the phase separation between symbols is 90 degrees. For proper demodulation, a carrier phase recovery stage must be implemented to obtain the transmitted symbols. Phase ambiguity may result from the receiver locking to the wrong phase, introducing a phase error that is a multiple of π/2. To avoid this phase ambiguity, differential coding is used, giving differential PSK, or DPSK. In DPSK, the symbols to be transmitted depend on the previous symbols, which introduces memory into the system. In this way, the phase ambiguities may be ignored and the need for a carrier phase recovery section is eliminated.

Figure 2.4. Differential Quadrature Phase shift keying block diagram.

For DQPSK, x[n] takes on values of one of four levels which are mapped to one of four phases given by

φ = 2πm / M                                                        (2.5)


where M = 4 for DQPSK and m = 0, 1, 2, …, M−1. The output of the phase mapper is run through an accumulator. The output of the accumulator, s[n], is then converted into the exponent of a complex exponential. The output, y[n], is one of four possible values: [0+j1, 0−j1, −1+0j, 1+0j]. This can then be multiplied point by point by e^(jπ/4) to obtain the outputs [−0.7071−j0.7071, 0.7071−j0.7071, −0.7071+j0.7071, 0.7071+j0.7071]. This results in the constellation shown in Figure 2.5.

Figure 2.5. DQPSK Constellation Diagram.
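The mapper-accumulator chain just described can be sketched directly. This Python fragment is illustrative (the thesis’s modeling was done in MATLAB): each dibit value m selects a phase increment 2πm/M per Eq. (2.5), the increments are accumulated, and the e^(jπ/4) rotation places the symbols on the diagonal constellation of Figure 2.5.

```python
import numpy as np

def dqpsk_modulate(dibits, M=4):
    # Phase mapper: dibit m -> increment 2*pi*m/M (Eq. 2.5).
    increments = 2.0 * np.pi * np.asarray(dibits) / M
    # Accumulator: s[n] is the running phase sum (memory of the scheme).
    s = np.cumsum(increments)
    # Complex exponential, rotated by pi/4 onto the +/-0.7071 grid.
    return np.exp(1j * (s + np.pi / 4))

syms = dqpsk_modulate([0, 1, 2, 3])
# All symbols lie on the unit circle at odd multiples of pi/4.
```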

After DQPSK modulation, the data is modulated onto 16 tones of a 64 point IFFT symbol. At a sample rate of 7200 samples per second, the frequency spacing between bins is 112.5 Hz. The 16 tones begin at a frequency of 900 Hz, with a spacing of 112.5 Hz, up to 2587.5 Hz. This corresponds to bins 9 through 24. Once all 64 bins are formed, they are run through an IFFT.
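The bin placement above can be verified with a short sketch. One assumption is made explicit here: the text’s “bins 9 through 24” is taken as 1-based (MATLAB-style) counting, since 900 Hz / 112.5 Hz = 8 in 0-based indexing, which is what Python/NumPy uses.

```python
import numpy as np

FS = 7200.0                    # samples per second
N = 64                         # IFFT length, so FS / N = 112.5 Hz per bin
TONE_BINS = np.arange(8, 24)   # 0-based: 8*112.5 = 900 Hz ... 23*112.5 = 2587.5 Hz

def tones_to_symbol(dqpsk_symbols):
    # Place the 16 DQPSK symbols on their tone bins and synthesize the
    # time-domain OFDM symbol. Scaling and guard handling are omitted.
    spectrum = np.zeros(N, dtype=complex)
    spectrum[TONE_BINS] = dqpsk_symbols
    return np.fft.ifft(spectrum)

symbol = tones_to_symbol(np.ones(16, dtype=complex))  # 64 complex samples
```

Taking the FFT of `symbol` recovers the 16 unit symbols on bins 8–23, confirming the tone frequencies.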

2.7 CHALLENGES

The documentation obtained on the parallel tone modems contains only the requirements that must be followed to implement a fully compliant modem. In addition, literature on parallel tone modems was scarce and inconsistent. Various competing versions of the modem were found which, in many cases, describe the evolution of the system in its various stages. In order to fully understand the architecture, a comparison with modern digital communication systems was used. The modem was primarily used in the military realm, so much of the documentation was requested through the Defense Technical Information Center (DTIC). Along with the military standard requirements, two studies on the modem were used as the primary sources of information to fully describe the modem operation and implementation.


CHAPTER 3

MODEM DESCRIPTION

The following section describes the operation of a 16-tone parallel tone modem as described by MIL-STD-188-110. We will focus our attention mainly on the 16 tone data modem for practical purposes.

Figure 3.1. Block diagram of Data Modem.

The transmission is composed of a modem preamble, which in turn consists of four sections for signal detection, frequency correction and frame detection. This is followed by a communications security (COMSEC) section, which contains specific information about the modem, including the voice/data encryption used, bit rate, modes of operation, degree of interleaving, coding, diversity and other configuration data. Following the COMSEC section, a set of dummy frames is used to allow the receiver to process the forward error correction and information found in the COMSEC section. Following the dummy frames is a 16-tone phase reference field. After the phase reference field, a sync verification field is transmitted and is used to verify COMSEC synchronization. This field has a duration of two frames and consists of a 32-bit pattern. Following the sync verification field, the data field, or payload, is included. For 2400 bits-per-second, 32-bit frames are used. Each pair of successive bits is grouped to form a dibit, which is then modulated using DQPSK. Once modulated, the symbols are placed onto 16 tones, or OFDM bins. Following the data field, an End-of-Message field is included. This field consists of a 32-bit pattern which is used to identify the end of the entire data block transmitted.

3.1 MODEM REQUIREMENTS

The first part of the modem consists of the modem preamble. The modem preamble in turn consists of four parts.

Figure 3.2. Modem preamble for the 16 tone parallel tone modem.

The first section of the preamble is composed of four unmodulated tones spaced 675 Hz apart, ranging from 787.5 Hz up to 2812.5 Hz. The initial phases of the tones are selected to minimize the peak-to-root-mean-square amplitude ratio of the composite signal. This signal is used by the receiver for signal presence detection and frequency and Doppler correction. The transmission of this signal lasts 24 frames at a rate of 75 frames/second, corresponding to a frame period of 13.3 ms. The second section of the preamble consists of three modulated sync tones spaced 675 Hz apart and biphase modulated (±π/2) at the 75 Hz frame rate. This signal is used to acquire frame synchronization. The transmission of this signal lasts 8 frame periods. The preamble tones and corresponding initial phases for parts I and II are given in Table 3.1.


Table 3.1. Frequency and Initial Phases for the First and Second Part of the Modem Preamble

  Part I                           Part II
  Freq. (Hz)  Initial Phase (Deg)  Freq. (Hz)  Initial Phase (Deg)
  787.5       0                    1125        0
  1462.5      103.7                1800        90
  2137.5      103.7                2475        0
  2812.5      0

The third section of the preamble consists of a 16-tone reference frame. It contains 16 tones with their corresponding phase angles. The frequency spacing between tones is 112.5 Hz, ranging from 900 Hz to 2587.5 Hz. This signal is used to establish a phase reference for the beginning of the fourth section of the modem preamble. The tone reference is listed in Table 3.2.

Table 3.2. Frequency and Initial Phases of the 16 Tones for the Parallel Tone Modem

  Freq. (Hz)  Initial Phase (Deg)
  900.0       19.69
  1012.5      71.72
  1125.0      84.38
  1237.5      350.16
  1350.0      171.56
  1462.5      21.09
  1575.0      101.25
  1687.5      344.53
  1800.0      53.44
  1912.5      60.47
  2025.0      208.13
  2137.5      68.91
  2250.0      25.31
  2362.5      189.84
  2475.0      45.00
  2587.5      243.28

The fourth section of the modem preamble consists of a 15-bit PN sequence. The sequence is cyclically shifted and transmitted 16 times during a period of 15 frames. The generator polynomial for the PN sequence is x⁴ + x + 1 [7], with an initial load of all ones. Each of the 15-bit PN sequences is DPSK modulated at the frame rate onto one of the 16 tones above. This sequence is used for frame synchronization and to identify the beginning of the COMSEC system preamble.

Table 3.3. 15 Bit PN Tone Sequence for Each of the 16 Tones

Framing Sequence Transmitted in 15 Frames

  Tone  Frames 1-15
   1    1 1 1 1 0 0 0 1 0 0 1 1 0 1 0
   2    1 1 1 0 0 0 1 0 0 1 1 0 1 0 1
   3    1 1 0 0 0 1 0 0 1 1 0 1 0 1 1
   4    1 0 0 0 1 0 0 1 1 0 1 0 1 1 1
   5    0 0 0 1 0 0 1 1 0 1 0 1 1 1 1
   6    0 0 1 0 0 1 1 0 1 0 1 1 1 1 0
   7    0 1 0 0 1 1 0 1 0 1 1 1 1 0 0
   8    1 0 0 1 1 0 1 0 1 1 1 1 0 0 0
   9    0 0 1 1 0 1 0 1 1 1 1 0 0 0 1
  10    0 1 1 0 1 0 1 1 1 1 0 0 0 1 0
  11    1 1 0 1 0 1 1 1 1 0 0 0 1 0 0
  12    1 0 1 0 1 1 1 1 0 0 0 1 0 0 1
  13    0 1 0 1 1 1 1 0 0 0 1 0 0 1 1
  14    1 0 1 1 1 1 0 0 0 1 0 0 1 1 0
  15    0 1 1 1 1 0 0 0 1 0 0 1 1 0 1
  16    1 1 1 1 0 0 0 1 0 0 1 1 0 1 0
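The sequence in Table 3.3 can be reproduced with a small linear feedback shift register sketch. This is illustrative Python (the thesis uses MATLAB); the recurrence follows from the characteristic polynomial x⁴ + x + 1, i.e. s[n+4] = s[n+1] ⊕ s[n], with an all-ones initial load. Each tone's row is a left circular shift of the base sequence.

```python
def pn15(seed=(1, 1, 1, 1)):
    """15-bit m-sequence from x^4 + x + 1 with an all-ones initial load."""
    s = list(seed)
    out = []
    for _ in range(15):
        out.append(s[0])
        s = s[1:] + [s[0] ^ s[1]]  # s[n+4] = s[n] XOR s[n+1]
    return out

seq = pn15()
tone = lambda n: seq[n - 1:] + seq[:n - 1]  # tone n: left shift by n-1
print(tone(1))  # [1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0]
```

Note that tone 16 (a shift of 15 positions) wraps around to the same pattern as tone 1, exactly as in Table 3.3.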

A full description of the COMSEC section is left out for security purposes. Since the COMSEC section will not be implemented, the sync verification field will also be omitted. The data section, as mentioned above, consists of 32 bits per frame, where each two successive bits are paired together and modulated using DQPSK.


Table 3.4. Dibit Grouping of the 32 Bit Data Frame

  Dibit grouping:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
  Even bits:       1  3  5  7  9 11 13 15 17 19 21 23 25 27 29 31
  Odd bits:        2  4  6  8 10 12 14 16 18 20 22 24 26 28 30 32

These 16 dibits are then modulated onto the 16 tones or bins and an IFFT is used to obtain time domain OFDM symbols. The final section consists of a 32 bit end of message sequence. This sequence finalizes the entire transmission block and completes the process. The receiver then goes back into a search mode and looks for the next signal detection block to start the process again.

3.2 OFDM MODULATION

Parallel tone modem systems employ OFDM modulation to combat multipath fading. As stated earlier, serial tone modems have replaced parallel tone modems. There has been a resurgence in HF communication systems due to their increased reliability and reduced cost when compared to satellite communication systems. The current push is toward increased data rates within the allocated 3 kHz channels and by combining several contiguous and non-contiguous 3 kHz channels. Serial tone modems use 8-PSK for a bit rate of 4800 bits-per-second and 64-QAM for 9600 bits-per-second. M-QAM schemes are more bandwidth efficient, but each doubling of the constellation size reduces the noise margin by roughly 3 dB [3]. To achieve a data rate of 19,200 bits-per-second would require the use of 4096-QAM, which is much more susceptible to multipath fading. Furthermore, as data rates increase, the complexity and size of the serial modem's adaptive equalizer increase. In this case, a multicarrier transmission scheme such as OFDM may be a better choice. In the last decade, there has been continual interest in multiple-input multiple-output (MIMO) systems as a way to increase the data rate: by increasing the number of transmit and receive antennas by a factor of N, the data rate may also be increased by the same factor. The downside of MIMO at HF is that the large HF antennas would have to be separated significantly because of the long wavelengths involved, whereas higher frequencies can exploit much smaller antenna spacing. Nevertheless, antennas with different polarizations may be considered.


OFDM consists of partitioning a wideband signal, consisting of M-QAM symbols, into several narrowband signals, which reduces the complexity of the receiver equalizer for each subchannel. By enforcing orthogonality between subchannels, inter-channel interference (ICI) may be reduced. The subchannels are selected to be of equal bandwidth and are centered on different carrier frequencies in the frequency domain. The symbols are then transmitted on the orthogonal channels in parallel. In order to achieve orthogonality, the OFDM symbols are created by using an IFFT. The spectra of the subcarriers overlap for bandwidth efficiency. The spectrum of the OFDM symbol can be viewed as a set of frequency-shifted sinc functions, spaced at intervals of 1/T, where T is the OFDM symbol duration.
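The orthogonality property can be checked numerically: over one symbol period, the inner product of two different subcarriers is zero even though their spectra overlap. A minimal NumPy demonstration (illustrative only, using the modem's 64-point symbol size):

```python
import numpy as np

N = 64                        # FFT size used by the 16-tone modem
n = np.arange(N)

def subcarrier(k):
    """One complex subcarrier over a single symbol period."""
    return np.exp(2j * np.pi * k * n / N)

# Inner product over one symbol period: N when k == m, zero otherwise
ip = lambda k, m: np.vdot(subcarrier(k), subcarrier(m))
print(abs(ip(9, 9)))   # 64.0
print(abs(ip(9, 10)))  # ~0: orthogonal despite overlapping spectra
```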

Figure 3.3. OFDM orthogonal subcarriers.

In order to reduce adjacent channel interference, guard bands are inserted around each OFDM symbol. Furthermore, a cyclic prefix is inserted within these guard intervals to reduce inter-symbol interference (ISI). A cyclic prefix is created by copying the last samples of each OFDM symbol and inserting them at the beginning of that same symbol; the copied portion can range anywhere from 32 to 64 samples. For a 16-tone modem, a symbol transmitted in parallel on 16 tones is 16 times longer than its serial equivalent, so the expected multipath delay spread affects only a small percentage of each symbol and ISI is largely eliminated with only a small loss in performance. The addition of the guard time and cyclic prefix reduces interference from one symbol to the next. The guard time is typically kept only as long as the expected multipath delay spread so as not to waste signal power and symbol rate.
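Cyclic prefix insertion is a one-line operation. A NumPy sketch (illustrative; the 32-sample prefix matches the 96-sample frames used later in this thesis):

```python
import numpy as np

def add_cyclic_prefix(sym, ncp=32):
    """Prepend the last ncp samples of a symbol (64 + 32 = 96-sample frame)."""
    return np.concatenate([sym[-ncp:], sym])

sym = np.arange(64)
frame = add_cyclic_prefix(sym)
print(len(frame))  # 96
```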

Figure 3.4. OFDM transmission structure.

Binary phase shift keying (BPSK) represents a bit of value 0 or 1 with a sine wave having a phase shift of 0 degrees or 180 degrees. At the receiver, the signal can be recovered by multiplying it with a phase coherent copy of the original carrier. Quadrature phase shift keying (QPSK) carries 2 bits per symbol, represented by four equally spaced phases (0°, 90°, 180°, and 270°). Each dibit pair can be represented by a complex number (1+j, 1−j, −1+j, −1−j). Differential phase shift keying (DPSK) is a scheme that introduces memory into the system. It is similar to BPSK except that the current phase of the signal depends on the phase of the previous symbol transmitted; a change in phase determines the value of the symbol transmitted. Although the data capacity is the same as BPSK, its noise performance is up to 3 dB worse for the same bit error rate. This is because the phase reference is itself a noisy received symbol, and because of the memory a single symbol error typically corrupts two successive bit decisions. Nevertheless, DPSK is much simpler to implement because the receiver does not need a copy of the reference carrier to determine the phase of the received signal. It is a non-coherent modulation scheme.
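The non-coherent property can be shown in a few lines: the demodulator only compares each symbol's phase with the previous one, so a constant unknown phase rotation does not affect the decisions. This NumPy sketch of differential QPSK is illustrative only (function names and the all-zero phase reference are my own choices):

```python
import numpy as np

def dqpsk_mod(dibits):
    # Phase accumulates by 2*pi*m/4 per symbol, starting from phase 0
    ph = np.cumsum(2 * np.pi * np.asarray(dibits) / 4)
    return np.exp(1j * ph)

def dqpsk_demod(rx, ref=1.0 + 0j):
    # Non-coherent: compare each symbol's phase with the previous symbol
    prev = np.concatenate([[ref], rx[:-1]])
    dphi = np.angle(rx * np.conj(prev))
    return np.mod(np.round(dphi / (np.pi / 2)), 4).astype(int)

bits = np.array([0, 1, 2, 3, 3, 1])
rx = dqpsk_mod(bits)
print(dqpsk_demod(rx))  # [0 1 2 3 3 1]
```

Rotating the whole received sequence by a fixed unknown phase (up to ±45°) leaves the decisions unchanged, which is exactly why no carrier phase reference is needed.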

3.3 SOFTWARE IMPLEMENTATION

MATLAB was used to simulate the preamble and payload of the HF modem. Portions of the modem have been implemented in GNU Radio using a USRP N210 as the chosen platform. In this thesis, the MATLAB code of the publicly available portions of the modem will be made available. All other sections will remain unavailable due to classification reasons.


CHAPTER 4

HF MODEM STANDARD AND IMPLEMENTATION

This chapter describes the software implementation of the modem, including both the transmitter and receiver. The implementation was written in MATLAB using the Communications and Signal Processing Toolboxes.

4.1 THE TRANSMITTER

The modem in question consists of a preamble, a COMSEC section, a 16-tone phase reference, a sync verification section, data on 16 multiband tones and an end-of-message section. The preamble, or synchronization field, is used for detection of signal presence, Doppler correction, bit synchronization, frame synchronization and to mark the beginning of the COMSEC section. The preamble itself begins with four unmodulated tones whose initial phases are selected to minimize the peak-to-average power ratio (PAPR) of the composite signal. The length of this section is 24 frames at 0.0133 seconds/frame. The four tones range in frequency from 787.5 to 2812.5 Hz with a frequency separation of 675 Hz. The initial phase is 0 radians for the first and last tones and 1.81 radians for the second and third tones.
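Generating this first preamble section is straightforward. A NumPy sketch under the parameters above (illustrative; the thesis implementation is in MATLAB): at 7200 samples/second and 75 frames/second, each frame is 96 samples, so 24 frames give 2304 samples.

```python
import numpy as np

FS = 7200.0                     # samples/second
FRAME = 96                      # samples per 13.3 ms frame (7200 / 75)
freqs = [787.5, 1462.5, 2137.5, 2812.5]        # Hz (Table 3.1, Part I)
phases = np.deg2rad([0.0, 103.7, 103.7, 0.0])  # chosen to reduce PAPR

t = np.arange(24 * FRAME) / FS  # 24 frames of unmodulated tones
part1 = sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
papr = np.max(part1 ** 2) / np.mean(part1 ** 2)
print(len(part1), round(10 * np.log10(papr), 2))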


Figure 4.1. Time and frequency domain representation of the first part of the preamble.

The second part of the preamble consists of three tones modulated 180 degrees at the frame rate. Their initial phases are again chosen to minimize the PAPR of the composite signal. The length of this section is 8 frames at 0.0133 seconds/frame. The three tones range from 1125 to 2475 Hz with a frequency separation of 675 Hz. The initial phase is 0 radians for the first and last tones and 1.57 radians for the second tone.

Figure 4.2. Time and frequency domain representation of the second part of the preamble.


The third part of the preamble consists of 16 tones ranging from 900 Hz to 2587.5 Hz with a frequency separation of 112.5 Hz. The length of this section is a single frame of 0.0133 seconds. The 16-tone composite is used to establish a phase reference. Each tone has an initial phase angle as given in Table 3.2.

Figure 4.3. Time and frequency domain representation of the third part of the preamble.

The fourth part of the preamble consists of a 15-bit PN sequence cyclically shifted by one bit and transmitted 16 times over a period of 15 frames. The frame period is 0.0133 seconds/frame. These tones are differentially encoded using DPSK and sent onto 16 frequency bins of an IFFT symbol. This section is used to obtain frame synchronization and to identify the beginning of the next section. The COMSEC section contains information regarding the encryption to be used for the data as well as the parameters, modes, data rates, bits per frame, degree of interleaving, type of coding, in-band diversity, and other configuration information for the modem. Since this section is outside the scope of this thesis, it will be excluded. The data rate we will be using is 2400 bps, and we will assume that the preamble is sent only once at the beginning of the data call. Since the COMSEC section is not included, the sync verification section is also excluded, as it is used only to verify COMSEC synchronization.


Figure 4.4. Time and frequency domain representation of the fourth part of the preamble.

The payload section consists of the data. Since we will be working with a data rate of 2400 bps, no encryption, coding, interleaving, or in-band diversity is required; for lower data rates, this is not the case. At this data rate, 32 bits per frame are required. The 32-bit frames are split into 16 dibits, which in turn are modulated onto 16 bins of a 64-point IFFT using DQPSK modulation. The mapping of bits into dibits is achieved by grouping consecutive pairs of bits. The final section is an end-of-message section, which contains 8 frames of 32 bits representing the end-of-message pattern. This section will not be implemented since we fix the length of the data at 200 frames.

4.1.1 The Channel Model

The Watterson channel model is commonly used; however, for simplicity we use an AWGN channel with multipath delay and Doppler offset. For multipath, extreme values have been found to consist of 6 milliseconds of differential delay and 5 Hz of Doppler spread [8]. The maximum possible frequency offset is ±75 Hz. The modem requires a signal-to-noise ratio of about 10 dB or better to achieve a bit error rate better than 10⁻⁶ [1].
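The simplified channel can be sketched as a two-path delay, a complex frequency rotation, and additive complex Gaussian noise. This is illustrative Python under the parameters above (the 0.5 second-path gain and the random seed are my own assumptions, and this is explicitly not a Watterson model):

```python
import numpy as np

rng = np.random.default_rng(1)

def simple_hf_channel(x, fs=7200.0, delay_s=0.006, f_offset=32.0, snr_db=10.0):
    """AWGN channel with one delayed path and a frequency offset.

    delay_s: differential path delay (6 ms extreme case [8]);
    f_offset: carrier frequency offset, within the +/-75 Hz limit.
    """
    d = int(round(delay_s * fs))
    y = x + 0.5 * np.concatenate([np.zeros(d, dtype=complex), x[:-d]])
    n = np.arange(len(y))
    y = y * np.exp(2j * np.pi * f_offset * n / fs)   # Doppler/frequency offset
    p = np.mean(np.abs(y) ** 2)
    sigma = np.sqrt(p / (2 * 10 ** (snr_db / 10)))   # per-component noise std
    y = y + sigma * (rng.standard_normal(len(y)) + 1j * rng.standard_normal(len(y)))
    return y

x = np.exp(2j * np.pi * 900.0 * np.arange(2304) / 7200.0)
y = simple_hf_channel(x)
print(len(y))  # 2304
```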


4.2 PREAMBLE SYNCHRONIZATION

The first part of the modem preamble is used for signal presence detection and frequency offset correction. This is done by mixing the incoming signal with its corresponding tone frequency to bring it to baseband (DC). The result is then lowpass filtered using a 3rd-order Chebyshev Type II filter with a passband of ±75 Hz.
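A comparable filter can be designed with SciPy (the thesis uses MATLAB; this sketch is illustrative). In SciPy's `cheby2`, the cutoff is the stopband edge, so the 150 Hz edge and 40 dB stopband attenuation below are assumed values chosen to leave roughly a ±75 Hz passband:

```python
import numpy as np
from scipy import signal

FS = 7200.0
# 3rd-order Chebyshev Type II lowpass; 40 dB attenuation and a 150 Hz
# stopband edge are assumed here, approximating a +/-75 Hz passband
b, a = signal.cheby2(3, 40, 150.0, btype='low', fs=FS)
w, h = signal.freqz(b, a, worN=2048, fs=FS)
print(round(abs(h[0]), 3))  # gain near DC
```

The Type II response is flat in the passband (unity gain at DC) with equiripple attenuation in the stopband, which is why it suits an energy-detection front end.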

Figure 4.5. Low pass Chebyshev2 Filter Response.

At the receiver, the 4 tones corresponding to the first part of the modem preamble are used for signal presence detection followed by Doppler frequency correction and tracking. This is done by comparing the energy of the noise with the expected tone energies. A total of 8 filters are used: four filters are centered at the 4 Doppler tone frequencies, while another set of filters is centered between the Doppler tones, at empty slots, to collect noise energy information. The bandwidth of each filter is 150 Hz, so a Doppler shift in the range of ±75 Hz can be detected. The signal detection block is shown in Figure 4.6.


Figure 4.6. Signal Presence Detector.

Figure 4.7. Channel effects on the four tone preamble.


The resultant output in the presence of noise is shown in Figure 4.8, where the larger signal corresponds to signal presence and the smaller one corresponds to the noise.

Figure 4.8. Signal Presence Detector Output.

After successful signal presence detection, a Doppler shift estimator is used for Doppler correction due to the frequency selective fading channel. A two-stage adaptive Doppler estimator is described in the literature. The same four tone filters above may be used for the initial stage of the estimator. A single-stage Doppler estimator is shown in Figure 4.9 for coarse frequency correction. The output of the coarse frequency correction block is shown in Figure 4.10 for a frequency offset of 32 Hz. The figure shows, in blue, the original signal from samples 250 to 300 after the multipath and AWGN with no frequency offset. The plot in red shows the output of the Doppler estimator applied to the signal with the 32 Hz offset.


Figure 4.9. Doppler Estimation Block.

Figure 4.10. Doppler Estimation of offset signal compared to original signal.


Following the first four tones of the preamble is the second part consisting of three tones modulated 180 degrees at the frame rate. A delay line autocorrelation block was used to find where the frames of this part of the preamble begin.

Figure 4.11. Autocorrelation of the three modulated tones.

A delay of 64 points was introduced in the original preamble, followed by a multipath filter and additive noise. The signal begins after 160 samples, which is the result of the 64-point delay plus the delay of 96 in the correlation block. The averaged output is shown in Figure 4.12.

Figure 4.12. Autocorrelation output of the three modulated tones.

The second part of the preamble is then used for fine frequency correction. The same estimation block of Figure 4.9 is used with a minor modification to the lowpass filter: its bandwidth is reduced to 16.2 Hz to obtain a finer correction estimate. The third part of the modem preamble consists of a single frame of a composite signal made up of the 16 tones with the initial phases shown in Table 3.2.


Figure 4.13. Composite 16 tone signal of modem preamble.

The phase offset is found by multiplying the known composite signal by the conjugate of the received signal and solving for the phase angle.

Figure 4.14. Phase offset correction of composite 16 tone signal of modem preamble.

The fourth part of the preamble consists of the 15-bit PN sequence shown in Table 3.3. The 15-bit sequence is circularly shifted at each of the 16 consecutive frames. Each 15-bit frame is modulated using DPSK and spread in frequency using a 64-bin IFFT. A guard band, or cyclic prefix, of 32 samples is used to obtain a total of 96 samples per frame.

Figure 4.15. Time and frequency spectrum of PN sequence.

A matched filter is used at the receiver to find the beginning of the next section, which consists of the data payload. By using a matched filter of length equal to the 15 frames, we essentially average the result over each individual frame.

4.3 THE DATA PAYLOAD

For this example, we use a sequence of 200 frames consisting of DQPSK modulated symbols. Each frame is modulated using a 64-point IFFT block. A cyclic prefix is added to the beginning of each symbol to obtain 96 samples per frame. The undistorted payload is shown in Figure 4.17, and the result with frequency offset, phase offset, multipath and AWGN noise is shown in Figure 4.18.


Figure 4.16. Correlation of PN sequence with match filter.

Figure 4.17. Data payload with no distortion or noise.


Figure 4.18. Data payload with multipath, noise, phase and frequency offsets.


CHAPTER 5

MODERN COMMUNICATION SYNCHRONIZATION METHODS

We now focus our attention on modern communication synchronization algorithms and compare them with the requirements given in the military standard documentation, in particular MIL-STD-188-110C. As stated earlier, OFDM is a simple method of dealing with multipath fading. Data is transmitted in parallel on overlapping orthogonal subcarriers with longer symbol periods, essentially doubling the spectral efficiency. The parallel transmission ensures a low data rate per subcarrier, which helps to reduce the effects of inter-symbol interference. A cyclic prefix is added to the start of each OFDM symbol, consisting of about 25% additional samples taken from the end of the OFDM symbol itself. If the cyclic prefix is longer than the delay spread, the multipath signal will not interfere with the subsequent symbol, thereby eliminating ISI. Nevertheless, OFDM is sensitive to timing and frequency synchronization errors. This is because the subcarriers are composed of sinc functions whose side lobes leak into other subcarriers if orthogonality is not maintained.

5.1 TIMING SYNCHRONIZATION

Synchronization offsets in OFDM consist of both timing and frequency offsets. In the case of timing synchronization, δ represents the normalized symbol timing offset (STO). In the time domain an STO results in a time delay, while in the frequency domain it introduces a phase offset of 2πkδ/N, where N is the total number of orthogonal subcarriers and k is the subcarrier index [9]. There are four possible timing offset cases. In the first case, the estimated starting point coincides with the OFDM symbol starting point. This is the ideal case, in which orthogonality is maintained. The second case consists of the estimated starting point falling before the start of the OFDM symbol but after the delay spread of the channel response. Orthogonality is preserved and no ISI is incurred, but a phase offset is created; a single-tap frequency domain equalizer may be used to correct this [9]. The third case consists of the estimated point falling before the OFDM start point and before the end of the channel response. This results in ISI as well as inter-channel interference due to loss of orthogonality. The fourth and final case occurs when the estimated starting point falls after the start of the OFDM symbol, destroying orthogonality and introducing ICI; ISI from the next OFDM symbol is also introduced. The distortion in this case is too severe to compensate and must therefore be avoided by using a symbol timing synchronization scheme. STO estimation can be implemented in the time or frequency domain. In the time domain, both non-data-aided and data-aided techniques can be used. For non-data-aided methods, the cyclic prefix is used for timing estimation, since the same data can be found at the start and end of each OFDM symbol. Therefore, a pair of correlation windows can be used to find the start of each symbol. By using two sliding windows of the same size as the cyclic prefix, separated N samples apart, we can find where the start of the cyclic prefix and its matching block within the OFDM symbol coincide.

Figure 5.1. Timing synchronization using cyclic prefix.

The similarity between the two windows is greatest when the difference between them is minimized. This can be represented by the following equation:

δ̂ = arg min_δ { Σ_{i=δ}^{N_G−1+δ} |y_l[n+i] − y_l[n+N+i]| }    (5.1)

where y_l is the l-th received OFDM symbol. However, this estimate is degraded if a CFO exists [9]. The estimation can be improved by minimizing the squared difference between the magnitudes of the samples in the two windows, conjugating the second window's samples:


δ̂ = arg min_δ { Σ_{i=δ}^{N_G−1+δ} ( |y_l[n+i]| − |y_l*[n+N+i]| )² }    (5.2)

A third approach consists of taking the correlation between the two blocks and maximizing its magnitude, a maximum-likelihood approach:

δ̂ = arg max_δ | Σ_{i=δ}^{N_G−1+δ} y_l[n+i] y_l*[n+N+i] |    (5.3)
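The correlation-based estimator of Eq. (5.3) can be sketched in a few lines of NumPy (illustrative only; the 64/32 sizes match this modem's frames, and the low-level noise and seed are my own assumptions). Because partial window overlaps also correlate, the metric forms a ramp that peaks at, or within a sample of, the true start of the cyclic prefix:

```python
import numpy as np

rng = np.random.default_rng(2)
N, NG = 64, 32                    # symbol body and cyclic-prefix lengths

def sto_ml(y):
    """Eq. (5.3) sketch: slide a CP-sized window and correlate it with the
    window N samples later; the maximum marks the cyclic-prefix start."""
    metric = [abs(np.sum(y[d:d + NG] * np.conj(y[d + N:d + N + NG])))
              for d in range(len(y) - (N + NG) + 1)]
    return int(np.argmax(metric))

body = rng.standard_normal(N) + 1j * rng.standard_normal(N)
sym = np.concatenate([body[-NG:], body])     # CP + body = 96 samples
noise = lambda n: 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = np.concatenate([noise(40), sym, noise(40)])
print(sto_ml(y))                              # close to 40, the true start
```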

For data-aided schemes, training symbols are transmitted and used for synchronization at the receiver. Classic algorithms include Schmidl and Cox, H. Minn et al., Park, Ren and Seung, to name a few. Schmidl and Cox employ two consecutive, identical training symbols to estimate the time and frequency offset. The correlation between the two training symbols can be used to find the start of the frame. The metric, however, suffers from a plateau, which introduces ambiguity in determining the start of the frame. The sequence used to generate the training symbols is chosen to reduce the peak-to-average power ratio so that there is little distortion at the transmitter; an m-sequence is chosen to reduce PAPR [10]. The timing metric is given by

M(d) = |P(d)|² / (R(d))²    (5.4)

where

P(d) = Σ_{m=0}^{L−1} r*_{d+m} r_{d+m+L}    (5.5)

R(d) = Σ_{m=0}^{L−1} |r_{d+m+L}|²    (5.6)

where L = N/2 is the length of half of the training symbol, r_k is the received signal, and d is the time index of the first sample of the sliding window of length 2L samples. Minn et al. modified the Schmidl and Cox algorithm by constructing the training symbol from a sequence of length N/4 repeated twice and then repeated another two times in negated form:

S_M = [A A −A −A]

Here A represents a block of L = N/4 samples composed of the IFFT of a PN sequence's modulated data. The timing metric used to evaluate the timing estimate is given by


M(d) = |P(d)|² / (R(d))²    (5.7)

where

P(d) = Σ_{k=0}^{1} Σ_{m=0}^{L−1} r*_{d+2Lk+m} r_{d+2Lk+m+L}    (5.8)

R(d) = Σ_{k=0}^{1} Σ_{m=0}^{L−1} |r_{d+2Lk+m+L}|²    (5.9)

Here the sliding window is of length 2L. Building on Minn's method, Park proposed a preamble consisting of four parts, where the first two parts are symmetric with each other and the last two are the conjugates of the first and second parts, respectively:

S_P = [A B A* B*]

Here A is of length L = N/4 and is generated from a PN sequence. The outcome is a sharper metric, though smaller in amplitude. The timing metric is given by

M(d) = |P(d)|² / (R(d))²    (5.10)

P(d) = Σ_{k=0}^{N/2} r_{d−k} r_{d+k}    (5.11)

R(d) = Σ_{k=0}^{N/2} |r_{d+k}|²    (5.12)

Non-data-aided methods work well in AWGN channels, but a severely degraded channel may cause the starting point estimate to fluctuate [11].
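The Schmidl and Cox metric of Eqs. (5.4)-(5.6) can be sketched directly. This NumPy example (illustrative; random training halves and guard zeros are my own test setup) shows that in the noise-free case the metric reaches exactly 1 where the two identical halves align:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
L = N // 2

def sc_metric(r, d):
    """Schmidl & Cox timing metric at candidate start d."""
    P = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])  # Eq. (5.5)
    R = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)           # Eq. (5.6)
    return np.abs(P) ** 2 / R ** 2                        # Eq. (5.4)

half = rng.standard_normal(L) + 1j * rng.standard_normal(L)
r = np.concatenate([np.zeros(20), np.tile(half, 2), np.zeros(20)])
print(round(sc_metric(r, 20), 3))  # 1.0 at the true start (noise-free)
```

With a cyclic prefix present, several adjacent offsets give the same full correlation, which produces the plateau ambiguity discussed above.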

5.2 FREQUENCY SYNCHRONIZATION

After precise timing estimation and correction, the training sequence is processed using an N-point FFT. The results are then used for coarse and fine frequency offset estimation. There are two sources of frequency offset distortion. One is the mismatch between the transmitter's and receiver's local oscillators. The other is the carrier frequency offset (CFO) due to the Doppler shift (f_d) caused by relative movement between transmitter and receiver. The frequency offset is given by

f_offset = f_c − f_c′    (5.13)

where f_c is the carrier frequency at the transmitter and f_c′ is the carrier frequency at the receiver. The Doppler frequency shift can be represented by

f_d = (v · f_c) / c    (5.14)

where v is the relative speed between transmitter and receiver, and c is the speed of light. The normalized CFO can be expressed as

ɛ = f_offset / Δf    (5.15)

where Δf is the subcarrier spacing. The CFO in turn is composed of an integer carrier frequency offset (IFO) and a fractional carrier frequency offset (FFO); this can be expressed as ɛ = ɛ_i + ɛ_f. In the time domain, a CFO appears as a phase rotation of 2πnɛ/N, while in the frequency domain it results in a frequency shift of −ɛ. The effect of an IFO is a severe degradation of the bit error rate (BER); nevertheless, orthogonality is maintained and no ICI is introduced [9]. An FFO, on the other hand, introduces amplitude and phase distortion and therefore destroys orthogonality and incurs ICI. As with timing estimation, CFO estimation techniques can be categorized into data-aided and non-data-aided algorithms. Non-data-aided techniques take advantage of the cyclic prefix to estimate the CFO in the time domain. This technique, however, is only good for estimating a CFO in the range [−0.5, 0.5], and hence is suitable for fractional CFO estimation [9]. The phase difference between the cyclic prefix and the corresponding block at the end of the OFDM symbol, spaced N samples away, is 2πNɛ/N = 2πɛ [9]. Therefore, the CFO can be found from the phase angle as

ɛ̂ = (1/2π) arg { Σ_{n=−N_G}^{−1} y_l*[n] y_l[n+N] }    (5.16)

Data-aided frequency estimation can be performed by repeating the same pattern and measuring the phase difference of each carrier between consecutive repetitions. The estimation range can be increased by increasing the number of repetition patterns [9]:

ɛ̂ = (D/2π) arg { Σ_{n=0}^{N/D−1} y_l*[n] y_l[n+N/D] }    (5.17)

where D is the number of repetition patterns. However, the MSE is degraded as the number of repetition patterns is increased; to improve MSE performance, the repeated patterns must be composed of shorter periods [9]. A third method consists of inserting pilots to estimate the CFO in the frequency domain. These pilots are transmitted on every OFDM symbol and are ideal for CFO tracking. In this method, two identical training symbols are used to estimate the CFO. This is known as the Moose algorithm [12]. The CFO can be estimated by

ɛ̂ = (1/2π) tan⁻¹ { Σ_{k=0}^{N−1} Im[Y_1*[k] Y_2[k]] / Σ_{k=0}^{N−1} Re[Y_1*[k] Y_2[k]] }    (5.18)

This CFO estimate is also limited to the range [−0.5, 0.5]; the range can be increased by increasing the repetition of the training symbol patterns [9].
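The Moose estimator of Eq. (5.18) is easy to verify numerically. In this NumPy sketch (illustrative; the random-phase training symbol and the 0.3-subcarrier offset are my own test values), the second received training symbol differs from the first by exactly e^(j2πɛ) on every FFT bin, so the estimate recovers ɛ:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 64

def moose_cfo(y1, y2):
    """Eq. (5.18): CFO from the phase between two identical training symbols."""
    Y1, Y2 = np.fft.fft(y1), np.fft.fft(y2)
    num = np.sum(np.imag(np.conj(Y1) * Y2))
    den = np.sum(np.real(np.conj(Y1) * Y2))
    return np.arctan2(num, den) / (2 * np.pi)

sym = np.fft.ifft(np.exp(2j * np.pi * rng.random(N)))  # one training symbol
tx = np.tile(sym, 2)                                   # transmitted twice
eps = 0.3                                              # true CFO, in subcarriers
rx = tx * np.exp(2j * np.pi * eps * np.arange(2 * N) / N)
print(round(moose_cfo(rx[:N], rx[N:]), 3))  # 0.3
```

For |ɛ| > 0.5 the phase wraps, which is exactly the range limitation noted above.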


CHAPTER 6

RESULTS

A comparison of the different timing estimation methods was presented in the previous chapters; in particular, Schmidl and Cox, Minn et al., Park et al., and the method used in the parallel tone modem were discussed. A fading channel with a 10 dB signal-to-noise ratio was used for this comparison, and all four methods were passed through the same channel conditions. Comparing the timing metrics, it can be seen that the Park method may be a better candidate due to its spectral efficiency compared to the method used in the standard. It also reduces the timing ambiguity when compared to Schmidl and Cox's method. The PN-based preamble used in the HF standard consists of 15 frames of 96 samples each, a long training sequence in comparison to either of the methods suggested.

Figure 6.1. Comparison of Timing synchronization Metrics.


The resulting metric shows that the levels corresponding to the parallel tone modem are much lower than those of the more modern methods proposed. This indicates a higher probability of false detection when using the parallel tone method in comparison to either the Park or the Schmidl and Cox methods. The parallel tone modem uses a standard FM discriminator approach to CFO estimation for both the coarse and fine frequency correction, with the pattern repeated multiple times to improve the IFO estimate. For various SNR values, the cyclic prefix and Moose methods were compared to the parallel tone method.

Figure 6.2. Comparison of Frequency synchronization Metrics.

The Moose method far outperforms the method used in the parallel tone modem. The mean square error (MSE) for the parallel tone modem can be improved by correcting the received signal, running it through a lowpass filter, and applying the same method again; this, however, gives only a slight improvement [1]. The Moose method is a good candidate, and according to the literature it may be improved further by a method described by F. Classen and H. Meyr, which also uses pilot tones [9].


CHAPTER 7

FUTURE WORK

The channel model used in this thesis consisted of a standard multipath channel. The next step would be to run the same comparative models on a Watterson channel model or other available HF channel models, such as a parametric model. Additionally, the standard gives no indication of a channel estimator. This area should be investigated given the difficulties of working with the ionospheric channel, and a comparison of the HF model with and without channel estimation should be considered. Additional areas of improvement lie in the implementation of PAPR reduction algorithms, which were not covered in this thesis.


REFERENCES

[1] M. C. Gill, "Coded-waveform design for high speed data transfer," Ph.D. thesis, Dept. Electron. Eng., Univ. South Australia, 1998.
[2] N. Maslin, HF Communications: A Systems Approach. New York, NY, USA: Plenum Press, 1987.
[3] W. N. Furman, E. E. Johnson, M. Jorgenson, E. Koski, and J. Nieto, Third Generation and Wideband HF Communications. Boston, MA, USA: Artech House, 2012.
[4] N. D. Davies, "Digital radio and its application in the HF (2-30MHz) band," Ph.D. thesis, Dept. Electron. Eng., Univ. Leeds, West Yorkshire, England, 2004.
[5] ITU-R, "Testing of HF modems with bandwidths of up to about 12 kHz using ionospheric channel simulators," ITU, Geneva, Switzerland, Rec. ITU-R F.1487, 2000.
[6] J. G. Proakis, Digital Communications. New York, NY, USA: McGraw Hill, 2001.
[7] Department of Defense Interface Standard, "Interoperability and performance standards for data modems," Dept. Defense, Washington DC, USA, Rep. MIL-STD-188-110B, 2000.
[8] W. M. Jewett and R. Cole, Jr., "Modulation and coding study for the advanced narrowband digital voice terminal," Naval Res. Lab, Washington DC, USA, Interim Rep. ADA060367, 1978.
[9] Y. S. Cho, J. Kim, W. Y. Yang, and C. G. Kang, MIMO-OFDM Wireless Communications with MATLAB. Singapore: Wiley-IEEE Press, 2010.
[10] B. Ai, G. E. Jian-Hua, and W. Yong, "Symbol synchronization technique in COFDM systems," IEEE Trans. Broadcast., vol. 50, no. 1, pp. 56-62, Mar. 2004.
[11] B. Ai, Z.-x. Yang, C.-y. Pan, J.-h. Ge, Y. Wang, and Z. Lu, "On the synchronization techniques for wireless OFDM systems," IEEE Trans. Broadcast., vol. 52, no. 2, pp. 236-244, June 2006.
[12] P. H. Moose, "A technique for orthogonal frequency division multiplexing frequency offset correction," IEEE Trans. Commun., vol. 42, no. 10, pp. 2908-2914, Oct. 1994.
[13] Mathworks. Homepage. [Online]. Available: http://www.mathworks.com/. Accessed Jun. 12, 2015.
[14] W. M. Jewett and R. Cole, Jr., "Use of decoder information for detection of start and end transmission in the ANDVT HF modem," Naval Res. Lab, Washington DC, USA, Mem. Rep. ADA125727, 1983.


[15] G. Duprat, Demodulation and Decoding Studies of the 39-Tone MIL-STD-188-110A HF Signal. Ottawa, Canada: Defence R&D, 2002.
[16] W. M. Jewett and R. Cole, Jr., "Performance of ANDVT HF modem with peak clipping," Naval Res. Lab, Washington DC, USA, Interim Rep. ADA092114, 1980.
[17] W. M. Jewett and R. Cole, Jr., "Non-real time stress tests of the ANDVT HF modem," Naval Res. Lab, Washington DC, USA, Mem. Rep. ADA102364, 1981.
[18] C. Dick, F. Harris, and M. Rice, "FPGA implementation of carrier synchronization for QAM receivers," J. VLSI Sig. Process., vol. 36, no. 1, pp. 57-71, Jan. 2004.
[19] J. M. Wilson, "A low power HF communications system," Ph.D. thesis, Dept. Elect. Eng., Univ. Manchester, Manchester, UK, 2011.
[20] F. Harris and G. Smith, "On the design, implementation, and performance of a microprocessor controlled fast AGC system for a digital receiver," in IEEE Military Commun. Conf., 1988, pp. 1-5.
[21] T. M. Schmidl and D. C. Cox, "Robust frequency and timing synchronization for OFDM," IEEE Trans. Commun., vol. 45, no. 12, pp. 1613-1621, Dec. 1997.
[22] A. G. Abshir, M. M. Abdullahi, and A. Samad, "A comparative study of carrier frequency offset (CFO) estimation techniques for OFDM systems," IOSR J. Electron. Commun. Eng., vol. 9, no. 4, pp. 1-6, Aug. 2014.
[23] R. R. Mosier and R. G. Clabaugh, "Kineplex, a bandwidth-efficient binary transmission system," Trans. Amer. Inst. Elect. Eng., vol. 76, no. 6, pp. 723-728, Jan. 1958.
[24] M. B. Jorgenson and K. W. Moreland, "HF serial-tone waveform design," presented at the 1999 RTO IST Symp., Lillehammer, Norway, June 14-16, 1999.
[25] M. Coutolleau, P. Vila, D. Merel, D. Pirez, and B. Mouy, "New studies about a high data rate HF parallel modem," in IEEE Military Commun. Conf., 1988, pp. 381-385.
[26] P. A. Bello, C. Boardman, L. Pickering, and R. Pinto, HF Modem Evaluation for the Advanced Narrowband Digital Voice Terminal (ANDVT). Washington DC, USA: Defense Technical Information Center, 1978.
[27] M. Adler, "High frequency modem for advanced narrowband digital voice terminal," TRW Defense Sys. Group, Redondo Beach, CA, USA, Final Rep. ADA085984, 1980.
[28] A. A. Nasir, S. Durrani, and R. A. Kennedy, "Performance of coarse and fine timing synchronization in OFDM receivers," in Proc. 2nd Int. Conf. Future Comp. Commun., Canberra, Australia, 2010, pp. 412-416.