
University of Central Florida STARS

Retrospective Theses and Dissertations

Fall 1979

The Theory and Application of Frequency Domain Deconvolution Using Optimum Compensation in Time Domain Measurement

Robert Bruce Stafford University of Central Florida

Part of the Engineering Commons. Find similar works at: https://stars.library.ucf.edu/rtd. University of Central Florida Libraries: http://library.ucf.edu

This Masters Thesis (Open Access) is brought to you for free and open access by STARS. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of STARS. For more information, please contact [email protected].

STARS Citation: Stafford, Robert Bruce, "The Theory and Application of Frequency Domain Deconvolution Using Optimum Compensation in Time Domain Measurement" (1979). Retrospective Theses and Dissertations. 449. https://stars.library.ucf.edu/rtd/449

THE THEORY AND APPLICATION OF FREQUENCY DOMAIN DECONVOLUTION USING OPTIMUM COMPENSATION IN TIME DOMAIN MEASUREMENT

BY ROBERT BRUCE STAFFORD B.S.E., Florida Technological University, 1978

THESIS Submitted in partial fulfillment of the requirements for the degree of Master of Science in Engineering in the Graduate Studies Program of the College of Engineering at the University of Central Florida, Orlando, Florida

Fall Quarter 1979

THE THEORY AND APPLICATION OF FREQUENCY DOMAIN DECONVOLUTION USING OPTIMUM COMPENSATION IN TIME DOMAIN MEASUREMENT

ROBERT BRUCE STAFFORD

ABSTRACT

This thesis describes a method of transforming a deconvolution problem into an equivalent convolution problem. A short discussion of the theory behind convolution and deconvolution is included, along with a survey of techniques now being used. A new method using a synthesized compensator is proposed and mathematically developed. Finally, the results of applying this method to several real and analytical signals are given.

Approved by:

ACKNOWLEDGEMENT

I would like to thank Dr. Sedki M. Riad and Dr. Aicha A. R. Riad for their constant support and encouragement during the writing of this thesis. Also, a special word of thanks is due to Alan Geeslin for his timely assistance with the graphic portion of this thesis. Finally, I should acknowledge a special debt of gratitude to my typist Susan Rhoades, whose assistance above and beyond the call of duty was instrumental in the completion of this thesis.

LIST OF FIGURES

2-1. Representation of a linear system
3-1. The optimum compensation process
4-1. System error energy as a function of λ
4-2. Input reference waveform for NM = 0.1
4-3. System impulse response waveform for NM = 0.1
4-4. Predicted reference waveform for NM = 0.1
4-5. Input reference waveform for NM = 1.0
4-6. System impulse response waveform for NM = 1.0
4-7. System reference response waveform for NM = 1.0
4-8. Predicted reference waveform for NM = 1.0
5-1. System test impulse
5-2. Test impulse response
5-3. System input reference waveform
5-4. System response to input reference waveform
5-5. Nahman-Gans windowed test impulse
5-6. Nahman-Gans windowed test impulse response
5-7. Nahman-Gans windowed reference waveform
5-8. Nahman-Gans windowed system reference response
5-9. Frequency domain magnitude of windowed test impulse
5-10. Frequency domain magnitude of windowed impulse response
5-11. Frequency domain magnitude of windowed reference waveform
5-12. Frequency domain magnitude of windowed system reference response
5-13. Calculated system transfer function magnitude
5-14. Optimum compensator transfer function magnitude
5-15. Reference waveform regenerated through inverse Fourier transform
5-16. Predicted reference waveform
5-17. Regenerated reference waveform plotted on same axis as predicted reference waveform
6-1. Reference waveform for testing 4 km fiber
6-2. Response waveform of 4 km fiber
6-3. 4 km fiber attenuation for various values of λ
6-4. 4 km fiber impulse response as computed using λ = 0
6-5. 4 km fiber step response as computed using λ = 0
6-6. 4 km fiber impulse response as computed using λ = 10
6-7. 4 km fiber step response as computed using λ = 10
6-8. 4 km fiber impulse response as computed using λ = 100
6-9. 4 km fiber step response as computed using λ = 100
6-10. 4 km fiber attenuation and phase functions using optimum value of λ (= 100)

TABLE OF CONTENTS

ACKNOWLEDGEMENT
LIST OF FIGURES
I. INTRODUCTION
II. THE DECONVOLUTION PROBLEM
    The Physical Deconvolution Problem
    Numerical Deconvolution and Instabilities
    Survey of Deconvolution Methods
III. THE OPTIMAL COMPENSATOR IN THE FREQUENCY DOMAIN
    The Problem Statement
    The Algorithm
IV. THE IDEALIZED TRANSFER FUNCTION EXPERIMENT
    The Software
    The Data
V. THE OSCILLOSCOPE HEAD EXPERIMENT
    The Experiment
    The Software
VI. THE OPTICAL FIBER EXPERIMENT
    The Experiment
    The Data
VII. CONCLUSION
APPENDIX 1  NSTDCN
APPENDIX 2  WNDW1
APPENDIX 3  WNDW2
LIST OF REFERENCES

CHAPTER 1

INTRODUCTION

This thesis describes the synthesis and software implementation of an optimized compensator in the frequency domain. This compensator is intended to perform the frequency domain equivalent of deconvolution in systems where classical methods are rendered untenable.

The remainder of this thesis is divided into six chapters. A brief discussion of the theory behind frequency domain deconvolution and a survey of methods presently available to perform such a deconvolution form Chapter 2. The proposed technique is synthesized and mathematically justified in Chapter 3. Chapter 4 contains a description of an experiment which was performed using a PDP 11/34 computer to operationally verify the compensator. A hardware-based experiment is described in Chapter 5, in which the optimum compensator technique was used to reconstruct the input waveform to a Tektronix S-1 oscilloscope sampling head. Chapter 6 discusses an experiment which was carried out at the Kennedy Space Center Laser Lab, in which the optimum compensator method was used to extract the impulse response of an optical fiber. Finally, Chapter 7 discusses conclusions which may be drawn from the results of the three experiments and presents suggestions for possible future research into the optimum compensator technique. Software support for the experiments described in Chapters 4 and 5 may be found in Appendices 1, 2 and 3.

CHAPTER 2

THE DECONVOLUTION PROBLEM

This chapter presents the deconvolution problem as encountered in physical systems, as well as several techniques which have been previously used to solve this problem. The difficulties inherent in each of these techniques will be discussed and a new approach will be introduced.

THE PHYSICAL DECONVOLUTION PROBLEM

Deconvolution is often encountered in time and frequency domain measurement techniques as the process of separating the effect of a non-ideal test signal from the detected output of a linear system to yield the system impulse response and transfer function. In some cases, where the system transfer function is known, deconvolution may be used to gain information about the exciting input. A pictorial representation of a typical measurement technique for determining a system impulse response or transfer function is given in figure 2-1.

Fig. 2-1. Representation of a linear system. Small letters represent time domain quantities and capital letters represent frequency domain quantities.

In this technique an input x(t) is impressed upon the system, resulting in an output y(t). The relation between these time functions and the system impulse response h(t) is given by

$$y(t) = x(t) * h(t) \tag{2-1}$$

where * denotes the convolution operation. This operation is defined by

$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau \tag{2-2}$$

In the frequency domain the above relation reduces to

$$Y(e^{j\omega}) = X(e^{j\omega}) \cdot H(e^{j\omega}) \tag{2-3}$$

where Y(e^jω), X(e^jω), and H(e^jω) are the Fourier transforms of y(t), x(t), and h(t), respectively. The Fourier transform itself is defined by

$$Y(e^{j\omega}) = \int_{-\infty}^{\infty} y(t)\,e^{-j\omega t}\,dt \tag{2-4}$$

The convolution process implies that x(t), the input function, and h(t), the system impulse response, are both known, and y(t) is being sought. Unfortunately, this is seldom the case. Occasionally, y(t) and h(t) are known and x(t) is being sought, as in the case of measuring instruments. More often x(t) and y(t) are given and the objective is to find h(t), as in the case of system or device characterization. The solution to either of these problems can be found in the operation which is the inverse of convolution, the so-called "deconvolution". Furthermore, both of these problems are mathematically equivalent, with only a change of variables necessary to transform one problem statement into the other.

Assuming h(t) is being sought, the simplest way of deriving it is implied by equation 2-3. A harmonic input, X(e^jω), is applied to the system and the complex output, Y(e^jω), is measured at various frequencies. The ratio of these two quantities, Y(e^jω)/X(e^jω), gives H(e^jω). The inverse Fourier transform

$$h(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} H(e^{j\omega})\,e^{j\omega t}\,d\omega \tag{2-5}$$

may then be applied to get h(t). This method suffers from several serious drawbacks. To properly utilize the inverse Fourier transform, the bandwidth over which the measurements are taken must be as large as possible. This makes the process quite time consuming and usually necessitates the use of several different signal generators in the test setup to achieve the required bandwidth. This constant changing of the test setup makes experimental errors hard to avoid. Further, at high frequencies erroneous data may be produced through mismatches in the test setup, connector and cable effects, and difficulties in measuring the relative phase of the input and output signals.

Another commonly used technique involves the use of an impulsive input for x(t). In the ideal case it can easily be shown that using a Dirac delta function u_0(t), characterized by

$$u_0(t) = 0, \qquad t \neq 0 \tag{2-6}$$

$$\lim_{\Delta\to 0}\int_{t-\Delta}^{t+\Delta} u_0(\tau)\,d\tau = 1 \tag{2-7}$$

in the Fourier transform equation 2-4 yields a frequency domain function of unity. This may be interpreted as meaning that this function contains all frequencies at unit strength. In this case the output of the system is actually h(t).

The main problem in using the time domain technique lies in the impossibility of producing a true Dirac delta function, which has infinite amplitude, zero transition duration, and zero duration. Fortunately, finite amplitude pulses with non-zero transition durations are still quite rich in harmonics, and are relatively easy to produce with short enough duration to approximate the delta function for most applications. Due to thermal and high frequency circuit effects, successive pulse occurrences tend to fluctuate in time, an effect known as jitter. This can be overcome by using signal averaging techniques.

The most appealing feature of the time domain technique is the speed with which data may be obtained, since there is no need for repetitive measurements over various frequencies. For these reasons the impulse response technique is more attractive than the harmonic input method in many cases. In cases where the available input signal is not a good approximation of the delta function for the system under study, problems can arise. The next section will define numerical deconvolution, the discrete form of deconvolution used on sampled data systems, and will investigate some of the problems which may arise from its use.

NUMERICAL DECONVOLUTION AND INSTABILITIES

Data for real systems is seldom found in the continuous form which is implied by equation 2-2. Instead, the functions involved are usually manipulated as waveforms which have been sampled over discrete intervals of time and then stored as numerical arrays of values. This gives rise to the operations of numerical deconvolution. In the time domain, the discretized counterpart to equation 2-2 is

$$y(k) = \sum_{n=-\infty}^{\infty} h(k-n)\,x(n) \tag{2-8}$$

This may also be stated as a matrix equation

$$\mathbf{y} = \mathbf{h}\,\mathbf{x} \tag{2-9}$$

where y and x are column vectors and h is the matrix

$$\mathbf{h} = \begin{bmatrix} h(k) & & & & \\ h(2k) & h(k) & & & \\ h(3k) & h(2k) & h(k) & & \\ \vdots & & & \ddots & \\ h(nk) & \cdots & h(3k) & h(2k) & h(k) \end{bmatrix}$$

[Ekstrom, 1969]. The frequency domain representation is of similar form to equation 2-3,

$$\mathbf{Y} = \mathbf{X} \cdot \mathbf{H} \tag{2-10}$$

except that Y, X, and H are all vectors and the multiplication is performed element by element. Thus Y(1) = X(1)·H(1), Y(2) = X(2)·H(2), etc. Deconvolution in the time domain involves multiplying both sides of equation 2-9 by the appropriate inverse matrix or vector. In the frequency domain an element by element division is used.

If the measurements involved were pure mathematical entities no problem would exist. However, the presence of noise in the system causes grave difficulties. As Hunt [1972] clearly indicates, the matrix h of equation 2-9 "is very nearly singular and ill conditioned." Indeed, he goes on to say that sampling the h function more closely causes the problem to increase, as h becomes "increasingly more ill-conditioned." Ekstrom [1973] further indicates that "small perturbations in [y(k)] give rise to large perturbations in [x(n)]"; see equation 2-8. The frequency domain form, equation 2-10, runs into the same sort of problems, since transforming to the frequency domain does not change the stability of the system. In addition, if the time domain window over which the samples are taken does not contain an integral number of cycles, the fast Fourier transform (FFT), which is often used to perform the transformation into the frequency domain, may generate false frequency components due to the process known as leakage [Bertram, 1970]. These difficulties are serious but not insoluble. The next section will survey several methods which have been used with some degree of success to perform the deconvolution in physical problems. A new method is also proposed which appears to be potentially useful in a wide variety of cases.
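The instability is easy to exhibit numerically. The following sketch (illustrative Python, not part of the thesis software; the signal shapes and noise level are arbitrary) forms Y = X·H as in equation 2-10, perturbs Y with a small amount of noise, and then recovers H by element-by-element division: the error is negligible where |X| is large and grows enormously where |X| is nearly zero, which is exactly the ill-conditioning described above.

```python
import numpy as np

N = 512
t = np.arange(N)
x = np.exp(-0.5 * ((t - 100) / 4.0) ** 2)   # smooth test pulse
h = np.exp(-t / 40.0) / 40.0                # smooth impulse response

X = np.fft.rfft(x)
H_true = np.fft.rfft(h)
y = np.fft.irfft(X * H_true, n=N)           # system output, equation 2-10

rng = np.random.default_rng(0)
y_noisy = y + 0.001 * rng.standard_normal(N)   # 0.1% additive noise

H_est = np.fft.rfft(y_noisy) / X            # element-by-element deconvolution

# Relative error is small at low frequencies (|X| large) and explodes at
# high frequencies (|X| nearly zero): the noise is amplified without bound.
rel_err = np.abs(H_est - H_true) / np.max(np.abs(H_true))
print("max relative error, lowest 10 bins :", rel_err[:10].max())
print("max relative error, highest 10 bins:", rel_err[-10:].max())
```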

SURVEY OF DECONVOLUTION METHODS

From the above discussion it is apparent that noise is the major cause of problems in physical deconvolution. Techniques which have been used to minimize the effect of noise in deconvolution include filtration, iterative predictor-corrector methods, and the use of optimal compensation.

The term filtration, as it is used here, will include any process which performs some arbitrary weighting of the frequency domain data. Techniques which fall under this classification include bandlimiting, sliding averages, and the use of window functions.

Perhaps because bandlimiting is the easiest process to perform, it has been used extensively on the deconvolution problem. This is often accomplished by the use of digital filtering techniques, but pure bandlimiting by simply truncating frequency components in the FFT of the signal is also possible. Since deconvolution represents an anti-integration operation, rapid discontinuities are enhanced by the operation. Noise fits this criterion and usually transforms as the upper frequency components of the signal. Bandlimiting eliminates these components and significantly reduces the noise problem. Examples of the use of this technique may be found in Papoulis [1975] and the work of Cohen-Sfetcu [1975], among others.

Another signal modification method which is a form of filtering is the method of sliding averages. This method seeks to smooth the time domain signal so that the FFT does not 'see' the noise. Gans [1972, p. 19] describes a scheme in which the averaged value over a number of adjacent points on either side of a discontinuity is used only if the magnitude of the discontinuity exceeds a pre-defined limit. It is important to note that here we are talking about time averaging rather than ensemble averaging, a procedure which averages the waveform of interest over many samples of the entire waveform and which reduces uncertainty caused by jitter.

Both of the above filtering schemes seek to minimize noise found in the time domain, but windowing is an attempt to reduce noise generated by the use of the FFT on signals with certain properties. Since the FFT assumes a periodic signal, errors will occur if the beginning and the end of the time domain waveform do not have exactly the same slope and amplitude. Failure to meet this condition causes the spectral components to 'leak' from their expected positions. The windowing technique applies some weighting function to the time domain signal to assure that it meets the above stated condition. These functions may take various forms, but in general they emphasize the middle of the time sample with respect to the two ends [Bertram, 1970].

Each of these filtering schemes exacts a price, however. Bandlimiting requires the choice of a cutoff frequency to eliminate the problem of noise; a reduction in signal frequency content below this point not only results in reduced resolution in the frequency domain [Mersereau, 1978] but may eliminate useful information as well. At this point it should be noted that, due to the properties of the FFT, the bandwidth of the transformed signal is

$$\Delta F = \frac{1}{2\,\Delta t}, \qquad \text{where} \qquad \Delta t = \frac{T}{N}$$

for an N-point sample over time T [Andrews, 1977, p. 78]. Similarly, signal averaging over time causes the loss of information about natural transients in the time domain signal. Finally, windowing suffers from the fact that the FFT returns the convolution of the window with the windowed function. Bertram [1970] notes that this may "accentuate the effect of the errors in the middle of the time sample." Andrews [1975, p. 42] emphasizes that "the resulting spectrum must be in error to the extent that the weighting function distorted the original waveform data."
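As a concrete check of this relation (added here for illustration; the 5 ns window is an assumed figure chosen to match the record length of the oscilloscope-head data plotted later), 512 points spanning 5 ns give Δt ≈ 9.77 ps and a bandwidth of 1/(2Δt) = 51.2 GHz, which is the frequency span of the spectra shown in figures 5-9 through 5-12.

```python
# Worked example of the FFT bandwidth relation above (illustrative only).
# For an N-point record spanning a total time T, the sample spacing is
# dt = T/N and the transformed signal extends to a bandwidth of 1/(2*dt).
def fft_bandwidth(n_points: int, total_time: float) -> float:
    dt = total_time / n_points
    return 1.0 / (2.0 * dt)

# 512 points over an assumed 5 ns window:
print(fft_bandwidth(512, 5e-9))   # -> 5.12e10 Hz, i.e. 51.2 GHz
```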

The inadequacy of filtering methods for deconvolution has led to extensive investigation into iterative techniques. In general these algorithms utilize some a priori information about the system of interest to make an initial guess for the waveform being sought. Subsequent iterations are used to modify this guess in the direction of the true waveform. An implementation of this technique when one of the known waveforms is of infinite duration and some finite portion is known has been proposed by Pearson [1973]. Papoulis [1975] addresses the situation where the signal of interest is bandlimited but only a portion of it is known. Data with a high signal to noise ratio has been deconvolved successfully using an algorithm by Goutte [1977], and a process by Mersereau [1978] may be utilized if the unknown signal may be modelled by an approximation which is bandlimited, time limited, and positive.

In general, iterative methods suffer from at least one of two major faults. In some cases the a priori information or the constraints which must be put on the solution restrict the utility of the method to a small class of problems. In others the convergence of the solution is strongly dependent upon the waveform used for system excitation, or convergence is so slow that finite word length errors in the computation produce a solution of limited viability. Thus, Papoulis [1975] mentions limiting the number of iterations to control noise effects, and Pearson [1973] speaks of "ad hoc methods for handling large amounts of noise."

The final technique which will be discussed here is the method of optimal compensation. In this technique a compensating system is postulated which possesses a transfer function of approximately H⁻¹(e^jω). Utilizing some a priori constraint or constraints upon this system, an error function is synthesized. This function is then minimized mathematically. If the constraint(s) are well posed, the resulting function specifies the optimal compensating system. Thus, in essence this technique converts the deconvolution problem into one of convolution, which is an integral, and therefore smoothing, process. Ekstrom [1969] presents an example of this process where the minimization is performed entirely in the time domain using matrix derivatives.

This technique is a poor one when performed in the time domain, since good resolution demands the utilization and subsequent inversion of a very large matrix. This causes problems with computer system storage and with the time needed to perform this process on even a moderately fast machine. Furthermore, if the large matrix is decomposed into smaller sub-matrices to aid in the inversion process, problems with singularities may still be encountered.

Optimum compensation retains many attractive features in spite of these drawbacks, however. A high degree of selective control over the final output function is available through the intelligent selection of constraint weighting, and yet no overt changes in the input data are performed. Thus, this technique avoids some of the pitfalls inherent in iterative and filtering techniques. Finally, if all functions of interest are transformed to the frequency domain before the error function is synthesized, the need for an N x N point matrix is obviated and an N point vector may be used to contain the necessary data.

This is the technique which is mathematically developed in the next chapter. Besides the advantages already noted, it offers the benefits of being quite straightforward conceptually, and being very easy to test for analytic functions. Further, the particular implementation studied presents only one constraint, and this constraint is easily fulfilled by most signals of interest.

CHAPTER 3

THE OPTIMAL COMPENSATOR IN THE FREQUENCY DOMAIN

In the search for the optimal compensator in the frequency domain, it would seem that any parametric statement of the problem must include some criterion of optimality, as well as some constraints upon the solution with a view towards realizability. Finally, a compensator-producing algorithm must be generated and optimized. In this chapter these requirements are fulfilled as the proposed compensator technique is mathematically developed. Also, a short discussion concerning the choice of a good initial guess for the constraint parameter is presented.

THE PROBLEM STATEMENT

The task at hand is to reduce the process of deconvolution to a process of convolution. Since convolution is an integral process, this should significantly reduce the effects of noise-produced random discontinuities in the system function. A diagram of the process described is given in figure 3-1.

Fig. 3-1. The optimum compensation process

Given a known system transfer function H(e^jω), which may have no inverse by virtue of its high noise content, we seek to synthesize some optimum compensating system, D(e^jω). Upon excitation by the output of the known system, Y(e^jω), this compensator must produce an output X̂(e^jω) which is in some sense a close approximation to the original system input, X(e^jω). In other words, the system D(e^jω) may be thought of as a deconvolver for H(e^jω). Of course, if H(e^jω) possesses an inverse H⁻¹(e^jω), then the optimum compensator D(e^jω) is just equal to H⁻¹(e^jω). This fact will be used later to help test the algorithm.

In order to optimize the compensator, a "goodness of fit" function must be described. Here we use the integral square error criterion. Thus, we define

$$E = \int_{0}^{BW} \left| X(e^{j\omega}) - \hat{X}(e^{j\omega}) \right|^{2} d\omega \tag{3-1}$$

where BW stands for the bandwidth of the system under study and E is the error function. From Parseval's theorem [Tranter, 1976] it is clear that E is proportional to the energy difference between X(e^jω) and X̂(e^jω).

To help insure realizability we constrain the compensator to be a finite energy function, where the energy Q is indicated by

$$Q = \int_{0}^{BW} \left| D(e^{j\omega}) \right|^{2} d\omega \tag{3-2}$$

and Q is required to be finite. It is also true for any real, finite input signal X(e^jω) that the operation X(e^jω)·D(e^jω) must yield finite results. We therefore define

$$Q = \int_{0}^{BW} \left| X(e^{j\omega})\,D(e^{j\omega}) \right|^{2} d\omega \tag{3-3}$$

Since this leads to no loss of generality, the finiteness of Q as defined in equation 3-3 will be used as our constraint condition. The problem may then be stated as follows. Defining the Lagrange multiplier equation

$$J = E + \lambda Q \tag{3-4}$$

the task is to minimize E, the error energy, under the λ-weighted stability condition on Q.

THE ALGORITHM

Expanding equation 3-4,

$$J = \int_{0}^{BW} \left| X(e^{j\omega}) - \hat{X}(e^{j\omega}) \right|^{2} d\omega + \lambda \int_{0}^{BW} \left| X(e^{j\omega})\,D(e^{j\omega}) \right|^{2} d\omega \tag{3-5}$$

From figure 3-1 it is seen that

$$\hat{X}(e^{j\omega}) = X(e^{j\omega})\,H(e^{j\omega})\,D(e^{j\omega}) \tag{3-6}$$

Thus

$$J = \int_{0}^{BW} \left| X(e^{j\omega})\left[1 - H(e^{j\omega})\,D(e^{j\omega})\right] \right|^{2} d\omega + \lambda \int_{0}^{BW} \left| X(e^{j\omega})\,D(e^{j\omega}) \right|^{2} d\omega \tag{3-7}$$

Now, from complex variable theory we know that

$$|Z|^{2} = Z \cdot Z^{*} \tag{3-8}$$

where * denotes the complex conjugate. Thus, we may factor

$$J = \int_{0}^{BW} \left| X(e^{j\omega}) \right|^{2} \left\{ \left[1 - H D\right]\left[1 - H D\right]^{*} + \lambda\, D\, D^{*} \right\} d\omega \tag{3-9}$$

But since

$$H(e^{j\omega}) = H_{R}(\omega) + j H_{I}(\omega) \tag{3-10}$$

and

$$D(e^{j\omega}) = D_{R}(\omega) + j D_{I}(\omega) \tag{3-11}$$

then we may expand to obtain

$$J = \int_{0}^{BW} \left| X \right|^{2} \left\{ \left[(1 - H_{R}D_{R} + H_{I}D_{I}) - j(H_{R}D_{I} + H_{I}D_{R})\right]\left[(1 - H_{R}D_{R} + H_{I}D_{I}) + j(H_{R}D_{I} + H_{I}D_{R})\right] + \lambda\left(D_{R}^{2} + D_{I}^{2}\right) \right\} d\omega \tag{3-12}$$

Under further expansion, and after cancelling like terms, this yields

$$J = \int_{0}^{BW} \left| X(e^{j\omega}) \right|^{2} D_{R}^{2}(\omega)\left[H_{R}^{2}(\omega) + H_{I}^{2}(\omega) + \lambda\right] d\omega + \int_{0}^{BW} \left| X(e^{j\omega}) \right|^{2} \left\{ D_{I}^{2}(\omega)\left[H_{R}^{2}(\omega) + H_{I}^{2}(\omega) + \lambda\right] + 2 H_{I}(\omega) D_{I}(\omega) - 2 H_{R}(\omega) D_{R}(\omega) + 1 \right\} d\omega \tag{3-13}$$

From the calculus of variations, to minimize J over all D_R and all D_I, set

$$\frac{\partial J}{\partial D_{R}} = \left| X(e^{j\omega}) \right|^{2} \left\{ 2 D_{R}(\omega)\left[H_{R}^{2}(\omega) + H_{I}^{2}(\omega) + \lambda\right] - 2 H_{R}(\omega) \right\} = 0 \tag{3-14}$$

and

$$\frac{\partial J}{\partial D_{I}} = \left| X(e^{j\omega}) \right|^{2} \left\{ 2 D_{I}(\omega)\left[H_{R}^{2}(\omega) + H_{I}^{2}(\omega) + \lambda\right] + 2 H_{I}(\omega) \right\} = 0 \tag{3-15}$$

Since the right hand sides of both equations are equal to zero, we may multiply equation 3-14 by one-half and equation 3-15 by one-half j, and sum both equations to yield

$$\left| X(e^{j\omega}) \right|^{2} \left\{ \left[D_{R}(\omega) + j D_{I}(\omega)\right]\left[H_{R}^{2}(\omega) + H_{I}^{2}(\omega) + \lambda\right] - \left[H_{R}(\omega) - j H_{I}(\omega)\right] \right\} = 0 \tag{3-16}$$

But, from complex variable theory,

$$H^{*}(e^{j\omega}) = H_{R}(\omega) - j H_{I}(\omega) \tag{3-17}$$

and

$$\left| H(e^{j\omega}) \right|^{2} = H_{R}^{2}(\omega) + H_{I}^{2}(\omega) \tag{3-18}$$

so, utilizing the equality stated in equation 3-11, equation 3-16 reduces to

$$\left| X(e^{j\omega}) \right|^{2} \left\{ D(e^{j\omega})\left[\left| H(e^{j\omega}) \right|^{2} + \lambda\right] - H^{*}(e^{j\omega}) \right\} = 0 \tag{3-19}$$

Since this must hold for any X(e^jω),

$$D(e^{j\omega})\left[\left| H(e^{j\omega}) \right|^{2} + \lambda\right] - H^{*}(e^{j\omega}) = 0 \tag{3-20}$$

or

$$D(e^{j\omega}) = \frac{H^{*}(e^{j\omega})}{\left| H(e^{j\omega}) \right|^{2} + \lambda} \tag{3-21}$$

This is the defining function for D(e^jω).
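Read numerically, equation 3-21 is a bin-by-bin operation on the sampled spectra. The sketch below is an illustrative Python rendering, not the thesis's SPS BASIC implementation, and the function and array names are hypothetical: it forms the compensator H*/(|H|² + λ) and applies it to the measured output spectrum to obtain the predicted input.

```python
import numpy as np

def optimum_compensator(H: np.ndarray, lam: float) -> np.ndarray:
    """Equation 3-21: D = H* / (|H|^2 + lambda), evaluated bin by bin."""
    return np.conj(H) / (np.abs(H) ** 2 + lam)

def predicted_input(Y: np.ndarray, H: np.ndarray, lam: float) -> np.ndarray:
    """X_hat = Y * D, the frequency domain equivalent of exciting the
    compensating system of figure 3-1 with the measured output."""
    return Y * optimum_compensator(H, lam)

# With lam = 0 the compensator collapses to the classical inverse filter
# H^{-1} (equation 3-23); with lam > 0 the division is regularized wherever
# |H| is small, which is what suppresses the noise amplification of
# ordinary frequency domain deconvolution.
```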

Some motivation for a good choice of λ may be gained from equation 3-4. Under the minimization procedures used above, J becomes small, and in the limit

$$\lim_{J \to 0} \lambda = \frac{E}{Q} \tag{3-22}$$

Thus λ should approximately represent the ratio of the error energy to the compensator response energy. It should be noted that, due to the physical nature of the quantities involved, λ must always remain positive. Further, if λ = 0 then

$$D(e^{j\omega}) = \frac{H^{*}(e^{j\omega})}{\left| H(e^{j\omega}) \right|^{2}} = \frac{1}{H(e^{j\omega})} = H^{-1}(e^{j\omega}) \tag{3-23}$$

As previously stated, this is the result expected in the case where no noise is present, e.g. analytic functions.

CHAPTER 4

THE IDEALIZED TRANSFER FUNCTION EXPERIMENT

In the previous chapter the optimum compensator was derived and mathematically justified. However, the mathematical model includes many inherent assumptions concerning the accuracy of the operations performed. Certain real-world phenomena, such as pulse jitter and finite word length errors in signal quantification or frequency domain transformation, may so severely restrict accuracy that the method becomes unusable. This chapter describes an experiment which was designed to test the algorithm under some of these less than ideal conditions. The program which was designed to implement the experiment allowed the user to create two separate waveforms, one of which would be the transfer function waveform, and the other would be the input, or forcing function, for the system. Various amounts of noise could be added to both waveforms so that the algorithm did not see completely smooth input functions. Given the transfer function waveform and the output function (e.g. the convolution of the two waveforms), and with the proper choice of a weighting constraint λ, the compensator routine should be able to reconstruct the forcing function.

THE SOFTWARE

The routine to accomplish this is listed in Appendix 1 as NSTDCN. Although the programming is rather straightforward, a few points deserve clarification. A synthesis routine which begins on line 390 is used to generate both waveforms. Basically both are constrained to be of the form sin² and magnitude 1, but the user has control over the beginning and ending points of each waveform, and thus controls its period. The sin² form was chosen for its symmetry and because of the fact that it is zero based. These factors help to avoid possible errors in the frequency transformation routine which could arise if the waveform of interest exhibited a jump discontinuity between its beginning and end points. The parameter NM is equivalent to the algebraic ratio between the maximum noise amplitude and the maximum signal amplitude. Parameters R1 and R2 are seed parameters which insure reproducibility in the random number noise generation scheme. Finally, some control over the portion of the waveform which is evaluated to produce an error parameter may be exerted by the operator using parameter NC.

It should be noted that from line 890 to line 1000 the program basically follows the procedure devised in Chapter 3. A transfer function is produced and stored in array T in line 890, and this function is used to produce the compensator in lines 930 and 940. A complex multiplication in the frequency domain, which corresponds to convolution in the time domain, is used to produce a predicted input function in lines 970 and 980.

Finally, the so-called "error energy" parameter is generated by taking the root mean square value of the difference between the predicted input function and the true input function, and multiplying the result by the number of points the operator wishes to observe, in lines 1010-1030. Once an optimum value for λ is found, the operator may jump to line 1140 to investigate the differences between his predicted input function and the true input function.
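Outside the original SPS BASIC, the same experiment can be restated compactly. The sketch below is an illustrative Python analogue of NSTDCN, not the listing itself: the sin² limits, noise level, and λ grid are arbitrary choices, and the error energy is formed, as described above, from the RMS difference between the predicted and true inputs scaled by the number of observed points.

```python
import numpy as np

rng = np.random.default_rng(1)          # fixed seed, like R1/R2 in NSTDCN
N = 512
NM = 0.1                                # max noise / max signal, as in Chapter 4

def sin2_pulse(n_total: int, start: int, stop: int) -> np.ndarray:
    """Zero-based, unit-magnitude sin^2 pulse between the chosen limits."""
    w = np.zeros(n_total)
    k = np.arange(stop - start)
    w[start:stop] = np.sin(np.pi * k / (stop - start)) ** 2
    return w

def add_noise(w: np.ndarray, nm: float) -> np.ndarray:
    return w + nm * np.max(np.abs(w)) * rng.uniform(-1.0, 1.0, w.size)

x = sin2_pulse(N, 50, 150)              # input (forcing) waveform
h = sin2_pulse(N, 20, 80)               # "transfer function" waveform
X, H = np.fft.rfft(x), np.fft.rfft(h)
y = np.fft.irfft(X * H, n=N)            # system output (convolution)

# The algorithm only sees noisy versions of h and y.
Hn = np.fft.rfft(add_noise(h, NM))
Yn = np.fft.rfft(add_noise(y, NM))

for lam in [0.0, 1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    D = np.conj(Hn) / (np.abs(Hn) ** 2 + lam)     # equation 3-21
    x_hat = np.fft.irfft(Yn * D, n=N)             # predicted input
    err = np.sqrt(np.mean((x_hat - x) ** 2)) * N  # RMS difference x N points
    print(f"lambda = {lam:8.3g}   error energy = {err:10.3f}")
```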

THE DATA

To test the conjecture that a noiseless waveform will have a progressively smaller "error energy" as λ approaches 0, the noise magnitude parameter was set to 0 and the error energy checked for various values of λ. The results are graphed over three decades of variation in λ in figure 4-1. Note that a definite decreasing trend is observable as λ approaches 0.

The experiment was also run for the cases where NM = 0.1 and where NM = 1.0. The true input function, the impulse response, and the input function predicted by the optimal compensator for the case NM = 0.1 (e.g. max noise amplitude / max signal amplitude = 0.1) are given in figures 4-2, 4-3, and 4-4 respectively.

It can be seen that there is a very high degree of resemblance between the predicted and the real input functions.

The waveforms for the case where NM = 1.0 are given in figures 4-5, 4-6, 4-7, and 4-8. Figures 4-5 and 4-6 are the true input function and the impulse response, respectively. Figure 4-7 shows the output of the system for this case. This figure was plotted in an effort to determine why figure 4-8, the predicted input function, shows significantly less noise than the true input function. A possible explanation may be that the very high frequency components which the noise represents are truncated through the bandwidth limitations of the data representation. That is, since convolution is an integral process and therefore inherently smoothing, and since the output waveform (which again is much smoother than the input waveform or the impulse function, since it represents the convolution of the two) is convolved with the compensator function to produce the predicted input waveform, the compensator function must be very noisy to yield noise in the predicted function. However, the frequency of the noise in any of these waveforms is limited by the sampling rate, which translates to array size, so the noise in the compensator function is probably limited by this. It is possible that the same effect helps to account for the relatively smooth output function.

This experiment indicates that the compensator-producing algorithm is behaving in general as predicted. The error energy does indeed decrease as λ approaches 0, and the second portion of the experiment indicates that a workable prediction of the input function can be produced by the compensator, even in the presence of considerable noise. In the next chapter a physical system will be treated using the optimal compensator technique, to determine what physical limitations might apply to the technique.

Fig. 4-1. System error energy as a function of λ

Fig. 4-2. Input reference waveform for NM = 0.1

Fig. 4-3. System impulse response waveform for NM = 0.1

Fig. 4-4. Predicted reference waveform for NM = 0.1

Fig. 4-5. Input reference waveform for NM = 1.0

Fig. 4-6. System impulse response waveform for NM = 1.0

Fig. 4-7. System reference response waveform for NM = 1.0

Fig. 4-8. Predicted reference waveform for NM = 1.0

CHAPTER 5

THE OSCILLOSCOPE HEAD EXPERIMENT

Although the optimum compensator performed as expected for the purely hypothetical case of a sin² response in the previous chapter, it was felt that some testing of the algorithm for real-world cases would be desirable. In order to more easily test the response of the algorithm in a real environment, a fairly ideal test case was chosen. This chapter describes the experiment which resulted. Included also is a description of the software which was used to perform the processing and some observations concerning experimental results.

THE EXPERIMENT

In an effort to restrict the complexity of the waveforms which would have to be digitized and processed by the software, a search was conducted for a system with a fairly flat frequency response upon which to test the routine. An excellent choice immediately presented itself in the form of a Tektronix S-1 sampling head which was being used in the 7S12 sampling unit of a Tektronix 7000-series digital processing oscilloscope, which digitized the input waveform and stored it in mainframe memory with a resolution of 512 points over the total time window.

Although the 7S12/S-1 combination seemed quite favorable, since its output information was already in a form optimum for computer acquisition, the choice of this system did pose some problems. A major consideration was the fact that the S-1 head has a specified rise time of 350 picoseconds or less. In order to observe the impulse response of the head, a signal generator had to be found which could generate a pulse with a rise time of 30 picoseconds or less. Fortunately, the same system included an S-52 pulse generator head, which has a specified rise time of 25 picoseconds, so this was used to approximate the impulse response of the system. The same system provided a considerably slower pulse generator head, the S-56, with a specified rise time of 1000 picoseconds, so this was used to generate the input waveform to the S-1 "system". Finally, a very fast sampling head, the S-6, which is accurate down to a 30 picosecond rise time, was used to generate accurate characterizations of the reference and impulse waveforms.

THE SOFTWARE

The information from these devices was fed directly into a PDP 11/34 minicomputer and was operated upon by a program written using Tektronix SPS BASIC. Each point in the final input waveforms was averaged over 20 successive samples to reduce the effects of jitter.

Memory limitations in the system, in conjunction with the relatively large amount of memory required to store and manipulate the signal arrays, necessitated a modular programming approach. Two routines were used to manipulate the data. These routines, which may be found in Appendices 2 and 3, are entitled WNDW1 and WNDW2.

WNDW1 is basically a bookkeeping routine. The small section which starts at line 80 opens each of the data files in turn and stores the information found there in an array called WT. The format of the digital processor is such that the first 3 values stored in the mainframe memory are the number of points, the horizontal data interval, and the vertical scale factor, respectively. After each waveform is acquired the program branches to a graphic subroutine which starts on line 360 and gives a graph of the waveform. This provides a check of whether the waveform data has been correctly read.

The operation performed at line 160 deserves special attention. This technique, which was devised by William L. Gans and N. S. Nahman of the National Bureau of Standards especially to facilitate the generation of frequency domain transfer functions from time domain waveforms via the fast Fourier transform, may best be described as turning the waveform off in the same way that it was turned on (Gans, 1972). The technique basically consists of concatenating an inverted version of the waveform to the original waveform. This makes the signal appear periodic to the fast Fourier transform, thus satisfying the requirement that the amplitude and slope of the initial and final portions of the transform input waveform be the same. Although the technique doubles the number of points in the time domain waveform, thereby effectively doubling the frequency resolution, it also makes the waveform appear to be an odd function to the Fourier transform. Consequently, all even harmonics of the frequency domain representation go to zero. To facilitate storage, these zero components are discarded and the remaining information is compressed in line 230 and line 240. The frequency domain magnitude function is generated in line 260 and graphed in line 290. Finally, the real and imaginary components of the frequency domain waveform are saved in two arrays in the block between line 300 and line 330, and the next program, WNDW2, is called in line 350. The output waveforms from the WNDW1 routine are shown in figures 5-1 through 5-12.

The optimum compensator algorithm is actually implemented in the routine called WNDW2. The module starting at line 30 graphs the input waveforms to insure that no data was lost from WNDW1. An initial guess for the constraint parameter, as well as the amount to increment it for the next guess, is input in lines 110 through 140. The magnitude portion of the original transfer function is graphed in line 200. At this point the user is expected to determine which portion of the transfer function represents useful information and which portion is inaccurate due to noise or computational error. The inaccurate portion is then set to zero so that the optimization routine does not "see" it.

A graph of the transfer function of the S-1 head is shown in figure 5-13. The data is fairly smooth up to the 10 gigahertz point but becomes somewhat uncertain after that point. For this reason it was decided to set the array to zero after the 10 gigahertz point. This is equivalent to bandlimiting the transfer function, but this will not reduce the validity of the experiment, since the transfer function and the output are the known quantities and it is the input function which is sought.
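The Gans-Nahman "turn it off the way it was turned on" step is easy to sketch numerically. The fragment below is an illustrative Python rendering, not the WNDW1 listing itself; the construction of the appended half (first sample plus last sample minus the original record) is one consistent reading of the operation described above, and the check at the end verifies the property that makes the compression of lines 230-240 possible: every even harmonic except the DC term vanishes.

```python
import numpy as np

def gans_nahman_window(x: np.ndarray) -> np.ndarray:
    """Append a copy of the record that 'turns off the way it was turned on':
    the second half is the inverted waveform, offset so the doubled record
    ends at the level it started from."""
    return np.concatenate([x, x[0] + x[-1] - x])

# A step-like record: it starts at 0 and ends at a different level, so a
# direct FFT of it would suffer from leakage.
n = 512
x = 1.0 - np.exp(-np.arange(n) / 60.0)

xw = gans_nahman_window(x)            # 1024-point windowed record
Xw = np.fft.rfft(xw)

# Every even harmonic except DC is (numerically) zero, so only the
# odd-indexed components need to be kept.
even = np.abs(Xw[2::2])
odd = np.abs(Xw[1::2])
print("largest even harmonic (should be ~0):", even.max())
print("largest odd  harmonic              :", odd.max())
```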

The magnitude transfer function waveform of the optimizer is generated and stored in WT at line 240. This function is convolved with the real and imaginary parts of the system output function, and the magnitude of the resulting optimizer output function is stored in WT in line 280. Line 310 gives some measure of the error energy by taking the root mean square value of the difference between the predicted and the input functions over the bandwidth of interest, and the routine which starts in line 340 saves both the optimum constraint value and the so-called "error energy" for that constraint value. Finally, the portion of the program which begins at line 690 is used after an optimum compensator is found. It expands the frequency compressed transfer functions of both the true and the predicted input functions, and utilizes an inverse Fourier transform to bring them both back to the time domain.

The actual frequency domain representation of the optimized compensator is presented in figure 5-14. The bandlimiting above approximately 10 gigahertz is quite evident in this waveform, which was obtained for λ = 7.8 × 10⁻³. At this value the so-called error energy was 412.688. A plot of the real time domain input waveform, as determined by the inverse Fourier transform, appears in figure 5-15, and the predicted input waveform is shown in figure 5-16. Finally, the two are plotted together on the same scale for purposes of comparison in figure 5-17.

Although the two waveforms of figure 5-17 are quite similar, there appears to be a slight difference in phase. Since the phase information was not investigated per se during the course of the experiment, this is not a particularly surprising result. If a reduction in this error energy is necessary, the error energy function could easily be modified to include some indication of phase error. Figure 5-17 also shows both functions riding on 0 d.c. offset voltage, although the original input function shown in figure 5-3 rides on a negative d.c. offset.

This phenomenon is a result of the Nahman-Gans algorithm. Since point zero is an even point, it is assumed to be zero and discarded. Then, during the unpacking part of the time domain transformation, point zero is set to 0 for both waveforms, yielding no d.c. offset.

Although these results were quite promising, it was felt that the routine needed a more stringent real-world test in a less ideal case. The next chapter discusses the use of the optimum compensator method to find the transfer function of an optical fiber.

Fig. 5-1. System test impulse

Fig. 5-2. Test impulse response

Fig. 5-3. System input reference waveform

Fig. 5-4. System response to input reference waveform

Fig. 5-5. Nahman-Gans windowed test impulse

Fig. 5-6. Nahman-Gans windowed test impulse response

Fig. 5-7. Nahman-Gans windowed reference waveform

Fig. 5-8. Nahman-Gans windowed system reference response

Fig. 5-9. Frequency domain magnitude of windowed test impulse

Fig. 5-10. Frequency domain magnitude of windowed impulse response

Fig. 5-11. Frequency domain magnitude of windowed reference waveform

Fig. 5-12. Frequency domain magnitude of windowed system reference response

Fig. 5-13. Calculated system transfer function magnitude

Fig. 5-14. Optimum compensator transfer function magnitude

Fig. 5-15. Reference waveform regenerated through inverse Fourier transform

Fig. 5-16. Predicted reference waveform

Fig. 5-17. Regenerated reference waveform plotted on same axis as predicted reference waveform

CHAPTER 6

THE OPTICAL FIBER EXPERIMENT

The true test of any compensation scheme must be its ability to adequately perform in a non-ideal (e.g. real) environment. One which presented itself quite readily for the purposes of this thesis was to be found in the Kennedy Space Center Laser Lab. Work was being performed there under a government grant to determine the pulse dispersion characteristics of an optical fiber. This involved the extensive use of time domain measurements and techniques and thus was ideally suited to test the proposed algorithm.

THE EXPERIMENT

The experimental setup consisted of an SG 2002 semiconductor injection laser diode which fed pulses of approximately 100 picosecond duration and 2 watt optical power into a 4 meter mode scrambler filter. This mode scrambler consisted of a piece of graded-index fiber directly coupled between two pieces of step-index fiber. It was used to make the resulting optical pulse more independent of launch conditions. The mode scrambler fed the test fiber, and from the test fiber the optical pulse went to a TIED 56 semiconductor avalanche photodiode (APD). The APD output signal was acquired using a Tektronix 7000-series digital processing oscilloscope of the same type described in Chapter 5, with an S-6 sampling head in a 7S12 sampling unit. As mentioned in Chapter 5, the oscilloscope digitized the information into 512 points covering the time window. The digitized signal was then fed into an HP 9825 computer and, after averaging 50 times, was processed using the compensator algorithm.

Since the compensator routine requires at least two waveforms, and since the transfer function was the desired output, both input and output signals were recorded. The APD was coupled to the mode scrambler directly to ascertain the input, or reference, signal, and was coupled through the test fiber for the response waveform measurement.

THE DATA

The fiber chosen for the experiment was a 4 kilometer length of graded-index fiber. Figure 6-1 shows the pulse which was launched into the fiber by the mode scrambler, and figure 6-2 depicts the fiber response waveform, in which considerable pulse dispersion is evident.

The quantity of interest, decibel fiber attenuation, is defined by

$$\text{ATTEN} = -10 \log_{10} \left| H(e^{j\omega}) \right|$$

and is shown over three decades of frequency for several choices of λ in figure 6-3. Although the plot extends well into the 2 gigahertz region, inherent limitations in the equipment used severely restricted the accuracy of the data in this region. The criterion for optimum choice of λ, the error weighting parameter, thus became that choice of λ which produced a maximum attenuation in the response signal above 2 gigahertz while leaving the lower frequency signal components virtually unchanged from their λ = 0 values. It may be recalled that the transfer function generated for λ = 0 corresponds to that which would be found using classical deconvolution techniques.
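That selection rule can be automated in a few lines. The sketch below is illustrative Python rather than the HP 9825 program: the spectra X and Y, their frequency axis, and the 2 GHz split come from the description above, while everything else (names, the drift measure, the λ grid in the usage note) is an assumption. For each trial λ it forms the estimated transfer function, with the measured reference spectrum playing the role of the known system in equation 3-21, converts it to decibel attenuation, and compares the low-frequency attenuation with its λ = 0 value.

```python
import numpy as np

def fiber_attenuation_db(X: np.ndarray, Y: np.ndarray, lam: float) -> np.ndarray:
    """Estimated transfer function via the optimum compensator, expressed
    as decibel attenuation."""
    H_est = Y * np.conj(X) / (np.abs(X) ** 2 + lam)
    return -10.0 * np.log10(np.abs(H_est))

def scan_lambda(X, Y, freqs_hz, lambdas, f_split=2e9):
    """For each trial lambda, report the low-frequency drift relative to
    lambda = 0 and the mean attenuation above f_split."""
    low, high = freqs_hz < f_split, freqs_hz >= f_split
    base = fiber_attenuation_db(X, Y, 0.0)
    for lam in lambdas:
        att = fiber_attenuation_db(X, Y, lam)
        drift = np.max(np.abs(att[low] - base[low]))
        print(f"lambda = {lam:7.3g}  low-band drift = {drift:6.2f} dB  "
              f"mean attenuation above 2 GHz = {att[high].mean():6.1f} dB")

# Usage (hypothetical arrays): X and Y are the reference and response spectra
# and freqs_hz their frequency axis; an order-of-magnitude scan such as
# scan_lambda(X, Y, f, [1, 10, 100]) mirrors the coarse search described here.
```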

A lack of time prevented the determination of λ any closer than to within the correct order of magnitude, but even at this coarse resolution the results were impressive. Figure 6-4 shows the fiber impulse response for λ = 0 as reconstructed by the computer through the use of the inverse Fourier transform. This is the classical impulse response found by normal methods. It can be seen that the response is almost completely obscured by noise.

The step response of figure 6-5, generated by having the computer integrate figure 6-4, is considerably more recognizable, and indicates that an impulse response is indeed hidden within all that noise.

The corresponding waveforms for λ = 10 are given in figure 6-6 and figure 6-7. For this choice of λ the impulse response shown in figure 6-6 is recognizable as such, although resolution of fine detail is still lacking. At this point the step response has settled down to essentially its final value. This may be attributed to the fact that it is produced by integration, which is essentially a smoothing process.

The impulse response for the final choice of λ, λ = 100, is given in figure 6-8. The response is now sufficiently smooth to allow it to be used in dispersion measurements. The corresponding step response is shown in figure 6-9. Predictably, it is not significantly different from the response given in figure 6-7. Finally, figure 6-10 gives the attenuation and phase response of the test fiber versus frequency.

Although no single experiment can validate or invalidate a technique, these results show that the optimal compensator technique is viable in a real-world situation. As this experiment indicates, an essential factor in the use of this technique is the intelligent choice of a constraint parameter characterization.

Although the constraint parameter λ was chosen only to within an order of magnitude, the results were still quite favorable. The next and final chapter will expand upon these conclusions and present observations concerning the potentiality of this technique for further research.

l."tC 81

1.-zc a1

I .l~t: n I

O.IIC Ill s.ec .aa

't.Ct D21

2.0C '"' o.oc aa ~ ~J SECONDS cU

Fig. 6-1. Reference waveform for testing 4 km fiber Pt• ?.t!2B E L E C 1· R 0 - 0 r..J T I cs LRB VOLTS r.•:tai(DY 5Pl1Ct: CEUT~H u-31-1!) •t.&.a:-UI

3.t;C•DI I

J.CC•SU

2.IC-nl

2.r.c-n1 r ,$!:-r.a 1

I • r.t:•D I

s.ct'-Dii!

II.Ct' ISil -I In ~ ro~ r.: g: rJ ~ ~ ~ ~ ~ I I I SCCDNDS I I ~l ltJ rJ ~ kl ,.; ..; I~ ~ ~ I!i ~ ai :r ui - - - N

Fig. 6-2, Response waveform of 4 km fiber ...... I : . ' : r • dD t i I i I . ! ~i !( . !·/·,,; . . .. I I . I . ! ~ '!:: r I ! i 1 l I I i I I I ; I , I j 1\fill. A M ;oo j I I I • i'll I 11'1 i I I I l ! ! !).1.\t~:J~ i I I . I ...... ~ I I I' j ; I ' I I t I 1 Iii\ ... I • ,,~1 (II·} i::~ I I : I ... .~. I j: I ~· Ij~ j'f Iilii j .jl • j:: ~'\! M I I I I i : i I I ; !~ I l ; I J A 10 .. , . I t ,.t I ~~~ 1~1i'' .~!H h·1 ...... I I i l .1 : 1 i :) ~iij ,, h.j}~l Jn .~~ ~ :, ,' 1 I I I I I 1 I 'I I I i : ' . Iilli ~ Jjl· .'i'f il I , ...... ·; .. I I i I .. ,. . I , • f ' Ill" l;t)iJ I . ! ,...... I .tj·l~ J II q, ¥! l J1 .1il' A • 0 .~· ~ ..... ; I : I I' ,I I . ~~~~~lr,1 b!l f"~ i I I : 1[·1 • ! .. , t rlj ·1r· ,.. I I 1 ~ I I I 1 1\ . ; I ; \ . I j I , I i ' . I • . I l . : I . I . I I • I ~ !\ I I : i I I ~ r • I I I I I ; I I l r I i I . I ~ ~~ I I I . ; I .: ...... ; ...... ·- ·- ... 1. .!.A •· . . . "' . . . FREQUENCY' in ~Ulz

Fig. 6-3. 4 km fiber attenuation for various values of A 66

•r- V) ::l -o -- 0. ,_ ·~ E ~· •r- .,_ .. . tl S­ T.: QJ '-· ..0 -.I... ___..,.... ___ _ '-.A. If I

~~ i ---1------4---...;---~ ---. -~ - -~ ---~- ~ ;:- ::l"il :~ r:. ~ ~ ~ ~ c ~ r.J Q ...Q •t J 't I I ~ ~ ~ ~ u ~ • a.:• ~ ~v ~ ~ Ll .~ ' ~ ~ ~ !'\1 ;II "' e. ,.: : 67

-;1::-~·l 0 II ..-<

1.2-:.,·: 0') .,...c: Vl ::s ~1 "'0 QJ ..L ~ n:::l E 0 .. r ;:­ u ~I~·~ ,. T Ul nj

QJ V) .- c c 0 .__ ,.: -. ,_ 0. Vl .... ~: . '-'

-..:..::.!·~ r::~ U') I \0 --+----f..- [._ ,., -+---.J ------;-----· ·-. a...... N .v =~ r.: u ~~ r.: u 1 • wI ' :~ '.i ~ '~"' :: r.a 68

0 ,.- f'.l-:::J·~ II r<

0) s:: •r- tl') ::s "'0 li1 Q) +' rr: :l 0. _j E 0 1:'1 0 if:~ '~ U) v~ n:s Q) Ul c • 0 ll. a: 0...... Ul n~...... c; QJ \..• s.. 1 w QJ LJ~ Vl ~;~ r-- ,... :l a. J- ~ E •r- '-':';:'8 ~ F.!-;:::!"9 S- w QJ _j ..a •r- w t>.-:O.l"h «+- E ~ L-~.n -=::t

~ I U) r; ,.. ~ M rr. M ~~ ~ F.; N ~ u :".I r::: u a' I I I i I I I I u \..1 u \ol t:;') u • '-' ('\ ;• ~ ~ '· ~ r: ~ .,_ :i '"' " N N N c L1- 69

0 r-- .n·t--...·•·:-- II .-< c~ L:.~ •r- (/) ::s "'0 (JJ tE-n .fJ ~ ::s u. c_ E 0 lr·z-=•>l u Cl (/) ·~· ~ "". ":"":: rd 1 • :T.

' ~ . ::::;): QJ L~-:;n (.It c 0 ~'l c_ ~.,. V) .1 a

...... I lO

rn •r- ,.----; ----.;----i u... .. ~ :~ '·· ~; t ·.- :;~ ... :.0 ..:: _... , ~ :.· \o: \.: \.:• ,, ~ :..~ -· = :... ;. 70

0 0 ,.- .,.,_ . .,.,. \4 .... -

~-:~·1 •r-

••·-

Cl) nl

Q.) (/1 t:: 0 ,- a_ -, :1:, (/) w Q) ._.-1.:,.: 0·- S- (]) 1:.: (/) -;~ .-­ i~ ~a. >- 0 E ,- ~ ~ .. = ~ ~-

1.1 -.1

J----f --\---~------f--4 -·---f :o. ~ ... r'i ~ !"~ t: ~ ~ !.J !..i ::t ~ u 'i I ·; r I 1.' 'U t.J !.1 ' ' t.!'• ~ ,...... ~: ~ H c "X ,._ :"'. !':. ... =" " 71

0 0 r-

•r- (1) ::3

·~ "'0 .l.l Q) ~ ::s 0.. E 0 :'I'; u ·-I' :---J -J "":;z 't ...... U1 \..' ~ Q: QJ V1 s::: 0 ~ % 0.. - \&! Cl) ;~ ---...... : QJ . \..• S- •t..: -~ o. Q) ~ V1 S­ QJ ..c •r- lf.- E '...... ~

m I \0

---4 ---1---+- ----i N N N ~ :~ u i'i ~ r;; 'f I '~I I I "'I :.r ~ '-1 u t.t u ~ X lC ~ .:r' -· ::l - t& . N ~---·- uD DEG I I ~ ~s; S'!l'l PILil!i l

.. •m 520

.. JS' li6i! I ~ 3D ;;gs I ! I ~ i!S' 33il I .. ;:a 2E 1t ' I i ' ' I . I I I I ~ I • IS IDO I U+U' I I' ! ATT CN. -._~ ~ ~ Ill 132 - vv v ... s 66 1--'~ ~ PIIASE ._... :.- _.-J ---l----,.-- - r I :~":" f: L_L_f._ ~ ~ jg FREQUENCY in MHz

fig 6-10. 4 km fiber attenuation and phase functions using optimum value of A (= 100) CHAPTER 7

CONCLUSION

The optimum compensator technique appears to be quite viable in a large variety of situations. Probably the factor which most significantly enhances the flexibility of the method is the high degree of operator control concerning the definition of an optimum value for λ. However, it must be highly stressed that, as in any other optimization technique, the success or failure of the method rests upon the intelligent characterization of the qualities necessary to define a good λ. No general rule can be stated for this, since it will vary with the problem, but it may be noted that when the capability of graphing the output of the compensator is available, this probably provides the most information regarding the optimum choice of λ. A case in point may be found in figure 6-3, where the graphic representation of the attenuation function is used to determine the optimal choice for λ.

It should be noted that the final compensator is not a function of the input to the system. This means that once the optimum compensator is determined for a system, it should remain fixed for all types of input signals. Consequently, this compensator could conceivably be implemented on a high speed signal processing system to provide real-time operation. Further investigation into this possibility seems warranted.

APPENDIX 1

NSTDCN


5 REM CLEAR LAST RUN
10 DELETE WA,WB,WR,WI,AR,AI,BR,BI,CR,CI
20 DELETE HR,HI,VR,VI,UR,UI,DR,DI,PR,PI
30 DELETE ER,EI,WT,T,R
40 WAVEFORM WA IS A(511),HF,HA$,VA$
50 HA$="S"
60 VA$="V"
70 HF=1.0E-2
75 REM SET WAVEFORM LIMITS
80 PRINT "START AND END FOR WA AND WB"
90 INPUT N1,N2,N3,N4
95 REM NM=(MAX NOISE AMPLITUDE)/(MAX SIGNAL AMPLITUDE)
100 PRINT "NOISE MAGNITUDE RATIO";
110 INPUT NM
115 REM INITIALIZE RANDOM NUMBER GENERATOR
120 PRINT "SEED VALUES (GIVE TWO)";
130 INPUT R1,R2
140 RANDOM R1,R2
150 NB=N1\NE=N2
160 REM SUBROUTINE 390 CREATES SIN SQUARED WAVEFORM
165 GOSUB 390
170 DIM R(511)
175 REM R IS NOISE ARRAY
180 R=RND(R)
190 R=R-MEA(R)
200 A=A+NM*R*MAX(A)
210 PAGE
220 PRINT "WA"
230 GRAPH WA
240 WAIT
250 PAGE
260 WAVEFORM WR IS HR(256),HF,HF$,VF$
270 WAVEFORM WI IS HI(256),HF,HF$,VF$
290 NB=N3\NE=N4
300 GOSUB 390
310 R=RND(R)
320 R=R-MEA(R)
340 PRINT "WB"
350 GRAPH WB
360 WAIT
370 PAGE
380 GOTO 450
385 REM SIN SQUARED WAVEFORM SYNTHESIS ROUTINE
390 A=0
400 P=PI/(NE-NB)
410 FOR I=NB TO NE
420 X=SIN((I-NB)*P)
430 A(I)=X*X
440 NEXT I
460 WAVEFORM VR IS CR(256),HF,HF$,VF$
470 WAVEFORM VI IS CI(256),HF,HF$,VF$
480 RFFT WA,VR,VI
490 WAVEFORM UR IS DR(256),HF,HF$,UF$
500 WAVEFORM UI IS DI(256),HF,HF$,UF$
510 UR=WR*VR-WI*VI
530 RFFT W0,UR,UI,"INV"
540 PRINT "CONVOLVED WAVEFORM"
550 GRAPH W0
560 WAIT
570 PAGE
580 R=RND(R)
590 R=R-MEA(R)
610 PRINT "EFFECT OF CONVOLUTION NOISE"
620 GRAPH W0\PAGE
630 RFFT W0,UR,UI
635 REM REGENERATE WA WITH NOISE
640 NB=N1\NE=N2
650 GOSUB 390
655 REM RESET NOISE GENERATOR
660 RANDOM R1,R2
680 R=R-MEA(R)
690 A=A+NM*R*MAX(A)
700 GRAPH WA\PAGE
710 RFFT WA,VR,VI
715 REM REGENERATE WB WITH NOISE
720 NB=N3\NE=N4
730 GOSUB 390
740 R=RND(R)
750 R=R-MEA(R)
760 A=A+NM*R*MAX(A)
925 REM COMPENSATOR SYNTHESIS EQUATIONS
930 BR=HR/(WT*WT+LA)
960 PI=BR*UI+BI*UR
990 WAVEFORM WT IS T,HF,HF$,VF$
1000 T=SQR(PR*PR+PI*PI)
1005 REM ERROR INDICATION PARAMETER SYNTHESIS
1010 QR=PR-VR\SR=RMS(QR(0:NC))
1020 QI=PI-VI\SI=RMS(QI(0:NC))
1030 S=(SR*SR+SI*SI)*(NC+1)
1040 PRINT "ERROR ENERGY = ";S;" FOR LAMBDA = ";LA
1050 WAIT
1060 IF S>S1 THEN 1080
1070 S1=S\L1=LA
1080 IF K>10 THEN 1110
1090 K=K+1\LA=LA+DL
1100 GOTO 830
1110 PRINT\PRINT
1120 PRINT "ERROR ENERGY = ";S1;" FOR OPTIMAL LAMBDA = ";L1
1130 END
1135 REM TRANSFORMATION TO TIME DOMAIN
1140 DELETE EI,ER,UI,UR,WR,WT,BI,BR,DI,DR,HI,HR,T
1150 DIM RR(511),PP(511)
1160 RFFT RR,CR,CI,"INV"
1170 RFFT PP,AR,AI,"INV"
1180 END

APPENDIX 2

WNDW1


5 REM CLEAR LAST RUN
10 DELETE WT,WA,WB,WC,WD,AR,AI,BR,BI,CR,CI,DR,DI,ER,EI,HR,HI,PR,PI
20 DIM WT(1023),TR(512),TI(512)
25 REM READ INPUT DATA FROM FILE
30 FOR J=1 TO 4
40 IF J<>1 THEN GOTO 50\F$="S52S6.DAT"\T$="WA"
50 IF J<>2 THEN GOTO 60\F$="S32S1.DAT"\T$="WB"
60 IF J<>3 THEN GOTO 70\F$="S34S6.DAT"\T$="WC"
70 IF J<>4 THEN GOTO 80\F$="S54S1.DAT"\T$="WD"
80 OPEN #1 AS F$ FOR READ
85 REM FILE MANAGEMENT INFORMATION
90 INPUT #1,N1,DT,V
100 INPUT #1,WT(0:511)
110 CLOSE #1
120 Z$=""
130 K=511
135 REM GRAPH TIME DOMAIN FUNCTION
140 GOSUB 400
145 REM PERFORM NAHMAN-GANS WINDOWING ROUTINE
150 FOR I=512 TO 1023
160 WT(I)=WT(0)+WT(511)-WT(I-512)
170 NEXT I
180 Z$="NAHMAN-GANS WINDOWED "
190 K=1023
200 GOSUB 400
210 RFFT WT,TR,TI
215 REM COMPRESS FREQUENCY DOMAIN DATA
220 FOR I=1 TO 256
230 TR(I-1)=TR(2*I-1)
240 TI(I-1)=TI(2*I-1)
250 NEXT I
255 REM MAGNITUDE FUNCTION
260 WT(0:255)=SQR(TR(0:255)*TR(0:255)+TI(0:255)*TI(0:255))
270 K=255
280 Z$="FREQ DOMAIN "
290 GOSUB 400
295 REM OUTPUT MATRIX STORAGE ROUTINE
300 IF J<>1 THEN 320\DIM AR(255),AI(255)
310 AR=TR(0:255)\AI=TI(0:255)
320 IF J<>2 THEN 340\DIM BR(255),BI(255)
330 BR=TR(0:255)\BI=TI(0:255)
340 IF J<>3 THEN 360\DIM CR(255),CI(255)
350 CR=TR(0:255)\CI=TI(0:255)
360 IF J<>4 THEN 380\DIM DR(255),DI(255)
370 DR=TR(0:255)\DI=TI(0:255)
380 NEXT J
385 REM CALL NEXT ROUTINE
390 CHAIN "WNDW2.BAS"
395 REM GRAPH ROUTINE
400 PAGE\PRINT Z$;T$
410 SETGR CRT,4,4
420 GRAPH WT(0:K)\WAIT
430 RETURN

APPENDIX 3

WNDW2


10 DELETE WT
20 DIM WT(255)
25 REM GRAPH INPUT DATA
30 FOR I=1 TO 4
40 GOSUB I OF 410,470,530,590
50 NEXT I
60 DIM HR(255),HI(255)
70 WT=AR*AR+AI*AI
75 REM TRANSFER FUNCTION EQUATIONS
80 HR=(BR*AR+BI*AI)/WT
90 HI=(BI*AR-BR*AI)/WT
100 DELETE AR,AI,BR,BI
110 PRINT "LAMBDA = ";
120 INPUT LA
130 PRINT "INCREMENT = ";
140 INPUT DL
150 Z$="FREQ DOM RATIO "
160 T$="WB/WA"
170 S1=1E+20
180 K=0
190 DIM ER(255),EI(255)
195 REM TRANSFER FUNCTION MAGNITUDE
200 WT=SQR(HR*HR+HI*HI)
210 IF S1=1E+20 THEN GOSUB 620
215 REM OPTIMIZER SYNTHESIS
220 ER=HR/(WT*WT+LA)
230 EI=-HI/(WT*WT+LA)
240 WT=SQR(ER*ER+EI*EI)
330 IF S>S1 THEN 350
340 S1=S\L1=LA
350 IF K>10 THEN 380
360 K=K+1\LA=LA+DL
370 GOTO 200
380 PRINT\PRINT
390 PRINT "ERROR ENERGY = ";S1;" FOR OPTIMAL LAMBDA = ";L1
400 END
410 T$="WA"
420 WT=SQR(AR*AR+AI*AI)
430 GOTO 620
470 T$="WB"
480 WT=SQR(BR*BR+BI*BI)
490 GOTO 620
530 T$="WC"
540 WT=SQR(CR*CR+CI*CI)
550 GOTO 620
590 T$="WD"
600 WT=SQR(DR*DR+DI*DI)
730 RI=0
740 SR=0
750 SI=0
760 FOR I=2 TO 512 STEP 2
770 J=(I/2)-1
780 RR(I-1)=CR(J)
790 RI(I-1)=CI(J)
800 SR(I-1)=PR(J)
825 REM TIME DOMAIN TRANSFORMATION
830 DELETE CR,CI,PR,PI
840 DIM R(1023)
850 RFFT R,RR,RI,"INV"
860 DELETE RR,RI
870 DIM S(1023)
880 RFFT S,SR,SI,"INV"
890 END

LIST OF REFERENCES

1. Andrews, J.R., and Gans, W.L. 1975. Time domain automatic network analyzer for measurement of rf and microwave components. NBS Technical Note 672. Boulder, Colorado: National Bureau of Standards.

2. Andrews, J.R., and Arthur, M.G. 1977. Spectrum amplitude-definition, generation and measurement. NBS Technical Note 699. Boulder, Colorado: National Bureau of Standards.

3. Bertram, S. 1970. Frequency analysis using the discrete Fourier transform. IEEE Transactions on Audio and Electroacoustics. AU-18: 495-500.

4. Cohn-Sfetcu, S., Smith, M.R., Nichols, S.T., and Henry, D.L. 1975. A digital technique for analyzing a class of multicomponent signals. Proceedings of the IEEE. 63: 1460-67.

5. Ekstrom, M.P. 1969. An iterative, time domain method of system response correction. Paper presented at Western Electronic Show and Convention, August 19-22, 1969.

6. Ekstrom, M.P. 1973. A spectral characterization of the ill-conditioning in numerical deconvolution. IEEE Transactions on Audio and Electroacoustics. AU-21: 344-48.

7. Gans, W.L., and Nahman, N.S. 1972. Fast Fourier transform implementation for the calculation of network frequency domain transfer functions from time domain waveforms. NBSIR 73-303. Boulder, Colorado: National Bureau of Standards.

8. Goutte, R., and Prost, R. 1977. Deconvolution when the convolution kernel has no inverse. IEEE Transactions on Acoustics, Speech, and Signal Processing. ASSP-25: 542-48.

9. Hunt, B.R. 1972. A theorem on the difficulty of numerical deconvolution. IEEE Transactions on Audio and Electroacoustics. AU-20: 94-5.

10. Mersereau, R.M., and Schafer, R.W. 1978. Comparative study of iterative deconvolution. Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing. April 10-12, 1978.

11. Papoulis, A. 1975. A new algorithm in spectral analysis and band-limited extrapolation. IEEE Transactions on Circuits and Systems. CAS-22: 735-42.

12. Pearson, A.E., and Silverman, H.F. 1973. On deconvolution using the discrete Fourier transform. IEEE Transactions on Audio and Electroacoustics. AU-21: 112-18.

13. Tranter, W.H., and Ziemer, R.E. 1976. Principles of Communications. Boston: Houghton Mifflin Company.