
Volume 3 – Reference Manual

Contents

1 Sampling analogue signals
1.1 Introduction ...... 1-1
1.2 Selecting a sampling speed ...... 1-1
1.3 References ...... 1-5
2 Digital filters
2.1 Introduction ...... 2-1
2.2 Simple digital filters ...... 2-4
2.3 Practical digital filters ...... 2-13
2.4 References ...... 2-14
3 Fourier analysis
3.1 Introduction ...... 3-1
3.2 Fourier series and the Fourier transform ...... 3-2
3.3 Reference ...... 3-6
4 Power spectral density
4.1 Introduction ...... 4-1
4.2 Computer calculation of power spectral density ...... 4-2
4.3 Window functions ...... 4-3
4.4 Calculation of signal properties from the PSD ...... 4-8
4.5 References ...... 4-10
5 Frequency response functions
5.1 Introduction ...... 5-1
5.2 Calculation of frequency response function ...... 5-2
5.3 Gain ...... 5-2
5.4 Phase change ...... 5-3
5.5 Coherence ...... 5-4
6 Time-at-level & probability density
6.1 Introduction ...... 6-1
6.2 Time-at-level analysis ...... 6-1
6.3 Probability density ...... 6-4
6.4 Computer calculation ...... 6-5
6.5 Interpretation of results ...... 6-5
7 Amplitude analysis
7.1 Introduction ...... 7-1
7.2 Rainflow cycle counting ...... 7-1
7.3 Level crossing analysis ...... 7-9
7.4 Peak and valley counting ...... 7-11
7.5 Range counting ...... 7-12
7.6 Summary ...... 7-13
8 The data chain
8.1 Introduction ...... 8-1
8.2 Data recording - analogue or digital ...... 8-1
8.3 Short term recording or long term analysis ...... 8-3
8.4 Analogue to digital conversion ...... 8-4
8.5 Sequential sample and hold ...... 8-5

©safe technology limited Volume 3 Contents 1 Vol. 3 Contents Issue: 1 Date: 31.10.02 Contents

8.6 Simultaneous sample and hold ...... 8-6
8.7 Reference ...... 8-7
9 Signal statistics


1 Sampling analogue signals

1.1 Introduction
The world is an analogue world, which must be converted into digital form if it is to be processed by computer. The process of conversion is called sampling. The way in which signals are sampled affects all the subsequent signal analysis, and so it is appropriate to consider the subject of sampling before studying the analysis of the measured data. Sampling may be carried out using an analogue to digital converter (ADC). An ADC takes samples of analogue signals at specified times, and converts the samples into binary digits for analysis or storage. The way in which ADCs operate is described in Section 8.

1.2 Selecting a sampling speed
Consider an ADC which is set to sample at 250 Hz, i.e. 250 samples per second. The diagram below (Figure 1.1) shows sine waves, with arbitrarily chosen frequencies of 55 Hz and 71 Hz, sampled by the ADC at 250 samples/second. It is reasonable to suppose that these sine waves could be reconstructed from the samples taken.

Figure 1.1 Signals of 55Hz and 71Hz, sampled at 250Hz

At the same sample frequency of 250 Hz, sine waves of 100 Hz and 150 Hz produce the same set of samples (Figure 1.2).

Figure 1.2 Signals of frequency 100Hz and 150Hz sampled at 250Hz produce the same set of samples

A reconstruction of a signal from these samples would give the signal with the lower frequency, and it would not be possible to reconstruct the higher frequency sine wave from this set of samples. Consider one further sine wave, with a frequency of 125 Hz, i.e. a frequency between the two sine waves shown above. If this is sampled at 250 Hz, a number of sets of samples are possible (Figure 1.3).


Figure 1.3 Possible samples from a signal of 125Hz sampled at 250Hz

In each case, it is possible to reconstruct a sine wave of the correct frequency, but a number of different amplitudes are possible. A graph of the frequency of the sine waves, and the amplitudes and frequencies that would be reconstructed from the samples taken by the ADC, could be plotted as shown below (Figure 1.4).

Figure 1.4 Amplitude - frequency plot for the sinewaves

It can be seen that when the sine wave has a frequency less than 125 Hz, both the amplitude and frequency can be reconstructed correctly. For a sine wave of 125 Hz the frequency can be reconstructed correctly, but the amplitude cannot. For the sine wave of a frequency greater than 125 Hz, two possible frequencies may be reconstructed. This suggests that for a sample rate of 250 Hz, the maximum frequency of sine wave that can be reconstructed is 125 Hz, i.e. half the sample rate. Further, sine waves of frequencies greater than half the sample rate will be reconstructed as sine waves of frequencies less than half the sample rate. In other words a 'folding' of the apparent frequency occurs, whereby frequencies greater than half the sample rate fold back to appear as frequencies less than half the sample rate. Indeed, a sine wave with a frequency equal to the sample rate (in this case, a sine wave of 250 Hz) would be sampled as shown.


Figure 1.5 250 Hz signal sampled at 250 Hz

The result is zero frequency and an unknown amplitude, so a signal frequency equal to the sample rate folds back to give an apparent frequency of zero Hz. On the frequency diagram, therefore, there is an axis of symmetry at half the sample rate. Sine waves of higher frequencies have their apparent frequency folded about this axis, to appear as a lower frequency. This phenomenon is known as aliasing. It leads to one of the most important theorems of sampling, which is that

if the highest frequency present in a signal is f1, then at least 2 x f1 samples per second must be taken in order to define this frequency.
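The folding effect can be demonstrated numerically. Below is a minimal sketch (an illustration, not one of the manual's own examples), using cosines so that the aliased sample sets coincide exactly:

```python
import numpy as np

fs = 250.0                 # sample rate (Hz)
t = np.arange(50) / fs     # sample times

# 150 Hz lies as far above the 125 Hz folding frequency as
# 100 Hz lies below it, so it folds back to 250 - 150 = 100 Hz.
x100 = np.cos(2 * np.pi * 100.0 * t)
x150 = np.cos(2 * np.pi * 150.0 * t)

print(np.allclose(x100, x150))   # True
```

With sines, the two sample sets differ only in sign; the apparent frequency is the same 100 Hz in both cases.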

As has been shown above, the highest frequency present in the signal is not the same thing as the highest frequency of interest in the signal. The frequency which corresponds to half the sampling frequency is called the Nyquist frequency. It represents the maximum frequency which can be interpreted by an analysis. Clearly, if it is proposed to analyse the signal in order to define the frequencies present, then a sample rate could be selected which is at least twice the maximum frequency present in the signal. This would avoid the problem of aliasing, and is therefore one possible criterion on which to select the sample rate. Other forms of analysis may require a higher sample rate. A sine wave sampled at four times its frequency could produce different sets of samples. Two possible sets of samples are shown in Figure 1.6.

Figure 1.6 Possible samples from a signal sampled at four times the signal frequency

In the first instance, the samples have occurred (by chance) on the peaks and valleys in the signal. In the second case, the peaks and valleys have not been accurately defined. The sample rate, of four times the frequency of the signal, is twice the minimum rate required by the sampling theorem, but is not adequate to define the amplitudes of the peaks and valleys. Clearly the conventional recommendation of the Nyquist frequency as a basis for selecting a sample rate is quite inadequate for analyses such as fatigue analysis, which require an accurate definition of the amplitudes of the peaks and valleys in the signal. It can be shown (ref 1.2) that the error in defining the peaks in a sine wave is given by

Pk = 2 sin²( πf / 2fs )


where Pk = percent error on peaks / 100
f = frequency of the sine wave
fs = sample frequency

This equation is plotted below (Figure 1.7). It shows that at four points per cycle, the error in peak resolution can be up to 30%.
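Evaluating the error formula confirms the 30% figure quoted above. A short sketch, assuming the peak error takes the form Pk = 2 sin²(πf/2fs) (the helper name `peak_error` is illustrative):

```python
import math

def peak_error(f, fs):
    """Error in defining the peaks of a sine wave of frequency f,
    sampled at fs samples/second (percent error / 100)."""
    return 2 * math.sin(math.pi * f / (2 * fs)) ** 2

print(round(peak_error(1.0, 4.0), 3))    # 0.293  (four points per cycle)
print(round(peak_error(1.0, 10.0), 3))   # 0.049  (ten points per cycle)
```

At ten points per cycle the worst-case peak error falls to about 5%, consistent with the sampling recommendation made at the end of this section.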

Figure 1.7 Error in sampling a sinewave

In Ref 1.3, a narrow band Gaussian signal was used to investigate the error in fatigue life prediction which results from undersampling a signal. Taking a sample rate 100 times the signal frequency as giving the 'correct' answer, and normalising all other calculated lives by this value, gave the results shown in Figure 1.8 for local stress-strain analysis and for analysis of welded joints using BS5400 fatigue life data. It can be seen that sampling at 10 times the signal frequency gave calculated lives 1.3 to 1.5 times the true value. A sampling rate of four times the signal frequency produces calculated lives which are two or three times the true value, and a sample rate based on the Nyquist frequency (two points per cycle) gives an error which is too large to be shown on the graphs.

Figure 1.8 Effect of sampling frequency on fatigue life estimation (relative life vs. points/cycle, for narrow band and broad band signals)

In practice, errors would be expected to be less than those shown, because most signals contain a mixture of frequencies, and the fatigue damage tends to be produced by large amplitude lower frequency components in the signal. However, a sample rate of ten times the maximum signal frequency generally gives a reasonable compromise between quantity of data and accuracy of analysis.


1.3 References

1.1 Caxton C Foster, REAL TIME PROGRAMMING - NEGLECTED TOPICS, Addison Wesley, 1981

1.2 Donaldson, K., FIELD DATA CLASSIFICATION AND ANALYSIS TECHNIQUES, S.A.E. Paper 820685

1.3 Morton, K., Musiol, C., Draper, J., LOCAL STRESS-STRAIN ANALYSIS AS A PRACTICAL ENGINEERING TOOL, Proc SEECO 1983, City University, London


2 Digital filters

2.1 Introduction
A filter modifies an input signal to produce an output signal.

Filters may be used for many purposes - to remove high frequency noise, to remove long term drift, or to shape computer-generated signals for mechanical testing.

LOW PASS filters are designed to pass low frequencies, but eliminate higher frequencies. HIGH PASS filters are designed to pass high frequencies, but eliminate low frequencies. BAND PASS filters are designed to pass certain bands of frequencies, but eliminate others.

This introduction to digital filters will concentrate on the principles of low pass filters, although the information is generally applicable to high pass and band pass filters. For a low pass filter, the low frequencies will be passed with their amplitudes largely unchanged, whilst the higher frequencies will have their amplitudes reduced. The extent to which the filter modifies the amplitudes is expressed as the gain of the filter, and the gain will have different values at different frequencies. An ideal low pass filter will pass low frequencies without any modification to their amplitudes (gain = 1), and the higher frequencies will have their amplitudes reduced to zero. The gain diagram of an ideal filter, plotted as a function of frequency, would be as shown below.

Figure 2.1 Gain diagram for an ideal filter

The gain is the ratio of the output and input amplitudes at any specified frequency. The frequency at which the gain changes from unity to zero is the cut-off frequency. In practice it is difficult to obtain such a sharp step transition between the pass band and the stop band, and a typical filter will have a gain diagram as shown in Figure 2.2.


Figure 2.2 Typical gain diagram

It is less obvious just what frequency represents a cutoff in this case, so a definition of the cutoff frequency is required. By definition, the cutoff frequency is the frequency at which the gain falls to a value of 0.707. This number is not quite as arbitrary as it looks. In the generation of electrical power, power is proportional to (current)², so a gain of 0.707 represents a reduction in power of (0.707)² = 0.5. The cutoff frequency is therefore the frequency at which the power is reduced to one-half of its value.

Filter descriptions may specify both a cutoff frequency and a roll-off rate. The roll-off rate is usually defined in decibels/octave, or dB/octave. One octave represents a doubling of frequency, and 3 dB/octave gives a reduction in amplitude of 0.707 over each octave, 6 dB/octave gives a reduction of 0.5 in amplitude, etc. (See Appendix 2 for the Decibel scale.)

Most filters also affect the phase relationships within a signal. Consider a signal which is constructed by superimposing four sinewaves of different amplitudes and frequencies, as follows.

Figure 2.3 Time-shifting of a series of sinewaves

If this signal is shifted in time, so that the start point is at the line AA, without changing the shape of the waveform, then each of the component sine waves will have been shifted through a different angle. The highest frequency component has been shifted through 90°, the next highest frequency through 45°, and so on. In other words, the phase relationship between the component sine waves will have been changed. Plotting the phase change as a function of the frequency of each sine wave produces a phase diagram as shown.


Figure 2.4 Linear phase diagram for time-shifted signals

It is clear that a linear phase change with frequency maintains the shape of the original signal. For mechanical engineering applications, and in particular fatigue analysis or testing, it is desirable that those frequencies which we wish to retain (the frequencies in the pass band) should be retained with the shape of the waveform unaltered. A linear phase diagram is therefore a desirable characteristic of a filter. Phase diagrams are usually plotted between limits of ±180°, with a positive slope representing a phase delay. A linear phase diagram for the time-shifted signals would then appear as shown in Figure 2.5.

Figure 2.5 Linear phase diagram for time-shifted signals, plotted between ±180°

The phase change which can be produced by filtering is illustrated below. A sinewave with added noise has been low-pass filtered, and the phase lag produced on the filtered signal can be clearly seen.

Figure 2.6 Phase change produced by filtering


2.2 Simple digital filters

2.2.1 Three point smoothing filter
Figure 2.6 shows a sinewave with added high frequency noise. A simple method of reducing the noise content of this signal would be to smooth the signal digitally. A simple smoothing program replaces each data point by the average of several adjacent data points. For example, a 3-point smoothing filter may form each output data point by averaging three input data points. In the notation used for these notes,

y(t) is the output signal
x(t) is the input signal

yk, yk+1, xk, xk+1 etc. are individual data points in the signals

A three point smoothing filter could be expressed as

yk = (1/3)(xk-1 + xk + xk+1)    ...(2.1)

The gain and phase diagrams for this filter can be obtained by filtering computer-generated white noise signals.

Figure 2.7 Gain and phase diagrams for a three-point smoothing filter

The three point smoothing filter could be expressed as

yk = b1xk-1 + b2 xk + b3xk+1 ...(2.2)

The three constants bn, which previously each had a value of 1/3, can now have any value, although for a low pass filter the sum of the constants must equal unity if zero Hz is to be passed unaltered. The three point smoothing filter is in fact only one of a large class of filters which all produce the output signal by combining a number of input data points, with different values of constants before each data point. It is a non-recursive filter, because it forms the output signal only from the input signal.
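Equation (2.2) can be sketched directly. In this illustrative fragment (not from the manual), setting each constant back to 1/3 recovers the smoothing filter of equation (2.1):

```python
# Non-recursive filter of equation (2.2); with b1 = b2 = b3 = 1/3
# it becomes the three point smoothing filter of equation (2.1).
def nonrecursive3(x, b1, b2, b3):
    return [b1 * x[k - 1] + b2 * x[k] + b3 * x[k + 1]
            for k in range(1, len(x) - 1)]

# A constant level of 1 with alternating 'noise' of amplitude 0.3
x = [1.0 + (0.3 if k % 2 == 0 else -0.3) for k in range(10)]
y = nonrecursive3(x, 1 / 3, 1 / 3, 1 / 3)

# The noise amplitude in the output falls from 0.3 to 0.1
print(max(abs(v - 1.0) for v in y))
```

Because the three constants sum to unity, a constant (zero Hz) input passes through unaltered, as the text requires.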

2.2.2 Recursive low pass filter
Consider now a filter defined by the equation

yk = b.xk + a.yk-1 ...(2.3)

In this case the output signal y(t) is formed by mixing the input signal x(t) with previous values of the output signal.


To show that this is still a filter, the equation can be expanded, as follows.

The value yk-1 will be given by

yk-1 = b.xk-1 + a.yk-2

so that

yk = b.xk + ab.xk-1 + a².yk-2

Continuing the expansion,

yk-2 = b.xk-2 + a.yk-3

so that

yk = b.xk + ab.xk-1 + a²b.xk-2 + a³.yk-3    ...(2.4)

If this process of substitution is continued, the contribution of the term aⁿ.yk-n will become negligible and the expression becomes

yk = b.xk + ab.xk-1 + a²b.xk-2 + a³b.xk-3 + ...    ...(2.5)

The output value yk is now made up only of input values xk , and the expression is similar to that of the three-point smoothing filter given in equation (2.1).

If values are defined for a and b, say a = 7/8, b = 1/8, the filter has the characteristics shown:

Figure 2.8 Gain and phase diagrams for the simple recursive filter

The expression

yk = b.xk + a.yk-1

is a recursive filter, because it mixes input and output to produce the new output. The process of expansion transformed the recursive filter into a non-recursive filter, with a slight approximation introduced by ignoring the final term in equation (2.4). In general, recursive filters can be expressed as non-recursive filters, although the non-recursive form is an approximation. Comparison of equations (2.3) and (2.5) shows that the recursive filter requires two multiplications and one addition to produce each data point. Its expansion, even in its truncated form, requires seven multiplications and two additions to produce each data point. In computer operation, the non-recursive filter will be much slower. It is generally true that it is more efficient to achieve a required filter characteristic by using recursive filters. However, because recursive filters feed previous output back into the filter, they may require careful design if they are to be stable for all signals. It can be seen that these simple filters have gain and phase diagrams which are far from ideal - the gain diagram does not have a flat pass band and the phase diagram is not linear. To understand the design of more effective filters, we first need to define some filter characteristics. This will be done by reference to simple analogue filters.
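The agreement between the two forms can be illustrated by running both on the same input. A sketch (assuming a step input and truncation at N = 30 terms, both choices illustrative):

```python
# Recursive filter (2.3) versus its truncated non-recursive
# expansion (2.5), with a = 7/8, b = 1/8.
a, b = 7 / 8, 1 / 8
x = [0.0] * 10 + [1.0] * 40          # a step input

# Recursive form: two multiplications, one addition per point
y = 0.0
recursive = []
for xk in x:
    y = b * xk + a * y
    recursive.append(y)

# Truncated expansion: yk = b.xk + ab.xk-1 + a.a.b.xk-2 + ...
N = 30
w = [b * a ** i for i in range(N)]
expanded = [sum(w[i] * x[k - i] for i in range(N) if k - i >= 0)
            for k in range(len(x))]

err = max(abs(r - e) for r, e in zip(recursive, expanded))
print(err < a ** N)   # True: truncation error is bounded by a**N
```

The truncated expansion needs N multiplications per output point yet only matches the recursive result to within aᴺ, illustrating why the recursive form is preferred in practice.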

2.2.3 Design of a simple recursive low pass filter
The basic notation for filters was derived for analogue filters. Simple analogue filters consist of a resistor and a capacitor in series, and so have the general name of RC filters.

Figure 2.9 Simple analogue filter

The response of an RC filter to a step input takes the form shown in Figure 2.10


Figure 2.10 Response of a simple analogue filter to a step input

This shows that if the input is held constant for some time, the output will reach the value of the input - so a very low frequency is passed with its amplitude unaltered. However, if the input is held for only a short interval of time, the output has insufficient time to reach the input value, so the amplitude of the higher frequency is reduced. The circuit is therefore acting as a low pass filter. In physical terms, at low frequencies the capacitor has time to fully charge and discharge, so the output voltage V2 across the capacitor has time to equal the input voltage V1. At higher frequencies the capacitor cannot fully charge and discharge in the time available, and so the ratio (V2/V1) falls towards zero as the frequency is increased. High frequencies are therefore rejected. If the capacitor is larger, it will require more time to 'fill'. Similarly, if the resistor has a high value, it will cause the capacitor to 'fill' more slowly. The maximum frequency that the filter could pass will therefore be inversely proportional to the size of the capacitor and the size of the resistor. Using the definition of the cut-off frequency as the frequency at which the gain (V2/V1) falls to a value of 0.707, the cutoff frequency for the RC filter is

fcutoff = 1/(2πRC)    ...(2.6)


The term RC in equation (2.6) is called the time constant for the filter. If the input voltage is suddenly reduced to zero, the time constant is the time taken for the output voltage to fall to 1/e of its original value, where e = 2.718. It was shown in Section 2.2.2 that a simple recursive low pass filter can be produced using an equation such as equation (2.3)

yk = b.xk + a.yk-1

where xk is the input signal
yk is the output signal
and a and b are constants.

A low pass filter has a gain of unity at a frequency of zero Hz. For zero Hz, the input is held constant, and so the output must also be constant and equal in value to the input. In equation (2.3) this will be true if a + b = 1 ... (2.7)

The time constant for the filter can be calculated if the value of the input is suddenly reduced to zero.

Figure 2.11 Step input at k = 0

Consider a filter defined by equation (2.3) with constants a = 7/8, b = 1/8. If the value of the input signal is held constant at 1, then reduced to zero when (say) k = 1, then a table can be constructed as follows


k      x (input)   y (old output)   y (new output)
-100   1           1                1
-1     1           1                1
0      1           1                1
1      0
2      0
3      0

From equation (2.3), and using the values of b = 1/8, a = 7/8, the rest of the table can be completed.

k      x (input)   y (old output)   y (new output)
-100   1           1                1
-1     1           1                1
0      1           1                1
------ input reduced to zero ------
1      0           1                0.875
2      0           0.875            0.76
3      0           0.76             0.67
4      0           0.67             0.59
5      0           0.59             0.51
6      0           0.51             0.45
7      0           0.45             0.39
8      0           0.39             0.34

This table is plotted in Figure 2.12


Figure 2.12 Response of filter to step input at k = 0
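The decay in the table can be reproduced in a few lines (an illustrative sketch, not from the manual):

```python
import math

# Decay of y[k] = b*x[k] + a*y[k-1] after the input drops to zero,
# with a = 7/8, b = 1/8, as in the table above.
a, b = 7 / 8, 1 / 8
y, decay = 1.0, []
for k in range(1, 9):        # k = 1 .. 8, input now zero
    y = b * 0.0 + a * y
    decay.append(round(y, 3))

print(decay)
print(abs(decay[-1] - 1 / math.e) < 0.03)   # near 1/e by the 8th sample
```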

By the 8th sample, the value has fallen to about 1/e (= 0.368). In general, if the time between samples is T seconds, then it takes 8T seconds for the value of the output to fall to 0.368. Using the notation of equation (2.3), it takes T/b seconds, and this is the value of the time constant for the filter. It will be seen that the term T/b for a digital filter is analogous to the term RC for an analogue filter, and it may be deduced that the cutoff frequency, which for an analogue filter is

fcutoff = 1/(2πRC)

will for a digital filter be given by

fcutoff = 1/(2π(T/b))

or

fcutoff = b/(2πT)    ...(2.6)

Notice that the cutoff frequency for the digital filter is not an absolute value, but is a function of sample rate (T is the time between samples). The constants a and b in equation (2.3) have to be recalculated for the filter each time it is used for signals of different sample rates. For this reason, filter characteristics are often expressed in terms of the Nyquist frequency, rather than absolute frequency.

Example. For a signal sampled at 1000 Hz, a filter with a cutoff frequency of 100 Hz would require

b = fcutoff × 2πT

and as T is the time between samples, i.e. 1/1000 seconds,

b = (100 × 2π)/1000


so b = 0.628 and, from equation (2.7), a = 1 - 0.628 = 0.372
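The example can be checked in code (the helper name `lowpass_constants` is illustrative):

```python
import math

def lowpass_constants(f_cutoff, f_sample):
    """a and b for the recursive low pass filter (2.3), using
    b = f_cutoff * 2*pi*T (equation 2.6) and a + b = 1."""
    T = 1.0 / f_sample            # time between samples
    b = f_cutoff * 2 * math.pi * T
    return 1.0 - b, b

a, b = lowpass_constants(100.0, 1000.0)
print(round(b, 3), round(a, 3))   # 0.628 0.372
```

As the text notes, the same filter applied to a signal with a different sample rate would need a and b recalculated.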

The impulse response for analogue filters was defined earlier. The impulse response of a digital filter can be obtained by holding the input values to zero, except when k = 0, when the input value is 1.

For the simple filter, say a = 1/2 and b = 1/2, then

yk = 0.5 xk + 0.5 yk-1

The impulse response can be calculated in a table

k   x   y
0   1   0.5
1   0   0.25
2   0   0.125
3   0   0.063
4   0   0.031
5   0   0.016
6   0   0.008
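The impulse response table can be generated by feeding a unit impulse through the recurrence (illustrative sketch):

```python
# Impulse response of yk = 0.5*xk + 0.5*yk-1
x = [1.0] + [0.0] * 6            # unit impulse at k = 0
y, response = 0.0, []
for xk in x:
    y = 0.5 * xk + 0.5 * y
    response.append(y)

# Each output is half the previous one: response[k] = 0.5**(k + 1)
print(all(response[k] == 0.5 ** (k + 1) for k in range(7)))   # True
```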

If the impulse value was a number other than 1, then the values in the table could be scaled accordingly. As any digital signal can be considered as a sequence of pulses at successive values of k, the response of the filter to a signal can be calculated from the impulse response table shown above. For a triangular input signal:

k = 0 1 2 3 4 5 6 7 8 9 10 11 12 13

x = 0 1 2 3 4 3 2 1 0 1 2 3 4 3 etc

the response can be calculated by superimposition as


k   x   y
0   0                                                        =  0
1   1   0.5                                                  =  0.5
2   2   0.25  + 1                                            =  1.25
3   3   0.125 + 0.5   + 1.5                                  =  2.125
4   4   0.063 + 0.25  + 0.75  + 2                            =  3.06
5   3   0.031 + 0.125 + 0.375 + 1     + 1.5                  =  3.03
6   2   0.016 + 0.063 + 0.188 + 0.5   + 0.75  + 1            =  2.52
7   1   0.008 + 0.031 + 0.094 + 0.25  + 0.375 + 0.5  + 0.5   =  1.76
8   0           0.016 + 0.047 + 0.125 + 0.188 + 0.25 + 0.25  =  0.88
        A       B       C       D       E       F      G

where column A is the response to a unit impulse,
column B is the unit response multiplied by 2, to give the response to an impulse of 2, delayed by one sample,
column C is the unit response multiplied by 3, to give the response to an impulse of 3, delayed by one additional sample, etc.
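The superimposition can be checked against direct application of the recurrence; both give the same output. A sketch (writing the unit impulse response in closed form as 0.5^(i+1)):

```python
# Response to the triangular input, two ways: direct recursion,
# and superimposition of scaled, delayed unit impulse responses.
x = [0, 1, 2, 3, 4, 3, 2, 1, 0]

y, direct = 0.0, []
for xk in x:
    y = 0.5 * xk + 0.5 * y
    direct.append(y)

h = [0.5 ** (i + 1) for i in range(len(x))]     # unit impulse response
superposed = [sum(x[k - i] * h[i] for i in range(k + 1))
              for k in range(len(x))]

print(direct == superposed)     # True: the two methods agree
print(direct[-1])               # 0.87890625, i.e. 0.88 as in the table
```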

This result is plotted below (Figure 2.13), and shows that a time delay, or phase shift, has occurred between the input and output signals.

Figure 2.13 Comparison of calculated input and output signals

The above example also shows why output filters are needed on digital-to-analogue converters. Without filters, the digital signal is simply a series of pulses, rather than a continuous signal. Closing this section on simple recursive low pass filters, it has been shown above that the recursive filter could be expanded into a non-recursive filter which had an infinite number of terms in the expansion. For this reason, recursive filters are known as infinite impulse response (IIR) filters. Non-recursive filters, which have (by definition) a finite number of terms in the series, are called finite impulse response (FIR) filters.

2.2.4 Recursive high pass filter
The recursive low pass filter averaged the new input with the old output, so that any sudden changes (high frequencies) were smoothed out, and only long term trends (low frequencies) are retained. For a high pass filter, only the sudden changes themselves are required, so the filter must operate by discarding long term trends. The formula

yk = b.xk - a.yk-1 ...(2.7)

subtracts the old (long term) output a.yk-1 from the input signal, and represents the equation of a high pass filter. The gain of such a filter at high frequencies is

gain = b/(1 - a)

and as the gain for a high-pass filter must be unity at the high frequencies,

a + b = 1

as for the low pass filter.
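Both gain claims can be verified numerically. An illustrative sketch (the `steady_output` helper is not from the manual):

```python
# High pass filter (2.7): yk = b*xk - a*yk-1, with a + b = 1.
a, b = 7 / 8, 1 / 8

def steady_output(x_of_k, n=2000):
    y = 0.0
    for k in range(n):
        y = b * x_of_k(k) - a * y
    return y

# At the Nyquist frequency the input alternates +1, -1 and the
# steady-state amplitude is b/(1 - a) = 1: passed unchanged.
nyquist = abs(steady_output(lambda k: (-1) ** k))
# At zero Hz the output settles at the small value b/(1 + a):
# long term trends are largely rejected.
dc = steady_output(lambda k: 1.0)

print(abs(nyquist - 1.0) < 1e-9, abs(dc - b / (1 + a)) < 1e-9)
```

Note that this simple filter does not reject zero Hz completely; it only attenuates it, here to b/(1 + a) ≈ 0.067.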

2.2.5 Recursive band pass filter
Band pass filters can be constructed by passing the signal through a low pass filter and a high pass filter in series. The two cutoff frequencies can be calculated in the same way as for the individual filters. For the low pass filter

yk = b1.xk + a1.yk-1 ... from (2.3)

The output from this filter can be passed through a high pass filter

zk = b2.yk - a2.zk-1 ... from (2.7)

Combining the two equations to eliminate yk gives

zk = b1b2.xk + (a1 - a2).zk-1 + a1a2.zk-2    ...(2.8)

which is the equation for a band pass filter.
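The equivalence of the two-stage cascade and the single recurrence (2.8) can be checked numerically (an illustrative sketch with arbitrarily chosen constants and input):

```python
# Low pass stage (2.3) followed by high pass stage (2.7),
# compared with the single recurrence (2.8).
a1, b1, a2, b2 = 0.9, 0.1, 0.8, 0.2            # arbitrary constants
x = [((k * 7) % 13) - 6.0 for k in range(50)]  # an arbitrary input

y = z = 0.0
cascade = []
for xk in x:
    y = b1 * xk + a1 * y      # low pass stage
    z = b2 * y - a2 * z       # high pass stage
    cascade.append(z)

z1 = z2 = 0.0
direct = []
for xk in x:
    zk = b1 * b2 * xk + (a1 - a2) * z1 + a1 * a2 * z2
    z1, z2 = zk, z1
    direct.append(zk)

print(max(abs(c - d) for c, d in zip(cascade, direct)) < 1e-9)
```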

2.2.6 More complex recursive filters
The band pass filter was constructed by taking the output from one filter and passing it through another filter. This process can be used to construct more complex filters, low pass and high pass as well as band pass.

The output from a low pass filter

yk = b.xk + a.yk-1 ... from (2.3)

can be used as input to a second low pass filter

zk = b.yk + a.zk-1

Combining the two equations to eliminate yk gives a single expression

zk = 2a.zk-1 - a².zk-2 + b².xk    ...(2.9)
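A numerical check of the combined recurrence (illustrative; the substitution of yk-1 = (zk-1 - a.zk-2)/b requires a minus sign on the zk-2 term):

```python
# Two identical low pass stages (2.3) in series, compared with the
# single recurrence zk = 2a.zk-1 - a*a.zk-2 + b*b.xk
a, b = 7 / 8, 1 / 8
x = [((k * 3) % 7) - 3.0 for k in range(60)]   # an arbitrary input

y = z = 0.0
cascade = []
for xk in x:
    y = b * xk + a * y
    z = b * y + a * z
    cascade.append(z)

z1 = z2 = 0.0
direct = []
for xk in x:
    zk = 2 * a * z1 - a * a * z2 + b * b * xk
    z1, z2 = zk, z1
    direct.append(zk)

print(max(abs(c - d) for c, d in zip(cascade, direct)) < 1e-9)
```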


Methods such as these, and much more complex ones, can be used to construct digital filters with characteristics to suit particular requirements.

2.3 Practical digital filters
An ideal low-pass filter would have the frequency response shown in Figure 2.1, with all the frequencies below the cut-off frequency passed completely unaltered, and all frequencies above the cut-off frequency completely rejected. None of the simple filters described in the previous sections comes close to meeting these criteria. The design of practical filters involves a compromise between the sharpness of the cutoff, and the smoothness of the frequency response. Three types of filter are commonly encountered in engineering data acquisition - Butterworth, Chebyshev and Elliptical filters. Butterworth filters have a smooth frequency response in both the pass band and the stop band, and their effect on the phase relationships in the signal is reasonably linear over much of the pass band range. Typical gain and phase diagrams are shown in Figure 2.14 for a low pass filter with a roll-off of -18dB per octave.

Figure 2.14 Gain and phase diagrams for a Butterworth low pass filter

(See Section 2, page 3, for description of roll-off rate.) The Butterworth filter is a recursive filter, defined by a set of polynomial equations. These equations may be expanded to produce a non-recursive filter expression. For a low-pass filter, two design criteria must be specified - the cut-off frequency, and the roll-off rate. A first-order Butterworth filter has a single term polynomial, and a roll-off rate equal to -6dB/octave. A second-order filter has a -12dB/octave roll-off, a third-order filter -18dB/octave, and so on. In an alternative terminology, a first-order Butterworth filter may be referred to as a 2-pole filter, a second order filter as a 4-pole filter, etc. As the number of poles approaches infinity, the gain and phase diagrams approach the ideal filter characteristics. Chebyshev filters produce a sharper corner frequency, but at the expense of ripple in either the pass band (type 1 Chebyshev filters) or in the stop band (type 2). The ripple in the pass band allows the filter to have a gain very close to unity at zero Hz and close to the cut-off frequency. Three design criteria are required - the cut-off frequency and roll-off rate, plus the amount of ripple to be permitted. Elliptical filters produce ripple in both the pass and stop bands, so four design criteria are required. They produce a non-linear phase diagram. Figure 2.15 shows a comparison of the gain diagrams for the three types of filter.


Figure 2.15 Comparison of Butterworth, Chebyshev and elliptical filters

2.4 References
For the description of recursive filters, this section has drawn on the following book

Caxton C Foster, REAL TIME PROGRAMMING - NEGLECTED TOPICS, Addison-Wesley 'Joy of Science' paperback, 1981, ISBN 0-201-01937-X

This is an excellent book for non-programmers as well as programmers, covering many hard-to-find topics in data acquisition and test control in a very readable way.


3 Fourier analysis

3.1 Introduction
Section 1 referred to the analysis of the frequency content of signals and introduced the concept of plotting the amplitudes present in a signal against the frequencies at which those amplitudes occurred. Consider a sine wave with an amplitude of 1, and a frequency of 1 Hz.

Figure 3.1 A 1 Hz sine wave of amplitude 1

It can be represented by the equation

y1 = a sin ω1t

where a is the amplitude of the sine wave. A second sine wave, of amplitude 1/3 and frequency 3 Hz,

y2 = b sin ω2t

Figure 3.2 A 3 Hz sinewave of amplitude 1/3

can be added to the first sine wave, to give

y = a sin ω1t + b sin ω2t

Figure 3.3 Summation of the two sinewaves

A third sine wave, of amplitude 1/5 and frequency 5 Hz, when added, gives


After a total of ten or so such sine waves have been superimposed, the resulting signal is a fair representation of a square wave.

Figure 3.4 Summation of ten sinewaves
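The superposition described above can be sketched in a few lines. The partial sum below adds odd-harmonic sine waves with amplitudes 1, 1/3, 1/5, ..., as in the example; the target square wave has a flat-top height of π/4, which the sum approaches as more terms are added.

```python
import math

def square_wave_partial_sum(t, n_terms):
    # Sum of n_terms odd-harmonic sine waves with amplitudes 1, 1/3, 1/5, ...
    return sum(math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms))

# At t = pi/2 the square wave being approximated is on its flat top, at
# height pi/4; the partial sum converges towards that value
approx_10 = square_wave_partial_sum(math.pi / 2, 10)
approx_100 = square_wave_partial_sum(math.pi / 2, 100)
```

With ten terms the sum is already within a few percent of π/4, and the error shrinks steadily as further sine waves are added.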

It seems a reasonable argument that if a square wave can be produced by adding together a number of sine waves, then almost any signal can be produced from sine waves of various amplitudes, frequencies, and phase relationships. This is broadly true, providing the signal obeys certain rules. These rules will be defined fully later on, but in general terms the signal must repeat over a certain time interval, and be reasonably constant in its properties. Engineers have extended this argument by the convenient assumption that a length of signal that doesn't repeat at all represents one occurrence (or repeat) of a signal. As in section 1, the amplitudes could be plotted at their respective frequencies.

Figure 3.5 Amplitude vs frequency for a synthesized square wave

The process of extracting from a signal the amplitudes and frequencies present is the basis of frequency domain analysis. In essence it is the reverse of the procedure to construct the square wave - it shows what frequencies and amplitudes were used in its construction.

3.2 Fourier series and the Fourier transform To calculate the amplitudes and frequencies present in a signal, a band pass filter could be used to filter out all but a narrow band of frequencies, and the procedure repeated for a succession of pass bands to cover the frequency range of interest. This was the original method used, but its disadvantage was its susceptibility to the characteristics of the filters. Modern methods are based on Fourier analysis, and in particular the Fast Fourier Transform.


The square wave was derived using a series of sine waves

y(t) = a1 sin ω1t + a2 sin ω2t + a3 sin ω3t + ...... (3.1)

In general, the series is

y(t) = a_0 + a_1 cos(2πt/T) + a_2 cos(4πt/T) + a_3 cos(6πt/T) + ...
     + b_1 sin(2πt/T) + b_2 sin(4πt/T) + b_3 sin(6πt/T) + ...    ...(3.2)

Note that T is the period of the signal, i.e. the length of time between repeats of the signal. In strict terms Fourier series only apply to signals which repeat after a fixed interval of time. The series can be expressed in more compact form as

y(t) = a_0 + Σ_{k=1}^{∞} [ a_k cos(2πkt/T) + b_k sin(2πkt/T) ]    ...(3.3)

where a0 is the mean of the signal, and by definition

a_0 = (1/T) ∫_{-T/2}^{T/2} y(t) dt

a_k = (2/T) ∫_{-T/2}^{T/2} y(t) cos(2πkt/T) dt ,  k ≥ 1

b_k = (2/T) ∫_{-T/2}^{T/2} y(t) sin(2πkt/T) dt ,  k ≥ 1    ...(3.4)

From the square wave example, each term in the series represents a discrete frequency, and equation (3.4) shows that the frequencies are spaced at intervals of 2π/T.
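The coefficient integrals in equation (3.4) can be evaluated numerically for any signal whose values are available. The sketch below approximates a_k and b_k by midpoint-rule integration over one period; the function name and step count are illustrative choices, not part of the manual.

```python
import math

def fourier_coefficients(y, T, k, n_steps=10000):
    # Approximate a_k and b_k of equation (3.4) by midpoint-rule
    # integration of y(t) over one period, from -T/2 to +T/2
    dt = T / n_steps
    a = b = 0.0
    for i in range(n_steps):
        t = -T / 2 + (i + 0.5) * dt
        a += y(t) * math.cos(2 * math.pi * k * t / T) * dt
        b += y(t) * math.sin(2 * math.pi * k * t / T) * dt
    return 2.0 * a / T, 2.0 * b / T

# A 1 Hz sine wave of amplitude 1, analysed over a 1-second period,
# should give a_1 = 0 and b_1 = 1
a1, b1 = fourier_coefficients(lambda t: math.sin(2 * math.pi * t), T=1.0, k=1)
```

Running the same analysis at k = 2, 3, ... returns coefficients that are effectively zero, confirming that a pure sine wave contains only its own frequency.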


Figure 3.6 Spacing of Fourier coefficients at intervals of 2π/T

If the time T over which the signal repeats becomes larger, the spacing between the discrete frequencies becomes smaller. Real signals, which do not repeat, have an infinitely large value of T, and so the individual coefficients merge into a continuous series, called a Fourier Transform. The derivation of the equations for the Fourier Transform is as follows. The x-axes in Figure 3.7 represent the series 2πk/T.

Let ω_k = 2πk/T

and let the interval, 2π/T, be Δω.

Then equation (3.3) can be written as

k=∞ T 2 2 2πkt  2πkt y(t) = ⌠y(t) cos .dt cos T ⌡ T  T -T ∑ 2  k=1

k=∞ T 2 2 2πkt  2πkt + ⌠y(t) sin .dt sin T ⌡ T  T -T ∑ 2  k=1

or, substituting ω_k for 2πk/T, and Δω for 2π/T,


k=∞ T 2 ∆ω  y(t) = ⌠y(t) cos ω t .dt cos ω t  π ⌡ ()k  ()k -T ∑ 2  k=1

k=∞ T 2 ∆ω  y(t) = ⌠y(t) sin ω t .dt sin ω t  π ⌡ ()k  ()k -T ∑ 2  k=1

As the time over which the signal repeats becomes longer, T → ∞ , ∆ω → dω, and the summations become integrals from ω = 0 to ω = ∞ , so that

ω=∞  ∞  ⌠dω  y(t) =  ⌠y(t) cos()ωt .dt cos ()ωt  π  ⌡  ⌡  -∞  ω=0

ω=∞  ∞  ⌠dω  +  ⌠y(t) sin()ωt .dt sin ()ωt  π  ⌡  ⌡  -∞  ω=0

In order to simplify this expression, let two new terms be introduced :

A(ω) = (1/2π) ∫_{-∞}^{∞} y(t) cos(ωt) dt

B(ω) = (1/2π) ∫_{-∞}^{∞} y(t) sin(ωt) dt    ...(3.5)

then

y(t) = 2 ∫_0^∞ A(ω) cos(ωt) dω + 2 ∫_0^∞ B(ω) sin(ωt) dω    ...(3.6)

Equation (3.6) can be thought of as the definition of the components of the signal y(t), consisting of two 'amplitude' terms, A(ω) and B(ω), which have different values at different frequencies. A(ω) and B(ω) are the Fourier Transforms of y(t), as they represent the components of the signal. Equation (3.6) is the Inverse Fourier Transform of y(t), as it represents the method of reconstructing the signal from its components.


It is usual to introduce an i notation into these equations, where i = √(-1).

If, by definition, e^{iθ} = cos θ + i sin θ, then, also by definition, Y(ω) = A(ω) - i B(ω).

Note that these terms are definitions only.

A(ω) and B(ω) are defined in equation 3.5, so that

Y(ω) = (1/2π) ∫_{-∞}^{∞} y(t) (cos ωt - i sin ωt) dt

or more concisely as

Y(ω) = (1/2π) ∫_{-∞}^{∞} y(t) e^{-iωt} dt

Y(ω) is the Fourier Transform of y(t) in this new notation. It represents the conversion of the signal from the time domain into the frequency domain. It can be shown that

y(t) = ∫_{-∞}^{∞} Y(ω) e^{iωt} dω

is the Inverse Fourier Transform, which represents the reconstruction of the signal from the frequency description.
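The transform pair has a direct discrete analogue that can be tried immediately. The sketch below uses numpy's `fft`/`ifft`, which follow the same e^{-iωt} forward-transform sign convention as above but with a different scaling (no 1/2π factor); the record length is an arbitrary choice.

```python
import numpy as np

# Discrete analogue of the transform pair: forward transform to the
# frequency domain, inverse transform back to the time domain
rng = np.random.default_rng(0)
y = rng.standard_normal(256)

Y = np.fft.fft(y)           # time domain -> frequency domain
y_back = np.fft.ifft(Y)     # frequency domain -> time domain

# The signal is reconstructed from its frequency description
recovered = np.allclose(y, y_back.real)
```

The round trip recovers the original samples to floating-point accuracy, which is the discrete statement that the inverse transform reconstructs the signal from its components.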

3.3 Reference Newland, D E AN INTRODUCTION TO RANDOM VIBRATION AND SPECTRAL ANALYSIS Longman (2nd ed)


4 Power spectral density

4.1 Introduction The power spectral density diagram, or PSD, contains a description of the amplitudes and frequencies present in a signal. In Section 3 it was shown that complex signals could be constructed by adding together sine and cosine waves of various amplitudes, frequencies and phase relationships. Figure 4.1, below, shows a signal constructed from a series of sinewaves.

Figure 4.1 Signal constructed by adding sinewaves

Power spectral density analysis is essentially the reverse of this signal construction process - it determines the amplitudes and frequencies present in a signal. The rms amplitudes of the sinewaves shown in Figure 4.1 may be plotted at the appropriate frequency.

Figure 4.2 RMS amplitudes plotted vs frequency

More complex signals contain a large number of different amplitudes and frequencies, so the graph in Figure 4.2 would be a continuous function. For these signals a series of filters could be used to determine the amplitudes present between any two frequencies, and the rms amplitudes plotted in the centre of each frequency band of the filter. In fact, it is usual to plot the (amplitude)², because, as in electrical power generation, power is proportional to (current)². Also, in order to remove the effect of the width of the filter, the power is divided by the width of the filter to produce a density diagram. The power spectral density diagram is therefore a display of

(rms amplitude)² / frequency    versus    frequency


Figure 4.3 Area under the psd is the variance of the signal

The area under the curve, between any two frequencies, gives an (rms amplitude)2 term, and so power spectral density diagrams show not the 'power' at a given frequency, but the power between two chosen frequencies. As shown in Appendix 1, the average value of (rms amplitude)2 in a signal is the variance, so the area under the PSD between any two frequencies is the variance of the signal between these frequencies.
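The statement that the area under the PSD equals the variance can be verified directly. The sketch below forms a one-sided PSD from a single FFT of a zero-mean record (Parseval's theorem guarantees the equality); the sampling frequency and record length are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                          # sampling frequency in Hz (illustrative)
y = rng.standard_normal(4096)
y = y - y.mean()                    # remove the mean value first

# One-sided PSD of the whole record, in (signal units)^2 / Hz
Y = np.fft.rfft(y)
n = len(y)
psd = (np.abs(Y) ** 2) / (fs * n)
psd[1:-1] *= 2.0                    # fold the negative frequencies in
df = fs / n                         # spacing of the spectral ordinates

area = float(np.sum(psd) * df)      # area under the PSD
variance = float(np.var(y))
```

The area and the variance agree to floating-point accuracy, whatever record length or sampling frequency is chosen.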

4.2 Computer calculation of power spectral density In Section 3 it was shown that Fourier analysis could be used instead of band pass filters to obtain the frequencies and amplitudes present in a signal. The computer calculation of power spectra uses a Fourier analysis. The derivation in Section 3 of the Fourier Transform and Inverse Fourier Transform applies to continuous signals. Sampled signals are not continuous, but are represented by discrete samples. Discrete versions of the Fourier relationships, called Discrete Fourier Transforms (DFT's), are used in the analysis of sampled signals. From section 3, the Fourier transform Y(ω) of a signal y(t) is given by

Y(ω) = (1/2π) ∫_{-∞}^{∞} y(t) (cos ωt - i sin ωt) dt    ...(4.1)

and by definition, Y(ω) = A(ω) - i B(ω).    ...(4.2)

The complex Fourier coefficients A(ω) and B(ω) are calculated from the measured signal. The procedure is to take a length of signal, form the complex Fourier coefficients, and multiply the coefficients by their complex conjugates to produce the (amplitude)² terms. (Multiplying a complex number by its complex conjugate is equivalent to multiplying together the real parts, and adding the product of the imaginary parts.) The computer algorithm used to form the Fourier coefficients is usually a Fast Fourier Transform (FFT) algorithm (Ref 1). It works most efficiently when the length of the signal is a whole power of two data values, i.e. 2^N, where N is a whole number. Typical numbers of samples would be 64, 128, 256 ... 4096, 8192, and so on. The program takes a set of (say) 256 data values, and forms the Fourier coefficients and their complex conjugates. As the next set of 256 data values will produce a slightly different set of coefficients, the program averages each set of coefficients with those already produced. The set of 256 averaged coefficients will produce a description of the amplitudes in the signal at 128 frequencies covering a range from zero to the Nyquist frequency (i.e. from zero to a maximum frequency equal to half the sampling frequency). For a signal sampled at 100 Hz, the maximum frequency contained in the power spectrum will be 50 Hz, and 128 discrete values would be displayed between zero and 50 Hz.

Figure 4.4 shows the PSD of a computer-generated signal. Most of the area of the PSD is concentrated within a fairly narrow band of frequencies, so this signal is a narrow band signal.

Figure 4.4 PSD of a narrow band random signal

A signal which contains many frequencies has a PSD as shown in Figure 4.5, and is a broad band signal.

Figure 4.5 PSD of a broad band random signal
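The section-averaging procedure described above can be sketched in a few lines. The version below uses no window function and no overlap, purely for clarity; the function name and section length are illustrative choices.

```python
import numpy as np

def averaged_psd(y, fs, section_len=256):
    # Average the squared Fourier coefficients over successive sections of
    # section_len (a power of two) samples - no window, no overlap
    n_sections = len(y) // section_len
    acc = np.zeros(section_len // 2 + 1)
    for i in range(n_sections):
        seg = y[i * section_len:(i + 1) * section_len]
        acc += np.abs(np.fft.rfft(seg)) ** 2
    psd = acc / (n_sections * fs * section_len)
    psd[1:-1] *= 2.0
    freqs = np.fft.rfftfreq(section_len, d=1.0 / fs)
    return freqs, psd

rng = np.random.default_rng(2)
fs = 100.0
freqs, psd = averaged_psd(rng.standard_normal(10000), fs)
# 256-point sections give 129 ordinates from 0 Hz to the Nyquist
# frequency, fs/2 = 50 Hz
```

As stated in the text, a 100 Hz sampling rate limits the display to 50 Hz, and the section length fixes the number of spectral ordinates.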

4.3 Window functions The discontinuities at the end of each set of 2^n data points have an effect on the ability of the FFT algorithm to resolve frequencies. A PSD of a single sinewave formed as described above is shown in Figure 4.8. The vertical axis is plotted on a log scale to emphasise small amplitudes. It can be seen that the frequency of the sinewave is correctly determined, but the analysis shows a number of adjacent frequencies which are not really present in the signal. These spurious frequencies are side lobes, and the phenomenon is known as leakage.


The magnitude of the side lobes can be reduced by a technique called spectral windowing. In time domain windowing, each section of the signal is multiplied by a window function before the Fourier coefficients are calculated. For example, a random signal passed through a triangular window function would look as shown in Figure 4.6.

Figure 4.6 An example of a window function

Window functions suppress the side lobes, but at the expense of making the peaks in the spectrum broader. Of the many window functions, two are of particular interest to mechanical engineers. The Hanning window is a cosine function and is shown in Figure 4.7. It is very successful at suppressing spurious side lobes, but produces a rather broad central lobe. It is built into many commercial signal analysers.

Hanning window    10% cosine window

Figure 4.7 Hanning and 10% cosine window functions


A 10% cosine taper is used in some signal processing software for analysis of random signals. It produces a narrower central lobe and therefore is better able to resolve closely adjacent frequencies, but its side lobe suppression is less than that of the Hanning window. If an accurate measure of amplitude is required, the PSD must be corrected for the effect of the window function by scaling the vertical axis of the PSD. Typical factors are 1/0.375 for the Hanning window, and 1/0.875 for the 10% cosine taper. These will usually be built into analysis software. Figure 4.8 shows the effect of a 10% cosine taper on the PSD of a 10 Hz sine wave. Note that the vertical axis is a log axis, covering a range from 0.1 to 10^5. When plotted on a linear vertical axis the effect is much less dramatic (Figure 4.9).

No window 10% cosine taper

Figure 4.8 Effect of windowing on the psd of a 10 Hz sine wave (log axes)

No window 10% cosine taper

Figure 4.9 Effect of windowing on the psd of a 10 Hz sine wave (linear axes)
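The origin of the 1/0.375 correction factor quoted above can be checked directly: it is the reciprocal of the mean square of the Hanning window, which tends to 0.375 for long windows. A minimal check, using numpy's standard Hanning window:

```python
import numpy as np

# The mean square of the Hanning window tends to 0.375 as the window
# gets longer, which is where the 1/0.375 power correction comes from
w = np.hanning(4096)
mean_square = float(np.mean(w ** 2))
```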

Figure 4.10 shows the effect of the 10% cosine taper on the PSD of a narrow band random signal.


No window 10% cosine taper

Figure 4.10 Effect of windowing on the PSD of a narrow band gaussian signal (log axes)

Using computer software to calculate a PSD, the length of the data sections, 2^n, must be selected. A large value of 2^n means that the frequencies are closely spaced, so the peaks in the PSD are narrow. A smaller value of 2^n produces lower, broader peaks with less ability to resolve closely adjacent frequencies. Figures 4.11 and 4.12 show the effect of the length of data section on the PSD of a constant amplitude sine wave, plotted with linear and log axes.

2^n = 2048    2^n = 512    2^n = 128

Figure 4.11 effect of window length on the psd of a 10 Hz sine wave (linear axes)


2^n = 2048    2^n = 512    2^n = 128

Figure 4.12 Effect of window length on the PSD of a 10 Hz sine wave (log axes)

The length of the data sections determines the number of such sections used to obtain the average values of the ordinates of the PSD. This may be important for short lengths of signal. If the mean value of the estimate of any ordinate in the PSD is µ, and the standard deviation of this estimate is σ, then the ratio σ/µ is related to the number of data sections, L:

σ/µ = 1/√L

The variation in the estimates of any spectral ordinate forms a χ²_k distribution, and for this distribution the ratio

σ/µ = √(2/k) = 1/√L

so the number of degrees of freedom is

k = 2L

The confidence limits for the estimates of the ordinates in the PSD can be calculated from standard tables. For example, if a signal contains 40 000 data points, and the sections are of length 2^n = 2048, then approximately 20 sections of data will be analysed. The number of degrees of freedom is k = 2L = 40, and from tables of the χ²_k distribution, there is 90% confidence that the true value will lie between 0.7 and 1.5 times the value obtained in the analysis. Irrespective of the length of the data sections of 2^n data points, the area under the PSD should be the same, as this represents the variance of the signal. Figure 4.13 shows the integral of the three PSDs illustrated in Figure 4.11. It can be seen that the final value obtained for the integral is almost identical in each case.
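The worked example above is easily reproduced. Note that integer division of 40 000 points into 2048-point sections gives 19 complete sections, consistent with the "approximately 20" in the text; the variable names below are illustrative.

```python
import math

n_points = 40000                # length of the measured signal
section_len = 2 ** 11           # 2048 data points per section

L = n_points // section_len     # complete sections analysed (about 20)
k = 2 * L                       # degrees of freedom of the chi-squared distribution
scatter = 1.0 / math.sqrt(L)    # sigma/mu of each spectral ordinate estimate
```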


Figure 4.13 Integral of the psd for different window sizes

Two final comments should be made before ending this section on the computer calculation of PSDs. It is usual for each section of the signal to overlap the preceding section. The 10% cosine taper would use a 10% overlap, the Hanning taper would use a 50% overlap, and so on.

A signal with a non-zero mean value would produce a very large peak in the PSD at zero Hz. This may dominate the display of the PSD, and may hinder mathematical interpretation. The zero Hz component may be eliminated by subtracting the mean of the signal from each data point before analysis. This process is sometimes called normalisation.

To summarise Sections 4.1 - 4.3:

The maximum frequency that can be displayed in the PSD is equal to the Nyquist frequency of the signal, i.e. half the sampling rate used for the signal.

The resolution of the PSD is determined by the number of data values used for each spectral estimate. For example, sections of 256 data values will produce 128 Fourier coefficients, and therefore 128 points on the PSD display. The fineness of peaks and the resolution of detail is determined by the length of these data sections. However, the statistical validity of the results is determined by the number of data sections used to provide the average spectral estimate. For short signals, therefore, there is a trade-off between spectral resolution and statistical accuracy, and for all signals the computer analysis will be quicker if short data sections are used. A fine resolution will produce higher, sharper peaks in the display, but maintain more or less the same area under the peak.

The units of the vertical axis are (signal units)²/Hz. This emphasises that the display is a density diagram, and the relevant information is obtained from the area under the diagram, not from the vertical-axis ordinates themselves. The original method of estimating PSDs using band pass filters emphasises this point. So also do displays at differing spectral resolutions.

4.4 Calculation of signal properties from the PSD A number of signal properties can be calculated by taking moments of area of the PSD about the zero Hz axis.


n mn = ⌡⌠Ax Figure 4.14 Definition of moments of the PSD

Integrating the PSD provides the variance of the signal, and taking its square root gives the rms of the signal:

σ = rms = (m_0)^{1/2}

The number of positive slope zero crossings per unit time is

λ_0 = (m_2 / m_0)^{1/2}

where m_n denotes the n-th moment. The number of peaks per unit time is

µ = (m_4 / m_2)^{1/2}

The irregularity factor of a signal is defined as the number of positive slope zero crossings divided by the number of peaks. It is calculated from the PSD using the above equations. Section 3 briefly touched on the fact that for a Fourier analysis to be valid, the signal must obey certain statistical rules. A PSD program will produce a PSD whether the signal obeys these rules or not. It is then the responsibility of the user to ensure that the analysis is valid.
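The moment calculations above are straightforward to implement. The sketch below evaluates the moments by a simple rectangle-rule sum over an illustrative narrow band PSD (a hypothetical peak near 10 Hz); the function name and example spectrum are assumptions for the demonstration.

```python
import numpy as np

def spectral_moments(freqs, psd, orders=(0, 2, 4)):
    # Moments m_n of the PSD about the zero-Hz axis (rectangle rule,
    # uniform frequency spacing assumed)
    df = freqs[1] - freqs[0]
    return {n: float(np.sum(psd * freqs ** n) * df) for n in orders}

# Illustrative narrow band PSD: power concentrated close to 10 Hz
freqs = np.linspace(0.0, 50.0, 501)
psd = np.exp(-0.5 * ((freqs - 10.0) / 0.5) ** 2)

m = spectral_moments(freqs, psd)
rms = np.sqrt(m[0])
zero_crossings = np.sqrt(m[2] / m[0])   # positive slope zero crossings per second
peaks = np.sqrt(m[4] / m[2])            # peaks per second
irregularity = zero_crossings / peaks   # close to 1 for a narrow band signal
```

For this narrow band example the crossing rate is close to the 10 Hz centre frequency and the irregularity factor is just under 1, as expected for a narrow band signal.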

For the PSD to be valid, the signal should be:

RANDOM: the probability of the occurrence of a given value can be estimated, but the actual value cannot be predicted.

STATIONARY: the statistical properties of the signal (the mean, standard deviation, etc) do not vary with absolute time, in other words different sections of the signal should produce the same estimates of mean, standard deviation, etc


ERGODIC: the statistical estimates made within a section of a signal are the same as those made from the signal as a whole, and, from the condition of stationarity, are the same as those made from any other section of the signal.

Real measured signals start and end, so they are never strictly stationary or ergodic; it is sufficient for the signal to obey the rules within reasonable limits.

4.5 References Of the many books on signal processing, the following is very much oriented to mechanical engineers and practical applications.

D Brook and R J Wynne SIGNAL PROCESSING - PRINCIPLES AND APPLICATIONS Edward Arnold paperback 1988

Also recommended, as in Section 3, is Newland, D E AN INTRODUCTION TO RANDOM VIBRATION AND SPECTRAL ANALYSIS Longman (2nd ed)


5 Frequency response functions

5.1 Introduction In Section 2 the characteristics of digital filters were displayed as gain and phase diagrams. The graphs were obtained by comparing the output from the filter with the input. These diagrams form part of the frequency response function of the filter. Frequency response functions, or transfer functions, have a wide range of uses in mechanical engineering. For example, if measurements are made of the forces into a vehicle suspension, and the response of the vehicle is measured at some point on the body, then a frequency response analysis can be used to show the extent to which various frequencies are damped by the suspension, and which frequencies, if any, are amplified in the body. The process of determining frequency response characteristics by comparing the input and output of a system is called frequency response function analysis.

Consider a mechanical system such as a suspension, subjected to a single time-varying input x(t), and producing a time-varying output or response y(t). Many such systems can be described approximately by a linear differential equation of the form

a_0 y + a_1 (dy/dt) + ... + a_{n-1} (d^{n-1}y/dt^{n-1}) + a_n (d^n y/dt^n)

    = b_0 x + b_1 (dx/dt) + ... + b_{m-1} (d^{m-1}x/dt^{m-1}) + b_m (d^m x/dt^m)

and are called linear systems. This section describes linear frequency response function analysis. A system subject to a single input x(t) and a single output y(t) is shown below.

Figure 5.1 Mechanical system with a single input and a single output

If x(t) and y(t) are sine waves, then

x(t) = x0 sin ωt

and y(t) = y0 sin (ωt - φ)

where x0 is the amplitude of the input sine wave

y0 is the amplitude of the output sine wave, and φ denotes a phase change between the input and output. From the description of digital filters, the gain of the system is y0/x0, and shows the extent to which the system increases or reduces the amplitude of the input. The phase angle φ is the phase difference between the output and input, and shows the time delay which the system applies to the input.


The gain and the phase change are two aspects of the frequency response function, H(ω), which is defined so that the output signal y(t) can be obtained from the input signal x(t) using

y(t) = H(ω) · x(t)    ...(5.1)

For mathematical convenience, H(ω) is expressed as a complex function, described by its real and imaginary parts,

H(ω) = A(ω) + i B(ω) , where i = √(-1)    ...(5.2)

The frequency response function H is written as H(ω) to indicate that H can have different values at different values of frequency ω. The absolute value of H(ω) is equal to √(A² + B²). The frequency response function H(ω) is defined so that the gain of the system is made equal to the absolute value of H(ω), i.e.

gain = √(A² + B²)    ...(5.3)

The frequency response function is also defined so that the phase change of the system is given by :

tan φ = B / A    ...(5.4)

5.2 Calculation of frequency response function Section 3 showed that complex signals can be considered as the summation of a number of sine waves. The frequency response function for complex signals could be calculated by determining the gain and phase change for each sine wave separately, and super-imposing the results. Superimposition is possible because linear systems are being considered, but the analysis would be laborious. Fourier analysis is a more efficient method of analysing complex signals, and it has been shown that real measured signals, which do not repeat at regular intervals, can be analysed by averaging the spectral estimates from a number of shorter sections of the signal. The PSD shows the way in which the amplitudes in a signal vary with frequency, and so PSD's could be formed of both the input signal x(t) and the response signal y(t). The cross spectral density, or CSD, of y(t) with respect to x(t) describes the extent to which common frequencies exist between the two signals, and also describes the phase relationship between the signals at different frequencies. The frequency response function as a function of frequency, H(ω), can be obtained by the relationship

frequency response function = (CSD of y(t) and x(t)) / (PSD of x(t))

As the CSD is a complex function, the frequency response function H(ω) is also complex, and as such is not of direct use. However a number of real functions can be obtained from the frequency response function. It is important to note that no conclusions can be drawn concerning frequencies which are not present in the input signal. This may seem an obvious comment, but the displays of gain, phase change, and coherence will not provide any indication that certain frequencies were not present. Reference should always be made to the PSD of the input signal when interpreting frequency response diagrams.
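The CSD/PSD relationship above can be sketched directly from section-averaged spectra. The function name and the trivial test system (one that simply halves its input) are illustrative assumptions; no window or overlap is applied, for clarity.

```python
import numpy as np

def frequency_response(x, y, section_len=256):
    # H(w) = (averaged CSD of y and x) / (averaged PSD of x)
    n_sections = len(x) // section_len
    sxx = np.zeros(section_len // 2 + 1)
    sxy = np.zeros(section_len // 2 + 1, dtype=complex)
    for i in range(n_sections):
        X = np.fft.rfft(x[i * section_len:(i + 1) * section_len])
        Y = np.fft.rfft(y[i * section_len:(i + 1) * section_len])
        sxx += (np.conj(X) * X).real
        sxy += np.conj(X) * Y
    return sxy / sxx

# A system that simply halves its input should show a gain of 0.5 at
# every frequency present in the input
rng = np.random.default_rng(3)
x = rng.standard_normal(8192)
H = frequency_response(x, 0.5 * x)
gain = np.abs(H)
```

A broad band random input is used deliberately: as noted above, no conclusions can be drawn at frequencies absent from the input.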

5.3 Gain The gain of the system is the absolute value of the frequency response function

i.e gain = |H(ω)|


Figure 5.2 Gain diagram for a system resonating at 10 Hz

From (5.2) and (5.3),

gain(ω) = ( Re² H(ω) + Im² H(ω) )^{0.5}

where Re denotes the real part of the function, and Im denotes the imaginary part. The gain will have a value greater than unity at those frequencies for which the amplitudes are increased, and less than unity when the amplitudes are reduced.

5.4 Phase change The phase change between the output and input, φ(ω), from (5.4), can be obtained from

φ(ω) = tan⁻¹ ( Im H(ω) / Re H(ω) )

Phase angle is usually displayed with the convention that an increasing phase lag is denoted by a positive slope of φ versus frequency.

Figure 5.3 Phase diagram with non-linear characteristics

Section 2 describes phase diagrams and their interpretation more fully.


5.5 Coherence A third aspect of the frequency response function is the coherence function. It shows the extent to which the output is really produced from a single input by a system described by a linear differential equation. A coherence value of 1 between two chosen frequencies shows that between these frequencies the system was behaving as a linear system, and the output between these frequencies could be produced by a linear response to the input. Figures 5.4 and 5.5 show a simple band-pass Butterworth filter system, and the coherence function obtained by a frequency response function analysis of the unfiltered input signal and the filtered output signal.

Figure 5.4 Simple Butterworth filter system

The coherence is equal to unity, indicating that the output signal was produced only from the input signal, and that the filter can be described by a linear differential equation.

Figure 5.5 Coherence diagram for a band-pass Butterworth filter

A coherence value of less than unity could be the result of noise introduced by the system, or it could indicate that the output signal was not the response to the input signal alone, but other inputs were also contributing to the output signal. For the system in Figure 5.6, white noise of small amplitude is being added to the filtered signal.


Figure 5.6 Filter system with added noise

The coherence diagram now shows values of less than unity, particularly at the lower and higher frequencies where the amplitudes in the original filtered signal were also small. In this case, the coherence diagram indicates that the final output signal was not produced only by filtering a single input signal.

Figure 5.7 Coherence diagram for the system shown in Figure 5.6
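The behaviour described above can be reproduced with the standard magnitude-squared coherence estimate, |Sxy|² / (Sxx · Syy), averaged over sections. The formula is not stated explicitly in the text and is introduced here as the usual definition; the test systems (a doubling of the input, with and without added noise) are illustrative.

```python
import numpy as np

def coherence(x, y, section_len=256):
    # gamma^2 = |Sxy|^2 / (Sxx * Syy), averaged over sections of the signals
    n_sections = len(x) // section_len
    nb = section_len // 2 + 1
    sxx, syy = np.zeros(nb), np.zeros(nb)
    sxy = np.zeros(nb, dtype=complex)
    for i in range(n_sections):
        X = np.fft.rfft(x[i * section_len:(i + 1) * section_len])
        Y = np.fft.rfft(y[i * section_len:(i + 1) * section_len])
        sxx += (np.conj(X) * X).real
        syy += (np.conj(Y) * Y).real
        sxy += np.conj(X) * Y
    return np.abs(sxy) ** 2 / (sxx * syy)

rng = np.random.default_rng(4)
x = rng.standard_normal(16384)
noise = rng.standard_normal(16384)

coh_clean = coherence(x, 2.0 * x)          # linear system, no added noise
coh_noisy = coherence(x, 2.0 * x + noise)  # same system with added noise
```

The noiseless linear system gives a coherence of exactly 1 at every frequency; adding noise pulls the values below unity, as in Figure 5.7.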


6 Time-at-level & probability density

6.1 Introduction The time-at-level and probability density analyses are used to determine the amplitude content of a signal. This is sometimes referred to as time-domain analysis. Signal statistics may also be calculated from these analysis methods.

6.2 Time-at-level analysis The time-at-level analysis determines the amount of time a signal spends between any two values. Figure 6.1 shows a section of a random signal, with its amplitude divided into a number of horizontal bars, or levels. The interval between adjacent levels is often called a bin.

Figure 6.1 Amplitude levels, or 'bins'

The time taken by the signal to cross a bin is ∆t, and if the time intervals ∆t are calculated for the complete signal the result will be the total time the signal spends in this bin.

Figure 6.2 Summation of time intervals ∆t for one bin

A similar analysis can be carried out for each bin, and the result may be plotted as a time-at-level histogram, where the length of each bar in the histogram shows the time spent in that bin.


Signal Time-at-level

Figure 6.3 Time-at-level histogram

Time-at-level histograms are dependent on the number of bins used to form them. If the signal amplitudes had been divided into twice as many bands, then the amount of time spent in each band would be about halved, and the bars in the histogram would be only half as long. This would make comparison of time-at-level results for different signals, analysed with different numbers of bins, quite difficult. The histogram is therefore converted into a density diagram by dividing the ordinate of each element in the histogram by the bin width. The histogram is then independent of the bin width used for the analysis, and it is valid to plot the histogram as a continuous function.

Figure 6.4 Time-at-level density diagram

Turning the diagram through 90° produces a conventional time-at-level diagram. Because it is a density diagram, the area under the curve between any two limits represents the time the signal spent between these two limits - in Figure 6.5 it represents the time the signal spent between two given temperatures.



Figure 6.5 Time-at-level density diagram

The total area under the curve represents the total time for the signal, and can be obtained by integrating the time-at-level density diagram.

Time-at-level Integrated time-at-level

Figure 6.6 Time-at-level density diagram and its integral

In the right hand diagram in Figure 6.6, the vertical axis shows the total time the signal spends below any given value. A similar diagram, produced by integrating the time-at-level diagram from its right hand side, would show the total time the signal spends above any given value. The time-at-level density diagram may also be used to obtain signal statistics, by taking moments.


M_n = ∫ (ΔA · x^n)

Figure 6.7 Moments of the time-at-level density diagram

The mean value of the signal is given by the centroid of the area under the curve

mean = ∫ (ΔA · x) / ∫ ΔA

The standard deviation of the signal, σ, is calculated from the second moment of the time-at-level density distribution about the mean:

σ² = ∫ (ΔA · x²) / ∫ ΔA

where x is measured from the mean.

As the higher moments give increasing emphasis to the extreme values in the distribution (and in the signal) they are used in extreme value prediction theory. If the time-at-level density distribution is symmetrical about the mean, the odd-numbered moments will be zero. The third moment defines how unsymmetrical, or skewed to one side, the distribution is, and is called the skewness.
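The moment calculations above can be sketched numerically. The example below builds a hypothetical time-at-level histogram from a Gaussian signal of known mean and standard deviation (illustrative values), then recovers both from the first and second moments; the second moment is taken about the mean, as in the corrected formula for σ.

```python
import numpy as np

# Hypothetical example: a Gaussian signal with mean 2 and standard
# deviation 3, reduced to a time-at-level histogram
rng = np.random.default_rng(5)
signal = rng.normal(loc=2.0, scale=3.0, size=100000)

counts, edges = np.histogram(signal, bins=100)
x = 0.5 * (edges[:-1] + edges[1:])   # bin centres
dA = counts.astype(float)            # proportional to the time in each bin

mean = float(np.sum(dA * x) / np.sum(dA))                           # centroid
sigma = float(np.sqrt(np.sum(dA * (x - mean) ** 2) / np.sum(dA)))   # about the mean
```

The recovered mean and standard deviation agree closely with the known values of the underlying signal.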

6.3 Probability density The area under the time-at-level density distribution represents the total signal time. Dividing the distribution by this time will produce an area under the distribution of unity, and the area under the distribution between any two values will now be a proportion of unity. A proportion of unity is a probability, so the distribution is now a probability density distribution, or PDD. Its integral from the left hand end is the probability distribution function.


Probability density diagram (PDD) Integrated PDD

Figure 6.8 Probability density diagram

If the PDD is representative of a stationary signal (one where the statistical properties do not change with time) then it may be used to estimate the probability of a future value in the signal falling between any two values. This is important for random signals, where it is not possible to predict the value of the signal at a particular instant of time, but only to estimate the probability that at some time, t, the signal will fall between two specified limits.

Figure 6.9 Area under the pdd represents probability

6.4 Computer calculation Computer calculation of the time-at-level density diagram follows the process outlined above. The signal is divided into a number of bands or bins, and the length of time the signal spends in each band is calculated by summing the time intervals ∆t defined in Figure 6.2. These total times are then divided by the bin width to produce a time-at-level density diagram. If a probability density diagram is required, the ordinates are also divided by the total signal time. For long signals, and if a constant sample frequency is used to digitise the signal, a reasonable approximation can be obtained simply by counting the number of samples which fall within each bin. For shorter signals, or if a large number of bins is being used, or if the sample frequency is low compared to the signal frequency, it may be necessary to use linear or polynomial interpolation between samples.
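The sample-counting approximation described above can be sketched as follows (an illustrative Python fragment with hypothetical names, not production code; it omits the interpolation refinement):

```python
def time_at_level_density(samples, sample_rate, bin_edges):
    """Approximate time-at-level density for a long, constant-rate signal
    by counting the samples falling within each bin, converting counts
    to time, and dividing by the bin width."""
    dt = 1.0 / sample_rate  # time represented by each sample
    density = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        time_in_bin = sum(1 for s in samples if lo <= s < hi) * dt
        density.append(time_in_bin / (hi - lo))
    return density
```

Dividing each ordinate additionally by the total signal time would turn this into a probability density diagram.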

6.5 Interpretation of results One important class of signals has a PDD defined by the equation

p(y) = (1/σ√2π) e^(-(y-m)²/2σ²)


where m is the mean value of the signal, σ is the standard deviation, and y(t) is the signal.

This produces the well-known bell-shaped PDD shown in Figure 6.10; such processes are called Gaussian or normal processes. Many natural phenomena, such as sea wave height and wind speed, are approximately Gaussian over suitably chosen time periods. For these and other approximately Gaussian processes, once the mean and standard deviation have been calculated, the Gaussian expression can be used to estimate the probability of future events in the signal.

Figure 6.10 Gaussian probability density distribution

For this distribution, standard tables show the probability of a data sample exceeding a particular value, i.e.

Figure 6.11 Probability of a sample exceeding a value

prob (y > Y) = ∫[Y,∞] p(y) dy

= ∫[Y,∞] (1/σ√2π) e^(-(y-m)²/2σ²) dy


= ∫[Z,∞] (1/√2π) e^(-z²/2) dz

= I(Z)   where Z = (Y - m)/σ

The integral I(Z) is tabulated in standard tables. Care must be taken in interpreting these tables to ensure that the correct distribution is being used - in the case above, to ensure that the distribution is y > Y, not y outside the range -Y to +Y. From values given in the tables, it is possible to calculate the probability of a data sample being between two limits, i.e. -Y < y < +Y.
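Where a programming language is available, the tabulated integral I(Z) can be evaluated directly from the complementary error function instead of standard tables. The following Python sketch (function names are hypothetical) computes both the probability of exceeding Y and the probability of lying between -Y and +Y:

```python
import math

def prob_exceeds(Y, mean, sigma):
    """I(Z): probability that a sample of a Gaussian signal exceeds Y,
    with Z = (Y - mean)/sigma, via the complementary error function."""
    Z = (Y - mean) / sigma
    return 0.5 * math.erfc(Z / math.sqrt(2.0))

def prob_between(Y, mean, sigma):
    """Probability that a sample lies in -Y < y < +Y, obtained from
    the two tail probabilities: P(y > -Y) - P(y > Y)."""
    return prob_exceeds(-Y, mean, sigma) - prob_exceeds(Y, mean, sigma)
```

For a zero-mean signal with unit standard deviation, prob_exceeds(1.96, 0, 1) reproduces the familiar 2.5% tail probability from the tables.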

Figure 6.12 Probability of occurrence for a Gaussian PDD

One final commonly encountered PDD, for a constant amplitude sine wave, is shown in Figure 6.13.

Probability density distribution Probability density function

Figure 6.13 Probability density diagrams for a sine wave


7 Amplitude analysis

7.1 Introduction Measurements of service histories - loads, strains, accelerations - are required so that general information on service loading can be obtained. For many applications, service histories are measured so that the fatigue life of specific components can be determined. In these instances the service histories must be analysed using methods which are relevant for fatigue analysis. Modern signal processing uses a cycle counting algorithm to extract these cycles quickly and accurately. However, cycle counting is a recent development, possible only with modern computer systems and solid-state electronics, and many other, less rigorous methods developed in earlier days are still encountered.

Figure 7.1 Some early methods of data presentation

Figure 7.1 shows two methods that have been used in the past in an attempt to present service loading in a way which was both concise and relevant for fatigue calculation. Some of these early methods will be described later in this chapter.

7.2 Rainflow cycle counting Fatigue cycles are closed stress-strain hysteresis loops.

Figure 7.2 A cycle is a closed stress-strain hysteresis loop


However, Rainflow cycle counting can be applied to signals which are not strain histories, in order to obtain a concise description of the signal in terms which are relevant for fatigue, but not directly related to fatigue damage. This section develops an algorithm for Rainflow cycle counting by considering stress-strain hysteresis loops, but the final algorithm is then generally applicable to measured signals which are not strain signals. The closure of stress-strain hysteresis loops is quite complex, in that the loop tips can be formed from points in the signal which are widely separated by intermediate points. An algorithm is required which correctly determines the cycles present in a signal. As the tips of the cycle are formed by a peak and a valley in the signal, the intermediate data points between the peaks and valleys need not be considered. Consider the following sequence of peaks/valleys. The notation uses point A as the most recent data point, point B as the previous point and so on.

Figure 7.3

Because the range from A to B is greater than the range from B to C, a cycle is closed, and is represented by the range B to C. One criterion for cycle closure is then that the range between the two most recent peak/valleys must be greater than the preceding range, ie

range A to B > range B to C

Consider now the following sequences.

Figure 7.4

As before, the range from A to B is greater than that from B to C, so B to C is one cycle. However, the hysteresis loops show that another cycle has also been formed, with its tips at points D and E. This is closed because the range from A to D is greater than the range from D to E. This becomes clearer if, after closing the first cycle, from B to C, we then re-draw the signal as though this cycle had never existed.


Figure 7.5

From this example it is clear that the most recent data point can cause more than one cycle to close. The simple criterion developed from the first example, that a cycle is formed if

range A to B > range from B to C

can now be extended to

a cycle is formed if range A to B > range B to C: extract the cycle, re-label the data points, and test again whether

a cycle is formed if range A to B > range B to C

and carry on this procedure until no further cycle is closed.

This is the basis of the cycle counting method published by Socie and Downing, and now used in most cycle counting programs. An example of the Fortran code for the algorithm is shown in Figure 7.6. It maintains a data array, or buffer, of peak/valleys which represents unclosed cycles.

      INTEGER BUFFER(4096), INDEX, VALUE, RANGE, MEAN, X, Y

      INDEX = 0
   10 CONTINUE
      call 'get next peak/valley', VALUE
      INDEX = INDEX + 1
      BUFFER(INDEX) = VALUE
   20 CONTINUE
      IF (INDEX.LT.3) THEN
c --     not enough points to form a cycle
         GOTO 10
      ELSE
         X = ABS(BUFFER(INDEX) - BUFFER(INDEX-1))
         Y = ABS(BUFFER(INDEX-1) - BUFFER(INDEX-2))
         IF (X.GE.Y) THEN
c --        cycle has been closed
            RANGE = Y
            MEAN = (BUFFER(INDEX-1) + BUFFER(INDEX-2))/2
c --        remove the cycle
            INDEX = INDEX - 2
            BUFFER(INDEX) = BUFFER(INDEX+2)
c --        see if this value closes any more cycles
            GOTO 20
         ELSE
            GOTO 10
         END IF
      END IF

Figure 7.6 Simple rainflow cycle algorithm

Two comments can be made. Firstly, as the signal usually comes from an A/D converter as integers, it is quicker for the processor to work with these integers. The equality test in (X.GE.Y) also works more reliably with integers. Secondly, the 'ABS' (absolute) function can be eliminated with a little thought, and other changes will also speed up the code.

The analysis is known almost universally as Rainflow cycle counting, or simply as Rainflow. The term Rainflow derives from an earlier algorithm, proposed by Endo and co-workers. The signal was turned through 90°, and rain was envisaged as falling on the signal and dropping from surface to surface. Various rules were proposed for what happened to the rain, and the resulting algorithm correctly extracted each half-cycle, which eventually paired with another half-cycle to make a complete cycle. The Rainflow method was a most important development at the time, because it provided a genuine method of extracting real fatigue cycles. It has since been replaced by the Socie-Downing algorithm. It is however interesting to note that the paper by Endo et al (Ref 1) includes not only the Rainflow algorithm but another algorithm more closely related to hysteresis loop closure, providing the same result in terms of cycles counted. Other methods, such as reservoir and racetrack, have been proposed, which can be used for analysis of very short signals by hand.

The Rainflow algorithm can now be applied to a short length of signal, and the cycles extracted simply by comparing successive ranges. The signal could be any measured parameter, not necessarily a strain signal.
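For comparison with the Fortran of Figure 7.6, the same three-point test can be expressed in Python. This is an illustrative sketch, not the published Socie-Downing code; it assumes the peak/valley sequence has already been rearranged to start at the absolute maximum point:

```python
def rainflow(peaks_valleys):
    """Three-point Rainflow count: a cycle closes whenever the range
    between the two most recent points equals or exceeds the preceding
    range.  Closed cycles are returned as (range, mean) pairs."""
    buffer, cycles = [], []
    for value in peaks_valleys:
        buffer.append(value)
        # one new point may close several cycles, so keep testing
        while len(buffer) >= 3:
            x = abs(buffer[-1] - buffer[-2])   # range A to B
            y = abs(buffer[-2] - buffer[-3])   # range B to C
            if x < y:
                break                          # no cycle closed
            cycles.append((y, (buffer[-2] + buffer[-3]) / 2))
            buffer[-3:] = [buffer[-1]]         # remove the cycle's two points
    return cycles
```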

The signal starts at point A (the absolute maximum point in the signal). The first three points are A, B and C. Point D is the first point to consider. The range CD is greater than BC, so a cycle is closed. This is the cycle of range BC. Because the cycle was closed by a downward-going excursion, it is sometimes drawn as shown below. This shows more clearly that the cycle closed because CD was greater than BC.


The signal can now be redrawn with this cycle removed:

Taking each range in turn:

DE is less than AD, so no cycle is closed.
EF is less than DE, so no cycle is closed.
FG is less than EF, so no cycle is closed.
GH is less than FG, so no cycle is closed.
HI is less than GH, so no cycle is closed.
IJ is less than HI, so no cycle is closed.

JK is greater than IJ, so a cycle is closed. This is the cycle labelled 2, and has a range from I to J.

If cycle 2 is now removed:

then the range HK is also greater than GH, so point K closes another cycle, labelled 3. Point K has therefore closed two cycles. Cycle 3 can also be removed.


The range KL is less than FK, so no cycle is closed. The range LM is greater than KL, so a cycle is closed. This has a range from K to L, and is labelled cycle 4. Removing this cycle leaves the following data points:

FM is greater than EF, so a cycle is closed with range EF, labelled 5. If this cycle is removed only three points remain, A, D and M, and these form the final cycle. The six cycles extracted are summarised below:

The cycles close in the order in which they are numbered, and the cycles have the following ranges:

cycle number    cycle range
     1             B-C
     2             I-J
     3             G-H
     4             K-L
     5             E-F
     6             A-D (= D-M)

The results of the analysis can be displayed as a histogram or distribution of cycle ranges.


Figure 7.7 Histogram of cycle ranges

If the range histogram is normalised by dividing by the histogram bin size, a cycle density diagram is produced, where the area between any two range values represents the number of cycles with ranges between these two values.

Figure 7.8 Cycle density diagram

A major advantage of a cycle density diagram is that it is independent of the bin width used in the analysis. It may therefore be used to compare cycle distributions obtained at different times and by different programs. The cycle range distribution may also be integrated from the right hand side. This produces a cycle exceedence diagram, where the vertical axis shows the number of cycles which exceed a given range.
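The integration from the right-hand side is a simple reversed cumulative sum, sketched below in Python (names hypothetical):

```python
def cycle_exceedence(range_histogram):
    """Integrate a cycle range histogram from its right-hand side, so
    that each ordinate becomes the number of cycles whose range equals
    or exceeds that bin's range."""
    total, out = 0, []
    for count in reversed(range_histogram):
        total += count
        out.append(total)
    return out[::-1]  # restore ascending-range order
```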

Figure 7.9 Cycle exceedence diagram


The cycle range and mean may also be used to form a range-mean histogram of cycle distribution.

Figure 7.10 Definition of cycle range and mean

Figure 7.11 Cycle range-mean histogram

The simple Fortran code given earlier works only if the analysis can start at the absolute maximum (numerically greatest) point in the signal. It therefore requires that the complete signal is available and can be searched to determine the position of the absolute maximum data point. The analysis starts at this point, continues to the end of the signal, returns to the first point in the signal, and finishes at the absolute maximum again. (The largest cycle is formed by the signal maximum and minimum). This type of cycle counting is referred to as off-line analysis, and so the algorithm given in Figure 7.6 is an off-line Rainflow algorithm, because the complete signal must be captured before analysis can be attempted. Socie and Downing modified the simple algorithm in order to allow it to be used in on-line signal processing. If the first few data points in a signal are as shown below, then a number of cycles can be closed.


The two points X and Y, however, cannot be analysed, because the rest of the signal is not yet known. Consider the following possibilities.

In the first case, X and Y are the maximum and minimum values in the complete signal, and form the largest cycle. In the second case, X is the minimum value but a larger maximum, Z, occurs after Y, so X and Z form a cycle. The simple method must therefore be modified so that decisions on some cycles are postponed until the end of the signal. Each time a new maximum value occurs, the buffer of the unclosed data points in Figure 7.6 is blocked at the minimum value from further consideration, and a new minimum value blocks the data buffer at the previous maximum value. At the end of the signal, the data buffer of unclosed cycles takes the form shown below, and can be analysed using the simple algorithm to close the remaining cycles.

Figure 7.12 Buffer of unclosed cycles

The detailed rules for this method are given in Ref 2. Real signals contain large numbers of very small cycles. These will probably contribute little to the fatigue damage content of the signal, but will slow down the analysis, and distort the scaling of graphical displays. It is usual to set a hysteresis gate level which excludes very small cycles from the analysis, and a gate level of (say) 0.2% or 0.5% of the maximum anticipated signal range is often an appropriate value.
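A gate applied to the counted cycles can be sketched as below (illustrative Python with hypothetical names; note that practical analysers apply the hysteresis gate to the peak/valley stream before counting, which is not quite the same thing):

```python
def apply_gate(cycles, full_scale_range, gate_fraction=0.005):
    """Discard counted cycles whose range falls below a hysteresis gate,
    expressed as a fraction (here 0.5%) of the maximum anticipated
    signal range."""
    gate = gate_fraction * full_scale_range
    return [(rng, mean) for rng, mean in cycles if rng >= gate]
```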

7.3 Level crossing analysis Level crossing analysis is intended to provide an indication of the amplitudes present in a signal. Perhaps its greatest strength is that the results can be post-processed to provide information on the peaks and valleys in the signal.


It is an easy algorithm to program, and was one of the earliest techniques to be provided in mechanical signal analysers. The 'g' meter for measuring aircraft load spectra was a level crossing device. The amplitude of the signal is divided into a number of levels, or bins. The boundary between adjacent bins is a threshold. A count is made of the number of times the signal crosses each threshold. In DIN 45667, level crossing analysis is defined as 'a count of all positive slope threshold crossings when the signal is greater than zero, and a count of all negative slope threshold crossings when the signal is less than zero'. Most analogue to digital converters convert a signal into positive integers, so the algorithm can be implemented simply as a count of all positive slope threshold crossings.
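The DIN 45667 definition can be sketched directly in Python (an illustrative fragment with hypothetical names; no reset level is applied):

```python
def level_crossings(samples, thresholds):
    """DIN 45667 level crossing count: positive-slope crossings for
    thresholds at or above zero, negative-slope crossings for
    thresholds below zero."""
    counts = {}
    for level in thresholds:
        n = 0
        for a, b in zip(samples[:-1], samples[1:]):
            if level >= 0 and a < level <= b:      # positive slope crossing
                n += 1
            elif level < 0 and a > level >= b:     # negative slope crossing
                n += 1
        counts[level] = n
    return counts
```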

Figure 7.13 Level crossing counting

Small fluctuations in a signal could cause a large number of level crossings if they coincided with a threshold, but not otherwise. This anomaly can be avoided by specifying a reset level, or gate level. A positive slope threshold crossing is then counted only if the signal has also crossed an adjacent threshold with positive slope.

Level crossing analysis produces a matrix of threshold counts


LEVEL    NUMBER OF LEVEL CROSSINGS
 10                0
  9                1
  8                5
  7               35
  6              100
  5              300
  4              120
  3               35
  2                5
  1                0

The level crossing distribution can be analysed to provide a distribution of peaks and valleys. In the example above, if the signal crosses level 8 five times, and level 9 once, then there must be 4 peaks between levels 8 and 9.
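This post-processing is a simple differencing of the crossing counts. The Python sketch below (hypothetical names) reproduces the example: crossing counts of 5, 1 and 0 at levels 8, 9 and 10 give 4 peaks between levels 8 and 9:

```python
def peaks_between_levels(crossing_counts):
    """For thresholds above the mean, listed in ascending level order,
    the number of peaks between two adjacent thresholds is the
    difference of their crossing counts: every crossing of the lower
    threshold that does not reach the upper one must turn round."""
    return [lo - hi for lo, hi in zip(crossing_counts[:-1], crossing_counts[1:])]
```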

7.4 Peak and valley counting Peak-valley counting simply counts the number of peaks and valleys which fall within specified bands. A gate level may be set to exclude small fluctuations in the signal.

Figure 7.14 Peak-valley counting

The results may be displayed as a peak-valley occurrence diagram.

This may be integrated to give an exceedence diagram, which shows the number of peaks or valleys which exceed a specified value.


Figure 7.15 Peak-valley exceedence diagram

These exceedence diagrams were once used to compare the severity of different types of service duty, and can be of value in industries where signals are usually of a similar pattern but differ in scaling. A range-only Rainflow cycle histogram will usually provide a much better comparison of severity in fatigue terms. To obtain approximate cycle information from peak/valley exceedence diagrams, it is sometimes assumed that cycles are formed by peaks and valleys of equal probability (see Figure 7.15).

7.5 Range counting Range counting immediately preceded Rainflow counting as a fatigue analysis algorithm. A range is formed between each peak and valley.

Figure 7.16 Range counting

With this type of analysis, a range was assumed to be half a cycle, and so fatigue damage could be calculated. Consider however the following section of a signal.


Figure 7.17 Simple range counting - effect on large ranges

The large ranges, such as the range from A to B, are broken up by range analysis into a larger number of smaller ranges. In terms of fatigue, this means that the larger cycles, such as A-B-C, are missed, and replaced by a larger number of smaller cycles. A gate level may be set to exclude the smaller ranges, and so form a result closer to real Rainflow cycle counting. However, if the gate is made too small, it will not be effective, and the larger ranges will be broken up. If the gate is too large, larger ranges which may cause fatigue damage will be excluded.
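Simple range counting with a gate can be sketched as follows (illustrative Python, names hypothetical):

```python
def range_count(peaks_valleys, gate=0.0):
    """Simple range counting: each excursion between successive
    peak/valley points is counted as one half-cycle; ranges below the
    gate level are excluded."""
    ranges = [abs(b - a) for a, b in zip(peaks_valleys[:-1], peaks_valleys[1:])]
    return [r for r in ranges if r >= gate]
```

Note that the large excursion 0 to 10 in the test sequence below is still reported intact only because no intermediate reversal interrupts it; interrupted large ranges are broken up, which is the weakness described above.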

Figure 7.18 Range counting compared with rainflow cycle counting

Watson and Dabell (Ref 3) showed that the fatigue damage calculated from range counting varied with gate level, and that the optimum value of gate level varied from signal to signal. They also showed that range counting always produced a lower fatigue damage value than Rainflow cycle counting.

7.6 Summary It is now generally accepted that Rainflow cycle counting is the most appropriate method of signal analysis for interpretation of fatigue lives. Cycle range histograms, or range-mean histograms, provide a valid method of comparing signals. Section 3 shows how Rainflow cycle histograms can be post-processed to provide fatigue life estimates. The older methods, such as level crossing and peak/valley matrices, are much less valid than Rainflow counting for fatigue life calculation, or for the 'eyeball' comparison of signals for fatigue purposes.


8 The data chain

8.1 Introduction Mechanical engineering measurement and analysis is used to investigate the behaviour of engineering components and structures. It may show where the maximum stress is, or what events cause the maximum stress. It may show the temperature extremes, the accelerations, or the deflections, and it may measure these parameters over long time-scales to provide a picture of the long term operating environment.

The starting point for measurement is a transducer - a strain gauge, a thermocouple, an accelerometer or a deflection transducer. The end point will be a graph or table of values which the engineer can interpret. The graph may be a time history of the stress or temperature, or it may be the result of an analysis of the time history. In between the transducer and the result is a chain of electronic and mechanical equipment - the data chain.

The output from a transducer may be a very small voltage which would be lost or corrupted if it was passed along a long cable. An amplifier may therefore be necessary to produce a reasonable voltage of (say) 1 to 10 volts. The amplified analogue signal may then be recorded on tape, and later digitised for computer analysis. Alternatively, the signal may be digitised immediately for storage on digital tape or computer disk. Some devices allow the signal to be digitised and analysed on-line, perhaps with some storage of digitised signals. This data chain is shown in Figure 8.1.

Figure 8.1 The data chain

8.2 Data recording - analogue or digital Analogue tape has been used to record measured signals since MIRA first interfaced a tape recorder to an instrumentation system. The data is stored as frequency-modulated (FM) data, in that the variable amplitude signal is used to modulate a carrier frequency so that the data may be stored in constant amplitude variable frequency form. For computer analysis the data must at some stage be replayed and digitised. Although FM tape has some advantages, it is probably true to say that it will be superseded for many engineering applications by digital data storage. FM tape systems are limited in the number of channels that can be recorded - 14 channels being typical - and are relatively expensive. The channel calibration information must be recorded and used to replay the data. The signal-to-noise ratio is


limited compared with digital systems, which means that the user must try to ensure that each channel is scaled to use as much of the available range as possible, and this requires prior knowledge of the amplitudes which will be measured. The later digitisation process also adds another step to the data chain before the signals can be analysed. However, FM tape has a wide frequency bandwidth, and the data can be visualised in analogue form on an oscilloscope or paper chart recorder. Perhaps most important, the user is not committed in advance to a specific sample rate with its consequent limiting frequency.

Digital data storage is now available in many forms. Pulse code modulation (PCM) is a tape-orientated data encoding system which interleaves many data channels into a single digital data stream. Each analogue channel is digitised and the individual digital data channels are multiplexed into a single channel, with a channel map to show the location of each channel in the data stream. PCM systems may allow different sample frequencies for each channel, so that rapidly varying signals can be sampled at high sample rates, whilst other channels, such as temperatures, can be sampled at much lower rates. A simple 5-channel PCM 'frame' with all channels sampled at the same sample rate is shown in Figure 8.2.

Figure 8.2 A simple PCM frame

The frame contains a number of data words at the beginning which are used to ensure data identification - these words may show the frame number, the number of data words in the frame, real time information, and some integrity information to show that, for example, the number of words in the frame is correct and that no channels have been lost. This is followed by one data word (one sample) for each channel. Each data sample may also contain some error check, one simple check being to set an additional parity bit if the data word contained an odd number of 1-bits before it was written to tape.

More complex data formats are required if some channels are sampled at a higher rate than the rest. This process is called super-commutation. In Figure 8.3, channel 3 is sampled twice as fast as the other channels.

Figure 8.3 PCM frame with super-commutation

Sub-commutation allows some channels to be sampled much less frequently by using one data word in the frame for several different channels. In Figure 8.4, channels 4 and 5 are sampled at half the frequency of the other channels.

Figure 8.4 PCM frame with sub-commutation

Combinations of super- and sub-commutation allow very complex data frames which make optimum use of the space available for data recording, and computer programs may be required to design the frame format for particular applications. Although it was originally a tape-orientated format, PCM data streams can now be written directly to computer hard disk using a suitable interface card. The process of digitising the channels and formatting them into a single bit-stream is carried out by the PCM system, so the computer processor is free to provide real time graphics display.


The major advantage of PCM systems is their ability to format very large numbers of channels sampled at high frequencies into a single data stream. A typical application may be from 48 to 256 channels sampled at several thousand samples per second per channel. For less demanding applications than these, PCM systems are being superseded by computer-based systems using PC-compatible computers with commercial analogue-to-digital converters. Many mechanical engineering applications of up to 48 channels at up to 1000 samples/second/channel can be satisfied using PC-based systems with real-time data storage to hard disk combined with real-time graphics display of measured data from selected channels. Storage on hard disk means that the data is immediately available for display or analysis. As an example of hard disk capacity, a 100 MByte disk could store 1.7 hours of data from a sixteen channel system sampling each channel at 500 samples/second.

8.3 Short term recording or long term analysis Data acquisition onto data tape, for later replay and computer analysis, has been a standard method for engineering investigation. However, advances in electronics in the 1970's made other alternative strategies possible. The Datamyte of the 1970's was a small portable single channel rainflow cycle counter which digitised the analogue signal and carried out an on-line rainflow cycle count into a cycle matrix stored in battery-backed memory. The Datamyte was a major change in thinking and, although limited by the computing power and memory then available, allowed engineers to monitor the service environment of a component over long periods of time with a single low-cost device. Ease of use allowed it to be used by mechanical engineers without extensive knowledge of instrumentation techniques.

A perceived limitation of the Datamyte was its inability to store significant lengths of recorded signals. This led to wider concern about data quality in general, as once an event in the signal had been cycle counted and added to the cycle matrix, it was difficult to determine whether it was a real event, or some form of data corruption.

An early attempt to combine extensive on-line analysis with significant data storage was a Johne+Reilhofer system supplied in 1982 for long term monitoring of wind turbines. This multi-channel system was based on 16-bit mini-computers. Measured signals were stored on disk for back-up onto tape, so that data quality could be assessed. In order to limit the quantity of stored data, recording was event-driven, in that data was stored only if certain signal criteria were satisfied. Rainflow cycle matrices were recorded on a daily basis, so that they could be assessed before being combined with previous cycle matrices.

Successful use of this system over some years produced large quantities of data which were time-consuming to assess and process manually, and illustrated another essential requirement for on-line analysers, that of automated data access. In 1983 the Vehicle Instrumentation Group, now part of the Engineering Integrity Society, published a specification for an untended multi-channel data storage and long term on-line analysis device. Jaguar Cars developed the IVER in-vehicle system, and instrumentation companies produced the Swift, Vycom, Dynamonitor and other devices which offered various combinations of data storage and analysis capability. Using a modular approach, Somat introduced the extremely rugged S2100 field computers, which could be configured easily by the user for different data storage and analysis applications. The advent of high speed 32-bit processors in PC-compatible computers, combined with rugged hard disks, offered an alternative approach, with advantages in standard operating systems and components, high channel counts, ease of programming and readily available post-processing software, but in a larger, less rugged system with greater power requirements.

These many different devices are attempts to resolve the conflicts which face the engineering investigator. Recording measured signals as time histories - the raw data - is obviously desirable. The engineer can assess data quality by visualising the signals, and can run a wide range of analysis software as an investigation proceeds. Conclusions on the behaviour of a component can be drawn simply by being able to see the signals. However, this process is time consuming for even short lengths of data, and is quite impractical if long term monitoring is required to derive duty cycles or investigate rare events, both because of the engineering time required and because of the vast quantity of data which must be stored.

Modern on-line analysers avoid these disadvantages by allowing signals to be acquired and processed continuously over very long periods of time - months or years. Several different types of analysis may be run on each channel simultaneously, and recording of raw data can be event-driven so that only essential data is stored. They also provide instant access to analysis results. However, doubts about data quality do exist, and the methods of on-line analysis must still be selected in advance.

With the systems now available, a combined approach is often adopted. In the early stages of an engineering investigation, a large number of channels may be recorded as raw data over short time periods, perhaps recording specific types of events which the component is known to experience. Off-line post-processing of these signals may be combined with supporting modal or finite element analysis to provide an understanding of the component behaviour. This short-term approach may then be supplemented by long term on-line analysis of a smaller number of channels to confirm the results over longer timescales. One aim of instrumentation designers is to develop devices which allow both the short term recording and long term analysis functions to be carried out by the same data acquisition system.

8.4 Analogue to digital conversion

Sampling may be carried out using an analogue to digital converter, also called an ADC or 'A to D'. An ADC takes samples of analogue signals at specified times, and converts the samples into binary digits for analysis or storage. The ADC converts an unknown signal value - usually a voltage - into digital data by comparing it with a reference voltage generated by a digital-to-analogue converter (DAC, or 'D to A'). The DAC first outputs a reference voltage equal to half the full scale of the input signal. If the signal is greater than the reference voltage, the ADC sets one bit, and the DAC then increases its output to three-quarters of full scale; if the signal is less than half full scale, the bit is not set, and the DAC reduces its output to one-quarter of full scale. This process of successive approximation continues until all the bits have been set.

An 8-bit ADC has a resolution of 1 part in 256, on a scale of 0 to 255. If the full scale input is 10 volts, then 10 volts is equivalent to a value of 255 on the scale of the ADC. The operation of an 8-bit ADC sampling a signal with a 3-volt instantaneous value is shown below. 3 volts on a scale of 0 - 255 is equivalent to 76.5. If bit 1 is the most significant bit and bit 8 is the least significant bit, the iteration procedure would be that 76.5

is less than 128 so set bit 1 to 0
is greater than (128 - 64 = 64) so set bit 2 to 1
is less than (64 + 32 = 96) so set bit 3 to 0
is less than (96 - 16 = 80) so set bit 4 to 0
is greater than (80 - 8 = 72) so set bit 5 to 1
is greater than (72 + 4 = 76) so set bit 6 to 1
is less than (76 + 2 = 78) so set bit 7 to 0
is less than (78 - 1 = 77) so set bit 8 to 0

The result of the conversion of 3 volts is therefore 01001100 = 76, which is within one integer of the required value of 76.5. Figure 8.5 shows this iteration procedure.
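The iteration is easily reproduced in software. The following is an illustrative Python sketch (the function name and interface are assumptions, not part of the original text) simulating an ideal successive approximation converter:

```python
def sar_adc(voltage, full_scale=10.0, bits=8):
    """Simulate an ideal successive-approximation ADC.

    The input voltage is mapped onto the converter's integer scale
    (0 to 2**bits - 1), then each bit is trialled from the most
    significant downwards, as in the worked example above.
    """
    target = voltage / full_scale * (2 ** bits - 1)  # 3 V -> 76.5
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)   # reference with this bit set
        if target >= trial:         # signal exceeds reference: keep the bit
            code = trial
    return code

print(sar_adc(3.0))   # -> 76, i.e. binary 01001100
```

Each pass through the loop corresponds to one comparison in the list above: the trial reference steps up after a set bit and down after an unset bit.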


Figure 8.5 Iteration process for an ADC

The maximum resolution of the ADC is one integer. For a signal at full scale, this gives 1 part in 256 on an 8-bit ADC, or about 0.4%. The value of 3 volts was resolved to an error of 0.5 in 76.5, an accuracy of around 0.7%. Commercial ADCs are available with bit-resolutions which vary from 8-bit to 22-bit or more, and the resolution may be user-selectable. For amplitude analysis, 12-bit resolution is adequate, as it gives a potential accuracy of 1 part in 4096 at full scale. This is generally more accurate than most transducers, and allows reasonable flexibility in the selection of full scale values. Frequency domain analysis, which is based on squared data values, may require higher resolution.
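These resolution figures follow directly from the bit count. A hypothetical helper makes the arithmetic explicit:

```python
def lsb_percent(bits):
    """One least-significant bit as a percentage of full scale."""
    return 100.0 / (2 ** bits)

for bits in (8, 12, 16):
    print(f"{bits}-bit: 1 part in {2 ** bits} = {lsb_percent(bits):.4f}% of full scale")
# 8-bit gives 0.3906% (the '0.4%' quoted above); 12-bit gives 0.0244%
```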

8.5 Sequential sample and hold

The iteration procedure used by the ADC takes a finite time. The example in Figure 8.5 assumed that the signal voltage was held constant while the iteration took place. The signal value is not usually constant, however, and this may lead to conversion errors. In the following example, taken from Ref 1, an input voltage of 1.2 volts is applied to a 3-bit ADC with a full scale of 8 volts.

Figure 8.6 Error caused by the signal varying during sampling

At the first clock pulse, the ADC compares the input voltage with the reference value of 4 volts (half full scale). Because the input voltage is less than the reference, the first bit is set to 0. At the second clock pulse, the input voltage has increased, and the ADC sets the second bit to 1 because the input voltage is now greater than the reference voltage of 2 volts.


At the third clock pulse, the input is still greater than the new reference value of 3 volts (2 + 1), so the third bit is set to 1, giving a converted value of 011, or 3. As can be seen from Figure 8.6, the input value has risen from about 1 to 6 volts during the conversion, and the converted value does not agree with the value at either the start or the end of the conversion. This problem can be solved by using 'sample and hold' on the signal being measured: the input voltage is held constant (often across a capacitor) while the conversion takes place. The conversion procedure is shown in Figure 8.7.

Figure 8.7 Simple sample-and-hold

The first bit is not set because the signal, now represented by the horizontal line, is below the reference value of 4 volts. The second bit is not set because the signal is below the reference value of 2 volts, and the third bit is set because the signal is above the reference value of 1 volt, so the ADC returns a 3-bit value of 001, or 1. This type of sample-and-hold procedure is called sequential sample and hold in a multi-channel system, because each channel is held and sampled in turn. It is the method most widely used in ground vehicle engineering measurement, on the basis that it is sufficiently accurate in most instances and reasonably inexpensive.
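The effect of holding (or not holding) the input can be demonstrated with a small simulation. This is an illustrative sketch: it assumes a 3-bit converter with the 4, 2 and 1 volt DAC steps used above, and the per-clock signal values for the rising input are assumed from Figure 8.6 (the text states only that the input rises from about 1 to 6 volts):

```python
def sar_convert(signal_at_clock, weights=(4.0, 2.0, 1.0)):
    """3-bit successive approximation where the input voltage may
    change between clock pulses (full scale 8 volts)."""
    code = 0.0
    bits = []
    for level, weight in zip(signal_at_clock, weights):
        if level > code + weight:   # compare input with the DAC reference
            code += weight          # bit set: reference steps up next time
            bits.append(1)
        else:
            bits.append(0)          # bit not set: reference steps down
    return bits, code

# input rising during conversion (no hold, as in Figure 8.6):
print(sar_convert([1.2, 3.0, 6.0]))   # -> ([0, 1, 1], 3.0)
# input held at 1.2 V throughout (sample-and-hold, as in Figure 8.7):
print(sar_convert([1.2, 1.2, 1.2]))   # -> ([0, 0, 1], 1.0)
```

Without the hold, the converted value of 3 agrees with neither the starting value (1.2 V) nor the final value (about 6 V); with the hold, the result of 1 is correct to within the converter's 1-volt resolution.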

8.6 Simultaneous sample and hold

Consider now a multi-channel system. Using sequential sample-and-hold, the ADC will hold and sample each channel in turn.

Figure 8.8 Sequential sampling of channels

The time delay between samples taken from adjacent channels introduces a phase change between channels, because most post-processing software will assume that the first data point in each channel occurs at time = 0.


Figure 8.9 Phase change caused by sequential sampling

For applications where the precise phase relationship between individual channels is important (such as vehicle crash testing) it may be necessary to hold and store the analogue voltages from all channels at the same instant of time, then let the ADC convert each channel in turn. This system is known as simultaneous sample-and-hold. It is usually expensive in comparison with sequential sample-and-hold.
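The size of the phase change introduced by sequential sampling is easy to estimate. As a sketch (the channel count, sample rate and signal frequency are illustrative values, not taken from the text), the worst-case offset between the first and last channels is:

```python
def sequential_phase_error(freq_hz, sample_rate_hz, n_channels):
    """Worst-case phase offset, in degrees, between the first and last
    channels when channels are held and converted one after another.

    Assumes the channel conversions are spaced evenly within one
    sample period of each channel.
    """
    inter_channel_delay = 1.0 / (sample_rate_hz * n_channels)
    worst_delay = inter_channel_delay * (n_channels - 1)
    return 360.0 * freq_hz * worst_delay

# 16 channels each sampled at 1000 samples/second, measuring a 50 Hz signal:
print(sequential_phase_error(50.0, 1000.0, 16))   # -> 16.875 degrees
```

An error of this size would be quite unacceptable in, for example, a frequency response function calculation, which is why simultaneous sample-and-hold is preferred for phase-critical work.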

8.7 Reference

The section on ADC operation was based on the following:

1. Foster, Caxton C. Real Time Programming - Neglected Topics. Addison-Wesley 'Joy of Science' paperback, 1981. ISBN 0-201-01937-X.


9 Signal statistics

Maximum value The maximum value in the signal ymax

Minimum value The minimum value in the signal ymin

Mean value The mean or average value of the signal, ȳ

For a signal y(t) the mean value is

\bar{y} = \frac{1}{T} \int_0^T y(t) \, dt

Mean square value The mean square value is the average value of y²(t):

\text{mean square} = \overline{y^2} = \frac{1}{T} \int_0^T y^2(t) \, dt

Root mean square (rms) The rms is the square root of the mean square value.

Standard deviation The standard deviation σ is defined by

\sigma^2 = \frac{1}{T} \int_0^T \left( y(t) - \bar{y} \right)^2 dt

where σ² is the variance of the signal.

The standard deviation is the square root of the average value of the square of (the signal minus the mean value of the signal)

For digitised signals: the mean value is the average value of the samples in the signal; the mean square value is the average value of the square of each sample; the variance is the average value of the square of each term (sample - mean value); and the standard deviation is the square root of the variance.
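For digitised signals these definitions reduce to simple sums over the samples. A minimal sketch (the function name is illustrative):

```python
import math

def signal_stats(samples):
    """Mean, mean square, rms, variance and standard deviation of a
    digitised signal, following the definitions above."""
    n = len(samples)
    mean = sum(samples) / n
    mean_square = sum(y * y for y in samples) / n
    rms = math.sqrt(mean_square)
    variance = sum((y - mean) ** 2 for y in samples) / n
    std_dev = math.sqrt(variance)
    return mean, mean_square, rms, variance, std_dev

mean, mean_square, rms, variance, std_dev = signal_stats([1.0, 2.0, 3.0, 4.0])
# note that mean_square == variance + mean ** 2
```

Note that this computes the population variance (dividing by n), consistent with the time-average definition above, rather than the sample variance (dividing by n - 1) used in some statistics packages.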

