
Numerically Robust Implementations of Fast Recursive Least Squares Adaptive Filters using Interval Arithmetic

Christopher Peter Callender, B.Sc., AMIEE

A thesis submitted for the degree of Doctor of Philosophy to the Faculty of Science at the University of Edinburgh.

1991

University of Edinburgh
Abstract of Thesis

Name of Candidate: Christopher Peter Callender
Degree: Ph.D.
Date: March 25, 1991
Title of Thesis: Numerically Robust Implementations of Fast Recursive Least Squares Adaptive Filters using Interval Arithmetic
Number of words in the main text of Thesis: Approximately 28,000

Algorithms have been developed which perform least squares adaptive filtering with great computational efficiency. Unfortunately, the fast recursive least squares (RLS) algorithms all exhibit numerical instability due to finite precision computational errors, resulting in their failure to produce a useful solution after a small number of iterations. In this thesis, a new solution to this instability problem is considered, making use of interval arithmetic. By modifying the algorithm so that upper and lower bounds are placed on all quantities calculated, it is possible to obtain a measure of confidence in the solution calculated by a fast RLS algorithm. If the solution is subject to a high degree of inaccuracy due to finite precision computational errors, then the algorithm may be rescued using a reinitialisation procedure. Simulation results show that the stabilised algorithms offer an accuracy of solution comparable with that of the standard recursive least squares algorithm. Both floating and fixed point implementations of the interval arithmetic method are simulated, and long-term stability is demonstrated in both cases. A hardware verification of the simulation results is also performed, using a digital signal processor (DSP). The results from this indicate that the stabilised fast RLS algorithms are suitable for a number of applications requiring high speed, real time adaptive filtering.
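The stabilisation idea the abstract describes, bounding every computed quantity and rescuing the algorithm when the bounds grow too wide, can be sketched in a few lines. This is an illustrative sketch only, not the thesis implementation (which applies the technique to fast RLS recursions on a TMS320C25); the class, tolerance and `needs_rescue` helper are hypothetical names, and a rigorous version would also round lower bounds down and upper bounds up (outward rounding) on a finite precision processor, as Chapter 3 considers.

```python
class Interval:
    """A closed interval [lo, hi]: a value together with its accumulated
    finite-precision uncertainty."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        # Bounds add directly; a rigorous implementation would round
        # lo down and hi up (outward rounding).
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Interval product: the min and max of the four cross products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

def needs_rescue(x, tol=1e-3):
    # A divergence check of the kind the abstract describes: when the
    # bounds no longer pin down the solution, reinitialise the algorithm.
    return x.width() > tol

# Example: a coefficient known only to within +/-0.01.
a = Interval(0.99, 1.01)
b = Interval(-2.0, -1.9)
c = a * b
print(f"[{c.lo}, {c.hi}] width={c.width():.3f}")
```

The width of each interval grows as rounding errors accumulate through the recursions, which is exactly what makes it usable as a confidence measure for triggering reinitialisation.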
A design study for a very large scale integration (VLSI) technology coprocessor, which provides hardware support for interval multiplication, is also considered. This device would enable a hardware realisation of a fast RLS algorithm to operate at far greater speed than that obtained by performing interval multiplication using a DSP. Finally, the results presented in this thesis are summarised, the achievements and limitations of the work are identified, and areas for further research are suggested.

Acknowledgements

There are many people who deserve thanks for the assistance and support which have made the work of this thesis possible. I would like to thank all of the members of the signal processing group, both past and present, for their helpful discussions and comments on my work. They have helped to clarify many of my ideas. Special thanks must go to Professor Colin Cowan. In his role as my supervisor, he has contributed greatly to the project and his guidance has been very much appreciated. Similarly, I am most grateful to Dr. Bernie Mulgrew for initially being my second supervisor and for taking over when Professor Cowan left the department. Thanks must also go to Professor Peter Grant for his encouragement and advice during my time in the group. Finally, I would like to make it clear how much I appreciate the support of my parents during the time spent on this work.

Contents

Chapter 1: Introduction  2
1.1. Adaptive Filters - Structures and Applications  2
1.2. Families of Adaptive Algorithms  5
1.3. Applications of Adaptive Filters  15
1.3.1. Prediction  17
1.3.2. Noise Cancellation  17
1.3.3. System Identification  18
1.3.4. Inverse Modelling  19
1.4. Organisation of Thesis  19

Chapter 2: Least Squares Algorithms for Adaptive Filtering  22
2.1. Introduction  22
2.2. The Least Squares Problem for Linear Transversal Adaptive Filtering  23
2.3. The Conventional Recursive Least Squares Algorithm  28
2.4. Data Windows  29
2.5. Computational Complexity  31
2.6. The Fast Kalman Algorithm  32
2.7. The Fast A Posteriori Error Sequential Technique  39
2.8. The Fast Transversal Filters Algorithm  41
2.9. Comparison of the Least Squares Algorithms  44
2.10. Numerical Instability  45
2.10.1. Normalised Algorithms  47
2.10.2. Lattice Algorithms  48
2.10.3. Stabilisation by Regular Reinitialisation  49
2.10.4. Error Feedback  50
2.11. Conclusions  51

Chapter 3: Interval Arithmetic  53
3.1. Introduction  53
3.2. Interval Numbers  54
3.3. Scalar Interval Arithmetic  54
3.4. Scalar Interval Arithmetic with a Finite Precision Processor  55
3.5. Vector Interval Arithmetic  60
3.6. Application of Interval Arithmetic to the Fast RLS Algorithms  61
3.7. Choice of Design Parameters for the Interval Fast RLS Algorithm  63
3.8. Conclusions  65

Chapter 4: Interval Algorithms - Software Simulations  67
4.1. Introduction  67
4.2. System Identification  68
4.3. Divergence of the FAEST, Fast Kalman and FTF Algorithms  71
4.4. FTF Algorithm Using Rescue Variable  73
4.5. FTF Performance Using Interval Arithmetic  82
4.6. Fixed Point Implementation of the FTF Algorithm  86
4.7. Fixed Point Interval FTF Performance  88
4.8. Application of Interval Algorithms to Stationary and Non-Stationary Equalisation  93
4.8.1. Performance for a Stationary Channel  98
4.8.2. Performance for a Fading Channel  98
4.9. Conclusions  100

Chapter 5: Interval Algorithms - Hardware Implementation  103
5.1. Introduction  103
5.2. Implementing the Algorithm on a TMS320C25  104
5.2.1. Macros to Perform Interval Arithmetic on a TMS320C25  105
5.2.2. The FTF Algorithm on a TMS320C25  106
5.3. Test Configuration  107
5.3.2. Equaliser Arrangement  110
5.3.3. Measurement of Results  110
5.4. Results  113
5.4.1. Eye Diagrams  113
5.4.2. Filter Error  114
5.5. Speed of Operation  114
5.6. Conclusions  120

Chapter 6: An Interval Arithmetic Coprocessor for the TMS320C25  123
6.1. Introduction  123
6.2. The SARI Toolset  125
6.3. Functions of the Coprocessor  128
6.4. Design of the Interval Multiplier  129
6.5. Top Level Design of the Coprocessor  139
6.6. Feasibility of the Design  139

Chapter 7: Conclusions  143
7.1. Achievements of the Work  143
7.2. Limitations and Areas for Future Work  145

References  147
Appendix A - Publications arising from this work  159
Appendix B - Simulation software  197
Appendix C - TMS320C25 implementation - assembly language code
301
Appendix D - TMS320C25 implementation - circuit diagrams  332
Appendix E - Interval arithmetic coprocessor - VHDL behavioural description  343

Abbreviations

ADC    Analogue to Digital Converter
ASAP   As Soon As Possible
DAC    Digital to Analogue Converter
dB     Decibel
DSP    Digital Signal Processor
EOC    End Of Conversion
EVR    Eigenvalue Ratio
FAEST  Fast A Posteriori Error Sequential Technique
FIR    Finite Impulse Response
FK     Fast Kalman
FTF    Fast Transversal Filters
HF     High Frequency
IIR    Infinite Impulse Response
ISI    Intersymbol Interference
LS     Least Squares
LMS    Least Mean Squares
MLSE   Maximum Likelihood Sequence Estimator
RLS    Recursive Least Squares
SNR    Signal to Noise Ratio
SOC    Start Of Conversion
VHDL   VHSIC Hardware Description Language
VHSIC  Very High Speed Integrated Circuit
VLSI   Very Large Scale Integration
VSE    VHDL Support Environment

Principal Symbols

a(k)    Forward prediction coefficients for fast RLS algorithms
b(k)    Backward prediction coefficients for fast RLS algorithms
c(k)    Kalman gain vector
c'(k)   Extended (N+1 th order) Kalman gain vector
c~(k)   Alternative Kalman gain vector
c~'(k)  Alternative extended (N+1 th order) Kalman gain vector
d(k)    k-th adaptive filter desired response input
e_b(k)  Backward a priori prediction error
e_f(k)  Forward a priori prediction error
e(k)    A priori filter error
h_j(k)  j-th coefficient of FIR filter
h(k)    Vector [h_0(k) h_1(k) ... h_{N-1}(k)]^T
J_0(k)  Unwindowed least squares cost function
J_1(k)  Exponentially windowed least squares cost function
J_2(k)  Least squares cost function with initial condition
k       Time index for sampled data signals
N       Length of adaptive filter
R(k)    Autocorrelation matrix for least squares algorithms
p(k)    Crosscorrelation vector for least squares algorithms
x(k)    k-th input to adaptive filter
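For reference, the exponentially windowed cost function J_1(k) named in the symbol list conventionally takes the form below. This is a sketch using the standard RLS forgetting factor λ, which the symbol list itself does not define and is therefore an assumption here:

```latex
J_1(k) = \sum_{i=0}^{k} \lambda^{\,k-i}\, e^2(i), \qquad 0 < \lambda \le 1
```

The unwindowed cost J_0(k) is recovered as the special case λ = 1, in which all past errors are weighted equally.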