Methods and Identification

Mohd Aftar Abu Bakar

Thesis submitted for the degree of Doctor of Philosophy in the School of Mathematical Sciences at The University of Adelaide (Faculty of Engineering, Computer and Mathematical Sciences)

School of Mathematical Sciences

August 2016

Acknowledgments

I would like to express my appreciation to the many people who have helped and supported me in completing this Ph.D. thesis. First of all, to my supervisors, Dr Andrew Metcalfe and Dr David Green, who kept supervising me through this adventurous and tough journey. I could only have arrived at this point through their excellent mentoring: they spent countless hours reading my reports and gave feedback and ideas for solving the problems in my thesis. Most importantly, my wife, Noratiqah, who has been my backbone, supporting me mentally and emotionally throughout this journey. Also to my loving parents, Abu Bakar and Zaiton, who kept reminding me to finish the thesis, and to my loving son, Muhammad Lutfi, who has been my motivation to finally complete it. I would like to express my gratitude to Universiti Kebangsaan Malaysia and the Ministry of Higher Education, Malaysia, for the scholarship which made it possible for me to pursue my Ph.D. at The University of Adelaide. Also to the School of Mathematical Sciences, The University of Adelaide, for accepting me as a postgraduate student and for their support and services during my time there.

Abstract

I begin with a brief introduction to dynamic systems, the identification of system parameters from records of input and output, and the wave energy converters which provide case studies to motivate the research. The dynamic systems discussed are categorized as linear or nonlinear. I present brief reviews of strategies for the identification of dynamic systems, covering their history and areas of application. The discretization of differential equations for dynamic systems is a recurrent theme, and I consider forward, backward and central differences in detail for linear systems. The estimation techniques discussed are the principle of least squares, the Kalman filter and spectral analysis. Several system identification techniques for nonlinear dynamic systems in the time domain and in the frequency domain are presented and compared. The main focus of the thesis is estimation methods based on wavelets. I present an introduction to wavelet transforms, covering both continuous and discrete wavelet transforms, and discuss wavelet methods for system identification of linear and nonlinear dynamic systems. During this research, I have published four research articles guided by my supervisors. The first article discusses a wavelet-based technique for linear systems, which is compared to the spectral analysis technique. The second article compares two types of wave energy converters, where the heaving buoy wave energy converter (HBWEC) is modelled as a linear system and the oscillating flap wave energy converter (OFWEC) as a nonlinear system; techniques for the identification of nonlinear dynamic systems are applied to the OFWEC model. The unscented Kalman filter is discussed in the third article, with the nonlinear OFWEC system as the case study. A wavelet approach for nonlinear system identification is discussed in the fourth article, together with the probing technique. The probing technique is used to find the generalized frequency response functions of nonlinear dynamic systems based on the nonlinear autoregressive with exogenous input (NARX) model. Both techniques are compared for two weakly nonlinear oscillators, the Duffing and the Van der Pol. Once again, the OFWEC system is selected as a case study.

Contents

Acknowledgments i

Abstract ii

Statement of Originality vii

Preamble ix

1 Introduction 1
1.1 Applications Considered ...... 3

2 Background to Dynamic Systems 6
2.1 Introduction ...... 6
2.2 Dynamic Systems ...... 7
2.2.1 Linear Dynamic Systems ...... 10
2.2.2 Nonlinear Dynamic Systems ...... 12
2.3 System Identification ...... 15

3 Linear System Identification 19
3.1 Introduction ...... 19
3.2 Discrete Time ...... 22
3.3 Least Squares Method ...... 25
3.4 Kalman Filter ...... 27
3.5 Spectral Analysis ...... 29
3.5.1 Fourier Transform ...... 30
3.5.2 Response amplitude operator ...... 31

3.6 Summary ...... 34

4 Nonlinear system identification in time domain 36
4.1 Introduction ...... 36
4.2 Discrete Time Simulation ...... 37
4.3 models ...... 41
4.4 Unscented Kalman Filter ...... 53
4.5 Summary ...... 68

5 Nonlinear system identification in frequency domain 69
5.1 Introduction ...... 69
5.2 Bendat’s nonlinear system identification ...... 70
5.3 Generalized Frequency Response Function ...... 74
5.4 Summary ...... 76

6 Wavelets and System Identification 77
6.1 Wavelet Transforms ...... 77
6.2 Continuous Wavelet Transforms ...... 80
6.3 Discrete Wavelet Transforms ...... 83
6.4 Linear system analysis using wavelet ...... 87
6.4.1 Wavelet Response Amplitude Operator Estimation ...... 88
6.5 Nonlinear system identification by Wavelet Ridge ...... 91
6.5.1 Instantaneous Modal Parameters ...... 92
6.5.2 Wavelet Ridge ...... 94
6.6 Summary ...... 96

7 Synthesis 97
7.1 Comparison of Spectral and Wavelet Estimators of Transfer Function for Linear Systems ...... 97
7.2 Comparison of Heaving Buoy and Oscillating Flap Wave Energy Converters ...... 124
7.3 Unscented Kalman Filtering for Wave Energy Converters System Identification ...... 144
7.4 Comparison of Autoregressive Spectral and Wavelet Characterizations of Nonlinear Oscillators ...... 154

8 Conclusions 185

Bibliography 188

Statement of Originality

I, Mohd Aftar Abu Bakar, certify that this work contains no material which has been accepted for the award of any other degree or diploma in my name, in any university or other tertiary institution and, to the best of my knowledge and belief, contains no material previously published or written by another person, except where due reference has been made in the text. In addition, I certify that no part of this work will, in the future, be used in a submission in my name, for any other degree or diploma in any university or other tertiary institution without the prior approval of the University of Adelaide and, where applicable, any partner institution responsible for the joint award of this degree. I give consent to this copy of my thesis, when deposited in the University Library, being made available for loan and photocopying, subject to the provisions of the Copyright Act 1968. I acknowledge that copyright of published works contained within this thesis resides with the copyright holder(s) of those works. I also give permission for the digital version of my thesis to be made available on the web, via the University's digital research repository, the Library Search and also through web search engines, unless permission has been granted by the University to restrict access for a period of time.

SIGNED: ...... DATE: ......

Published Works

M.A.A. Bakar, D.A. Green, and A.V. Metcalfe. Comparison of spectral and wavelet estimators of transfer function for linear systems. East Asian Journal on Applied Mathematics, 2(3):214-237, 2012.

M.A.A. Bakar, D.A. Green, A.V. Metcalfe, and G. Najafian. Comparison of heaving buoy and oscillating flap wave energy converters. In AIP Conference Proceedings: Proceedings of the 20th National Symposium on Mathematical Sciences, 1522: 86- 101, 2013.

M.A.A. Bakar, D.A. Green, A.V. Metcalfe, and N.M. Ariff. Unscented Kalman filtering for wave energy converters system identification. In AIP Conference Pro- ceedings: Proceedings of the 3rd International Conference on Mathematical Sciences, 1602: 304-310, 2014.

M.A.A. Bakar, N.M. Ariff, D.A. Green and A.V. Metcalfe. Comparison of autoregressive spectral and wavelet characterizations of nonlinear oscillators. Submitted to East Asian Journal on Applied Mathematics, 2016.

Preamble

This thesis has been submitted to the University of Adelaide for the degree of Doctor of Philosophy. According to the University’s Specification for Thesis, a Doctoral thesis may comprise,

a combination of conventional written narrative presented as typescript and publications that have been published and/or submitted for publication and/or text in manuscripts, and this thesis takes this form. The thesis is divided into eight chapters. The first chapter is a brief introduction to: dynamic systems; the identification of system parameters from records of input and output; and the wave energy converters which provide case studies to motivate the research. In the second chapter I discuss dynamic systems, which can be divided into linear and nonlinear dynamic systems. I also present brief reviews of strategies for the identification of dynamic systems, covering their history and areas of application. In the third chapter, I consider linear systems. The discretization of differential equations for dynamic systems is a recurrent theme, and I consider forward, backward and central differences in detail for linear systems. I then consider estimation techniques including least squares, the Kalman filter and spectral analysis. The fourth and fifth chapters discuss several system identification techniques for nonlinear dynamic systems in the time domain and frequency domain, respectively. The sixth chapter starts with an introduction to wavelet transforms, covering both continuous and discrete wavelet transforms. Several wavelet methods for

system identification of dynamic systems are discussed there. The seventh chapter presents the four papers from this research, which form the main component of this thesis. An outline is given for each paper, and all the papers are presented in the format in which they were printed. In the last chapter, I present the conclusions from this research, together with suggestions for potential future research following this thesis and on system identification generally.

Chapter 1

Introduction

Dynamics is the branch of physics which deals with motion. The study of dynamic systems started in the Greek era, famously with Aristotle, who defined motion as the actuality of a potentiality. Aristotle classified motion as natural, voluntary and forced [58] and hence proposed that any object moves because of the forces acting on it, either visible or invisible. Galileo Galilei, who is known as the father of modern physics and modern astronomy, set the foundations for understanding the motion of objects on the earth's surface. He formulated the basic law of falling bodies with his famous Leaning Tower of Pisa experiment, where he demonstrated that the descent time of a falling object is independent of the object's mass. This became a basis for Newton's law of gravity. Galileo's greatest contribution is the concept of inertia: the velocity of a moving object will remain constant unless an external force (e.g. a frictional force) acts on it. This concept was used by Newton to formulate his first law of motion. Newton, who was a key figure in the emergence of modern science, formulated the famous laws of motion and the law of gravity. However, it is Gottfried Leibniz who defined and elaborated the scientific term 'dynamics' as it is used in the modern sciences today. Other notable scientists who made significant contributions to the study of dynamics before the 1900s are Kepler (Kepler's laws), Descartes, Cavalieri and Fermat. The study of dynamic systems looks at how one state develops into another state over the course of time. For example, in evaluating the mass-spring system

or fluid flow in pipes. Dynamic systems can be any systems whose behaviour or characteristics can be observed over some interval of time. Complex dynamic systems are now being applied in interdisciplinary studies, from human movement systems or transport systems to economic systems. Most of these dynamic systems are complex and therefore have to be studied and analyzed further so that their behaviour can be understood. By understanding the system's behaviour, information to inform decision making related to the control, management, acquisition or transformation of the system may be gained [17]. System identification is an area which deals with building, identifying or measuring the mathematical model of a system. The term "identification" was first coined by Zadeh [138] for the problem of determining the input-output relationship of systems, and is now an essential approach in model estimation for dynamic systems. Previously, it was known in the control community as system characterization, system measurement or system evaluation. One purpose of system identification is to diagnose the properties of a system where, for example, the aim is to identify the system's parameter values so that they can be used to design a control strategy [6]. System identification is a very broad area with a variety of methods and approaches. The choice of method depends on the character or behaviour of the system model, which may be linear, nonlinear or hybrid. System identification techniques and their applications have been discussed in [82, 11] and in several survey papers [6, 2, 44, 70, 83]. Usually, a mathematical model is used as the representation of a dynamic system, which provides a basis for analysis and for engineering design. From the mathematical model, decisions can be proposed and actions can be evaluated.
This can be done by forecasting the response from the mathematical model and then evaluating the performance. This helps in prototyping and concept evaluation, while reducing risk and providing assessment of safety aspects. By using a mathematical model, costs can be reduced, since a system can be evaluated with a model at less cost than with the actual system. It is also much safer, since we can assess any potential dangers and take precautions before the real system is run. From the mathematical model, we can simulate a system using a computer, which is generally much faster than a trial run of the actual system. This thesis discusses methods for identifying dynamic systems from measurements of inputs and outputs. Various system identification techniques are presented and discussed, as applied to linear and nonlinear dynamic systems. These methods work in the time domain, the frequency domain, or both. This thesis also discusses prediction techniques for a dynamic system's responses given a model of the system and its inputs. Methods based on wavelets are the main focus of this work. Together with the discussion of wavelet transforms and wavelet methods for system identification, this research also compares wavelet methods with other time domain or frequency domain system identification techniques in the literature. The comparison is performed by applying these techniques to the model of a spring-mass-damper system for the identification of linear dynamic systems, and to the Duffing and Van der Pol systems for the identification of nonlinear dynamic systems. Two types of wave energy converters are considered as case studies: the heaving buoy wave energy converter (HBWEC) and the oscillating flap wave energy converter (OFWEC). From the models of the wave energy converters, the responses have been simulated using methods discussed later in this thesis.

1.1 Applications Considered

The methods discussed here are generally applicable, but the context of this work is the field of wave energy converters. Each year, global demand for and consumption of energy, especially fuel and electricity, has increased. In 2008, global energy consumption was around 1.5 × 10^5 TWh per year, and it is predicted to rise to approximately 2.3 × 10^5 TWh per year by 2035 [50, 28]. According to the Worldwatch Institute [135], there are still two billion people without a proper electricity supply. Demand is rising in developing countries, which are doubling their needs every eight years. Fossil fuels and nuclear power cannot support this energy demand. The ongoing petroleum crises, combined with global warming, have pressed governments and industries to look for alternative sources of energy. These alternative energy sources should be clean, with low environmental impact even in the case of spillage or some other accident. The energy resource should also be sustainable and readily available, so that no country can monopolize it. Another highly prioritized criterion in considering any alternative energy is cost effectiveness. Only renewable energy sources can satisfy these conditions. There are many types of renewable energy sources, such as hydro energy, solar energy, wind energy and ocean energy. Each of these energy sources has its own advantages and disadvantages. The suitability of each renewable energy source mostly depends on resource availability. However, a major issue arises in harnessing renewable energy, since production needs to be sustainable. The processes associated with collecting this renewable energy, transforming it and then supplying it in a usable form present many issues.
Consider the renewable energy sources mentioned above, the suitability of which obviously depends on the geography of the location where the energy will be harnessed. For hydro energy, an ample supply of water is required from a river, lake or reservoir with gravitational potential that can generate hydroelectricity. Ocean energy, by contrast, is only suitable for coastal areas. Wind is available everywhere around the world, but higher altitudes and coastal areas are preferred locations for wind farms since they have a higher wind energy density than other areas. For solar energy, long daylight hours and sunny weather will maximize the amount of energy that can be generated. Solar, wind and ocean waves are related to each other: wind is created by the sun, while ocean waves are generated by the wind. Even though wave power potential is lower than wind and solar power, it is more persistent and spatially concentrated [37]. Seventy percent of the earth's surface is covered with water, and 98% of that is ocean; therefore, a vast renewable energy supply is available. Ocean power plants also need not take up land space, since the open sea is preferable to the shoreline. Ocean power is likely to have less environmental impact than wind power, while some studies have shown that it can even have a positive effect on the ocean ecosystem [77]. However, wave energy systems used to date are still not cost efficient. The device structures also face the possibility of being destroyed by rough seas, the same problem faced by other offshore structures such as drilling platforms. There is also the problem of predicting wave height, which is very important in determining the suitability of locations for power plants and for the energy harvesting devices.
System identification techniques have been suggested as an alternative approach to hydrodynamic modelling, given that a model can be determined from the measured input and output of the system [30]. Since most offshore structures, including WECs, are nonlinear, system identification techniques for nonlinear systems provide a platform for the investigation of WECs.

Chapter 2

Background to Dynamic Systems

2.1 Introduction

State space models for dynamic systems that rely on digital computers have been used in the aerospace industry since the 1960s to analyze automatic control systems. They have been used extensively in control engineering [66, 44, 70, 83]. Other applications of dynamic systems are in economics and finance [26, 136]. In electrical engineering, they have been used for circuit analysis, simulation and design [105, 62], and in mechanical and civil engineering, especially to study the dynamics of structures. Another application is in navigation systems such as GPS. Physical processes are not precisely described by mathematical models, but a model is considered good if it gives accurate predictions and includes the main features of the process. There is a complete general theory of linear models which is adequate for many purposes. Linear systems are relatively easy to describe and analyze because the steady state response to a sinusoidal input is at the same frequency, and the gain and phase shift do not depend on the amplitude of the input. However, there are many other applications which require a nonlinear model. There is no encompassing analysis of nonlinear models, and a variety of approaches is available. One approach is to linearize a nonlinear dynamic system by taking local linear approximations. Based on my review of the developments in the modelling of nonlinear systems up until 2010, there have been remarkable advancements in the research associated with nonlinear systems [70, 83]. This has been facilitated by digital computers, which support mathematical analysis through numerical computation and computer simulation [44].

2.2 Dynamic Systems

Dynamic systems essentially refers to any physical environment whose position or state can be described numerically. A dynamic system evolves with time, and the dynamic system model relates the system's current states to its past states. Dynamic systems can be either linear or nonlinear depending on their properties and behaviour. There is a complete general theory of linear models for dynamic systems, but in many applications a linear model does not provide an adequate approximation and may not even indicate the general behaviour of the system. From a mathematical perspective, the difference between linear and nonlinear systems depends on the model of the system itself, that is, whether it can be modelled by a linear or a nonlinear equation. The details of the model's equations explain how the systems interact and behave. Systems that behave linearly are usually easier to define, and exact solutions are relatively easy to find. Usually, for a linear system, the model can give an almost precise prediction of the system's future behaviour. There are no interactions between the independent and the dependent variables in a linear system. The state of a linear system can grow or decay exponentially, or even cycle periodically, by either decaying or growing in oscillations, as observed in the mass-spring system. Most systems in this world do not behave linearly. Even though some do behave gradually and predictably, they might not fit a linear model. The most common nonlinear behaviours are classified in terms of chaos, multistability, amplitude death, aperiodic solutions or solitons [121, 3, 113, 111]. The behaviour of a nonlinear system is usually unpredictable and sometimes chaotic. There are generally no exact solutions for nonlinear dynamic systems, and therefore we have to redefine what we consider to be a solution. Nonlinear equations can take many different forms, where the nonlinear term in the equation makes the model nonlinear.

Dynamic systems can be modelled mathematically, thus allowing the systems to be simulated and analyzed without having to build the real system. This saves the expense of building the real system and also saves time, since the model can be simulated in less time than the real system takes. The model can describe how the system performs and hence provides the opportunity to understand and study the system's dynamics and processes prior to construction. However, even though a chosen model replicates the values, states and positions of a real system, it is only a model and not the exact model of the system. Therefore, there may be more than one model for a single system, and different models can be used to investigate different aspects. The main variables of a dynamic system model are the outputs from the system, which can be measured, in response to the inputs, which are external to the model. The inputs usually can be determined or selected and can be controlled. For complex systems, the inputs can be outputs from models of other systems. From the perspective of signal processing, the inputs are transformed by a system into outputs. One common approach to viewing and describing this process uses the transfer function; another uses the state space. Using the transfer function approach, the process is described by considering how exponential inputs are transformed into exponential outputs. Alternatively, using the state space approach, the process involves the states as intermediate variables, but the ultimate aim is still to describe how inputs lead to outputs. Dynamic systems can be studied in both continuous and discrete time. An idealized dynamic system can be modelled in continuous time by a differential equation (DE), or in discrete time by a difference equation obtained from the differential equation.
The discrete forms can be thought of as approximations to the idealized linear system, in as much as the derivatives are approximated by finite differences. An exact representation is obtained if the input is assumed constant over the sampling interval. A discrete model is as realistic for a physical system as a continuous model. Recent work with dynamic systems generally relies on digital computing. Given sets of discrete data and numerical function values, discrete time models are usually more practical and are preferred for systems analysis using digital computers.
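As a toy numerical illustration of this point (the decay model, rate and step sizes below are my own choices for the sketch, not examples from the thesis), a forward-difference discretization can be compared against the exact continuous solution; shrinking the sampling interval brings the discrete model closer to the continuous one:

```python
import numpy as np

# Discretize dx/dt = -a*x by a forward difference,
#   x[t+1] = x[t] + dt * (-a * x[t]),
# and compare with the exact solution x(t) = x0 * exp(-a*t).
a, x0, T = 1.0, 1.0, 1.0

def euler_final(dt):
    """Value at time T obtained by stepping the difference equation."""
    x = x0
    for _ in range(int(round(T / dt))):
        x = x + dt * (-a * x)
    return x

exact = x0 * np.exp(-a * T)
err_coarse = abs(euler_final(0.1) - exact)    # large sampling interval
err_fine = abs(euler_final(0.001) - exact)    # small sampling interval
assert err_fine < err_coarse                  # refinement reduces the error
```

The same pattern applies to the backward and central differences considered in Chapter 3; only the update rule changes.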

In practice, dynamic variables are usually measured with electronic apparatus, and continuous signals are digitized to give a time series. Examples of discretely sampled signals are audio signals, which are commonly digitally sampled for recording or communication, and video signals, which are sampled in time as discrete frames. By taking a sufficiently small sampling interval, the discrete model can be made arbitrarily close to the continuous model. For example, an analog-to-digital (A/D) converter, which converts a continuous quantity to a discrete digital number, works at megahertz (MHz) rates. Thus, at very high sampling rates, where the sampling intervals are very small, a physical system in continuous time is well approximated by a discrete model. However, there is a limitation to discrete sampling, known as aliasing. Aliasing is an effect that occurs when the discretely sampled signal is insufficient to capture the changes in the signal, which happens when the signal contains frequencies above half the sampling frequency. Following Nyquist's theorem, aliasing can be avoided if the sampling frequency is at least twice the highest frequency present in the signal; half the sampling frequency is known as the Nyquist frequency. In practice, anti-alias filters are used together with the analog-to-digital converter to ensure aliasing is eliminated during signal sampling [35]. In continuous time, the dynamic system model takes the form of a differential equation

ẋ(t) = f(x(t), p, t),   (2.2.1)

where x is a vector of system variables, f is a function of the continuous time system variables and p is a vector of system parameters. For a discrete time dynamic system, the model is in the form of a difference equation

x[t + 1] = g(x[t], p, t),   (2.2.2)

where g is a function of the discrete time system variables which maps the vector x to the next time step and p is a vector of system parameters.
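For instance, a difference equation of the form of Eqn. 2.2.2 can be simulated by direct iteration. The sketch below (an assumed example, not a system from the thesis) uses the one-dimensional logistic map as g:

```python
import numpy as np

# Iterate x[t+1] = g(x[t], p) for the logistic map
# g(x, p) = p * x * (1 - x), a classic discrete-time dynamic system.
def g(x, p):
    return p * x * (1.0 - x)

def simulate(x0, p, n_steps):
    """Return the trajectory x[0], x[1], ..., x[n_steps]."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        x[t + 1] = g(x[t], p)
    return x

traj = simulate(x0=0.2, p=2.5, n_steps=100)
# For p = 2.5 the trajectory settles to the fixed point 1 - 1/p = 0.6.
```

Time-domain simulation of this kind is the basic tool used in Chapter 4 for nonlinear systems.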

2.2.1 Linear Dynamic Systems

The equation of motion for a linear dynamic system involves only polynomial functions of degree one. The system variables are simple and do not involve nontrivial functions such as squares, square roots, absolute values or threshold functions [116]. Systems can be considered linear if they satisfy the following two properties:

1. Additivity Property

A[x1 + x2] = A[x1] + A[x2],   (2.2.3)

2. Homogeneity Property

A[cx] = cA[x],   (2.2.4)

where A[·] is the system. Together these are known as the superposition principle. A linear system always has a unique solution, which in the time domain can be described in terms of its impulse response, h(·), through convolution. The impulse response function is the response, or output, when a dynamic system is forced by a unit impulse signal, or Dirac delta function, defined by

δ(t) = 0 for all t ≠ 0,   (2.2.5)

and

∫_{−∞}^{∞} δ(t) dt = 1.   (2.2.6)

For the unit impulse signal δ(t), the response is h(t). Hence, if the unit impulse is δ(t − τ), then the response will be h(t − τ). Since the system is linear, if the impulse is multiplied by x(τ), then the output will be h(t − τ)x(τ). Therefore, from the definition of the Dirac delta function, the relationship between the input x(t) and the output y(t) in continuous time is

y(t) = ∫_{−∞}^{∞} h(t − τ)x(τ) dτ,   (2.2.7)

and in discrete time,

y[t] = Σ_{k=−∞}^{∞} h[t − k]x[k].   (2.2.8)

Figure 2.2.1: Linear system in time domain

This relationship is shown in Fig. 2.2.1. Furthermore, Eqns. 2.2.7 and 2.2.8 can be written as a convolution, denoted by

y(t) = h(t) ∗ x(t). (2.2.9)

The signals x(t) and y(t) can be transformed into their frequency representations, X(ω) and Y(ω), respectively, through the Fourier transform. The same applies to the impulse response function h(t), whose Fourier transform H(ω) is known as the transfer function. The convolution can be written in the frequency domain as the simple product

Y(ω) = H(ω)X(ω).   (2.2.10)
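This time-domain/frequency-domain correspondence is easy to check numerically. The sketch below (the input signal and impulse response are arbitrary choices for illustration) compares a direct circular convolution, as in Eqn. 2.2.8, with the pointwise product of discrete Fourier transforms, as in Eqn. 2.2.10:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)          # input signal x[k]
h = np.exp(-np.arange(n) / 4.0)     # a decaying impulse response h[k]

# Time domain: circular convolution y[t] = sum_k h[t - k] x[k].
y_time = np.array([sum(h[(t - k) % n] * x[k] for k in range(n))
                   for t in range(n)])

# Frequency domain: transform, multiply pointwise, invert.
y_freq = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

assert np.allclose(y_time, y_freq)  # Y(ω) = H(ω) X(ω)
```

For finite sampled records the equivalence holds exactly for circular convolution, which is the form the discrete Fourier transform implements.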

Another property of linear systems is that the response to a disturbance at a given frequency occurs at the same frequency. The response is also proportional to the amplitude of the disturbance, and the response to several disturbances is equal to the sum of the responses to the individual disturbances. Given these additional properties, the analysis of a linear system is generally much easier in the frequency domain than in the time domain. A linear dynamic system can also be described using the state space equations

dx/dt = A(t)x(t) + B(t)u(t),   (2.2.11)
y(t) = C(t)x(t) + D(t)u(t),   (2.2.12)

where x(t) is the state vector, u(t) is the input or control vector and y(t) is the output or observation vector. The matrices A, B, C and D are known as the dynamics, input, output and feedthrough matrices, respectively. Eqns. 2.2.11 and 2.2.12 are called the dynamic equations and the measurement equations, respectively. The system is called autonomous if there is no input. Usually, the feedthrough matrix D = 0.
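As a minimal numerical sketch of Eqns. 2.2.11 and 2.2.12 (the matrices, step size and input below are illustrative assumptions, with constant A, B, C and D = 0), the dynamic equation can be stepped forward in discrete time with a simple Euler update and the output read off from the measurement equation:

```python
import numpy as np

# A lightly damped oscillator written in state space form.
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])       # dynamics matrix
B = np.array([[0.0], [1.0]])       # input matrix
C = np.array([[1.0, 0.0]])         # output matrix (D = 0)

dt, n_steps = 0.01, 3000
x = np.zeros((2, 1))               # initial state
u = 1.0                            # constant (step) input
ys = []
for _ in range(n_steps):
    x = x + dt * (A @ x + B * u)   # Euler step of the dynamic equation
    ys.append((C @ x).item())      # measurement equation, y = C x

# The step response settles near the static gain -C A^{-1} B u = 0.25.
```

The continuous-time steady state is found by setting dx/dt = 0, giving x = -A^{-1} B u, which the discrete trajectory approaches.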

When u and y are scalar, the system is called a single-input single-output (SISO) system; otherwise it is called a multiple-input multiple-output (MIMO) system. As an example, a single degree of freedom (SDOF) oscillating linear system such as the damped mass-spring system can generally be modelled using a second order differential equation such as

m ÿ_t + c ẏ_t + k y_t = x_t,   (2.2.13)

where y is the response or output at time t, x is the input or force at time t, m is the mass, c is the damping coefficient and k is the stiffness. In the simplest form, where the mass is allowed to move in a single direction only, the model can also be written as

ÿ_t + 2ζω_n ẏ_t + ω_n² y_t = x_t / m,   (2.2.14)

where

ζ = c / (2√(mk))   (2.2.15)

is the damping ratio and

ω_n = √(k/m)   (2.2.16)

is the natural frequency.
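For concreteness, the short sketch below evaluates Eqns. 2.2.15 and 2.2.16 for assumed values of m, c and k (the numbers are illustrative only, not parameters used in the thesis):

```python
import numpy as np

# Modal parameters of a damped mass-spring system.
m, c, k = 2.0, 1.2, 50.0            # mass, damping coefficient, stiffness

omega_n = np.sqrt(k / m)            # natural frequency (rad/s), Eqn. 2.2.16
zeta = c / (2.0 * np.sqrt(m * k))   # damping ratio, Eqn. 2.2.15

print(omega_n, zeta)                # 5.0 0.06
```

Here ζ < 1, so the free response is an oscillation at (approximately) ω_n with exponentially decaying amplitude, the underdamped behaviour described above.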

2.2.2 Nonlinear Dynamic Systems

Linear equations have been used extensively as approximations to many nonlinear dynamic systems. Even though linearizations can perform effectively, there are limitations which can cause some important nonlinear behaviour to be missed. For nonlinear dynamic systems, the equations of motion contain nontrivial functions such as squares, cubes, square roots, products of different system variables or threshold functions. Most dynamical systems are actually nonlinear, and the processes are usually nonstationary. According to Kerschen et al. [70], common types of nonlinearity are:

• geometric nonlinearity, which is caused by large displacements of the structures or large deformations of flexible elastic structures, for example slender structures in civil or mechanical applications and tensile structures such as cables,

• inertia nonlinearity, which corresponds to nonlinear terms that contain veloc- ities or accelerations in the equation of motion,

• nonlinear material behaviour, such as observed when a material undergoes nonlinear elasticity, plasticity or viscoelasticity,

• damping dissipation, such as the dry friction effects and structural damping,

• nonlinearity due to boundary conditions, such as free surface in fluids and clearances.

The main difference between linear and nonlinear dynamic systems concerns the superposition principle. The superposition principle, which requires linearity and homogeneity, is not applicable to nonlinear dynamic systems. Nonlinear dynamic systems may also have multiple isolated equilibrium points, whereas linear systems have only one equilibrium point. This means that nonlinear dynamic systems can have multiple solutions, while for linear dynamic systems the solution is always unique. Nonlinear dynamic systems may also exhibit behaviours such as limit cycles, bifurcation and chaos [3]. An unstable nonlinear dynamic state can go to infinity in finite time, which is not possible for a linear dynamic system. Given a sinusoidal input, the output from a nonlinear dynamic system may contain multiple harmonics and sub-harmonics with various amplitudes and phase differences. There are many methods that can be used to study nonlinear systems, such as the linearization approach, nonlinear extensions of the concept of mode shapes, and perturbation methods such as the method of averaging, the Lindstedt-Poincare technique and the method of multiple scales. Even though for some nonlinear systems a linear model may be a satisfactory approximation, such models will still have some limitations [30]. That is why many other techniques have been proposed to analyze nonlinear dynamic systems. However, there is still no general method that can be used to analyze all types of dynamic systems or wide classes of nonlinear dynamic systems, mainly because of their highly individualistic nature. Given the recent advancements in computer and software technology, some previously unsolvable nonlinear problems have been successfully approached. This has led to a different way in which dynamic systems can be viewed and solved. These computational advances make complex calculations easier and faster.
Computer technology provides ways for interactive modelling and simulation of dynamic systems [73]. Instead of focusing only on the quantitative (numerical) aspects, many computer packages provide ways of accessing the qualitative aspects of nonlinear dynamic systems, giving a better understanding of the nonlinear behaviours.

Examples of Nonlinear Dynamic Systems

Most of the work in this thesis on nonlinear dynamic systems has been applied to the Duffing and Van der Pol oscillators, given that both are considered simple nonlinear systems. Both can be modelled as single degree of freedom nonlinear systems which can be linearized by some linearization techniques. By removing the nonlinear part from the differential equation, which may be justified in some circumstances, the system model becomes a linear system model such as the previous linear mass-spring-damper system. For a Duffing system, one nonlinear behaviour is the jump phenomenon, where the steady state behaviour changes dramatically due to a transition from one localized stable solution to another localized stable solution. Other behaviours observed for the Duffing system are local and global bifurcations which result in chaotic responses [67]. The Duffing nonlinear system can be modelled as

m\ddot{y} + c\dot{y} + (k + k_3 y^2)\,y = u,    (2.2.17)

where u and y are the input and output of the system respectively, m is the mass, c is the linear viscous damping coefficient, k is the linear elastic stiffness coefficient and k_3 is the nonlinear feedback cubic stiffness coefficient. If k_3 = 0, then Eqn. 2.2.17 reduces to a basic SDOF linear model. The Van der Pol oscillator was introduced by Balthasar van der Pol [126, 127] to model the behaviour of nonlinear vacuum tube circuits, or electrical circuits with a triode valve. It can be modelled by a second order differential equation

\ddot{y} + \mu (y^2 - 1)\,\dot{y} + y = u,    (2.2.18)

where \mu > 0 is the nonlinear damping coefficient. Without the nonlinear friction term, \mu (y^2 - 1)\dot{y}, Eqn. 2.2.18 will be a simple harmonic oscillator such that

\ddot{y} + y = u.    (2.2.19)

This nonlinear friction term depends on the amplitude of the oscillator. If the oscillator amplitude is large, the nonlinear term is positive, so the oscillations are damped, resulting in decaying motion. If the oscillations are small, the nonlinear term is negative, a behaviour known as anti-damping, which causes an amplification of the motion. In other words, energy is generated at low amplitudes and dissipated at high amplitudes. The oscillator is self-sustaining since energy is supplied to the system when the oscillations are small and removed from the system when the oscillations are large. Since this oscillator settles into stable oscillations, it is known as a relaxation oscillator; in other words, the Van der Pol oscillator is a system that exhibits limit cycle oscillations. Extensions of the Van der Pol oscillator demonstrate quasiperiodicity, elementary bifurcations [19], and chaos. The Van der Pol oscillator model has been used to describe the action potentials of biological neurons [39], to generate electrocardiography (ECG) like signals [69], to model resonant tunneling diode circuits [115] and to simulate two plates in a geological fault [24].
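The limit cycle behaviour described above can be checked numerically. The following is a minimal sketch using forward Euler integration of the unforced (u = 0) Eqn. 2.2.18; the step size, the value of \mu and the initial conditions are arbitrary choices made only for illustration.

```python
def van_der_pol_step(y, v, mu, dt, u=0.0):
    """One forward-Euler step of y'' + mu*(y**2 - 1)*y' + y = u (Eqn. 2.2.18)."""
    a = u - mu * (y * y - 1.0) * v - y   # acceleration from the equation of motion
    return y + dt * v, v + dt * a

def simulate(y0, v0, mu=1.0, dt=1e-3, steps=40000):
    """Integrate the unforced oscillator and return the displacement path."""
    y, v = y0, v0
    path = []
    for _ in range(steps):
        y, v = van_der_pol_step(y, v, mu, dt)
        path.append(y)
    return path

# A small start grows (anti-damping) and a large start decays (damping);
# both approach the same limit cycle, whose peak amplitude is close to 2.
small = simulate(0.1, 0.0)
large = simulate(4.0, 0.0)
print(max(small[-10000:]), max(large[-10000:]))
```

The two trajectories illustrate the self-sustaining behaviour: regardless of whether the motion starts inside or outside the limit cycle, it converges to the same oscillation.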

2.3 System Identification

The purpose of system identification is to determine a mathematical relation between the observed behaviours or responses of the system (outputs) and the external influences or forces on the system (inputs). The system can be described using mathematical models, since the dynamic behaviour of a system or process is observed in either the time domain or the frequency domain. The system identification problem is an important aspect of many fields, especially engineering, mathematics, statistics, economics and the physical sciences. Since dynamic systems can be modelled by mathematical models and are usually approached by mathematical methodologies, these are also called modelling problems or time series analysis problems by many researchers [90]. Important aspects of system identification are system modelling and estimation, which includes the prediction or forecasting of the state of a system, also known as the response or output of a system. There are various techniques of system identification, and the choice of technique depends on whether the dynamic system model is linear or nonlinear. There are nonparametric approaches which try to estimate a generic model for the system by considering the system's step responses and impulse responses or frequency responses. The parametric approach, on the other hand, estimates parameters from a specified model. According to Ljung [82], there are several important steps in building a model of a system. Building a system model requires data from the system (input/output data), candidate models and rules for selecting the best model from the candidates. The chosen model then needs to be validated based on the data, the prior information on the system and also the purpose of the system.
There is no such thing as a perfect model that can fully describe a system, but as long as the model adequately describes the aspects that interest us, it can be considered a good model. Statistics, econometrics and time series analysis are among the core areas of system identification, where much of the early work on system identification was done [44]. Statistical techniques were important for such things as information extraction, parameter estimation, prediction and validation. By using statistical methods, mathematical models of dynamical systems can be built from observed input and output data. One important method is the method of least squares, which was introduced in the early 19th century [76]. The least squares method enables us to obtain a line of best fit for a data set, and hence a proposed best model for summarizing that data. Correlation and regression are among the most commonly used techniques. Both are useful in describing the relationships between variables, and such relationships are very important in system identification. Another major approach in the development of the theory and methodology of system identification is prediction-error identification based on minimizing a parameter-dependent criterion, which is closely related to time series analysis [5, 41]. This approach, pioneered by Åström et al. [5], applies the maximum likelihood framework for estimating parameters of difference equation models. Based on the concepts of bias and variance for an estimated transfer function [81, 82], system identification came to be viewed as a design problem [44]. The years from 1975 to 1985 were the golden years of system identification in the engineering fields. Most of the methods introduced during this time were based on prediction-error criteria and input-output models.
These developments were also helped along by advances in computer technology, with faster computation and the development of specialized software for system identification. Ljung [82] separated the concept of a parametric model structure from the choice of identification criterion. The concept of a parametric model structure provided the platform for computing predictions and parameter-dependent prediction errors. The other concept, the choice of the identification criterion, focuses on the prediction errors and the parameter vector. Later, in the 1990s, the system identification field shifted its attention to frequency domain identification, closed-loop identification, the use of orthogonal basis functions (such as wavelet functions) for identification, methods for quantifying model uncertainty, errors-in-variables identification, and nonlinear system identification [44]. System identification methods usually require both the input and output data, as for example in the least squares method. However, there are methods that can be based on information from the output data only, such as the frequency domain decomposition method [21]. Even though a technique based on input-output data would likely provide more accurate results, there are times when the input data is not accessible and cannot be obtained. Hence, a technique based only on the output signal can be used. Systems engineers use the principles of the design of experiments for system identification [85, 31]. In proposing a model for a dynamic system, the process of designing the experiment is crucial to ensure the unknown model parameters will be accurately estimated and verified [27]. The quality of the observed signals (input and output) also plays an important role in ensuring the quality of the identification.
The theory of optimal experimental design provides a way of selecting inputs that yield maximally precise estimators [46, 129]. The most active area in system identification nowadays is the identification of nonlinear models. Most of the research on nonlinear system identification has focused on finding the best identification technique that can cover a variety of different nonlinear systems. According to Ljung [83], other active issues regarding nonlinear system identification are the parameterization of nonlinear models of dynamic systems; the stability of predictions and simulations for nonlinear models; identifying whether a nonlinear system operates in a closed loop; and finding effective data-based nonlinearity tests for dynamical systems [83]. There is also growing research interest in model reduction, which aims to identify the simplest nonlinear models for a nonlinear system. The current technological infrastructure has provided the computing requirements to handle a large volume and variety of data formats with various data mining tools. This propels us into a new area of system identification: modelling complex systems. Research in this area includes Just-in-Time models [29] and the Model-on-Demand concept [110]. As suggested by Ljung [83], Bayesian networks and sensor networks could also be applied in this area.

Chapter 3

Linear System Identification

3.1 Introduction

Linear dynamic systems are a very important class of dynamic systems. Even though most dynamic systems are nonlinear, the linearization of those systems can give some information about a particular nonlinear system and also provide a starting point for system identification. Given their simplicity compared with nonlinear systems, several techniques proposed for system identification of nonlinear systems are based on extensions of linear system identification techniques. System identification of linear systems can be considered the most developed area in this field [6]. This chapter discusses several known techniques for linear system identification. One of the main aims in system identification is to predict the response or output of the system. Based on the descriptions or prior information of the system, its responses are simulated given various types of input, force or disturbance. Various methods have been proposed for predicting the output, most of which are based on model building techniques. Various model fitting techniques have been applied in system identification to extract information from the input and output data of the system. Model reduction, given its close relationship with model fitting, is another core area in system identification. Model reduction is useful in optimizing the system model. Two techniques that are relevant to system identification are the expectation-maximization (EM) algorithm [33] and regularization techniques [55].


Another important technique discussed here is the least squares method, which is well studied in mathematical statistics, time series analysis and econometrics. It was first introduced and published by Legendre [76]. It is one of the earliest and simplest techniques for system identification, used to find the relationship between the input and output data measured from a linear system. Time series analysis for system identification is primarily concerned with obtaining a mathematical model from, or for, time series data. The introduction of autoregressive and moving average models in the 1920s by Udny Yule and the development of the theory of stationary processes in the 1930s to 1940s were important parts of the new era of modern time series analysis [32]. A more formal approach to autoregression (AR) and the autoregressive moving average (ARMA) was presented in 1951 by Peter Whittle [132], and these models were later popularized by George E. P. Box and Gwilym Jenkins in their book [20]. This seminal book also details the importance of system identification. The details include models that have since gained their reputations in the identification of SISO systems. These are the AR and ARMA models, together with the autoregressive model with exogenous inputs (ARX) and the autoregressive moving average model with exogenous inputs (ARMAX). Yet another technique discussed here is the Kalman filter. In statistical forecasting, the Kalman filter is similar to the least squares method of forecasting, since the idea of the Kalman filter is based on the recursive least squares filter. The recursive least squares (RLS) filter, developed by Plackett [104], is a technique for finding the filter coefficients that minimize a weighted cost function. Even though it is fast to converge, it is computationally costly.
The difference between the Kalman filter and the least squares method is that the Kalman filter relaxes the assumption that the model coefficients have to be stationary, making it preferred for nonstationary linear models [92]. The Kalman filter was named after Rudolf Emil Kalman, the primary developer of this digital technique [65, 49]. The technique has been applied in many areas, with the first application by Stanley F. Schmidt at the National Aeronautics and Space Administration (NASA) to solve trajectory estimation and control problems for the Apollo program [89, 49]. Other applications include time series analysis [92], economics [54], navigation and tracking [51], system control [114] and image processing [108]. Filtering typically assumes that the parameters of a dynamic system are known and aims to estimate the state from current and past observations. Predictions based on a filtering technique are typically one or two steps ahead, which is useful for on-line estimation. For system identification of a linear system, the Kalman filter can be used to estimate the states of the system, such as its displacement (response, location or output), velocity or acceleration. The Kalman filter has also been used for parameter estimation, especially for time-varying parameters, which is practical for the on-line identification of a system. The final method discussed here is the spectral analysis method. The other techniques introduced above are time domain techniques, whereas the spectral analysis method is a frequency domain technique. In this method, the signal, which is in the time domain, is transformed into its spectrum, which is the frequency composition of the signal. For example, in optics, there are frequencies associated with the range of colours which we see in a rainbow.
There are also other frequency ranges in the electromagnetic radiation spectrum which are not visible to us, such as x-rays and gamma-rays. Spectral analysis is important since all living things, structures and mechanical systems are sensitive to the frequency of any input signal. For example, exposure to high frequency electromagnetic radiation can harm living things, especially humans. Humans and animals are also sensitive to light and sound signals. Structures such as towers or bridges can be sensitive to certain vibration frequencies, which may affect their structural integrity. For example, the Millennium Bridge and the Tacoma Narrows Bridge exhibit nonlinear mechanisms [43]. The Millennium Bridge was closed for modification on the day it was opened due to "synchronous lateral excitation". The Tacoma Narrows Bridge collapsed four months after construction due to "aeroelastic flutter" caused by a 42 mph side wind which induced resonance in a swaying motion. This is also why Roman soldiers used to break step when crossing bridges, presumably because of resonance problems which can collapse bridges. A sign on the Albert Bridge in London dating from 1873 warns marching soldiers to break step while crossing. Fourier analysis, also known as harmonic analysis, was named after Jean Baptiste Joseph Fourier. Notable researchers in the early stages of harmonic analysis are Leonhard Euler (trigonometric series), Daniel Bernoulli, Jean Le Rond d'Alembert (series in cosine functions), Joseph-Louis Lagrange (nonperiodic functions) and Marc Antoine Parseval des Chenes (Parseval's theorem). The introduction of the periodogram, which gives an estimate of the spectral density of a signal (George Stokes and Arthur Schuster at the end of the 19th century), was a major milestone in time series analysis and Fourier analysis.
In the 1940s and 1950s, the theory of nonparametric estimation of spectral densities was developed, where the statistical properties of the periodogram were derived and the first smoothed spectral estimator was proposed by P. J. Daniell. Other contributors to the development of spectral estimators include Ralph Beebe Blackman, John Wilder Tukey and Edward James Hannan [48, 18, 52]. Spectral analysis has been widely applied to linear dynamical systems, but most dynamical systems, as already stated, are actually nonlinear. For mathematical convenience, many researchers assume that the system is linear, since most techniques are based on linear systems and methods for linear systems work reasonably well for nonlinear systems. In recent years, some researchers have tried to extend the theory to much more general situations, such as the evolutionary spectra based approach, which works for linear and some nonlinear dynamical systems [106].

3.2 Discrete Time Simulation

The first technique discussed here is discrete time simulation, which will be used for predicting the response of the system. Data from the system, such as the input and output signals, is one source of information about the system. From the measured input and output signals of a system, the parameters of the system model can be estimated, which provides knowledge of the model structure. However, such data is not always available, and simulation can be used to provide it. In the previous chapter, we discussed the dynamic model of a SDOF linear system, which can be written as a second order differential equation,

\ddot{y}_t + 2\zeta\omega_n \dot{y}_t + \omega_n^2 y_t = x_t.    (3.2.1)

An estimate of the response for this differential equation model can be obtained by using finite difference methods. Three types of finite difference are commonly used: forward, backward and central differences.

Let us denote the value of function f at point t_i by f_i. The first forward difference of f_i is

\Delta f_i = f_{i+1} - f_i,    (3.2.2)

and the kth forward difference can be written in the form

\Delta^k f_i = \Delta^{k-1} f_{i+1} - \Delta^{k-1} f_i.    (3.2.3)

The first derivative of function f at point t can be written as

f'(t) = \lim_{h \to 0} \frac{f(t+h) - f(t)}{h}.    (3.2.4)

If we let h be the interval between t_i and t_{i+1}, which is fixed and nonzero, the first derivative can be approximated by

\frac{f(t+h) - f(t)}{h} = \frac{f(t_{i+1}) - f(t_i)}{t_{i+1} - t_i} = \frac{\Delta f_i}{h}.    (3.2.5)

Hence, the approximation to the first derivative by the forward difference can be written

f'(t_i) \approx \frac{\Delta f_i}{h},    (3.2.6)

where h = t_{i+1} - t_i is the sampling interval. For the kth derivative, we have

f^{(k)}(t_i) \approx \frac{\Delta^k f_i}{h^k}.    (3.2.7)

Similarly, the first backward difference is given as

\nabla f_i = f_i - f_{i-1},    (3.2.8)

and the kth backward difference as

\nabla^k f_i = \nabla^{k-1} f_i - \nabla^{k-1} f_{i-1}.    (3.2.9)

From this, the approximation to the first derivative using the backward difference is given by

f'(t_i) \approx \frac{\nabla f_i}{h},    (3.2.10)

and the kth derivative is

f^{(k)}(t_i) \approx \frac{\nabla^k f_i}{h^k}.    (3.2.11)

Finally, the first central difference is given as

\delta f_i = f_{i+1/2} - f_{i-1/2},    (3.2.12)

and the kth central difference is

\delta^k f_i = \delta^{k-1} f_{i+1/2} - \delta^{k-1} f_{i-1/2}.    (3.2.13)

Hence, the approximation to the first derivative by the central difference is given by

f'(t_i) \approx \frac{\delta f_i}{h},    (3.2.14)

while the kth derivative is

f^{(k)}(t_i) \approx \frac{\delta^k f_i}{h^k}.    (3.2.15)

It has been shown in many studies that the central difference approximation of derivatives is better than the forward and backward differences, for example in terms of its order of accuracy [88]. By using the central difference, the approximation of the first derivative in Eqn. 3.2.1 can be given in the form

\dot{y}_t \approx \frac{y_{t+1} - y_{t-1}}{2\Delta},    (3.2.16)

and the second derivative by

\ddot{y}_t \approx \frac{y_{t+1} - 2y_t + y_{t-1}}{\Delta^2},    (3.2.17)

where \Delta is the sampling interval. By substituting these approximations of the first and second derivatives into Eqn. 3.2.1, we can estimate the response at time t by

y_t = \frac{\Delta^2}{1 + \zeta\omega_n\Delta}\, x_{t-1} + \frac{2 - \Delta^2\omega_n^2}{1 + \zeta\omega_n\Delta}\, y_{t-1} + \frac{\zeta\omega_n\Delta - 1}{1 + \zeta\omega_n\Delta}\, y_{t-2}.    (3.2.18)
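The recursion in Eqn. 3.2.18 is straightforward to implement. The following is a minimal sketch; the parameter values and the step input are arbitrary choices, used only to check the recursion against the known static response x/\omega_n^2 of Eqn. 3.2.1.

```python
def simulate_sdof(x, zeta, omega_n, dt, y0=0.0, y1=0.0):
    """Simulate y'' + 2*zeta*omega_n*y' + omega_n**2*y = x using the
    central-difference recursion of Eqn. 3.2.18."""
    denom = 1.0 + zeta * omega_n * dt
    c_x = dt ** 2 / denom                       # coefficient of x_{t-1}
    c_1 = (2.0 - dt ** 2 * omega_n ** 2) / denom  # coefficient of y_{t-1}
    c_2 = (zeta * omega_n * dt - 1.0) / denom     # coefficient of y_{t-2}
    y = [y0, y1]
    for t in range(2, len(x)):
        y.append(c_x * x[t - 1] + c_1 * y[t - 1] + c_2 * y[t - 2])
    return y

# Step input: the response should settle at the static value x / omega_n**2
omega_n, zeta, dt = 2.0, 0.3, 0.01   # arbitrary illustrative values
x = [1.0] * 5000
y = simulate_sdof(x, zeta, omega_n, dt)
print(y[-1])  # approaches 1 / omega_n**2 = 0.25
```

Note that the steady state of the recursion is exactly x/\omega_n^2, since at steady state 1 - c_1 - c_2 = \Delta^2\omega_n^2/(1 + \zeta\omega_n\Delta), so the discretization preserves the static gain of the continuous system.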

3.3 Least Squares Method

Given that the input and output signals of the system are known, a technique known as the least squares method can be used to estimate the discretized impulse response, h_i. For a single-input single-output (SISO) linear dynamic system, the relation between the input and output signals can be defined as

y_t = h_0 u_t + h_1 u_{t-1} + \ldots + h_n u_{t-n} + \epsilon_t,    (3.3.1)

where t = n-1, \ldots, N and \epsilon_t is discrete white noise (from Eqn. 2.2.8). In matrix form, Eqn. (3.3.1) can be written

\mathbf{Y} = \mathbf{h} \cdot \mathbf{U} + \boldsymbol{\epsilon},    (3.3.2)

where

\mathbf{Y}^{\top} = \begin{pmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-N+n} \end{pmatrix}, \quad \mathbf{h}^{\top} = \begin{pmatrix} h_0 \\ h_1 \\ \vdots \\ h_n \end{pmatrix},

\mathbf{U} = \begin{pmatrix} u_t & u_{t-1} & \cdots & u_{t-N+n} \\ u_{t-1} & u_{t-2} & \cdots & u_{t-N+n-1} \\ \vdots & \vdots & \ddots & \vdots \\ u_{t-n} & u_{t-n-1} & \cdots & u_{t-N} \end{pmatrix} \quad \text{and} \quad \boldsymbol{\epsilon}^{\top} = \begin{pmatrix} \epsilon_t \\ \epsilon_{t-1} \\ \vdots \\ \epsilon_{t-N+n} \end{pmatrix}.

By using the least squares method, the impulse response, hi, can be estimated by

\hat{\mathbf{h}} = \mathbf{Y} \cdot \mathbf{U}^{\top} \cdot \left( \mathbf{U} \cdot \mathbf{U}^{\top} \right)^{-1}.    (3.3.3)
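Eqn. 3.3.3 can be sketched numerically. Below, data are simulated from a short, assumed impulse response and then the response is recovered by least squares; the impulse response values, signal length and noise level are arbitrary choices for illustration, and NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(0)

# An assumed "true" discretized impulse response, used only to generate data
h_true = np.array([0.5, 0.3, 0.1])
n = len(h_true) - 1

N = 500
u = rng.standard_normal(N)
# y_t = h0*u_t + h1*u_{t-1} + h2*u_{t-2} + noise, as in Eqn. 3.3.1
y = np.convolve(u, h_true)[:N] + 0.01 * rng.standard_normal(N)

# Build U with rows of lagged inputs, as in Eqn. 3.3.2 (zeros pad the start)
rows = [np.concatenate([np.zeros(k), u[: N - k]]) for k in range(n + 1)]
U = np.vstack(rows)   # shape (n+1, N)
Y = y                 # row vector of outputs

# h_hat = Y U^T (U U^T)^{-1}, Eqn. 3.3.3
h_hat = Y @ U.T @ np.linalg.inv(U @ U.T)
print(h_hat)  # close to [0.5, 0.3, 0.1]
```

In practice one would solve the normal equations with a numerically stabler routine (e.g. a least squares solver) rather than forming the explicit inverse, but the explicit form above mirrors Eqn. 3.3.3 directly.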

Another model known as the autoregressive with external input (ARX) model is given by

y_t = -a_1 y_{t-1} - a_2 y_{t-2} - \ldots - a_p y_{t-p} + b_1 u_{t-1} + \ldots + b_n u_{t-n} + v_t,    (3.3.4)

where y_t and u_t are the output and input signals, respectively, and v_t is white noise. The ARX model can also be written as

A(z)\, y_t = B(z)\, u_t + v_t,    (3.3.5)

where

A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_p z^{-p},

B(z) = b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + \ldots + b_n z^{-n}

are polynomials in the backward shift operator z^{-1}, for example

z^{-1} y_t = y_{t-1},    (3.3.6)

[6, 1]. A more general model, which considers additional filtering on the noise, is known as the autoregressive moving-average with external input (ARMAX) model. For the

ARMAX model, the relationship between the input, u_t, and the output, y_t, of a linear system is given as

y_t = x_t + w_t,    (3.3.7)

x_t = G(z)\, u_t,    (3.3.8)

w_t = N(z)\, v_t,    (3.3.9)

where v_t is white noise. The transfer functions are given by

G(z) = \frac{B(z)}{A(z)},    (3.3.10)

N(z) = \frac{D(z)}{A(z)},    (3.3.11)

where

D(z) = 1 + d_1 z^{-1} + d_2 z^{-2} + \ldots + d_{n_d} z^{-n_d}.

Hence the linear ARMAX model can be written as

A(z)\, y_t = B(z)\, u_t + D(z)\, v_t.    (3.3.12)

This can be rewritten as

y_t = -a_1 y_{t-1} - a_2 y_{t-2} - \ldots - a_n y_{t-n}
    + b_1 u_{t-1} + b_2 u_{t-2} + \ldots + b_n u_{t-n}
    + v_t + d_1 v_{t-1} + d_2 v_{t-2} + \ldots + d_n v_{t-n}    (3.3.13)

[14]. The least squares method can be used to estimate the parameters of the ARX model, given that the noise is zero-mean white noise. For the ARMAX model, an iterative method such as extended least squares (ELS) has to be used to estimate the parameters.
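As a sketch of least squares estimation for an ARX model, the example below simulates a first-order ARX process and recovers its coefficients by regressing y_t on the lagged output and input. To sidestep sign conventions, the model is written directly with generic regression coefficients; all numerical values are arbitrary choices for illustration, and NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a first-order ARX process: y_t = 0.8*y_{t-1} + 0.5*u_{t-1} + v_t
N = 2000
u = rng.standard_normal(N)
v = 0.05 * rng.standard_normal(N)   # zero-mean white noise
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + v[t]

# Regressor matrix: columns are y_{t-1} and u_{t-1}
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # close to [0.8, 0.5]
```

Because v_t is zero-mean white noise and uncorrelated with the regressors, ordinary least squares is consistent here; for an ARMAX model the coloured noise would bias this estimate, which is why an iterative scheme such as ELS is needed instead.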

3.4 Kalman Filter

The Kalman filter is an algorithm for estimating the states of a system, such as the system's position or velocity, from observation data. It supports estimation of past, present, and future states of the system, and it can be used even when the precise nature of the modelled system is unknown [131]. The Kalman filter works given the variance-covariance matrices of the random disturbance and of the noise corrupting the observations. Because the state can be defined to include parameters of the system, the Kalman filter can be used for system identification. The Kalman filter requires the dynamic system to be written in state space form. From the second order differential equation of the linear dynamic system in Eqn. 3.2.1, let the response displacement be y = x_1, the response velocity be \dot{y} = x_2 and the input/force be x = u. Hence Eqn. 3.2.1 can be rewritten as a set of two first order differential equations

\dot{x}_1 = x_2,

\dot{x}_2 = -2\zeta\omega_n x_2 - \omega_n^2 x_1 + u_t,    (3.4.1)

and the output will be

y = x_1.    (3.4.2)

This is the standard form required for many software packages such as MATLAB and R for simulation. The state space form of a linear dynamic system as in Eqn. 3.2.1 can then be written as

\dot{x} = Ax + Bu,    (3.4.3)

y = Cx + Du,    (3.4.4)

where Eqn. 3.4.3 is known as the dynamic equation and Eqn. 3.4.4 as the measurement equation. By substituting Eqns. 3.4.1 and 3.4.2, the state space form can be rewritten as

\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u,    (3.4.5)

y = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 \end{pmatrix} u.    (3.4.6)

Given that there is noise in the dynamic process and the observation process, the state space model for the Kalman filter will be written as

\dot{x} = Ax + Bu + w,    (3.4.7)

y = Cx + Du + v,    (3.4.8)

where w \sim N(0, Q) is the process noise and v \sim N(0, R) is the observation noise. Both noise processes are assumed to be zero-mean white Gaussian noise with covariance matrices Q and R, respectively. We will use the discrete Kalman filter, so that

x_{k+1} = A_k x_k + B u_k + w_k,    (3.4.9)

y_k = C x_k + D u_k + v_k,    (3.4.10)

where k is the time step. The Kalman filter algorithm consists of time update and measurement update equations. The time update equations predict the (k+1)th state and error covariance, while the measurement update equations correct the a priori estimate from the time update to obtain an improved a posteriori estimate [131]. The time update equations are given by

\hat{x}_{k+1}^{-} = A_k \hat{x}_k + B u_k,    (3.4.11)

P_{k+1}^{-} = A_k P_k A_k^{\top} + Q_k,    (3.4.12)

while the measurement update equations consist of

K_k = P_k^{-} C_k^{\top} \left( C_k P_k^{-} C_k^{\top} + R_k \right)^{-1},    (3.4.13)

\hat{x}_k = \hat{x}_k^{-} + K_k \left( y_k - C_k \hat{x}_k^{-} \right),    (3.4.14)

P_k = \left( I - K_k C_k \right) P_k^{-},    (3.4.15)

where K_k is the Kalman gain matrix and I is the identity matrix. The algorithm begins with the measurement update, where we need to specify initial estimates for the a priori state, \hat{x}_k^{-}, and the error covariance, P_k^{-}. Next, both are projected ahead by the time update equations using the updated state and error covariance. The projected state and covariance are then used in the next step of the measurement update process.
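The time and measurement update equations above can be sketched in a few lines. The example below applies them to an Euler-discretized version of the SDOF system of Eqns. 3.4.5 and 3.4.6 with no input (u = 0); the parameter values and noise covariances are arbitrary choices for illustration, and NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler discretization of the SDOF state space model (illustrative values)
omega_n, zeta, dt = 2.0, 0.1, 0.01
A = np.array([[1.0, dt],
              [-omega_n**2 * dt, 1.0 - 2 * zeta * omega_n * dt]])
C = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)     # process noise covariance
R = np.array([[0.01]])   # observation noise covariance

# Simulate the true states and noisy observations (autonomous, u = 0)
steps = 2000
x = np.array([1.0, 0.0])
truth, obs = [], []
for _ in range(steps):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    truth.append(x.copy())
    obs.append(C @ x + rng.multivariate_normal(np.zeros(1), R))

# Kalman filter: measurement update (3.4.13-3.4.15), then time update (3.4.11-3.4.12)
x_pred = np.zeros(2)   # initial a priori state estimate
P_pred = np.eye(2)     # initial a priori error covariance
estimates = []
for y_k in obs:
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)   # Kalman gain
    x_hat = x_pred + K @ (y_k - C @ x_pred)                  # corrected state
    P = (np.eye(2) - K @ C) @ P_pred                         # corrected covariance
    estimates.append(x_hat)
    x_pred = A @ x_hat                                       # project the state ahead
    P_pred = A @ P @ A.T + Q                                 # project the covariance ahead

err_filt = np.mean([(e - t)[0] ** 2 for e, t in zip(estimates, truth)])
err_raw = np.mean([(y - t[0]) ** 2 for y, t in zip(obs, truth)])
print(err_filt, err_raw)  # filtered error is smaller than raw measurement error
```

The mean squared displacement error of the filtered estimates is well below that of the raw noisy observations, illustrating the correction provided by the measurement update.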

3.5 Spectral Analysis

A time series in the time domain can be described in terms of its spectrum in the frequency domain by using Fourier analysis. The representation of the data in terms of frequency provides additional information for understanding the system. This spectral transform can also expose hidden information about the process of the system by looking at the signal from a different perspective. The Fourier transform uses sines and cosines as its basis functions, which are localized in frequency. Fourier analysis is usually used for stationary random processes, where all the spectral components exist all the time, since the statistical properties of a stationary random process do not change over time. Therefore, there is no need to consider spectra which exist only at a specific time. However, many processes are better modelled as nonstationary. One approach is to transform the nonstationary process into a stationary one, for example by fitting and removing a trend, but some nonstationary processes are too complex for this technique. To address this, Dennis Gabor in 1946 introduced the Short Time Fourier Transform (STFT) to deal with nonstationary processes. Usually, a window of data of sufficient duration which is stationary is used for the analysis. However, there is a problem in determining the window size for the transform, and it may still not replicate the real process, since the process is assumed to be stationary within the window while it is actually nonstationary.

3.5.1 Fourier Transform

For a signal x_t that is sampled at equal time intervals, \{x_t : t = \ldots, -1, 0, 1, \ldots\}, the discrete Fourier transform (DFT) is

X(\omega) = \sum_{t=-\infty}^{\infty} x_t e^{-i\omega t}, \quad -\pi \le \omega \le \pi,    (3.5.1)

and the inverse Fourier transform is

Z π 1 iωt xt = X (ω) e dω, (3.5.2) 2π −π P∞ with the condition that the Fourier transform exist only if t=−∞ |xt| < ∞. Parse- val’s theorem gives us that Z X 2 2 xt = 2π X (ω) dω, (3.5.3)

R ∞ 2 where −∞ xt dt is the total variability of the signal, which physically can be consid- ered as the energy. Meanwhile, the spectral density of the process can be also calculated by the discrete Fourier transform of the autocovariance function of the signal, γ (k), which is defined as γ (τ) = E [(x (t) − µ)(x (t + τ) − µ)].

∞ 1 X Γ(ω) = γ (k) e−iωk , −π ≤ ω ≤ π. (3.5.4) 2π k=−∞

As before, the DFT is only defined if P |γ (k)| < ∞. This condition is only satisfied for a stationary process. The inverse Fourier transform is

Z π γ (k) = Γ(ω) eiωkdω. (3.5.5) −π If we let k = 0, then Z π γ (0) = Γ(ω) dω = σ2, (3.5.6) −π which that the area under the spectrum equals the variance of the process. Negative frequencies are physically equivalent to the positive frequencies. Thus we can work with the one-sided spectrum, Γ (ω), where 0 ≤ ω ≤ π, which is defined as twice the positive frequency. CHAPTER 3. LINEAR SYSTEM IDENTIFICATION 31

For a continuous random process, {xt : −∞ < t < ∞}, the spectrum is 1 Z ∞ Γ(ω) = γ (τ) e−iωτ dτ , −π < ω < π (3.5.7) 2π −∞ and the inverse transform is Z ∞ γ (τ) = Γ(ω) eiωτ dω. (3.5.8) −∞ The cross-spectrum can be calculated by ∞ X −iωτ Γxy (ω) = γxy (τ) e , (3.5.9) τ=−∞ or for a continuous process as Z ∞ −iωτ Γxy (ω) = γxy (τ) e dτ, (3.5.10) −∞ where γxy (τ) = E [(xt − µx)(yt+τ − µy)] is the cross-covariance function of the pro- cess.
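Eqn. 3.5.6, the statement that the area under the spectrum equals the variance, is easy to verify numerically. The sketch below, with illustrative parameters, uses the periodogram $|X(\omega_j)|^2/(2\pi N)$ as a raw estimate of $\Gamma(\omega)$ on the DFT frequencies and integrates it; by Parseval's theorem the result matches the sample variance of the zero-mean signal.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
x = rng.normal(0.0, 1.0, N)
x = x - x.mean()                      # work with a zero-mean signal

X = np.fft.fft(x)                     # DFT at frequencies w_j = 2*pi*j/N
I = np.abs(X) ** 2 / (2 * np.pi * N)  # periodogram, a raw estimate of Gamma(w)
dw = 2 * np.pi / N                    # frequency spacing
area = np.sum(I) * dw                 # numerical integral over [0, 2*pi)
```

Here `area` equals the sample variance of the signal exactly (up to floating-point error), which is the discrete counterpart of Eqn. 3.5.6.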

3.5.2 Response amplitude operator

A linear dynamical system can be characterized by its gain and phase shift in response to a disturbance at specific frequencies. For many applications the gain is the main interest, and it is also known as the response amplitude operator (RAO). If the spectrum of the disturbance and the RAO are known, then the spectrum of the output can be calculated. This is important in investigating the response of a ship to sea states [91], and can also be applied to the response of land based vehicles such as a car on a dirt road or an aircraft on a runway. Conversely, given the spectra of the input and the output, the RAO can be estimated. Meanwhile, a sudden change in the spectrum of noise from a machine can be an early warning of a defect; this analysis, which is called signature analysis, can prevent catastrophic failure [56]. The linear system can be illustrated in the frequency domain as in Fig. 3.5.1, where $U(\omega)$, $Y(\omega)$ and $H(\omega)$ are the Fourier transforms of $u(t)$, $y(t)$ and $h(t)$ respectively. $H(\omega)$ is called the frequency response function and it only exists if the linear system is stable, so that $\int_{-\infty}^{\infty} |h(\tau)|\, d\tau < \infty$. By the convolution theorem, it can be shown that

$$Y(\omega) = H(\omega)\, U(\omega), \quad (3.5.11)$$

Figure 3.5.1: Linear system in frequency domain

Eqn. 3.5.11 is only true for linear systems. Therefore, the cross-spectrum can be written as

$$\Gamma_{uy}(\omega) = H(\omega)\, \Gamma_{uu}(\omega). \quad (3.5.12)$$

Spectra can also be described using the relation,

$$\Gamma_{yy}(\omega) = H^*(\omega)\, H(\omega)\, \Gamma_{uu}(\omega) = |H(\omega)|^2\, \Gamma_{uu}(\omega), \quad (3.5.13)$$

where $\Gamma_{uu}$ is the energy spectrum and $\Gamma_{yy}$ is the response spectrum, obtained from the discrete Fourier transforms of the input autocovariance $\gamma_{uu}(\tau)$ and the response autocovariance $\gamma_{yy}(\tau)$ respectively. Let the input signal be given by

$$u_t = U e^{i\omega t}, \quad (3.5.14)$$

where $U$ is a real number representing the amplitude of the input.¹ Similarly, let the response be given by

$$y_t = Y e^{i(\omega t + \phi)}, \quad (3.5.15)$$

where $Y$ is a real number representing the amplitude of the response and $\phi$ is the phase shift given by the linear system. By substituting Eqns. 3.5.14 and 3.5.15 into the second order differential equation of the linear dynamical system, Eqn. 3.2.1, we see that

$$-\omega^2 Y e^{i(\omega t + \phi)} + i 2\zeta\omega_n \omega\, Y e^{i(\omega t + \phi)} + \omega_n^2 Y e^{i(\omega t + \phi)} = U e^{i\omega t}. \quad (3.5.16)$$

Eqn. (3.5.16) leads to

$$\frac{Y}{U} = \frac{e^{-i\phi}}{\omega_n^2 - \omega^2 + i 2\zeta\omega_n\omega}, \quad (3.5.17)$$

¹ In the context of spectral analysis, it is common to use $u_t$ for the measured input, instead of $x_t$.

and the RAO or gain is given by

$$G(\omega) = \left| \frac{e^{-i\phi}}{\omega_n^2 - \omega^2 + i 2\zeta\omega_n\omega} \right| = \frac{1}{\sqrt{(\omega_n^2 - \omega^2)^2 + 4\zeta^2\omega_n^2\omega^2}}. \quad (3.5.18)$$

The maximum value of the RAO is

$$G_{max} = \frac{1}{2\zeta\omega_n^2\sqrt{1 - \zeta^2}}, \quad (3.5.19)$$

attained when the frequency equals $\omega_n\sqrt{1 - 2\zeta^2}$. Furthermore, the phase shift is

$$\phi = \tan^{-1}\left( -\frac{2\zeta\omega_n\omega}{\omega_n^2 - \omega^2} \right). \quad (3.5.20)$$

For a linear system with single input $u_t$ and single response $y_t$, the RAO can be estimated by

$$\hat{G}_2(\omega) = \sqrt{\frac{C_{yy}}{C_{uu}}}, \quad (3.5.21)$$

where $C_{uu}$ and $C_{yy}$ are the sample spectra of the input and response respectively. However, this estimator is sensitive to noise. An alternative estimator, which is unaffected by noise on the response, is the ratio of the cross-spectrum of the input and the response to the spectrum of the input, that is,

$$\hat{G}_1(\omega) = \frac{|C_{uy}|}{C_{uu}}, \quad (3.5.22)$$

where $C_{uy}$ is the sample cross-spectrum of the input and response. The cross-spectrum is the Fourier transform of the cross-covariance function of the input and response time series (for example see [56]). Similarly, the estimator

$$\hat{G}_3(\omega) = \frac{C_{yy}}{|C_{yu}|}, \quad (3.5.23)$$

is unaffected by noise on the input [13]. A related statistic is the coherence, defined as

$$\widehat{coh}(\omega) = \frac{|C_{uy}|^2}{C_{uu}\, C_{yy}} = \frac{\hat{G}_1^2}{\hat{G}_2^2}. \quad (3.5.24)$$

The coherence can be thought of as the square of the correlation coefficient between the input and response at each frequency, and its value is therefore between 0 and 1. It can be used to detect noise on the signals, uncompensated delays which are a sign of nonlinearity, and leakage which is usually caused by insufficient time series length.

If there is noise on both the input and the response, one strategy is to use $\hat{G}_2$ after making an allowance for the noise components in the computed $C_{uu}$ and $C_{yy}$. The allowance considered here is to assume that the high frequency component of the computed $C_{uu}$ and $C_{yy}$ is due to noise, and that the noise is white and has a flat spectrum. Computationally, this modification is implemented as a subtraction of the average of the spectrum ordinates over the highest $1/20$ of the frequency range from all the spectrum ordinates. This modification of $\hat{G}_2$ is denoted by

$$\hat{G}_4(\omega) = \sqrt{\frac{C_{yy}^-}{C_{uu}^-}}, \quad (3.5.25)$$

where $C_{uu}^-$ and $C_{yy}^-$ are the modified spectrum estimates.
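The estimators $\hat{G}_1$ and $\hat{G}_2$ and the coherence can be computed directly from segment-averaged sample spectra. The following is a hedged sketch on a simple known linear system (an assumed two-term moving-average filter with gain $|0.5 + 0.3e^{-i\omega}|$, not one of the thesis case studies); averaging raw periodograms over segments is essential, since without smoothing the sample coherence is identically 1.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 64, 256                 # segment length, number of segments
u = rng.normal(0.0, 1.0, N * L)
y = 0.5 * u + 0.3 * np.concatenate(([0.0], u[:-1]))  # known linear system
y = y + rng.normal(0.0, 0.05, N * L)                  # noise on the response

Cuu = np.zeros(N); Cyy = np.zeros(N); Cuy = np.zeros(N, complex)
for s in range(L):             # average periodograms over the segments
    U = np.fft.fft(u[s*N:(s+1)*N])
    Y = np.fft.fft(y[s*N:(s+1)*N])
    Cuu += np.abs(U)**2 / L
    Cyy += np.abs(Y)**2 / L
    Cuy += U.conj() * Y / L

G1 = np.abs(Cuy) / Cuu               # Eqn. 3.5.22, robust to response noise
G2 = np.sqrt(Cyy / Cuu)              # Eqn. 3.5.21, sensitive to noise
coh = np.abs(Cuy)**2 / (Cuu * Cyy)   # coherence, Eqn. 3.5.24
w = 2 * np.pi * np.arange(N) / N
true_gain = np.abs(0.5 + 0.3 * np.exp(-1j * w))
```

In this noise-on-response setting $\hat{G}_1$ tracks the true gain closely, the coherence stays between 0 and 1, and $\hat{G}_1 \le \hat{G}_2$ always, which is the Cauchy–Schwarz inequality behind Eqn. 3.5.24.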

3.6 Summary

• All the techniques discussed here have been applied throughout this research. Even though most of the techniques are well known, they are discussed here as an introduction for readers who are new to this field.

• Discrete time simulation has mostly been used on linear and nonlinear dynamic systems to simulate the data or signals of the systems. The simulation of the responses using the central difference is closer to the solution of the differential equation than that obtained using the forward or backward differences. The central difference is less sensitive to the sampling interval than the forward and backward differences, and it also has a higher order of accuracy.

• Even though the ordinary Kalman filter technique was not used in this research, it is discussed here to provide the basis for the extensions of the Kalman filter, known as the extended Kalman filter and the unscented Kalman filter, that will be discussed later.

• The response amplitude operator estimator $\hat{G}_1$ is preferable if there is noise on the response time series, while $\hat{G}_3$ is preferred if there is noise on the input time series. If there is noise on both the input and response signals, then $\hat{G}_4$ can be used [9].

Chapter 4

Nonlinear system identification in time domain

4.1 Introduction

In the previous chapter, we discussed several time domain system identification techniques for linear dynamic systems. The implementation of the methods was in terms of the discrete time representation of the system through forward, backward and central differences. This technique is not limited to the identification of linear dynamic systems, as it is also applicable to weakly nonlinear oscillators. Therefore, in this chapter, we will first discuss the discrete time simulation of nonlinear dynamic systems. The second technique that will be discussed here is the extension of the time series ARMAX model known as the nonlinear autoregressive moving average model with exogenous inputs (NARMAX). Some applications of this model are in forecasting [124, 42], system identification [34, 93, 57] and system control [112]. Previously, we discussed the Kalman filter technique for system identification of linear systems. Various adaptations have been proposed for the application of Kalman filtering to nonlinear systems. One of the first was the extended Kalman filter (EKF), which relies on a linearization about the current state values [81, 72]. The EKF has been used for various applications including modelling of nonstationary time series [72], system identification [81], nonlinear structural identification


[59, 79], economics [101] and many others. The linearization is a problem if the timestep intervals are big, which can cause instability in the linearized filter. Given the requirement of smaller timestep intervals, the computational effort will be costly. The EKF also requires Jacobian matrices, which are quite complicated to derive [64]. An alternative approach to the EKF, known as the unscented Kalman filter (UKF), has been found to give better performance and to be easier to implement [64, 63]. The UKF uses the same technique as the KF but also includes the unscented transformation, which improves the convergence and accuracy of the estimation [63, 128]. The UKF is an algorithm for estimating the states of a nonlinear dynamic system forced by Gaussian white noise. The UKF tracks a small set of sample points, known as sigma points, through the nonlinear system and hence estimates the mean and covariance of the response. This enables an optimum averaging of the one-step-ahead prediction and the response to give the state estimator. Since the UKF does not require linearization, there is no need to calculate Jacobians or Hessians [64], which is more convenient than the EKF. The algorithm is easier to use and has better performance in comparison to the EKF [63, 130], so it is generally preferable.

4.2 Discrete Time Simulation

In this section, we will discuss the discrete time simulation of weakly nonlinear oscillators, continuing the discussion in the previous chapter. We will present the application of the central difference equations technique to nonlinear dynamic systems. Two nonlinear dynamic systems, the Duffing oscillator and the Van der Pol oscillator, have been chosen to show how this technique can be applied to the identification of nonlinear dynamic systems. The simulated data presented in this section will be used to assess the system identification techniques discussed later.


Figure 4.2.1: Samples of sinusoidal input time series for the Duffing oscillator

Example: Duffing oscillator

The nonlinear Duffing system can be modelled as

$$m\ddot{y}_t + c\dot{y}_t + (k + k_3 y_t^2)\, y_t = u_t. \quad (4.2.1)$$

Using the central difference method, Eqn. 4.2.1 can be rewritten as

$$m\, \frac{y_{t+1} - 2y_t + y_{t-1}}{\Delta^2} + c\, \frac{y_{t+1} - y_{t-1}}{2\Delta} + k y_t + k_3 y_t^3 = u_t. \quad (4.2.2)$$

By reducing each time index by one and rearranging the equation, the output at time t can be approximated by

$$y_t = \frac{4m - 2\Delta^2 k}{2m + \Delta c}\, y_{t-1} + \frac{\Delta c - 2m}{2m + \Delta c}\, y_{t-2} - \frac{2\Delta^2 k_3}{2m + \Delta c}\, y_{t-1}^3 + \frac{2\Delta^2}{2m + \Delta c}\, u_{t-1}. \quad (4.2.3)$$

From Eqn. 4.2.1, let $m = 1$. Given that $k > 0$ and the amplitude of the response is small, then if $k_3 > 0$ the system behaves like a hardening spring, and if $k_3 < 0$, like a softening spring. For $k < 0$, the equation describes the dynamics of a point mass in a double well potential, which resembles a harmonic oscillator perturbed by Gaussian noise. Consider a simple Duffing oscillator described by

$$\ddot{y}_t + \dot{y}_t + y_t + k_3 y_t^3 = u_t. \quad (4.2.4)$$

Let the input be $u_t = 10\cos(0.1 t/\pi)$, a sinusoidal signal with amplitude 10, as shown in Fig. 4.2.1. The length of the samples is $N = 1000$ and the sampling interval is $\Delta = 0.1$ second. Three values of the nonlinear cubic stiffness parameter will be used to see how the Duffing oscillator responds. The values used are $k_3 = 0.02$ for the weak nonlinear


(a) linear, k3 = 0 (b) nonlinear, k3 = 0.02


(c) nonlinear, k3 = 1 (d) nonlinear, k3 = −0.002

Figure 4.2.2: Response time series for Duffing oscillator with sinusoidal input

hardening spring case, $k_3 = 1$ for the strong nonlinear hardening spring case, and $k_3 = -0.002$ for the softening spring case. The response for the linear system, where $k_3 = 0$, is also simulated to see how the Duffing oscillator differs from the linear oscillator. Fig. 4.2.2 shows the responses for the nonlinear Duffing oscillator and also for the linear system. There are no obvious lags in any of the Duffing cases compared to the linear case. The amplitude of the response does differ: for the Duffing cases $k_3 = 0.02$ and $k_3 = 1$ it is less than the input and linear system response amplitudes, while for $k_3 = -0.002$ the response amplitude is greater than the others. This shows the characteristics of a hardening spring for $k_3 > 0$ and a softening spring for $k_3 < 0$. The nonlinearity effect also increases as the value of the nonlinear coefficient increases (for the hardening spring case).
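The recursion in Eqn. 4.2.3 is straightforward to implement. The sketch below is one possible implementation under stated assumptions: $m = c = k = 1$ as in Eqn. 4.2.4, the sinusoidal input evaluated at the sample index, and zero initial conditions; the softening case is omitted here.

```python
import numpy as np

def simulate_duffing(k3, N=1000, dt=0.1, m=1.0, c=1.0, k=1.0):
    """Central-difference simulation of the Duffing oscillator, Eqn. 4.2.3."""
    u = 10 * np.cos(0.1 * np.arange(N) / np.pi)   # sinusoidal input, amplitude 10
    y = np.zeros(N)
    denom = 2 * m + dt * c
    for t in range(2, N):
        y[t] = ((4 * m - 2 * dt**2 * k) * y[t - 1]
                + (dt * c - 2 * m) * y[t - 2]
                - 2 * dt**2 * k3 * y[t - 1]**3
                + 2 * dt**2 * u[t - 1]) / denom
    return y

y_lin    = simulate_duffing(0.0)    # linear benchmark
y_weak   = simulate_duffing(0.02)   # weak hardening spring
y_strong = simulate_duffing(1.0)    # strong hardening spring
```

Comparing the maximum amplitudes reproduces the hardening behaviour described above: both hardening cases respond with less amplitude than the linear system, and the stronger nonlinearity with less still.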

Example: Van der Pol oscillator model

The Van der Pol system can be modelled as

$$\ddot{y}_t + \mu\left( y_t^2 - 1 \right) \dot{y}_t + y_t = u_t. \quad (4.2.5)$$



(a) µ = 0.01 (b) µ = 0.1



(c) µ = 1 (d) µ = 5

Figure 4.2.3: Response time series for the Van der Pol oscillator forced by sinusoidal input

As in the previous case of the Duffing oscillator model, the central differences are used to discretize the Van der Pol oscillator model.

$$y_t = \frac{4 - 2\Delta^2}{2 + \Delta\mu(y_{t-1}^2 - 1)}\, y_{t-1} + \frac{\Delta\mu(y_{t-1}^2 - 1) - 2}{2 + \Delta\mu(y_{t-1}^2 - 1)}\, y_{t-2} + \frac{2\Delta^2}{2 + \Delta\mu(y_{t-1}^2 - 1)}\, u_{t-1}. \quad (4.2.6)$$

Given that $\mu > 0$, we will compare the effects of different values of $\mu$, with the same sinusoidal input as in the Duffing oscillator example of the previous section. Here, we consider $\mu = 0.01$, $0.1$, $1$ and $5$. The response time series for all four cases are plotted in Fig. 4.2.3, and the samples between 100 and 150 seconds are shown in Fig. 4.2.4. Fig. 4.2.3 shows that the response reaches a limit cycle, where the amplitude does not increase any further. For these cases, the responses at the limit cycle range from approximately -2 to 2. The value of the nonlinear damping coefficient, $\mu$, influences the time needed before the response reaches the limit cycle: a Van der Pol system with a higher value of $\mu$ reaches the limit cycle faster than one with a lower value of $\mu$. Fig. 4.2.5 shows the impulse response of the Van der Pol oscillator for the previous four cases. The samples from 100 to 150 seconds of those impulse responses are plotted in Fig. 4.2.6. In Fig. 4.2.5, the length of the time series for all the four cases


(a) µ = 0.01 (b) µ = 0.1



(c) µ = 1 (d) µ = 5

Figure 4.2.4: Samples of response time series from 100 seconds to 150 seconds for the Van der Pol oscillator forced by sinusoidal input

is not the same, so that we can see how fast the impulse response reaches the limit cycle. As in the previous example, the value of the nonlinear damping coefficient, $\mu$, influences the time needed for the response to reach the limit cycle, and at the limit cycle the response amplitude ranges between -2 and 2.
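A sketch of Eqn. 4.2.6 in the unforced case ($u_t = 0$, with a small initial displacement standing in for an impulse) shows the approach to the limit cycle described above; the initial condition and series length are illustrative assumptions.

```python
import numpy as np

def simulate_vdp(mu, N=2000, dt=0.1):
    """Central-difference simulation of the Van der Pol oscillator, Eqn. 4.2.6,
    with no input, started from a small displacement."""
    y = np.zeros(N)
    y[0] = y[1] = 0.1
    for t in range(2, N):
        a = dt * mu * (y[t - 1]**2 - 1.0)   # nonlinear damping contribution
        y[t] = ((4 - 2 * dt**2) * y[t - 1] + (a - 2) * y[t - 2]) / (2 + a)
    return y

y_slow = simulate_vdp(0.1)
y_fast = simulate_vdp(1.0)
```

For both values of $\mu$ the response settles onto a limit cycle of amplitude close to 2, with the larger $\mu$ reaching it sooner, consistent with the figures.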

4.3 Time series models

Most researchers in the field of system identification commonly approach identification problems by using either a grey box or a black box model. Grey box modelling (semi-physical modelling) is a parameterization problem where physical insight into the system is available and the experimental data are used for parameter estimation [40]. Usually for a grey box model the model structure is already known or identified, and the primary aim of the system identification is only to estimate the model's parameters [60]. However, if no information on the model structure is available, then the black



(a) µ = 0.01 (b) µ = 0.1



(c) µ = 1 (d) µ = 5

Figure 4.2.5: Impulse response for the Van der Pol oscillator


(a) µ = 0.01 (b) µ = 0.1



(c) µ = 1 (d) µ = 5

Figure 4.2.6: Samples between 100 and 150 seconds for the impulse response of the Van der Pol oscillator

box modelling will be used. For the black-box model, the idea is to parameterize the function $g(x, \theta)$ in a flexible way, so that it can approximate the true function $g_0(x)$. Usually, a function expansion is preferred, which is

$$g(x, \theta) = \sum_i \alpha_i g_i(x), \quad (4.3.1)$$

where $g(\cdot, \cdot)$ is some nonlinear mapping from the regressor space to the output space parameterized by $\theta \in \mathbb{R}^d$, and the $g_i$ are basis functions [41].

The terms in linear and nonlinear models can be identified using nonlinear autoregressive moving average model with exogenous inputs (NARMAX) methods [14]. The NARMAX methods are based on determining or identifying the rule or law that describes the behaviour of the system. The focus of these methods is on determining the form or structure of the model and identifying the terms in the model. The NARMAX methods try to identify the simplest model that can be written without sacrificing the approximation accuracy. Given a sufficiently long record, the method will identify a linear model if the system under study is actually linear. This approach is flexible and can be used either as a grey or a black box model. The important terms in the model are identified from data measured from the system. These terms are ranked according to their effect and influence on the system, which makes it easier to simplify the model without sacrificing its accuracy [14]. The Volterra, block-structured and many neural-network models can be considered as subsets of the NARMAX model [14, 99]. Much work has been done based on the NARMAX model, including complex nonlinear systems with chaos, bifurcations and sub-harmonic characteristics [16, 14]. Early work on the identification of nonlinear systems was mostly based on the Volterra series, which can be written in discrete time form as

$$\begin{aligned} y(k) = h_0 &+ \sum_{m_1=1}^{M} h_1(m_1)\, u(k - m_1) + \sum_{m_1=1}^{M} \sum_{m_2=1}^{M} h_2(m_1, m_2)\, u(k - m_1)\, u(k - m_2) \\ &+ \sum_{m_1=1}^{M} \sum_{m_2=1}^{M} \sum_{m_3=1}^{M} h_3(m_1, m_2, m_3)\, u(k - m_1)\, u(k - m_2)\, u(k - m_3) + \ldots, \end{aligned} \quad (4.3.2)$$

where $u(k)$ and $y(k)$ are the input and output at time $k$, respectively, with $k = 1, 2, \ldots$,

and $h_l(m_1, \ldots, m_l)$ is the $l$th order Volterra kernel or nonlinear impulse response. For a linear system, the series can be written as

$$y(k) = h_0 + \sum_{m_1=1}^{M} h_1(m_1)\, u(k - m_1), \quad (4.3.3)$$

while for the simplest nonlinear system it takes the form

$$y(k) = h_0 + \sum_{m_1=1}^{M} h_1(m_1)\, u(k - m_1) + \sum_{m_1=1}^{M} \sum_{m_2=1}^{M} h_2(m_1, m_2)\, u(k - m_1)\, u(k - m_2). \quad (4.3.4)$$

The simplest method to identify the two impulse responses, $h_1$ and $h_2$, in the previous equation is the correlation method, with Gaussian white noise used as the input. However, the method can be applied with a general input provided the input can be measured. Other than the Volterra series, the Wiener series has also been used for nonlinear system identification [133, 74]. Another important identification technique, based on both the Volterra and Wiener series, is the method proposed by Lee and Schetzen [75]. The Volterra series expands the current output as a series in terms of past inputs only, and for many nonlinear systems, including the Duffing and the Van der Pol oscillators, the Volterra series approximation requires an infeasible number of terms. Since the NARMAX model allows for nonlinear lagged output terms, the identification is easier because fewer nonlinear terms need to be estimated.
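The correlation method mentioned above is simple to sketch for the linear (first-order) kernel: for a zero-mean Gaussian white noise input with variance $\sigma_u^2$, $E[y(k)\,u(k-m)] = \sigma_u^2\, h_1(m)$. The kernel values below are illustrative assumptions, not taken from any thesis case study.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
u = rng.normal(0.0, 1.0, N)                  # Gaussian white noise input, variance 1
h_true = np.array([0.0, 0.5, 0.3, -0.2])     # assumed first-order kernel h1(0..3)
y = np.convolve(u, h_true)[:N]               # linear Volterra system with h0 = 0

# Correlation estimate: mean of y(k) u(k-m) recovers sigma_u^2 * h1(m).
h_est = np.array([np.mean(y[m:] * u[:N - m]) for m in range(4)])
```

With a long record the sample cross-correlation recovers the kernel to within sampling error; for the second-order kernel $h_2$ the analogous estimate uses products of two lagged inputs.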

Figure 4.3.1: SISO Hammerstein system, where G(·) and N(·) are linear transfer functions, and f(·) is a nonlinear function

Ding and Chen [34] discussed the parametric identification of the Hammerstein system model by using two identification algorithms, known as iterative least squares and recursive least squares. A Hammerstein system is a nonlinear system which consists of a memoryless nonlinear block followed by a linear time-invariant (LTI) block, as shown in Fig. 4.3.1. The Hammerstein model consists of a static single-valued nonlinear element followed by a linear dynamic element, in contrast to the Wiener model, in which the linear element is followed by the static nonlinear element. A nonlinear Hammerstein system can be modelled by the ARMAX model. For the Hammerstein system, only the system input, $u(t)$, and the system output, $y(t)$, which is corrupted by noise, $w(t)$, are measurable, while the noise free output, $x(t)$, and the output of the nonlinear block, $\tilde{u}(t)$, are unmeasurable. $G(z)$ is the linear transfer function of the model and $N(z)$ is the transfer function of the noise. The nonlinear part of the Hammerstein model is an $m$th order polynomial function of the measured input, $u(t)$, given as

$$f(u(t)) = \tilde{u}(t) = c_1 u(t) + c_2 u^2(t) + \ldots + c_m u^m(t), \quad (4.3.5)$$

or, equivalently,

$$f(u(t)) = \tilde{u}(t) = c_1\gamma_1(u(t)) + c_2\gamma_2(u(t)) + \ldots + c_m\gamma_m(u(t)) = \sum_{j=1}^{m} c_j\, \gamma_j(u(t)), \quad (4.3.6)$$

where the $\gamma_j$ are nonlinear functions of known basis [25].

The system identification framework based on the NARMAX model consists of five important steps:

1. Structure detection, where the important terms in the model for the system are first identified. The recommended technique is orthogonal least squares (OLS).

2. Parameter estimation, where the coefficients of the terms in the model are estimated.

3. Model validation, where the model is checked for unbiasedness and validated as suitable for the system.

4. Prediction, where the output is predicted by using the model.

5. Analysis, where the dynamical properties or behaviour of the system are characterized.

The input to the linear part of the Hammerstein system in Fig. 4.3.1 is $\tilde{u}(t)$, the output of the nonlinear block of the system. Then, following Eqns. 3.3.10, 3.3.11 and 4.3.6, the Hammerstein system in Fig. 4.3.1 can be modelled by the nonlinear ARMAX model, where the linear part of the system is modelled by a linear ARMAX model

$$A(z)\, y(t) = B(z)\, \tilde{u}(t) + D(z)\, v(t) \quad (4.3.7)$$

$$\tilde{u}(t) = f(u(t)) = c_1\gamma_1(u(t)) + c_2\gamma_2(u(t)) + \ldots + c_m\gamma_m(u(t)). \quad (4.3.8)$$

For any nonzero and finite constant $\alpha$, an identification technique cannot distinguish the pair $f(u(t))$ and $G(z)$ from the pair $\alpha f(u(t))$ and $G(z)/\alpha$. Hence, as suggested by [7, 34], the first coefficient of the nonlinear function $f(u(t))$ in Eqn. 4.3.8, $c_1$, is set equal to 1.

The identification of the parameters $a_i$, $b_i$, $c_i$ and $d_i$ in the nonlinear ARMAX model can be done by many methods. One of the methods discussed by Ding and Chen [34] is an iterative algorithm. Equation 4.3.7 can be written as

$$y(t) = -\sum_{i=1}^{n} a_i\, y(t - i) + \sum_{i=1}^{n} b_i\, \tilde{u}(t - i) + \sum_{i=1}^{n_d} d_i\, v(t - i) + v(t), \quad (4.3.9)$$

and inserting Eqn. 4.3.8 gives

$$y(t) = -\sum_{i=1}^{n} a_i\, y(t - i) + \sum_{i=1}^{n} b_i \sum_{j=1}^{m} c_j\, \gamma_j(u(t - i)) + \sum_{i=1}^{n_d} d_i\, v(t - i) + v(t). \quad (4.3.10)$$

This can be rewritten in matrix form as

$$y(t) = \varphi_0^{\top}(t)\, \theta + v(t), \quad (4.3.11)$$

where $\theta$ is the parameter vector and $\varphi_0(t)$ is the information vector, defined as

$$\theta = \begin{bmatrix} a \\ c_1 b \\ c_2 b \\ \vdots \\ c_m b \\ d \end{bmatrix} \in \mathbb{R}^{n_0}, \qquad \varphi_0(t) = \begin{bmatrix} \psi(t) \\ v(t-1) \\ v(t-2) \\ \vdots \\ v(t-n_d) \end{bmatrix} \in \mathbb{R}^{n_0}. \quad (4.3.12)$$

Here $a$, $b$ and $d$ are column vectors of the parameters $a_i$, $b_i$ and $d_i$ respectively,

$$a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}, \qquad d = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_{n_d} \end{bmatrix}, \quad (4.3.13)$$

and

$$\psi(t) = \begin{bmatrix} \psi_0(t) \\ \psi_1(t) \\ \vdots \\ \psi_m(t) \end{bmatrix} \in \mathbb{R}^{(m+1)n}, \qquad n_0 = (m+1)n + n_d, \quad (4.3.14)$$

where

$$\psi_0(t) = \begin{bmatrix} -y(t-1) \\ -y(t-2) \\ \vdots \\ -y(t-n) \end{bmatrix}, \qquad \psi_j(t) = \begin{bmatrix} \gamma_j(u(t-1)) \\ \gamma_j(u(t-2)) \\ \vdots \\ \gamma_j(u(t-n)) \end{bmatrix}, \qquad j = 1, 2, \ldots, m. \quad (4.3.15)$$

Let the estimate of the parameter vector $\theta$ be denoted by $\hat{\theta}$. Since the noise $v(t)$ is white, the output can be predicted by

$$\hat{y}(t) = \varphi_0^{\top}(t)\, \hat{\theta}. \quad (4.3.16)$$

The output prediction error, or cost function, can be calculated as

$$J(\hat{\theta}) = \sum_{i=t-p+1}^{t} \left[ y(i) - \hat{y}(i) \right]^2 = \sum_{i=t-p+1}^{t} \left[ y(i) - \varphi_0^{\top}(i)\, \hat{\theta} \right]^2, \quad (4.3.17)$$

where $p$ is the data length, which is much greater than $n_0$. Let

$$\mathbf{Y}(t) = \begin{bmatrix} y(t) \\ y(t-1) \\ \vdots \\ y(t-p+1) \end{bmatrix} \quad (4.3.18)$$

and

$$\Phi_0(t) = \begin{bmatrix} \varphi_0^{\top}(t) \\ \varphi_0^{\top}(t-1) \\ \vdots \\ \varphi_0^{\top}(t-p+1) \end{bmatrix}. \quad (4.3.19)$$

Hence, the cost function can be rewritten as

$$J(\hat{\theta}) = \left[ \mathbf{Y}(t) - \Phi_0(t)\hat{\theta} \right]^{\top} \left[ \mathbf{Y}(t) - \Phi_0(t)\hat{\theta} \right] = \left\| \mathbf{Y}(t) - \Phi_0(t)\hat{\theta} \right\|^2. \quad (4.3.20)$$

The estimate of the parameter vector can be calculated by the least-squares estimate

$$\hat{\theta} = \left[ \Phi_0^{\top}(t)\, \Phi_0(t) \right]^{-1} \Phi_0^{\top}(t)\, \mathbf{Y}(t), \quad (4.3.21)$$

which minimizes the cost function $J(\hat{\theta})$, provided that the information vector $\varphi_0(t)$ is persistently exciting.

There is a problem with solving Eqn. 4.3.21, since $\Phi_0(t)$ contains the unknown noise terms $v(t - i)$, $i = 1, 2, \ldots, n_d$. Hence, an algorithm known as the HARMAX least-squares iterative algorithm is introduced [34]. From Eqn. 4.3.11, the estimate of the noise at the $k$th iteration is

$$\hat{v}_k(t - i) = y(t - i) - \hat{\varphi}_{k-1}^{\top}(t - i)\, \hat{\theta}_{k-1}, \quad (4.3.22)$$

with

$$\hat{\varphi}_k(t) = \begin{bmatrix} \psi(t) \\ \hat{v}_k(t-1) \\ \hat{v}_k(t-2) \\ \vdots \\ \hat{v}_k(t-n_d) \end{bmatrix}. \quad (4.3.23)$$

Hence, from Eqn. 4.3.21, the $k$th iterative solution for $\theta$ can be computed by

$$\hat{\theta}_k = \left[ \Phi_k^{\top}(t)\, \Phi_k(t) \right]^{-1} \Phi_k^{\top}(t)\, \mathbf{Y}(t), \quad (4.3.24)$$

for $k = 1, 2, 3, \ldots$, where

$$\Phi_k(t) = \begin{bmatrix} \hat{\varphi}_k^{\top}(t) \\ \hat{\varphi}_k^{\top}(t-1) \\ \vdots \\ \hat{\varphi}_k^{\top}(t-p+1) \end{bmatrix}. \quad (4.3.25)$$

This algorithm can be initialized by taking $\hat{\theta}_0 = \mathbf{0}_{n_0}$ or some very small value. Recall that the first coefficient of the nonlinear function $f(u(t))$ in Eqn. 4.3.8 is set equal to $c_1 = 1$. Hence, the estimate $\hat{a}$ can be read from the first $n$ elements of $\hat{\theta}$, $\hat{b}$ from the next $n$ elements, and $\hat{d}$ from the last $n_d$ elements. The $j$th coefficient of $\hat{c}$ can be estimated by

$$\hat{c}_j = \frac{1}{n} \sum_{i=1}^{n} \frac{\hat{\theta}_{jn+i}}{\hat{b}_i}, \qquad j = 2, 3, \ldots, m, \quad (4.3.26)$$

or, since $c_1$ is assumed equal to 1,

$$\hat{c}_j = \frac{1}{n} \sum_{i=1}^{n} \frac{\hat{\theta}_{jn+i}}{\hat{\theta}_{n+i}}, \qquad j = 2, 3, \ldots, m, \quad (4.3.27)$$

where $\hat{\theta}_i$ is the $i$th element of the vector $\hat{\theta}$.

Here is an example of a Hammerstein system modelled as in Eqns. 4.3.7 and 4.3.8, where

$$A(z) = 1 - 0.8z^{-1} + 0.6z^{-2}, \qquad B(z) = 0.9z^{-1} + 0.7z^{-2}, \qquad D(z) = 1 - 0.3z^{-1},$$

$$\tilde{u}(t) = u(t) + 0.4u^2(t) + 0.2u^3(t).$$

Hence, the NARMAX model for this example can be written as

$$\begin{aligned} y(t) &= 0.8y(t-1) - 0.6y(t-2) + 0.9\tilde{u}(t-1) + 0.7\tilde{u}(t-2) + v(t) - 0.3v(t-1) \\ &= 0.8y(t-1) - 0.6y(t-2) + 0.9u(t-1) + 0.7u(t-2) + 0.36u^2(t-1) \\ &\quad + 0.28u^2(t-2) + 0.18u^3(t-1) + 0.14u^3(t-2) + v(t) - 0.3v(t-1). \end{aligned} \quad (4.3.28)$$

Let the input, $u(t)$, be Gaussian white noise with mean 0 and standard deviation 1, and the actual process noise, $v(t)$, also Gaussian white noise with mean 0 and standard deviation 0.5. Using the HARMAX least-squares iterative algorithm introduced by Ding and Chen [34], we start by setting the initial guess for the parameters that we wish to identify,

$$\hat{\theta}_0 = \left( \hat{a}_1,\ \hat{a}_2,\ \hat{c}_1\hat{b}_1,\ \hat{c}_1\hat{b}_2,\ \hat{c}_2\hat{b}_1,\ \hat{c}_2\hat{b}_2,\ \hat{c}_3\hat{b}_1,\ \hat{c}_3\hat{b}_2,\ \hat{d}_1 \right)^{\top} = \left( 0.000001,\ \ldots,\ 0.000001 \right)^{\top}, \quad (4.3.29)$$

and also the initial estimate of the process noise, $\hat{v}_0(t)$, which is assumed to be Gaussian white noise with mean 0 and standard deviation 1.

Running the algorithm for 50 iterations, the estimated values of the parameters are

$$\hat{\theta}_{50} = \left( \hat{a}_1,\ \hat{a}_2,\ \hat{c}_1\hat{b}_1,\ \hat{c}_1\hat{b}_2,\ \hat{c}_2\hat{b}_1,\ \hat{c}_2\hat{b}_2,\ \hat{c}_3\hat{b}_1,\ \hat{c}_3\hat{b}_2,\ \hat{d}_1 \right)^{\top} = \left( -0.7991898,\ 0.5988583,\ 0.9062746,\ 0.7007817,\ 0.3597824,\ 0.2817756,\ 0.1773261,\ 0.1420315,\ -0.2897881 \right)^{\top}, \quad (4.3.30)$$

and since $c_1$ is assumed equal to 1, from Eqn. 4.3.27,

$$\hat{c}_2 = \frac{1}{2} \sum_{i=1}^{2} \frac{\hat{\theta}_{4+i}}{\hat{\theta}_{2+i}} = 0.399539, \qquad \hat{c}_3 = \frac{1}{2} \sum_{i=1}^{2} \frac{\hat{\theta}_{6+i}}{\hat{\theta}_{2+i}} = 0.1991703.$$

These compare with the actual values of the parameters,

$$\theta = \left( a_1,\ a_2,\ c_1b_1,\ c_1b_2,\ c_2b_1,\ c_2b_2,\ c_3b_1,\ c_3b_2,\ d_1 \right)^{\top} = \left( -0.8,\ 0.6,\ 0.9,\ 0.7,\ 0.36,\ 0.28,\ 0.18,\ 0.14,\ -0.3 \right)^{\top} \quad (4.3.31)$$

and

c2 = 0.4, c3 = 0.2

From this example, we can see that the estimated values are close to the actual values and that the algorithm converges quickly.
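The worked example above can be reproduced in a short script. The sketch below simulates the same Hammerstein system and runs the iterative least-squares scheme of Eqns. 4.3.22–4.3.25; the record length, number of iterations and random seed are illustrative assumptions, so the estimates will differ slightly from the values quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8000
u = rng.normal(0.0, 1.0, N)            # input: Gaussian white noise, sd 1
v = rng.normal(0.0, 0.5, N)            # process noise, sd 0.5
ut = u + 0.4 * u**2 + 0.2 * u**3       # nonlinear block, Eqn. 4.3.5
y = np.zeros(N)
for t in range(2, N):                  # linear ARMAX block, Eqn. 4.3.28
    y[t] = (0.8 * y[t-1] - 0.6 * y[t-2] + 0.9 * ut[t-1] + 0.7 * ut[t-2]
            + v[t] - 0.3 * v[t-1])

# Information vector (Eqn. 4.3.12): lagged -y, powers of lagged u, lagged vhat.
def phi(vhat, t):
    return [-y[t-1], -y[t-2], u[t-1], u[t-2], u[t-1]**2, u[t-2]**2,
            u[t-1]**3, u[t-2]**3, vhat[t-1]]

vhat = rng.normal(0.0, 1.0, N)         # initial noise estimate, sd 1
theta = np.full(9, 1e-6)               # theta_0: very small initial guess
for k in range(30):                    # iterative least squares, Eqn. 4.3.24
    Phi = np.array([phi(vhat, t) for t in range(2, N)])
    theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
    vhat[2:] = y[2:] - Phi @ theta     # noise estimate update (cf. Eqn. 4.3.22)

c2_hat = 0.5 * (theta[4] / theta[2] + theta[5] / theta[3])   # Eqn. 4.3.27
c3_hat = 0.5 * (theta[6] / theta[2] + theta[7] / theta[3])
```

With this run the estimates should approach $(a_1, a_2, c_1b_1, c_1b_2, d_1) = (-0.8,\ 0.6,\ 0.9,\ 0.7,\ -0.3)$ and $\hat{c}_2 \approx 0.4$, $\hat{c}_3 \approx 0.2$, mirroring the convergence seen in the worked example.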

A more general NARMAX model can be written as

$$y(t) = F\left( y(t-1), y(t-2), \ldots, y(t-n_y), u(t-k), u(t-k-1), \ldots, u(t-k-n_u+1) \right), \quad (4.3.32)$$

where $F[\cdot]$ is the nonlinear function, $y(t)$ is the noise free output, $u(t)$ is the noise free input, and $n_y$ and $n_u$ are the orders of the output and the input, respectively [78]. Let

$$\begin{aligned} V_1 &= y(t-1), \\ V_2 &= y(t-2), \\ &\ \ \vdots \\ V_{n_y} &= y(t-n_y), \\ V_{n_y+1} &= u(t-k), \\ V_{n_y+2} &= u(t-k-1), \\ &\ \ \vdots \\ V_{n_y+n_u} &= u(t-k-n_u+1); \end{aligned}$$

hence, Eqn. 4.3.32 can be written as

$$y(t) = F\left( V_1, V_2, \ldots, V_{n_y}, V_{n_y+1}, V_{n_y+2}, \ldots, V_{n_y+n_u} \right). \quad (4.3.33)$$

The nonlinear function $F[\cdot]$ in Eqn. 4.3.33 can be written as a polynomial expansion,

$$y(t) = \sum_{i=1}^{n_s} \theta_i V_i + \sum_{i=1}^{n_s} \sum_{j=1}^{n_s} \theta_{i,j} V_i V_j + \ldots + \sum_{i=1}^{n_s} \sum_{j=1}^{n_s} \cdots \sum_{m=1}^{n_s} \sum_{n=1}^{n_s} \theta_{i,j,\ldots,m,n} V_i V_j \cdots V_m V_n, \quad (4.3.34)$$

where $n_s = n_y + n_u$. In practice, the output is usually corrupted by noise, so the measured output is given by

$$z(t) = y(t) + e(t), \quad (4.3.35)$$

where $e(t)$ is the measurement noise. From Eqns. 4.3.32, 4.3.34 and 4.3.35, the measured output can be written as

$$z(t) = F^l\left[ z(t-1), z(t-2), \ldots, z(t-n_y), u(t-k), u(t-k-1), \ldots, u(t-k-n_u+1), e(t-1), \ldots, e(t-n_y) \right] + e(t), \quad (4.3.36)$$

where the superscript $l$ on the nonlinear function $F[\cdot]$ is the degree of nonlinearity. For applications, it is more convenient to use

$$z(t) = F^l\left[ z(t-1), \ldots, z(t-n_z), u(t-k), \ldots, u(t-k-n_u+1), e(t-1), \ldots, e(t-n_e) \right] + e(t), \quad (4.3.37)$$

where $n_z$ and $n_e$ are the orders of the measured output, $z(t)$, and the noise, $e(t)$, respectively [112].

4.4 Unscented Kalman Filter

As with the KF, the EKF and UKF are used to estimate the dynamic state of the system. By defining the response of the system as the state of the system, estimation of the response can be done. Also, by adding another state for each estimated parameter, the UKF can be used to estimate unknown parameters. In the UKF, the state distribution is again represented by a Gaussian, as in the KF and EKF, but it is specified using a set of chosen sample points, known as sigma points. The sigma points capture the true mean and covariance of the Gaussian random variable up to the second order in a simple and effective way compared to the EKF. The posterior mean and covariance calculated by propagating the sigma points through the nonlinear dynamic system are accurate up to the third order for any nonlinearity, compared to only first order achieved by the EKF [130]. The implementation requires us to set the initial state values and the initial covariance, which requires some prior information about the states. If data are limited, imprecise initial estimates can lead to a longer convergence time. General nonlinear systems can be represented using the discrete time equations

xk = f (xk−1, uk−1, vk−1) (4.4.1)

$$z_k = h\left(x_k, u_k, w_k\right), \quad (4.4.2)$$

where $x \in \mathbb{R}^{n_x}$ is the system state, $u \in \mathbb{R}^{n_u}$ the input signal and $z \in \mathbb{R}^{n_z}$ the observation, while $v \in \mathbb{R}^{n_v}$ and $w \in \mathbb{R}^{n_w}$ are the process and observation noise, respectively. The linear or nonlinear system models f and h are assumed known and are not necessarily continuous [68]. The augmented state at time k is defined as

$$x_k^a = \begin{bmatrix} x_k \\ v_k \\ w_k \end{bmatrix}, \quad (4.4.3)$$

where the dimension is $N = n_x + n_v + n_w$, and the covariance is

$$P_k^a = \begin{bmatrix} P_x & 0 & 0 \\ 0 & P_v & 0 \\ 0 & 0 & P_w \end{bmatrix}. \quad (4.4.4)$$

First, at k = 0, the state is initialized with

$$\hat{x}_0^a = E\left[x^a\right] = \begin{bmatrix} \hat{x}_0 \\ 0 \\ 0 \end{bmatrix}, \quad (4.4.5)$$

$$P_0^a = E\left[(x^a - \hat{x}_0^a)(x^a - \hat{x}_0^a)^\top\right] = \begin{bmatrix} P_x & 0 & 0 \\ 0 & P_v & 0 \\ 0 & 0 & P_w \end{bmatrix}. \quad (4.4.6)$$

For k = 1, 2,..., ∞, the 2N + 1 sigma points are given as

 a  xbk−1 i = 0  Xa = xa + γ pPa  i = 1,...,N (4.4.7) i,k−1 bk−1 k−1 i   xa − γ pPa  i = N + 1,..., 2N,  bk−1 k−1 i where pPa  is the ith row or column of the matrix square root of Pa . k−1 i k−1 γ is a scaling parameter defined as √ γ = N + λ , where (4.4.8)

$$\lambda = \alpha^2 (N + \kappa) - N. \quad (4.4.9)$$

Here α determines the spread of the sigma points and κ is the secondary scaling parameter. Values of α are usually between 0.0001 and 1, while κ is set to 0 for state estimation and 3 − N for parameter estimation [125]. The sigma point matrix is defined by

 x  Xi,k−1   Xa =  v  , (4.4.10) i,k−1  Xi,k−1    w Xi,k−1 where the ith column contains the ith sigma point for the state, process noise and observation noise (denoted by superscripts x, v and w, respectively). The next step is called the time update, where the sigma points are used to find the predicted state estimate and covariance. By using the state function, f, the sigma points are transformed and given as

$$\mathcal{X}_{i,k|k-1}^x = f\left(\mathcal{X}_{i,k-1}^x, \mathcal{X}_{i,k-1}^v, u_{k-1}\right), \quad (4.4.11)$$

for i = 0, 1, ..., 2N. The predicted state estimate and covariance can be computed by

$$\hat{x}_k^- = \sum_{i=0}^{2N} W_i^m\, \mathcal{X}_{i,k|k-1}^x \quad (4.4.12)$$

$$P_{x_k}^- = \sum_{i=0}^{2N} W_i^c \left(\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-\right)\left(\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-\right)^\top, \quad (4.4.13)$$

where $W_i^m$ and $W_i^c$ are the weights defined as

$$W_0^m = \frac{\lambda}{\lambda + N} \quad (4.4.14)$$

$$W_0^c = \frac{\lambda}{\lambda + N} + 1 - \alpha^2 + \beta \quad (4.4.15)$$

$$W_i^m = W_i^c = \frac{1}{2(N + \lambda)} \quad (4.4.16)$$

for i = 1, 2, ..., 2N. For a Gaussian distribution, β = 2 gives an optimal result [130, 68]. Next is the measurement update step, where the new sigma points, $\mathcal{X}_{i,k|k-1}^x$, are transformed by the function h, together with the observation noise sigma points, $\mathcal{X}_{i,k-1}^w$, and the input signal, $u_k$,

$$\mathcal{Z}_{i,k|k-1} = h\left(\mathcal{X}_{i,k|k-1}^x, \mathcal{X}_{i,k-1}^w, u_k\right). \quad (4.4.17)$$

Then the predicted observation and its covariance can be calculated by

$$\hat{z}_k^- = \sum_{i=0}^{2N} W_i^m\, \mathcal{Z}_{i,k|k-1} \quad (4.4.18)$$

$$P_{z_k}^- = \sum_{i=0}^{2N} W_i^c \left(\mathcal{Z}_{i,k|k-1} - \hat{z}_k^-\right)\left(\mathcal{Z}_{i,k|k-1} - \hat{z}_k^-\right)^\top. \quad (4.4.19)$$

The cross covariance is given by

$$P_{x_k z_k}^- = \sum_{i=0}^{2N} W_i^c \left(\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-\right)\left(\mathcal{Z}_{i,k|k-1} - \hat{z}_k^-\right)^\top. \quad (4.4.20)$$

Finally, the UKF state estimate and its covariance at time k are given by

$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - \hat{z}_k^-\right) \quad (4.4.21)$$

$$P_{x_k} = P_{x_k}^- - K_k P_{z_k}^- K_k^\top, \quad (4.4.22)$$

where the Kalman gain is given by

$$K_k = P_{x_k z_k}^- \left(P_{z_k}^-\right)^{-1}. \quad (4.4.23)$$
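The recursion in Eqns. 4.4.7-4.4.23 can be sketched in code. The following is a minimal sketch, not the implementation used in this research, and it uses the common additive-noise simplification: sigma points are drawn for the state alone and the noise covariances Q and R are added to the predicted covariances, rather than carrying the full augmented state of Eqn. 4.4.3. All function and variable names are illustrative.

```python
import numpy as np

def ukf_step(f, h, x, P, Q, R, z, u=None, alpha=1e-3, beta=2.0, kappa=0.0):
    """One UKF predict/update cycle (additive-noise form of Eqns. 4.4.7-4.4.23)."""
    N = x.size
    lam = alpha**2 * (N + kappa) - N            # Eqn. 4.4.9
    gamma = np.sqrt(N + lam)                    # Eqn. 4.4.8

    # sigma points (Eqn. 4.4.7): columns of a matrix square root of P
    S = np.linalg.cholesky(P)
    sigmas = np.hstack([x[:, None], x[:, None] + gamma * S, x[:, None] - gamma * S])

    # weights (Eqns. 4.4.14-4.4.16)
    Wm = np.full(2 * N + 1, 1.0 / (2.0 * (N + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (N + lam)
    Wc[0] = lam / (N + lam) + 1.0 - alpha**2 + beta

    # time update (Eqns. 4.4.11-4.4.13)
    X = np.column_stack([f(s, u) for s in sigmas.T])
    x_pred = X @ Wm
    dX = X - x_pred[:, None]
    P_pred = dX @ np.diag(Wc) @ dX.T + Q

    # measurement update (Eqns. 4.4.17-4.4.23)
    Z = np.column_stack([h(s) for s in X.T])
    z_pred = Z @ Wm
    dZ = Z - z_pred[:, None]
    Pz = dZ @ np.diag(Wc) @ dZ.T + R
    Pxz = dX @ np.diag(Wc) @ dZ.T
    K = Pxz @ np.linalg.inv(Pz)                 # Kalman gain, Eqn. 4.4.23
    x_new = x_pred + K @ (z - z_pred)           # Eqn. 4.4.21
    P_new = P_pred - K @ Pz @ K.T               # Eqn. 4.4.22
    return x_new, P_new
```

For the examples below, f would be the augmented-state recursion (e.g. Eqn. 4.4.25) and h the corresponding measurement equation.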

Example: Linear oscillator model

Consider a linear mass on a spring model

$$\ddot{y}_t + 2\zeta\Omega\dot{y}_t + \Omega^2 y_t = u_t. \quad (4.4.24)$$

Let Ω = 1, and suppose we want to estimate the value of ζ. With sampling interval Δ = 1, the linear mass-spring model can be written in state-space form as

$$x_1[k] = \frac{1}{1 + x_3[k-1]}\, x_1[k-1] - \frac{1 - x_3[k-1]}{1 + x_3[k-1]}\, x_2[k-1] + \frac{1}{1 + x_3[k-1]}\, u[k-1]$$

x2[k] = x1[k − 1]

x3[k] = x3[k − 1] (4.4.25)

$$z[k] = x_1[k], \quad (4.4.26)$$

where

x1[k] = y[k]

x2[k] = y[k − 1]

$$\hat{\zeta}[k] = x_3[k] \quad (4.4.27)$$

and $u_t \sim N(0, 1)$. Let the initial values be

$$x_0 = \begin{bmatrix} 0 \\ 0 \\ 0.1 \end{bmatrix} \quad (4.4.28)$$

and

$$P_0 = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 10 \end{bmatrix}, \quad (4.4.29)$$

while the process and measurement noise are $v_t \sim N(0, 0.1)$ and $w_t \sim N(0, 0.1)$, respectively. The length of the signals is N = 100 and the sampling interval is Δ = 1. The response signals are simulated by the central difference with ζ = 0.4. Figure 4.4.1 shows that the UKF gives a good estimate of the state for this linear oscillator model. Parameter estimation by the UKF also works well for this model: Fig. 4.4.2 shows that the estimate converges quickly to its actual value. The UKF can also be used to estimate more than one parameter. Consider the same linear system as in Eqn. 4.4.24, but with both Ω and ζ unknown. The linear mass-spring model can be written in state-space form as

$$x_1[k] = \frac{2 - x_4[k-1]^2}{1 + x_3[k-1]x_4[k-1]}\, x_1[k-1] - \frac{1 - x_3[k-1]x_4[k-1]}{1 + x_3[k-1]x_4[k-1]}\, x_2[k-1] + \frac{1}{1 + x_3[k-1]x_4[k-1]}\, u[k-1]$$

x2[k] = x1[k − 1]

x3[k] = x3[k − 1]

$$x_4[k] = x_4[k-1] \quad (4.4.30)$$


Figure 4.4.1: State estimation for linear oscillator


Figure 4.4.2: Parameter estimation for linear oscillator

z[k] = x1[k], (4.4.31) where

x1[k] = y[k],

x2[k] = y[k − 1],

$$x_3[k] = \hat{\zeta}[k],$$

$$x_4[k] = \hat{\Omega}[k], \quad (4.4.32)$$

while $u_t \sim N(0, 1)$ and uncorrelated in time. Notice that an additional state, $x_4[k]$, has been added to the state-space equations compared with the previous case of the same linear system; this new state is used to estimate the second unknown parameter, Ω[k]. Let the initial values be

$$x_0 = \begin{bmatrix} 0 \\ 0 \\ 0.1 \\ 3 \end{bmatrix} \quad (4.4.33)$$

and

$$P_0 = \begin{bmatrix} 10 & 0 & 0 & 0 \\ 0 & 10 & 0 & 0 \\ 0 & 0 & 10 & 0 \\ 0 & 0 & 0 & 10 \end{bmatrix}, \quad (4.4.34)$$

while the process and measurement noise are $v_t \sim N(0, 0.1)$ and $w_t \sim N(0, 0.1)$, respectively. The length of the signals is N = 1000 and the sampling interval is Δ = 1. The response signals are simulated by the central difference with ζ = 0.4 and Ω = 1. Figure 4.4.3 shows the UKF state estimation for this linear oscillator with two unknown parameters, and Fig. 4.4.4 shows that both unknown parameters are quickly estimated by the UKF technique.
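The central-difference recursion used to simulate the oscillator responses above can be written as a short routine. This is an illustrative sketch (function name, step size and test input are assumptions, not the thesis code); with Δ = 1 it reproduces the state equations used for the UKF.

```python
import numpy as np

def simulate_oscillator(zeta, omega, u, dt=1.0):
    """Central-difference recursion for y'' + 2*zeta*omega*y' + omega^2*y = u.

    Rearranging (y[k+1]-2y[k]+y[k-1])/dt^2
    + 2*zeta*omega*(y[k+1]-y[k-1])/(2*dt) + omega^2*y[k] = u[k]
    for y[k+1] gives the one-step update used as the UKF state equation
    (there with dt = 1).
    """
    n = len(u)
    y = np.zeros(n)
    denom = 1.0 + zeta * omega * dt             # coefficient of y[k+1]
    for k in range(1, n - 1):
        y[k + 1] = ((2.0 - (omega * dt) ** 2) * y[k]
                    - (1.0 - zeta * omega * dt) * y[k - 1]
                    + dt ** 2 * u[k]) / denom
    return y
```

With an impulse input the recursion produces a decaying oscillation, the expected behaviour of the damped linear oscillator.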


Figure 4.4.3: State estimation for linear oscillator with two unknown parameters, Ω and ζ



Figure 4.4.4: Parameter estimation for linear oscillator with two unknown parameters, Ω and ζ

Example: Duffing oscillator

The Duffing nonlinear system can be modelled as

$$m\ddot{y}_t + c\dot{y}_t + k y_t + k_3 y_t^3 = u_t. \quad (4.4.35)$$

For this Duffing system, the state equations can be written as

$$x_1[k] = \frac{4m - 2\Delta^2 k}{2m + \Delta c}\, x_1[k-1] + \frac{\Delta c - 2m}{2m + \Delta c}\, x_2[k-1] - \frac{2\Delta^2 k_3}{2m + \Delta c}\, x_1[k-1]^3 + \frac{2\Delta^2}{2m + \Delta c}\, u[k-1]$$

x2[k] = x1[k − 1], (4.4.36) and the measurement equation is

z[k] = x1[k], (4.4.37) where

x1[k] = y[k]

x2[k] = y[k − 1] (4.4.38)

Since we also want to estimate the nonlinear parameter, $k_3$, we replace it by the state $x_3$ and use

$$x_1[k] = \frac{4m - 2\Delta^2 k}{2m + \Delta c}\, x_1[k-1] + \frac{\Delta c - 2m}{2m + \Delta c}\, x_2[k-1] - \frac{2\Delta^2 x_3[k-1]}{2m + \Delta c}\, x_1[k-1]^3 + \frac{2\Delta^2}{2m + \Delta c}\, u[k-1]$$

x2[k] = x1[k − 1]

x3[k] = x3[k − 1], (4.4.39) where

$$\hat{k}_3[k] = x_3[k]. \quad (4.4.40)$$

The values of the coefficients are given by

m = 1, c = 1, k = 1,

$$k_3 = 3.$$


Figure 4.4.5: State estimation for Duffing oscillator

Wave surface data are used as the input for the system. The wave surface elevation time series were simulated for a water depth of 10 m, with a significant wave height of 2 m and a mean zero up-crossing wave period of 14 s. The wave surface amplitudes are of the non-deterministic spectral type, which means the simulated waves have random amplitudes and phases. The sampling interval for this time series is Δ = 0.305175781 s, and the length of the time series is about 3051.8 s, consisting of 10000 records. The simulation of the wave surface elevation data follows Najafian et al. [94]. The initial values are given by

$$x_0 = \begin{bmatrix} 0 \\ 0 \\ 5 \end{bmatrix} \quad (4.4.41)$$

and

$$P_0 = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 10 \end{bmatrix}, \quad (4.4.42)$$

while the process and measurement noise are $v_t \sim N(0, 0.1)$ and $w_t \sim N(0, 0.1)$, respectively. Once again, we estimate more than one parameter by using the UKF. Consider the case where the stiffness parameter, k, is also unknown. By using the UKF,


Figure 4.4.6: Parameter estimation for Duffing oscillator

we can estimate both the k and $k_3$ parameters by introducing another state, $x_4[k]$, into the state equation in Eqn. 4.4.36. Hence Eqn. 4.4.36 can be rewritten as

$$x_1[k] = \frac{4m - 2\Delta^2 x_4[k-1]}{2m + \Delta c}\, x_1[k-1] + \frac{\Delta c - 2m}{2m + \Delta c}\, x_2[k-1] - \frac{2\Delta^2 x_3[k-1]}{2m + \Delta c}\, x_1[k-1]^3 + \frac{2\Delta^2}{2m + \Delta c}\, u[k-1]$$

x2[k] = x1[k − 1]

x3[k] = x3[k − 1]

x4[k] = x4[k − 1], (4.4.43) where

$$\hat{k}_3[k] = x_3[k] \quad (4.4.44)$$

and

$$\hat{k}[k] = x_4[k]. \quad (4.4.45)$$

The same values for all the parameters as in the previous Duffing example have been used, where

m = 1, c = 1, k = 1,

$$k_3 = 3.$$


Figure 4.4.7: State estimation for Duffing oscillator with two unknown parameters, k and k3

The same wave surface data are also used as the input for the system. The initial values are given by

$$x_0 = \begin{bmatrix} 0 \\ 0 \\ 5 \\ 3 \end{bmatrix} \quad (4.4.46)$$

and

$$P_0 = \begin{bmatrix} 10 & 0 & 0 & 0 \\ 0 & 10 & 0 & 0 \\ 0 & 0 & 10 & 0 \\ 0 & 0 & 0 & 10 \end{bmatrix}, \quad (4.4.47)$$

while the process and measurement noise are $v_t \sim N(0, 0.1)$ and $w_t \sim N(0, 0.1)$, respectively.
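The Duffing state equation in Eqn. 4.4.36 can also be simulated directly. The sketch below is illustrative (the wave-surface input is replaced by a simple sinusoid, and the step size is an assumption); it shows the hardening effect of the cubic term, which reduces the response amplitude at the linear resonance compared with $k_3 = 0$.

```python
import numpy as np

def simulate_duffing(m, c, k, k3, u, dt):
    """Central-difference recursion for m*y'' + c*y' + k*y + k3*y^3 = u
    (the discretization underlying Eqn. 4.4.36)."""
    n = len(u)
    y = np.zeros(n)
    denom = 2.0 * m + dt * c
    for j in range(1, n - 1):
        y[j + 1] = ((4.0 * m - 2.0 * dt**2 * k) * y[j]
                    + (dt * c - 2.0 * m) * y[j - 1]
                    - 2.0 * dt**2 * k3 * y[j] ** 3
                    + 2.0 * dt**2 * u[j]) / denom
    return y
```

Driving the system at the linear natural frequency with m = c = k = 1 and comparing $k_3 = 0$ against $k_3 = 3$ illustrates the detuning caused by the nonlinearity.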

Example: Van der Pol oscillator

The Van der Pol system can be modelled as

$$\ddot{y}_t + \mu\left(y_t^2 - 1\right)\dot{y}_t + y_t = u_t. \quad (4.4.48)$$



Figure 4.4.8: Parameter estimation for Duffing oscillator with two unknown parameters, k and $k_3$

Its state equations can be written as

$$x_1[k] = \frac{4 - 2\Delta^2}{2 + \Delta\mu x_1[k-1]^2 - \Delta\mu}\, x_1[k-1] + \frac{\Delta\mu x_1[k-1]^2 - \Delta\mu - 2}{2 + \Delta\mu x_1[k-1]^2 - \Delta\mu}\, x_2[k-1] + \frac{2\Delta^2}{2 + \Delta\mu x_1[k-1]^2 - \Delta\mu}\, u[k-1]$$

$$x_2[k] = x_1[k-1] \quad (4.4.49)$$

and the measurement equation is

z[k] = x1[k], (4.4.50) where

x1[k] = y[k] (4.4.51) and

x2[k] = y[k − 1]. (4.4.52)

Let µ = 2. Since we want to estimate the nonlinear parameter, µ, we add

$$x_3[k] = x_3[k-1] \quad (4.4.53)$$

to the state equations in Eqn. 4.4.49, with µ replaced by $x_3[k-1]$. Hence Eqn. 4.4.49 is redefined as

$$x_1[k] = \frac{4 - 2\Delta^2}{2 + \Delta x_3[k-1] x_1[k-1]^2 - \Delta x_3[k-1]}\, x_1[k-1] + \frac{\Delta x_3[k-1] x_1[k-1]^2 - \Delta x_3[k-1] - 2}{2 + \Delta x_3[k-1] x_1[k-1]^2 - \Delta x_3[k-1]}\, x_2[k-1] + \frac{2\Delta^2}{2 + \Delta x_3[k-1] x_1[k-1]^2 - \Delta x_3[k-1]}\, u[k-1]$$

$$x_2[k] = x_1[k-1]$$

$$x_3[k] = x_3[k-1], \quad (4.4.54)$$

where

$$\hat{\mu}[k] = x_3[k]. \quad (4.4.55)$$

The initial values are given by

$$x_0 = \begin{bmatrix} 0 \\ 0 \\ 1.5 \end{bmatrix} \quad (4.4.56)$$


Figure 4.4.9: State estimation for Van der Pol oscillator

Figure 4.4.10: Parameter estimation for Van der Pol oscillator

and

$$P_0 = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 10 \end{bmatrix}. \quad (4.4.57)$$

Figures 4.4.9 and 4.4.10 show that the UKF technique provides a good estimate of the response signals and the unknown parameter for the Van der Pol nonlinear oscillator model.
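The Van der Pol recursion in Eqn. 4.4.49 can likewise be simulated directly. The sketch below is illustrative (unforced, with a small initial perturbation so the limit cycle develops; the step size and initial values are assumptions). For µ = 2 the unforced Van der Pol oscillator settles onto a limit cycle with amplitude close to 2.

```python
import numpy as np

def simulate_vdp(mu, u, dt):
    """Central-difference recursion for y'' + mu*(y^2 - 1)*y' + y = u
    (the discretization underlying Eqn. 4.4.49)."""
    n = len(u)
    y = np.zeros(n)
    y[1] = 1e-3                                  # small perturbation off the origin
    for k in range(1, n - 1):
        nl = dt * mu * (y[k] ** 2 - 1.0)         # nonlinear damping term
        y[k + 1] = ((4.0 - 2.0 * dt**2) * y[k]
                    + (nl - 2.0) * y[k - 1]
                    + 2.0 * dt**2 * u[k]) / (2.0 + nl)
    return y
```

The negative damping near the origin makes small oscillations grow, while the $y^2$ term limits them, producing the characteristic self-sustained oscillation.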

4.5 Summary

• When feasible, the central difference has been used to simulate the responses of linear and nonlinear dynamic systems. The central difference method gives accurate results when compared with the Runge-Kutta method for solving the differential equations in continuous time.

• In principle, the NARX model can be used to build a model for nonlinear dynamic systems in terms of polynomials of lagged responses and inputs, provided such a representation converges to the system response [8]. However, even if it does converge, the number of such terms needed for a reasonable approximation may be excessive.

• From the application of the UKF to linear and nonlinear dynamic systems, it can be concluded that the UKF performs well in estimating the responses and their unknown parameters. Since the convergence of the UKF algorithm was quite fast, it is suitable for on-line identification of nonlinear dynamic systems. However, the UKF technique requires some information on the descriptive statistics of the states (time series data) to set up the initial state values and the initial covariance. If initial values are poorly chosen, the performance deteriorates. We have used the UKF for system identification of a nonlinear oscillating flap wave energy converter, and it shows some promising results [10].

Chapter 5

Nonlinear system identification in frequency domain

5.1 Introduction

The spectral analysis method discussed in Chapter 3 is only suitable for analyzing a linear dynamic system. The response amplitude operator is not sufficient to analyze a nonlinear dynamic system; hence, the nonlinear characteristics of a nonlinear dynamic system cannot be described by the spectral analysis technique discussed previously. In this chapter, we discuss two methods for identification of nonlinear dynamic systems in the frequency domain. The first method was proposed by Julius S. Bendat. It is based on the spectral analysis technique: the single-input single-output nonlinear system model is remodelled as parallel linear and nonlinear subsystems with a single input and single output. Bendat et al. have shown the application of this identification method on the Duffing and Van der Pol oscillators and on a nonlinear drift force model [12, 13]; another application of this technique is the identification of a moored structural system [95]. The second method is the generalized frequency response function (GFRF), based on the spectral analysis technique and the NARMAX model. Applications of this technique include the identification of nonlinear dynamic systems with quadratic and cubic nonlinearities [80], multi-input multi-output nonlinear


systems [122] and nonlinear Volterra systems [61].

Figure 5.2.1: SISO model with nonlinear feedback

5.2 Bendat’s nonlinear system identification

A single-input single-output (SISO) nonlinear system model with nonlinear feedback can be illustrated as in Fig. 5.2.1. In mathematical form, it can be written in the frequency domain as

Y (ω) = H (ω)[X (ω) − Z (ω)]

= H (ω) X (ω) − H (ω) Z (ω) (5.2.1) where

Y(ω) is the output/response in the frequency domain,
X(ω) is the input in the frequency domain,
H(ω) is the linear system FRF,
z(y) is the nonlinear feedback function of $y_t$ (the nonlinear system output), and
Z(ω) is z(y) in the frequency domain.

Bendat et al. [12, 11, 13] proposed a technique to identify the linear system frequency response function, H(ω), from the observed input and output data. This technique also provides a way to analyze the frequency properties of the known or unknown nonlinear feedback system. According to Bendat et al. [13], a nonlinear system such as that in Fig. 5.2.1 can be redefined as a mathematical reverse SISO nonlinear model without feedback, as shown in Fig. 5.2.2, where

Figure 5.2.2: Reverse SISO nonlinear model without feedback

$x_t$ is the mathematical total output (= measured input),
$y_t$ is the mathematical input (= measured output),
$A_1(\omega) = [H(\omega)]^{-1}$ is the reciprocal of the linear subsystem, and
z(y) is the nonlinear feedback function of $y_t$ (the nonlinear subsystem output).

Mathematically, from Eqn. 5.2.1

$$X(\omega) = \frac{1}{H(\omega)}\, Y(\omega) + Z(\omega). \quad (5.2.2)$$

Let

$$A_1(\omega) = [H(\omega)]^{-1} \quad (5.2.3)$$

and since

$$Z(\omega) = H_{NL}(\omega)\, Y(\omega) \quad (5.2.4)$$

where $H_{NL}(\omega)$ is the assumed nonlinear transfer function, hence

X (ω) = A1 (ω) Y (ω) + HNL (ω) Y (ω)

= [A1 (ω) + HNL (ω)] Y (ω) (5.2.5)

The system can then be modelled as a reverse SISO model with parallel linear and nonlinear subsystems, as in Fig. 5.2.3, where

$n_t$ is the mathematical output noise,
g(y) is the nonlinear system, and
$y_{2,t} = g[y_t]$ is the input to the linear system $A_2(\omega)$.

Figure 5.2.3: Reverse SISO model with parallel linear and nonlinear subsystems

In the frequency domain, the relationships between the Fourier transforms of the input and responses in Fig. 5.2.3 are given as

X1(ω) = A1(ω)Y1(ω), (5.2.6)

X2(ω) = A2(ω)Y2(ω), (5.2.7)

X(ω) = X1(ω) + X2(ω) + N(ω). (5.2.8)

The next step is to compute the mutually uncorrelated outputs, {Ui(ω); i = 1, 2}, from the correlated outputs, {Yi(ω); i = 1, 2}, by using the ordered conditioned spectral density function (SDF) defined as

Ui(ω) = Yi·(i−1)!(ω), (5.2.9) where

U1(ω) = Y1(ω),

Yj·r!(ω) = Yj·(r−1)!(ω) − Lrj(ω)Yr·(r−1)!(ω); j > r,

$$L_{rj}(\omega) = \frac{G_{rj\cdot(r-1)!}(\omega)}{G_{rr\cdot(r-1)!}(\omega)}, \quad (5.2.10)$$

for r = 1, 2, ..., j−1 and j = 1, 2, ..., q+1, where q is the number of uncorrelated inputs. Here $G_{ii}(\omega)$ and $G_{ij}(\omega)$ are the auto-spectrum of $y_{i,t}$ and the cross-spectrum of $y_{i,t}$ and $y_{j,t}$, respectively. Therefore, for the nonlinear subsystem model, the uncorrelated inputs can be computed from

U1(ω) = Y1(ω), (5.2.11)

U2(ω) = Y2·1(ω) = Y2(ω) − L12(ω)Y1(ω), (5.2.12) CHAPTER 5. NONLINEAR : FREQUENCY DOMAIN 73

Figure 5.2.4: Revised reverse SISO model with uncorrelated responses

where

$$L_{12}(\omega) = \frac{G_{12}(\omega)}{G_{11}(\omega)}. \quad (5.2.13)$$

Since the uncorrelated responses, {Ui(ω)}, are used, they now pass through the new linear systems $L_{ix}(\omega)$ instead of $A_i(\omega)$, as shown in Fig. 5.2.4. This new linear subsystem is defined as

$$L_{ix}(\omega) = \frac{G_{ix\cdot(i-1)!}(\omega)}{G_{ii\cdot(i-1)!}(\omega)} = \frac{G_{u_i x}(\omega)}{G_{u_i u_i}(\omega)}. \quad (5.2.14)$$

Hence, for this analysis,

$$L_{1x}(\omega) = \frac{G_{u_1 x}(\omega)}{G_{u_1 u_1}(\omega)}, \quad (5.2.15)$$

$$L_{2x}(\omega) = \frac{G_{u_2 x}(\omega)}{G_{u_2 u_2}(\omega)}. \quad (5.2.16)$$

The optimum linear subsystems {Ai(ω)} can then be computed by

A2(ω) = L2x(ω), (5.2.17)

$$A_1(\omega) = L_{1x}(\omega) - \frac{G_{12}(\omega)}{G_{11}(\omega)}\, A_2(\omega). \quad (5.2.18)$$

Hence the FRFs are

$$H_1(\omega) = [A_1(\omega)]^{-1}, \quad (5.2.19)$$

$$H_2(\omega) = [A_2(\omega)]^{-1}. \quad (5.2.20)$$

The percentage of the input spectrum, X(ω), due to each of the uncorrelated inputs can be calculated from the coherence, defined as

$$\gamma_{u_1 x}^2(\omega) = \frac{|G_{u_1 x}(\omega)|^2}{G_{u_1 u_1}(\omega)\, G_{xx}(\omega)}, \quad (5.2.21)$$

$$\gamma_{u_2 x}^2(\omega) = \frac{|G_{u_2 x}(\omega)|^2}{G_{u_2 u_2}(\omega)\, G_{xx}(\omega)}, \quad (5.2.22)$$

and the multiple coherence function is

$$\gamma_{x:y}^2 = \gamma_{u_1 x}^2 + \gamma_{u_2 x}^2. \quad (5.2.24)$$
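The conditioning step in Eqns. 5.2.12-5.2.13 can be sketched numerically with segment-averaged spectra. This is an illustrative sketch only (function name, segmenting scheme and return values are assumptions, not the thesis implementation): it estimates $L_{12} = G_{12}/G_{11}$, forms the conditioned record $U_2 = Y_2 - L_{12}Y_1$, and computes the ordinary coherence of $y_1$ with the total input x.

```python
import numpy as np

def conditioned_spectra(y1, y2, x, nseg=8):
    """Estimate L12 = G12/G11, the conditioned output U2 = Y2 - L12*Y1
    (Eqns. 5.2.12-5.2.13) and the coherence of y1 with x (Eqn. 5.2.21),
    using auto/cross spectra averaged over nseg non-overlapping segments."""
    n = len(y1) // nseg
    Y1 = np.array([np.fft.rfft(y1[i*n:(i+1)*n]) for i in range(nseg)])
    Y2 = np.array([np.fft.rfft(y2[i*n:(i+1)*n]) for i in range(nseg)])
    X  = np.array([np.fft.rfft(x[i*n:(i+1)*n])  for i in range(nseg)])
    G11 = np.mean(np.abs(Y1) ** 2, axis=0)          # auto-spectrum of y1
    G12 = np.mean(np.conj(Y1) * Y2, axis=0)         # cross-spectrum y1, y2
    L12 = G12 / G11
    U2 = Y2 - L12 * Y1                              # uncorrelated (conditioned) output
    G1x = np.mean(np.conj(Y1) * X, axis=0)
    Gxx = np.mean(np.abs(X) ** 2, axis=0)
    coh = np.abs(G1x) ** 2 / (G11 * Gxx)            # ordinary coherence
    return L12, U2, coh
```

As a sanity check, feeding the same record in as $y_1$, $y_2$ and x gives $L_{12} = 1$, a vanishing conditioned output and unit coherence at every frequency.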

5.3 Generalized Frequency Response Function

The generalized frequency response function (GFRF) can be used to analyze a multi-degree-of-freedom dynamic system. One of the advantages of the GFRF is in detecting the coupling between modes in the system. By using a method known as the probing method, we can obtain an analytical expression for the frequency response function of a nonlinear dynamic system. The method has been discussed by Billings and Tsang [15], and the application of the probing method to a nonlinear dynamic model has been shown by Metcalfe et al. [91]. In this research, the technique has been applied to the nonlinear ARX models of the Duffing and Van der Pol weakly nonlinear oscillators [8]. The first step in computing the nonlinear frequency response is to let the input, u(t), be a sum of K exponentials

$$u(t) = \sum_{k=1}^{K} A_k e^{i\omega_k t}, \quad (5.3.1)$$

where $A_k$ are the amplitudes. Meanwhile, the output can be expressed as

$$y(t) = \sum_{n=1}^{\infty} y_n(t), \quad (5.3.2)$$

where the nth order output is

$$y_n(t) = \sum_{k_1=1}^{K} \cdots \sum_{k_n=1}^{K} \left[A_{k_1} \cdots A_{k_n} H_n\left(\omega_{k_1}, \dots, \omega_{k_n}\right)\right] e^{i\left(\omega_{k_1} + \dots + \omega_{k_n}\right)t}, \quad (5.3.3)$$

where $H_n$ is the nonlinear transfer function of order n. By setting K = n and $A_k = 1$ for all k = 1, 2, ..., n, Eqns. 5.3.1 and 5.3.3 can be written as

$$u(t) = \sum_{k=1}^{n} e^{i\omega_k t}, \quad (5.3.4)$$

and

$$y_n(t) = \sum_{k_1=1}^{n} \cdots \sum_{k_n=1}^{n} H_n\left(\omega_{k_1}, \dots, \omega_{k_n}\right) e^{i\left(\omega_{k_1} + \dots + \omega_{k_n}\right)t}. \quad (5.3.5)$$

As an example, consider a nonlinear dynamic system modelled by the differential equation

$$\gamma_3 u(t) = \dot{y}(t) + \gamma_1 y(t) + \gamma_2 y^2(t), \quad (5.3.6)$$

where $\gamma_1$, $\gamma_2$ and $\gamma_3$ are constants.

Let K = 1, Ak = 1 and ωk = ω, then from Eqn. 5.3.1, the first probing input is

u(t) = eiωt. (5.3.7)

With n = 1, the first probing output is given from Eqn. 5.3.5 as

$$y(t) = H_1(\omega)\, e^{i\omega t}. \quad (5.3.8)$$

Hence

$$\dot{y}(t) = i\omega H_1(\omega)\, e^{i\omega t}. \quad (5.3.9)$$

Substituting Eqns. 5.3.7-5.3.9 into Eqn. 5.3.6 gives

$$\gamma_3 e^{i\omega t} = i\omega H_1(\omega) e^{i\omega t} + \gamma_1 H_1(\omega) e^{i\omega t} + \gamma_2 \left[H_1(\omega) e^{i\omega t}\right]^2, \quad (5.3.10)$$

and equating coefficients of $e^{i\omega t}$ gives

$$H_1(\omega) = \frac{\gamma_3}{i\omega + \gamma_1}. \quad (5.3.11)$$

Next, for the second probing input, let K = 2 and $A_k = 1$ for all k; then from Eqn. 5.3.1,

$$u(t) = e^{i\omega_1 t} + e^{i\omega_2 t}, \quad (5.3.12)$$

and with n = 2, from Eqn. 5.3.5,

$$y(t) = H_1(\omega_1) e^{i\omega_1 t} + H_1(\omega_2) e^{i\omega_2 t} + 2!\, H_2(\omega_1, \omega_2) e^{i(\omega_1+\omega_2)t} + H_2(\omega_1, \omega_1) e^{i2\omega_1 t} + H_2(\omega_2, \omega_2) e^{i2\omega_2 t}. \quad (5.3.13)$$

Hence

$$\dot{y}(t) = i\omega_1 H_1(\omega_1) e^{i\omega_1 t} + i\omega_2 H_1(\omega_2) e^{i\omega_2 t} + i\,2!\,(\omega_1 + \omega_2) H_2(\omega_1, \omega_2) e^{i(\omega_1+\omega_2)t} + i2\omega_1 H_2(\omega_1, \omega_1) e^{i2\omega_1 t} + i2\omega_2 H_2(\omega_2, \omega_2) e^{i2\omega_2 t}. \quad (5.3.14)$$

Substituting Eqns. 5.3.12-5.3.14 into Eqn. 5.3.6 and equating coefficients of $2!\, e^{i(\omega_1+\omega_2)t}$ gives

$$H_2(\omega_1, \omega_2) = -\frac{\gamma_2 H_1(\omega_1) H_1(\omega_2)}{i(\omega_1 + \omega_2) + \gamma_1} = -\frac{\gamma_2}{\gamma_3}\, H_1(\omega_1) H_1(\omega_2) H_1(\omega_1 + \omega_2). \quad (5.3.15)$$
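The closed-form GFRFs in Eqns. 5.3.11 and 5.3.15 can be evaluated numerically at any frequency pair. A minimal sketch (the factory function name is illustrative):

```python
import numpy as np

def gfrf(gamma1, gamma2, gamma3):
    """First- and second-order frequency response functions for the system
    gamma3*u(t) = y'(t) + gamma1*y(t) + gamma2*y(t)^2
    (Eqns. 5.3.11 and 5.3.15)."""
    def H1(w):
        return gamma3 / (1j * w + gamma1)

    def H2(w1, w2):
        # second form of Eqn. 5.3.15
        return -(gamma2 / gamma3) * H1(w1) * H1(w2) * H1(w1 + w2)

    return H1, H2
```

Evaluating $|H_2(\omega_1, \omega_2)|$ over a grid of $(\omega_1, \omega_2)$ is how the GFRF is typically visualized; the two algebraically equivalent forms of Eqn. 5.3.15 can be used to cross-check the implementation.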

5.4 Summary

• We have shown that the estimated RAO for the linear subsystem from Bendat's nonlinear system identification technique is almost the same as the RAO from the linear case [10]. It is also shown that Bendat's technique can be used to estimate the nonlinear coefficient by averaging the nonlinear impedance function for the nonlinear dynamic model.

• We have applied the probing technique to the nonlinear ARX models of the Duffing and Van der Pol weakly nonlinear oscillators [8]. The frequency response functions from the probing method appear to give a useful frequency-domain characterization of the systems.

Chapter 6

Wavelets and System Identification

6.1 Wavelet Transforms

The wavelet concept was established in the late 1970s by Jean Morlet, a geophysicist, who tried to decompose seismic signals into "wavelets of constant shape". Together with Grossman, he defined the concept of a wavelet and suggested the name wavelet, meaning little or small wave. In 1985, Yves Meyer developed the mathematical theory of wavelets. During the 1980s, researchers rediscovered Alfred Haar's orthogonal basis function, introduced in 1909; the first and probably simplest wavelet is known as the Haar wavelet. Stephane Mallat, who was supervised by Meyer, applied wavelet concepts in his doctoral research on multiresolution analysis. He then constructed the Discrete Wavelet Transform (DWT) and the DWT algorithm using a signal processing approach, which is simpler and faster. Building on Mallat's work, Ingrid Daubechies in the late 1980s constructed a set of wavelet orthonormal basis functions known as the Daubechies wavelets, one of the biggest milestones in the development of wavelets [86]. Wavelets transform functions or data from the time domain into the time-frequency domain. In wavelets, the frequency term is usually referred to as scale, which differentiates it from the Fourier transform. Wavelets work for both stationary and nonstationary signals, so we can identify the spectra of signals at a specific time.


This feature of wavelets is what makes them more versatile than the usual Fourier analysis. Wavelet transforms are also computationally fast compared with Fourier transforms: for example, the execution time for the DWT is proportional to n (the size of the data set), while for the Fast Fourier Transform it is of order n log n [97]. Just as the Fourier transform has an infinite set of basis functions for Fourier series, wavelets also have an infinite number of basis functions, known as mother wavelets. Other important features of wavelets are the sparsity of the wavelet representation (the coefficients), the efficiency in terms of storage, and the ability to analyze functions and time series at different scales. These features are an advantage when analyzing large samples of data, where we can analyze only the coefficients of the most significant scales instead of the whole data set, and drop the coefficients of any scales that do not contribute significantly to the analysis; this reduces computation time and storage. Wavelets have been used in solving differential equations, statistics, turbulence analysis, image processing and signal processing [53]. Applications of wavelets in statistics occur in time series analysis, variance stabilization and the smoothing problem. In this chapter, we discuss the application of wavelets to system identification of dynamic systems. Since the wavelet is localized in the time-scale domain, certain information can be accessed immediately from the wavelet representation of a time series. The ability of wavelets to detect even a very weak signal, by using their local amplification and compression capability, is advantageous in analyzing such systems. Also, by using the wavelet transform, the random property of a chaotic response can be observed even for a very short time series [137].
Wavelets have been used extensively for system identification of linear and nonlinear dynamic systems [120]. For example, Zheng et al. [139] used the wavelet transform to investigate the recognition of chaos. Methods based on wavelets have been proposed by Kitada which accurately determine the hysteresis curves and damping coefficients of structural systems [71]. Pernot and Lamarque [103] computed the transient responses of parametrically excited dynamical systems by using the wavelet transform and also used it for a stability analysis of linear systems. Gouttebroze [47] used a wavelet identification technique to identify the characteristics of a structural system by analyzing the amplitude and phase of the wavelet transform of vibration data. A procedure to identify zero-memory nonlinear discrete structural systems has been proposed by Ghanem and Romeo; this technique estimates the parameters of an a priori known dynamical model and can also be used to identify classes of nonlinear models [45]. By using the wavelet transform, Wong and Chen [134] studied the nonlinear and chaotic behaviour of a structural system. Argoul and Le [4] proposed instantaneous indicators to characterize the nonlinear behaviour of mechanical structures by using the continuous Cauchy wavelet. Yu et al. [137] applied wavelet transform techniques to analyze the nonlinear dynamical system of ship rolling and heave-roll coupling. It has also been shown that wavelets can be applied to the NARMAX model [14]. Percival and Walden [102] divided the wavelet transform into the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). Generally, the CWT of the signal x(t) at time b and scale a can be defined as

$$T_x(a, b) = \frac{1}{a}\int_{-\infty}^{\infty} x(t)\, \psi^*\!\left(\frac{t-b}{a}\right) dt \quad (6.1.1)$$

where ψ(·) is known as the wavelet function [102]. Details of the CWT are discussed in the next section. Meanwhile, the DWT can be formulated from the CWT or in its own right. The DWT of a signal consists of wavelet coefficients, $\{W_{j,n}\}$, and scaling coefficients, $\{V_{j,n}\}$. Both sets of coefficients can be obtained by using the pyramid algorithm, with which the computation of the DWT can be done faster and more easily than the fast Fourier transform (FFT). There are many variants of the basic DWT, for example the maximal overlap discrete wavelet transform (MODWT), the discrete wavelet packet transform (DWPT) and the maximal overlap discrete wavelet packet transform (MODWPT). Like the Fourier transform, both the CWT and the DWT preserve the energy of the process: for the CWT it can be shown that

$$\int_{-\infty}^{\infty} x^2(t)\, dt = \frac{1}{C_\psi}\int_0^{\infty}\left[\int_{-\infty}^{\infty} T_x^2(a, b)\, db\right]\frac{da}{a^2} \quad (6.1.2)$$

where

$$C_\psi = \int_0^{\infty} \frac{|\Psi(\omega)|}{\omega}\, d\omega \quad (6.1.3)$$

and Ψ(ω) is the Fourier transform of ψ(·). Meanwhile, for the DWT, it can be shown that

$$\sum_{t=0}^{N-1} x_t^2 = \sum_{j=1}^{J-1}\sum_{n=0}^{N_j-1} W_{j,n}^2 + \sum_{n=0}^{N_j-1} V_n^2 \quad (6.1.4)$$

where $J = \log_2 N$ and $N_j = N/2^j$. The wavelet variance can be used to decompose the energy of a time series: it partitions the variance of the time series, analogously to the power spectrum based on the Fourier transform. Since the wavelet is localized in the scale-time domain, the evolution of a process can be analyzed, which is especially useful for nonstationary processes.

6.2 Continuous Wavelet Transforms

The mother wavelet, or analyzing wavelet, is a wavelet rescaled by a factor a and centred at time b. In mathematical form, it is written as

$$\psi_{(a,b)}(t) = \frac{1}{a}\,\psi\!\left(\frac{t-b}{a}\right) \quad (6.2.1)$$

The mother wavelet describes how weighted averages of certain other functions vary from one averaging period to the next [102]. The wavelet has the properties:

1. $\int_{-\infty}^{\infty} \psi(t)\, dt = 0$

2. $\int_{-\infty}^{\infty} \psi^2(t)\, dt = 1$

The continuous wavelet transform of signal x(t) by wavelet ψ is given as

$$T_x(a, b) = \left\langle x(t), \psi_{(a,b)}(t)\right\rangle = \frac{1}{a}\int_{-\infty}^{\infty} x(t)\, \psi^*\!\left(\frac{t-b}{a}\right) dt \quad (6.2.2)$$

The values $T_x(a, b)$, known as the wavelet coefficients, give information about the function (or signal) x(t) at scale a around point/time b. Hence, by using the wavelet transform, we obtain the time-scale information of a function or signal at time b and scale a. A wavelet is admissible if

$$C_\psi = \int_0^{\infty} \frac{|\Psi(\omega)|}{\omega}\, d\omega \quad (6.2.3)$$

satisfies $0 < C_\psi < \infty$, where Ψ(ω) is the Fourier transform of the wavelet function, ψ(t). This condition ensures that the function (or signal) can be reconstructed from its wavelet transform iff

$$\int_{-\infty}^{\infty} x^2(t)\, dt < \infty \quad (6.2.4)$$

From this condition, the inverse CWT is defined as

$$x(t) = \frac{1}{C_\psi}\int_0^{\infty}\left[\frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} T_x(a, b)\, \psi\!\left(\frac{b-t}{a}\right) db\right]\frac{da}{a^2} \quad (6.2.5)$$

As with Parseval's theorem for the Fourier transform, the CWT preserves the energy of the signal:

$$\int_{-\infty}^{\infty} x^2(t)\, dt = \frac{1}{C_\psi}\int_0^{\infty}\left[\int_{-\infty}^{\infty} T_x^2(a, b)\, db\right]\frac{da}{a^2} \quad (6.2.6)$$

Multiresolution analysis is an important feature of the wavelet transform. Even though it is described in a discrete context and usually carried out using the discrete wavelet transform, it can also be done with the CWT. For multiresolution analysis, the scaling coefficients, which carry the information about the function (or signal) x(t) at time b and scale a, can be obtained by

$$S_x(a, b) = \left\langle x(t), \varphi_{(a,b)}\right\rangle \quad (6.2.7)$$

where $\varphi_{(a,b)}$ is known as the scaling function or father wavelet. The relationship between the wavelet function and the scaling function in the Fourier (frequency) domain is given by

$$\tilde{\varphi}(\xi) = \int_a^{\infty} \tilde{\psi}(u\xi)\, \frac{du}{u} \quad (6.2.8)$$

There are various wavelet functions that can be used for the CWT, both complex- and real-valued. One of the simplest complex-valued mother wavelets is the Cauchy wavelet

$$\psi^C(x) = \frac{1}{2\pi}\,\frac{1}{(1 - ix)^2} \quad (6.2.9)$$

where the scaling function is

$$\varphi^C(x) = \frac{1}{2\pi}\,\frac{1}{1 - ix} \quad (6.2.10)$$

In the Fourier domain, the wavelet and scaling functions can be written as

$$\tilde{\psi}^C(\xi) = \xi\, e^{-\xi}\,\Theta(\xi) \quad (6.2.11)$$

and

$$\tilde{\varphi}^C(\xi) = e^{-\xi}\,\Theta(\xi) \quad (6.2.12)$$

respectively, where Θ is the step function. There are also higher-order Cauchy wavelets, with mother wavelet

$$\tilde{\psi}_N^C(\xi) = \frac{\xi^N}{(N-1)!}\, e^{-\xi}\,\Theta(\xi) \quad (6.2.13)$$

and scaling function

$$\tilde{\varphi}_N^C(\xi) = \left(1 + \xi + \frac{\xi^2}{2} + \dots + \frac{\xi^{N-1}}{(N-1)!}\right) e^{-\xi}\,\Theta(\xi) \quad (6.2.14)$$

The difference between the higher-order Cauchy wavelets lies in the number of vanishing moments: the higher the order of the Cauchy wavelet, the more oscillatory the wavelet becomes. Another commonly used mother wavelet is the Morlet wavelet, defined as

$$\psi(x) = \exp\!\left(-\frac{x^2}{2\sigma^2}\right)\exp\left(i\omega_0 x\right) \quad (6.2.15)$$

where σ and $\omega_0$ are parameters controlling the size of the wavelet envelope and the oscillations, respectively. The Morlet wavelet is a modulated Gaussian function. Since the integral of the Morlet wavelet is not zero, it does not strictly satisfy the admissibility requirement of a wavelet function. However, if $\omega_0$ is large (greater than 5), the integral is almost zero, so it can be treated as a wavelet.

Real-valued wavelets are usually used in the discrete wavelet transform (DWT), but they can also be used for the CWT. One example is the Laplacian of Gaussian (LOG) wavelet, defined as

$$\psi(x) = -\nabla^2 \exp\left(-\frac{x^2}{2}\right) = \left(1 - x^2\right)\exp\left(-\frac{x^2}{2}\right) \qquad (6.2.16)$$
where $\nabla^2$ is the Laplacian operator; another is the nth derivative of Gaussian (DOG) wavelet

$$\psi_n(x) = -\frac{d^n}{dx^n}\exp\left(-\frac{x^2}{2}\right) \qquad (6.2.17)$$
which has n vanishing moments. The wavelet coefficients are defined in the time-scale domain. However, by a simple manipulation they can be expressed in the time-frequency domain through the relationship
$$f_a = \frac{f_c}{a\cdot\Delta}\,, \qquad (6.2.18)$$
where $f_a$ is the frequency (in Hz) related to scale a, $f_c$ is the wavelet central frequency (in Hz), a is the scale and $\Delta$ is the signal sampling interval (in s).
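As a small illustration of Eq. (6.2.18), the following sketch converts a set of CWT scales to their associated frequencies. The central frequency value 0.9549 Hz (approximately $\omega_0/2\pi$ for a Morlet wavelet with $\omega_0 = 6$) and the sampling interval are assumed example values, not taken from the text.

```python
# Convert CWT scales to frequencies via f_a = f_c / (a * delta), Eq. (6.2.18).
f_c = 0.9549      # assumed wavelet central frequency in Hz (Morlet, omega_0 = 6)
delta = 0.01      # assumed sampling interval in seconds (100 Hz sampling)

scales = [1, 2, 4, 8, 16]
freqs = [f_c / (a * delta) for a in scales]
# Doubling the scale halves the associated frequency.
print(freqs)
```

The inverse relationship between scale and frequency is what allows a CWT computed on a scale grid to be relabelled as a time-frequency representation.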

6.3 Discrete Wavelet Transforms

The DWT of a signal of length N yields the DWT coefficients, consisting of wavelet coefficients $\{W_{j,k}\}$ and scaling coefficients $\{V_{j,k}\}$ at levels $j = 0, \dots, J-1$, where $J = \log_2(N)$, $k = 0, \dots, N_j - 1$ and $N_j = 2^j$. A simple and fast technique for performing the discrete wavelet transform, known as the DWT pyramid algorithm, was introduced by Mallat [87]. Using the DWT pyramid algorithm, the coefficients can be calculated by

$$W_{j,k} = \sum_{l=0}^{L-1} \psi_l\, V_{j+1,\,2k+1-l \bmod N_{j+1}}\,, \qquad (6.3.1)$$
$$V_{j,k} = \sum_{l=0}^{L-1} \varphi_l\, V_{j+1,\,2k+1-l \bmod N_{j+1}}\,, \qquad (6.3.2)$$
where $\psi$ is the wavelet filter, $\varphi$ is the scaling filter, L is the width of the filter and

$V_{J,k} = x_k$, with $\{x_k;\; k = 0, 1, 2, \dots, N-1\}$ the time series data. In this analysis, the

Haar wavelet is used, where
$$\psi^{\text{Haar}}_l = \begin{cases} -1/\sqrt{2} & l = 0 \\ \phantom{-}1/\sqrt{2} & l = 1 \end{cases}\,, \qquad (6.3.3)$$
$$\varphi^{\text{Haar}}_l = \begin{cases} 1/\sqrt{2} & l = 0 \\ 1/\sqrt{2} & l = 1 \end{cases}\,. \qquad (6.3.4)$$

Consider a time series $\{x_t;\; t = 1, 2, 3, \dots, N\}$. The DWT of $\{x_t\}$ can be represented by the sequence

$$x^{DWT} = \begin{bmatrix} \tilde x_1 & \tilde x_2 & \tilde x_3 & \tilde x_4 & \cdots & \tilde x_N \end{bmatrix}, \qquad (6.3.5)$$

where $\tilde x_1 = V_{0,0}$ is the vector of scaling coefficients at level 0, $\tilde x_2 = W_{0,0}$ is the vector of wavelet coefficients at level 0, $\tilde x_3 = W_{1,0}$ and $\tilde x_4 = W_{1,1}$ are the first and second wavelet coefficients at level 1, and $\tilde x_N = W_{J-1,\,N_{J-1}-1}$ is the final wavelet coefficient at level J − 1. An interesting feature of the wavelet transform is that the signal can be recovered from its wavelet and scaling coefficients by the inverse discrete wavelet transform (IDWT). This can be performed using Mallat's pyramid algorithm, defined as

$$V_{j,k} = \sum_{l=0}^{L-1} \psi_l\, W^{\uparrow}_{j-1,\,k+l \bmod N_j} + \sum_{l=0}^{L-1} \varphi_l\, V^{\uparrow}_{j-1,\,k+l \bmod N_j}\,, \qquad (6.3.6)$$
where
$$W^{\uparrow}_{j,k} = \begin{cases} 0 & k = 0, 2, \dots, N_{j+1} \\ W_{j,\frac{k-1}{2}} & k = 1, 3, \dots, N_{j+1}-1 \end{cases}\,, \qquad (6.3.7)$$
$$V^{\uparrow}_{j,k} = \begin{cases} 0 & k = 0, 2, \dots, N_{j+1} \\ V_{j,\frac{k-1}{2}} & k = 1, 3, \dots, N_{j+1}-1 \end{cases}\,. \qquad (6.3.8)$$

For example, consider a Gaussian random signal, $x_i \sim N(0,1)$, with signal length N = 8.

i    x_i
0   -0.9760792
1   -0.08558464
2    1.569871
3    2.144348
4    0.821988
5   -0.5261697
6   -1.323208
7    0.05187701

Table 6.3.1: Example: The time series

Figure 6.3.1: Example: Plot of the time series

Using the Haar wavelet, the signal or time series can be transformed into the jth level wavelet coefficients, $\{W_{j,k}\}$, and scaling coefficients, as shown in Table 6.3.2. The plot of the wavelet coefficients is shown in Fig. 6.3.2. The coefficient values and the plot were computed using the R wavethresh package, developed by G. P. Nason of the University of Bristol, UK [96].

Wavelet coefficients:
$W_{2,0} = \frac{1}{\sqrt2}(x_0 - x_1) = -0.6296747$
$W_{2,1} = \frac{1}{\sqrt2}(x_2 - x_3) = -0.4062168$
$W_{2,2} = \frac{1}{\sqrt2}(x_4 - x_5) = 0.9532914$
$W_{2,3} = \frac{1}{\sqrt2}(x_6 - x_7) = -0.9723321$
$W_{1,0} = \frac{1}{\sqrt2}(V_{2,0} - V_{2,1}) = -2.3879417$
$W_{1,1} = \frac{1}{\sqrt2}(V_{2,2} - V_{2,3}) = 0.7835747$
$W_{0,0} = \frac{1}{\sqrt2}(V_{1,0} - V_{1,1}) = 1.282716$

Scaling coefficients:
$V_{2,0} = \frac{1}{\sqrt2}(x_0 + x_1) = -0.7507097$
$V_{2,1} = \frac{1}{\sqrt2}(x_2 + x_3) = 2.6263498$
$V_{2,2} = \frac{1}{\sqrt2}(x_4 + x_5) = 0.2091751$
$V_{2,3} = \frac{1}{\sqrt2}(x_6 + x_7) = -0.8989669$
$V_{1,0} = \frac{1}{\sqrt2}(V_{2,0} + V_{2,1}) = 1.3262779$
$V_{1,1} = \frac{1}{\sqrt2}(V_{2,2} + V_{2,3}) = -0.4877565$
$V_{0,0} = \frac{1}{\sqrt2}(V_{1,0} + V_{1,1}) = 0.5929242$

Table 6.3.2: Example: jth level wavelet and scaling coefficients

Figure 6.3.2: Example: Plot of the DWT coefficients
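As a sketch (not the wavethresh implementation used in the thesis), the Haar pyramid recursion of Eqs. (6.3.1)-(6.3.4) can be coded directly; applied to the example series of Table 6.3.1, it reproduces the coefficients in Table 6.3.2.

```python
import numpy as np

def haar_dwt_levels(x):
    """Haar DWT pyramid: returns wavelet {W_j} and scaling {V_j} coefficients
    for levels j = J-1 down to 0, where J = log2(len(x))."""
    v = np.asarray(x, dtype=float)
    W, V = {}, {}
    j = int(np.log2(len(v)))
    while len(v) > 1:
        j -= 1
        # W_{j,k} = (V_{j+1,2k} - V_{j+1,2k+1}) / sqrt(2)  (Haar wavelet filter)
        W[j] = (v[0::2] - v[1::2]) / np.sqrt(2)
        # V_{j,k} = (V_{j+1,2k} + V_{j+1,2k+1}) / sqrt(2)  (Haar scaling filter)
        v = (v[0::2] + v[1::2]) / np.sqrt(2)
        V[j] = v
    return W, V

x = [-0.9760792, -0.08558464, 1.569871, 2.144348,
     0.821988, -0.5261697, -1.323208, 0.05187701]
W, V = haar_dwt_levels(x)
```

Because the Haar filters are orthonormal, the transform preserves the signal's energy and can be inverted exactly by the IDWT recursion of Eq. (6.3.6).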

From the wavelet coefficients and their scales, we can tell whether the time series is changing rapidly (if the level 2 wavelet coefficients have significant values) or slowly (if the level 0 wavelet coefficients have significant values). Rapid changes can be regarded as high frequency content and slower changes as low frequency content. For this time series, the most significant wavelet coefficient is the first level 1 wavelet coefficient, which tells us how much the time series changes during the first 4 data points. Meanwhile, looking within each level, for level 2 the last two wavelet coefficients are larger in magnitude than the first two level 2 wavelet coefficients. We can also look specifically in terms of frequency bands. The wavelet filter and the scaling filter are associated with high pass and low pass filters, respectively. Each DWT decomposition step halves the time resolution but doubles the frequency resolution through a procedure known as subband coding, in which the frequency band of the signal spans only half the previous frequency band. This procedure is repeated at each level.

j    frequency range
0    [1/16, 1/8] = [0.0625, 0.125] Hz
1    [1/8, 1/4] = [0.125, 0.25] Hz
2    [1/4, 1/2] = [0.25, 0.5] Hz

Table 6.3.3: Frequency range for each level of wavelet coefficients

In other words, if the signal is sampled at $1/\Delta$ Hz, then the $N_j = 2^j$ jth level wavelet coefficients represent the signal in the frequency range $\left[\frac{1/\Delta}{2^{J-j+1}}, \frac{1/\Delta}{2^{J-j}}\right]$. For this example, Table 6.3.3 shows the frequency range for each level of wavelet coefficients. The table shows that level 2 represents the signal in the highest frequency range, while level 0 covers the lowest frequency range.

6.4 Linear System Analysis Using Wavelets

Here we discuss wavelet techniques for the identification of linear dynamic systems. The first technique uses wavelets to estimate the response amplitude operator of a linear dynamic system. This method is based on the estimation technique for the impulse response function (IRF) proposed by Robertson et al. [109], which works from the DWT of the input and output signals of the linear dynamic system. Given the ability of wavelets to denoise signals, we have also proposed a technique that can be used for system identification of linear dynamic systems when either the input or the output signal is corrupted with noise.

6.4.1 Wavelet Response Amplitude Operator Estimation

Following Newland [100],

$$y_n = \Delta\, h^{DWT} \cdot u^{DWT}, \qquad (6.4.1)$$
where
$$h^{DWT} = \begin{bmatrix} \tilde h_1 & \tilde h_2 & \tilde h_3 & \tilde h_4 & \cdots & \tilde h_n \end{bmatrix},$$
$$u^{DWT} = \begin{bmatrix} \tilde u_1 \\ \tilde u_2 \\ \tilde u_3/2 \\ \tilde u_4/2 \\ \tilde u_5/4 \\ \vdots \\ \tilde u_n/2^{J-1} \end{bmatrix}.$$
Here $h^{DWT}$ is the DWT of the IRF, $u^{DWT}$ is the DWT of the input time series

$\{u_{n-\theta},\; 1 \le \theta \le n\}$, $y_n$ is the response at time n and $\Delta$ is the sampling interval. For N − n + 1 responses, Eq. (6.4.1) can be written in matrix form

$$Y = \Delta\, h^{DWT} \cdot U^{DWT}, \qquad (6.4.2)$$
where

$$Y = \begin{bmatrix} y_n & y_{n+1} & \cdots & y_N \end{bmatrix},$$
$$h^{DWT} = \begin{bmatrix} \tilde h_1 & \tilde h_2 & \cdots & \tilde h_n \end{bmatrix},$$
$$U^{DWT} = \begin{bmatrix} u_n^{DWT} & u_{n+1}^{DWT} & \cdots & u_N^{DWT} \end{bmatrix},$$

and $u_i^{DWT}$ is the DWT of $\{u_{i-\theta},\; 1 \le \theta \le n\}$. If the number of inputs is N = 64 and n = 16, the number of responses that can be calculated by Eq. (6.4.2) is

N − n + 1 = 49. Thus Eq. (6.4.2) can be rewritten as
$$\begin{bmatrix} y_{16} & y_{17} & \cdots & y_{64} \end{bmatrix} = \Delta \begin{bmatrix} \tilde h_1 & \tilde h_2 & \cdots & \tilde h_{16} \end{bmatrix} \times \begin{bmatrix} \tilde u_1^{(16)} & \tilde u_1^{(17)} & \cdots & \tilde u_1^{(64)} \\ \tilde u_2^{(16)} & \tilde u_2^{(17)} & \cdots & \tilde u_2^{(64)} \\ \tilde u_3^{(16)}/2 & \tilde u_3^{(17)}/2 & \cdots & \tilde u_3^{(64)}/2 \\ \tilde u_4^{(16)}/2 & \tilde u_4^{(17)}/2 & \cdots & \tilde u_4^{(64)}/2 \\ \tilde u_5^{(16)}/4 & \tilde u_5^{(17)}/4 & \cdots & \tilde u_5^{(64)}/4 \\ \vdots & \vdots & \ddots & \vdots \\ \tilde u_{16}^{(16)}/8 & \tilde u_{16}^{(17)}/8 & \cdots & \tilde u_{16}^{(64)}/8 \end{bmatrix}. \qquad (6.4.3)$$
Robertson et al. [109] proposed a method to estimate the IRF when the inputs and responses of a linear dynamic system are known, using the above relation. From Eq. (6.4.2), the DWT of the IRF can be estimated by

$$h^{DWT} = \frac{1}{\Delta}\, Y \cdot \left(U^{DWT}\right)^T \cdot \left(U^{DWT} \left(U^{DWT}\right)^T\right)^{-1}. \qquad (6.4.4)$$
Thus the IRF can be found by the inverse DWT of the terms $h^{DWT}/2^J$ — i.e.

$$\{h_i,\; i = 1, 2, \dots, n\} = \text{IDWT}\left(h^{DWT}/2^J\right), \qquad (6.4.5)$$
where $J = \log_2 n$. However, this operation cannot be directly implemented with the wavelet packages in MATLAB or in R software [107]. Some alterations should be made to Eqs. (6.4.1)-(6.4.5) so that this wavelet estimation of the RAO can be used without producing any error. From Eq. (6.4.1), the factor $1/2^j$ is removed from the DWT coefficients of the input time series, so that

$$\left(u^{DWT}\right)^T = \begin{bmatrix} \tilde u_1 & \tilde u_2 & \tilde u_3 & \tilde u_4 & \cdots & \tilde u_n \end{bmatrix}. \qquad (6.4.6)$$

Thus the matrix operation shown in Eq. (6.4.3) becomes
$$\begin{bmatrix} y_{16} & y_{17} & \cdots & y_{64} \end{bmatrix} = \Delta \begin{bmatrix} \tilde h_1 & \tilde h_2 & \cdots & \tilde h_{16} \end{bmatrix} \times \begin{bmatrix} \tilde u_1^{(16)} & \tilde u_1^{(17)} & \cdots & \tilde u_1^{(64)} \\ \tilde u_2^{(16)} & \tilde u_2^{(17)} & \cdots & \tilde u_2^{(64)} \\ \tilde u_3^{(16)} & \tilde u_3^{(17)} & \cdots & \tilde u_3^{(64)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde u_{16}^{(16)} & \tilde u_{16}^{(17)} & \cdots & \tilde u_{16}^{(64)} \end{bmatrix}. \qquad (6.4.7)$$

It is notable that the first n − 1 response values are lost in the previous matrix operation. However, this can be overcome by introducing n − 1 zeroes before the first input value, so that the input time series record length becomes N + n − 1,

where the first n − 1 values are zero — i.e. $\{u^{new}_1 = 0,\; u^{new}_2 = 0,\; \dots,\; u^{new}_{n-1} = 0,\; u^{new}_n = u_1,\; u^{new}_{n+1} = u_2,\; \dots,\; u^{new}_{N+n-1} = u_N\}$. This is justified because there is essentially a zero signal prior to time 0. Thus from Eq. (6.4.2), for N responses

$$Y = \Delta\, h^{DWT} \cdot \hat U^{DWT}, \qquad (6.4.8)$$
where

$$Y = \begin{bmatrix} y_1 & y_2 & \cdots & y_N \end{bmatrix},$$
$$h^{DWT} = \begin{bmatrix} \tilde h_1 & \tilde h_2 & \cdots & \tilde h_n \end{bmatrix},$$
$$\hat U^{DWT} = \begin{bmatrix} \hat u_n^{DWT} & \hat u_{n+1}^{DWT} & \cdots & \hat u_{N+n-1}^{DWT} \end{bmatrix},$$

and $\hat u_i^{DWT}$ is the DWT of $\{u^{new}_{i-\theta},\; 1 \le \theta \le n\}$. Consequently, Eq. (6.4.7) can be rewritten as

$$\begin{bmatrix} y_1 & y_2 & \cdots & y_{64} \end{bmatrix} = \Delta \begin{bmatrix} \tilde h_1 & \tilde h_2 & \cdots & \tilde h_{16} \end{bmatrix} \times \begin{bmatrix} \tilde u_1^{(16)} & \tilde u_1^{(17)} & \cdots & \tilde u_1^{(79)} \\ \tilde u_2^{(16)} & \tilde u_2^{(17)} & \cdots & \tilde u_2^{(79)} \\ \tilde u_3^{(16)} & \tilde u_3^{(17)} & \cdots & \tilde u_3^{(79)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde u_{16}^{(16)} & \tilde u_{16}^{(17)} & \cdots & \tilde u_{16}^{(79)} \end{bmatrix}. \qquad (6.4.9)$$

To find the DWT of the IRF, we use the same step as in Eq. (6.4.4). However, to find the IRF we no longer need to divide the DWT of the IRF by $2^J$ as in Eq. (6.4.5) — i.e. we use
$$\{h_i,\; i = 1, 2, \dots, n\} = \text{IDWT}\left(h^{DWT}\right), \qquad (6.4.10)$$
where $J = \log_2 n$. From the discrete Fourier transform, the wavelet-estimated IRF gives the wavelet-estimated RAO as

$$\hat G_{w}(\omega_t) = \Delta\, |H(\omega_t)| = \Delta \left| \sum_{k=1}^{n} h_k\, e^{-i 2\pi \frac{tk}{n}} \right|, \qquad (6.4.11)$$

where $H(\omega_t)$ is the discrete Fourier transform of the wavelet-estimated IRF, with $\omega_t = t/(n\Delta)$ and $t = 1, \dots, n$. However, wavelet analysis can also offer an alternative RAO estimator if there is noise on the input time series only, or if there is noise on both the input and response time series. If the noise is intermittent, wavelet analysis can be used to select noise-free sub-series, from which the RAO can be estimated by spectral methods. If the noise is stationary and independent, and the input signal is of relatively low frequency, the standard deviation of the noise $\hat\sigma_w$ can be estimated from the median absolute deviation (MAD) of the finest scale wavelet coefficients [36]. In this context, the MAD (computed in R with the mad() function [97]) is the median of the absolute values of deviations from the median [107]. The combined wavelet-spectral estimate of the RAO is given by
$$\hat G_{ws}(\omega) = \sqrt{\frac{C_{yy}}{C_{\hat u \hat u} - C_{ww}}}\,, \qquad (6.4.12)$$
where $C_{\hat u \hat u}$ is the estimated spectrum of the noisy measurement of the input and

Cww is the spectrum of the noise.
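The wavelet-domain least squares estimate of Eqs. (6.4.4)-(6.4.10) can be sketched as follows. This is an illustrative reconstruction, not the thesis code: it uses an orthonormal Haar pyramid (the rescaled coefficients of Eq. (6.4.6)), a synthetic IRF and a random input, and recovers the IRF by inverting the DWT.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_dwt(x):
    # Orthonormal Haar DWT (rescaled as in Eq. (6.4.6), so no 1/2^j factors).
    v = np.asarray(x, dtype=float)
    out = []
    while len(v) > 1:
        w = (v[0::2] - v[1::2]) / np.sqrt(2)
        v = (v[0::2] + v[1::2]) / np.sqrt(2)
        out = list(w) + out
    return np.array(list(v) + out)

n, N, dt = 16, 64, 0.1
h_true = np.exp(-0.3 * np.arange(n)) * np.sin(0.8 * np.arange(n))  # synthetic IRF
u = rng.standard_normal(N)

# Pad n-1 leading zeros so all N responses are retained (Section 6.4.1).
u_new = np.concatenate([np.zeros(n - 1), u])
# Columns are the reversed lagged input windows {u_{i-theta}, 1 <= theta <= n}.
windows = np.column_stack([u_new[i:i + n][::-1] for i in range(N)])
# Responses by discrete convolution: y = dt * h . (lagged input window).
y = dt * h_true @ windows

# DWT of each input window: the matrix U^DWT of Eq. (6.4.8).
U = np.column_stack([haar_dwt(w) for w in windows.T])
# Least squares estimate, Eq. (6.4.4): h^DWT = (1/dt) Y U^T (U U^T)^{-1}.
h_dwt = np.linalg.solve(U @ U.T, U @ y) / dt
# Invert the orthonormal DWT (Eq. (6.4.10)): build Q column-wise, h = Q^T h^DWT.
Q = np.column_stack([haar_dwt(e) for e in np.eye(n)])
h_est = Q.T @ h_dwt
G_w = dt * np.abs(np.fft.fft(h_est))   # wavelet-estimated RAO, cf. Eq. (6.4.11)
```

Because the rescaled Haar DWT is orthogonal, this estimate coincides with the time-domain least squares solution; the wavelet domain becomes advantageous when coefficients are thresholded to suppress noise.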

6.5 Nonlinear system identification by Wavelet Ridge

The backbone curve is the plot of the system's response amplitude against the system's natural frequency, and it can reveal details of the system's nonlinear behaviour [119]. The backbone curve can also be used for estimating the model parameters of the system [117, 119, 84]. Previously, the analytic signal computed using the Hilbert transform has been used to estimate the instantaneous modal parameters of time-varying dynamic systems [38]. However, this technique is limited to asymptotic signals, for which the signal's amplitude varies slowly compared with the signal's phase [119]. Spina et al. used both the Gabor and Hilbert transforms of the dynamic system transient response for the detection of nonlinearity [117]. However, both of these methods are unsuitable for multi degree of freedom (MDOF) systems [119].

Carmona et al. introduced a technique to detect ridges in the modulus of the wavelet transform. A wavelet ridge localizes the signal in the time-frequency domain, which is important for nonlinearity detection and useful in analyzing noisy signals [22, 23]. Staszewski showed that wavelet ridges and skeletons can be used to estimate the damping ratio of linear multi degree of freedom systems [118]. He then applied this technique to the identification of nonlinear MDOF systems [119, 120]. Londoño et al. [84] presented a new technique to identify the backbone curves of nonlinear systems, which is useful for investigating the nonlinear behaviour of structures.

6.5.1 Instantaneous Modal Parameters

Feldman [38] has shown that, following analytic signal theory, the IRF of an oscillating system, y(t), can be converted to its analytic signal form using the Hilbert transform. From the analytic signal, the instantaneous modal parameters, namely the instantaneous envelope and instantaneous phase, can be extracted. For a linear system, the instantaneous natural frequency and instantaneous damping coefficient are constant over time. However, if the system has nonlinear stiffness, the natural frequency will vary over time, since it depends on the amplitude of vibration. Using the backbone curve, which is the plot of the signal's instantaneous envelope against the instantaneous frequency, the nonlinearity can be identified. It has been shown that every typical nonlinearity in the spring has its own unique backbone curve [38]. Assume a general force-free SDOF nonlinear system, given as

$$\ddot y(t) + D(\dot y) + S(y) = 0, \qquad (6.5.1)$$
where $D(\dot y)$ and $S(y)$ are the dissipative and restoring force functions, respectively. The instantaneous envelope, A(t), and instantaneous frequency, $\omega(t)$, of some typical nonlinear oscillators can be approximated using the Krylov-Bogoliubov method [98]. Spina et al. [117] showed that each such SDOF nonlinear oscillator has its own characteristic instantaneous envelope and instantaneous frequency, as listed in Table 6.5.1. Therefore, using information on the system's instantaneous envelope or instantaneous frequency, we can identify the system's dissipative

Table 6.5.1: Approximated functions for instantaneous frequency and envelope of SDOF oscillators with nonlinear damping and stiffness [117]

Linear damping: $D(\dot y) = c\dot y$; $\quad A(t) = A_0 \exp\left(-\frac{c}{2m}t\right)$

Square damping: $D(\dot y) = c\dot y|\dot y|$; $\quad A(t) = A_0\left(1 + \frac{4cA_0}{3\pi m}\sqrt{\frac{k}{m}}\,t\right)^{-1}$

Cubic damping: $D(\dot y) = c\dot y^3$; $\quad A(t) = \left(A_0^{-2} + \frac{3c}{4m}\sqrt{\frac{k}{m}}\,t\right)^{-1/2}$

Dry friction: $D(\dot y) = c\,\mathrm{sgn}(\dot y)$; $\quad A(t) = A_0 - \frac{2c}{\pi m}\sqrt{\frac{k}{m}}\,t$

Cubic stiffness: $S(y) = ky + k_3y^3$; $\quad \omega(t) = \frac{1}{2\pi}\sqrt{\frac{k}{m}\left(1 + \frac{3k_3A^2(t)}{8k}\right)}$

Clearance stiffness: $S(y) = \begin{cases} k(y+\Gamma) & y < -\Gamma \\ 0 & |y| < \Gamma \\ k(y-\Gamma) & y > \Gamma \end{cases}$; $\quad \omega(t) = \frac{1}{\pi^2}\sqrt{\frac{k}{m}}\,\arccos\!\left(\frac{\Gamma}{A(t)}\right)$

Preload stiffness: $S(y) = \begin{cases} ky+P & y < 0 \\ 0 & y = 0 \\ ky-P & y > 0 \end{cases}$; $\quad \omega(t) = \frac{1}{2\pi}\sqrt{\frac{k}{m}\left(1 + \frac{2P}{\pi A(t)}\right)}$

or restoring force function, respectively, and hence determine the type of nonlinearity of the oscillator.

6.5.2 Wavelet Ridge

Let the signal, x(t), be written in the form of

$$x(t) = A(t)\cos\phi(t), \qquad (6.5.2)$$
where A(t) and $\phi(t)$ are the amplitude and instantaneous phase, respectively. It is convenient to use progressive wavelets, i.e. wavelets with vanishing negative frequencies. The wavelet coefficients of x(t) are given by
$$T_x(a,b) = \left\langle x, \psi_{(a,b)}\right\rangle = \frac{1}{2}\left\langle Z_x, \psi_{(a,b)}\right\rangle, \qquad (6.5.3)$$
where
$$Z_x(t) = \frac{1}{\pi}\, PV\!\int \frac{x(t+\eta)}{\eta}\, d\eta, \qquad (6.5.4)$$
is the analytic signal of x(t) and PV denotes the principal value integral. If the amplitude, A(t), and the instantaneous frequency, $\dot\phi(t) = \frac{d}{dt}\phi(t)$, vary slowly, then

$$Z_x(t) \approx A(t)\, e^{i\phi(t)}. \qquad (6.5.5)$$

Assume that the Fourier transform of the wavelet function, $\Psi(\xi)$, is peaked near a frequency $\xi = \omega_0$; then the CWT may be approximated as
$$T_x(a,b) \approx \frac{1}{2} A(b)\, e^{i\phi(b)}\, \Psi^*\!\left(a\dot\phi(b)\right) + O\!\left(|\dot A|/|A|,\; |\ddot\phi|/|\dot\phi|\right). \qquad (6.5.6)$$

The modulus of the wavelet coefficients, $|T_x|$, is essentially maximum in the neighbourhood of a curve
$$a = r(b), \qquad (6.5.7)$$
known as the wavelet ridge. Since the modulus of this approximation can be written as
$$|T_x(a,b)| \approx \frac{1}{2} A(b) \left|\Psi^*\!\left(a\dot\phi(b)\right)\right|, \qquad (6.5.8)$$
we can find the envelope of the signal from its wavelet skeleton. The wavelet skeleton is defined as the wavelet transform of the signal restricted to its wavelet ridges, and it behaves as $A(t)\exp[i\phi(t)]$ [123, 22, 119]. The relationship between the wavelet ridge and the instantaneous frequency of the signal is given by
$$r(b) = \frac{\dot\phi_\psi(0)}{\dot\phi(b)}, \qquad (6.5.9)$$

where $\phi_\psi$ is the instantaneous phase of the wavelet. Based on this, the simplest method of obtaining the wavelet ridge is to identify the global maximum of the wavelet transform modulus. If there is more than one ridge, we can search for local maxima of the wavelet transform modulus. However, it is problematic to identify the local maxima that represent the true wavelet ridges if there is noise in the signal or if the frequencies of the ridges are close to one another. For a signal with relatively low noise, we can use a differentiation technique to identify the wavelet ridge. Using Newton's method, for each time b we can find the ridge r(b) by solving

$$\frac{\partial}{\partial a}\left|T_x(a,b)\right| = 0, \qquad (6.5.10)$$
where r(b − 1) is taken as the initial guess for the ridge at time b, $r_0(b)$. The drawbacks of this technique are that it is unsuitable when the noise is significant, and that the identified ridge may not be a smooth function. We can reconstruct the original signal (in the time domain) using the wavelet skeleton [23]. The reconstruction of the original signal is done by computing

$$x_{rec}(t) = T_x\left(t, r(t)\right), \qquad (6.5.11)$$
for each ridge component and then summing all these terms. Even though this technique is simple, it requires the wavelet transform along the ridge, which can be computationally expensive when there are many ridges. However, it is still much more efficient than reconstructing the original signal from all the original wavelet coefficients. Let the IRF of the oscillating system, y(t), be written as

$$y(t) = A(t)\exp\left(i\phi(t)\right). \qquad (6.5.12)$$

From the IRF, the wavelet ridges can be identified from its wavelet transform. The instantaneous envelope and the instantaneous frequency of the oscillating system can then be computed from the wavelet skeleton and the wavelet ridge, respectively. Following Londoño et al. [84], the damping of the system can be estimated from the slope of the semi-logarithmic plot of the instantaneous envelope against time. The plot of the instantaneous frequency against the instantaneous envelope produces the system's backbone curve. The system's nonlinearity can then be identified and categorized using the estimated backbone curve and the estimated damping ratios [119, 84]. Given that we know the type of system, we can use these instantaneous characteristics to estimate the system parameter coefficients by fitting them to the appropriate function in Table 6.5.1.
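A minimal sketch of ridge extraction by the global-maximum method follows. A Morlet CWT (implemented directly by correlation, with assumed parameters $\omega_0 = 6$ and a geometric grid of scales, none of which come from the thesis) is applied to a chirp, and the ridge scale at each time is converted to an instantaneous frequency via the scale-frequency relation of Eq. (6.2.18).

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
x = np.cos(2 * np.pi * (2 * t + 0.2 * t ** 2))   # chirp: f(t) = 2 + 0.4 t Hz

omega0 = 6.0                   # Morlet parameter (> 5, so nearly admissible)
f_c = omega0 / (2 * np.pi)     # wavelet central frequency in Hz at scale a = 1

def morlet(u):
    return np.pi ** -0.25 * np.exp(1j * omega0 * u) * np.exp(-u ** 2 / 2)

freqs = np.geomspace(1.0, 10.0, 80)   # candidate frequencies (assumed grid)
scales = f_c / freqs                  # scale associated with each frequency

T = np.empty((len(scales), len(t)), dtype=complex)
for i, a in enumerate(scales):
    M = int(4 * a / dt)                       # support of the scaled wavelet
    s = np.arange(-M, M + 1) * dt
    kernel = np.conj(morlet(s / a)) / np.sqrt(a)
    # T(a, b) = integral x(t) psi*((t - b)/a) dt / sqrt(a), by correlation
    T[i] = np.convolve(x, kernel[::-1], mode="same") * dt

ridge = np.argmax(np.abs(T), axis=0)   # global maximum over scales, each time b
inst_freq = freqs[ridge]               # instantaneous frequency estimate
```

The envelope estimate (the wavelet skeleton) is then $|T|$ evaluated along the ridge, up to the factor $\frac{1}{2}|\Psi^*|$ of Eq. (6.5.8).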

6.6 Summary

• The wavelet-estimated RAO can be used to analyze linear systems. In this research, we compared the performance of the wavelet RAO estimator with spectral analysis methods. Although the RAO estimated by the spectral technique works better overall, the wavelet-estimated RAO can handle shorter signal time series, whereas the spectral RAO needs longer time series data [9].

• The wavelet ridge can be used to identify nonlinearities in oscillating systems. It can also be used to categorize the type of nonlinearity of the system. From the wavelet ridge and the wavelet skeleton, the dynamic system's instantaneous envelope and instantaneous frequency can be estimated. Based on these estimates, the parameters of the nonlinear system, such as the damping ratio and the nonlinear coefficients, can be approximated. In our study, the wavelet ridge worked quite well for system identification of weakly nonlinear oscillators, namely the Duffing and Van der Pol oscillators, given that the impulse response is known. As an application, the technique was also applied to the model of the oscillating flap wave energy converter [8].

Chapter 7

Synthesis

7.1 Comparison of Spectral and Wavelet Estimators of Transfer Function for Linear Systems

Paper highlights:

• A wavelet technique for estimating the impulse response function is used to estimate the response amplitude operator (RAO) of a linear dynamic system. The wavelet estimates of the RAO are compared with the spectral estimates of the RAO.

• The heaving buoy wave energy device (HBWED) was selected as the case study. The HBWED is modelled as a linear system with a single mode of vibration.

The paper Comparison of Spectral and Wavelet Estimators of Transfer Function for Linear Systems [9] discusses and compares the estimation of the RAO of a linear dynamic system using wavelet and Fourier transforms. The two estimates are compared for various input signals and added noise scenarios. The comparison uses a simple wave energy converter, the HBWED, which is modelled as a linear dynamic system.


Three types of spectral estimates of the transfer function are discussed in this paper, each reacting differently to different noise scenarios in the signals. The wavelet estimator proves ineffective when there is noise on the input signal. A combined wavelet and spectral method is introduced as an alternative RAO estimator when both the input and the output signals are corrupted by noise. The paper also shows that for the case of a chirp input, where the signal has a continuously varying frequency, the wavelet estimate of the RAO performs better than the spectral estimates.

East Asian Journal on Applied Mathematics, Vol. 2, No. 3, pp. 214-237, doi: 10.4208/eajam.170512.270712a, August 2012

Comparison of Spectral and Wavelet Estimators of Transfer Function for Linear Systems

M. A. A. Bakar 1,2,∗, D. A. Green 2 and A. V. Metcalfe 2
1 School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, UKM Bangi, Selangor, Malaysia.
2 School of Mathematical Sciences, University of Adelaide, South Australia, Australia.
Received 17 May 2012; Accepted (in revised version) 27 July 2012; Available online 23 August 2012

Abstract. We compare spectral and wavelet estimators of the response amplitude op- erator (RAO) of a linear system, with various input signals and added noise scenarios. The comparison is based on a model of a heaving buoy wave energy device (HBWED), which oscillates vertically as a single mode of vibration linear system. HBWEDs and other single degree of freedom wave energy devices such as oscillating wave surge con- vertors (OWSC) are currently deployed in the ocean, making such devices important systems to both model and analyse in some detail. The results of the comparison re- late to any linear system. It was found that the wavelet estimator of the RAO offers no advantage over the spectral estimators if both input and response time series data are noise free and long time series are available. If there is noise on only the response time series, only the wavelet estimator or the spectral estimator that uses the cross-spectrum of the input and response signals in the numerator should be used. For the case of noise on only the input time series, only the spectral estimator that uses the cross-spectrum in the denominator gives a sensible estimate of the RAO. If both the input and response signals are corrupted with noise, a modification to both the input and response spec- trum estimates can provide a good estimator of the RAO. A combination of wavelet and spectral methods is introduced as an alternative RAO estimator. The conclusions apply for autoregressive emulators of sea surface elevation, impulse, and pseudorandom bi- nary sequences (PRBS) inputs. However, a wavelet estimator is needed in the special case of a chirp input where the signal has a continuously varying frequency.

AMS subject classifications: 65M10, 78A48

Key words: Heaving buoy, spectral analysis, discrete wavelet transform, impulse response, response amplitude operator.

∗Corresponding author. Email address: (M. A. A. Bakar)
http://www.global-sci.org/eajam    © 2012 Global-Science Press

1. Introduction

Several studies have promoted the wavelet transform (WT) as an alternative to spectral analysis (SA) for estimating the response amplitude operator (RAO) of linear systems [15, 29, 30 ]. The study of these systems is important because all structures are sensitive to vibration and some exploit this vibration. A stable linear system will respond to a stationary sinusoidal input at some specific frequency by vibrating at that frequency. However, the amplitude of the response relative to the amplitude of the input, known as the RAO or gain, depends on the frequency. There is also a phase shift, which depends on the frequency. Given this characterisation of a linear system, it is often more convenient to study the linear system in the frequency domain instead of in the time domain. Wavelet transforms have a potential advantage of displaying frequency composition over time. In contrast, the definition of a population spectrum as the Fourier transform of the autocovariance function is based on the assumption of a stationary random process. The use of spectral analysis to estimate the RAO is justified for a general input rather than a stationary input, if the spectrum is considered as a sample estimate of a Fourier transform. A blow from an impact hammer, an accessory for spectrum analysers commonly used in model testing for lightweight structures, is a good example of a non-stationary input. Since spectral estimation of the RAO is not limited to a stationary input, it follows that the WT may not necessarily offer any advantage over the SA. In this paper, we compare WT methods with SA methods for estimating the RAO of a linear system, with different classes of input signals and with different distributions of noise corrupting either input or response signals. The comparison is set in the context of a wave tank model of a heaving buoy wave energy device (HBWED). 
The reasons for this choice are that wave energy devices generally are receiving renewed attention as the need for renewable energy resources becomes increasingly apparent, and their response to the random wave environment is crucial for design. Specifically, the HBWED can plausibly be modelled as a single mode of the vibration system, and this allows a straightforward comparison of WT and SA estimation of the RAO. The oscillating wave surge converters (OWSC) are single degree of freedom devices — but the OWSC oscillates horizontally in surge instead of oscillating vertically in heave as the HBWED, and is nonlinear [32]. Spectral analysis has been used in the study of dynamic systems for many decades (e.g. see [14]), and the spectrum analyser has been standard equipment in test laboratories since the 1960s [13]. In contrast, although the Haar sequence was proposed in 1909 [10], the mathematical generalisation and the use of WT for the analysis of dynamic systems is still in the development stage [15, 21, 24]. Since a wavelet is localised in the time-scale domain, certain information can be accessed directly and immediately from the wavelet representation of a time series. This multiscale feature of wavelet transforms can be used to validate a dynamic model from a continuous wavelet transform of the process observations and model time series data [21]. Wavelets enable the detection of even very weak signals by using local amplification and compression, which has been advantageous in analysing dynamic systems. By using a wavelet transform, the random property of a chaotic response can also be observed

— even for very short time series data such as in [33 ], where the authors have applied wavelet transform techniques to analyse the nonlinear dynamic system of ship roll and heave-roll coupling. Pernot & Lamarque [26 ] have computed the transient responses of parametrically excited dynamic systems by using wavelet transforms and also used them for a stability analysis of linear systems. Gouttebroze [9] used a wavelet identification technique to identify the characteristics of structural systems, by analysing the amplitude and phase of a wavelet transform for vibration data. Other applications of wavelets include solving differential equations, turbulence analysis, image processing and signal processing [11 ].

2. Case Study

A heaving buoy wave energy device (HBWED) has been selected as the case study for the comparison. The HBWED is a deep water wave energy device that has reached the stage of commercial development. An HBWED called PowerBuoy (cf. Fig. 1) developed by Ocean Power Technologies Inc. has been deployed at the northeast coast of Scotland, and is soon to be deployed along the southwest coast of Victoria, Australia [25 ]. The study of the dynamic system that models the HBWED provides a basis for an investigation of issues affecting wave energy development, such as efficiency and engineering design. For example, Masubuchi et al. [19 ] have analysed the frequency response of an ocean wave energy device with two floating bodies. The dynamic behaviour of the system can be understood from that study, and hence the energy absorption from ocean waves can be optimised.

2.1. Heaving buoy with one degree of freedom

A heaving buoy wave energy device such as in Fig. 1 is constrained to move in the vertical (heave) direction only, so the motion of this device can be modelled as a linear dynamic system with a single mode of vibration [28]. A mathematical description for this device is a second order differential equation of the form

$$\ddot y + 2\zeta\omega_n \dot y + \omega_n^2 y = u, \qquad (2.1)$$

where $\zeta$ is the damping factor, $\omega_n$ is the undamped natural frequency and $\omega_n\sqrt{1-\zeta^2}$ is the damped natural frequency of the system. In this investigation, we adopt $\zeta = 0.2$ and $\omega_n = 0.05$ as realistic values for a wave tank model of a HBWED. Thus Eq. (2.1) can be written as

$$\ddot y + 0.02\,\dot y + 0.0025\, y = u. \qquad (2.2)$$

The solution can be expressed as a convolution integral

$$y(t) = \int_{-\infty}^{t} h(t-\tau)\, u(\tau)\, d\tau, \qquad (2.3)$$

[Fig. 2: Theoretical IRF and estimated IRFs from 1000 s and 4000 s records; h(t) plotted against time (sec).]

where h(t) is the impulse response function (IRF) of the linear system. The response to a unit impulse for the system given in Eq. (2.1) is
$$h(t) = \frac{1}{\omega_n\sqrt{1-\zeta^2}}\, e^{-\zeta\omega_n t}\, \sin\left(\sqrt{1-\zeta^2}\,\omega_n t\right). \qquad (2.4)$$
For our particular case, Eq. (2.2), the impulse response function (IRF) is given by

$$h(t) = \frac{1}{0.05\sqrt{0.96}}\, e^{-0.01t}\, \sin\left(0.05\sqrt{0.96}\, t\right), \qquad (2.5)$$
which is plotted in Fig. 2. The undamped natural frequency of the system described by Eq. (2.2) is $\omega_n = 0.05$ rad/sec and the damped natural frequency is $\omega_n\sqrt{1-\zeta^2} \approx 0.049$ rad/sec. The Fourier transform of the convolution product in Eq. (2.3), used below, is
$$Y(\omega) = H(\omega)\cdot U(\omega), \qquad (2.6)$$
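As a quick numerical check on Eq. (2.5) (an illustrative sketch, not from the paper), the IRF can be evaluated on a time grid; its first and largest peak should occur where $\tan(\omega_d t) = \omega_d/(\zeta\omega_n)$, at about t ≈ 28 s for these parameter values.

```python
import numpy as np

zeta, omega_n = 0.2, 0.05
omega_d = omega_n * np.sqrt(1 - zeta ** 2)   # damped natural frequency, ~0.049 rad/s

t = np.arange(0, 400, 0.01)
h = np.exp(-zeta * omega_n * t) * np.sin(omega_d * t) / omega_d   # Eq. (2.5)

# The envelope decays, so the global maximum is the first peak.
t_peak = t[np.argmax(h)]
```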

Fig. 3. Wave height (mm) against time (sec): (a) wave tank data; (b) first 40 seconds of simulated input.

where $Y(\omega)$, $H(\omega)$ and $U(\omega)$ denote the Fourier transforms of $y(t)$, $h(t)$ and $u(t)$ respectively.
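As a numerical check on Eqs. (2.4)-(2.5), the IRF can be evaluated directly. The following is a minimal sketch (Python with NumPy; the function name `irf` is ours) for the wave tank parameters $\zeta = 0.2$ and $\omega_n = 0.05$:

```python
import numpy as np

def irf(t, zeta=0.2, omega_n=0.05):
    """Theoretical impulse response function, Eq. (2.4)."""
    omega_d = omega_n * np.sqrt(1.0 - zeta**2)   # damped natural frequency
    return np.exp(-zeta * omega_n * t) * np.sin(omega_d * t) / omega_d

t = np.arange(0.0, 400.0, 0.1)   # the time range plotted in Fig. 2
h = irf(t)
print(h.max())                   # peak of the IRF, a little over 15
```

For these parameter values the expression reduces to Eq. (2.5), since $\zeta\omega_n = 0.01$ and $1-\zeta^2 = 0.96$; the peak value of about 15 agrees with the vertical scale of Fig. 2.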

2.2. Input based on wave tank data

A time series of wave tank data $\{u_t;\ t = 1, 2, \ldots, 396\}$ sampled at 0.1 second intervals [12] is shown in Fig. 3(a). This time series is too short for our investigation, but was adopted to determine the coefficients of an autoregressive model AR(p) to simulate stationary time series input — e.g. see [3]. The most suitable autoregressive model, based on the Akaike information criterion (AIC) [1] and consideration of the residuals, is an AR(13) given by

$$\begin{aligned} u_t = {} & 0.4016\, u_{t-1} - 0.8701\, u_{t-2} - 0.2660\, u_{t-3} - 0.5223\, u_{t-4} - 0.5154\, u_{t-5} \\ & - 0.5076\, u_{t-6} - 0.3935\, u_{t-7} - 0.3831\, u_{t-8} - 0.2614\, u_{t-9} - 0.3748\, u_{t-10} \\ & - 0.0866\, u_{t-11} - 0.1398\, u_{t-12} - 0.0735\, u_{t-13} + w_t, \end{aligned} \qquad (2.7)$$

where $w_t$ is a Gaussian (Normal) variate with mean 0 and standard deviation 149.3. A realisation of length 40,000 from the AR(13) model, corresponding to 4,000 seconds sampled every 0.1 second, was initially used as the input to the model of the HBWED. The first 1,000 seconds of input are shown in Fig. 4(a) and the first 40 seconds are shown in Fig. 3(b), which is qualitatively similar to the original time series in Fig. 3(a). The digitised input data is denoted as $\{u_t\}$, while $\{y_t\}$ is the digitised response. From Eq. (2.1), the response at time t can be approximated by a difference equation. Using

Fig. 4. (a) First 1000 seconds of input, wave height (mm); (b) first 1000 seconds of response, amplitude (mm); both against time (sec).

central differences, the approximation for the first derivative is of the form

$$\dot{y}_t \approx \frac{y_{t+1} - y_{t-1}}{2\Delta}, \qquad (2.8)$$

and for the second derivative of the form

$$\ddot{y}_t \approx \frac{y_{t+1} - 2y_t + y_{t-1}}{\Delta^2}, \qquad (2.9)$$

where $\Delta$ is the sampling interval [20]. Thus an approximation of the response for the linear system in Eq. (2.1) is given by

$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + a_0 u_{t-1}, \qquad (2.10)$$

where

$$a_0 = \frac{\Delta^2}{1 + \zeta\omega_n\Delta}, \qquad a_1 = \frac{2 - \omega_n^2\Delta^2}{1 + \zeta\omega_n\Delta}, \qquad a_2 = \frac{\zeta\omega_n\Delta - 1}{1 + \zeta\omega_n\Delta}.$$

With $\Delta = 0.1$ sec, an approximation of the response for the specific system in Eq. (2.1) is therefore

$$y_t = \frac{1.999975}{1.001}\, y_{t-1} - \frac{0.999}{1.001}\, y_{t-2} + \frac{0.010}{1.001}\, u_{t-1}, \qquad (2.11)$$

which is plotted in Fig. 4(b) over 1,000 seconds.
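The recursion of Eqs. (2.10)-(2.11) can be sketched as follows (Python; the helper `simulate` and the unit-pulse check are our illustrative additions, not from the paper). Instead of the AR(13) input, the discretised system is driven by a single unit pulse, whose response should closely track $\Delta\,h(t)$ with $h(t)$ from Eq. (2.5):

```python
import numpy as np

zeta, omega_n, dt = 0.2, 0.05, 0.1          # parameters of Eq. (2.2)

# Central-difference coefficients of Eq. (2.10)
denom = 1.0 + zeta * omega_n * dt
a0 = dt**2 / denom
a1 = (2.0 - omega_n**2 * dt**2) / denom
a2 = (zeta * omega_n * dt - 1.0) / denom

def simulate(u):
    """y_t = a1*y_{t-1} + a2*y_{t-2} + a0*u_{t-1}, starting from rest."""
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        y[t] = a1 * y[t - 1] + a2 * (y[t - 2] if t >= 2 else 0.0) + a0 * u[t - 1]
    return y

# A one-sample unit pulse (area dt) gives approximately dt * h(t)
u = np.zeros(4000)
u[0] = 1.0
y = simulate(u)
print(a0, a1, a2)
```

With these values $a_0 = 0.01/1.001$, $a_1 = 1.999975/1.001$ and $a_2 = -0.999/1.001$, matching Eq. (2.11), and the simulated pulse response agrees with $\Delta\,h(t)$ to within a few percent of its peak.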

Fig. 5. Input spectrum and response spectrum against frequency (cycles/0.1 sec).

3. Estimation of Transfer Function

The frequency composition of a stationary time series is described by its spectrum, which is defined as the Fourier transform of its autocovariance function [4]. This spectrum can be estimated by smoothing the sample periodogram [12]. If the sample time series is of length N, then the periodogram has spikes at N/2 specific frequencies from $2\pi/(N\Delta)$ up to $\pi/\Delta$. However, these must be smoothed to produce consistent estimates of the true population spectrum. The number of spikes that are averaged is known as the span [6]. The span must be wide enough to remove spurious peaks, but not so wide that true peaks are substantially reduced. With short time series, these conflicting requirements lead to unreliable estimates of the spectrum. Another consideration is that the specific frequencies of the periodogram depend only on the length of the time series, and generally will not coincide with any deterministic frequency present in a non-stationary time series. The deterministic frequency will leak out into neighbouring spikes. Fig. 5 shows the spectra for the input and response time series used in this study, with a span of 16 and time series length 40,000. A linear dynamic system can be characterised by its transfer function, which describes its response to disturbances at specific frequencies. In a marine context, the modulus of the transfer function is known as the response amplitude operator (RAO) [12], and in electrical engineering it is known as the gain. If the spectrum of the disturbance and the

RAO is known, then the spectrum for the response can be calculated. Alternatively, the RAO can be estimated as the ratio of the response sample spectrum to the input sample spectrum. The RAO characterises the response of marine structures that can plausibly be modelled as linear systems, such as a HBWED or cargo ships, to sea states [22]. Other applications of the RAO include the response of other vehicles, such as cars on uneven road surfaces and aircraft on a runway. Another aspect of spectral analysis is that any sudden change in the spectrum of noise from a machine can be an early warning of a defect. Such monitoring is called signature analysis, and together with preventative maintenance it can avoid catastrophic failure.
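The periodogram-plus-span smoothing described above can be sketched as follows (Python; the 0.5 Hz test signal, the normalisation and the span value are our illustrative choices, not the package used in the paper):

```python
import numpy as np

def periodogram(x, dt=0.1):
    """Raw periodogram: spikes at N/2 frequencies up to the Nyquist limit 1/(2*dt)."""
    n = len(x)
    amp = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(n, d=dt)
    spec = (2.0 * dt / n) * np.abs(amp)**2
    return freqs[1:], spec[1:]               # drop the zero frequency

def smooth(spec, span=16):
    """Average `span` neighbouring spikes to get a consistent spectrum estimate."""
    return np.convolve(spec, np.ones(span) / span, mode="same")

rng = np.random.default_rng(42)
t = np.arange(4000) * 0.1
x = np.sin(2 * np.pi * 0.5 * t) + rng.normal(0.0, 0.1, t.size)
freqs, spec = periodogram(x)
spec_sm = smooth(spec, span=16)
print(freqs[np.argmax(spec_sm)])             # near the 0.5 Hz component
```

The trade-off described in the text is visible here: a wider span flattens noise spikes but also spreads the deterministic peak over neighbouring frequencies.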

3.1. Spectral estimators

Let the input signal be given by

$$u_t = U e^{i\omega t}, \qquad (3.1)$$

where U is a real number representing the amplitude of the input. Similarly, let the response be given by

$$y_t = Y e^{i(\omega t + \phi)}, \qquad (3.2)$$

where Y is a real number representing the amplitude of the response and $\phi$ is the phase shift given by the linear system. By substituting Eqs. (3.1) and (3.2) into the second order differential equation of the linear dynamical system Eq. (2.1), we see that

$$-\omega^2 Y e^{i(\omega t+\phi)} + i 2\zeta\omega_n \omega\, Y e^{i(\omega t+\phi)} + \omega_n^2\, Y e^{i(\omega t+\phi)} = U e^{i\omega t}. \qquad (3.3)$$

Eq. (3.3) leads to

$$\frac{Y}{U} = \frac{e^{-i\phi}}{\omega_n^2 - \omega^2 + i 2\zeta\omega_n\omega}, \qquad (3.4)$$

and the RAO or gain is given by

$$G(\omega) = \left| \frac{e^{-i\phi}}{\omega_n^2 - \omega^2 + i 2\zeta\omega_n\omega} \right| = \frac{1}{\sqrt{\left(\omega_n^2 - \omega^2\right)^2 + 4\zeta^2\omega_n^2\omega^2}}. \qquad (3.5)$$

This is the same as $|H(\omega)|$ in Eq. (2.6), a fact that is used with the wavelet estimator of the RAO. The maximum value for the RAO is

$$G_{\max} = 1 \Big/ \left( \zeta\omega_n^2 \sqrt{4 - 3\zeta^2} \right), \qquad (3.6)$$

when the frequency is equal to $\omega_n \sqrt{1 - \zeta^2}$. Furthermore, the phase shift is

$$\phi = -\tan^{-1}\!\left( \frac{2\zeta\omega_n\omega}{\omega_n^2 - \omega^2} \right). \qquad (3.7)$$

For a linear system with single input $u_t$ and single response $y_t$, the RAO can be estimated by

$$\hat{G}_2(\omega) = \sqrt{\frac{C_{yy}}{C_{uu}}}, \qquad (3.8)$$

where $C_{uu}$ and $C_{yy}$ are the sample spectra of the input and response respectively. However, this estimator is sensitive to noise. An alternative estimator, which is unaffected by noise on the response, is the ratio of the cross-spectrum of the input and the response to the spectrum of the input — i.e.

$$\hat{G}_1(\omega) = \frac{C_{uy}}{C_{uu}}, \qquad (3.9)$$

where $C_{uy}$ is the sample cross-spectrum of the input and response. The cross-spectrum is the Fourier transform of the cross-covariance function of the input and response time series — e.g. see [12]. Similarly, the estimator

$$\hat{G}_3(\omega) = \frac{C_{yy}}{C_{yu}} \qquad (3.10)$$

is unaffected by noise on the input [2]. A related statistic is the coherence, defined as

$$\widehat{\mathrm{coh}}(\omega) = \frac{\left| C_{uy} \right|^2}{C_{uu}\, C_{yy}} = \frac{\hat{G}_1^2}{\hat{G}_2^2}. \qquad (3.11)$$

The coherence can be thought of as the square of the correlation coefficient between the input and response over frequency, and its value is therefore between 0 and 1. It can be used to detect noise on input, noise on response, nonlinearity, compensated delays and leakage (resolution bias) in the linear system.

If there is noise on both the input and the response, one strategy is to use $\hat{G}_2$, after making an allowance for the noise component in the computed $C_{uu}$ and $C_{yy}$. The allowance considered here is to assume the high frequency component of the computed $C_{uu}$ and $C_{yy}$ is due to noise, and that the noise is white and has a flat spectrum. Computationally, this modification is implemented as a subtraction of the average of the spectrum ordinates over the highest 1/20 of the frequency range from all the spectrum ordinates. This modification of $\hat{G}_2$ is denoted by

$$\hat{G}_4(\omega) = \sqrt{\frac{C_{yy}^{-}}{C_{uu}^{-}}}, \qquad (3.12)$$

where $C_{uu}^{-}$ and $C_{yy}^{-}$ are the modified spectrum estimates.
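The behaviour of $\hat{G}_1$, $\hat{G}_2$ and the coherence under response noise can be illustrated on simulated data. The sketch below (Python) uses Welch-style segment averaging as the smoothing scheme and a toy memoryless system of gain 2; both are our illustrative choices, not the smoothing used in the paper:

```python
import numpy as np

def spectra(u, y, nseg=64):
    """Segment-averaged auto- and cross-spectra (a simple smoothing scheme)."""
    n = (len(u) // nseg) * nseg
    U = np.fft.rfft(u[:n].reshape(nseg, -1), axis=1)
    Y = np.fft.rfft(y[:n].reshape(nseg, -1), axis=1)
    c_uu = np.mean(np.abs(U)**2, axis=0)
    c_yy = np.mean(np.abs(Y)**2, axis=0)
    c_uy = np.mean(np.conj(U) * Y, axis=0)
    return c_uu, c_yy, c_uy

rng = np.random.default_rng(1)
u = rng.normal(size=2**16)
y = 2.0 * u + rng.normal(size=u.size)     # linear system of gain 2, noise on response

c_uu, c_yy, c_uy = spectra(u, y)
g1 = np.abs(c_uy) / c_uu                  # Eq. (3.9): robust to response noise
g2 = np.sqrt(c_yy / c_uu)                 # Eq. (3.8): inflated by response noise
coh = np.abs(c_uy)**2 / (c_uu * c_yy)     # Eq. (3.11)
print(np.median(g1), np.median(g2), np.median(coh))
```

With unit-variance noise on the response, $\hat{G}_1$ recovers the true gain 2 while $\hat{G}_2$ is inflated towards $\sqrt{4+1} \approx 2.24$, and the coherence is about $4/5$.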

3.2. Wavelet IRF estimator

Wavelet transforms are defined in continuous and discrete forms. In statistical studies, sample data are usually taken in discrete form, so we only discuss the discrete wavelet transform (DWT).

The DWT of a signal of length n yields the DWT coefficients, consisting of J levels of wavelet coefficients $\{W_{j,k}\}$ and J levels of scaling coefficients $\{V_{j,k}\}$, where $j = 0, \ldots, J-1$, $J = \log_2(n)$, $k = 0, \ldots, N_j - 1$ and $N_j = 2^j$. A simpler and faster technique to perform the discrete wavelet transform, known as the DWT pyramid algorithm, was introduced by Mallat [18]. By using the DWT pyramid algorithm, the coefficients can be calculated by

$$W_{j,k} = \sum_{l=0}^{L-1} \psi_l\, V_{j+1,\, (2k+1-l) \bmod N_{j+1}}, \qquad (3.13)$$

$$V_{j,k} = \sum_{l=0}^{L-1} \varphi_l\, V_{j+1,\, (2k+1-l) \bmod N_{j+1}}, \qquad (3.14)$$

where $\psi$ is the wavelet filter, $\varphi$ is the scaling filter, L is the width of the filter and $V_{J,k} = x_k$, with $\{x_k;\ k = 0, 1, 2, \ldots, n-1\}$ the time series data. In this analysis, the Haar wavelet is used, where

$$\psi_l^{\mathrm{Haar}} = \begin{cases} -1/\sqrt{2}, & l = 0 \\ \phantom{-}1/\sqrt{2}, & l = 1, \end{cases} \qquad (3.15)$$

$$\varphi_l^{\mathrm{Haar}} = \begin{cases} 1/\sqrt{2}, & l = 0 \\ 1/\sqrt{2}, & l = 1. \end{cases} \qquad (3.16)$$
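For the Haar filters of Eqs. (3.15)-(3.16), the pyramid algorithm of Eqs. (3.13)-(3.14) takes a particularly simple form, since the filter width is L = 2 and the circular indexing never wraps. A sketch (Python; the function name and return layout are ours):

```python
import numpy as np

def haar_dwt(x):
    """Mallat's pyramid algorithm, Eqs. (3.13)-(3.14), with the Haar filters."""
    v = np.asarray(x, dtype=float)              # V_{J,k} = x_k, with n = 2**J
    details = []                                # wavelet coefficients, finest level first
    while len(v) > 1:
        w = (v[0::2] - v[1::2]) / np.sqrt(2.0)  # W_{j,k} = (V[2k] - V[2k+1]) / sqrt(2)
        v = (v[0::2] + v[1::2]) / np.sqrt(2.0)  # V_{j,k} = (V[2k] + V[2k+1]) / sqrt(2)
        details.append(w)
    return v[0], details[::-1]                  # V_{0,0}, then levels j = 0, ..., J-1

x = np.array([4.0, 2.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0])
v00, details = haar_dwt(x)
print(v00, [len(w) for w in details])
```

Because the filters are orthonormal, the transform preserves energy: the squared coefficients sum to the squared data, which is a convenient correctness check.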

Consider a time series $\{x_t;\ t = 1, 2, 3, \ldots, n\}$. The DWT of $\{x_t\}$ can be represented by the sequence

$$\mathrm{DWT}(x) = \begin{bmatrix} \tilde{x}_1 & \tilde{x}_2 & \tilde{x}_3 & \tilde{x}_4 & \cdots & \tilde{x}_n \end{bmatrix}, \qquad (3.17)$$

where $\tilde{x}_1 = V_{0,0}$ is the scaling coefficient at level 0, $\tilde{x}_2 = W_{0,0}$ is the wavelet coefficient at level 0, $\tilde{x}_3 = W_{1,0}$ and $\tilde{x}_4 = W_{1,1}$ are the first and second wavelet coefficients at level 1, and $\tilde{x}_n = W_{J-1,\, N_{J-1}-1}$ is the final wavelet coefficient at level $J-1$. An interesting feature of the wavelet transform is that the signal can be recovered from its wavelet and scaling coefficients by the inverse discrete wavelet transform (IDWT). It can be performed by using Mallat's pyramid algorithm, defined as

$$V_{j,k} = \sum_{l=0}^{L-1} \psi_l\, W^{\uparrow}_{j-1,\, (k+l) \bmod N_j} + \sum_{l=0}^{L-1} \varphi_l\, V^{\uparrow}_{j-1,\, (k+l) \bmod N_j}, \qquad (3.18)$$

where

$$W^{\uparrow}_{j,k} = \begin{cases} 0, & k = 0, 2, \ldots, N_{j+1} \\ W_{j,(k-1)/2}, & k = 1, 3, \ldots, N_{j+1}-1, \end{cases} \qquad (3.19)$$

$$V^{\uparrow}_{j,k} = \begin{cases} 0, & k = 0, 2, \ldots, N_{j+1} \\ V_{j,(k-1)/2}, & k = 1, 3, \ldots, N_{j+1}-1. \end{cases} \qquad (3.20)$$

Following Newland [24],

$$y_n = \Delta\, h^{\mathrm{DWT}} \cdot u^{\mathrm{DWT}}, \qquad (3.21)$$

where

$$h^{\mathrm{DWT}} = \begin{bmatrix} \tilde{h}_1 & \tilde{h}_2 & \tilde{h}_3 & \tilde{h}_4 & \cdots & \tilde{h}_n \end{bmatrix}, \qquad u^{\mathrm{DWT}} = \begin{bmatrix} \tilde{u}_1 \\ \tilde{u}_2 \\ \tilde{u}_3/2 \\ \tilde{u}_4/2 \\ \tilde{u}_5/4 \\ \vdots \\ \tilde{u}_n/2^{J-1} \end{bmatrix}.$$

Here $h^{\mathrm{DWT}}$ is the DWT of the IRF, $u^{\mathrm{DWT}}$ is the DWT of the input time series $\{u_{n-\theta},\ 1 \leq \theta \leq n\}$, $y_n$ is the response at time n and $\Delta$ is the sampling interval. For $N - n + 1$ responses, Eq. (3.21) can be written in matrix form

$$Y = \Delta\, h^{\mathrm{DWT}} \cdot U^{\mathrm{DWT}}, \qquad (3.22)$$

where

$$Y = \begin{bmatrix} y_n & y_{n+1} & \cdots & y_N \end{bmatrix}, \qquad h^{\mathrm{DWT}} = \begin{bmatrix} \tilde{h}_1 & \tilde{h}_2 & \cdots & \tilde{h}_n \end{bmatrix}, \qquad U^{\mathrm{DWT}} = \begin{bmatrix} u_n^{\mathrm{DWT}} & u_{n+1}^{\mathrm{DWT}} & \cdots & u_N^{\mathrm{DWT}} \end{bmatrix},$$

and $u_i^{\mathrm{DWT}}$ is the DWT of $\{u_{i-\theta},\ 1 \leq \theta \leq n\}$. If the number of inputs is N = 64 and n = 16, the number of responses that can be calculated by Eq. (3.22) is $N - n + 1 = 49$. Thus Eq. (3.22) can be rewritten as

$$\begin{bmatrix} y_{16} & y_{17} & \cdots & y_{64} \end{bmatrix} = \Delta \begin{bmatrix} \tilde{h}_1 & \tilde{h}_2 & \cdots & \tilde{h}_{16} \end{bmatrix} \times \begin{bmatrix} \tilde{u}_1^{(16)} & \tilde{u}_1^{(17)} & \cdots & \tilde{u}_1^{(64)} \\ \tilde{u}_2^{(16)} & \tilde{u}_2^{(17)} & \cdots & \tilde{u}_2^{(64)} \\ \tilde{u}_3^{(16)}/2 & \tilde{u}_3^{(17)}/2 & \cdots & \tilde{u}_3^{(64)}/2 \\ \tilde{u}_4^{(16)}/2 & \tilde{u}_4^{(17)}/2 & \cdots & \tilde{u}_4^{(64)}/2 \\ \tilde{u}_5^{(16)}/4 & \tilde{u}_5^{(17)}/4 & \cdots & \tilde{u}_5^{(64)}/4 \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{u}_{16}^{(16)}/8 & \tilde{u}_{16}^{(17)}/8 & \cdots & \tilde{u}_{16}^{(64)}/8 \end{bmatrix}. \qquad (3.23)$$

Robertson et al. [29] propose a method to estimate the IRF when the inputs and responses of a linear dynamic system are known, using the above relation. From Eq. (3.22), the DWT of the IRF can be estimated by

$$h^{\mathrm{DWT}} = \frac{1}{\Delta}\; Y \cdot \left(U^{\mathrm{DWT}}\right)^T \cdot \left( U^{\mathrm{DWT}} \left(U^{\mathrm{DWT}}\right)^T \right)^{-1}. \qquad (3.24)$$

Thus the IRF can be found by the inverse DWT of the terms $h^{\mathrm{DWT}}/2^J$ — i.e.

$$\{h_i,\ i = 1, 2, \ldots, n\} = \mathrm{IDWT}\!\left\{ h^{\mathrm{DWT}}/2^J \right\}, \qquad (3.25)$$

where $J = \log_2 n$. However, this operation cannot be directly implemented if we use the wavelet packages in MATLAB or in the R software [27]. Some alteration should be made to Eqs. (3.21)-(3.25), so that this wavelet estimation of the RAO can be used without producing any error. From Eq. (3.21), the factor $1/2^j$ is removed from the DWT coefficients of the input time series, so that

$$u^{\mathrm{DWT}} = \begin{bmatrix} \tilde{u}_1 & \tilde{u}_2 & \tilde{u}_3 & \tilde{u}_4 & \cdots & \tilde{u}_n \end{bmatrix}^T. \qquad (3.26)$$

Thus the matrix operation shown in Eq. (3.23) becomes

$$\begin{bmatrix} y_{16} & y_{17} & \cdots & y_{64} \end{bmatrix} = \Delta \begin{bmatrix} \tilde{h}_1 & \tilde{h}_2 & \cdots & \tilde{h}_{16} \end{bmatrix} \times \begin{bmatrix} \tilde{u}_1^{(16)} & \tilde{u}_1^{(17)} & \cdots & \tilde{u}_1^{(64)} \\ \tilde{u}_2^{(16)} & \tilde{u}_2^{(17)} & \cdots & \tilde{u}_2^{(64)} \\ \tilde{u}_3^{(16)} & \tilde{u}_3^{(17)} & \cdots & \tilde{u}_3^{(64)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{u}_{16}^{(16)} & \tilde{u}_{16}^{(17)} & \cdots & \tilde{u}_{16}^{(64)} \end{bmatrix}. \qquad (3.27)$$

It is notable that the first $n-1$ response values are lost in the previous matrix operation. However, this can be overcome by introducing $n-1$ zeroes before the first input time series, so that the input time series record length becomes $N + n - 1$, where the first $n-1$ values are zero — i.e. $\{u^{\mathrm{new}}_1 = 0,\ u^{\mathrm{new}}_2 = 0,\ \ldots,\ u^{\mathrm{new}}_{n-1} = 0,\ u^{\mathrm{new}}_n = u_1,\ u^{\mathrm{new}}_{n+1} = u_2,\ \ldots,\ u^{\mathrm{new}}_{N+n-1} = u_N\}$. This is because there is essentially a zero signal prior to time 0. Thus from Eq. (3.22), for N responses

$$Y = \Delta\, h^{\mathrm{DWT}} \cdot \hat{U}^{\mathrm{DWT}}, \qquad (3.28)$$

where

$$Y = \begin{bmatrix} y_1 & y_2 & \cdots & y_N \end{bmatrix}, \qquad h^{\mathrm{DWT}} = \begin{bmatrix} \tilde{h}_1 & \tilde{h}_2 & \cdots & \tilde{h}_n \end{bmatrix}, \qquad \hat{U}^{\mathrm{DWT}} = \begin{bmatrix} \hat{u}_n^{\mathrm{DWT}} & \hat{u}_{n+1}^{\mathrm{DWT}} & \cdots & \hat{u}_{N+n-1}^{\mathrm{DWT}} \end{bmatrix},$$

and $\hat{u}_i^{\mathrm{DWT}}$ is the DWT of $\{u^{\mathrm{new}}_{i-\theta},\ 1 \leq \theta \leq n\}$. Consequently, Eq. (3.27) can be rewritten as

$$\begin{bmatrix} y_1 & y_2 & \cdots & y_{64} \end{bmatrix} = \Delta \begin{bmatrix} \tilde{h}_1 & \tilde{h}_2 & \cdots & \tilde{h}_{16} \end{bmatrix} \times \begin{bmatrix} \tilde{u}_1^{(16)} & \tilde{u}_1^{(17)} & \cdots & \tilde{u}_1^{(79)} \\ \tilde{u}_2^{(16)} & \tilde{u}_2^{(17)} & \cdots & \tilde{u}_2^{(79)} \\ \tilde{u}_3^{(16)} & \tilde{u}_3^{(17)} & \cdots & \tilde{u}_3^{(79)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{u}_{16}^{(16)} & \tilde{u}_{16}^{(17)} & \cdots & \tilde{u}_{16}^{(79)} \end{bmatrix}. \qquad (3.29)$$

To find the DWT of the IRF, we use the same step as in Eq. (3.24). However, to find the IRF we do not need to divide the DWT of the IRF by $2^J$ as in Eq. (3.25) — i.e. we use

$$\{h_i,\ i = 1, 2, \ldots, n\} = \mathrm{IDWT}\!\left\{ h^{\mathrm{DWT}} \right\}, \qquad (3.30)$$

where $J = \log_2 n$. From the discrete Fourier transform, the wavelet-estimated IRF can give us the wavelet-estimated RAO as

$$\hat{G}_w(\omega_t) = \Delta \left| H(\omega_t) \right| = \Delta \left| \sum_{k=1}^{n} h_k\, e^{-i 2\pi t k / n} \right|, \qquad (3.31)$$

where $H(\omega_t)$ is the discrete Fourier transform of the wavelet-estimated IRF, involving $\omega_t = t/(n\Delta)$ and $t = 1, \ldots, n$.

3.3. Multiple regression IRF estimator

Alternatively, we can use multiple regression to estimate the discretised impulse response $\{h_i,\ i = 1, 2, \ldots, n\}$, defined as

$$y_t = h_1 u_t + h_2 u_{t-1} + \cdots + h_n u_{t-n} + \varepsilon_t, \qquad (3.32)$$

where $t = n-1, \ldots, N$ and $\varepsilon_t$ is discrete white noise. In matrix form, Eq. (3.32) can be written as

$$Y = h \cdot U + \varepsilon. \qquad (3.33)$$

The IRF $\{h_i\}$ can be estimated by the least squares method

$$h = \frac{1}{\Delta}\; Y \cdot U^T \cdot \left( U\, U^T \right)^{-1}. \qquad (3.34)$$

However, the multiple regression provided in the R software [27] uses a lot of computational memory for storage overhead, especially if we use long time series data.
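A sketch of the least squares estimator of Eqs. (3.32)-(3.34) (Python; we solve the system with `lstsq` rather than forming the inverse explicitly, and the 16-point "true" IRF is an illustrative choice, not the HBWED model):

```python
import numpy as np

rng = np.random.default_rng(7)
n, N, dt = 16, 4096, 0.1
k = np.arange(n)
h_true = np.exp(-0.3 * k) * np.sin(0.8 * k)         # illustrative discretised IRF

u = rng.normal(size=N)
y = dt * np.convolve(u, h_true)[:N]                 # y_t = dt * sum_k h_k u_{t-k}

# Rows of lagged inputs as in Eq. (3.32): u_t, u_{t-1}, ..., u_{t-n+1}
U = np.column_stack([np.r_[np.zeros(j), u[:N - j]] for j in range(n)])
h_hat, *_ = np.linalg.lstsq(U, y / dt, rcond=None)  # least squares, Eq. (3.34)

rao = dt * np.abs(np.fft.rfft(h_hat, 512))          # RAO from the estimated IRF
print(np.max(np.abs(h_hat - h_true)))               # close to zero for noise-free data
```

With noise-free data the lagged-input design matrix is well conditioned, and the estimated IRF matches the true one to machine precision; the RAO then follows by taking the Fourier transform, as in Eq. (3.31).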

3.4. Combined wavelet-spectral method

Both the wavelet and spectral analyses discussed give good estimates of the RAO for data without noise. If there is noise on the response time series, $\hat{G}_1$ and $\hat{G}_w$ give good estimates of the RAO provided the noise is not correlated with the input signal. Meanwhile, $\hat{G}_3$ is a good RAO estimator if there is noise on the input time series, and $\hat{G}_4$ can be used if there is noise on both input and response signals. However, wavelet analysis can also offer another alternative RAO estimator if there is noise on the input time series only, or if there is noise on both input and response time series. If the noise is intermittent, wavelet analysis can be used to select noise-free sub-series, from which the RAO can be estimated by spectral methods. If the noise is stationary and independent, and the input signal is relatively low frequency, the standard deviation of the noise $\hat{\sigma}_w$ can be estimated from the median absolute deviation (MAD) of the finest scale wavelet coefficients [8]. In this context, the MAD — computed in R with the mad() function [23] — is the median of the absolute values of deviations from the median [27]. The estimate of the RAO is

$$\hat{G}_{ws}(\omega) = \sqrt{\frac{C_{yy}}{C_{uu} - C_{ww}}}, \qquad (3.35)$$

where $C_{uu}$ is the estimated spectrum of the noisy measurement of the input and $C_{ww}$ is the spectrum of the noise. If we assume the noise is white, then the estimate of $C_{ww}$ is $\hat{\sigma}_w^2 / 0.05$.
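The noise-level estimate used in $\hat{G}_{ws}$ can be sketched as follows (Python; the slowly varying test signal is our illustrative choice, and the factor 0.6745 is the Gaussian-consistency rescaling in the spirit of Donoho and Johnstone [8], which is also what R's mad() applies by default):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(4096) * 0.1
signal = 100.0 * np.sin(2 * np.pi * 0.02 * t)   # low-frequency "smoothed input"
x = signal + rng.normal(0.0, 50.0, t.size)      # noisy measurement, sigma = 50

# Finest-scale Haar wavelet coefficients: differences of neighbours / sqrt(2)
w = (x[0::2] - x[1::2]) / np.sqrt(2.0)

# MAD of the finest coefficients, rescaled to be consistent for Gaussian noise
sigma_hat = np.median(np.abs(w - np.median(w))) / 0.6745
print(sigma_hat)    # estimate of the noise standard deviation (true value 50)
```

A slowly varying signal contributes almost nothing at the finest scale, so these coefficients are essentially noise; $\hat{\sigma}_w^2$ then gives the flat noise spectrum $C_{ww}$ subtracted in Eq. (3.35).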

4. Estimates of the Transfer Function of a Heaving Buoy

4.1. No noise

For this investigation, the AR(13) input and the response time series discussed in Section 2.2 were used to estimate the IRF for the linear dynamical system of Eq. (2.1). Two lengths of time series were considered, to compare the dependence of each method on the time series length. The shorter time series was of length T = 1000 seconds, and the longer time series was T = 4000 seconds. Initially, the Haar wavelet (the simplest type of wavelet filter) was selected for the analysis. The wavelet estimates of the IRF for lengths T = 1000 seconds and T = 4000 seconds are plotted in Fig. 2, which shows very similar values for those two cases, both close to the theoretical IRF. The Fourier transform of the IRF is the estimate of the RAO. To make the comparison, the wavelet and spectral estimates of the RAO are plotted with the theoretical RAO in Fig. 6 for both cases. For the spectral estimates, the span values were set at 4 and 16 for the shorter and longer time series respectively, to smooth the estimates of the RAO and prevent identifying spurious peaks. In Fig. 6(a), we see that the spectral estimates based on the shorter time series with span 4 underestimate the peak at the natural frequency, but $\hat{G}_3$ is less biased than $\hat{G}_2$ and $\hat{G}_1$. The wavelet estimate is the least biased. The bandwidth of the spectral estimation is proportional to $S \times 0.5/(N/2)$, where S is the span and N is the time series length. The coherence function is relatively low around the natural frequency, reflecting the difference between $\hat{G}_1$ and $\hat{G}_2$. With the longer time series, the span was set to 16 but the number of spikes in the spectra was increased by a factor of 4, and the resulting bandwidth was narrower. For this longer time series, all four estimates shown in Fig. 6(b) are close to the theoretical value. We therefore conclude that the lengths of the input and response time series play an important role in estimating the RAO via the spectral methods.
The longer time series (4,000 sec, and hence 40,000 data points) was needed to obtain a reliable estimate of the RAO.

In the previous analysis, the Haar wavelet was used in the wavelet method. To show that this method works for other types of wavelet, the Daubechies D(4) [7] wavelet has also been used. Fig. 7 shows that the difference between the wavelet estimate of the RAO using the Daubechies D(4) wavelet and the wavelet estimate of the RAO using the

Fig. 6. Theoretical RAO G, spectral estimates $\hat{G}_1$, $\hat{G}_2$, $\hat{G}_3$, wavelet estimate $\hat{G}_w$ and coherence, against frequency (Hz): (a) 1000 seconds; (b) 4000 seconds.

Fig. 7. Theoretical RAO G, spectral estimates $\hat{G}_1$, $\hat{G}_2$, $\hat{G}_3$ and wavelet estimates $\hat{G}_w$ (Haar) and $\hat{G}_w$ (D4), against frequency (Hz).

Haar wavelet is imperceptible. Thus even when the simpler (Haar) wavelet was used, the wavelet method gave a good estimate of the RAO.

4.2. Noise on the response

Two types of noise were considered for the case of noise on the response time series — viz.

Fig. 8. RAO estimates and coherence against frequency (Hz): (a) DWN, $w_t \sim N(0, 10000)$; (b) correlated noise.

1. discrete white noise (DWN), where $w_t$ is Gaussian with mean 0 and standard deviation 10000;

2. correlated noise, $n_t = 0.9\, n_{t-1} + w_t$, where $w_t$ is Gaussian with mean 0 and standard deviation 1000.

From Fig. 8(a) it can be seen that $\hat{G}_1$ and $\hat{G}_w$ are less affected by the noise than $\hat{G}_2$, while $\hat{G}_3$ is worse. Both $\hat{G}_1$ and $\hat{G}_w$ are very similar at the peak, but $\hat{G}_w$ is less affected by the noise at higher frequencies. The coherence is relatively low at high frequencies, which is to be expected because the response of the system to high frequency forcing is slight and the noise dominates. All the estimates are poor as the frequency tends to 0. The results with correlated noise are qualitatively similar. $\hat{G}_1$ is insensitive to noise uncorrelated with the input because this noise does not affect the cross-covariance of the input and response signals. The wavelet estimator is insensitive to noise, since the wavelet method is equivalent to fitting the regression model

$$y_t = \beta_0 + \beta_1 u_t + \beta_2 u_{t-1} + \cdots + n_t \qquad (4.1)$$

by ordinary least squares. The noise $n_t$ added to the response does not bias the estimates $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_n$, even if it is autocorrelated.

4.3. Noise on the input

To demonstrate the method of Section 3.4, we smooth the autoregressive input using the locally weighted scatterplot smoothing (LOESS) method. The smoothed input and response time series are shown in Fig. 9, and the DWT coefficients of this smoothed

Fig. 9. (a) First 1000 seconds of smoothed input without noise, wave height (mm); (b) first 1000 seconds of smoothed input with noise, $v_t \sim N(0, 50)$, wave height (mm); (c) first 1000 seconds of response, amplitude (mm); all against time (sec).

input are shown in Fig. 10(a). Gaussian white noise with mean 0 and standard deviation 50.0 was added to the smoothed input, and the DWT coefficients of this noise-corrupted measurement of the input are shown in Fig. 10(b). The estimate of the standard deviation of the noise is 49.7. It is shown in Fig. 11(a) that $\hat{G}_{ws}$ is as good as $\hat{G}_3$ when the noise on the input signal is discrete white noise, because from Eq. (3.35) the noise spectrum has been removed from the input. However, the other estimators of the RAO are affected by the added noise on the input time series.

The correlated noise $n_t = 0.9\, n_{t-1} + v_t$ was also considered, where $v_t$ is Gaussian with mean 0 and standard deviation 10. However, the combined wavelet-spectral estimate $\hat{G}_{ws}$ of the RAO is not as good as the spectral estimator $\hat{G}_3$, because the correlated noise spectrum is not flat, unlike the discrete white noise spectrum.

Fig. 10. DWT coefficients (standard transform, Haar wavelet), resolution level against translate: (a) smoothed input without noise; (b) smoothed input with noise, $v_t \sim N(0, 50)$.

Fig. 11. RAO estimates and coherence against frequency (Hz): (a) DWN, $v_t \sim N(0, 50)$; (b) correlated noise.

4.4. Noise on both the input and the response

Noise on both the input and the response time series was also considered, where the added noise is discrete white noise. In Fig. 12, the modified spectral estimate $\hat{G}_4$ is seen to be better than the other estimators. This is to be expected, since $\hat{G}_4$ uses the modified input

Fig. 12. RAO estimates and coherence against frequency (Hz), with noise on both input and response: $v_t \sim N(0, 50)$, $w_t \sim N(0, 20000)$.

and response spectrum estimates. The combined wavelet-spectral estimate $\hat{G}_{ws}$ gives a good estimate of the RAO's peak, but gets worse at higher frequency.

4.5. Different types of input signals

Four different types of signal were also considered as the input to the single mode of vibration linear system in Eq. (2.2) — viz.

1. chirp signal with constant amplitude,
2. chirp signal with increasing amplitude,
3. unit impulse signal,
4. pseudorandom binary sequence (PRBS) signal.

The responses are shown in Fig. 13. Comparisons of both spectral and wavelet RAO estimators are shown in Fig. 14. For both of the chirp input signal cases, it is seen that the spectral methods are not suitable for estimating the RAO, whereas the wavelet method performs very well. For the PRBS and the unit impulse signal, the wavelet estimator is only marginally better than the spectral estimators at the peak of the RAO. Differences are caused by the selected bandwidth/span of the spectral analysis package, which over-smoothed the spectrum estimation and hence biased the RAO spectral estimates, especially at the peak response.

Fig. 13. Input signals, wave height (mm), and responses, amplitude (mm), against time (sec): (a) chirp with constant amplitude; (b) chirp with increasing amplitude; (c) unit impulse; (d) PRBS.

5. Conclusions and Discussions

Spectral analysis is a well established technique for estimating the RAO of linear systems. With long data records such as might be gathered over 1 hour at a sampling rate of 10 per second, and negligible signal noise, $\hat{G}_w$ offers no significant advantage over $\hat{G}_1$, $\hat{G}_2$ or $\hat{G}_3$ — and the detail around the peak is worse (cf. Fig. 7). The poor detail around

Fig. 14. RAO estimates and coherence against frequency (Hz) for the four input signals: (a) chirp with constant amplitude; (b) chirp with increasing amplitude; (c) unit impulse; (d) PRBS.

the peak is due to the relatively small number of points in the estimate of the impulse response, because of limits on the sizes of the non-sparse matrices that can be inverted.

For shorter records consisting of a few thousand data, $\hat{G}_w$ is better at identifying the magnitude of the peak response (Fig. 6(a)). The estimation of an RAO from a short time series of a few hundred points, such as might arise in economics, requires considerable smoothing of estimated spectra by autoregressive estimators, or of the IRF. We have made no comparisons, as in most RAO estimation it is possible to obtain long time series, typically sampled from some analogue measuring device.

If there is only noise on the response, $\hat{G}_1$ should be used rather than $\hat{G}_2$, and $\hat{G}_3$ should not be used. Theoretically, both $\hat{G}_w$ and $\hat{G}_1$ are not biased by noise that is independent of the input, although the precision of the estimate will be reduced. However, when there is noise only on the input, $\hat{G}_3$ gives a very good estimate of the RAO compared with $\hat{G}_1$, $\hat{G}_2$ and $\hat{G}_w$. Noise on both the input and response time series affects all three spectral estimators ($\hat{G}_1$, $\hat{G}_2$ and $\hat{G}_3$) and the wavelet estimator ($\hat{G}_w$) of the RAO. However, by using $\hat{G}_4$ we can still get a good estimate of the RAO by making some modifications to the computed spectra. A combination of the wavelet transform, to either identify noise-free periods or to estimate the variance of the noise and hence modify the input spectrum, followed by spectral analysis, also shows promise. In Section 4.5, it was shown that spectral analysis is inappropriate if the input signal has a continuously varying frequency, such as for chirp signals. In contrast, the wavelet method is able to deal with such special cases.

$\hat{G}_w$ relies on estimation of the IRF, from which the RAO is estimated by taking its Fourier transform. The IRF can also be estimated directly without recourse to the wavelet transform, by fitting a multiple regression model using ordinary least squares (OLS). For a given number of points in the IRF, the matrix to be inverted has the same eigenvalues as that for the wavelet method, and neither matrix is sparse. For our case study, the condition number was 11.78 for OLS and 10.45 for wavelets. When using the R software [27], we were surprised that the computation time was nearly double with OLS. However, OLS has the advantage that the time series does not have to be dyadic (have a length that is a power of 2). The choice between using wavelets or OLS to estimate the IRF will typically depend on the available software. The theory and estimation of the RAO depend on the assumed linear dynamics, but it yields a reasonable approximation for many dynamical systems. However, there is considerable scope for the application of wavelets in nonlinear dynamics, and it is now an active research area [5, 16, 17, 31]. This application of wavelets, especially to the single mode of a vibration system, could be beneficial for the development of nonlinear single degree of freedom wave energy devices such as the OWSC and variants of the HBWED.

Acknowledgments

The authors acknowledge the developers of the open source software R [27], and express deep appreciation to the Universiti Kebangsaan Malaysia and the Ministry of Higher Education Malaysia for the financial allocation for this work.

Notation

$u_t$ — input time series
$y_t$ — response time series
$\zeta$ — damping factor
$\omega_n$ — undamped natural frequency
$\Delta$ — sampling interval
$h_t$ — impulse response
$U(\omega)$, $Y(\omega)$, $H(\omega)$ — Fourier transforms of $u_t$, $y_t$, $h_t$
$U$, $Y$ — absolute values of $U(\omega)$, $Y(\omega)$
$C_{uu}$ — input spectrum
$C_{yy}$ — response spectrum
$C_{uy}$ — input-response cross-spectrum
$G(\omega)$ — theoretical RAO
$\hat{G}_1(\omega)$, $\hat{G}_2(\omega)$, $\hat{G}_3(\omega)$ — spectral RAO estimators
$\hat{G}_4(\omega)$ — modified spectral RAO estimator
$\widehat{\mathrm{coh}}(\omega)$ — coherence
$W_{j,k}$ — wavelet coefficients
$V_{j,k}$ — scaling coefficients
$\psi$ — wavelet filter
$\varphi$ — scaling filter
$\mathrm{DWT}(x)$ — DWT of $\{x_t\}$
$\hat{G}_w(\omega)$ — wavelet RAO estimator
$\hat{G}_{ws}(\omega)$ — combined wavelet-spectral RAO estimator
$v_t$, $w_t$ — white noise on the input and response time series
$n_t$ — correlated noise time series

References

[1] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723, 1974.
[2] J. S. Bendat and A. G. Piersol. Random Data: Analysis and Measurement Procedures, volume 729. Wiley, 2011.
[3] G. E. P. Box, G. M. Jenkins, and G. C. Reinsel. Time Series Analysis. Holden-Day, San Francisco, 1976.
[4] R. N. Bracewell. The Fourier Transform & Its Applications, 3rd ed. McGraw-Hill, 2000.
[5] S. L. Chen, J. J. Liu, and H. C. Lai. Wavelet analysis for identification of damping ratios and natural frequencies. Journal of Sound and Vibration, 323(1-2):130–147, 2009.
[6] P. S. P. Cowpertwait and A. V. Metcalfe. Introductory Time Series with R. Springer Verlag, 2009.
[7] I. Daubechies. Ten Lectures on Wavelets, volume 61. Society for Industrial Mathematics, 1992.
[8] D. L. Donoho and I. M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, 1994.
[9] S. Gouttebroze and J. Lardies. On using the wavelet transform in modal analysis. Mechanics Research Communications, 28(5):561–569, 2001.
[10] A. Haar. Zur Theorie der orthogonalen Funktionensysteme. Mathematische Annalen, 69(3):331–371, 1910.
[11] W. Hardle, G. Kerkyacharian, D. Picard, and A. Tsybakov. Wavelets, Approximation and Statistical Applications. Springer-Verlag, New York, 1998.
[12] G. E. Hearn and A. Metcalfe. Spectral Analysis in Engineering: Concepts and Cases. Arnold, London, 1995.
[13] W. R. Hewlett. Inventions of Opportunity: Technology With Market Needs. Hewlett Packard Co, 1983.
[14] G. M. Jenkins and D. G. Watts. Spectral Analysis and Its Applications. Holden-Day, 1968.
[15] Y. Y. Kim, J. C. Hong, and N. Y. Lee. Frequency response function estimation via a robust wavelet de-noising method. Journal of Sound and Vibration, 244(4):635–649, 2001.
[16] Y. Kitada. Identification of nonlinear structural dynamic systems using wavelets. Journal of Engineering Mechanics, 124(10):1059, 1998.
[17] D. T. L. Lee and A. Yamamoto. Wavelet analysis: Theory and applications. Hewlett-Packard Journal, 45:44–54, 1994.
[18] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.
[19] M. Masubuchi and R. Kawatani. Frequency response analysis of an ocean wave energy converter. Journal of Dynamic Systems, Measurement, and Control, 105(1):30–38, 1983.
[20] J. H. Mathews and K. D. Fink. Numerical Methods Using MATLAB, volume 31. Prentice Hall, Upper Saddle River, 1999.
[21] J. R. McCusker, K. Danai, and D. O. Kazmer. Validation of dynamic models in the time-scale domain. Journal of Dynamic Systems, Measurement, and Control, 132(6):061402, 2010.
[22] A. Metcalfe, L. Maurits, T. Svenson, R. Thach, and G. E. Hearn. Modal analysis of a small ship sea keeping trial. Australian & New Zealand Industrial and Applied Mathematics Journal, 47:915–933, July 2007.
[23] G. P. Nason. Wavelet Methods in Statistics with R. Springer-Verlag, New York, 2008.
[24] D. E. Newland. An Introduction to Random Vibrations, Spectral and Wavelet Analysis. Longman Scientific & Technical, 1993.
[25] Ocean Power Technologies, Inc. Making Waves in Power. http://www.oceanpowertechnologies.com.
[26] S. Pernot and C. H. Lamarque. A wavelet-Galerkin procedure to investigate time-periodic systems: Transient vibration and stability analysis. Journal of Sound and Vibration, 245(5):845–875, 2001.
[27] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2008.
[28] J. Ringwood. The dynamics of wave energy. In Proceedings of the Irish Signals and Systems Conference, June 2006.
[29] A. N. Robertson, K. C. Park, and K. F. Alvin. Extraction of impulse response data via wavelet transform for structural system identification. Journal of Vibration and Acoustics, 120(1):252–260, 1998.
[30] A. N. Robertson, K. C. Park, and K. F. Alvin. Identification of structural dynamics models using wavelet-generated impulse response data. Journal of Vibration and Acoustics, 120(1):261–266, 1998.
[31] W. J. Staszewski. Analysis of non-linear systems using wavelets. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 214(11):1339–1353, 2000.
[32] T. Whittaker and M. Folley. Nearshore oscillating wave surge converters and the development of Oyster. Philosophical Transactions of The Royal Society A, 370:345–364, 2012.
[33] Y. Yu, R. Ajit Shenoi, H. Zhu, and L. Xia. Using wavelet transforms to analyze nonlinear ship rolling and heave-roll coupling. Ocean Engineering, 33(7):912–926, 2006.

CHAPTER 7. SYNTHESIS
R: A Language and Environment for Statistical Computing . R Foundation for Statistical Computing, Vienna, Austria, 2008. [28 ] J. Ringwood. The dynamics of wave energy. In Proceeding of Irish Signal and Systems Confer- ence , June 2006. [29 ] A. N. Robertson, K. C. Park, and K. F. Alvin. Extraction of impulse response data via wavelet transform for structural system identification. Journal of Vibration and Acoustics , 120(1):252– 260, 1998. [30 ] A. N. Robertson, K. C. Park, and K. F.Alvin. Identification of structural dynamics models using wavelet-generated impulse response data. Journal of Vibration and Acoustics , 120(1):261– 266, 1998. [31 ] W. J. Staszewski. Analysis of non-linear systems using wavelets. Proceedings of the Institution of Mechanical Engineers – Part C, Journal of Mechanical Engineering Science , 214(11):1339 – 1353, 2000. [32 ] T. Whittaker and M. Folley. Nearshore oscillating wave surge converters and the development of oyster. Philosophical Transactions of The Royal Society A , 370:345–364, 2012. [33 ] Y. Yu, R. Ajit Shenoi, H. Zhu, and L. Xia. Using wavelet transforms to analyze nonlinear ship rolling and heave-roll coupling. Ocean Engineering , 33(7):912–926, 2006. CHAPTER 7. SYNTHESIS 124

7.2 Comparison of Heaving Buoy and Oscillating Flap Wave Energy Converters

Paper highlights:

• Two types of wave energy converters are compared in this paper. The heaving buoy wave energy converter (HBWEC) and the oscillating flap wave energy converter (OFWEC) are selected, where the former is modelled as a linear dynamic system and the latter as a nonlinear dynamic system.

• Discrete time simulation is used to estimate the responses of both wave energy converters. The responses were then compared based on the spectral analysis method.

• Given the limitations of the spectral analysis method, Bendat's nonlinear system identification method has been used to analyze the nonlinear OFWEC dynamic system.

The paper Comparison of Heaving Buoy and Oscillating Flap Wave Energy Converters discussed the dynamic systems of wave energy converters (WECs). The heaving buoy wave energy converter (HBWEC) and the oscillating flap wave energy converter (OFWEC) were selected for this study and compared. The HBWEC is modelled as a single degree of freedom linear dynamic system. The OFWEC can also be modelled as a linear dynamic system; however, a more realistic model is nonlinear. Hence, by including the drag force, known as the Morison term, the OFWEC model becomes nonlinear.

Comparison between the two WECs was made by comparing their responses when the same force (input) is applied to both models. The responses are simulated by discrete time simulation. For the linear systems, spectral analysis was conducted to identify the system dynamics. Bendat's nonlinear system identification technique is used to analyze the nonlinear OFWEC dynamic system, since spectral analysis is not capable of studying nonlinear dynamic systems.

Bakar, M.A.A., Green, D.A., Metcalfe, A.V. & Najafian, G. (2013). Comparison of heaving buoy and oscillating flap wave energy converters. In AIP Conference Proceedings: Proceedings of the 20th National Symposium on Mathematical Sciences, 1522, 86-101.

NOTE:

This publication is included on pages 128 - 143 in the print

copy of the thesis held in the University of Adelaide Library.

It is also available online to authorised users at:

http://dx.doi.org/10.1063/1.4801109


7.3 Unscented Kalman Filtering for Wave Energy Converters System Identification

Paper highlights:

• The unscented Kalman filter is used to estimate the system's dynamic states and the nonlinear parameters.

• The model of the oscillating flap wave energy converter (OFWEC) dynamic system has been used as a case study.

• The estimation was done by using the wave elevation (input) and the vertical displacement (output) of the OFWEC.

The paper Unscented Kalman Filtering for Wave Energy Converters System Identification discussed the unscented Kalman filter (UKF) for system identification of nonlinear dynamic systems. The UKF was used to estimate the dynamic states of the system and also to estimate the parameters of the dynamic system. An OFWEC is selected as a case study, where it is modelled as a single degree of freedom nonlinear system. The model is similar to the linear system with an additional nonlinear term, known as the Morison term, which accounts for the drag on the device under the water. It is shown that the UKF performs well in estimating the dynamic states of the OFWEC system and in estimating the unknown parameters of the system.

Bakar, M.A.A., Green, D.A., Metcalfe, A.V. & Ariff, N.M. (2014). Unscented Kalman filtering for wave energy converters system identification. In AIP Conference Proceedings: Proceedings of the 3rd International Conference on Mathematical Sciences, 1602, 304-310.

NOTE:

This publication is included on pages 147 - 153 in the print

copy of the thesis held in the University of Adelaide Library.

It is also available online to authorised users at:

http://dx.doi.org/10.1063/1.4882503


7.4 Comparison of Autoregressive Spectral and Wavelet Characterizations of Nonlinear Oscillators

Paper highlights:

• A wavelet ridge is used to estimate the instantaneous envelope and the instantaneous frequency of nonlinear dynamic systems.

• An autoregressive with exogenous input (ARX) model has been used to propose a model for the nonlinear dynamic systems.

• The probing technique is used to find the generalized frequency response functions of the nonlinear dynamic systems based on the ARX model.

• The wavelet and ARX methods were compared on weakly nonlinear oscillators, namely the Duffing and the Van der Pol oscillators.

• The model of the oscillating flap wave energy converter (OFWEC) dynamic system has been used as a case study.

The paper Comparison of Autoregressive Spectral and Wavelet Characterizations of Nonlinear Oscillators presented the wavelet based technique for system identification of nonlinear dynamic systems. The wavelet ridge is computed from the wavelet transform of the impulse response of the system. By using the wavelet ridge, the instantaneous envelope and the instantaneous frequency of the nonlinear dynamic system can be estimated. Through the wavelet estimated backbone curve of the system, the type of system nonlinearity can be identified. This article also discussed the autoregressive with exogenous input (ARX) model. Through this technique, a model for the nonlinear dynamic system can be proposed. The probing technique is then applied to the model to compute the generalized frequency response functions of the nonlinear dynamic system. Three weakly nonlinear oscillators have been chosen for this study: the Duffing oscillator, the Van der Pol oscillator and the oscillating flap wave energy converter system. The identification techniques discussed were compared on these nonlinear systems. It is shown that both the wavelet and the ARX methods are capable of identifying the system's parameters from an impulse response with reasonable accuracy. The ARX method is considerably easier to implement compared to the wavelet technique. The probing method appears to give a useful frequency domain characterization of the systems through the frequency response functions.

East Asian Journal on Applied Mathematics Vol. xx, No. x, pp. 1-27 doi: 10.4208/eajam xxx 200x

Comparison of Autoregressive Spectral and Wavelet Characterizations of Nonlinear Oscillators

M A A Bakar1,2,*, N M Ariff1, D A Green2 and A V Metcalfe2
1 School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, UKM Bangi, Selangor, Malaysia.
2 School of Mathematical Sciences, University of Adelaide, South Australia, Australia.

Abstract. In this study, the wavelet transform has been used to identify the nonlinearity of a system from the system impulse response function. Wavelet estimates of the instantaneous envelope and instantaneous frequency are used to plot the system backbone curve. These wavelet estimates are then used to approximate the values of the system's parameters. Another approach, based on the autoregressive with exogenous input (ARX) model, is also considered in this study. The probing technique is then applied to the ARX model to find the frequency response functions of the nonlinear systems. Two weakly nonlinear oscillators, the Duffing and the Van der Pol oscillators, have been chosen to compare both the wavelet and ARX methods. A case study based on a model of an oscillating flap wave energy converter (OFWEC) is also discussed.

AMS subject classifications: 65M10, 78A48

Key words: wavelet transform, impulse response, wavelet ridge, autoregressive with exogenous input, probing technique, wave energy converter

*Corresponding author. Email address: [email protected] (M A A Bakar)


Figure 1: Oyster, oscillating flap WEC [7]

1. Introduction

Wavelet analysis has been proposed as a strategy for both identifying and estimating the parameters of non-linear oscillatory systems. The usual method is to estimate the wavelet ridge and the wavelet backbone from the impulse response (IRF) [1-3]. The wavelet backbone is a plot of instantaneous frequency against amplitude, and is a straight line for a single degree of freedom linear oscillator because the instantaneous frequency is the natural frequency of vibration and does not vary with amplitude. The main limitation of this strategy is that obtaining an impulse response may not be feasible outside of laboratory conditions. A segment averaging technique known as the random decrement technique (RDT) has been proposed as a device for obtaining an estimate of an impulse response from a record of the response to arbitrary forcing [4, 5]. This works for linear systems, but is at best an approximation for non-linear systems precisely because the system is non-linear. In particular, segment averaging will not work if the system response is chaotic. The aim of this paper is to investigate how a wavelet analysis compares with an analysis based on fitting non-linear difference equations and so estimating bi-spectra and tri-spectra for three theoretical types of non-linear oscillators. These are the Duffing system, the Van der Pol system, and a system that allows for both inertial forces and drag forces on cylinders subject to wave forces. We refer to the last mentioned system as a Morison system because it is typically modelled with Morison's equation [6-8]. There are many applications of Duffing system models, including in electronics, and of Van der Pol system models, including in cell biology. We place the Morison equation in the context of an oscillating flap wave energy converter (OFWEC). The paper is arranged as follows. Section two gives the general theory of the wavelet identification technique for nonlinear oscillators.
The third section discusses the autoregressive with exogenous input model with the spectral identification technique. This is followed by a comparison of the two techniques in the fourth section. The case study is discussed in section five, followed by the conclusions.

2. Nonlinear system identification by Wavelet Ridge

2.1. Wavelet Transform

The continuous wavelet transform (CWT) of a signal x(t) at time b and scale a can be defined as

$$T_x(a,b) = \frac{1}{a} \int_{-\infty}^{\infty} x(t)\, \psi^*\!\left(\frac{t-b}{a}\right) dt \qquad (2.1)$$

where

$$\psi_{a,b}(t) = \frac{1}{a}\, \psi\!\left(\frac{t-b}{a}\right) \qquad (2.2)$$

is the wavelet function. The T_x(a,b) are known as the wavelet coefficients, which provide information about the signal x(t) at scale a and around time b. A wavelet function must satisfy the conditions:

$$\int_{-\infty}^{\infty} \psi(t)\, dt = 0 \qquad \text{and} \qquad \int_{-\infty}^{\infty} \psi^2(t)\, dt = 1 \qquad (2.3)$$

Wavelet transforms preserve the energy of the process and it can be shown that

$$\int_{-\infty}^{\infty} x^2(t)\, dt = \frac{1}{C_\psi} \int_0^{\infty}\!\int_{-\infty}^{\infty} T_x^2(a,b)\, db\, \frac{da}{a}, \qquad (2.4)$$

where

$$C_\psi = \int_0^{\infty} \frac{|\Psi(\omega)|^2}{\omega}\, d\omega, \qquad (2.5)$$

and Ψ(ω) is the Fourier transform of ψ(·), provided 0 < C_ψ < ∞ (the wavelet admissibility condition). The inverse CWT is defined as

$$x(t) = \frac{1}{C_\psi} \int_0^{\infty}\!\int_{-\infty}^{\infty} T_x(a,b)\, \psi\!\left(\frac{t-b}{a}\right) db\, \frac{da}{a^2}. \qquad (2.6)$$

Here we use the Morlet wavelet, which is defined as

$$\psi(x) = \exp\!\left(-\frac{x^2}{2\sigma^2}\right) \exp(i\omega_0 x), \qquad (2.7)$$

where σ and ω₀ are parameters that control the size of the wavelet envelope and the oscillations, respectively. The Morlet wavelet is a modulated Gaussian function and its integral is approximately zero for σω₀ > 5. The Fourier transform of the Morlet wavelet is

$$\Psi(\omega) = \exp\!\left(-\frac{(\omega - 2\pi)^2}{2}\right), \qquad (2.8)$$

which provides good localization in the frequency domain [2, 9].
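The near-zero-mean claim above can be checked numerically. The following sketch (in Python for illustration; the paper's own computations use R) evaluates the integral of the Morlet wavelet with σ = 1 and ω₀ = 2π, so that σω₀ ≈ 6.3 > 5:

```python
import numpy as np

# Morlet wavelet with sigma = 1 and omega0 = 2*pi, as in Eq. (2.7).
def morlet(t, sigma=1.0, omega0=2*np.pi):
    return np.exp(-t**2 / (2*sigma**2)) * np.exp(1j*omega0*t)

# Riemann-sum approximation of the integral of psi over a wide support.
# For sigma*omega0 > 5 the result is negligibly small, so the zero-mean
# admissibility condition in Eq. (2.3) holds to high accuracy.
dt = 0.001
t = np.arange(-20, 20, dt)
integral = np.sum(morlet(t)) * dt
print(abs(integral))  # of order 1e-8, effectively zero
```

The exact value is √(2π)·exp(−2π²), which is why the condition σω₀ > 5 is usually treated as "approximately admissible" rather than exactly so.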

In the CWT, the scale a has the dimension of seconds. However, the frequency corresponding to scale a is given by the relationship

$$f_a = \frac{f_c}{a \cdot \Delta}, \qquad (2.9)$$

where f_a is the frequency (in Hz) related to scale a, f_c = ω₀/2π is the wavelet central frequency (in Hz), a is the scale and ∆ is the signal sampling interval (in s) [2].
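Eq. (2.9) is a one-line conversion; a minimal sketch (Python for illustration, with an assumed helper name `scale_to_frequency`):

```python
def scale_to_frequency(a, fc, dt):
    """Frequency (Hz) corresponding to wavelet scale a, per Eq. (2.9):
    f_a = f_c / (a * dt), where fc is the wavelet centre frequency (Hz)
    and dt is the sampling interval (s)."""
    return fc / (a * dt)

# A Morlet wavelet with omega0 = 2*pi has centre frequency fc = 1 Hz,
# so with a 0.1 s sampling interval, scale a = 10 maps to 1 Hz.
print(scale_to_frequency(10, fc=1.0, dt=0.1))  # 1.0
```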

2.2. Instantaneous Modal Parameters

Feldman [10] has shown that the impulse response of oscillating system, yptq, can be converted to its analytic signal form by applying the Hilbert transform. The instantaneous modal parameters, the instantaneous envelope and instantaneous phase, can be extracted from the analytic signal. For linear system, the instantaneous natural frequency and instan- taneous damping coefficient, which determine the phase, are constant over time. However, if the system has nonlinear stiffness, the natural frequency will vary over time since it de- pends on the amplitude of vibrations. By using the backbone curve, which is the plot of the signal instantaneous envelope on the instantaneous frequency, the nonlinearity can be identified. Nonlinearity in the damping can be also identified from the instantaneous en- velope. Assume a general autonomous SDOF weakly nonlinear oscillator

: p 9 q p q  yt D y S y 0, (2.10) where Dpy9 q and Spyq is the dissipative and restoring force function, respectively. The instantaneous envelope, Aptq, and instantaneous frequency, ωptq, can be approximated by using the Krylov and Bogoliubov method [11]. Different types of ninlinear oscillator have unique forms for their modal parameters [10].

2.3. Wavelet Ridge

Carmona et al. introduced a technique to detect ridges in the modulus of the CWT. The wavelet ridge is the localization of the signal in the time-frequency domain, which is important in nonlinearity detection and useful in analyzing noisy signals [9, 12]. Staszewski estimated the damping ratio of the impulse response for linear multi-degree of freedom systems by using wavelet ridges and skeletons [13]. He then applied this technique to the identification of nonlinear MDOF systems [2, 14]. Londoño et al. [3] presented a new technique to identify the backbone curves of a nonlinear system from an impulse response. Let the signal, x(t), be written in the form

$$x(t) = A(t) \cos\phi(t), \qquad (2.11)$$

where A(t) and φ(t) are the amplitude and instantaneous phase, respectively. It is convenient to use progressive wavelets, wavelets whose Fourier transform vanishes at negative frequencies, which include the Morlet wavelet. The wavelet coefficients of x(t) can be succinctly written in the notation

$$T_x(a,b) = \langle x, \psi_{b,a}\rangle = \frac{1}{2}\langle Z_x, \psi_{b,a}\rangle, \qquad (2.12)$$

where

$$Z_x(t) = x(t) + \frac{i}{\pi}\, \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{x(s)}{t-s}\, ds \qquad (2.13)$$

is the analytic signal of x(t) and PV denotes the principal value integral [9, 12]. If the amplitude, A(t), and the instantaneous frequency, φ'(t) = dφ(t)/dt, vary slowly, then

$$Z_x(t) \approx A(t)\, e^{i\phi(t)}. \qquad (2.14)$$

Assume that the Fourier transform of the wavelet function, Ψ(ξ), is peaked near a frequency ξ = ω₀; then the CWT may be approximated as

$$T_x(a,b) \approx \frac{1}{2} A(b)\, e^{i\phi(b)}\, \Psi^*\!\big(a\phi'(b)\big) + O\!\left(\frac{|A'|}{|A|},\, \frac{|\phi''|}{|\phi'|^2}\right). \qquad (2.15)$$

The modulus of the wavelet coefficients, |T_x|, is essentially maximum in the neighbourhood of a curve

$$a = r(b), \qquad (2.16)$$

known as the wavelet ridge. Since the modulus of this approximation can be approximated by

$$|T_x(a,b)| \approx \frac{1}{2} A(b)\, \big|\Psi^*\!\big(a\phi'(b)\big)\big|, \qquad (2.17)$$

we can find the envelope of the signal from its wavelet skeleton. The wavelet skeleton is defined as the CWT of the signal restricted to its wavelet ridges, and behaves as A(t) exp[iφ(t)] [2, 9, 15]. The relationship between the wavelet ridge and the instantaneous frequency of the signal is given by

$$r(b) = \frac{\phi_\psi'(0)}{\phi'(b)}, \qquad (2.18)$$

where φ_ψ is the instantaneous phase of the wavelet. Based on this, the simplest method to obtain the wavelet ridge is to identify the global maxima of the CWT modulus. If there is more than one ridge, we can search for the local maxima of the CWT modulus. However, it will be problematic to identify the local maxima that represent the true wavelet ridge if there is noise in the signal and if the frequencies of the ridges are close to one another. For a signal with relatively low noise, we can use a differentiation technique to identify the wavelet ridge. Using Newton's method, for each time b we can find the ridge r(b) by solving

$$\frac{\partial}{\partial a}\, |T_x(a,b)| = 0, \qquad (2.19)$$

where r(b − 1) is taken as the initial guess for the ridge at time b, r₀(b). The drawbacks of this technique are that it is not suitable in the case of significant noise, and that the identified ridge may not be a smooth function. We can reconstruct the original signal (in the time domain) by using the wavelet skeleton [12]. The reconstruction of the original signal is done by computing

$$x_{rec}(t) = T_x\big(t, r(t)\big), \qquad (2.20)$$

for each ridge component and then summing these terms. Even though this technique is simple, it needs the values of the CWT on the ridge, which is computationally expensive because it still requires a large amount of computation if there are many ridges. However, it is still far more efficient than reconstructing the original signal from all the original wavelet coefficients.
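The simplest ridge rule described above, taking the per-time maximum of the CWT modulus over scales, can be sketched as follows. This is a crude direct-convolution CWT in Python (an illustrative stand-in for the Rwave `cwt` used in the case studies; `cwt_ridge` is an assumed helper name, not a library function). For a pure 1 Hz tone analysed with a Morlet wavelet of centre frequency 1 Hz, the ridge should sit at scale a = 1:

```python
import numpy as np

def morlet(u, omega0=2*np.pi):
    """Morlet wavelet (sigma = 1), Eq. (2.7)."""
    return np.exp(-u**2/2) * np.exp(1j*omega0*u)

def cwt_ridge(x, dt, scales):
    """Crude CWT by direct convolution with an L1-normalised Morlet,
    then the simplest ridge rule: the per-time argmax of |T(a,b)|."""
    mods = []
    for a in scales:
        m = np.arange(-int(4*a/dt), int(4*a/dt) + 1)  # ~8 scales of support
        h = morlet(m*dt/a) / a
        T = np.convolve(x, h, mode='same') * dt
        mods.append(np.abs(T))
    return scales[np.argmax(np.array(mods), axis=0)]

dt = 0.01
t = np.arange(0, 20, dt)
x = np.cos(2*np.pi*1.0*t)            # 1 Hz test tone
scales = np.linspace(0.5, 2.0, 31)   # for this Morlet, f = 1/a (Hz)
ridge = cwt_ridge(x, dt, scales)
print(np.median(ridge[500:-500]))    # interior ridge sits at a = 1
```

For a decaying nonlinear impulse response the same argmax traces out a time-varying scale, which Eq. (2.18) converts into the instantaneous frequency.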

2.4. System Identification Technique based on Wavelet Ridge

Let the impulse response of the oscillating system, y(t), be written as

$$y(t) = A(t) \exp\!\big(i\phi(t)\big). \qquad (2.21)$$

From the impulse response, the wavelet ridges can be identified from its CWT. The instantaneous envelope and the instantaneous frequency of the oscillating system can then be computed from the wavelet skeleton and wavelet ridge, respectively. Following Londoño et al. [3], the damping of the system can be estimated from the slope of the semi-logarithmic plot of the instantaneous envelope against time. The plot of the instantaneous frequency against the instantaneous envelope produces the system backbone curve. The system's nonlinearity can then be identified and categorized using the estimated backbone curve and the estimated damping ratios [2, 3]. Given that we know the type of system, we can use these instantaneous characteristics to estimate the system parameters (refer to Table 1 in [1]).

3. Autoregressive with exogenous input (ARX) model with spectral identification technique

3.1. Autoregressive with exogenous input (ARX) model

The relation between the input and output signals of a linear dynamic system can be modelled by an autoregressive with exogenous input (ARX) model. The ARX model can generally be defined as

$$y_t + a_1 y_{t-1} + a_2 y_{t-2} + \dots + a_p y_{t-p} = b_1 u_{t-1} + \dots + b_n u_{t-n} + v_t, \qquad (3.1)$$

where y_t and u_t are the output and input signals, respectively, and v_t is white noise. In operator form, the ARX model can be written as

$$A(z)\, y(t) = B(z)\, u(t) + v(t), \qquad (3.2)$$

where

$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_p z^{-p},$$
$$B(z) = b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + \dots + b_n z^{-n}.$$

The ordinary least squares method can be used to estimate the parameters of the ARX model, given that the noise is zero-mean white noise. For a nonlinear system, the ARX model can be extended to include nonlinear terms and nonlinear relations between the past outputs and past inputs. This model, known as the nonlinear ARX model, can be written in the general form

$$y_t = f\big(y_{t-1}, \dots, y_{t-p},\, u_{t-1}, \dots, u_{t-n}\big) + v_t, \qquad (3.3)$$

where f is a nonlinear function. In practice, the nonlinearity has to be specified to some extent. We consider only squared and cubed lagged response terms and their interactions, which is adequate for the three nonlinear oscillators considered in this study.
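The ordinary least squares fit amounts to a linear regression of y_t on its own lags and lagged inputs. A minimal sketch (Python with NumPy; the coefficients 1.5, -0.7 and 0.5 are illustrative, not those of any model in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
u = rng.standard_normal(n)
y = np.zeros(n)
# Simulate y_t = 1.5 y_{t-1} - 0.7 y_{t-2} + 0.5 u_{t-1} + v_t
# with small white noise v_t (stable AR part: pole modulus sqrt(0.7)).
for t in range(2, n):
    y[t] = 1.5*y[t-1] - 0.7*y[t-2] + 0.5*u[t-1] + 0.01*rng.standard_normal()

# OLS on the regressor matrix [y_{t-1}, y_{t-2}, u_{t-1}].
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(np.round(theta, 2))  # close to [1.5, -0.7, 0.5]
```

The nonlinear ARX fit of Eq. (3.3) works the same way: the squared and cubed lagged responses are simply appended as extra columns of the regressor matrix, and the regression stays linear in the parameters.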

3.2. Probing method

By using a method known as the probing method, we can obtain analytical expressions for the frequency response functions of a nonlinear dynamic system. The method was introduced by Billings and Tsang [16], and an application of the probing method to a nonlinear dynamic model of ship response is described by Metcalfe et al. [17]. The first step in computing the nonlinear frequency response is to let the input, u(t), be a sum of K exponentials

$$u(t) = \sum_{k=1}^{K} A_k\, e^{i\omega_k t}, \qquad (3.4)$$

where the A_k are amplitudes. Meanwhile, the output can be expressed as

$$y(t) = \sum_{n=1}^{\infty} y_n(t), \qquad (3.5)$$

where the nth order output is

$$y_n(t) = \sum_{k_1=1}^{K} \cdots \sum_{k_n=1}^{K} A_{k_1} \cdots A_{k_n}\, H_n\big(\omega_{k_1}, \dots, \omega_{k_n}\big)\, e^{i(\omega_{k_1} + \cdots + \omega_{k_n})t}, \qquad (3.6)$$

where H_n is the nonlinear transfer function of order n. By setting K = n and A_k = 1 for all k = 1, 2, ..., n, Eqns. (3.4) and (3.6) can be written as

$$u(t) = \sum_{k=1}^{n} e^{i\omega_k t}, \qquad (3.7)$$

and

$$y_n(t) = \sum_{k_1=1}^{n} \cdots \sum_{k_n=1}^{n} H_n\big(\omega_{k_1}, \dots, \omega_{k_n}\big)\, e^{i(\omega_{k_1} + \cdots + \omega_{k_n})t}. \qquad (3.8)$$

4. Comparisons of the wavelet and the ARX technique

We compare the wavelet and the ARX methods for identifying the parameters of the Duffing and Van der Pol oscillators from the impulse response. We demonstrate that the ARX method can also be used with a wave tank input [18], provided the input and the output signals are measured. Although the wavelet method requires an impulse response, it may be possible to estimate this from a system response to an arbitrary input by the random decrement technique (RDT) [4, 5]. The RDT averages the responses of the system following a trigger event, such as the displacement being within a small distance of some specified value. It is necessary to take account of whether the initial velocity is positive or negative by noting whether the next displacement is greater than or less than the specified value. The RDT is theoretically justified for linear systems, but is at best an approximation for nonlinear systems precisely because the system is nonlinear. In the cases of the Duffing and the Van der Pol oscillators, the RDT did not converge, presumably because the forced response is chaotic [19].
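The segment-averaging idea behind the RDT can be sketched as follows. This is a deliberately minimal Python illustration on a noise-driven linear (AR(2)) stand-in, the case where the RDT is theoretically justified; the trigger rule here keeps only upward crossings, a simplification of the velocity-sign bookkeeping described above:

```python
import numpy as np

def random_decrement(y, trigger, length):
    """Average segments of y starting at upward crossings of the
    trigger level: a minimal random decrement sketch."""
    segments = [y[i:i+length]
                for i in range(1, len(y) - length)
                if y[i-1] < trigger <= y[i]]
    return np.mean(segments, axis=0)

# Noise-driven, lightly damped AR(2) as a stand-in for a randomly
# forced linear oscillator (coefficients are illustrative).
rng = np.random.default_rng(1)
y = np.zeros(20000)
for t in range(2, len(y)):
    y[t] = 1.8*y[t-1] - 0.97*y[t-2] + rng.standard_normal()

rd = random_decrement(y, trigger=y.std(), length=200)
print(rd.shape)  # an estimate of the free-decay (impulse-response) shape
```

For the chaotic forced responses of the Duffing and Van der Pol oscillators this average fails to settle, which is the non-convergence reported above.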

4.1. Wavelet technique

4.1.1. Duffing oscillator

The Duffing system has been selected as an example of a system with nonlinear stiffness. The SDOF Duffing system can be modelled as

$$m\ddot{y}_t + c\dot{y}_t + k y_t + k_3 y_t^3 = x_t, \qquad (4.1)$$

where y_t is the response (output), x_t is the force (input), m is the inertial mass, c is the damping coefficient, k is the stiffness coefficient and k₃ is the nonlinear feedback cubic stiffness coefficient. For this example, we have selected m = 1, c = 0.005, k = 1 and k₃ = 100. The impulse response of this Duffing system (Fig. 2) has been simulated using the Runge-Kutta method.
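A Runge-Kutta simulation of this impulse response can be sketched as follows (Python with SciPy rather than the paper's R code; the impulse on a unit mass is replaced by the equivalent initial velocity 1/m):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k, k3 = 1.0, 0.005, 1.0, 100.0   # parameters from Eq. (4.1)

def duffing(t, s):
    """State form of Eq. (4.1) with zero forcing (free decay)."""
    y, v = s
    return [v, -(c*v + k*y + k3*y**3) / m]

# A unit impulse applied to a mass m at rest gives initial velocity 1/m.
sol = solve_ivp(duffing, [0, 500], [0.0, 1.0/m],
                max_step=0.05, dense_output=True)
irf = sol.sol(np.arange(0, 500, 0.1))[0]
print(irf.shape)  # (5000,)
```

The hardening cubic term makes the early, large-amplitude cycles faster than the late ones, which is exactly the frequency drift the wavelet analysis below is designed to expose.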

Figure 2: Time series of the impulse response (IRF) for the Duffing system. (a) First 500 seconds; (b) seconds 100 to 150.

Figure 3: Spectral analysis and CWT of the impulse response (IRF) for the Duffing system. (a) Fourier transform of the impulse response; (b) wavelet transform of the impulse response.

Figure 4: Wavelet estimation for the Duffing system with impulse input. (a) Instantaneous frequency of the impulse response (theoretical, WT and filtered HT estimates); (b) actual impulse response (full line) and instantaneous envelope (dashed line).

It can be seen from Fig. 3 that the CWT (calculated from the definition using numerical integration with the R function cwt in the Rwave package [20]) of the impulse response can capture the change of the impulse response's frequency over time, while the spectral analysis only averages the frequencies. The estimated instantaneous frequency from the wavelet ridge is plotted in Fig. 4(a) together with that estimated with the Hilbert transform and the theoretical instantaneous frequency. The instantaneous envelope for this Duffing system fits the impulse response's peaks, as shown in Fig. 4(b); similarly for the Hilbert transform estimated instantaneous envelope. The reconstructed impulse response from the wavelet skeleton is similar to the original impulse response (Fig. 5).

Figure 5: Actual (full line) and estimated (dashed line) impulse response (IRF) for the Duffing system, seconds 100 to 150.

We used the slope of the tangent of the semi-logarithmic plot of the instantaneous envelope against time in Fig. 6 to estimate the damping coefficient of this Duffing system. The estimate of the damping coefficient is ĉ = 0.0046, while the theoretical value from the model is 0.005. We estimated both k and k₃ from the estimated instantaneous frequency and envelope of the impulse response by fitting them to the approximate instantaneous frequency function from Table 1 in Spina et al. [1]. The estimated ratio, k₃/k, is 89.056, while the ratio from the model is equal to 100.

Figure 6: Semi-logarithmic plot of the instantaneous envelope against time for the Duffing system.

(The Hilbert transform estimates were calculated with the R function HilbertTransform in the hht package [20].)

Figure 7: Theoretical and estimated (WT and HT) backbone curves for the Duffing system.

It can be seen that for a system with nonlinear stiffness, the wavelet estimated backbone in Fig. 7 is not a straight line. The same is true of the theoretical backbone and the Hilbert estimated backbone. This shows that the frequency does depend on the response amplitude.

4.1.2. Van der Pol oscillator

The Van der Pol system has been selected as an example of a system with nonlinear damping. The SDOF Van der Pol model can be written as

$$\ddot{y}_t + \mu\big(y_t^2 - 1\big)\dot{y}_t + y_t = x_t, \qquad (4.2)$$

where y_t is the response (output), x_t is the force (input) and μ is the nonlinear damping coefficient. For this example, we have selected μ = 0.05. As for the previous oscillator, the impulse response of this Van der Pol system (Fig. 8) has been simulated using the Runge-Kutta method.

Figure 8: The impulse response (IRF) time series for the Van der Pol system. (a) First 500 seconds; (b) seconds 100 to 150.

The wavelet estimated instantaneous frequency in Fig. 10(a) shows that the frequency of this Van der Pol system does not change over time. The wavelet estimated instantaneous envelope does capture the peak of the impulse response (Fig. 10(b)) and the reconstructed impulse response, based on the wavelet skeleton, does fit the original impulse response (Fig. 11).

Figure 9: Spectral analysis and CWT of the impulse response (IRF) for the Van der Pol system. (a) Fourier transform of the impulse response; (b) wavelet transform of the impulse response.

Figure 10: Wavelet estimation for the Van der Pol system with impulse input. (a) Instantaneous frequency of the impulse response (theoretical, WT and filtered HT estimates); (b) actual impulse response (full line) and instantaneous envelope (dashed line).

Figure 11: Actual (full line) and estimated (dashed line) impulse response (IRF) for the Van der Pol system, seconds 100 to 150.

For this Van der Pol system, the damping varies over time. The damping is geometrically equivalent to the slope of the tangent of the curve in the semi-logarithmic plot of the instantaneous envelope against time, as in Fig. 12, and it can be seen to increase to a threshold value over time. Based on the wavelet estimated backbone curve of this Van der Pol system (Fig. 13), we can say that the instantaneous frequency does not depend on the response amplitude.

Figure 12: Semi-logarithmic plot of the instantaneous envelope against time for the Van der Pol system.

We used the Nelder-Mead method to estimate the nonlinear damping coefficient, µ, from the estimated envelopes of the impulse response. The estimate, µ̂ = 0.0499, is close to the theoretical value of 0.05 from the Van der Pol model.
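A minimal sketch of this fit, assuming the first-order averaging approximation for the Van der Pol envelope (which grows towards the limit-cycle amplitude of 2) and SciPy's Nelder-Mead implementation; the synthetic envelope, initial amplitude and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def vdp_envelope(t, mu, a0):
    # First-order averaging approximation for the Van der Pol envelope:
    # a'(t) = (mu/2) a (1 - a^2/4), with a(0) = a0 and limit amplitude 2.
    return a0 * np.exp(mu * t / 2) / np.sqrt(1 + a0**2 * (np.exp(mu * t) - 1) / 4)

t = np.arange(0.0, 200.0, 0.1)
env_obs = vdp_envelope(t, 0.05, 0.1)      # stand-in for the wavelet envelope

def sse(params):
    mu, a0 = params
    return np.sum((vdp_envelope(t, mu, a0) - env_obs) ** 2)

res = minimize(sse, x0=[0.1, 0.2], method="Nelder-Mead")
mu_hat, a0_hat = res.x
print(round(mu_hat, 4), round(a0_hat, 4))
```

In practice env_obs would be the wavelet-estimated envelope, and the recovered µ̂ is read off from the fitted parameters.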

Figure 13: Theoretical and estimated (WT and HT) backbone curves for the Van der Pol system (instantaneous envelope in m against instantaneous frequency in Hz).

4.2. ARX with probing technique

4.2.1. Duffing oscillator

First, we consider the case of an impulse input signal. The form of an ARX model for the Duffing oscillator is found by substituting central differences into the differential equation in Eqn. 4.1. The fitted ARX model is given as

y_t = 1.9895 y_{t-1} - 0.9995 y_{t-2} - 0.9904 y_{t-1}^3 - 0.0028 y_{t-2}^3 + 0.00001 u_{t-1}.   (4.3)
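The fitting step itself is ordinary least squares on lagged regressors. A sketch, using the coefficients as reconstructed from Eqn 4.3 to generate synthetic data (the input scaling and random seed are arbitrary assumptions made so the block is self-contained):

```python
import numpy as np

# Simulate data from the Duffing ARX form (cf. Eqn 4.3) and re-estimate
# the coefficients by ordinary least squares (numpy lstsq).
rng = np.random.default_rng(1)
n = 5000
u = 10.0 * rng.normal(size=n)              # broadband probing input
coef_true = np.array([1.9895, -0.9995, -0.9904, -0.0028, 0.00001])
y = np.zeros(n)
for t in range(2, n):
    regressors = np.array([y[t-1], y[t-2], y[t-1]**3, y[t-2]**3, u[t-1]])
    y[t] = coef_true @ regressors

# Regressor matrix: lagged responses, lagged cubes and lagged input
X = np.column_stack([y[1:-1], y[:-2], y[1:-1]**3, y[:-2]**3, u[1:-1]])
coef_hat, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(np.round(coef_hat, 4))
```

With a recorded input-output pair the same regression recovers the ARX coefficients directly.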

By using the probing technique, the first order frequency response function is given as

H_1(ω) = 0.00001 e^{-iω} / (1 - 1.9895 e^{-iω} + 0.9995 e^{-i2ω}),   (4.4)

H_3(ω_1, ω_2, ω_3) = H_1(ω_1) H_1(ω_2) H_1(ω_3) [-0.9904 e^{-i(ω_1+ω_2+ω_3)} - 0.0028 e^{-i2(ω_1+ω_2+ω_3)}] / (1 - 1.9895 e^{-i(ω_1+ω_2+ω_3)} + 0.9995 e^{-i2(ω_1+ω_2+ω_3)}).   (4.5)
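As a numerical check on the first-order function, H_1 can be evaluated on a frequency grid; its gain peaks near ω ≈ 0.1 rad/sample, consistent with the frequency range plotted in Figs. 14(b)-(d). A sketch:

```python
import numpy as np

# Evaluate the first-order FRF of the fitted Duffing ARX model
# (cf. Eqn 4.4) on a frequency grid and locate its resonance peak.
def H1(w, b=0.00001, a1=1.9895, a2=-0.9995):
    z = np.exp(-1j * w)
    return b * z / (1.0 - a1 * z - a2 * z**2)   # 1 - 1.9895 z + 0.9995 z^2

w = np.linspace(0.01, 1.0, 2000)    # rad/sample
gain = np.abs(H1(w))
w_peak = w[gain.argmax()]
print(round(float(w_peak), 3))
```

The higher-order functions from the probing method can be evaluated on a grid of (ω_1, ω_2, ω_3) in the same way to reproduce the contour plots below.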

Figure 14: Frequency response functions for the Duffing oscillator, where the input is an impulse signal: (a) |H_1(ω)|; (b) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = ω_2; (c) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = 0.1; (d) log|H_3(ω_1, ω_2, ω_3)| where ω_1 + ω_2 + ω_3 = 0.1.

For the case of wave tank input, the fitted ARX model for the Duffing oscillator is given by

y_t = 1.9895 y_{t-1} - 0.9995 y_{t-2} - 0.9975 y_{t-1}^3 + 0.0010 u_{t-1}.   (4.6)

Notice that in this case the variable y_{t-2}^3 was omitted because its coefficient is very small, while the other coefficients are almost the same except for the coefficient of the input variable. The first order frequency response function is

H_1(ω) = 0.0010 e^{-iω} / (1 - 1.9895 e^{-iω} + 0.9995 e^{-i2ω}),   (4.7)

and the third order frequency response function is

H_3(ω_1, ω_2, ω_3) = -0.9975 H_1(ω_1) H_1(ω_2) H_1(ω_3) e^{-i(ω_1+ω_2+ω_3)} / (1 - 1.9895 e^{-i(ω_1+ω_2+ω_3)} + 0.9995 e^{-i2(ω_1+ω_2+ω_3)}).   (4.8)

Figure 15: Frequency response functions for the Duffing oscillator, where the input is a wave tank signal: (a) |H_1(ω)|; (b) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = ω_2; (c) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = 0.1; (d) log|H_3(ω_1, ω_2, ω_3)| where ω_1 + ω_2 + ω_3 = 0.1.

4.2.2. Van der Pol oscillator

For the Van der Pol oscillator, the form of an ARX model is obtained by substituting forward differences into the differential equation in Eqn. 4.1. The fitted ARX model is

y_t = 1.9950 y_{t-1} - 1.0050 y_{t-2} + 0.0050 y_{t-1} y_{t-2}^2 - 0.0033 y_{t-1}^3 - 0.0017 y_{t-2}^3 - 0.0000007 u_{t-1}.   (4.9)

The first order frequency response function is

H_1(ω) = -0.0000007 e^{-iω} / (1 - 1.9950 e^{-iω} + 1.0050 e^{-i2ω}),   (4.10)

and the third order frequency response function is

H_3(ω_1, ω_2, ω_3) = H_1(ω_1) H_1(ω_2) H_1(ω_3) {(0.0050/3)[e^{-i(ω_1+2ω_2+2ω_3)} + e^{-i(2ω_1+ω_2+2ω_3)} + e^{-i(2ω_1+2ω_2+ω_3)}] - 0.0033 e^{-i(ω_1+ω_2+ω_3)}} / (1 - 1.9950 e^{-i(ω_1+ω_2+ω_3)} + 1.0050 e^{-i2(ω_1+ω_2+ω_3)}).   (4.11)

Figure 16: Frequency response functions for the Van der Pol oscillator, where the input is an impulse signal: (a) |H_1(ω)|; (b) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = ω_2; (c) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = 0.1; (d) log|H_3(ω_1, ω_2, ω_3)| where ω_1 + ω_2 + ω_3 = 0.1.

For the case of wave tank input, the ARX model for the Van der Pol oscillator is

y_t = 1.9949 y_{t-1} - 1.0049 y_{t-2} + 0.0050 y_{t-1} y_{t-2}^2 - 0.0033 y_{t-1}^3 - 0.0017 y_{t-2}^3 - 0.0010 u_{t-1},   (4.12)

which is similar to the case of impulse input but with slightly different coefficients, especially the coefficient of the input variable. The first order frequency response function is

H_1(ω) = -0.0010 e^{-iω} / (1 - 1.9949 e^{-iω} + 1.0049 e^{-i2ω}),   (4.13)

and the third order frequency response function is

H_3(ω_1, ω_2, ω_3) = H_1(ω_1) H_1(ω_2) H_1(ω_3) {(0.0050/3)[e^{-i(ω_1+2ω_2+2ω_3)} + e^{-i(2ω_1+ω_2+2ω_3)} + e^{-i(2ω_1+2ω_2+ω_3)}] - 0.0033 e^{-i(ω_1+ω_2+ω_3)}} / (1 - 1.9949 e^{-i(ω_1+ω_2+ω_3)} + 1.0049 e^{-i2(ω_1+ω_2+ω_3)}).   (4.14)

Figure 17: Frequency response functions for the Van der Pol oscillator, where the input is a wave tank signal: (a) |H_1(ω)|; (b) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = ω_2; (c) log|H_3(ω_1, ω_2, ω_3)| where ω_3 = 0.1; (d) log|H_3(ω_1, ω_2, ω_3)| where ω_1 + ω_2 + ω_3 = 0.1.

For all the cases, the first order frequency response functions, H_1(ω), are as expected (Figs. 14(a), 15(a), 16(a) and 17(a)). The third order frequency response function, H_3(ω_1, ω_2, ω_3), is also known as the tri-spectrum. It can be observed that the tri-spectra where ω_3 = ω_2 (Figs. 14(b), 15(b), 16(b) and 17(b)) are qualitatively similar to the tri-spectra where ω_3 = 0.1 (Figs. 14(c), 15(c), 16(c) and 17(c)) for all the cases. Figures 14(d), 15(d), 16(d) and 17(d) show that the tri-spectra of the Duffing and the Van der Pol oscillators peak at (ω_1, ω_2, ω_3) = (0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.1). These figures provide some evidence of nonlinearity, as contours are plotted rather than three spikes rising from a plane at zero, as there would be for a linear system.

Figure 18: Simple cylinder oscillating flap WEC.

5. Case Study

In this case study, we discuss the application of these techniques to the identification of a nonlinear dynamic system: the oscillating flap wave energy converter (OFWEC). The OFWEC is an SDOF dynamic system, but with a surge mode of vibration in response to the wave surge force. According to Folley et al. [6], assuming the motion of the flap is sinusoidal, the nonlinear dynamic system for the OFWEC can be modelled as

(I + I_a) θ̈_t + k_p θ_t + (Λ + B_r) θ̇_t + B_v θ̇_t |θ̇_t| = F_t cos θ_t,   (5.1)

where F_t is the wave surge force at time t, θ_t is the angular rotation of the body at time t, I is the moment of inertia of the body, I_a is the added moment of inertia, k_p is the pitch stiffness of the body, Λ is the power take-off damping coefficient, B_r is the radiation damping coefficient and B_v is the viscous damping coefficient. Equation 5.1 is equivalent to the second order differential equation of a linear mass-spring system with an additional nonlinear term for the torque induced by vortex shedding, which is usually approximated by the instantaneous velocity squared. This instantaneous velocity squared is known as the drag term from the Morison equation [21], and its presence makes the model nonlinear. This nonlinear damping causes the amplitude of the responses to be slightly reduced [22]. For this study, we assumed that the flap is a cylinder of two metre diameter hinged to the seabed, as shown in Fig. 18. Assume that the values of the parameters in Eq. 5.1 are m = I + I_a = 1, c = Λ + B_r = 0.01, k = k_p = 2 and B_v = 0.1. The impulse response of the system, as in Fig. 19, is simulated by using the fourth-order Runge-Kutta method, with an initial displacement of 0.1. The impulse response is simulated for 500 seconds with a sampling interval of 0.1 second. First, we transformed the impulse response by using the CWT (Fig. 20).
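The simulation step can be sketched as follows, with F_t = 0 so the response is generated by the initial displacement alone; the parameter values are those assumed above:

```python
import numpy as np

# Fourth-order Runge-Kutta simulation of the unforced OFWEC model
# (Eqn 5.1 with F_t = 0), released from an initial displacement of 0.1:
# m*th'' + c*th' + k*th + Bv*th'*|th'| = 0.
m, c, k, Bv = 1.0, 0.01, 2.0, 0.1

def deriv(state):
    th, om = state
    acc = -(c * om + k * th + Bv * om * abs(om)) / m
    return np.array([om, acc])

dt, n = 0.1, 5000                      # 0.1 s sampling for 500 s
theta = np.zeros(n)
state = np.array([0.1, 0.0])           # initial displacement, zero velocity
for i in range(n):
    theta[i] = state[0]
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(round(float(np.abs(theta[:100]).max()), 3),
      round(float(np.abs(theta[-1000:]).max()), 4))
```

The recorded theta series is the impulse response that is then passed to the CWT.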

Figure 19: Time series of the impulse response (IRF) for the OFWEC system with impulse input: (a) first 500 seconds; (b) samples from 100 to 150 seconds.

Figure 20: CWT of the impulse response (IRF) for OFWEC system with impulse input

Figure 21: Wavelet estimation for the OFWEC system with impulse input: (a) instantaneous frequency of the impulse response; (b) actual impulse response (full line) and instantaneous envelope (dashed line).

Figure 22: Actual impulse response (full line) and estimated impulse response (dashed line) for the OFWEC system with impulse input, from 100 to 150 seconds.

The estimated instantaneous frequency from the wavelet ridge for the OFWEC system in Fig. 21(a) is constant over time. The estimated instantaneous envelope for this system also fits the peaks of the impulse response (Fig. 21(b)). From the wavelet skeleton, we reconstructed the impulse response; Fig. 22 shows that the estimate fits the actual impulse response of the OFWEC.

From Fig. 23, we can see that the damping of the OFWEC system depends on time, because the semi-logarithmic plot of the instantaneous envelope is not quite linear. Hence, the estimate of the damping at time t can only be found by calculating the slope of the tangent to the semi-logarithmic plot at time t. Based on the estimated backbone curve of this OFWEC system in Fig. 24, we can say that, for the OFWEC system, the response amplitude does not depend on the frequency.
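The tangent-slope calculation can be sketched with np.gradient; the time-varying decay rate below is a synthetic stand-in for the OFWEC envelope, not the paper's data:

```python
import numpy as np

# The damping estimate at time t is minus the slope of log(envelope):
# np.gradient gives the tangent slope of the semi-logarithmic plot
# pointwise. Sketch on a synthetic envelope with a growing decay rate.
dt = 0.1
t = np.arange(0.0, 500.0, dt)
rate = 0.005 + 0.005 * (1.0 - np.exp(-t / 100.0))   # time-varying decay
env = 0.1 * np.exp(-np.cumsum(rate) * dt)           # envelope with that rate

damping_est = -np.gradient(np.log(env), dt)         # -d(log env)/dt
print(round(float(damping_est[0]), 4), round(float(damping_est[-1]), 4))
```

Applied to the wavelet-estimated envelope, the same one-liner gives a pointwise damping estimate even when the semi-logarithmic plot is not straight.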

Figure 23: Semi-logarithmic plot of the instantaneous envelope against time for the OFWEC system with impulse input.

Figure 24: Estimated backbone curve for the OFWEC system with impulse input (instantaneous envelope in m against instantaneous frequency in Hz).

The RDT does capture the impulse response of this system when the wave tank input is used. From Fig. 25, we can see that the theoretical impulse response is quite similar to the one captured by the RDT.
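A minimal sketch of the random decrement technique, using a lightly damped AR(2) process as a stand-in for the randomly forced OFWEC response; the trigger level (one standard deviation) and segment length are common but arbitrary choices:

```python
import numpy as np

# Random decrement technique (RDT): average the segments that follow
# each upward crossing of a trigger level. For a randomly excited
# system the average approximates the free-decay (impulse) response.
rng = np.random.default_rng(0)
n, seg_len = 200_000, 500              # 50 s signature at dt = 0.1 s
y = np.zeros(n)
for t in range(2, n):                  # lightly damped AR(2) stand-in
    y[t] = 1.97 * y[t-1] - 0.99 * y[t-2] + 0.01 * rng.normal()

level = y.std()                        # trigger level: one standard deviation
starts = np.where((y[:-1] < level) & (y[1:] >= level))[0] + 1
starts = starts[starts + seg_len < n]
signature = np.mean([y[s:s + seg_len] for s in starts], axis=0)
print(len(starts), round(float(signature[0] / level), 2))
```

The averaged signature starts near the trigger level and decays like the free response, which is why it can be fed to the same wavelet analysis as a measured impulse response.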

Figure 25: Theoretical impulse response and the impulse response (IRF) captured by the RDT for the OFWEC system with wave tank input.

Figure 26: CWT of the RDT impulse response (IRF) for the OFWEC system with wave tank input.

Figure 27: Wavelet estimation for the OFWEC system with wave tank input: (a) instantaneous frequency of the RDT impulse response; (b) RDT impulse response (full line) and instantaneous envelope (dashed line).

Figure 28: Semi-logarithmic plot of the instantaneous envelope against time for the OFWEC system with wave tank input.

Figure 29: Estimated backbone curve for the OFWEC system with wave tank input (instantaneous envelope in m against instantaneous frequency in Hz).

The form of the ARX model for the OFWEC is found by substituting central differences into the differential equation in Eqn 5.1, and replacing θ̇_t|θ̇_t| by θ̇_t². Given that we used y_t as the response for the Duffing and the Van der Pol oscillators, we change the notation for the response of the OFWEC system from θ_t to y_t. By using the ARX technique, the model for the OFWEC system with impulse input is given as

y_t = 1.9780 y_{t-1} - 0.9983 y_{t-2} + 0.0019 y_{t-1}^2 + 0.0020 y_{t-2}^2 - 0.0039 y_{t-1} y_{t-2} + 0.00001 u_{t-1}.   (5.2)

The first order frequency response function is

H_1(ω) = 0.00001 e^{-iω} / (1 - 1.9780 e^{-iω} + 0.9983 e^{-i2ω}),   (5.3)

and the second order frequency response function is

H_2(ω_1, ω_2) = H_1(ω_1) H_1(ω_2) [0.0019 e^{-i(ω_1+ω_2)} + 0.0020 e^{-i2(ω_1+ω_2)} - (0.0039/2)(e^{-i(ω_1+2ω_2)} + e^{-i(2ω_1+ω_2)})] / (1 - 1.9780 e^{-i(ω_1+ω_2)} + 0.9983 e^{-i2(ω_1+ω_2)}).   (5.4)

Figure 30: Frequency response functions for the OFWEC system, where the input is an impulse signal: (a) |H_1(ω)|; (b) log|H_2(ω_1, ω_2)|.

Meanwhile, for the case of wave tank input, the fitted ARX model is

y_t = 1.9780 y_{t-1} - 0.9980 y_{t-2} - 0.0010 y_{t-1}^2 - 0.0008 y_{t-2}^2 + 0.0019 y_{t-1} y_{t-2} - 0.0100 u_{t-1}.   (5.5)

The first order frequency response function is

H_1(ω) = -0.0100 e^{-iω} / (1 - 1.9780 e^{-iω} + 0.9980 e^{-i2ω}),   (5.6)

and the second order frequency response function is

H_2(ω_1, ω_2) = H_1(ω_1) H_1(ω_2) [-0.0010 e^{-i(ω_1+ω_2)} - 0.0008 e^{-i2(ω_1+ω_2)} + (0.0019/2)(e^{-i(ω_1+2ω_2)} + e^{-i(2ω_1+ω_2)})] / (1 - 1.9780 e^{-i(ω_1+ω_2)} + 0.9980 e^{-i2(ω_1+ω_2)}).   (5.7)

Figure 31: Frequency response functions for the OFWEC system, where the input is a wave tank signal: (a) |H_1(ω)|; (b) log|H_2(ω_1, ω_2)|.

Figures 30(b) and 31(b) show that there are relatively high ridges defined by ω_2 = 0.15 and ω_1 + ω_2 = 0.15. The contour plot of the bi-spectrum is qualitatively similar to the Van der Pol oscillator's tri-spectrum. This may be because the damping of both oscillators is nonlinear.
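The ridge locations can be checked numerically by evaluating |H_2| on a grid, using the wave-tank coefficient values as reconstructed above (an assumption, given the extraction of the printed equations); the modulus is largest where both first-order factors resonate, near ω ≈ 0.14 rad/sample:

```python
import numpy as np

# Evaluate |H2| (cf. Eqns 5.6-5.7) on a grid to locate the high
# ridges of the bi-spectrum numerically.
b, a1, a2 = -0.0100, 1.9780, -0.9980          # H1 parameters
c1, c2, c3 = -0.0010, -0.0008, 0.0019         # quadratic ARX terms

def H1(w):
    z = np.exp(-1j * w)
    return b * z / (1.0 - a1 * z - a2 * z**2)

def H2(w1, w2):
    s = w1 + w2
    num = (c1 * np.exp(-1j * s) + c2 * np.exp(-2j * s)
           + 0.5 * c3 * (np.exp(-1j * (w1 + 2 * w2))
                         + np.exp(-1j * (2 * w1 + w2))))
    den = 1.0 - a1 * np.exp(-1j * s) - a2 * np.exp(-2j * s)
    return H1(w1) * H1(w2) * num / den

w = np.linspace(0.01, 0.30, 300)              # rad/sample
W1, W2 = np.meshgrid(w, w)
G = np.abs(H2(W1, W2))
i, j = np.unravel_index(G.argmax(), G.shape)
print(round(float(W1[i, j]), 3), round(float(W2[i, j]), 3))
```

The peak sits on the ridge where both arguments are close to the resonant frequency, in rough agreement with the contours described above.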

6. Conclusions and Discussions

This study has shown that the wavelet ridge can be used to identify nonlinearities in oscillating systems. It can also be used to categorize the type of nonlinearity of the system. By using the information on the system's instantaneous envelope and instantaneous frequency, which are estimated from the wavelet ridge and wavelet skeleton, parameters of the nonlinear system such as the damping ratio and the nonlinear coefficients can be estimated. In the cases of both the Duffing and the Van der Pol weakly nonlinear oscillators, both the wavelet and the ARX methods are capable of identifying the system's parameters from an impulse response with reasonable accuracy. The ARX method is considerably easier to implement compared with the wavelet technique. The probing method appears to give a useful frequency domain characterization of the systems through the frequency response functions.

In the case of wave tank input, the RDT does not converge and so does not capture the impulse response of the Duffing and the Van der Pol weakly nonlinear oscillators. Presumably, this is because the response of both weakly nonlinear oscillators to forcing is chaotic [19]. However, the ARX does work well when wave tank signals are used as the input for both systems. In the case study, the RDT does capture the impulse response for the OFWEC, and there is no suggestion of a chaotic response. Both the wavelet and the ARX techniques work well despite the approximation of the absolute value by a squaring function in the ARX approach. Overall, the wavelet method provides useful insight into the process through the wavelet ridge, and a combination of the ARX and the wavelet methods is recommended for nonlinear system identification, especially if the RDT can be used to capture the impulse response. Both methods can also be used when the form of the system is not known.

Acknowledgments

The authors acknowledge the developers of the open source software R [20], and express deep appreciation to Universiti Kebangsaan Malaysia and the Ministry of Higher Education Malaysia for the financial support for this work.

References

[1] Spina, D., Valente, C., and Tomlinson, G. A new procedure for detecting nonlinearity from transient data using the Gabor transform. Nonlinear Dynamics, 1996, 11(3), 235-254.
[2] Staszewski, W. J. Identification of non-linear systems using multi-scale ridges and skeletons of the wavelet transform. Journal of Sound and Vibration, 1998, 214(4), 639-658.
[3] Londoño, J. M., Neild, S. A., and Cooper, J. E. Identification of backbone curves of nonlinear systems from resonance decay responses. Journal of Sound and Vibration, 2015, 348, 224-238.
[4] Kijewski, T. and Kareem, A. Wavelet transforms for system identification in civil engineering. Computer-Aided Civil and Infrastructure Engineering, 2003, 18(5), 339-355.
[5] Ruzzene, M., Fasana, A., Garibaldi, L., and Piombo, B. Natural frequencies and dampings identification using wavelet transform: Application to real data. Mechanical Systems and Signal Processing, 1997, 11(2), 207-218.
[6] Folley, M., et al. The design of small seabed-mounted bottom-hinged wave energy converters, 2007.
[7] Whittaker, T. and Folley, M. Nearshore oscillating wave surge converters and the development of Oyster. Philosophical Transactions of The Royal Society A, 2012, 370, 345-364.
[8] Bakar, M. A. A., Green, D. A., Metcalfe, A. V., and Najafian, G. Comparison of heaving buoy and oscillating flap wave energy converters. In: AIP Conference Proceedings: Proceedings of the 20th National Symposium on Mathematical Sciences, volume 1522, 2013, pages 86-101.
[9] Carmona, R., Hwang, W., and Torrésani, B. Characterization of signals by the ridges of their wavelet transforms. Signal Processing, IEEE Transactions on, 1997, 45(10), 2586-2590.
[10] Feldman, M. Non-linear system vibration analysis using Hilbert transform - I. Free vibration analysis method 'FREEVIB'. Mechanical Systems and Signal Processing, 1994, 8(2), 119-127.
[11] Nayfeh, A. H. Perturbation Methods, 2008 (Wiley-VCH).

[12] Carmona, R., Hwang, W.-L., and Torrésani, B. Practical Time-Frequency Analysis: Gabor and Wavelet Transforms, with an Implementation in S, 1998, volume 9 (Academic Press).
[13] Staszewski, W. J. Identification of damping in MDOF systems using time-scale decomposition. Journal of Sound and Vibration, 1997, 203(2), 283-305.
[14] Staszewski, W. J. Analysis of non-linear systems using wavelets. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2000, 214(11), 1339-1353.
[15] Tchamitchian, P. and Torrésani, B. Ridge and skeleton extraction from the wavelet transform. Wavelets and their Applications, 1992, 123, 151.
[16] Billings, S. and Tsang, K. Spectral analysis for non-linear systems, Part I: Parametric non-linear spectral analysis. Mechanical Systems and Signal Processing, 1989, 3(4), 319-339.
[17] Metcalfe, A., Maurits, L., Svenson, T., Thach, R., and Hearn, G. E. Modal analysis of a small ship sea keeping trial. Australian & New Zealand Industrial and Applied Mathematics Journal, July 2007, 47, 915-933.
[18] Bakar, M., Green, D., and Metcalfe, A. Comparison of spectral and wavelet estimators of transfer function for linear systems. East Asian Journal on Applied Mathematics, 2012, 2(3), 214-237.
[19] Strogatz, S. H. Nonlinear Dynamics and Chaos: with Applications to Physics, Biology, Chemistry, and Engineering, 2000 (Westview Press).
[20] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2008.
[21] Morison, J., Johnson, J., and Schaaf, S. The force exerted by surface waves on piles. Journal of Petroleum Technology, 1950, 2(5), 149-154.
[22] Bakar, M. A. A., Green, D. A., Metcalfe, A. V., and Ariff, N. M. Unscented Kalman filtering for wave energy converters system identification.
In: AIP Conference Proceedings: Proceedings of the 3rd International Conference on Mathematical Sciences, volume 1602 (AIP Publishing), 2014, pages 304-310.

Chapter 8

Conclusions

In this study, I have discussed several techniques for estimating the response amplitude operator (RAO) for single degree of freedom linear systems, which are based on spectral analysis, wavelets, or both [9]. Different cases of noise on the signals have been discussed, and from the analysis, the main advantage of a wavelet approach is its ability to remove noise from input and output signals before applying spectral analysis methods. This study has also shown that time series models, such as the nonlinear autoregressive with exogenous input (NARX) model, can be used to build models for nonlinear dynamic systems [8]. The NARX model is suitable for nonlinear dynamic systems that can be represented, to a reasonable approximation, by polynomials in lagged responses and inputs. The time domain model enables corresponding response spectra to be computed. The UKF technique is also very useful for estimating the states and parameters of linear and nonlinear dynamic systems, given the availability of long data records. The UKF uses the same approach as the KF, while also including the unscented transformation, which improves the convergence and accuracy of the estimation. The UKF requires a state space model of the system in discrete time, which is similar to, and can include, a NARX model. Given how fast the UKF algorithm converges, it is suitable for on-line identification of nonlinear dynamic systems. For the weakly nonlinear oscillator, the wavelet ridge can be used to identify nonlinearities in oscillating systems and categorize the type of nonlinearity of the

system. This requires a model for a nonlinear oscillator of the proposed form. The wavelet ridge can also be used to estimate parameters of the nonlinear system: the damping ratio and the nonlinear coefficients can be approximated from the wavelet-estimated backbone curve. The wavelet ridge is effective for system identification of weakly nonlinear oscillators, provided that the impulse response is known for a range of initial amplitudes. The random decrement technique (RDT) offers the possibility of estimating the impulse response from a record of the system output under general forcing. It is justified for linear systems, and it appears to give a reasonable approximation for some, at least, non-chaotic nonlinear systems. The impulse response could not be captured by the RDT for the forced Duffing and Van der Pol oscillators, presumably because the response of both weakly nonlinear oscillators to forcing is chaotic. However, the RDT does capture the impulse response for the OFWEC, and there is no suggestion of a chaotic response. From this study, it might be useful to combine several techniques to gain better insight into dynamic systems, especially nonlinear systems. For example, the wavelet ridge can be used to identify nonlinearities and to categorize the type of the system's nonlinearity from the system's impulse response. Once the type of nonlinearity is known, the ARX method can be used to estimate the system's parameters, given that it is easier to implement compared with the wavelet technique [8]. However, if the impulse response of the system is unavailable, then the RDT may be used to capture the impulse response. Alternatively, the UKF can be used to estimate the nonlinear parameters. For system identification techniques that require the signals to be analyzed, wavelets can be used to remove noise from the signals prior to the implementation of the technique.
This will reduce the estimation error which might be caused by the noise. The main theme for future research is to investigate the methods considered in this thesis, and other methods, for multi-input multi-output systems. In particular, such systems will allow more realistic modelling of a range of wave energy converter systems that have multiple degrees of freedom. It is suggested that real data from actual wave energy converters be used to gain better insight into the dynamics of wave energy converter systems. The dynamic system of the power take-off for wave energy converters would be another interesting topic, which could benefit the optimization of energy harvesting. Even though this study focuses on the dynamics of oscillating systems, the techniques discussed here might also be applied in other areas such as structural engineering, economics and meteorology. There are still many applications that could benefit from wavelet-based system identification, given the capability of wavelets to analyze in both the time and frequency domains simultaneously. There are many applications that can benefit from wavelets given their capabilities in solving differential equations; in statistics, such as in nonparametric statistics, survival analysis, time series analysis, variance stabilization and smoothing; and in turbulence analysis, image processing and signal processing. Many techniques that are based on the Fourier transform can also be replicated with the wavelet transform. The advantages of wavelets in analyzing nonstationary signals and nonlinear systems could help in understanding many nonlinear phenomena and could be useful in many applications.

Bibliography

[1] H. Akçay and P. P. Khargonekar, The least squares algorithm, parametric system identification and bounded noise, Automatica, 29 (1993), pp. 1535-1540.

[2] K. Alvin, A. Robertson, G. Reich, and K. Park, Structural system identification: from reality to models, Computers & structures, 81 (2003), pp. 1149–1176.

[3] V. S. Anishchenko, V. Astakhov, A. Neiman, T. Vadivasova, and L. Schimansky-Geier, Nonlinear dynamics of chaotic and stochastic systems: tutorial and modern developments, Springer Science & Business Media, 2007.

[4] P. Argoul and T.-P. Le, Instantaneous indicators of structural behaviour based on the continuous Cauchy wavelet analysis, Mechanical Systems and Signal Processing, 17 (2003), pp. 243-250.

[5] K. J. Åström and T. Bohlin, Numerical identification of linear dynamic systems from normal operating records, in Theory of Self-Adaptive Control Systems, Springer, 1966, pp. 96-111.

[6] K. J. Åström and P. Eykhoff, System identification: a survey, Automatica, 7 (1971), pp. 123-162.

[7] E. Bai, A blind approach to the Hammerstein-Wiener model identification, Automatica, 38 (2002), pp. 967-979.


[8] M. Bakar, N. Ariff, D. Green, and A. Metcalfe, Comparison of autoregressive spectral and wavelet characterizations of nonlinear oscillators. Submitted to East Asian Journal on Applied Mathematics.

[9] M. Bakar, D. Green, and A. Metcalfe, Comparison of spectral and wavelet estimators of transfer function for linear systems, East Asian Journal on Applied Mathematics, 2 (2012), pp. 214 – 237.

[10] M. Bakar, D. Green, A. Metcalfe, and G. Najafian, Comparison of heaving buoy and oscillating flap wave energy converters, in AIP Conference Proceedings: Proceedings of the 20th National Symposium on Mathematical Sciences, vol. 1522, 2013, pp. 86–101.

[11] J. Bendat, Nonlinear system techniques and applications, Wiley, New York, 1998.

[12] J. Bendat, P. Palo, and R. Coppolino, A general identification tech- nique for nonlinear differential equations of motion, Probabilistic engineering mechanics, 7 (1992), pp. 43–61.

[13] J. Bendat and A. Piersol, Random data: Analysis and measurement procedures, vol. 729, Wiley, 2011.

[14] S. Billings, Nonlinear system identification: NARMAX methods in the time, frequency, and spatio-temporal domains, John Wiley & Sons, 2013.

[15] S. Billings and K. Tsang, Spectral analysis for non-linear systems, part i: Parametric non-linear spectral analysis, Mechanical Systems and Signal Processing, 3 (1989), pp. 319–339.

[16] S. Billings and H. Wei, A new class of wavelet networks for nonlinear system identification, Neural Networks, IEEE Transactions on, 16 (2005), pp. 862–874.

[17] L. Birta and G. Arbez, Modelling and Simulation: Exploring Dynamic System Behaviour, Springer-Verlag, New York, 2007.

[18] R. Blackman and J. Tukey, The measurement of power spectra from the point of view of communications engineering - Part I, Bell System Technical Journal, 37 (1958), pp. 185-282.

[19] K. Bold, C. Edwards, J. Guckenheimer, S. Guharay, K. Hoffman, J. Hubbard, R. Oliva, and W. Weckesser, The forced Van der Pol equation II: Canards in the reduced system, SIAM Journal on Applied Dynamical Systems, 2 (2003), pp. 570-608.

[20] G. Box, G. Jenkins, and G. Reinsel, Time series analysis, Holden-day San Francisco, 1976.

[21] R. Brincker, L. Zhang, and P. Andersen, Modal identification of output-only systems using frequency domain decomposition, Smart materials and structures, 10 (2001), p. 441.

[22] R. Carmona, W. Hwang, and B. Torresani, Characterization of signals by the ridges of their wavelet transforms, Signal Processing, IEEE Transactions on, 45 (1997), pp. 2586–2590.

[23] R. Carmona, W.-L. Hwang, and B. Torrésani, Practical Time-Frequency Analysis: Gabor and Wavelet Transforms, with an Implementation in S, vol. 9, Academic Press, 1998.

[24] J. H. Cartwright, V. M. Eguíluz, E. Hernández-García, and O. Piro, Dynamics of elastic excitable media, International Journal of Bifurcation and Chaos, 9 (1999), pp. 2197-2202.

[25] V. Cerone and D. Regruto, Parameter bounds for discrete-time Hammerstein models with bounded output errors, Automatic Control, IEEE Transactions on, 48 (2003), pp. 1855-1860.

[26] A. C.-L. Chian, E. L. Rempel, and C. Rogers, Complex economic dynamics: chaotic saddle, crisis and intermittency, Chaos, Solitons & Fractals, 29 (2006), pp. 1194–1218.

[27] A. F. Childers, Parameter Identification and the for Continuous Non-Linear Dynamical Systems, PhD thesis, Virginia Tech, 2009.

[28] J. Conti and P. Holtberg, International energy outlook 2011. Online, September 2011.

[29] G. Cybenko, Just-in-time learning and estimation, NATO ASI Series F: Computer and Systems Sciences, 153 (1996), pp. 423–434.

[30] J. Davidson, S. Giorgi, and J. V. Ringwood, Identification of wave energy device models from numerical wave tank data - part 1: Numerical wave tank identification tests, IEEE Transactions on Sustainable Energy, 7 (2016), pp. 1012–1019.

[31] M. Deflorian and S. Zaglauer, Design of experiments for nonlinear dynamic system identification, in 18th IFAC World Congress, Milano, Aug, 2011, pp. 13179–13184.

[32] M. Deistler, System identification and time series analysis: Past, present, and future, in Stochastic Theory and Control, Springer, 2002, pp. 97–109.

[33] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood from incomplete data via the em algorithm, Journal of the Royal Statistical Society. Series B (Methodological), 39 (1977), pp. 1–38.

[34] F. Ding and T. Chen, Identification of hammerstein nonlinear armax systems, Automatica, 41 (2005), pp. 1479–1489.

[35] P. S. Diniz, E. A. Da Silva, and S. L. Netto, Digital signal processing: system analysis and design, Cambridge University Press, 2010.

[36] D. L. Donoho and I. M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81 (1994), pp. 425–455.

[37] J. Falnes, A review of wave-energy extraction, Marine Structures, 20 (2007), pp. 185–201.

[38] M. Feldman, Non-linear system vibration analysis using hilbert transform - i: free vibration analysis method 'freevib', Mechanical Systems and Signal Processing, 8 (1994), pp. 119–127.

[39] R. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane, Biophysical journal, 1 (1961), pp. 445–466.

[40] U. Forssell and P. Lindskog, Combining semi-physical and neural network modeling: an example of its usefulness, tech. report, Linköping University, 1996.

[41] U. Forssell and L. Ljung, Closed-loop identification revisited, Automatica, 35 (1999), pp. 1215–1241.

[42] E. Fung, Y. Wong, H. Ho, and M. P. Mignolet, Modelling and prediction of machining errors using armax and narmax structures, Applied Mathematical Modelling, 27 (2003), pp. 611–627.

[43] F. Gazzola, Mathematical models for suspension bridges: nonlinear structural instability, Springer, 2015.

[44] M. Gevers, A personal view of the development of system identification: A 30-year journey through an exciting field, Control Systems, IEEE, 26 (2006), pp. 93–105.

[45] R. Ghanem and F. Romeo, A wavelet-based approach for model and parameter identification of non-linear systems, International Journal of Non-Linear Mechanics, 36 (2001), pp. 835–859.

[46] G. C. Goodwin and R. L. Payne, Dynamic system identification: experiment design and data analysis, Academic Press, 1977.

[47] S. Gouttebroze and J. Lardies, On using the wavelet transform in modal analysis, Mechanics Research Communications, 28 (2001), pp. 561–569.

[48] U. Grenander and M. Rosenblatt, Statistical spectral analysis of time series arising from stationary stochastic processes, The Annals of Mathematical Statistics, 24 (1953), pp. 537–558.

[49] M. S. Grewal and A. P. Andrews, Applications of kalman filtering in aerospace 1960 to the present [historical perspectives], Control Systems, IEEE, 30 (2010), pp. 69–78.

[50] H. Gruenspecht, International energy outlook 2011, tech. report, Center for Strategic and International Studies, 2010.

[51] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, Particle filters for positioning, navigation, and tracking, Signal Processing, IEEE Transactions on, 50 (2002), pp. 425–437.

[52] E. J. Hannan, Testing for a jump in the spectral function, Journal of the Royal Statistical Society. Series B (Methodological), 23 (1961), pp. 394–404.

[53] W. Härdle, G. Kerkyacharian, D. Picard, and A. Tsybakov, Wavelets, Approximation and Statistical Applications, Springer-Verlag, New York, 1998.

[54] A. C. Harvey and R. G. Pierse, Estimating missing observations in economic time series, Journal of the American Statistical Association, 79 (1984), pp. 125–131.

[55] T. Hastie, R. Tibshirani, and J. Friedman, The elements of statistical learning: data mining, inference, and prediction, Springer, 2009.

[56] G. E. Hearn and A. Metcalfe, Spectral Analysis in Engineering: Concepts and Cases, Arnold, London, 1995.

[57] J. Hu, K. Kumamaru, and K. Hirasawa, A quasi-armax approach to modelling of non-linear systems, International Journal of Control, 74 (2001), pp. 1754–1766.

[58] M. Jammer, Concepts of force, Dover Publications, 1999.

[59] L. Jeen-Shang and Z. Yigong, Nonlinear structural identification using extended kalman filter, Computers & structures, 52 (1994), pp. 757–764.

[60] G. Jin, M. K. Sain, K. D. Pham, B. F. Spencer, and J. C. Ramallo, Modeling mr-dampers: a nonlinear blackbox approach, in American Control Conference, 2001. Proceedings of the 2001, vol. 1, IEEE, 2001, pp. 429–434.

[61] X. J. Jing, Z. Q. Lang, and S. A. Billings, Magnitude bounds of generalized frequency response functions for nonlinear volterra systems described by narx model, Automatica, 44 (2008), pp. 838–845.

[62] X. Jinyu, X. Xiaorong, H. Zhixiang, and H. Yingduo, Power systems wide-area damping control based on online system identification, Automation of Electric Power Systems, 23 (2004), p. 005.

[63] S. J. Julier and J. K. Uhlmann, New extension of the kalman filter to nonlinear systems, in AeroSense'97, International Society for Optics and Photonics, 1997, pp. 182–193.

[64] S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, A new approach for filtering nonlinear systems, in American Control Conference, 1995. Proceedings of the, vol. 3, IEEE, 1995, pp. 1628–1632.

[65] R. E. Kalman, A new approach to linear filtering and prediction problems, Journal of Basic Engineering, 82 (1960), pp. 35–45.

[66] R. E. Kalman and R. S. Bucy, New results in linear filtering and prediction theory, Journal of Basic Engineering, 83 (1961), pp. 95–108.

[67] T. Kálmár-Nagy and B. Balachandran, The Duffing Equation: Nonlinear Oscillators and Their Behaviour, John Wiley & Sons Ltd, 2011, ch. Forced harmonic vibration of a Duffing oscillator with linear viscous damping, pp. 139–174.

[68] R. Kandepu, B. Foss, and L. Imsland, Applying the unscented kalman filter for nonlinear state estimation, Journal of Process Control, 18 (2008), pp. 753–768.

[69] B. Kaplan, I. Gabay, G. Sarafian, and D. Sarafian, Biological applications of the filtered van der pol oscillator, Journal of the Franklin Institute, 345 (2008), pp. 226–232.

[70] G. Kerschen, K. Worden, A. F. Vakakis, and J.-C. Golinval, Past, present and future of nonlinear system identification in structural dynamics, Mechanical Systems and Signal Processing, 20 (2006), pp. 505–592.

[71] Y. Kitada, Identification of nonlinear structural dynamic systems using wavelets, Journal of Engineering Mechanics, 124 (1998), p. 1059.

[72] G. Kitagawa, Non-gaussian state-space modeling of nonstationary time series, Journal of the American Statistical Association, 82 (1987), pp. 1032–1041.

[73] G. A. Korn, Advanced dynamic-system simulation: model-replication techniques and Monte Carlo simulation, John Wiley & Sons, 2007.

[74] R. C. Lee, Optimal estimation, identification, and control, MIT, 1964.

[75] Y. Lee and M. Schetzen, Measurement of the wiener kernels of a non-linear system by cross-correlation, International Journal of Control, 2 (1965), pp. 237–254.

[76] A. M. Legendre, Nouvelles méthodes pour la détermination des orbites des comètes, no. 1, F. Didot, 1805.

[77] M. Leijon, C. Boström, O. Danielsson, S. Gustafsson, K. Haikonen, O. Langhamer, E. Strömstedt, M. Stålberg, J. Sundberg, O. Svensson, et al., Wave energy from the north sea: experiences from the lysekil research site, Surveys in Geophysics, 29 (2008), pp. 221–240.

[78] I. Leontaritis and S. A. Billings, Input-output parametric models for non-linear systems part i: deterministic non-linear systems, International Journal of Control, 41 (1985), pp. 303–328.

[79] J. Li and J. Roberts, Stochastic structural system identification, Computational Mechanics, 24 (1999), pp. 206–210.

[80] L. Li and S. Billings, Estimation of generalized frequency response functions for quadratically and cubically nonlinear systems, Journal of Sound and Vibration, 330 (2011), pp. 461–470.

[81] L. Ljung, Asymptotic behavior of the extended kalman filter as a parameter estimator for linear systems, Automatic Control, IEEE Transactions on, 24 (1979), pp. 36–50.

[82] L. Ljung, System Identification: Theory for the user, P T R Prentice Hall, 1987.

[83] L. Ljung, Perspectives on system identification, Annual Reviews in Control, 34 (2010), pp. 1–12.

[84] J. M. Londoño, S. A. Neild, and J. E. Cooper, Identification of backbone curves of nonlinear systems from resonance decay responses, Journal of Sound and Vibration, 348 (2015), pp. 224–238.

[85] L. Lu and B. Yao, Experimental design for identification of nonlinear systems with bounded uncertainties, in American Control Conference (ACC), 2010, IEEE, 2010, pp. 4504–4509.

[86] D. Mackenzie, Wavelets: Seeing the forest and the trees. Available from http://www.beyonddiscovery.org (2001, accessed 10 April 2010).

[87] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 1999.

[88] J. Mathews and K. Fink, Numerical methods using MATLAB, vol. 31, Prentice Hall, Upper Saddle River, 1999.

[89] L. A. McGee and S. F. Schmidt, Discovery of the kalman filter as a practical tool for aerospace and industry, NASA Technical Memorandum 86847, NASA, 1985.

[90] R. Mehra, On-line identification of linear dynamic systems with applications to kalman filtering, Automatic Control, IEEE Transactions on, 16 (1971), pp. 12–21.

[91] A. Metcalfe, L. Maurits, T. Svenson, R. Thach, and G. E. Hearn, Modal analysis of a small ship sea keeping trial, Australian & New Zealand Industrial and Applied Mathematics Journal, 47 (2007), pp. 915–933.

[92] G. W. Morrison and D. H. Pike, Kalman filtering applied to statistical forecasting, Management Science, 23 (1977), pp. 768–774.

[93] R. B. Mrad and J. Levitt, Nonlinear process representation using armax models with time dependent coefficients, in Decision and Control, 1998. Proceedings of the 37th IEEE Conference on, vol. 1, IEEE, 1998, pp. 495–500.

[94] G. Najafian, R. Burrows, and R. Tickell, A review of the probabilistic description of morison wave loading and response of fixed offshore structures, Journal of fluids and structures, 9 (1995), pp. 585–616.

[95] S. Narayanan, S. Yim, P. Polo, et al., Nonlinear system identification of a moored structural system, in The Eighth International Offshore and Polar Engineering Conference, International Society of Offshore and Polar Engineers, 1998.

[96] G. Nason, wavethresh: Wavelets statistics and transforms, 2013. R package version 4.6.4.

[97] G. P. Nason, Wavelet Methods in Statistics with R, Springer-Verlag, New York, 2008.

[98] A. H. Nayfeh, Perturbation methods, Wiley-VCH, 2008.

[99] O. Nelles, Nonlinear system identification: from classical approaches to neural networks and fuzzy models, Springer Science & Business Media, 2001.

[100] D. E. Newland, An introduction to random vibrations, spectral and wavelet analysis, Longman Scientific & Technical, 1993.

[101] L. Özbek and Ü. Özlale, Employing the extended kalman filter in measuring the output gap, Journal of Economic Dynamics and Control, 29 (2005), pp. 1611–1622.

[102] D. B. Percival and A. T. Walden, Wavelet methods for time series analysis, Cambridge University Press, Cambridge, 2000.

[103] S. Pernot and C. H. Lamarque, A wavelet-galerkin procedure to investigate time-periodic systems: Transient vibration and stability analysis, Journal of Sound and Vibration, 245 (2001), pp. 845–875.

[104] R. L. Plackett, Some theorems in least squares, Biometrika, 37 (1950), pp. 149–157.

[105] B. Powell, K. Bailey, and S. Cikanek, Dynamic modeling and control of hybrid electric vehicle powertrain systems, Control Systems, IEEE, 18 (1998), pp. 17–33.

[106] M. B. Priestley, Evolutionary spectra and non-stationary processes, Journal of the Royal Statistical Society. Series B (Methodological), 27 (1965), pp. 204–237.

[107] R Development Core Team, R: A Language and Environment for Sta- tistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2008.

[108] C. Ridder, O. Munkelt, and H. Kirchner, Adaptive background estimation and foreground detection using kalman-filtering, in Proceedings of International Conference on Recent Advances in Mechatronics, Citeseer, 1995, pp. 193–199.

[109] A. Robertson, K. Park, and K. F. Alvin, Extraction of impulse response data via wavelet transform for structural system identification, Journal of Vibration and Acoustics, 120 (1998), pp. 252–260.

[110] J. Roll, A. Nazin, and L. Ljung, Nonlinear system identification via direct weight optimization, Automatica, 41 (2005), pp. 475–490.

[111] M. Ruth and B. Hannon, Modeling Dynamic Economic Systems, Springer US, 2012.

[112] K. Sales and S. Billings, Self-tuning control of non-linear armax models, International Journal of Control, 51 (1990), pp. 753–769.

[113] J. Schoukens, R. Pintelon, and Y. Rolain, Mastering system identification in 100 exercises, John Wiley & Sons, 2012.

[114] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. S. Sastry, Kalman filtering with intermittent observations, Automatic Control, IEEE Transactions on, 49 (2004), pp. 1453–1464.

[115] T. J. Slight, B. Romeira, L. Wang, J. M. Figueiredo, E. Wasige, and C. N. Ironside, A liénard oscillator resonant tunnelling diode-laser diode hybrid integrated circuit: model and experiment, Quantum Electronics, IEEE Journal of, 44 (2008), pp. 1158–1163.

[116] J. Socolar, Nonlinear dynamical systems, in Complex Systems Science in Biomedicine, T. Deisboeck and J. Kresh, eds., Topics in Biomedical Engineering International Book Series, Springer US, 2006, pp. 115–140.

[117] D. Spina, C. Valente, and G. Tomlinson, A new procedure for detecting nonlinearity from transient data using the gabor transform, Nonlinear Dynamics, 11 (1996), pp. 235–254.

[118] W. J. Staszewski, Identification of damping in mdof systems using time-scale decomposition, Journal of Sound and Vibration, 203 (1997), pp. 283–305.

[119] W. J. Staszewski, Identification of non-linear systems using multi-scale ridges and skeletons of the wavelet transform, Journal of Sound and Vibration, 214 (1998), pp. 639–658.

[120] W. J. Staszewski, Analysis of non-linear systems using wavelets, Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 214 (2000), pp. 1339–1353.

[121] S. H. Strogatz, Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering, Westview Press, 2000.

[122] A. Swain and S. Billings, Generalized frequency response function matrix for mimo non-linear systems, International Journal of Control, 74 (2001), pp. 829–844.

[123] P. Tchamitchian and B. Torrésani, Ridge and skeleton extraction from the wavelet transform, Wavelets and their Applications, 123 (1992), p. 151.

[124] F. A. Tgersen and K. E. Andersen, Dynamic and com- bination in forecasting: an empirical evaluation of bagging and boosting, in International Symposium on Forecasting, 2010.

[125] R. Van Der Merwe and E. A. Wan, The square-root unscented kalman filter for state and parameter-estimation, in Acoustics, Speech, and Signal Processing, 2001. Proceedings (ICASSP'01). 2001 IEEE International Conference on, vol. 6, IEEE, 2001, pp. 3461–3464.

[126] B. Van der Pol, On relaxation-oscillations, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2 (1926), pp. 978–992.

[127] B. Van der Pol and J. Van der Mark, Frequency demultiplication, Nature, 120 (1927), pp. 363–364.

[128] M. C. VanDyke, J. L. Schwartz, and C. D. Hall, Unscented kalman filtering for spacecraft attitude state and parameter estimation. Online, 2004.

[129] E. Walter and L. Pronzato, Identification of parametric models, vol. 8, Springer-Verlag, New York, 1997.

[130] E. A. Wan and R. Van Der Merwe, The unscented kalman filter for nonlinear estimation, in Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000. AS-SPCC. The IEEE 2000, IEEE, 2000, pp. 153–158.

[131] G. Welch and G. Bishop, An introduction to the kalman filter, tech. report, University of North Carolina at Chapel Hill, 1995.

[132] P. Whittle, Hypothesis testing in time series analysis, PhD thesis, Uppsala University, 1951.

[133] N. Wiener, Nonlinear Problems In Random Theory, Cambridge, MA: MIT Press, 1958.

[134] L. A. Wong and J. C. Chen, Nonlinear and chaotic behavior of structural system investigated by wavelet transform techniques, International Journal of Non-Linear Mechanics, 36 (2001), pp. 221–235.

[135] Worldwatch Institute, Renewable energy enters boom period. http://www.worldwatch.org/node/1771 [Accessed 18 June 2010], 2003.

[136] H. Yu, G. Cai, and Y. Li, Dynamic analysis and control of a new hyperchaotic finance system, Nonlinear Dynamics, 67 (2012), pp. 2171–2182.

[137] Y. Yu, R. A. Shenoi, H. Zhu, and L. Xia, Using wavelet transforms to analyze nonlinear ship rolling and heave-roll coupling, Ocean Engineering, 33 (2006), pp. 912–926.

[138] L. Zadeh, On the identification problem, Circuit Theory, IRE Transactions on, 3 (1956), pp. 277–281.

[139] J. Zheng, X. Gao, Y. Guo, and G. Meng, Application of wavelet transform to bifurcation and chaos study, Applied Mathematics and Mechanics, 19 (1998), pp. 557–563.