
DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS STOCKHOLM, SWEDEN 2016

Estimation and prediction of wave input and system states based on local hydropressure and machinery response measurements

QUENTIN LAURENT

KTH ROYAL INSTITUTE OF TECHNOLOGY SCHOOL OF ENGINEERING SCIENCES


Master’s Thesis in Optimization and Systems Theory (30 ECTS credits)
Master Programme in Applied and Computational Mathematics (120 credits)
Royal Institute of Technology, year 2016
In collaboration with CorPower AB
Supervisor at KTH: Xiaoming Hu
Examiner: Xiaoming Hu

TRITA-MAT-E 2016:54 ISRN-KTH/MAT/E--16/54--SE

Royal Institute of Technology SCI School of Engineering Sciences

KTH SCI SE-100 44 Stockholm, Sweden

URL: www.kth.se/sci

Abstract

Waves represent a large untapped energy source, and many are endeavouring to develop Wave Energy Converters (WECs) to harvest this resource. The goal of this thesis, carried out with the young technology company CorPower Ocean AB, is to enable better control of the company's WEC by providing control strategies with a prediction of the input force on the device, also called the excitation force. Previous work is available for wave prediction, but here the time series we need to predict is not measurable; instead we use pressure, force and position measurements to determine the future value of the force. The time series of the measurements are linked to the values of the force through linear (Airy) wave theory and a linearisation of the existing model for the forces applied on the WEC. Three methods were suggested: prediction with an AR model followed by transformation through transfer functions, Kalman filtering and Wiener filtering. The two latter have better, roughly equivalent performance in terms of mean square error, but the focus was placed on the Wiener filter since it does not require identification under the assumption of a JONSWAP spectrum for the waves. This last method was implemented in CorPower Ocean's Simulink model and extensively tested to quantify the influence of variations in the sea state, the noise conditions and the parameters of the filter. The prediction accuracy was, however, rarely above 60% for half a wave period into the future, and use of the method as-is for non-causal control is questionable. We conclude by giving some other potential solutions for pursuing work in this domain.

Acknowledgements

I would like to thank my parents for the support they have been showing me during this year and my time doing this thesis. Thanks to my sister Hélène for being such an easy person to live with. I am also grateful for the support I got at CorPower Ocean, from my advisors Gunnar and Jorgen as well as the other members of the modelling team. Thanks to Anthony Papavasiliou and Xiaoming Hu for taking the role of advisors at UCL and KTH. Thanks to my friend Garrett for being such a good cook. Thanks to my flatmates Kajsa, Morgana, Tobias and Joakim. Thanks to my travel companions Miguel, David, Franck, Wouter, Lucia, Jorge, Ludovica and Marina for taking me out on an outstanding break in the course of this thesis. Thanks to William for the talking in French, to Sofia for teaching me Swedish and to Ericka for the talking in Spanish. Thanks to my cousin Nicolas for visiting. Thanks to my friends from Belgium Othman, Alex, Bertrand, Colin and Margaux for following my ventures in Sweden with an amused eye. Thanks to my friends from the Swedish and French language cafés, the SNNC, KTH and the numerous international students who contributed to making this year an outstanding experience, to name a few: Bàlint, Lorenz, Louise, Margaux, Tessa, Leon, Bowie and Mélanie.

Contents

1 Introduction
  1.1 Motivation
  1.2 Previous work
  1.3 This work

2 Theory
  2.1 Waves and hydrodynamic signals
    2.1.1 Observation and representation of waves
    2.1.2 Fluids and water waves
    2.1.3 Spectrum and random phase-amplitude model
    2.1.4 Waves in oceanic waters
  2.2 Modelling of Wave Energy Conversion Systems
    2.2.1 Coordinates
    2.2.2 Equation of motion
    2.2.3 Hydrodynamic forces
    2.2.4 Pressure signal
  2.3 Controller
  2.4 Cyclical prediction models
  2.5 Auto-regressive models

3 Prediction methods
  3.1 Available data and approximations
  3.2 AR model and transformation
  3.3 Kalman filter
    3.3.1 System identification
  3.4 Wiener predictor

4 Simulation
  4.1 Prediction extension for the Simulink model

5 Results
  5.1 Quality criteria
    5.1.1 Confidence interval
    5.1.2 Goodness-Of-Fit
  5.2 Choice of a method
  5.3 Wiener results
    5.3.1 Contributions of the pressure and resulting force to the estimation for different sea states
    5.3.2 Influence of the signal-to-noise ratios
    5.3.3 Order of the filter
    5.3.4 Comparison with the theoretical bound
    5.3.5 Influence of the sea state
    5.3.6 Error in the sea-state information
    5.3.7 Additional information

6 Conclusion

A Up-wave measurements

List of mathematical Symbols

$\hat{X}(\omega)$  Fourier transform of the signal $X$
$\lambda$  Wavelength of the waves
$\varphi(r, t)$  Velocity potential
$\rho(r, t)$  Mass density
$\eta(t)$  WEC position and angles
$\nu(t)$  WEC speed and angular speed
$\tau_{PTO}$  PTO force
$\tau_{drag}$  Drag force
$\tau_{exc}$  Corrected excitation force and moment
$\tau_g$  Weight of the buoy
$\tau_{hydro}$  Hydrodynamic forces
$\tau_{hyst}$  Hydrostatic force
$\alpha_i$  Wave component $i$'s phase
$\omega_i$  Wave component $i$'s angular speed
$\tau_{rad}$  Radiation force
$\tau_w(t)$  Force and moment due to wind
$E_{JONSWAP}(f)$  JONSWAP spectrum
$H_{\zeta,F}$  Transfer function from $\zeta$ to $F_{exc}$
$H_j$  Wave $j$'s height
$H_s$  Significant wave height
$M$  WEC inertia matrix
$R_F$  Auto-covariance of $F_{exc}$
$R_{FR}$  Cross-covariance of $F_{exc}$ and $R_f$
$R_{Fp}$  Cross-covariance of $F_{exc}$ and $p$
$R_R$  Auto-covariance of $R_f$
$R_{Rp}$  Cross-covariance of $R_f$ and $p$
$R_f$  Resulting force
$R_p$  Auto-covariance of $p$
$R_{pR}$  Cross-covariance of $p$ and $R_f$
$T$  Wave period
$T_j$  Wave $j$'s period
$T_p$  Peak wave period
$T_s$  Significant wave period
$V_0$  Equilibrium volume
$V_{sub}$  Instantaneous submerged volume
$f_i$  Wave component $i$'s frequency
$g$  Gravitational acceleration
$k$  Wave number
$m_{inf}$  Added mass
$p(r, t)$  Pressure
$p_a$  Atmospheric pressure
$p_{dyn}(r, t)$  Dynamic pressure
$p_p(t)$  Pressure at the probe
$v(x, y, t)$  Velocity
$y = \zeta(x, t)$  Position of the free surface
$v(r, t)$  Velocity field

List of abbreviations

AR Auto-regressive (model)

ARX Auto-regressive with external input (model)

DFT Discrete Fourier Transform

DHR Dynamic Harmonic Regression

EKF Extended Kalman Filter

FFT Fast Fourier Transform

GOF Goodness-Of-Fit

KBC Kinematic boundary condition

MPC Model-Predictive Controller

List of Figures

2.1 Surface elevation of a sea with $H_s = 1$ m, $T_p = 6.4$ s
2.2 Comparison of the cosine coefficients of the JONSWAP spectrum and the discrete Fourier transform of an artificial wave record, for a peak period of 4.2 s and a significant wave height of 1 m
2.3 Axes system for the modelling of a WEC
2.4 Sketch of a WEC with a PTO linked with a wire to the buoy
2.5 $M H_{exc}(\omega_i)$ for the heave mode, with associated impulse response
2.6 Depth factor for several depth values, bottom depth is 50 m
3.1 Linear approximation of $R_f - V_{eq}\rho g$
3.2 Signals for a wave period of 6.4 s, wave height of 1 m
4.1 ControllerTextFile.txt
4.2 The wave prediction part of the controller block
4.3 The wave prediction block
5.1 Comparison of the Wiener order 140, AR order 40 and Kalman order 15 predictors for a wave period of 6.4 s and a wave height of 1 m
5.2 Prediction of the Wiener filter in blue, actual value of the excitation force in newtons for a wave period of 5 s, plotted against time in seconds
5.3 Overview of the performance of Wiener filters using different subsets of the available measurements
5.4 Performance of the predictor for an SNR of 20 and an SNR of 80
5.5 Assumed covariances for the corrected $R_f$ signal for several values of the noise parameter
5.6 Theoretical GOF values for several values of the filter order for $T_p = 6.4$ s, $H_s = 1$ m
5.7 GOF values for several values of the filter order for $T_p = 6.4$ s, $H_s = 1$ m
5.8 Real and theoretical GOF of the Wiener predictor for several wave periods
5.9 GOF for a signal of 8.5 s and predictors with different $T_p$ parameters
A.1 Prediction error for the FFT prediction

Chapter 1

Introduction

1.1 Motivation

Today, politicians and companies alike are increasingly concerned with finding more sustainable ways of sourcing energy. They are encouraged in this matter on the one hand by the growing concern of the public regarding environmental issues, and on the other by the willingness to reduce their dependency on fossil fuels and on other organisations' resources. In the race for renewables, wave energy is a contender with great potential. Only a fraction of the energy that could be harvested is currently harnessed, while the world resource could account for 1 to 10 TW [1]. Since the global mean consumption of electricity was around 12.3 TW in 2013 [2], this could account for a substantial part of the future energy mix. Very much aware of this unique opportunity to change the energy landscape for the better, CorPower Ocean, a company started a mere seven years ago, is actively developing a high-efficiency Wave Energy Converter (WEC). This WEC takes the form of a buoy, moving with the waves on the surface of the ocean. It is moored to the sea bottom and pulled down by a pretension system, so that the WEC is an oscillating system: the waves pull the buoy up and the pretension pulls it back down, producing a regular oscillating movement. A generator can leverage this oscillation to produce electricity. The best power output is achieved when the device enters resonance, which occurs when its velocity is in phase with the surface elevation. In order to resonate with waves whose frequency can change over time, the WEC carries technology that allows it to control its oscillations. Several control strategies can be imagined, such as phase control by latching, which consists of locking a rack on the buoy so that it stays up after a crest or down after a trough, or linear damping, which simply slows down the movements of the buoy. Another strategy is model predictive control (MPC), which models the state of the WEC's system in the future and hence requires knowledge of the incoming forces due to the waves. For MPC, but also for other kinds of non-causal control strategies for the WEC, a short-term prediction of the forces acting on the buoy is required.

1.2 Previous work

Most of the work that has already been done in the field of wave prediction focuses on the prediction of the wave elevation. The governing equations of waves are non-linear in general [3]. In most situations of interest for a WEC, the waves have a relatively long wavelength compared to their amplitude. Most papers focusing on the wave part of wave energy use the framework of [4], which is a linearisation of the general model for gravity waves. As a consequence, sinusoidal waves propagate with a speed determined by the theory, and the surface elevation at one point can be seen as a sum of cosines with different frequencies and phases. That led to the definition of cyclical models for the representation of waves. The sea elevation is then separated into several components of chosen frequencies. The phase and amplitude of each frequency component are determined by solving a least-squares problem on the initial data, and a Kalman filter is then applied to the model. Known cyclical models are Harvey's structural model and Dynamic Harmonic Regression (DHR) [5]. In those cyclical methods, the important choices to make are the range of frequencies for the harmonic components and their distribution within that range. The most robust choice is a homogeneous distribution over the range, as it will not be affected too much by a change in the wave spectrum [5, 6]. There is no clear way to determine the frequencies, and they are constant in time. Auto-regressive models and the extended Kalman filter address those two shortcomings.

The extended Kalman filter, used in the article "A study on Short-Term Sea Profile Prediction for Wave Energy Applications" by F. Fusco and in J. Hals's "Constrained optimal control of a heaving buoy wave energy converter" [5, 7], consists of an augmented cyclical model. The extension lies in the fact that instead of having fixed frequencies, the model has only one frequency which is allowed to change over time. As a consequence the model can no longer be described with a linear state-space representation, and consequently the regular Kalman filter cannot be applied directly. The extended Kalman filter consists of a linearisation of the model and an application of the regular Kalman filter. This of course assumes that the time steps are small enough that the linearisation is still relevant. The EKF performs well for narrow-banded spectra, since it is only able to track one main frequency. Superposing several frequencies can be considered, but it does not yield very good results since the frequencies are all updated with the same Kalman gain.

So far the auto-regressive models as presented by F. Fusco [5] have given some of the best results. The filter corresponds implicitly to a cyclical model where the phases correspond to the phases of the filter. The method used by Fusco [5] for the identification of the AR model is the minimisation of a function referred to as long-range predictive identification; the minimisation is initialised with the result of regular least squares. B. Fischer [8] makes the distinction between plug-in predictors and direct multi-step predictors. He also addresses the problem of parameter changes in the auto-regressive models and reports that the direct multi-step filter with continuous adaptation of the parameters was slightly better than the plug-in or static-parameter ones. There are also other ways of identifying the parameters of AR models [9]. In "Short-term Wave Forecasting as a Univariate Time Series Problem" [6], variability of the parameters $a_i$ is introduced, but it is also concluded that it does not improve the predictions much and that a simple periodic estimate using the above method might be sufficient to determine good parameters. It is also possible to add a moving-average component to reduce the complexity of the model, or to use another type of model. In [10], a reduction of the number of parameters is achieved by assuming a prior probability on the data and thereby replacing the Dirac orthogonal basis functions of the AR models by Kautz filters. In that article Kautz filters are compared with AR, ARMA and hybrid Kautz-filter/AR methods, and are deemed to provide sufficient accuracy with a reduced number of parameters. Another possibility would be to use Gaussian processes, setting a prior distribution over the space of functions underlying the data, then finding an estimate with a maximum-likelihood estimator. This method was shown to have important drawbacks, such as the fact that the privileged frequencies in the prediction are determined by the harmonic components of the covariance matrix given as input to the method.

In [11], F. Paparella considers the possibility of using measurements from another location to predict the surface elevation for an oscillating water column. The performance of a finite impulse response model (using up-wave measurements) and of an auto-regressive model with external input (ARX) (using both up-wave measurements and surface-elevation measurements at the location of interest) was assessed; the results are generally not better than the ones obtained with a regular AR model. In [12], J. Halliday explores the use of the FFT for wave prediction, but it seems to be inappropriate even for one-dimensional problems, due to the aperiodic nature of the sea. However, the fast Fourier transform could prove useful to predict the wave elevation from an up-wave measurement point, provided the energy of the wave has already reached the point of interest. In [13], E. Blondel uses the FFT, but this time an optimisation scheme is used to fit a non-linear model to the data. The model accounts for the interactions between wave components up to order 2, and requires a simulation for higher-order components.

In [14], F. Fusco establishes a nearly linear relation between the prediction horizon needed and the time constant of the transfer function between the optimal speed and the excitation force. Another important parameter in order to get good predictions of the excitation force is the bandwidth of the excitation transfer function (a low-pass filter) that links the excitation force to the wave elevation. In [15], he uses a method based on the parameters of the predictor to determine the variance of the prediction error, and the effects of the prediction errors on the control performance of an MPC are also quantified.

1.3 This work

As opposed to what can be found in the literature, the goal here is to predict a signal whose current value is unknown. The problem is therefore somewhat different from what has been done so far. The available measurements are pressure measurements issued by a probe located under the buoy, together with some force and position measurements. The first thought was to use the pressure alone to determine the excitation force. Will this method prove appropriate, or are some of the frequency components of the waves omitted in the pressure signal? Does the depth of the sensor influence the quality of the prediction? The idea of simply identifying an AR model on the pressure and using a transfer function on the prediction was given up. Is it possible to turn to the force measurements which, once corrected with the position of the buoy and after considerable linearisation, can be linked to the forces on the buoy hull and the surface elevation? Could we combine the knowledge from those force measurements and the pressure to get an estimate as precise as possible? How far into the future will we be able to predict the force on the buoy, and with what accuracy? Could some additional information improve this prediction? Those are the questions this work will answer.

The work is separated into six chapters, of which this introduction is the first.

Chapter 2: Background Theory. This chapter introduces the background necessary to understand waves through fluid mechanics and other simplified models. The modelling of a WEC is also explained, emphasising the aspects relevant to wave prediction.

Chapter 3: Prediction methods. The methods tested in this thesis are presented in this chapter. It starts with a definition of the available data and introduces the resulting force, which is computed from force and position measurements. The focus is on three methods:

∙ A prediction of the pressure with an AR model, followed by filtering

∙ A Kalman filter using all the available measurements

∙ A Wiener predictor using all the measurements

Chapter 4: Simulation. This chapter focuses on the practical aspects of the prediction and its integration into CorPower Ocean's Simulink model. Its content is therefore reserved to CorPower Ocean.

Chapter 5: Results. Here the main quality criterion is introduced and the choice of a method is motivated with some performance tests. The rest of the chapter then focuses mainly on the most efficient and convenient method: a Wiener filter using the resulting force as input. Parameters such as the noise, the sea state and the filter order are varied in order to estimate the filter's performance.

Chapter 6: Conclusion. This chapter gives a summary of what has been done in this thesis and of the results achieved. It also suggests the next steps to take in wave prediction and control.

Chapter 2

Theory

This chapter aims at introducing the reader to the concepts needed to understand wave prediction. It first covers the representation of waves and hydrodynamic signals, then turns to the modelling of Wave Energy Conversion systems: the equation of motion, the hydrodynamic forces and the pressure signal. The control of the buoy is then discussed, and a few known prediction models are presented to conclude the chapter.

2.1 Waves and hydrodynamic signals

The prediction of the forces on the buoy in the future requires a deeper understanding of what waves are made of and how they evolve with time. A visible consequence of a wave is a variation in the surface elevation of the sea, but there are also other signals associated with it, e.g. the pressure under the water and the particle velocities. In this section we first describe a few statistics about waves, then how waves are modelled using fluid mechanics, and how they can be represented using a spectrum.

2.1.1 Observation and representation of waves

The surface elevation of the water is the instantaneous elevation of the sea surface. A single wave is defined as the surface elevation between two downward or upward zero-crossings of the elevation with the reference level (downward crossings being preferred by most). A wave is characterised by its height, the distance between the lowest and the highest level of the wave (see figure 2.1).

Figure 2.1: Surface elevation of a sea with $H_s = 1$ m, $T_p = 6.4$ s

A useful characteristic of a sea profile is the significant wave height $H_s$, defined as the mean of the highest one-third of the waves in the wave record:
$$H_s = H_{1/3} = \frac{1}{N/3}\sum_{j=1}^{N/3} H_j$$
In the same way we define the zero-crossing period as the interval between two successive downward zero-crossings, and the significant wave period as the mean of the periods of the highest one-third of the waves:
$$T_s = T_{1/3} = \frac{1}{N/3}\sum_{j=1}^{N/3} T_j$$
We are also often interested in the peak period $T_p$, the period corresponding to the peak of the wave spectrum.
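To make these definitions concrete, the sketch below estimates $H_s$ and $T_s$ from a sampled elevation record by zero-crossing analysis. It is a minimal illustration, not code from the thesis; the function name, the use of downward zero-crossings and the uniform sample time `dt` are assumptions.

```python
import numpy as np

def significant_height_period(zeta, dt):
    """Estimate Hs and Ts from an elevation record sampled every dt seconds,
    delimiting individual waves by downward zero-crossings."""
    zeta = np.asarray(zeta, dtype=float)
    # indices where the elevation crosses the reference level going downward
    down = np.where((zeta[:-1] >= 0) & (zeta[1:] < 0))[0]
    heights, periods = [], []
    for a, b in zip(down[:-1], down[1:]):
        segment = zeta[a:b + 1]
        heights.append(segment.max() - segment.min())  # crest-to-trough height
        periods.append((b - a) * dt)                   # zero-crossing period
    heights, periods = np.array(heights), np.array(periods)
    idx = np.argsort(heights)[::-1][:max(1, len(heights) // 3)]  # highest one-third
    return heights[idx].mean(), periods[idx].mean()
```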

2.1.2 Fluids and water waves

The fluid in motion will be described by several fields: the instantaneous elevation of the sea surface $\zeta(t)$, the pressure under the water $p(r, t)$, where $r$ is a position under the sea surface, and a velocity potential $\varphi(r, t)$. The particle velocity at a point, $\vec{v}(r, t)$, is the gradient of the velocity potential. We first express the conservation of mass and momentum with the continuity equation
$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\vec{v}) = 0$$
and the Navier-Stokes equation with the viscosity term neglected (the fluid is assumed to be ideal),

$$\rho\frac{D\vec{v}}{Dt} = -\nabla p + \vec{f}$$
where $\rho$ is the volumetric mass density of the fluid and $\vec{f}$ is the external force per unit volume. We assume the only external force is the gravitational force, hence $\vec{f} = -\rho g\,\vec{e}_z$. We also assume the fluid to be incompressible and irrotational, which gives us
$$\nabla\cdot\vec{v} = 0$$
since $\rho$ is then considered constant, and
$$\nabla\times\vec{v} = 0$$

The equations can be written

$$\nabla^2\varphi = 0, \quad y < \zeta$$
$$\frac{\partial\varphi}{\partial t} + \frac{1}{2}|\nabla\varphi|^2 + \frac{p}{\rho} + g y = C, \quad y < \zeta$$
which are the Laplace equation and a non-stationary version of the Bernoulli equation. The term $C$ is a constant which will be determined later on by the boundary condition on the free surface. Those (non-linear) equations also come with a set of boundary conditions. A fluid particle at a boundary $B$ with an impervious body always stays on that boundary:
$$\frac{DB}{Dt} = \frac{\partial B}{\partial t} + (\nabla\varphi\cdot\nabla)B = 0 \quad \text{on } B = 0$$
For a flat bottom at depth $y = -h$, this gives
$$\frac{\partial\varphi}{\partial y} = 0 \quad \text{on } y = -h$$
The particles on the free surface $y = \zeta$, or $F = y - \zeta = 0$, also stay on that free surface:
$$\frac{DF}{Dt} = \frac{D}{Dt}(y - \zeta) = \frac{\partial\varphi}{\partial y} - \frac{\partial\zeta}{\partial t} - \frac{\partial\varphi}{\partial x}\frac{\partial\zeta}{\partial x} = 0$$
The pressure at the boundary with the atmosphere needs to be equal to the atmospheric pressure, $p = p_a$. This gives
$$\frac{\partial\varphi}{\partial t} + \frac{1}{2}|\nabla\varphi|^2 + g\zeta = C - \frac{p_a}{\rho} \quad \text{on } y = \zeta$$
We can see that $C = \frac{p_a}{\rho}$, since the above equation must hold for a sea with no waves and $\zeta = 0$. From the two last equations we also have
$$\frac{\partial^2\varphi}{\partial t^2} + g\frac{\partial\varphi}{\partial y} = 0, \quad y = \zeta$$
From the velocity potential we can derive the dynamic pressure
$$p_{dyn} = p - p_a + \rho g y = -\rho\left(\frac{\partial\varphi}{\partial t} + \frac{v^2}{2}\right) \simeq -\rho\frac{\partial\varphi}{\partial t}$$
This dynamic pressure is a signal providing important information, since it is linked directly to the surface elevation and to the force on the buoy, as we will see later on.

Airy wave theory. By linearising we can get a set of simplified equations describing the velocity potential $\varphi$, the surface elevation $\zeta$ and the pressure field $p$. This new model is called Airy wave theory and it is used for surface gravity waves, which are assumed to have a long wavelength compared to their amplitude ($\frac{A}{\lambda} \ll 1$). It consists of removing the non-linear terms in the equations (recommended practice in [16]):
$$\nabla^2\varphi = 0, \quad -h < y < 0 \qquad (2.1)$$
$$\frac{\partial\varphi}{\partial t} + \frac{p - p_a}{\rho} + g y = 0, \quad y < \zeta \qquad (2.2)$$

The boundary conditions are
$$\frac{\partial\varphi}{\partial y} = 0, \quad y = -h$$
$$\frac{\partial^2\varphi}{\partial t^2} + g\frac{\partial\varphi}{\partial y} = 0, \quad y = 0$$
Given $\varphi$, the surface elevation is computed as
$$\zeta(x, t) = -\frac{1}{g}\frac{\partial\varphi}{\partial t}\Big|_{y=0} \qquad (2.4)$$
This theory is also referred to as linear (Airy) wave theory, presented in [17, 4]. We consider a plane horizontal sea bottom, and consider only harmonic solutions of this set of equations. These solutions are written
$$\varphi = \frac{gA}{\omega}\frac{\cosh k(y+h)}{\cosh kh}\sin(kx - \omega t)$$
$$\zeta = A\cos(kx - \omega t)$$
where $A = H/2$ is the amplitude of the wave, $\omega$ the angular speed and $k$ its wave number. The wavelength of the wave is $\lambda = 2\pi/k$ and the period is $T = 2\pi/\omega$. A point of constant phase, i.e. a moving point such that $kx - \omega t$ is constant, moves at the phase velocity $V_p = \frac{\omega}{k} = \frac{\lambda}{T}$. Note that for very deep water, i.e. $h \gg A$, the velocity potential can be approximated by
$$\varphi = \frac{gA}{\omega}\sin(kx - \omega t)\,e^{ky}$$
In electromagnetic and sound waves the relation between the wave number and the frequency is linear. This is not the case for water waves. The relation between the angular speed $\omega$ and the wave number $k$ is called the dispersion relation:
$$\omega^2 = gk\tanh(kh)$$
which simplifies to $\omega^2 = gk$ in deep water. This provides a mapping between $\omega$ and $k$.
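Because the dispersion relation is implicit in $k$, it is usually solved numerically. A minimal sketch, assuming a fixed-point iteration started from the deep-water value (the function name and tolerance are illustrative):

```python
import numpy as np

def wave_number(omega, h, g=9.81, tol=1e-10):
    """Solve omega^2 = g*k*tanh(k*h) for k by fixed-point iteration,
    starting from the deep-water value k = omega^2 / g."""
    k = omega**2 / g
    for _ in range(200):
        k_next = omega**2 / (g * np.tanh(k * h))
        if abs(k_next - k) < tol:
            break
        k = k_next
    return k_next

# example: a 6.4 s wave in 50 m of water
omega = 2 * np.pi / 6.4
k = wave_number(omega, h=50.0)
wavelength = 2 * np.pi / k    # lambda = 2*pi/k
phase_speed = omega / k       # V_p = omega/k
```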

2.1.3 Spectrum and random phase-amplitude model

The harmonic solutions of the governing equations of water waves are simply shifted cosines at a single frequency, and a sum of such harmonic solutions still satisfies those equations. Linear wave theory thus inspired the random phase-amplitude model for the representation of waves:
$$\zeta(t) = \sum_{i=1}^{N} a_i\cos(2\pi f_i t + \alpha_i)$$
where $a_i$ and $\alpha_i$ are random variables. The phase $\alpha_i$ is uniformly distributed over $[0, 2\pi]$. The amplitude $a_i$ is, at each frequency, Rayleigh distributed with a parameter $\mu_i$ depending on the frequency:
$$p(a_i) = \frac{\pi a_i}{2\mu_i^2}\exp\left(-\frac{\pi a_i^2}{4\mu_i^2}\right) \quad \text{for } a_i \geq 0$$
We only have to define a mean amplitude $E\{a_i\} = \mu_i$ at each frequency to get a distribution of the amplitudes. The frequencies $f_i$ are often taken at a regular frequency interval $\Delta f$. We can also define the variance density spectrum as
$$E(f) = \lim_{\Delta f\to 0}\frac{1}{\Delta f}\,E\left\{\tfrac{1}{2}a^2\right\}$$
Since the amplitudes and phases are random, this representation of the sea is a stationary Gaussian process. Note that the sea is never really stationary; in general we divide a wave record into possibly overlapping segments that are deemed to be approximately stationary. The process in each of these segments can then be entirely described by a covariance function. All of the above concepts can be generalised to two-dimensional spectra for the description of multi-directional waves. In that case the variance density spectrum also depends on the direction of the propagating wave ($E(f, \theta)$). Usually this spectrum is concentrated around one main direction (the direction of the wind), but sometimes the spectrum is influenced by two distinct wave systems that add up (for example a wind sea combined with a swell). In this work we focus on a one-dimensional spectrum. Note that sometimes the dispersion relation is not respected; this can be due for example to an ambient current.

2.1.4 Waves in oceanic waters

Oceanic waters are deep waters uninfluenced by currents or obstacles. In simplified conditions, the energy density spectrum of the waves depends only on the wind, the distance to the upwind coastline and the time since the wind started to blow. Some spectral shapes are widely used in wave engineering: the JONSWAP spectrum for young sea states and the Pierson-Moskowitz spectrum for fully developed sea states, i.e. sea states in which the wind has been blowing for a long time. The Pierson-Moskowitz spectrum is
$$E_{PM}(f) = \alpha_{PM}\, g^2 (2\pi)^{-4} f^{-5}\exp\left(-\frac{5}{4}\left(\frac{f}{f_{PM}}\right)^{-4}\right)$$
The JONSWAP spectrum generalises the Pierson-Moskowitz spectrum to younger sea states by adding a peak-enhancement term:
$$E_{JONSWAP}(f) = \alpha g^2 (2\pi)^{-4} f^{-5}\exp\left(-\frac{5}{4}\left(\frac{f}{f_{PM}}\right)^{-4}\right)\gamma^{\exp\left(-\frac{1}{2}\left(\frac{f/f_{peak} - 1}{\sigma}\right)^2\right)}$$
$\gamma$ is a peak-enhancement factor and $\sigma$ is a peak-width parameter ($\sigma = \sigma_a$ for $f \leq f_{peak}$ and $\sigma = \sigma_b$ for $f > f_{peak}$, because the peak widths differ to the left and right of the peak frequency). The shape of the JONSWAP spectrum has been validated for many sea states. Sea states always seem to stabilise into a JONSWAP spectrum, thanks to the energy exchange in quadruplet wave-wave interactions and dissipation by white-capping. Note that the JONSWAP spectrum does not account for swell [18]. In figure 2.2 we compare the discrete Fourier transform computed over the simulation time (normalised by the number of samples) with the coefficients of the cosines. The irregular appearance of the DFT coefficients is due to the choice of the sample time and the simulation time: over the analysed window the signal is non-periodic, and the frequency components spread to nearby frequencies.
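As an illustration of the random phase-amplitude model with a JONSWAP shape, the sketch below evaluates the spectrum and synthesises an artificial elevation record of the kind used for figures 2.1 and 2.2. The default values $\gamma = 3.3$, $\sigma_a = 0.07$, $\sigma_b = 0.09$, the scaling of the spectrum through $m_0 = H_s^2/16$, and the use of deterministic amplitudes $a_i = \sqrt{2E(f_i)\Delta f}$ instead of Rayleigh-distributed ones are assumptions of this sketch, not values taken from the thesis.

```python
import numpy as np

def jonswap(f, Hs, Tp, gamma=3.3):
    """JONSWAP variance density spectrum E(f), scaled so that m0 = Hs**2/16."""
    fp = 1.0 / Tp
    sigma = np.where(f <= fp, 0.07, 0.09)            # peak-width parameter
    shape = f**-5.0 * np.exp(-1.25 * (f / fp)**-4.0) \
        * gamma**np.exp(-0.5 * ((f / fp - 1.0) / sigma)**2)
    m0 = np.trapz(shape, f)                          # zeroth moment of the raw shape
    return (Hs**2 / 16.0) * shape / m0

def synthesize_elevation(t, Hs, Tp, df=0.005, f_max=1.0, seed=0):
    """Sum of cosines with random phases and amplitudes sqrt(2*E(f)*df)."""
    rng = np.random.default_rng(seed)
    f = np.arange(df, f_max, df)
    a = np.sqrt(2.0 * jonswap(f, Hs, Tp) * df)       # component amplitudes
    alpha = rng.uniform(0.0, 2.0 * np.pi, f.size)    # random phases
    return (a[:, None] * np.cos(2 * np.pi * f[:, None] * t + alpha[:, None])).sum(axis=0)

t = np.arange(0.0, 300.0, 0.1)
zeta = synthesize_elevation(t, Hs=1.0, Tp=6.4)       # sea state of figure 2.1
```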

Figure 2.2: Comparison of the cosine coefficients of the JONSWAP spectrum and the discrete Fourier transform of an artificial wave record, for a peak period of 4.2 s and a significant wave height of 1 m

2.2 Modelling of Wave Energy Conversion Systems

Here we present the description of the dynamics of the WEC and its representation.

2.2.1 Coordinates

The WEC is a floating body, moving and rotating on the sea surface in three dimensions. We can define a 3-axis system $x, y, z$ with origin at a chosen equilibrium position of the buoy (i.e. where the static forces such as buoyancy, gravity and some machinery forces compensate each other, see section 2.7). The WEC motion is hence described by six degrees of freedom, three in translation along the axes and three in rotation around the axes. The rotations are performed according to the standard order for Euler angles. The six degrees of freedom are commonly referred to as 1-surge, 2-sway, 3-heave, 4-roll, 5-pitch, 6-yaw; see figure 2.3. The position of the WEC in space with respect to the chosen reference frame is described by a vector $\eta$. One can use models accounting only for a subset

Figure 2.3: Axes system for the modelling of a WEC

of the degrees of freedom. Such a model can for example describe the waves only in the two dimensions $x$ and $z$ (these are called plane waves). The WEC being symmetric, we can then consider only its movement in surge, heave and pitch.

2.2.2 Equation of motion

The equation of motion of the WEC equates, as usual, the sum of the forces exerted on the WEC to the mass of the WEC times its acceleration. The WEC is acted upon by several contributions. There are the hydrodynamic and wind forces exerted by the environment; the buoy is also coupled to a Power Take-Off (PTO) system which exerts a force on the device, and there is a cable linked to a system at the bottom of the ocean. A sketch of a general point-absorber type WEC can be seen in figure 2.4. In CorPower Ocean's case, the PTO and the buoy are not linked by a wire, but are coupled mechanically (wavespring force, pretension force, generator force, friction). The equation of motion for the buoy can then be written as
$$M\ddot\eta = \tau_{hydro} + \tau_w + \tau_{PTO} + \tau_g \qquad (2.6)$$
$M$ refers to the inertia matrix of the buoy, $\tau_{hydro}$ is the total force exerted by the sea on the buoy, $\tau_w$ is the force of the wind, which can usually be neglected, and $\tau_{PTO}$ is the force exerted by the PTO system on the buoy. The hydrodynamic forces can be divided as
$$\tau_{hydro} = \tau_{exc} + \tau_{hyst} + \tau_{rad} + \tau_{drag}$$

Figure 2.4: Sketch of a WEC with a PTO linked with a wire to the buoy

where $\tau_{exc}$ is the excitation force, $\tau_{hyst}$ is the hydrostatic restoring force, $\tau_{rad}$ the radiation force and $\tau_{drag}$ the drag force due to friction. We give more explanation on those forces in section 2.2.3.

The machinery force $\tau_{PTO}$ can be expressed as
$$\tau_{PTO} = \tau_{trans} + \tau_{gas,pretension} + \tau_{friction} + \tau_{gas,wavespring}$$
$\tau_{trans}$ is the force due to the generator; $\tau_{gas,pretension}$ and $\tau_{gas,wavespring}$ are forces exerted by cylinders that link the slide and the buoy: the first one aims at protecting the PTO from too high forces and pulls the buoy down to its equilibrium position, while the second one increases the amplitude of the oscillations of the WEC. $\tau_{friction}$ gathers the friction forces. As it is not necessary for understanding this thesis, the mechanical system is not detailed any further and we refer the reader to CorPower Ocean's documentation. In this work the PTO force will most often be given as input data. The equation of motion in its canonical form will hence be, for us,

$$M\ddot\eta = \tau_{exc} + \tau_{hyst} + \tau_{rad} + \tau_{drag} + \tau_w + \tau_{PTO} + \tau_g \qquad (2.7)$$
This equation can be linearised: the hydrostatic force depends on the position of the buoy, the radiation force depends on the acceleration and the speed, the drag force depends on the speed, and the wind forces are neglected. We then get an equation of the form
$$(M + m_{inf})\ddot\eta + D\dot\eta + G\eta = \tau_{exc}(t) + \tau_{PTO} + \tau_g$$

where $m_{inf}$ is the added mass, a term due to the radiation force, $D$ is a damping matrix that accounts for the hydrodynamic damping due to radiation and viscous forces, and $G$ is the restoring matrix, accounting for the hydrostatic forces. We can rewrite the force balance in the frequency domain as
$$Mi\omega\hat\nu + D\hat\nu + \frac{1}{i\omega}G\hat\nu = \hat\tau_{exc} + \hat\tau_{PTO} + \hat\tau_g$$
where $\hat\nu = \mathcal{F}\{\nu(t)\}$ is the Fourier transform of the velocity vector $\nu = \dot\eta$. We can then introduce the intrinsic impedance $Z_i(\omega)$ of the floating body [19, 3],
$$Z_i(\omega) = D + iX,$$
with $X = \omega M - \frac{G}{\omega}$ the so-called reactance of the system. Hence we have
$$Z_i\hat\nu = \hat\tau_{exc} + \hat\tau_{PTO} + \hat\tau_g$$
In this simplification $\nu$ is the output of a system with two input signals, $\tau_{exc}(t)$ and $\tau_{PTO}$. In this work we are interested in predicting the excitation force, so that the controller of the buoy can act on $F_{PTO}$ through the generator torques and in this way get the most out of the WEC.

2.2.3 Hydrodynamic forces

Excitation force. The excitation force is the force that an incident wave applies on the buoy held fixed. This force depends of course on the shape of the buoy. A way to approximate it is to use simulations to compute the force applied on the buoy for a set of wave frequencies. This is what the software WAMIT does [20].

WAMIT gives a series of gains $F_{j,WAMIT}(\omega_i)$ and phases $\theta_{j,WAMIT}(\omega_i)$ for a set of angular frequencies $\omega_i$. We can then find the "raw" (non-corrected) excitation force from those coefficients and the wave elevation $\zeta(x_{ref}, y_{ref}, t)$ at the origin of the reference frame.

26 ×105 6

4

2 Transfer function 0 0 0.2 0.4 0.6 0.8 1 1.2 Frequency(Hz)

10000

5000

0 Impulse response -5000 -10 -8 -6 -4 -2 0 2 4 6 8 10 time

Figure 2.5: 푀퐻푒푥푐(푤푖) for the heave mode, with associated impulse response

There is a transfer function $\hat H_{\zeta F}(\omega)$ linking the raw excitation force to the surface elevation,
$$\hat F_{exc}(\omega) = \hat H_{\zeta F}(\omega)\hat\zeta(\omega)$$
or, in the time domain,
$$F_{exc}(t) = h_{\zeta F}(t) * \zeta(t)$$
In this work we are mostly interested in the heave mode of motion. Let us take a look at $F_{exc}(\omega_i)$ and the corresponding impulse response for the heave mode in figure 2.5. The transfer function from $\zeta$ to the excitation force is that of a low-pass filter. The impulse response, also displayed, is very non-causal: the wider it is, the further into the future we need information about those signals in order to compute the excitation force. It therefore seems to make more sense to use a measure of the current excitation force, since this is the result of the filtering of the wave elevation by the dynamics of the body [6]. Let us note that the computation of the excitation force with a transfer function is one more approximation, which assumes the movements of the buoy to be small compared with the wavelength of the waves. The excitation force is, however, also influenced by the position and motion of the buoy (itself controlled by the control strategy, MPC or other), making the linear dependence of the excitation force on the surface elevation invalid. The influence of this issue on the final result has not been quantified and is left for later work. In order to account for the movement of the buoy and its instantaneous submergence, we apply a correction to the excitation force. In surge and heave it is given by
$$\tau_{exc,surge} = \frac{V_{sub}}{V_0}F_{exc,surge}$$
$$\tau_{exc,heave} = \frac{V_{sub}}{V_0}\left(F_{exc,heave} - A_w\rho g\zeta\right)$$
Note that there is a similar correction for the sway mode of motion and no correction for the rotations in this model.

Hydrostatic force. The hydrostatic force only acts in the heave mode of motion. It is simply proportional to the instantaneous submerged volume of the buoy, $V_{sub}(t)$, and is assumed to act at the centroid of this submerged volume:
$$\tau_{hyst,heave}(t) = \rho g V_{sub}(t)$$

Radiation force. When the buoy oscillates in the ocean it creates new waves, and to do so it needs to apply a force on the water. The radiation force is the reaction force of the water on the buoy. In the frequency domain, the radiation force is a transformation of the device velocity and acceleration,
$$\tau_{rad} = -B(\omega)\dot\eta + A(\omega)\ddot\eta$$
In the time domain this becomes a convolution. In order to simplify the computation of the radiation force, a state-space approximation is often used:
$$\dot z(t) = \tilde A z(t) + \tilde B\dot\eta$$
$$\tau_{rad} = \tilde C z(t) + \tilde D\dot\eta$$

Drag force. The drag force is due to the friction between the buoy and the water. It is quadratic in the velocity of the buoy.

2.2.4 Pressure signal

Similarly, we can write a relation between the surface elevation and the pressure at a probe situated a few metres below the surface, which measures the instantaneous pressure:
$$\hat p_p(\omega) = \hat H_{\zeta p}(\omega)\hat\zeta(\omega)$$
$$p_p(t) = h_{\zeta p}(t) * \zeta(t)$$
Note that this transfer function is also non-causal.

The depth $d_p$ at which the probe is located is considered constant. In that case the total pressure at the probe is $p_{tp} = p_p + p_a + \rho g d_p$, where $p_p$ is the dynamic pressure and $p_a$ the atmospheric pressure. In Airy wave theory, the dynamic pressure can be linked to the surface elevation using the following relation [4]:
$$p_p = -\rho\frac{\partial\varphi}{\partial t}$$
For a sinusoidal wave given by $\zeta(t) = A\cos(kx - \omega t)$, the corresponding velocity potential is
$$\varphi = \frac{gA}{\omega}\frac{\cosh(k(y+h))}{\cosh(kh)}\sin(kx - \omega t)$$
and the pressure is given by
$$p_p = \rho g A\frac{\cosh(k(d_p + h))}{\cosh(kh)}\cos(kx - \omega t)$$
This is a result of Airy wave theory and is only valid when the conditions for the application of this theory are satisfied. It also does not account for the displacement of the buoy and its potential inclination. We call $C_p = \frac{\cosh(k(d_p + h))}{\cosh(kh)}$ the depth factor, which decreases as the frequency rises. We can see that
$$\hat H_{\zeta p}(\omega) = \rho g C_p$$
The calculated depth factor for different probe depths is shown in figure 2.6 (the depth coefficient for $d_p = 0$ is always one).


Figure 2.6: Depth factor for several depth values; bottom depth is 50 m

As we can see, the deeper the pressure probe is, the sooner the depth factor decreases with frequency. This means that only the information contained in the lower frequencies will be present in the pressure signal. For a depth of 20 m, the frequency components above 0.25 Hz completely disappear, while there are important frequency components in the surface elevation well above 0.5 Hz in the case of a peak period of 4.2 s (see figure 2.2). We would like to link the pressure, one of our measurements, to the target signal, the excitation force. Unfortunately one cannot simply invert the transfer function in the frequency domain, because for higher frequencies the transfer function between the surface elevation and the pressure becomes zero. It is then clear that the frequency components that are not present in the pressure measurements cannot be reconstructed when predicting the excitation force. It is crucial to make sure that the relevant wave periods are detectable by the pressure sensor.
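The depth factor itself is straightforward to evaluate numerically. A small sketch, with probe positions given as negative vertical coordinates as in the legend of figure 2.6 (the function names and frequency grid are illustrative):

```python
import numpy as np

def wave_number(omega, h, g=9.81):
    """Fixed-point solution of the dispersion relation omega^2 = g*k*tanh(k*h)."""
    k = omega**2 / g
    for _ in range(200):
        k = omega**2 / (g * np.tanh(k * h))
    return k

def depth_factor(f, z_probe, h=50.0):
    """C_p(f) = cosh(k*(z_probe + h)) / cosh(k*h), with z_probe <= 0 the vertical
    coordinate of the probe (0 at the mean surface, -h at the bottom)."""
    k = np.array([wave_number(2 * np.pi * fi, h) for fi in np.atleast_1d(f)])
    return np.cosh(k * (z_probe + h)) / np.cosh(k * h)

freqs = np.linspace(0.02, 1.0, 100)
# attenuation of each frequency component for probes 2 m to 20 m below the surface
attenuation = {z: depth_factor(freqs, z) for z in (-2, -5, -10, -15, -20)}
```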

2.3 Controller

The goal of the controller is to maximize the absorption of energy by the WEC. This is achieved by acting on the machinery forces by engaging or

disengaging some flywheels linked to the generators and by setting the generator damping both when the flywheels are engaged and when they are disengaged. There are also some braking parameters that can be used in case the generator torque alone is not sufficient. There is an optimal linear solution to the control problem, which consists of having the WEC oscillate with its velocity in phase with the excitation force, and with the oscillation adjusted so that the radiation power equals half the excitation power. This is equivalent to writing the machinery force as the filtering of the speed by a non-causal impulse response (this is referred to as complex-conjugate, or phase and amplitude, control) [21]. In order to be of use for a controller, the predictions must be sufficiently accurate over a significant period of time. For reactive controllers, the frequency response of the system with respect to the excitation force is non-causal, and that sets requirements on the prediction horizon. In
[14], the influence of the time constant $\tau_1$ of the optimal controller of the buoy was quantified and a nearly linear relation between that time constant and the needed prediction horizon was established. The optimal controller is characterised by $H_{opt}(\omega)$ ($V_{opt} = H_{opt}(\omega)F_{exc}(\omega)$) and $\tau_1 = 1/\omega_1$, where $\omega_1$ is the cut-off frequency of $H_{opt}$. This simply means that the prediction horizon depends on the length of the non-causal part of the impulse response of the system with respect to the excitation force. Other control strategies have been implemented so far, but the control strategy of interest for us here is Model Predictive Control. The MPC for CorPower Ocean's WEC will take a prediction of the future excitation force as input and output instructions for the engagement of the flywheels. These instructions are determined so as to maximise an objective function over the relevant future. As in the optimal linear case above, the controller is "non-causal" and the quality of the force prediction will greatly influence its yield. Another important parameter for getting good predictions of the excitation force is the bandwidth of the excitation transfer function (a low-pass filter). The controller is not likely to use a prediction of the excitation force more than two periods ahead [7]. In order to quantify the influence of the prediction error on the performance of the controller, an analysis similar to [15] has to be performed with the non-linear MPC that is currently implemented for CorPower Ocean's WEC. However, since the MPC still needs to go through some improvements, this analysis is left for later work.

2.4 Cyclical prediction models

Linear wave theory shows that the surface elevation can be seen as a set of sinusoidal waves propagating in different directions. As a consequence, we write the following cyclical model for the surface elevation at time step $k$:
$$\zeta(k) = \sum_{i=1}^{m}\left(a_i\cos(\omega_i T_s k) + b_i\sin(\omega_i T_s k)\right) + \epsilon(k)$$
As we can see, it is a sum of sinusoids whose amplitudes and phases are determined by the choice of the $a_i$ and $b_i$. The frequencies are to be chosen beforehand, and the $a_i$ and $b_i$ are chosen so as to fit past data with a least-squares approach. $T_s$ is the sample time. The important choices to make are the range of the frequencies and the distribution of the frequencies within that range. The most robust choice is a homogeneous distribution over the range, as it will not be affected too much by a change in the wave spectrum [22].

Harvey's structural model. The first cyclical model is Harvey's structural model:
$$\zeta(k) = \sum_{i=1}^{m}\psi_i(k) + \epsilon(k)$$
$$\begin{pmatrix}\psi_i(k+1)\\ \psi_i^\star(k+1)\end{pmatrix} = \begin{pmatrix}\cos(\omega_i T_s) & \sin(\omega_i T_s)\\ -\sin(\omega_i T_s) & \cos(\omega_i T_s)\end{pmatrix}\begin{pmatrix}\psi_i(k)\\ \psi_i^\star(k)\end{pmatrix} + \begin{pmatrix}w_i(k)\\ w_i^\star(k)\end{pmatrix}$$
We can derive a complete state-space form:
$$x(k+1) = Ax(k) + w(k)$$
$$\eta(k) = Cx(k) + \zeta(k)$$
In this model $\psi_i(k+1)$ and $\psi_i^\star(k+1)$ are the real and imaginary parts of the harmonic component with angular speed $\omega_i$. This time the parameters to be identified, once the frequencies are chosen, are the initial values $\psi_i(0)$ and $\psi_i^\star(0)$. The issue with this model is that we need to pick the right frequencies so that the model performs well while the number of parameters does not become too high.

Extended Kalman Filter. The extended Kalman filter consists of an augmented cyclical model. The extension lies in the fact that instead of having fixed frequencies, the model allows those frequencies to change over time. As a consequence the model can no longer be described with a linear state-space representation, and consequently the Kalman filter cannot be applied directly. The extended Kalman filter consists of a linearisation of the model and an application of the regular Kalman filter. This assumes of course that the time steps are small enough that the linearisation remains relevant. We show here a model with only one harmonic component, where the angular speed $\omega(k)$ is allowed to depend on the time step. The estimation of $\omega(k)$, $\psi(k)$ and $\psi^\star(k)$ is made using the Kalman filter on the following model:
$$\begin{cases}\begin{pmatrix}\psi(k+1)\\ \psi^\star(k+1)\\ \omega(k+1)\end{pmatrix} = \begin{pmatrix}\cos(\omega(k)T_s) & \sin(\omega(k)T_s) & 0\\ -\sin(\omega(k)T_s) & \cos(\omega(k)T_s) & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}\psi(k)\\ \psi^\star(k)\\ \omega(k)\end{pmatrix} + \begin{pmatrix}\epsilon(k)\\ \epsilon^\star(k)\\ \kappa(k)\end{pmatrix}\\ \eta(k) = \psi(k) + \zeta(k)\end{cases}$$
The problem with this method for prediction is that when several frequencies are used they are all updated in the same way, which is a significant drawback for this model [6].
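For illustration, a compact sketch of this extended Kalman filter for a single harmonic component is given below, with state $[\psi, \psi^\star, \omega]$ and measurement $\eta(k) = \psi(k)$ plus noise. The noise covariances `Q` and `R`, the initial state and the function name are placeholders, not values from the thesis.

```python
import numpy as np

def ekf_harmonic(eta, Ts, omega0, Q=np.diag([1e-3, 1e-3, 1e-5]), R=1e-2):
    """Track one harmonic component and its slowly varying angular speed."""
    x = np.array([eta[0], 0.0, omega0])      # [psi, psi*, omega]
    P = np.eye(3)
    estimates = []
    for y in eta:
        c, s = np.cos(x[2] * Ts), np.sin(x[2] * Ts)
        # propagate the state through the non-linear model
        x_pred = np.array([c * x[0] + s * x[1], -s * x[0] + c * x[1], x[2]])
        # Jacobian of the propagation with respect to [psi, psi*, omega]
        F = np.array([[c,  s, Ts * (-s * x[0] + c * x[1])],
                      [-s, c, Ts * (-c * x[0] - s * x[1])],
                      [0., 0., 1.]])
        P = F @ P @ F.T + Q
        # measurement update: eta(k) = psi(k) + noise
        H = np.array([[1.0, 0.0, 0.0]])
        S = H @ P @ H.T + R
        K = (P @ H.T) / S
        x = x_pred + (K * (y - x_pred[0])).ravel()
        P = (np.eye(3) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```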

2.5 Auto-regressive models

A signal $\zeta[k]$ that follows an autoregressive model of order $n$ satisfies the following relation for all $k \in \mathbb{Z}$:
$$\zeta[k] = \sum_{i=1}^{n} a_i\zeta(k-i) + \nu(k)$$
where the $a_i$ are the AR coefficients and $\nu[k]$ is white noise of variance $\sigma_1^2$. If we assume the excitation force, the surface elevation or any other hydrodynamic signal to follow such a model, we need access to estimates of the AR parameters, $\hat a_i(k)$. The estimation of the future wave elevation $\hat\zeta(k+l|k)$ (at time $k+l$ given information up to time $k$) is then given by
$$\hat\zeta(k+l|k) = \sum_{i=1}^{n}\hat a_i\hat\zeta(k+l-i|k)$$
where of course $\hat\zeta(i|k) = \zeta(i)$ if $i \leq k$. We will call this the "plug-in" formulation. We can use another formulation (let us call it the "direct" formulation) which computes the new estimate using only a linear combination of the available past measurements, but which is different for every horizon:
$$\hat\zeta(k+l|k) = \sum_{i=1}^{n} a_i^{(l)}\zeta(k-i+1)$$
where the predictive coefficients $a_i^{(l)}$ can be computed from the AR coefficients as follows:
$$a_i^{(l)} = \sum_{j=1}^{n} a_j a_i^{(l-j)}$$
We can also look at the spectral properties of an AR model. In the z-domain the equations above can be written
$$\zeta(z)\phi(z) = \nu(z)$$
$$\zeta(z) = \frac{\nu(z)}{\prod_{i=1}^{n}(1 - p_i z^{-1})} \quad \text{for } |z| > p_i,\; i = 1, 2, \ldots, n$$
If we take an even number of poles, we can group them into complex conjugate pairs $(p_i, p_i^\star)$. We then have $(1 - p_i z^{-1})(1 - p_i^\star z^{-1}) = 1 - 2\,\mathrm{Re}(p_i)z^{-1} + |p_i|^2 z^{-2}$. If we take an innovation noise that only has innovation between $k = -n$ and $k = -1$, its z-transform $\nu(z)$ is a polynomial in $z$ of degree $n$. We then have
$$\zeta(z) = \frac{\nu(z)}{\prod_{i=1}^{n/2}\left(1 - 2\,\mathrm{Re}(p_i)z^{-1} + |p_i|^2 z^{-2}\right)}$$
which can be rewritten as a sum of fractions:
$$\zeta(z) = \sum_{i=1}^{n/2}\frac{P_i(z)}{1 - 2\,\mathrm{Re}(p_i)z^{-1} + |p_i|^2 z^{-2}}$$
with $P_i(z)$ polynomials of degree $n$, which in the time domain is a sum of sinusoids whose frequencies are equal to the phases of the poles and whose initial phases depend on the innovation $\nu[k]$. This means AR models can indeed be seen as a kind of cyclical model, since they actually only propagate sines and cosines (with a little damping determined by the magnitudes of the poles).
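A minimal sketch of how such an AR model can be identified and used for plug-in multi-step prediction; ordinary least squares is used here instead of the long-range predictive identification of [5], and the function names are illustrative:

```python
import numpy as np

def fit_ar(x, n):
    """Ordinary least-squares estimate of the coefficients a_1..a_n of
    x[k] = sum_i a_i * x[k-i] + v[k]."""
    x = np.asarray(x, dtype=float)
    Phi = np.array([[x[k - i] for i in range(1, n + 1)] for k in range(n, len(x))])
    a, *_ = np.linalg.lstsq(Phi, x[n:], rcond=None)
    return a

def predict_ar(history, a, horizon):
    """Plug-in multi-step prediction: predicted values are fed back as measurements."""
    n = len(a)
    buf = list(history[-n:])                 # last n samples, oldest first
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(a, buf[::-1]))    # a_1*x[k] + a_2*x[k-1] + ...
        out.append(nxt)
        buf = buf[1:] + [nxt]
    return np.array(out)
```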

Chapter 3

Prediction methods

We present here the different methods that were considered to compute a prediction of the excitation force. We start by introducing the different sources of information, as well as their link to the signal of interest, the excitation force. We then present the three methods: AR model and transformation, Kalman prediction and Wiener prediction.

3.1 Available data and approximations

We have several sources of measurements:

∙ A pressure sensor situated under the buoy

∙ Force and dynamic sensors: $F_{PTO}$, $\ddot\nu$, $\dot\nu$, $\nu$

The pressure sensors that will actually be used in the buoy have an error band of approximately 3 mbar. For testing purposes, we model this as Gaussian white noise with standard deviation $\sigma = 3$ mbar. The noise on the other measurements will have to be quantified in real operation.

Excitation force and computation of a linked signal: the resulting force. We cannot compute the current excitation force directly from the signals we have, but we can get some clues regarding its value. From the force sensors and the knowledge of the instantaneous position of the buoy we can compute a signal which is related to the excitation force and the instantaneous water elevation.

By simply rewriting the equation of motion for the buoy in the heave mode, we have, for a two-dimensional model of motion,
$$\tau_{exc,heave} = \left(M\dot\nu - \tau_g - F_{PTO}\cdot e_{AC} - \tau_{hyst} - \tau_{rad} - \tau_{drag}\right)_{heave}$$
where $F_{PTO}$ is the norm of $\tau_{PTO}$ and $e_{AC}$ is the direction in which the mooring line is pointing. A few approximations can be made in order to simplify things a bit. We approximate $e_{AC}$ by $\begin{pmatrix}0\\1\end{pmatrix}$. $F_{PTO}$, $\dot\nu$ and $\nu$ are considered known and can be used to compute the drag force, the radiation force and the machinery force.

The following equation defines $R_F$, a resulting force depending on the measurements $F_{PTO}$, $\nu$ and $\dot\nu$:
$$R_F(F_{PTO}, \nu, \dot\nu) = \left(M\dot\nu - \tau_g - F_{PTO}\begin{pmatrix}0\\-1\end{pmatrix} - \tau_{rad} - \tau_{drag}\right)_{heave}$$
Knowing this resulting force $R_f$, we get some information about $F_{heave}$:
$$R_F(F_{PTO}, \nu, \dot\nu) = \mathrm{Corr}(F_{exc,heave}) + F_{hyst,heave}(\nu, \zeta)$$
However, that relation is not a linear one. A linearisation gives
$$\tau_{hyst,heave} \simeq V_{eq}\rho g + \rho g\frac{\partial V_{sub}(x)}{\partial x}\Big|_{x=0}(\zeta - pos)$$
$$\tau_{exc,heave} \simeq F_{exc,heave} - A_w\rho g\zeta$$
$$R_f - V_{eq}\rho g \simeq F_{exc,heave} + \rho g\frac{\partial V_{sub}(x)}{\partial x}\Big|_{x=0}(\zeta - pos) - A_w\rho g\zeta$$
We can see that $R_f - V_{eq}\rho g + \rho g\frac{\partial V_{sub}(x)}{\partial x}\big|_{x=0}\,pos$ is a linear combination of $F_{exc,heave}$ and $\zeta$. The error on $R_f$ in the real-world system will be due not only to errors in the measurements but also to those approximations. We can see that this approximation is valid in figure 3.1. A plot of the four signals of interest is shown in figure 3.2; the resulting force is almost in phase with the excitation force.

36 ×106 1.5 R -V ρ g f eq Linear estimation 1

0.5 g [N]

ρ 0 eq -V f -0.5 R

-1

-1.5

50 60 70 80 90 100 110 Time step

Figure 3.1: Linear approximation of 푅푓 − 푉푒푞휌푔

Figure 3.2: Signals for a wave period of 6.4 s, wave height of 1 m (legend: dynamic pressure (Pa), $R_f - V_{eq}\rho g$ minus position correction (N), non-corrected excitation force (N), and $10^5\,\zeta$)

37 3.2 AR model and transformation

A first approach was to use an AR model to predict the pressure in the future, then use Wiener deconvolution with the knowledge of the transfer function from the surface elevation to the pressure to estimate the values of the excitation force. Note that the pressure must be predicted a very long time in advance, at least $L + W_{imp}$, where $L$ is the maximum prediction horizon and $W_{imp}$ the width of the non-causal lobe of the impulse response. First we estimate the pressure as
$$\hat p(k+l|k) = \sum_{i=1}^{n} a_i\hat p(k+l-i|k)$$
with $l$ ranging from 1 to $L + W_{imp}$. The Wiener deconvolution filter (with infinite impulse response) is given by [23]:
$$G(f) = \frac{H^\star(f)S(f)}{|H(f)|^2 S(f) + N(f)}$$
where $N(f)$ is the covariance of the noise in the frequency domain. Here we assume white noise, which has a constant $N(f)$ over the whole spectrum. The flaw of this method lies in the fact that the Wiener deconvolution considers all values of the pressure, past and future, to be equally noisy. This is not the case, as the uncertainty is much higher for values far in the future, and the noise stops being stationary as well. In addition, because the relevant frequencies of the signal (up to 0.5 Hz) are filtered out if the pressure probe is located at a significant depth (see figure 2.6: frequencies higher than 0.25 Hz are non-existent for a pressure probe at a depth of 20 m), this method was deemed unfit. As a consequence, it has not been studied any further. Another method using an AR model has been considered: simply identifying an AR model on the resulting force, then using the predicted resulting force as the predicted excitation force (we saw that the two are quite close in figure 3.2). This method, as we will see, has a performance up to 10% inferior to the Wiener filter which we will address soon. Moreover, we also need to predict the surface elevation in order to be able to correct the excitation force, which is not addressed by this method.
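For reference, a sketch of the frequency-domain Wiener deconvolution step described above. The transfer function `H_f`, the signal spectrum `S_f` and the constant white-noise level `N0` are assumed to be sampled on the FFT grid of the (predicted) pressure record; these names are illustrative.

```python
import numpy as np

def wiener_deconvolve(p_pred, H_f, S_f, N0):
    """Apply G(f) = H*(f) S(f) / (|H(f)|^2 S(f) + N(f)) to a (predicted) pressure
    record, with a constant (white) noise level N0."""
    P = np.fft.fft(p_pred)
    G = np.conj(H_f) * S_f / (np.abs(H_f)**2 * S_f + N0)
    return np.real(np.fft.ifft(G * P))
```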

38 3.3 Kalman filter

We saw earlier that using the pressure alone was not a good way to get accurate results. We need to make use of the resulting force introduced in section 3.1 to get a signal linked to the excitation force. A Kalman filter is a common approach to combining measurements. In order to use all the information coming both from the pressure sensor and from the resulting force obtained from the dynamics of the system, we will use a non-linear Kalman filter with a set of measurements
$y_k$, an input $u_k$ and a state $x_k$. A Kalman filter is always based on a state-space representation:
$$x_{k+1} = Ax_k + Bu_k + Gw_k$$
$$y_k = h(x_k) + v_k$$
The vector of measurements at time $k$ is $y_k$; it contains the history of the measurements of the pressure and of the corrected resulting force $R_f$ (which does not account for the hydrostatic force and the excitation force). It is thus defined as follows:
$$y_k = \begin{pmatrix} p_{p,k} & p_{p,k-1} & \ldots & p_{p,k-h_s+1} & R_{f,k} & \ldots & R_{f,k-h_s+1}\end{pmatrix}$$
where $h_s$ is the parameter that defines how many values are kept. The function $h$ is defined as follows:
$$h(x_k) = \begin{pmatrix} p_{p,k}\\ \vdots\\ p_{p,k-h_s+1}\\ \frac{V_{sub,k}}{V_{eq}}\left(F_{exc,k} - \zeta_k A_{w,k}\rho g\right) + (V_{sub,k} - V_{eq})\rho g\\ \vdots\\ \frac{V_{sub,k-h_s+1}}{V_{eq}}\left(F_{exc,k-h_s+1} - \zeta_{k-h_s+1} A_{w,k-h_s+1}\rho g\right) + (V_{sub,k-h_s+1} - V_{eq})\rho g \end{pmatrix}$$
Several models have been tested; they can include as states the excitation force only, or additionally the pressure and the surface elevation. See section 3.3.1 for more information about system identification. Once the model is defined, the Kalman filter can be applied. Since the linear approximation of the function $h(x_k)$ was shown to be close to its actual value, we can use this linear approximation and hence have a constant Jacobian matrix $H(x_k) = H$.

Here is the recursion for the Kalman filter. In predictor form, the gain and covariance recursion read
$$K_k = AP_{k|k-1}H^T\left(HP_{k|k-1}H^T + R\right)^{-1}$$
$$P_{k+1|k} = AP_{k|k-1}A^T + GQG^T - AP_{k|k-1}H^T\left(HP_{k|k-1}H^T + R\right)^{-1}HP_{k|k-1}A^T$$
$$\hat x_{k+1|k} = A\hat x_{k|k-1} + Bu_k + K_k\left(y_k - H\hat x_{k|k-1}\right)$$
initialised with $\hat x_{0|-1} = x_0$ and $P_{0|-1} = P_0$. Equivalently, with the filter gain $K_k = P_{k|k-1}H_k^T S_k^{-1}$, where $S_k = H_k P_{k|k-1}H_k^T + R$, the measurement update reads
$$\hat x_{k|k} = \hat x_{k|k-1} + K_k\left(y_k - H_k\hat x_{k|k-1}\right), \qquad P_{k|k} = (I - K_kH_k)P_{k|k-1}$$
where $\hat x_{k+1|k}$ is the prediction of the state at step $k+1$ given the measurements up to step $k$. Notice that we need to initialise the mean and covariance of the initial state, $x_0$ and $P_0$. The prediction for a horizon $l > 1$ follows the recursion
$$\hat x_{k+l|k} = A\hat x_{k+l-1|k} + Bu_{k+l-1}$$
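A compact sketch of this filter-and-predict procedure for the linearised measurement model $y_k = Hx_k + v_k$. The loop structure, the standard predict/update form and the choice to hold the last known input over the prediction horizon are assumptions of this sketch rather than the thesis implementation.

```python
import numpy as np

def kalman_predict(A, B, G, H, Q, R, x0, P0, ys, us, horizon):
    """Run the Kalman recursion over measurements ys (with inputs us), then
    propagate the state estimate 'horizon' steps ahead without new measurements."""
    x, P = np.asarray(x0, dtype=float), np.asarray(P0, dtype=float)
    for y, u in zip(ys, us):
        # measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        # time update
        x = A @ x + B @ u
        P = A @ P @ A.T + G @ Q @ G.T
    # l-step-ahead prediction: x_{k+l|k} = A x_{k+l-1|k} + B u_{k+l-1}
    preds = []
    for _ in range(horizon):
        x = A @ x + B @ us[-1]   # future inputs approximated by the last known one
        preds.append(x.copy())
    return np.array(preds)
```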

3.3.1 System identification The choice of a model is a crucial one in the Kalman filter approach. The first question we might want to ask is : which are the states 푥푘? They can for example represent a history of the values of 퐹푒푥푐. But they could also include the values of 휁, or 푝푝 or even both! It could also be a set of more abstract states which do not represent anything but a linear combination of which would give the expected value. Let’s look at each of those options is most suitable.

Three signals 퐹푒푥푐, 휁 and 푝푝 as states The state vector 푥푘 at time 푘 is composed by the values of 퐹푒푥푐, 휁 and 푝푝 from a certain time before 푘 until a certain time after 푘. 푥푘 is defined as follows. (︀ )︀푇 푥푘 = 퐹푒푥푐,푘 퐹푒푥푐,푘−1 ... 퐹푒푥푐,푘−ℎ, 푝푝,푘 푝푝,푘−1 ... 푝푝,푘−ℎ 휁푘, ..., 휁푘−ℎ where ℎ = ℎ푠+푊퐼푚푝 −1, and 푊푖푚푝 is half the width of the impulse response used.

40 We assumed that the three signals follow an AR model with three inputs and three outputs. In the following, 퐴푅퐹 denotes for example the AR coeffi- cients for the excitation force and 푋휁퐹 the coefficients of 휁 in the AR model for 퐹푒푥푐 so that

$$\hat{F}_{exc,k+1} = \sum_{i=0}^{h} AR_{F,i}\, F_{exc,k-i} + \sum_{i=0}^{h} X_{\zeta F,i}\, \zeta_{k-i} + \sum_{i=0}^{h} X_{pF,i}\, p_{p,k-i}$$

Equivalent formulations hold for $\hat{\zeta}_{k+1}$ and $\hat{p}_{p,k+1}$. In matrix notation, the AR model is expressed through the matrix $A \in \mathbb{R}^{3(h+1)\times 3(h+1)}$,

$$A = \begin{pmatrix}
AR_F & X_{\zeta F} & X_{pF} \\
\begin{matrix} I_h & 0 \end{matrix} & 0_{h\times(h+1)} & 0_{h\times(h+1)} \\
X_{F\zeta} & AR_\zeta & X_{p\zeta} \\
0_{h\times(h+1)} & \begin{matrix} I_h & 0 \end{matrix} & 0_{h\times(h+1)} \\
X_{Fp} & X_{\zeta p} & AR_p \\
0_{h\times(h+1)} & 0_{h\times(h+1)} & \begin{matrix} I_h & 0 \end{matrix}
\end{pmatrix}$$

where each coefficient block $AR_\bullet$ and $X_{\bullet\bullet}$ is a $1\times(h+1)$ row vector and the shift blocks $\begin{pmatrix} I_h & 0 \end{pmatrix}$ push the history of each signal one step down.
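As a sketch (with illustrative names), the matrix A above could be assembled from the identified coefficient rows as follows; each coefficient vector is assumed to be a 1 x (h+1) row.

% Assemble the 3(h+1) x 3(h+1) transition matrix of the three-signal AR model.
function A = buildARMatrix(ARF, XzF, XpF, XFz, ARz, Xpz, XFp, Xzp, ARp)
    h1 = numel(ARF);                    % h + 1
    S  = [eye(h1-1), zeros(h1-1, 1)];   % shift block: pushes the history one step down
    Z  = zeros(h1-1, h1);               % zero block under the off-diagonal coefficient rows
    A  = [ ARF, XzF, XpF;
           S,   Z,   Z;
           XFz, ARz, Xpz;
           Z,   S,   Z;
           XFp, Xzp, ARp;
           Z,   Z,   S ];
end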

There is no external input in this model, hence no $B$ matrix. Let us now look at the Jacobian of the function $h$, $H_k$. Here the linear approximation of $R_f$ is used, so $H_k$ is constant for all $k$:

$$H_k = \begin{pmatrix} 0 & 0 & I_{h+1} \\ I_{h+1} & J_2 & 0 \end{pmatrix}, \qquad (J_2)_{kk} = \frac{\partial V_{sub}(x)}{\partial x}\bigg|_{x=0} - A_w \rho g$$

A state-space model identification

Instead of forcing the states to be the recent history of the different signals, we can simply identify a state-space model on the data: we choose an order for the state-space model, then use the three hydrodynamic signals as the elements of the output $y$. In this case the states do not come with any specific meaning, except that a linear combination of them provides the values of the current excitation force, surface elevation and pressure. It is quite straightforward to modify this state-space model to get the pressure and the resulting force as measurements. Indeed, we only need to modify the measurement matrix $H$, since the pressure is already an output of the identified model and the resulting force is a linear combination of the two other outputs of the identified model, the excitation force and the surface elevation. The identification is made in canonical form with the help of the Matlab function 'ssest'. If we set the 'Form' option to 'canonical' in ssest, we get a model in ARX form, but the order repartition is made "automatically".
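A minimal sketch of this identification route could look as follows; the signal names, sample time, model order and the numerical constants used to form the resulting force are illustrative assumptions, not values from the actual model.

% Identify a canonical-form state-space model with the three hydrodynamic signals
% as outputs, then rewrite the output map so that the measurements are the pressure
% and the (linearised) resulting force.
Ts   = 0.1;                                  % sample time used in this thesis [s]
y    = [Fexc(:), zeta(:), pp(:)];            % outputs: excitation force, elevation, pressure
data = iddata(y, [], Ts);                    % time-series data, no exogenous input
nx   = 15;                                   % model order (cf. the Kalman order used later)
sys  = ssest(data, nx, 'Form', 'canonical', 'Ts', Ts);

% New measurement matrix: the pressure stays an output, and the resulting force is
% approximated by the linear combination Rf ~ Fexc - Aw*rho*g*zeta.
rho = 1025; g = 9.81; Aw = 20;               % assumed values, for illustration only
Hnew = [0, 0, 1;                             % pressure
        1, -Aw*rho*g, 0] * sys.C;            % resulting force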

3.4 Wiener predictor

The Wiener predictor is a linear predictor aiming at minimizing the mean square error of estimation. For a signal $x(k)$ and some noisy measurements $y(k)$, the estimate $\hat{x}(k)$ of $x(k)$ is defined as

$$\hat{x}(k) = \sum_{l=l_1}^{l_2} w(l)\, y(k-l)$$

where the $w(l)$ are the filter weights. The MSE $\xi$ is given by

$$\xi = \mathrm{E}\left[E(k)^2\right] = \mathrm{E}\left[\left(X(k) - \hat{X}(k)\right)^2\right]$$

Here we want to combine all the information available, i.e. the pressure signal $p(k)$ and the resulting force $R_f$, in a Wiener filter with several inputs. The available information is the values of both input signals up to time $k$. The predictor is defined as follows:

$$\hat{F}_{exc}(k+h) = \sum_{j=0}^{n-1} w_p^{(h)}(j)\, p(k-j) + \sum_{j=0}^{n-1} w_R^{(h)}(j)\, R_f(k-j)$$

where $n$ is the order of the filter, $h$ is the prediction horizon of the predictor, and the coefficients $w_p^{(h)}(j)$ and $w_R^{(h)}(j)$ are to be computed from a system of equations, called the Wiener-Hopf equations.

Stationarity hypothesis and correlation/cross-correlation

We require that the processes involved be jointly weakly stationary in order to be able to construct the Wiener filter, i.e. the joint probability distribution of the signals does not change when shifted in time. That means the mean of each signal is time-independent, and the covariances only depend on the time difference. Hence we can define the following auto- and cross-covariances:

$$R_p(l) = \mathrm{E}\left[p(k)\, p^\star(k-l)\right]$$
$$R_R(l) = \mathrm{E}\left[R_f(k)\, R_f^\star(k-l)\right]$$
$$R_{pR}(l) = \mathrm{E}\left[p(k)\, R_f^\star(k-l)\right]$$
$$R_{Rp}(l) = \mathrm{E}\left[R_f(k)\, p^\star(k-l)\right]$$
$$R_{Fp}(l) = \mathrm{E}\left[F_{exc}(k)\, p^\star(k-l)\right]$$
$$R_{FR}(l) = \mathrm{E}\left[F_{exc}(k)\, R_f^\star(k-l)\right]$$

Orthogonality principle

We can now express optimality conditions for the coefficients $w_p^{(h)}(j)$ and $w_R^{(h)}(j)$. We simply require that the mean square error be minimized; since it is continuous and convex in the coefficients, we simply need its derivative to be zero. That yields the following conditions:

$$\frac{\partial \xi}{\partial w_p^{(h)}(l)} = 0 \quad\Longleftrightarrow\quad 2\,\mathrm{E}\left[E(k+h)\, p(k-l)\right] = 0$$

and

$$\frac{\partial \xi}{\partial w_R^{(h)}(l)} = 0 \quad\Longleftrightarrow\quad 2\,\mathrm{E}\left[E(k+h)\, R_f(k-l)\right] = 0$$

Wiener-Hopf equations

By reworking the optimality conditions, we arrive at the equations

$$R_{Fp}(l+h) = \sum_{j=0}^{n-1} w_p^{(h)}(j)\, R_p(l-j) + \sum_{j=0}^{n-1} w_R^{(h)}(j)\, R_{Rp}(l-j), \qquad l = 0,\dots,n-1,$$

together with the analogous set obtained from the conditions on $w_R^{(h)}$. In matrix form the system of equations reads

$$\begin{pmatrix}
R_p(0) & R_p(1) & \cdots & R_{Rp}(0) & R_{Rp}(-1) & \cdots \\
R_p(1) & R_p(0) & \cdots & R_{Rp}(1) & R_{Rp}(0) & \cdots \\
\vdots & & \ddots & \vdots & & \ddots \\
R_p(n-1) & \cdots & & R_{Rp}(n-1) & \cdots & \\
R_{pR}(0) & R_{pR}(-1) & \cdots & R_R(0) & R_R(1) & \cdots \\
R_{pR}(1) & R_{pR}(0) & \cdots & R_R(1) & R_R(0) & \cdots \\
\vdots & & \ddots & \vdots & & \ddots \\
R_{pR}(n-1) & \cdots & & R_R(n-1) & \cdots &
\end{pmatrix}
\begin{pmatrix}
w_p^{(h)}(0) \\ w_p^{(h)}(1) \\ \vdots \\ w_p^{(h)}(n-1) \\ w_R^{(h)}(0) \\ w_R^{(h)}(1) \\ \vdots \\ w_R^{(h)}(n-1)
\end{pmatrix}
=
\begin{pmatrix}
R_{Fp}(h) \\ R_{Fp}(h+1) \\ \vdots \\ R_{Fp}(h+n-1) \\ R_{FR}(h) \\ R_{FR}(h+1) \\ \vdots \\ R_{FR}(h+n-1)
\end{pmatrix}$$

or, more compactly,

$$\mathbf{R}_{pR}\, w^{(h)} = R_{FpR}(h)$$

Note that we need to account for some noise in the measurements we get. This is done by adding the noise covariance to the auto-covariance matrix:

$$\left(\mathbf{R}_{pR} + \Sigma\right) w^{(h)} = R_{FpR}(h)$$

The $\Sigma$ matrix is diagonal: the upper left part contains $\sigma_p^2$, an estimate of the variance of the noise on the pressure, and the lower right part contains $\sigma_{R_f}^2$, an estimate of the variance of the noise on the resulting force measurements. Note that the upper left and lower right blocks of the matrix (corresponding to $R_p$ and $R_R$) are symmetric, and that the upper right and lower left blocks (corresponding to $R_{Rp}$ and $R_{pR}$) are each other's transpose. Note also that we have an easy way of computing the covariances, since we know the spectral density of the surface elevation (the JONSWAP spectrum) and the transfer functions linking every signal.

Let us call a sampling of the JONSWAP spectrum $R_\zeta(\omega)$; its inverse Discrete Fourier Transform gives the covariance function of the surface elevation. The other useful covariances can be obtained in the frequency domain as follows:

$$R_p = H_p(\omega)\, R_\zeta(\omega)\, H_p^\star(\omega)$$
$$R_R = H_R(\omega)\, R_\zeta(\omega)\, H_R^\star(\omega)$$
$$R_{pR} = H_p(\omega)\, R_\zeta(\omega)\, H_R^\star(\omega)$$
$$R_{Rp} = H_R(\omega)\, R_\zeta(\omega)\, H_p^\star(\omega)$$
$$R_{Fp} = H_{exc}(\omega)\, R_\zeta(\omega)\, H_p^\star(\omega)$$
$$R_{FR} = H_{exc}(\omega)\, R_\zeta(\omega)\, H_R^\star(\omega)$$

We can then get the temporal values of the covariances at the different lags by inverse transformation.

Actual value of the objective

The value of the mean square error for horizon $h$ can be computed using the following relation:

$$\xi_h = R_F(0) - \left(R_{FpR}^\star(h)\right)^T \mathbf{R}_{pR}^{-1}\, R_{FpR}^\star(h)$$

This means that the mean square error is composed of a base part due to the variance of the excitation force, which we can reduce by using our knowledge of the past values of the signals. The amount by which we can reduce this error is the norm of the correlation of the force with the measurements (pressure and resulting force), weighted through the inverse of the covariance matrix of the measurements.
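Putting the pieces of this section together, a possible Matlab sketch going from the sampled spectrum to the filter coefficients and the theoretical bound is given below. The sampled JONSWAP spectrum and the transfer functions are assumed to be available on a common two-sided DFT frequency grid, the noise variances are illustrative, and the regularized matrix is also used in the bound; all names are illustrative.

% From a sampled spectrum and transfer functions to the h-step Wiener predictor.
% Szeta: sampled surface-elevation spectrum, Hp, HR, Hexc: transfer functions on
% the same grid (assumed given); n: filter order; h: prediction horizon.
function [w, xi] = wienerFromSpectrum(Szeta, Hp, HR, Hexc, n, h, sigp2, sigR2)
    cov = @(Hx, Hy) real(ifft(Hx .* Szeta .* conj(Hy)));   % covariance sequence
    Rp  = cov(Hp,   Hp);    RR  = cov(HR, HR);
    RRp = cov(HR,   Hp);    RpR = cov(Hp, HR);
    RFp = cov(Hexc, Hp);    RFR = cov(Hexc, HR);

    lag = @(R, k) R(mod(k, numel(R)) + 1);                  % signed lags, periodic indexing
    idx = 0:n-1;
    Tpl = @(R) toeplitz(arrayfun(@(k) lag(R,  k), idx), ...
                        arrayfun(@(k) lag(R, -k), idx));    % Toeplitz block R(i-j)

    Rm  = [Tpl(Rp),  Tpl(RRp);                              % block covariance matrix
           Tpl(RpR), Tpl(RR)];
    Sig = blkdiag(sigp2*eye(n), sigR2*eye(n));               % noise / regularisation

    % right-hand side: correlation of the future force with the measurements
    r   = [arrayfun(@(k) lag(RFp, h+k), idx), arrayfun(@(k) lag(RFR, h+k), idx)]';

    w   = (Rm + Sig) \ r;                                    % Wiener-Hopf solution
    xi  = lag(cov(Hexc, Hexc), 0) - r' * ((Rm + Sig) \ r);   % theoretical MSE for horizon h
end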

Chapter 4

Simulation

A Simulink model of CorPower Ocean's Wave Energy Converter is being actively developed within the company's modelling team. The parts of the model presented in chapter 2 correspond to what is implemented in this model. In this thesis, most of the testing has been carried out using independent Matlab scripts on pre-generated data from the model. The method implemented in the model is the Wiener filter described in section 3.4.

4.1 Prediction extension for the Simulink model

The parameters for the prediction extension can be found in the file ControlTextFile.txt. They are as follows:

∙ varNoiseRf : the noise variance parameter for the resulting force

∙ varNoiseP : the noise variance parameter for the pressure

∙ AROrder : the order of the predictor, that is to say, how many past steps of the signals will be accounted for by the predictor

∙ inputs : choice of the inputs for the filter, 0 for $R_f$, 1 for $p_p$ and 2 for both

∙ the time after which the predictor is updated to account for the new values of $H_s$/$T_p$

∙ PredictionHorizon : how many steps ahead we need to predict

Figure 4.1: ControllerTextFile.txt

Figure 4.2: The wave prediction part of the controller block

The wave prediction is now a part of the controller block; the inputs are estimates of the resulting force, the dynamic pressure and the position of the buoy in heave, as well as estimates of the peak wave period and significant wave height. The outputs are predictions of the (non-corrected) excitation force and the surface elevation. The corrected excitation force is not computed, since that involves knowing the future position of the buoy, which will also depend on the MPC or another non-causal strategy. The inclusion of the wave prediction can be seen in figure 4.2. The implementation of the wave prediction block can be seen in figure 4.3. The coefficients of the predictor are computed in the WienerCoefficients block and the prediction is performed in the FIR Predictor block.

Figure 4.3: The wave prediction block

Simulation

When we represent the surface elevation as

$$\zeta(t) = \sum_{i=1}^{N} a_i \cos(2\pi f_i t + \alpha_i)$$

the excitation force is then represented as

$$\tau_{exc}(t) = \sum_{i=1}^{N} MH_{exc}(f_i)\, a_i \cos(2\pi f_i t + \alpha_i + PhH_{exc}(f_i))$$

and the pressure at the probe as

$$p_p(t) = \sum_{i=1}^{N} a_i\, u_z(f_i) \cos(2\pi f_i t + \alpha_i)$$

Fourier coefficients, Fourier transform, DTFT, DFT and their link to transfer functions and impulse responses

The values of the frequency responses, such as the coefficients $MH_{exc}$, are available for a given frequency resolution, defined by $\Delta\omega_1$. The values given are not those of the complex Fourier series, but simply twice the positive part of the spectrum.

Let’s denote the Fourrier Series coefficient corresponding to 푀퐻푒푥푐 and 푘∆휔1 by 퐻푒푥푐,푘. We have 푀퐻푒푥푐(푘∆휔1) = 2퐻푒푥푐,푘

and

$$H_{exc,k} = \left(H_{exc,-k}\right)^\star$$

Now the discrete spectrum implies that the signal will be periodic with a period $T_1 = \frac{2\pi}{\Delta\omega_1}$. But when we simulate our signals, we do so over a duration that is not an integer multiple of $T_1$. As a result, when we take the discrete Fourier transform of the signal, we do not get exactly the same curve as the one depicted above. When simulating we also discretize with a sample time; in the simulations in this thesis this sample time is $\Delta T = 0.1\,s$. We are sure that there is no aliasing, since $F_s = 1/0.1 = 10\,Hz$ is larger than twice the maximum frequency component. In order to get the right impulse response from a sum of cosines, we can for example compute $h_{exc}[n]$ (which is the response to be convolved with the surface elevation to get the excitation force) as

$$h_{exc}[n] = \frac{2\Delta T}{n_{freq}} \sum_{i=1}^{N} MH_{exc}(f_i) \cos(2\pi f_i\, n\Delta T + PhH_{exc}(f_i))$$

where $n_{freq}$ is the number of frequencies.
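A minimal Matlab sketch of this synthesis, assuming the amplitudes, frequencies, phases and frequency-response samples (a, f, alpha, MHexc, PhHexc) and the half-width Wimp are given, could look as follows; the normalization of the impulse response follows the formula above.

% Synthesis of the surface elevation and excitation force as sums of cosines,
% and of the impulse response h_exc[n] to be convolved with the elevation.
dT = 0.1;                              % sample time used in this thesis [s]
t  = (0:dT:200)';                      % simulation time vector
zeta   = zeros(size(t));
tauExc = zeros(size(t));
for i = 1:numel(f)
    zeta   = zeta   + a(i)*cos(2*pi*f(i)*t + alpha(i));
    tauExc = tauExc + MHexc(i)*a(i)*cos(2*pi*f(i)*t + alpha(i) + PhHexc(i));
end

% Non-causal impulse response, Wimp samples on each side of zero.
n    = (-Wimp:Wimp)';
hExc = zeros(size(n));
for i = 1:numel(f)
    hExc = hExc + MHexc(i)*cos(2*pi*f(i)*n*dT + PhHexc(i));
end
hExc = 2*dT/numel(f) * hExc;

tauFromConv = conv(zeta, hExc, 'same');   % should approximate tauExc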

Chapter 5

Results

In this chapter we describe the tests that have been performed on the predictor and look at their results. We first present our criteria for good performance, then proceed to evaluate the influence of the parameters on the quality of the predictions.

5.1 Quality criteria

5.1.1 Confidence interval

The predictions are estimates of a stochastic variable. The l-step ahead prediction error in a signal $\eta$ is hence also a stochastic variable, and is given by

$$\hat{e}(k+l|k) = \eta(k+l) - \hat{\eta}(k+l|k)$$

If we assume this error to be Gaussian with a variance $\sigma_l^2$, then we can easily find confidence intervals in which the error lies:

$$-z_{\delta/2} \le \hat{e}(k+l|k) \le z_{\delta/2}$$

with

$$P\left(-z_{\delta/2} \le \hat{e}(k+l|k) \le z_{\delta/2}\right) = \delta$$

A way to estimate the variance $\sigma_l^2$ of the estimation error is to use the past history of the prediction errors:

$$\sigma_l^2 = \frac{1}{N-1}\sum_{k=1}^{N} \hat{e}(k+l|k)^2$$

Using the normal distribution, we get

$$P\left(|e| < 1.65\,\sigma_l\right) \approx 0.9$$

i.e. the real value of the signal has a 90% chance of being within $1.65\,\sigma_l$ of the estimate.

5.1.2 Goodness-Of-Fit

In this work we mostly use another quality criterion, the Goodness-Of-Fit. For the forecasting of a signal $\eta$ at horizon $l$, it is linked to the standard deviation as follows:

$$\mathcal{F}(l) = \left(1 - \frac{\sqrt{\sum_k \left(\eta(k+l) - \hat{\eta}(k+l|k)\right)^2}}{\sqrt{\sum_k \eta(k)^2}}\right) \cdot 100$$

An always exact prediction would display a value of 100, and this value goes down with higher uncertainty and variance of the error.
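Both quality criteria are straightforward to compute; a minimal Matlab sketch, with illustrative names, could be:

% Goodness-of-fit of an l-step-ahead forecast and a rough 90% confidence band,
% following the definitions in this section.
function [gof, band90] = predictionQuality(eta, etaHat)
    err    = eta(:) - etaHat(:);                    % prediction errors
    gof    = (1 - norm(err)/norm(eta(:))) * 100;    % 100 = perfect prediction
    sigma  = sqrt(sum(err.^2)/(numel(err) - 1));    % estimated error standard deviation
    band90 = 1.65*sigma;                            % |error| < band90 with roughly 90% probability
end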

5.2 Choice of a method

We need to assess the performance of each method that has been deemed suitable for our problem. In figure 5.1, we can see the goodness-of-fit for the Wiener predictor ($R_f$ version), the Kalman predictor and an AR model applied to the resulting force, all applied to a JONSWAP spectrum with $T_p = 6.4\,s$ and $H_s = 1\,m$. The AR method in the figure clearly underperforms compared to the other two and can hence be ruled out. The performances of the Wiener filter and the state-space Kalman filter are roughly equivalent. This is not surprising, as the two methods are roughly equivalent; the difference is that the Wiener method assumes the time series to be stationary and is identified using the JONSWAP spectral density, whereas the Kalman filter is identified on previous data and is not necessarily stationary. Notice the difference in order between the two methods: 140 for the Wiener filter, where only 15 states are needed for the Kalman filter. This is because the states of the Kalman filter only contain the most important information on the surface elevation, the excitation force and the pressure, whereas the Wiener filter does not discriminate and has no internal state; it is hence forced to take all of the past values of the resulting force. The Kalman filter of order 15 is actually one of the last orders which does not need regularization in the system identification; higher orders do, and lose accuracy. Besides, the identification of a state-space system on top of the application of the Kalman filter makes it a significantly less attractive candidate than the Wiener filter. I hence chose the Wiener predictor for the easy computation of its coefficients and its easy application.

Figure 5.1: Comparison of the Wiener order 140, AR order 40 and Kalman order 15 predictors for a wave period of 6.4 s and a wave height of 1 m

5.3 Wiener results

This section focuses mainly on results for the Wiener predictor, as this was deemed to be the method of choice, the other two methods presented in chapter 3 showing significantly worse performance or being exceedingly complex. A multi-input Wiener filter was then designed, and the results are much better on average. The accuracy achieved is 85% for horizon 0, and we can expect a Goodness-of-fit of 60% for a half-period and 40% for an entire period under good conditions (low noise, wave frequency lower than the cut-off of the pressure factor). In figure 5.2 some predictions are shown for visualization. Both the AR pressure estimation and the Kalman filter give decent results for short prediction horizons; however, the goodness-of-fit drops very rapidly thereafter. This approach simply does not account for the links between the

hydrodynamic signals in an appropriate way.

Figure 5.2: Prediction of the Wiener filter in blue, actual value of the excitation force in Newton, for a wave period of 5 s, plotted against time in seconds

5.3.1 Contributions of the pressure and resulting force to the estimation for different sea states

We can ask ourselves whether both measurements carry useful information or whether some of this information is redundant. We have the choice between using only the pressure measurement, only the resulting force measurement, or both. One might expect the single-input Wiener filters using either only the pressure or only the resulting force to perform worse than the combination of the two information sources. This is however not the case in general, as we will see shortly. The performance of the filters involving the pressure is also expected to be worse for higher wave frequencies. We used time series of pressure, resulting force and excitation force with wave periods of 4.2, 8.5 and 10.6 s and a pressure probe at a depth of 20 m. Those periods correspond to frequencies of 0.23, 0.12 and 0.09 Hz. Note that the pressure factor decreases with frequency and is close to 0.5 at 0.09 Hz and almost zero at 0.23 Hz for a depth of 20 m, see figure 2.6. We can see in figure 5.3 that the filter using only the resulting force usually performs better, except for really high values of $T_p$. The performance is best for lower frequencies. The pressure-only predictor barely manages to predict the current value of the excitation force in the best case (for the lowest frequency here). The $R_f$-only and the double-input filters are thus the only decent candidates for a predictor. We see here that the $R_f$-only filter performs up to 10% better for $T_p = 4.2$ and $8.5\,s$. However, the predictor with both signals is better for $T_p = 10.6\,s$, as the information contained in the surface elevation signal is then also present in the pressure signal.

5.3.2 Influence of the Signal To Noise ratios

Now that we have determined that the $R_f$-only predictor is to be preferred, we need to know which noise parameter to choose. The parameter must not be too low, so that the predictor is robust to noise, and not too high, so that it can still use the past data as useful information.

In the following we noised the $R_f$ signal with several SNR values, then tried the predictor for several values of the noise parameter. Both for a low SNR of 20 and for a higher one of 80, the value $5\cdot 10^6$ seems to perform best, see figure 5.4. We can also visualize the corresponding assumed covariances for the signal in figure 5.5. The noise covariances are to be adapted to the actual signal-to-noise ratios of the signals. Even a noiseless signal benefits from

a regularization of the Wiener-Hopf equations used to compute the filter, so it is recommended to simply test parameters on the real system and see what works best. The influence of noise on the final goodness-of-fit of the prediction increases as its contribution to the reduction of the mean square error grows. Hence, with a pressure sensor at a very low position, the SNR on the resulting force is the most important one.

Figure 5.3: Overview of the performances of Wiener filters using different subsets of the measurements available (panels for $T_p$ = 4.2, 8.5 and 10.6 s)

5.3.3 Order of the filter

We now study the order of the filter. For a peak wave period of 6.4 s and a significant wave height of 1 m, we tested the performance of the Wiener filter with an input signal noised with white noise with a SNR of 40.

Figure 5.4: Performance of the predictor for a SNR of 20 and a SNR of 80

Figure 5.5: Assumed covariances for the corrected $R_f$ signal for several values of the noise parameter

In figure 5.7 you can see the results of these tests for several orders of the filter. We can see that it does not get much better after the order has reached one and a half wave periods, here around 100 samples. The theoretical bound becomes gradually better as we increase the order of the filter. In figure 5.6 we show the theoretical results for a wave period of 6.4 s and a wave height of 1 m.

Figure 5.6: Theoretical GOF values for several values of the filter order for $T_p = 6.4\,s$, $H_s = 1\,m$

The theoretical bound ceases to improve much after we have extended the available information to 1.5 periods in the past.

5.3.4 Comparison with the theoretical bound

Let us remember that the theoretical bound on the mean square error described in section 3.4 assumes that the noise applied is white noise and that the value of its covariance is the value used to regularize the filter. It can be seen in figures 5.6 and 5.7 that the real goodness-of-fit follows the same wavy pattern as the theoretical lower bound. We also emphasize that the theoretical bound becomes gradually better as the order of the filter increases, and the predictor of order 100 is significantly better than the one of order 50. In reality, however, the order-100 filter is sometimes better than the order-140 one, and the difference in order matters less as we go up in order.

Figure 5.7: GOF values for several values of the filter order for $T_p = 6.4\,s$, $H_s = 1\,m$

5.3.5 Influence of the Sea State

We now test our predictor under variations of the parameters of the JONSWAP spectrum. The two relevant parameters are of course the peak period $T_p$ and the significant wave height $H_s$. As we can see from figure 5.8, the influence of the peak wave period can be summarized as follows: the pattern of the theoretical Goodness-Of-Fit spreads in proportion to the wave period. We can hence say that the quality of the prediction is theoretically the same if we use wave periods as a unit of time. In practice we can see this trend appearing in figure 5.8, but it is rather chaotic. One would think the significant wave height to be of little importance with regard to the quality of the prediction. However, when we vary the wave height and adapt the noise variance parameter accordingly (with the square of the wave height), the theoretical bound is unchanged, but we witness a great degradation for all wave heights other than one metre. This is corrected by increasing the noise covariance parameter further. This emphasizes the need to calibrate the predictor in order to get the best prediction. It is however not clear yet how this calibration is to be performed in real time, since a measurement of the actual excitation force is needed.

Figure 5.8: Real and theoretical GOF of the Wiener predictor for several wave periods

5.3.6 Error in the sea-state information

Sea-state estimation is subject to some error, and the spectrum might not even correspond to the JONSWAP spectrum we assumed in this work. The transition between sea-states is assumed to be slow enough that the sea-state can be taken as locally constant. But sometimes the sea-state might actually be different from what the sea-state estimation or the user gives. It is then of interest to study the influence of this error on the final goodness-of-fit. For a JONSWAP spectrum with a wave period of 8.5 s and a wave height of 1 m, we tested our predictor computed for spectra with neighbouring wave period values. We can see in figure 5.9 that sometimes these errors are even beneficial for the quality of the prediction! Of course, the predictor with the best estimate of $T_p$ has the best theoretical bound, but not necessarily in practice.

5.3.7 Additional information

One might ask how it is possible to improve the results obtained in the previous sections, for example by adding some information as input to the predictor. In particular, the pressure sensor below the buoy is located at a depth of about 20 m. In the WEC system, there is no question of moving this sensor; it is useful for other purposes, such as determining the tidal elevation of the water. It has however been discussed to add another sensor higher up on the buoy in order to have better information. This would not

improve the prediction of the excitation force much, but the closer we get to the surface, the closer the pressure signal is to the surface elevation signal. This would in theory greatly improve the prediction; however, putting the sensor on the moving buoy would make the movements of the buoy the main influence on the pressure. So this solution can be considered if a way to correct the measurements is found, if that is possible at all. Another way to improve the prediction would be to use data from another buoy. If we know the direction of the waves, we can decompose the resulting force of this other buoy into harmonic components and then propagate and reconstruct the resulting force for the current buoy. This approach is outlined for a one-dimensional spectrum in appendix A. It has not been explored further in this thesis. The main difficulty of this approach will likely be the decomposition of the waves into several directions.

Figure 5.9: GoF for a signal of 8.5 s and predictors with different $T_p$ parameters

Chapter 6

Conclusion

In this work we have evaluated the performance of different prediction methods for the excitation force. Predicting the pressure using an AR model and then applying a truncated infinite impulse response Wiener filter was a first attempt at forecasting. However, this method is not to be recommended in general, as the pressure signal often does not capture enough information to be able to predict the excitation force. Indeed, the position of the depth sensor implies that the pressure factor function has a cut-off at a quite low frequency and the drop in gain is very sharp, which entails that the relevant frequencies can very often be completely removed from the pressure signal. The use of another signal is then to be advocated for the prediction. A relation between the excitation force and a sum of other measured signals has been established. This new signal, which we called the resulting force, depends on the position, speed and acceleration, and on the power take-off of the system. Thanks to a series of approximations and linearizations we made the relation between the resulting force and the excitation force linear. In this work we assumed those signals to be known; in practice, the CorPower team will have to figure out how to get them through sensors and some numerical integration. A Kalman filter is a common way to aggregate several sources of information and get a better estimation based on all the information available. The Kalman filter used an identified model with a history of the values of the excitation force as states, sometimes including the pressure and surface elevation as states. Both the AR pressure estimation and the resulting force give good results for short prediction horizons; however, the goodness-of-fit drops very rapidly thereafter.

better on average. For a peak wave period of 10.6 s, the accuracy achieved is 85% for horizon 0, and we can expect a Goodness-of-fit of 60% for a half-period and 40% for an entire period under good conditions (low noise, wave frequency lower than the cut-off of the pressure factor). However, the single-input Wiener predictor is in general a better one, as its performance does not drop sharply with the peak wave period. After tests confirmed this, we only used the single-input $R_f$ predictor for the tests. The real goodness-of-fit of the method does not match the theoretical lower bound for a Wiener filter very closely; it is constantly about 20% below its theoretical value. If the estimation of the noise is exact, this bound is the minimum mean square error achievable by a linear combination of the past values of

$R_f$. The theoretical bound becomes gradually better as we increase the order of the filter, but it ceases to improve much after we have extended the available information to 1.5 periods in the past. The noise covariances are to be adapted to the actual signal-to-noise ratios of the signals. Even a noiseless signal benefits from a regularization of the Wiener-Hopf equations used to compute the filter, so it is recommended to simply test parameters on the real system and see what works best. Varying the noise covariances while keeping the filter matrices constant, i.e. not adjusting the noise parameters in the estimation, leads to over-fitting if we decrease the noise too much, and the prediction gets better as we increase the noise variance, although not as good as it could be if we adjusted the noise parameters in the filter. This is a matter of adjustments to be made under the particular noise conditions one is facing. The sea-state will either be given by the user, or the statistics will be computed on a number of past periods of the available time series. Sometimes, however, the sea-state might actually be different from what the sea-state estimation or the user gives. It is then useful to study the influence of this error on the final goodness-of-fit. The transition between sea-states is assumed to be slow enough that the sea-state can be taken as locally constant. A small error on the wave period seems to shift the Goodness-of-fit to the left or to the right, whereas the noise variance must be adjusted to account for a big change in wave height. There is a sensor below the buoy, at a depth of about 20 m. This sensor won't be moved; it is useful for other purposes, such as determining the tidal elevation of the water. It has been discussed to add another sensor higher up on the buoy in order to have better information. This would in

theory greatly improve the prediction; however, putting the sensor on the moving buoy would make the movements of the buoy the main influence on the pressure. So this solution can be considered if a way to correct the measurements is found, if that is possible at all. Now we need to address what could be done to improve those predictions. One option would be to include a good measurement of the surface elevation. A way to get this information could be direct measurement by lasers (ref. needed) or a reconstruction from several surface elevation buoys situated in the neighbourhood of the buoy. Another way might be to use the reconstructed surface elevation/excitation force of another buoy. In order to achieve that, a measurement of the wave direction must be obtained (is it even feasible?). It might be possible to achieve this using the accelerometers on the buoy and inferring the values for roll and pitch. The formulation of this problem is expected to be more complex, as some approximations might no longer be valid. As we have seen, the quality of the prediction is often below 80%, which makes its suitability for an MPC application questionable. My suggestion for the future is not to look at the control problem and the wave estimation problem as two separate problems, but to use the covariance of the relevant signals as data and perhaps use stochastic programming, since the prediction cannot always be trusted. There are of course more ways to look at this prediction problem, and we have not looked much at neural networks, which are quite present in the literature for wave prediction, although with mixed results. A possible avenue for improvement would be the application of these techniques to the problem of control with the inputs mentioned in this thesis.

Appendix A

Up-wave measurements

Up-wave methods use measurements from another location to predict the evolution of a signal at the location of interest. Those methods are presented here but will not be investigated any further, since we focus for now on the operation of a single device, and directional analysis would need to be performed.

Deterministic sea-wave prediction

Deterministic sea-wave techniques are used to predict the sea at the point of interest from another point a bit further away. [24] notes that this is sometimes inconvenient because the solutions computed for the sea-state are periodic, while the sea itself is inherently aperiodic. Good results have been reported for a single long-crested swell at a site distant from the observation point [25].

FIR and ARX models

In [11] the possibility of using measurements from another location to predict the surface elevation for an oscillating water column was considered. The performances of a Finite Impulse Response model (up-wave measurements) and an auto-regressive model with exogenous input (ARX) (up-wave and surface elevation measurements) were compared. The performance of those models for the periods considered is generally not better than that obtained with a normal AR model. The waves propagate with a phase velocity given by

$$v_p = \frac{\omega}{k(\omega)}$$

There will be a maximum speed for the waves, which is determined by the maximum frequency considered. We then have a propagation time

$$t_{prop} = \frac{d}{v_{max}}$$

The full ARX model is given by

$$\eta_{ch}(k) = \sum_{i=1}^{n_a} a_i\, \eta_{ch}(k-i) + \sum_{i=1}^{n_b} b_i\, u(k-i+1) + \zeta(k)$$

The l-step ahead prediction is given as follows:

$$\hat{\eta}_{ch}(k+l|k) = \begin{cases} \displaystyle\sum_{i=1}^{n_a} a_i\, \hat{\eta}_{ch}(k+l-i|k) + \sum_{i=1}^{n_b} b_i\, u(k+l-i+1) \\[2ex] \displaystyle\sum_{i=1}^{n_a} \alpha_i\, \hat{\eta}_{ch}(k+l-i|k) \end{cases}$$

where $a_i$, $b_i$ are the parameters of the ARX model (used as long as the needed up-wave samples $u$ are available) and the $\alpha_i$'s are the coefficients of the AR model (used otherwise), all obtained by minimizing the following cost function,

$$J_{LS} = \sum_{k=1}^{N} \left(\eta_{ch}(k+1) - \hat{\eta}_{ch}(k+1|k)\right)^2$$

which is based on the one-step ahead prediction errors. [26]

Use of the FFT

We can also compute the current signal at one point and then propagate it to another point using the dispersion relation. In one dimension this is easy, as there is only one direction for the wave to propagate in. The efficiency of this method is of course dependent on the length of the interval used for the computation of the DFT coefficients. As in the ARX case, the propagation is only relevant during a certain period of time, that is, after the fastest relevant frequency component has arrived and before the slowest one has passed. In figure A.1 we can see the evolution of the error for a JONSWAP spectrum with wave amplitude 5, for two points 100 m apart. In blue are displayed the lower and upper limits of the zone where the reconstruction is actually of use. Note that this prediction could be used as input to an ARX model.
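A minimal Matlab sketch of this propagation, assuming deep water (so that $k = \omega^2/g$) and a known propagation direction from the measurement point towards the point of interest, could look as follows; for finite depth, k would instead be obtained from $\omega^2 = gk\tanh(kh)$, and all names are illustrative.

% Propagate a measured surface elevation record a distance d down-wave,
% using the FFT and the deep-water dispersion relation.
function etaProp = propagateElevation(eta, dT, d)
    g  = 9.81;
    N  = numel(eta);
    Z  = fft(eta(:));
    f  = (0:N-1)'/(N*dT);                      % DFT frequency grid [Hz]
    f(f > 1/(2*dT)) = f(f > 1/(2*dT)) - 1/dT;  % map the upper half to negative frequencies
    k  = (2*pi*f).^2 / g;                      % deep-water wavenumber of each component
    Z  = Z .* exp(-1i*sign(f).*k*d);           % phase delay corresponding to the travel over d
    etaProp = real(ifft(Z));
end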

Figure A.1: Prediction error for the FFT prediction

Bibliography

[1] Kester Gunn and Clym Stock-Williams. Quantifying the global wave power resource. Renewable Energy, 44:296–304, 2012.

[2] IEA. Key World Energy Statistics 2009. Statistics, page 82, 2009.

[3] Johannes Falnes. Interaction Between Oscillations and Waves. Online, pages 43–57.

[4] Dick K. P. Yue. Marine hydrodynamics OpenCourse, Lecture notes Chapter 6 - Water Waves, 2005.

[5] F Fusco and J Ringwood. A Study on Short-Term Sea Profile Prediction for Wave Energy Applications.

[6] Francesco Fusco. Short-term Wave Forecasting as a Univariate Time Series Problem. (December), 2009.

[7] Jørgen Hals, Johannes Falnes, and Torgeir Moan. Constrained Optimal Control of a Heaving Buoy Wave Energy Converter. OMAE, 2009.

[8] B Fischer and P Kracht. Online-Algorithm using Adaptive Filters for Short-Term Wave Prediction and its Implementation.

[9] AR model estimation in Matlab.

[10] Marco P Schoen, Jørgen Hals, and Torgeir Moan. Wave Prediction and Robust Control of Heaving Wave Energy Devices for Irregular Waves. IEEE Transactions on Energy Conversion, 26(2):627–638, 2011.

[11] F. Paparella, K. Monk, V. Winands, M. Lopes, D. Conley, and J.V. Ringwood. Up-wave and autoregressive methods for short-term wave

forecasting for an oscillating water column. IEEE Transactions on Sustainable Energy, 6(1):171–178, 2015.

[12] J. Ross Halliday, David G. Dorrell, and Alan R. Wood. An application of the Fast Fourier Transform to the short-term prediction of sea wave behaviour. Renewable Energy, 36(6):1685–1692, 2011.

[13] E. Blondel, F. Bonnefoy, and P. Ferrant. Deterministic non-linear wave prediction using probe data. Ocean Engineering, 37(10):913–926, 2010.

[14] Francesco Fusco and John V Ringwood. A study of the prediction requirements in real-time control of wave energy converters. IEEE Transactions on Sustainable Energy, 3(1):176–184, 2012.

[15] Francesco Fusco and John V. Ringwood. A Model for the Sensitivity of Non-Causal Control of Wave Energy Converters to Wave Excitation Force Prediction Errors. Proceedings of the 9th European Wave and Tidal Energy Conference, pages 1–10, 2011.

[16] Det Norske Veritas. Environmental conditions and environmental loads. Dnv, (October):9–123, 2010.

[17] Johannes Falnes. Ch. 4 : Gravity waves on water of variable depth. J. Fluid Mech, 24(04):641–659, 1966.

[18] Leo H. Holthuijsen. Waves in Oceanic and Coastal Waters. 2007.

[19] Jørgen Hals. Modelling and phase control of wave-energy converters. 2010.

[20] Wamit manual version 7.

[21] Johannes Falnes. Ch. 6 : Wave-Energy Absorption by Oscillating Bodies. pages 196–224, 1981.

[22] Francesco Fusco and John V. Ringwood. Short-term wave forecasting for real-time control of wave energy converters. IEEE Transactions on Sustainable Energy, 1(2):99–106, 2010.

[23] Wikipedia : Wiener deconvolution.

[24] M.R. Belmont. Filters for linear sea-wave prediction.

[25] Francesco Fusco and John V. Ringwood. Linear models for short term wave forecasting. Proc. World Renewable Energy Conference X, (2):6, 2008.

[26] J. Tedd. Short Term Wave Forecasting, Using Digital Filters, For Improved Control of Wave Energy Converters.


TRITA-MAT-E 2016:54 ISRN-KTH/MAT/E--16/54--SE

www.kth.se