UNIVERSIDADE ESTADUAL DE CAMPINAS
Faculdade de Engenharia Elétrica e de Computação

Lucas Silva de Oliveira

Granular Feedback Linearization: An Approach using Participatory Learning

Realimentação Granular Linearizante - Uma Abordagem por Aprendizagem Participativa

Campinas
2019

Lucas Silva de Oliveira

Granular Feedback Linearization: An Approach using Participatory Learning

Realimentação Granular Linearizante - Uma Abordagem por Aprendizagem Participativa

Thesis submitted to the School of Electrical and Computer Engineering of the University of Campinas in partial fulfillment of the requirements for the degree of Doctor in Electrical Engineering, in the area of Automation.

Tese apresentada à Faculdade de Engenharia Elétrica e de Computação da Universidade Estadual de Campinas como parte dos requisitos exigidos para a obtenção do título de Doutor em Engenharia Elétrica, na área de Automação.

Supervisor/Orientador: Prof. Dr. Fernando Antônio Campos Gomide
Co-supervisor/Coorientador: Prof. Dr. Valter Júnior de Souza Leite

Este exemplar corresponde à versão final da tese defendida pelo aluno Lucas Silva de Oliveira, e orientada pelo Prof. Dr. Fernando Antônio Campos Gomide.

Campinas
2019

Ficha catalográfica
Universidade Estadual de Campinas
Biblioteca da Área de Engenharia e Arquitetura
Luciana Pietrosanto Milla - CRB 8/8129

Oliveira, Lucas Silva de, 1982-
Granular feedback linearization : an approach using participatory learning / Lucas Silva de Oliveira. – Campinas, SP : [s.n.], 2019.

Orientador: Fernando Antônio Campos Gomide.
Coorientador: Valter Júnior de Souza Leite.
Tese (doutorado) – Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

1. Sistemas de controle por realimentação. 2. Controle robusto. 3. Sistemas de controle ajustável. 4. Sistemas de controle inteligente. 5. Algoritmos fuzzy. I. Gomide, Fernando Antônio Campos, 1951-. II. Leite, Valter Júnior de Souza. III. Universidade Estadual de Campinas. Faculdade de Engenharia Elétrica e de Computação. IV. Título.

Informações para Biblioteca Digital

Título em outro idioma: Realimentação granular linearizante : uma abordagem por aprendizagem participativa
Palavras-chave em inglês: Feedback control systems; Robust control; Adaptive control systems; Intelligent control systems; Fuzzy algorithms
Área de concentração: Automação
Titulação: Doutor em Engenharia Elétrica
Banca examinadora: Fernando Antônio Campos Gomide [Orientador]; Adrião Duarte Dória Neto; Cairo Lúcio Nascimento Júnior; Matheus Souza; Ricardo Coração de Leão Fontoura de Oliveira
Data de defesa: 14-11-2019
Programa de Pós-Graduação: Engenharia Elétrica

Identificação e informações acadêmicas do(a) aluno(a)
- ORCID do autor: https://orcid.org/0000-0002-0322-8276
- Currículo Lattes do autor: http://lattes.cnpq.br/1841799475396072

COMISSÃO JULGADORA - TESE DE DOUTORADO

Candidato: Lucas Silva de Oliveira  RA: 163693
Data da Defesa: 14 de novembro de 2019
Título da Tese: “Granular Feedback Linearization: An Approach using Participatory Learning”

Prof. Dr. Fernando Antônio Campos Gomide
Prof. Dr. Adrião Duarte Dória Neto
Prof. Dr. Cairo Lúcio Nascimento Júnior
Prof. Dr. Matheus Souza
Prof. Dr. Ricardo Coração de Leão Fontoura de Oliveira

A ata de defesa, com as respectivas assinaturas dos membros da Comissão Julgadora, encontra-se no SIGA (Sistema de Fluxo de Dissertação/Tese) e na Secretaria de Pós-Graduação da Faculdade de Engenharia Elétrica e de Computação.

Acknowledgements

Firstly, I would like to express my sincere gratitude to my advisor, Professor Fernando Gomide, for the continuous support of my Ph.D. study and related research, and for his patience, motivation, complete trust, and immense knowledge. I am also greatly fortunate to have Professor Valter Leite as my co-supervisor. Apart from providing me with extensive research feedback, he has been a constant source of support and stimulation over the last years. I could not have imagined having better mentors for my Ph.D. study. I would like to thank the thesis committee members, Prof. Dr. Adrião Duarte, Prof. Dr. Cairo Nascimento, Prof. Dr. Daniel Leite, Prof. Dr. Eric Rohmer, Prof. Dr. Matheus Souza, Prof. Dr. Ricardo Oliveira, and Profa. Dra. Rosangela Balini, for their availability to evaluate this work. I would like to acknowledge the University of Campinas and the Federal Center for Technological Education of Minas Gerais for the infrastructure, the opportunity, and the financial support. My sincere thanks also go to the teaching staff, in particular professors Eric Rohmer, Fabiano Fruett, and Lucas Gabrielli, who gave me the opportunity to venture into new paths and experiences. I would like to take this opportunity to express immense gratitude to all those who have given their invaluable support and assistance. In particular, I would like to thank my fellow labmates Filipe Pedrosa, Jeferson Silva, Lino Filho, Tábitha Esteves, and Tisciane Perpétuo for the sleepless nights we worked together before the final tests, and for all the fun we have had in the last three years. My gratitude extends to Anderson Bento, Ariany Oliveira, Ignácio Scola, Luis Filipe, Michelle Castro, and Wagner Custódio, fellows from our Systems and Signals Laboratory group at CEFET/MG. Also, I thank my friends from the dance group Proibido Cochilar.
In particular, I am grateful to Aline Ferreira, Bruna Zielinski, Fernanda Brito, Ricardo Zambon, Thaíssa Engel, and Wenderson Rocha for each new dance choreography and the fun moments. Finally, I would like to acknowledge with gratitude the support and love of my family: my parents, José Francisco and Maria da Conceição; my sister's family, Cynthia, João Miguel, and Warley; and my girlfriend, Marcela. I would also like to thank my uncle's family, Ana Maria, Aloísio, Aninha, and Márcio, for their words of encouragement. They all kept me going, and this thesis would not have been possible without them.

Abstract

Feedback linearization is a powerful control method based on the exact cancellation of the nonlinearities of nonlinear systems. Real-world systems are complex, nonlinear, and time-variant, and their models are subject to uncertainties caused by neglected dynamics and imprecise parameter values. Differences between actual systems and their models preclude exact cancellation, which induces unexpected behavior of feedback linearization closed-loop control such as offset error, limit cycles, and instability. This thesis develops a granular feedback linearization control approach using participatory learning, a novel adaptive control approach whose aim is to improve the robustness and adaptability of feedback linearization control. The approach uses evolving participatory learning and concepts of granular computing to estimate modeling errors, and employs the error information to mitigate their effects in the feedback control loop. Three distinct approaches are developed. The first assumes that a model subject to additive uncertainties is available; the evolving participatory learning algorithm produces estimates of the additive disturbances needed to cancel the nonlinearities. The second approach does not require a model of the system at all; the control input is computed using evolving participatory learning to estimate the system nonlinearities directly. Inspired by the certainty equivalence principle, the estimates replace the true values of the nonlinearities in the ideal, exact feedback linearization control law. The third approach uses a high-gain state observer in the feedback linearization control loop; the participatory learning algorithm uses estimated values of the state instead of the true ones to cancel the nonlinearities of the system. Local Lyapunov stability of the feedback linearization control system is analyzed.
The performance of the approaches is evaluated using the level control of a surge tank, the angular position control of a fan and plate system, knee joint control using functional electrical stimulation, and a DC motor-driven rigid arm. Numerical and experimental results indicate that granular feedback linearization with participatory learning significantly increases the robustness and adaptability of feedback linearization control.

Keywords: Feedback control systems; Robust control; Adaptive control systems; Intelligent control systems; Fuzzy algorithms.

Resumo

A linearização de sistemas não lineares por realimentação baseia-se no princípio do cancelamento exato das não linearidades presentes na dinâmica do sistema. Em geral, sistemas reais são complexos e seus modelos estão sujeitos a dinâmicas negligenciadas na modelagem, a incertezas nos parâmetros e a parâmetros variantes no tempo. Por essas razões, a linearização por realimentação exata pode apresentar comportamento e desempenho indesejáveis tais como erro em regime permanente, comportamento cíclico ou instabilidade. A realimentação granular linearizante com aprendizagem participativa é uma nova abordagem de controle adaptativo baseada na aprendizagem participativa. Esta abordagem agrega robustez e adaptação à malha de controle da linearização por realimentação. A abordagem proposta usa o algoritmo de aprendizagem participativa e conceitos da computação granular para estimar o erro de modelagem e mitigar seus efeitos em malha fechada. São investigadas três topologias de controle distintas: a primeira assume que um modelo nominal do sistema é conhecido, admitindo, porém, incertezas paramétricas e dinâmica negligenciada durante o processo de modelagem. A segunda topologia usa a técnica da linearização por realimentação, porém sem conhecimento prévio do modelo do sistema a ser linearizado. Neste caso, o algoritmo de aprendizagem participativa é o único responsável por determinar a lei de controle linearizante. A terceira topologia é um esquema de controle em que um estimador de estados de alto ganho é associado à malha de linearização por realimentação. Neste caso, os estados estimados são utilizados durante a granularização do espaço de estado e pelo algoritmo de aprendizagem participativa. É feita uma análise de estabilidade local da abordagem.
O desempenho de cada uma das topologias de controle é avaliado no controle de nível de tanque, no controle da posição angular de um braço robótico, da posição angular de uma placa acionada por fluxo de ar, e no controle da rotação angular da junta do joelho via estimulação funcional elétrica. Simulação e verificação experimental de alguns dos processos estudados fazem parte da avaliação de desempenho. Os resultados indicam que a realimentação granular linearizante com aprendizagem participativa aumenta significativamente a robustez e a adaptabilidade da linearização por realimentação.

Palavras-chave: Sistemas de controle por realimentação; Controle robusto; Sistemas de controle ajustável; Sistemas de controle inteligente; Algoritmos fuzzy.

List of Figures

Figure 2.1 – Tracking in feedback linearization
Figure 2.2 – Granulation-degranulation
Figure 2.3 – Open-loop state observer
Figure 2.4 – Luenberger observer
Figure 3.1 – Robust granular feedback linearization – RGFL
Figure 3.2 – Surge tank
Figure 3.3 – RGFL tracking a square waveform reference trajectory
Figure 3.4 – RGFL tracking a sawtooth waveform reference trajectory
Figure 3.5 – RGFL tracking a triangular waveform reference trajectory
Figure 3.6 – Actual surge tank system
Figure 3.7 – Nonlinearity in the actual surge tank
Figure 3.8 – Nominal surge tank behavior
Figure 3.9 – Uncertain surge tank behavior
Figure 3.10 – Lower limb modeling using FES
Figure 3.11 – Maximal values of the bounded attraction region
Figure 3.12 – Knee joint behavior and performance
Figure 3.13 – Clustering process through the knee joint experiment
Figure 3.14 – Inverted pendulum
Figure 3.15 – RGFL using the eTS algorithm with σ = 0.3
Figure 3.16 – Performance indexes of the RGFL controller with the eTS algorithm
Figure 4.1 – ReGFL control
Figure 4.2 – ReGFL tracking a square waveform reference
Figure 4.3 – ReGFL tracking a sawtooth waveform reference
Figure 4.4 – ReGFL tracking a triangular waveform reference
Figure 5.1 – RegHGO control
Figure 5.2 – Fan and plate system
Figure 5.3 – Behavior of the fan and plate system with the RegHGO controller
Figure 5.4 – Tracking and observer error of the RegHGO controller
Figure 5.5 – Performance indexes of the RegHGO controller
Figure 5.6 – Behavior of the rigid arm in continuous work
Figure 5.7 – Behavior of the rigid arm in a batch process

List of Tables

Table 3.1 – Performance indexes of the control methods
Table 3.2 – Performance indexes for the actual surge tank experiments
Table 3.3 – Min and max values to normalize the ePL algorithm input
Table 3.4 – Performance indexes of the controllers
Table 3.5 – Tuning conditions of the eTS algorithm
Table 3.6 – Performance of the RGFL controller with eTS modeling
Table 4.1 – Performance indexes of the controllers
Table 5.1 – Constants and parameter values for the rigid arm simulation
Table 5.2 – Performance indexes of the control methods

List of abbreviations and acronyms

ARX Autoregressive models with exogenous variables

BFOF Bacterial foraging fuzzy technique

DC Direct current

EFL Exact feedback linearization

EFLHGO Extended high-gain observer associated with the linearizing feedback

ePL Evolving participatory learning algorithm

eTS Evolving Takagi-Sugeno algorithm

FES Functional electrical stimulation

FL Feedback linearization

FLHGO Input-output feedback linearization with high-gain observer

HGO High-gain observer

IAE Integral absolute error

ITAE Integral of time-weighted absolute error

IVE Integral of time-weighted variability of the error

IVU Integral of time-weighted variability of the control signal

LMI Linear matrix inequality

LQR Linear quadratic regulator

PLC Programmable logic controller

RGFL Robust granular feedback linearization

ReGFL Robust evolving granular feedback linearization

RegHGO ReGFL with high-gain observer

RLS Recursive least squares algorithm

RMI Robust multi-inversion

RMSE Root mean square error

SISO Single-input single-output

TS Takagi-Sugeno

Contents

1 Introduction
  1.1 Background
  1.2 Objective
  1.3 Contributions and Publications
  1.4 Organization
2 Methodological Background
  2.1 Exact Feedback Linearization
    2.1.1 Reference Tracking
  2.2 Granular Computing
    2.2.1 Evolving Takagi-Sugeno Models
    2.2.2 Evolving Participatory Learning
  2.3 State Observers
  2.4 Summary
3 Robust Granular Feedback Linearization
  3.1 Robust Granular Controller
  3.2 Lyapunov Stability Analysis
  3.3 Performance Evaluation
    3.3.1 Surge Tank Simulation Experiments
    3.3.2 Actual Surge Tank Experiments
    3.3.3 Knee Joint Simulation Experiments
    3.3.4 Evaluation of RGFL Control with Evolving Takagi-Sugeno Modeling
  3.4 Summary
4 Robust Evolving Granular Feedback Linearization
  4.1 Introduction
  4.2 Input-Output Linearization Idea
  4.3 Robust Evolving Granular Feedback Control with Input-Output Linearization
  4.4 Performance Evaluation
  4.5 Summary
5 Robust Evolving Granular Feedback Linearization with Observers
  5.1 Introduction
  5.2 Robust Feedback Linearization Control with Observers
  5.3 Performance Evaluation
    5.3.1 Fan and Plate System
    5.3.2 Rigid Arm Driven by DC Motor
  5.4 Summary
6 Conclusion
References
A Appendix: S-Procedure

1 Introduction

Nowadays, we deal with an increasing number of automated and intelligent processes and systems. Self-driving technology, intelligent houses, autonomous robotics and transportation systems, airplanes, trains, electric vehicles, smart agriculture, and data-driven process control are examples of systems in which machines are augmented with connectivity, sensors, and intelligence to improve decision making and control. In intelligent process control, modeling is key to capturing system dynamics and to designing control laws that fulfill closed-loop performance and feasibility requirements. Continuous adaptation and robust behavior are particularly crucial in intelligent control, since these features are essential to counteract the effects of uncertainty and imprecision on system performance and closed-loop stability. Control theory is addressed in the literature from distinct points of view. An essential aspect to be considered is the nature of the models used to describe a process. For instance, a process can be modeled using differential equations, rule-based approaches, graphical models, and neural networks, to mention but a few. Differential equations are particularly useful to model continuous processes. This thesis focuses on continuous nonlinear processes and models. The design of a nonlinear model-based control system is a complicated task, especially when adaptation and robustness are demanded. Intelligent and learning approaches are a means to improve the adaptability and robustness of closed-loop control of nonlinear systems. These approaches provide frameworks for online data processing, using dedicated methods and algorithms to extract knowledge from data streams. Online data stream-based modeling is essentially a computational learning approach that simultaneously processes input data and extracts the knowledge that governs the process behavior.
This thesis focuses on evolving participatory learning for data stream modeling in the framework of granular computing. Participatory learning is a paradigm for computational learning systems whose basic premise is that learning takes place in an environment where learning itself depends on what has already been learned and believed so far. This means that every aspect of the learning process is affected by the compatibility between the knowledge learned so far and the current input data. Granular computing means that the data space is organized into clusters and that each cluster has a local model in the form of a fuzzy functional rule. The collection of fuzzy functional rules composes the model of the process. In particular, fuzzy functional rules with affine consequents are adopted, and the model output is found as a weighted average of the local models. This thesis develops a novel control approach within the framework of feedback linearization using granular evolving participatory learning. The primary research issue addressed is the use of granular participatory learning to model the imprecision of process models and to mitigate its effects on the closed-loop system behavior. Lyapunov stability analysis of the control approach is pursued to verify the behavior of the closed-loop control system.
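To make the two ingredients above concrete, the sketch below illustrates (i) a participatory-learning style update of a cluster center, where the step size is modulated by the compatibility between the current belief and the new observation, and (ii) the weighted average of affine local models used to form the output of a fuzzy functional (Takagi-Sugeno type) rule base. The function names, the Gaussian membership, and all parameter values are illustrative assumptions, not the thesis' ePL algorithm:

```python
import numpy as np

def participatory_update(v, x, alpha=0.1, r=2.0):
    """One participatory-learning style step: move cluster center v toward
    observation x, gated by their compatibility rho. Here rho decreases
    linearly with distance (illustrative choice; alpha and r are assumed)."""
    rho = max(1.0 - np.linalg.norm(x - v) / r, 0.0)  # compatibility in [0, 1]
    return v + alpha * rho * (x - v)                 # learning depends on belief

def ts_output(x, centers, thetas, sigma=0.5):
    """Weighted average of affine local models (Takagi-Sugeno inference):
    granule i has a Gaussian membership around center c_i and an affine
    consequent y_i = theta_i[0] + theta_i[1:] . x."""
    mu = np.array([np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))
                   for c in centers])
    y_local = np.array([th[0] + th[1:] @ x for th in thetas])
    return mu @ y_local / mu.sum()
```

For instance, an observation far from a center (low compatibility) barely moves it, while a nearby one updates it almost as in standard gradient clustering.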

1.1 Background

Control systems have been used for more than 2000 years. Some of the earliest examples are the water clocks described by Vitruvius and attributed to Ktesibios, about 270 B.C. (Bennett, 1996). With the development and continuous progress of society, new industrial processes, operation mechanisms, and machines have been created, and novel conceptual approaches, sound mathematical tools, and problems with new demands have emerged. The 18th century is recognized as the beginning of control theory (Villaça and Silveira, 2013). Remarkable results were obtained over the years, notably by J. C. Maxwell who, in 1868, analyzed the stability of Watt's flyball governor (Denny, 2002). Maxwell's technique was based on the linearization of the differential equations of motion to find a characteristic equation for the system. He studied the effect of the system parameters on stability and showed that the system is stable if the roots of the characteristic equation have negative real parts (Leine, 2009). Another example is the term "automatic feedback control", which is considered a recent conception but has become common sense in the area of control theory and applications. The term was first used by Norbert Wiener and his colleagues in the 1940s (Mayr, 1970). A few years later, Lyapunov stability theory was introduced as a methodology to study and analyze the stability of feedback control systems (Leine, 2009; Lyapunov, 1992). Modern control theory is associated with many other areas such as modeling, optimization, artificial intelligence, and, more recently, machine learning. Traditionally, control systems are designed using a model that describes the dynamics of the controlled process. Although many modeling approaches are available, such as rule-based models, graphs, first-order logic, state machines, and formal languages, the design of closed-loop control systems for continuous processes relies on the knowledge of differential equations.
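Maxwell's criterion is easy to state computationally: linearize the equations of motion, form the characteristic polynomial, and check the sign of the real parts of its roots. A minimal sketch (the example polynomials are illustrative, not drawn from the governor model):

```python
import numpy as np

def is_stable(coeffs):
    """Maxwell's criterion for a linearized system: asymptotically stable
    iff every root of the characteristic equation has a negative real part.
    `coeffs` lists the polynomial coefficients, highest power first."""
    return bool(np.all(np.roots(coeffs).real < 0))

# s^2 + 3s + 2 = (s + 1)(s + 2): roots -1, -2, all in the left half-plane
stable = is_stable([1.0, 3.0, 2.0])      # True
# s^2 - 1 = (s - 1)(s + 1): a root at +1 makes the system unstable
unstable = is_stable([1.0, 0.0, -1.0])   # False
```

Routh-Hurwitz conditions give the same answer without computing roots explicitly; the direct root check is simply the shortest statement of the criterion.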
In this way, if the dynamics of the system are known precisely, then a white-box model is obtained. If the system is highly complex, a black-box model can be adopted. Black-box modeling uses an input signal to excite the process and collects process output data to record the dynamics and develop a model from the input-output data. Gray-box modeling combines white- and black-box modeling. In computer control applications, a discrete-time equivalent of the continuous process may be required as well.

Often, the dynamics of actual physical processes are complex and imprecise, as mirrored by the dynamics of populations, climatic models, robotic systems, and turbulent fluid flows (Sastry, 1999). Complex dynamics can be approximated reasonably well by nonlinear models. Unlike linear models, nonlinear models are richer in the sense that many commonly observed phenomena, such as multiple operating points, limit cycles, bifurcations, and frequency entrainment (Khalil, 2002; Isidori, 1995), can be captured in their formulation. In this thesis, we are concerned with continuous nonlinear processes and systems. The literature addresses a significant number of nonlinear control techniques, but our attention here is on feedback linearization. Exact feedback linearization – EFL is a nonlinear control technique (Khalil, 2002; Isidori, 1995; Slotine and Li, 1991) whose purpose is to exactly cancel the nonlinearities of a nonlinear system or process, so as to come up with an equivalent linear system to which linear control laws and design tools can be applied. Exact nonlinearity cancellation makes EFL fragile whenever there are mismatches between the model used in the design and the actual process. Mismatches are encountered whenever there is structural variability, parametric imprecision, or both (Sastry, 1999). In practice, EFL is prone to fail because the actual plant behavior and the nonlinear model used in the EFL control law design differ (Esfandiari and Khalil, 1992; Oliveira et al., 2017). Currently, we witness a myriad of works in the control systems literature addressing strategies to ensure the robustness of closed-loop feedback linearized systems. For instance, Wang (1996) develops indirect adaptive fuzzy controllers based on fuzzy IF-THEN rules to estimate and compute online the tracking control input. Alternatively, Guillard and Boulès (2000) discuss an input/output feedback linearization approach to improve robustness during the design of the linearized control law. While Park et al.
(2003) derive a robust indirect adaptive fuzzy controller mechanism using approximate bounds of reconstruction errors, Lavergne et al. (2005) introduce the robust multi-inversion – RMI scheme, adding a compensation loop in the feedback linearization loop to mitigate the effects of modeling errors. Exact feedback linearization is explored by Soares et al. (2011) in trajectory tracking control of a mobile robot whose controller gains are found via linear matrix inequalities – LMIs. A biomimicry of the social bacterial foraging approach to develop an indirect adaptive controller is found in (Banerjee et al., 2011), whereas a compensation loop based upon the RMI approach is pursued in (Oliveira et al., 2015) using differential evolution (Chakraborty, 2008, Chap. 1) to find the controller gains. A scheme based on model reference adaptive control – MRAC and the evolving fuzzy participatory learning algorithm – ePL (Pedrycz and Gomide, 2007) appears in (Oliveira et al., 2017). Most current robust and adaptive control methods assume that all state variables are available for measurement, which often is not the case (Khalil, 2002, p. 610). Several processes have states that are inaccessible due to physical restrictions or the high cost of sensors. An alternative in these cases is to use state observers, provided that the process is observable (Ciccarella et al., 1993). An observer is a mathematical model that uses measurements of the process output and the control input to estimate the values of the state variables. Observers produce a set of sequential signals that are less susceptible to noise and disturbances than the real output measurements (Ellis, 2002). An example is the Luenberger observer for state estimation of linear systems (Chen, 2013). Linear observers compare the actual process output with the one produced by the model to yield an observer error.
The design of the observer allows tuning a linear gain to ensure the exponential decay of the observer error to zero. Krener and Respondek (1985) have shown that, for a specific class of nonlinear systems, the Luenberger observer can be used to estimate the state of a linearizable system whenever the model of the nonlinear system is in the canonical form, that is, the model has a representation in the form of a chain of integrators. Khalil (2002, p. 610) suggested the use of the high-gain observer – HGO with feedback-linearized systems. The HGO has the same structure as the Luenberger observer but differs from it in the tuning procedure. If the nonlinear system model is locally Lipschitz, then the HGO lessens the effect of uncertainties (Khalil, 2017a). The HGO has been used in nonlinear system control by many authors (Farza et al., 2011; Freidovich and Khalil, 2006; Khalil, 2017a). In (Freidovich and Khalil, 2006; Khalil, 2017a), an extended HGO is used to guarantee the convergence of the system to a reference signal. This useful characteristic comes from the use of an additional state corresponding to the tracking error integrator. Alternatively, Chaji and Sani (2015) use a high-gain observer with an input-output feedback linearization scheme to control the linear position of an electro-hydraulic servo. Guermouche et al. (2015) suggested a new control scheme that uses the high-gain observer associated with the super-twisting algorithm to control the angular position of a DC motor. Recently, Chen et al. (2016) and Kayacan and Fossen (2019) proposed the use of high-gain observers to estimate the unknown or feedback linearization mismatch error and cancel its effects in the control loop. This thesis aims at developing robust adaptive control approaches to address the fragility of feedback linearization due to modeling mismatches.
It introduces the use of evolving participatory learning and granular modeling frameworks to capture the missing dynamics of the controlled processes from data streams. The idea is to estimate modeling imprecision in real time to counteract its effects in the feedback control loop. The approaches suggested herein are evaluated using benchmarks found in the literature, such as the level control of a surge tank (Slotine and Li, 1991; Banerjee et al., 2011; Wang, 1996; Franco et al., 2016), the angular position control of a fan and plate system (Kungwalrut et al., 2011; Simas et al., 1998; Dincel et al., 2014), knee joint control with functional electrical stimulation (Davoodi and Andrews, 1998; Kirsch et al., 2017; Li et al., 2017; Previdi and Carpanzano, 2003), and the control of a DC motor-driven rigid arm (Guermouche et al., 2015; Morán and Viera, 2017; Bento et al., 2018; Beltran-Carbajal et al., 2014; Freidovich and Khalil, 2006; Khalil, 2017a). The results suggest that closed-loop control with feedback linearization associated with evolving participatory learning is a powerful method to improve the robustness and adaptability of feedback linearizable processes and systems.
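The fragility discussed in this section is easiest to see from the ideal control law itself. For a SISO system x' = f(x) + g(x)u, exact feedback linearization chooses u = (v - f(x))/g(x), so the closed loop reduces to the linear dynamics x' = v; any mismatch between the model (f, g) and the true plant leaves a residual nonlinearity. The sketch below uses a hypothetical first-order plant for demonstration, not one of the thesis benchmarks:

```python
def efl_control(x, v, f, g, eps=1e-9):
    """Exact feedback linearization for x' = f(x) + g(x) u:
    u = (v - f(x)) / g(x) cancels the nonlinearity, so that x' = v."""
    gx = g(x)
    if abs(gx) < eps:
        raise ValueError("g(x) ~ 0: the linearizing control law is singular")
    return (v - f(x)) / gx

# Illustrative plant x' = -x**3 + 2u (assumed for this example only)
f = lambda x: -x ** 3
g = lambda x: 2.0

x, v = 1.5, -0.7                      # current state, desired linear dynamics x' = v
u = efl_control(x, v, f, g)
closed_loop_xdot = f(x) + g(x) * u    # equals v exactly when the model is exact
```

If the true plant were, say, x' = -1.1 x**3 + 2u while the controller still used f(x) = -x**3, the closed-loop derivative would differ from v by the uncancelled term -0.1 x**3; estimating and compensating exactly such residuals is the role of the granular learning loop developed in this thesis.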

1.2 Objective

The main objective of the thesis is to develop novel approaches to improve the robustness and adaptability of nonlinear control systems designed for feedback linearizable processes using evolving granular modeling. Several research questions arose during the development of this work. They are:

∙ how to develop adaptive control using participatory learning;

∙ how to estimate imprecision of models using rule-based granular models;

∙ how to define the data space and how to granulate the data space;

∙ how to evaluate and bound modeling errors in closed-loop;

∙ how to analyze the closed-loop stability;

∙ how to adapt the controller when the system dynamics are unknown;

∙ how to use state estimation when states are not available;

∙ how the evolving participatory learning compares with alternative procedures;

∙ what are the conditions to use the novel robust adaptive controller in actual plants.

1.3 Contributions and Publications

The contributions of this thesis can be summarized by three central control approaches: robust granular feedback linearization, robust evolving granular feedback linearization, and robust evolving granular feedback linearization with high-gain observers for feedback linearizable systems. The robust granular feedback linearization – RGFL introduces a novel control approach that employs the evolving participatory learning – ePL algorithm to improve the robustness and adaptability of closed-loop feedback linearizable systems. A new framework is developed to map data space granules to the space of control inputs. Information granulation means clustering the data space online, using data streams to build fuzzy functional rules through participatory learning. Each cluster of the data space is a granule, and each granule has an associated fuzzy functional rule. Clusters induce the membership functions of the rule antecedents, and affine functions form the rule consequents. This approach assumes the availability of a precise model of the process and of the state variable values. Modeling imprecision is represented as deviations from known nominal process models. An analytical framework is developed to analyze the stability of the closed-loop system using Lyapunov theory. The RGFL controller is then extended, assuming that no information about the system dynamics is available. Accordingly, the data space and data granulation are modified, and a new feedback linearization law, akin to the certainty equivalence principle, is developed. A novel mechanism to update the parameters of the affine consequents of the fuzzy functional rules is developed. The result is the robust evolving granular feedback linearization – ReGFL controller. The advantage of the ReGFL controller over the RGFL relies on the fact that ReGFL does not require the parameters of the closed-loop operation to be specified.
The robust evolving granular feedback linearization with high-gain observers – RegHGO further extends the RGFL, assuming that the system states are unavailable for measurement but that information about the system dynamics is available. In this case, the data space becomes a space of estimated states, and granulation is done using state estimates. The RegHGO controller is obtained by adding a high-gain observer to the control loop.

The publications produced as a result of the research reported in this thesis are listed next.

Book chapter

L. Oliveira, A. Bento, V. Leite, F. Gomide (2019) “Robust Evolving Granular Feedback Linearization”. In: Kearfott R., Batyrshin I., Reformat M., Ceberio M., Kreinovich V. (eds) Fuzzy Techniques: Theory and Applications. IFSA/NAFIPS 2019. Advances in Intelligent Systems and Computing, vol 1000. Springer, Cham.

Journals

L. Oliveira, A. Bento, V. Leite and F. Gomide, “Evolving granular feedback linearization: Design, analysis, and applications”, Applied Soft Computing Journal (2019) 105927, https://doi.org/10.1016/j.asoc.2019.105927.

L. Oliveira, A. Bento, V. Leite and F. Gomide, (2019) “Evolving granular control with high-gain observers for feedback linearizable nonlinear systems”, Evolving Systems, 2019. (Submitted)

International conferences

L. Oliveira, V. Leite, J. Silva and F. Gomide, (2017) “Granular evolving fuzzy robust feedback linearization”, Evolving and Adaptive Intelligent Systems (EAIS), Ljubljana, 2017, pp. 1-8, doi: 10.1109/EAIS.2017.7954821.

L. Oliveira, A. Bento, V. Leite, F. Gomide (2019) "Robust Evolving Granular Feedback Linearization". International Fuzzy Systems Association World Congress and North American Fuzzy Information Processing Society (IFSA/NAFIPS), Lafayette, 2019, pp. 1-12.

L. Oliveira, A. Bento, V. Leite and F. Gomide (2019) "Robust Granular Feedback Linearization". International Conference on Fuzzy Systems (FUZZ-IEEE), New Orleans, 2019, pp. 1-6.

A. Bento, L. Oliveira, V. Leite, I. Rúbio Scola and F. Gomide (2019) "High-gain observer based robust evolving granular feedback linearization". XIV Brazilian Conference on Dynamics, Control and Applications, São Carlos, 2019, pp. 1-7.

A. Bento, L. Oliveira, V. Leite and F. Gomide (2020) "Comparisons of robust methods on feedback linearization through experimental tests". 21st IFAC World Congress, Berlin, 2020. (Submitted)

National conferences

L. S. Oliveira, V. J. S. Leite, J. C. Silva and F. A. C. Gomide (2017) "Robustez em linearização por realimentação granular evolutiva", XIII Simpósio Brasileiro de Automação Inteligente (SBAI), Porto Alegre, 2017, pp. 1739-1746.

J. Silva, L. Oliveira, F. Gomide and V. Leite (2018) "Avaliação experimental da linearização por realimentação granular evolutiva", Fifth Brazilian Conference on Fuzzy Systems (CBSF), Fortaleza, 2018, pp. 359-370.

A. Bento, L. Oliveira, V. Leite and F. Gomide (2019) "Linearização por realimentação granular robusta com algoritmo evolutivo Takagi-Sugeno: Análise e avaliação de desempenho", XIV Simpósio Brasileiro de Automação Inteligente (SBAI), Ouro Preto, 2019, pp. 1-7.

1.4 Organization

This thesis is organized into five chapters, as follows.

∙ This chapter presents the general statement of the problem addressed in this thesis, its main objectives, a summary of its main contributions, and a list of the publications produced during the research period.

∙ Chapter 2 reviews the methods and techniques needed to develop the approaches proposed in this thesis. These include the notions of feedback linearization, granular computing with focus on evolving Takagi-Sugeno – eTS modeling, evolving participatory learning – ePL modeling, and the high-gain state observer.

∙ Chapter 3 introduces the robust granular feedback linearization – RGFL controller. Disturbances originated by parametric imprecision and dynamics neglected during the modeling process are considered. An additional control signal produced by the ePL algorithm is introduced in the feedback linearization closed loop. A mathematical framework for stability analysis is developed based on Lyapunov theory. Simulation results concerning the level control of a surge tank and the angular position control of a knee joint show how RGFL performs. An experimental test is also reported for real-time level control of the surge tank. The RGFL control approach is evaluated against the evolving Takagi-Sugeno – eTS alternative.

∙ Chapter 4 develops the robust evolving granular feedback linearization – ReGFL controller. The control problem is formulated using the notion of input-output feedback linearization, and an approach akin to the certainty equivalence principle is introduced. ReGFL assumes that the model of the controlled process is unknown and dispenses with the exact feedback linearization control law. In this case, ePL produces the closed-loop control signal. The performance of the proposed approach is evaluated using the surge tank.

∙ Chapter 5 introduces the high-gain state observer in the ReGFL approach, which assumes that the system states are unavailable. The robust evolving granular feedback linearization with high-gain observer – RegHGO uses estimated states to build the data space. The regulation of a fan and plate system and the tracking problem of a DC motor-driven robotic arm are used to evaluate the closed-loop performance of RegHGO.

∙ Chapter 6 concludes the thesis, summarizing its contributions and suggesting future research directions.

2 Methodological Background

This chapter reviews the methods and techniques used to develop the robust granular feedback linearization control approach. It starts with a brief review of exact feedback linearization, recalls the class of evolving fuzzy functional models called evolving Takagi-Sugeno, and presents the notion of evolving participatory learning and its algorithms. A short review of state observers and state estimation is also given.

2.1 Exact Feedback Linearization

The complexity of contemporary industrial processes and systems in energy, aircraft, robotics, communications, and transportation is expanding, and the nonlinear design of closed-loop controllers (Khalil, 2002) has become critical to enhance performance and to ensure robust and smooth operation. Exact feedback linearization – EFL concerns a modern view of geometric nonlinear control theory. Feedback linearization began with attempts to extend the concepts of controllability and observability of linear control theory to the nonlinear case (Guardabassi and Savaresi, 2001). EFL is a nonlinear control technique (Khalil, 2002; Isidori, 1995; Slotine and Li, 1991) whose purpose is to exactly cancel the nonlinearities of a nonlinear system or process to come up with an equivalent linear system, to which linear control laws and design tools can be applied. Consider single-input single-output – SISO nonlinear systems of the form:

ẋ = f(x) + g(x)u,  y = h(x),   (2.1)

where x = [x_1  x_2  ···  x_n]^T ∈ D ⊆ R^n is the state vector, u and y are the input and output of the system, respectively, h(x): D → R is the output function, and f(x) and g(x): D ⊆ R^n → R^n are nonlinear functions of the states. Assume that h(x), f(x), and g(x) are smooth vector fields on R^n, where by a smooth function we mean an infinitely differentiable function (Sastry, 1999, pp. 385). If the system (2.1) has certain structural properties, we can cancel the nonlinearities by means of a state feedback control law (Isidori, 1995; Khalil, 2002):

u = −L_f^r h(x) / (L_g L_f^{r−1} h(x)) + [1 / (L_g L_f^{r−1} h(x))] v   (2.2)

and a diffeomorphism

z_d = M(x) = [h(x)  L_f h(x)  ···  L_f^{r−1} h(x)]^T,   (2.3)

where r is the relative degree of the system, v is the external reference input, and L_f h(x) = (∂h(x)/∂x) f(x) is called the Lie derivative of h with respect to f, or along f (Sastry, 1999)¹. This notation is convenient when it is necessary to repeat the computation of a sequence of derivatives with respect to the same vector field or a new one (Khalil, 2002). For instance, we have:

L_g L_f h(x) = (∂(L_f h(x))/∂x) g(x),
L_f² h(x) = L_f L_f h(x) = (∂(L_f h(x))/∂x) f(x),
L_f^r h(x) = L_f L_f^{r−1} h(x) = (∂(L_f^{r−1} h(x))/∂x) f(x),
L_f^0 h(x) = h(x).

In this way, plugging (2.2) into (2.1) and using the diffeomorphism (2.3), we obtain the closed-loop system

z_d^{(n)} = v,
y = z_{d1},   (2.4)

which is linear and controllable, provided that g(x) ≠ 0 ∀x ∈ D ⊆ R^n. Once the linear system is obtained, we can impose new feedback controls (Isidori, 1995), for instance,

v = Kz_d,   (2.5)

with K = [k_1  k_2  ···  k_n] chosen to assign a specific set of eigenvalues, that is, such that the closed-loop system has all its roots strictly in the left-half complex plane, which leads to exponentially stable dynamics (Slotine and Li, 1991).
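As a minimal numerical sketch of the idea, consider a hypothetical pendulum-like plant (the model, constants, and gains below are illustrative assumptions, not an example from this thesis). The control law cancels the nonlinearities exactly and then stabilizes the resulting chain of integrators with linear state feedback, written here with the sign convention v = −Kz_d:

```python
import numpy as np

# Hypothetical pendulum: th_ddot = -a*sin(th) - b*th_dot + c*u (assumed model)
a, b, c = 10.0, 1.0, 10.0
K = np.array([4.0, 4.0])            # closed-loop poles at s = -2 (double)

def u_efl(x):
    """Cancel the nonlinearities exactly, then apply v = -K z_d."""
    th, th_dot = x
    v = -K @ x                       # linear feedback on the linearized system
    return (v + a*np.sin(th) + b*th_dot) / c

x, dt = np.array([1.0, 0.0]), 1e-3   # start 1 rad away from the origin
for _ in range(10_000):              # 10 s of Euler integration
    u = u_efl(x)
    x = x + dt*np.array([x[1], -a*np.sin(x[0]) - b*x[1] + c*u])

print(np.linalg.norm(x))             # ~0: closed loop behaves as z'' = -4*z' - 4*z
```

Because the cancellation uses the exact model, the closed loop is linear regardless of how far the state is from the origin, which is precisely what distinguishes EFL from Jacobian linearization.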

2.1.1 Reference Tracking

Without loss of generality, assume that the nonlinear system (2.1) has relative degree 푟 = 푛. Thus it can be rewritten as (Khalil, 2002, pp. 506):

ẋ = Ax + Bγ(x)[u − α(x)],  y = Cx,   (2.6)

¹ For more details, see (Yano, 2015).

where A ∈ R^{n×n}, B ∈ R^{n×1}, and C ∈ R^{1×n} are the system matrices. The functions α(x): R^n → R and γ(x): R^n → R, with γ(x) ≠ 0 ∀x ∈ D, are nonlinear and defined in a set D ⊂ R^n that contains the origin. In this way, the nonlinear functions can be rewritten as f(x) = Ax − Bγ(x)α(x) and g(x) = Bγ(x). If the pair (A, B) is controllable, then the system (2.6) is feedback linearizable, and there exists a diffeomorphism (2.3) that transforms the system into

ż_d = A_c z_d + B_c γ(x)[u − α(x)],
y = C_c z_d,   (2.7)

where A_c ∈ R^{n×n}, B_c ∈ R^{n×1}, and C_c ∈ R^{1×n} form a canonical representation of a chain of n integrators, with γ(x) = L_g L_f^{r−1} h(x) and α(x) = −L_f^r h(x) / (L_g L_f^{r−1} h(x)) (Khalil, 2002, pp. 516). To guarantee that the system output tracks the reference signal r(t), we assume that:

∙ r(t) is continuous, bounded for all t ≥ 0, and infinitely differentiable. Therefore, it can be written as r(t) = [r  ṙ  ···  r^{(n−1)}]^T,

∙ all signals r, ṙ, ···, r^{(n)} are available online.

In this case, we may define new variables:

e = z_d(t) − r(t) = [e  ė  ···  e^{(n−1)}]^T,   (2.8)

where e ∈ D_e ⊆ R^n is the reference tracking error for each state x_i, described in the coordinate system z_d through the diffeomorphism (2.3). Note that the tracking error vector lies in a subset of D, that is, D_e ⊆ D, obtained by translating the reference from the domain of the original system (2.7). Plugging (2.8) into (2.7), the error dynamics is described by:

ė = A_c e + B_c γ(x)[u − α(x)] − B_c r^{(n)}(t).   (2.9)

Assuming that the control signal is computed by (2.2), with β(x) = γ^{−1}(x), and considering v = r^{(n)}(t) − Ke in (2.9), the dynamics of the tracking error becomes:

ė = (A_c − B_c K)e,   (2.10)

with

A_c =
[0 1 0 ··· 0]
[0 0 1 ··· 0]
[⋮ ⋮ ⋮ ⋱ ⋮]
[0 0 0 ··· 1]
[0 0 0 ··· 0]

and B_c = [0 0 ··· 0 1]^T,

where the closed-loop matrix (A_c − B_c K) is Hurwitz. Figure 2.1 shows the closed-loop system for reference tracking with EFL.

[Figure 2.1 shows the closed-loop block diagram: the error e = z_d − r(t) feeds the linear law v = r^{(n)}(t) − Ke; the feedback linearization law u = α(x) + β(x)v drives the plant ẋ = Ax + Bγ(x)[u − α(x)], y = Cx; and the diffeomorphism M(x) maps the measured state x into z_d.]

Figure 2.1 – Reference tracking in exact feedback linearization.
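The tracking scheme of Figure 2.1 can be sketched numerically. The plant below is a hypothetical pendulum with relative degree n = 2 (the model, gains, and reference are assumptions made only for illustration), tracking r(t) = sin(t) through v = r̈(t) − Ke:

```python
import numpy as np

# Hypothetical plant: th_ddot = -a*sin(th) - b*th_dot + c*u, output y = th
a, b, c = 10.0, 1.0, 10.0
K = np.array([9.0, 6.0])                    # (A_c - B_c K) poles at s = -3

dt, x, errs = 1e-3, np.array([0.0, 0.0]), []
for k in range(10_000):                     # 10 s, reference r(t) = sin(t)
    t = k*dt
    r, r_dot, r_ddot = np.sin(t), np.cos(t), -np.sin(t)
    e = x - np.array([r, r_dot])            # tracking error, eq. (2.8)
    v = r_ddot - K @ e                      # v = r^(n)(t) - K e
    u = (v + a*np.sin(x[0]) + b*x[1]) / c   # feedback linearization, eq. (2.2)
    x = x + dt*np.array([x[1], -a*np.sin(x[0]) - b*x[1] + c*u])
    errs.append(abs(e[0]))

print(max(errs[-1000:]))   # error over the last second is small after the transient
```

After the exponential transient governed by (A_c − B_c K), only a discretization-level residual error remains, consistent with the error dynamics (2.10).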

2.2 Granular Computing

In the last decade, information granulation has emerged as a powerful tool for data analysis and information processing, in line with the way humans process information. Perception arises by structuring our knowledge, attitudes, and acquired evidence in terms of information granules that offer abstractions of the complex world and its phenomena. Being abstract constructs, information granules and their processing, referred to as granular computing, provide problem solvers with a conceptual and algorithmic framework to deal with several real-world problems (Pedrycz and Chen, 2011). Granular computing is a term commonly associated with the area of intelligent computing, directly related to the pioneering studies by Zadeh (Pedrycz, 2013). The notion of granulation comes from the direct and immediate requirement to abstract and summarize information and data to support processes of comprehension and decision-making (Pedrycz and Gomide, 2007).

In this work, the notion of information and data granules is understood as clustering data and processing clustered data. The level of information granularity depends on the problem in which the granules are used (Bargiela and Pedrycz, 2003). Generally, in the modeling and control areas, the data space is assembled from input and output data. A collection of granules or clusters partitions the data space (Silva et al., 2013) and identifies a corresponding cluster structure. For instance, the level control of surge tanks can use the level measurement and the tracking error values as the input data space. We can cluster the data space and assign to each cluster a granule with fuzzy values such as low, medium, high, and very high. Granules can also encode temporal information of the inputs and outputs. A granular environment G can be abstracted as:

G = ⟨x, 𝒢, V⟩,   (2.11)

where V = [v_1, v_2, ···, v_c] is a family of reference information granules, v_i = [v_1, v_2, ···, v_p], 𝒢 is a formal framework of information granules, x = [x_1, x_2, ···, x_p] is the input data, and p is the input space dimension.

When dealing with numeric data, we are concerned with their representation in terms of a collection of information granules (Pedrycz, 2013). If granulation is formally done within the framework of fuzzy set theory, then we can describe this representation process as a way to express the input data x in terms of the granules, and depict the result in a p-dimensional hypercube:

𝒢: x ∈ R^p → v ∈ [0,1]^p.   (2.12)

Furthermore, v is understood as a cluster center representing the cluster. When reconstruction of x from the region of information granules or clusters is necessary, the degranulation 𝒢^{−1} performs the inverse of the granulation operation:

𝒢^{−1}: v ∈ [0,1]^p → x ∈ R^p.   (2.13)

These steps specify maps between the data space and the cluster space. Figure 2.2 depicts the granulation and degranulation mechanism.

[Figure 2.2 depicts the two mappings: granulation 𝒢 takes x from the data space to the space of information granules V ⊆ [0,1]^p, and degranulation 𝒢^{−1} maps back to the data space.]

Figure 2.2 – Granulation-degranulation.

Note that the mappings enable us to compare the capability of the clusters to reflect the structure of the original data. In this sense, we can check whether the recovered data x̂ differ from the original data x. In practice, x̂ = 𝒢^{−1}(𝒢(x)), with the granulation and degranulation described by 𝒢 and 𝒢^{−1}, respectively. Clustering methods are used to identify groups of similar objects in multivariate data sets collected from fields such as marketing, biomedicine, and geospatial analysis. There are different types of clustering methods, including partitioning methods (Hand, 2013), hierarchical clustering (Arabie et al., 1996), fuzzy clustering (Pedrycz and Gomide, 2007), density-based clustering (Sander et al., 1998), and model-based clustering (Fraley and Raftery, 2002). In this thesis, we focus on fuzzy clustering.

In fuzzy clustering, a way to granulate a data set X = [x_1, x_2, ···, x_N] into c clusters, represented by fuzzy sets with membership functions μ_i(x) and cluster centers v_i, i = 1, ···, c, is to solve the optimization problem (Pedrycz, 2013):

min_{μ_i, v_i}  Σ_{j=1}^{N} Σ_{i=1}^{c} μ_i(x_j)^m ‖x_j − v_i‖²
subject to  Σ_{i=1}^{c} μ_i(x_j) = 1,  0 < Σ_{j=1}^{N} μ_i(x_j) < N.   (2.14)

If ‖x_j − v_i‖ is the Euclidean distance, then the granulation step computes the cluster centers v_i and membership functions μ_i(x_j) as follows:

μ_i(x_j) = 1 / Σ_{l=1}^{c} (‖x_j − v_i‖ / ‖x_j − v_l‖)^{2/(m−1)},   (2.15)

v_i = Σ_{j=1}^{N} μ_i(x_j)^m x_j / Σ_{j=1}^{N} μ_i(x_j)^m,   (2.16)

where m > 1 is the fuzzification coefficient and N is the number of data points. The degranulation phase aims to reconstruct the original input data x_j from μ_i and v_i. Therefore, we have (Pedrycz and Gomide, 2007):

x̂_j = Σ_{i=1}^{c} μ_i(x_j) v_i / Σ_{i=1}^{c} μ_i(x_j).   (2.17)
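Equations (2.15)-(2.17) can be sketched with a plain fuzzy c-means loop. The two-blob synthetic data set, the initial centers, and the parameter choices below are assumptions made only to illustrate the granulation-degranulation cycle:

```python
import numpy as np

def fcm(X, V, m=2.0, iters=50):
    """Plain fuzzy c-means: alternate membership (2.15) and center (2.16) updates."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d**(2/(m-1)) * np.sum(d**(-2/(m-1)), axis=1, keepdims=True))
        V = (U**m).T @ X / np.sum(U**m, axis=0)[:, None]   # eq. (2.16)
    return U, V

# Two well-separated blobs in [0,1]^2 (synthetic data, an assumption)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.03, (50, 2)), rng.normal(0.8, 0.03, (50, 2))])
U, V = fcm(X, V=np.array([[0.0, 0.0], [1.0, 1.0]]))   # rough initial centers

Xhat = U @ V / U.sum(axis=1, keepdims=True)       # degranulation, eq. (2.17)
print(np.sort(V[:, 0]))                           # centers near 0.2 and 0.8
print(np.mean(np.linalg.norm(X - Xhat, axis=1)))  # small reconstruction error
```

The gap between X and the degranulated X̂ is exactly the information lost by representing the data through two granules, which is the comparison suggested above.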

2.2.1 Evolving Takagi-Sugeno Models

Evolving Takagi-Sugeno – eTS modeling was developed in (Angelov and Filev, 2004) in the early 2000s. The eTS modeling uses an online clustering method with the ability to update existing clusters and to create new ones. The idea is to translate each cluster and its respective local model into a fuzzy functional rule. The online nature makes eTS different from the classic Takagi-Sugeno – TS fuzzy modeling approach, in which the fuzzy rules and the rule base are developed for a fixed data set and remain fixed afterwards (Lughofer, 2011). The eTS algorithm uses fuzzy functional rules of the form:

R_i^k: IF x^k is 𝒜_i^k THEN y_i^k = γ_i^k [1  x^k]^T,  i = 1, ···, c^k,   (2.18)

where R_i^k is the i-th fuzzy rule, x^k ∈ R^p is the input data at step k, 𝒜_i^k is the membership function of the rule antecedent, y_i^k is the output of the i-th fuzzy rule, γ_i^k is a vector of parameters, p is the dimension of the input space, and c^k is the number of fuzzy rules at step k. The eTS method is based on the subtractive clustering algorithm, which allows the recursive estimation of the potential index for a new data sample (Angelov, 2013). The potential is a measure of compatibility between a new data sample x^k and the cluster centers x_i^{*k}. The learning process may start with an empty rule base (Angelov et al., 2004a).

The first data sample x^1 becomes the focal point of the first cluster, x_1^{*1}. The initial value of the potential of the first cluster is P_1 = 1, and the parameters of the local linear model associated with this rule are null, γ_1 = 0. Because the consequent parameters are updated using the recursive least squares – RLS algorithm, the covariance matrix starts with Q_1 = Ω I^{p×p}, where Ω is a large number. The remaining computations are done recursively as follows (Angelov and Filev, 2004):

P_z^k = (k − 1) / [(k − 1)(ϑ^k + 1) + σ^k − 2v^k],   (2.19)

in which

ϑ^k = Σ_{j=1}^{p+1} (x_j^k)²;  σ^k = Σ_{i=1}^{k−1} Σ_{j=1}^{p+1} (x_j^i)²;
v^k = Σ_{j=1}^{p+1} x_j^k β_j^k;  β_j^k = Σ_{i=1}^{k−1} x_j^i.

Note that ϑ^k and v^k are computed from the current data input x^k, while σ^k and β_j^k are updated recursively (Lughofer, 2011):

σ^k = σ^{k−1} + Σ_{j=1}^{p+1} (x_j^{k−1})²,
β_j^k = β_j^{k−1} + x_j^{k−1}.   (2.20)

The potentials of the cluster centers are recursively updated (Angelov et al., 2004a) as follows:

P_{x_i^*}^k = (k − 1) P_{x_i^*}^{k−1} / [k − 2 + P_{x_i^*}^{k−1} (1 + Σ_{j=1}^{p+1} (d_j^{k−1})²)],   (2.21)

where P_{x_i^*}^k is the potential of the cluster center x_i^{*k} at step k, and d_j^{k−1} = x_j^{k−1} − x_{ij}^{*k} denotes the projection on the x_j axis of the distance between the data sample x^k and the cluster center x_i^{*k}.

The potential P_z^k is compared with the potentials P_{x_i^*}^k of the existing clusters, and certain conditions must be satisfied to update the fuzzy rule base. If the MODIFY condition is satisfied, then the new data replaces the most compatible cluster center:

x_s^{*k} ← x^k,

where s is such that

s = argmax_{j = 1, ···, c^k} {P_{x_j^*}^k}.

The consequent parameters are updated using the RLS algorithm (Ljung, 1999). If the UPGRADE condition is satisfied, then a new fuzzy rule is added to the rule base. If neither condition holds, then the current input data is ignored and the rule base remains unchanged. The literature suggests several ways to define the MODIFY and UPGRADE conditions (Angelov et al., 2004a; Lughofer, 2011; Angelov, 2013).

If the membership functions 𝒜_i^k of the antecedents are Gaussian, then the output of the model is computed using (Angelov et al., 2004b):

𝒜_i^k(x^k) = e^{−‖x^k − x_i^{*k}‖² / (4σ²)} = μ_i^k,   (2.22)

y^k = Σ_{i=1}^{c^k} μ_i^k γ_i^k [1  x^k]^T / Σ_{i=1}^{c^k} μ_i^k,   (2.23)

where μ_i^k is the activation degree of the i-th fuzzy rule, i = 1, ···, c^k, and σ is a positive constant that bounds the influence zone of the i-th rule.
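The inference of (2.22)-(2.23) reduces to a membership-weighted average of the local affine models. The two rules below (centers, consequent parameters, and σ) are made-up values used only to illustrate the computation:

```python
import numpy as np

# Two illustrative fuzzy functional rules (all values are assumptions)
centers = np.array([[0.2, 0.2], [0.8, 0.8]])           # focal points x_i*
gammas  = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]) # affine consequents gamma_i
sigma   = 0.3

def ts_output(x):
    mu = np.exp(-np.sum((x - centers)**2, axis=1) / (4*sigma**2))  # eq. (2.22)
    xe = np.concatenate(([1.0], x))      # extended input [1 x]^T
    y_i = gammas @ xe                    # local rule outputs
    return mu @ y_i / mu.sum()           # weighted average, eq. (2.23)

print(ts_output(np.array([0.2, 0.2])))   # ~0.32: rule 1 dominates (its model gives 0.2)
print(ts_output(np.array([0.8, 0.8])))   # ~1.68: rule 2 dominates (its model gives 1.8)
```

Near each focal point one activation degree dominates, so the global output interpolates smoothly between the two local affine models.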

2.2.2 Evolving Participatory Learning

Evolving systems are self-adaptive structures with learning and summarization capabilities (Leite et al., 2015). They update their structural components and parameters on demand using stream data. The stream carries data about the process inputs, states, and output variables, as well as about the process behavior and operating conditions. Evolving systems adapt their range of influence and repeatedly update their structure and parameters to accommodate new information conveyed by the data (Lughofer, 2011).

A self-adaptive modeling framework is called participatory if the usefulness of each input datum in contributing to the learning process depends upon its compatibility with the current model structure. In fuzzy rule-based modeling, the number and the type of fuzzy rules in the rule base specify the model structure. The antecedent parts of the fuzzy rules perform a partition of a p-dimensional input data space and, vice versa, if we are given a fuzzy partition, then we may construct a fuzzy rule for each fuzzy region of the data space.

Information granularity is one of the most fundamental notions in the search for structure in data. Given a finite set of data, clustering aims at finding cluster centers v_i to properly characterize relevant fuzzy sets 𝒜_i in the data space. These are required to form a fuzzy c-partition (George and Yuan, 1995) called a granulation of the data space.

Evolving participatory learning (ePL) is a two-step self-adaptive modeling approach introduced in (Lima et al., 2006). The first step uses participatory learning to cluster stream data online and identify the model structure. Because the number of rules is the same as the number of clusters, the cluster structure settles the model structure in a one-cluster, one-rule framework. The second step develops a fuzzy functional rule for each cluster found in the first step (Oliveira et al., 2017; Lughofer, 2011). Functional fuzzy rules are fuzzy rules whose antecedents are fuzzy sets, and the consequents are functions of the input variables:

R_i^k: IF x^k is 𝒜_i^k THEN ŷ_i^k = f_i(x^k),  i = 1, ···, c^k,

where R_i^k is the i-th fuzzy rule, c^k is the number of fuzzy rules at step k, x^k ∈ [0,1]^p is the input, ŷ_i^k is the output of the i-th rule, 𝒜_i^k is the fuzzy set of the antecedent whose membership function is 𝒜_i^k(x^k), and f_i(x^k) is a function of the input x^k. Similarly as in eTS, ePL modeling assumes f_i(x^k) affine, with parameters chosen to fit a local model for the i-th data cluster; that is, ePL also assigns to each granule a fuzzy functional rule with a local affine model (Lughofer, 2011; Oliveira et al., 2017). In ePL, the cluster centers v_i^k are the modal values of the Gaussian membership functions of the rule antecedent fuzzy sets, as in (2.22).

Cluster centers are such that v_i^k ∈ [0,1]^p. At each processing step after initialization, the ePL clustering algorithm verifies whether a new cluster must be created, whether an existing cluster should be updated to accommodate the new data, or whether redundant clusters should be deleted (Lima et al., 2010). The cluster structure is updated using a compatibility measure ρ_i^k ∈ [0,1] and an arousal index a_i^k ∈ [0,1], computed as follows:

ρ_i^k = 1 − ‖x^k − v_i^k‖ / √p,   (2.24)

a_i^{k+1} = a_i^k + ϑ(1 − ρ_i^k − a_i^k),   (2.25)

where ϑ ∈ [0,1] is the arousal rate. If the smallest arousal index value is higher than a threshold τ ∈ [0,1]:

min_{j = 1, ···, c^k} {a_j^{k+1}} > τ,

then a new cluster is created. Otherwise, the cluster center most compatible with the current input data is updated using:

v_s^{k+1} = v_s^k + ξ (ρ_s^k)^{(1 − a_s^{k+1})} (x^k − v_s^k),  s = argmax_{j = 1, ···, c^k} {ρ_j^k},   (2.26)

where ξ ∈ [0,1] is a learning rate. The parameters of the local affine models are updated using the RLS algorithm. The fuzzy rule base is then checked for redundant rules. The compatibility between cluster centers is computed using:

ρ_ij^k = 1 − ‖v_i^k − v_j^k‖ / √p,   (2.27)

where i = 1, ···, c^k − 1 and j = i + 1, ···, c^k. If the compatibility between v_i^k and v_j^k is higher than a threshold λ ∈ [0,1], that is, ρ_ij^k ≥ λ, then the cluster center v_j^k is declared redundant and is removed. Otherwise, the current cluster structure remains as it is. The overall output is computed as the weighted average of the individual rule outputs:

y^k = Σ_{i=1}^{c^k} μ_i^k ŷ_i^k / Σ_{i=1}^{c^k} μ_i^k.   (2.28)
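One structure-update step of the ePL clustering, following (2.24)-(2.26), can be sketched as below. The redundancy check (2.27) and the consequent RLS update are omitted for brevity, and the arousal rate, threshold, and learning rate are illustrative choices, not values from this thesis:

```python
import numpy as np

theta, tau, xi = 0.5, 0.2, 0.3   # arousal rate, threshold, learning rate (assumed)

def epl_step(x, centers, arousal):
    p = len(x)
    rho = 1 - np.linalg.norm(x - centers, axis=1) / np.sqrt(p)   # eq. (2.24)
    arousal = arousal + theta*(1 - rho - arousal)                # eq. (2.25)
    if arousal.min() > tau:                 # no cluster fits: create a new one
        centers = np.vstack([centers, x])
        arousal = np.append(arousal, 0.0)
    else:                                   # update the most compatible center
        s = rho.argmax()
        centers[s] += xi * rho[s]**(1 - arousal[s]) * (x - centers[s])  # (2.26)
    return centers, arousal

centers, arousal = np.array([[0.2, 0.2]]), np.zeros(1)
for x in [np.array([0.25, 0.22]), np.array([0.9, 0.9]), np.array([0.88, 0.92])]:
    centers, arousal = epl_step(x, centers, arousal)

print(len(centers))   # 2: a second cluster was created near (0.9, 0.9)
```

The first sample is compatible with the existing cluster and only nudges its center; the sample at (0.9, 0.9) raises the arousal index above τ, so a new cluster is created, exactly the participatory behavior described above.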

2.3 State Observers

This section reviews the concepts of observability and state observers. Observability concerns whether or not the initial state can be recovered from the output of a linear system (Chen, 2013). Observers can be used to augment or replace sensors in control systems. Observers are algorithms that combine measured signals with process knowledge to estimate the values of the state variables (Ellis, 2002). The following definition and theorem summarize the notion of observability for continuous, linear time-invariant systems and give a way to characterize observable linear systems (Simon, 2006).

Definition 2.3.1. A continuous time-invariant system is observable if, for any initial state x(0) and any time t > 0, the initial state can be uniquely determined from the input u(δt) and the output y(δt) for all δt ∈ [0, t].

Definition 2.3.1 states that if a linear time-invariant system is observable, then any initial state can be found from input and output measurements. We can check the observability of continuous linear time-invariant systems using the following result (Ellis, 2002; Khalil, 2002; Simon, 2006):

Theorem 2.3.1. Consider the 푛-dimensional continuous linear time-invariant system

ẋ = Ax + Bu,  y = Cx,   (2.29)

where A ∈ R^{n×n}, B ∈ R^{n×1}, and C ∈ R^{1×n}, and let the matrix Q be

Q =
[C]
[CA]
[⋮]
[CA^{n−1}].   (2.30)

The system is observable if and only if Q has rank n.
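Theorem 2.3.1 gives a direct numerical test: stack C, CA, ..., CA^{n−1} and check the rank. The double-integrator model below is an illustrative assumption, not a system from this thesis:

```python
import numpy as np

# Double integrator: position x1 and velocity x2 (illustrative model)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])          # measure position only

n = A.shape[0]
Q = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])  # eq. (2.30)
print(np.linalg.matrix_rank(Q) == n)   # True: position output reveals velocity

C_bad = np.array([[0.0, 1.0]])      # measuring velocity alone loses the position
Q_bad = np.vstack([C_bad @ np.linalg.matrix_power(A, i) for i in range(n)])
print(np.linalg.matrix_rank(Q_bad) == n)   # False
```

The second case fails because the velocity output carries no information about the absolute position, so the initial state cannot be uniquely determined.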

Assume that (2.29) is observable. An observer is a structure that combines the output (sensor measurements) and the input (actuator signals) of a plant using a model of the plant (Ellis, 2002). The state estimation problem can be summarized as follows: develop estimates x̂(t) of the state x(t) from the input u(t), the output y(t), and the matrices A, B, and C. A solution is to duplicate the system using its model:

ẋ̂ = Ax̂ + Bu,  ŷ = Cx̂.   (2.31)

The result is the open-loop observer shown in Figure 2.3.

[Figure 2.3 shows the plant ẋ = Ax + Bu, y = Cx driven by the input u(t), in parallel with a model copy ẋ̂ = Ax̂ + Bu, ŷ = Cx̂ that produces the estimate x̂(t) without using the measured output y(t).]

Figure 2.3 – Open-loop state observer.

This methodology is called the open-loop state observer. If the initial states are the same, then this approach estimates the system states exactly, x(t) = x̂(t) for all t ≥ 0. There are two disadvantages of open-loop observers (Chen, 2013). First, the initial state must be computed every time the estimator is used, which is very inconvenient. Second, small differences between x(t₀) and x̂(t₀) may drive the estimates away from the actual state values. To avoid these disadvantages, Luenberger suggested a closed-loop state estimator of the following form (Ellis, 2002):

ẋ̂ = Ax̂ + Bu + L[y(t) − Cx̂],  ŷ = Cx̂,   (2.32)

where L ∈ R^{n×1} is the observer gain. Let the estimation error be:

x˜(푡) = x(푡) − ^x(푡). (2.33)

Thus, differentiating (2.33) and using (2.29) and (2.32), after some algebraic manipulation the observer error dynamics becomes:

ẋ̃(t) = (A − LC) x̃(t).   (2.34)

Therefore, if the closed-loop system matrix (A − LC) is Hurwitz, then the observer error approaches zero, lim_{t→∞} x̂(t) = x(t). Figure 2.4 shows the Luenberger observer.

[Figure 2.4 depicts the closed-loop estimator: the plant output y(t) is compared with Cx̂(t), and the difference, weighted by the gain L, corrects the model copy ẋ̂ = Ax̂ + Bu + L[y(t) − Cx̂].]

Figure 2.4 – Luenberger observer.

The Luenberger observer scheme is ideal for linear time-invariant systems. Nonlinear systems of the form (2.1) can benefit from state observers using mechanisms similar to those used for linear systems (Krener and Respondek, 1985; Freidovich and Khalil, 2008, 2006). The observer design requires the observability of the pair (A_c, C_c). The state observer is:

ẋ̂ = A_c x̂ + B_c [f_n(x̂) + g_n(x̂)u] + H(y − C_c x̂),   (2.35)

where f_n(x) and g_n(x) are known vector fields, and H is a high-gain matrix. The matrices A_c, B_c, and C_c, with the appropriate dimensions, represent the dynamics of a chain of n integrators:

A_c =
[0 1 0 ··· 0]
[0 0 1 ··· 0]
[⋮ ⋮ ⋮ ⋱ ⋮]
[0 0 0 ··· 1]
[0 0 0 ··· 0],

B_c = [0 0 ··· 0 1]^T, and C_c = [1 0 0 ··· 0].

The state estimation error x˜ = x − x^ yields the error dynamics:

ẋ̃ = (A_c − HC_c) x̃ + B_c δ,   (2.36)

where δ = [f(x) − f_n(x̂)] + [g(x) − g_n(x̂)]u can be viewed as a disturbance in the closed loop caused by modeling errors. The asymptotic stability of the error is guaranteed by a suitable choice of a Hurwitz polynomial²:

s^r + h_1 s^{r−1} + ··· + h_{r−1} s + h_r = 0,   (2.37)

where H = [h_1  h_2  ···  h_r]^T. Choosing gains such that h_r ≫ h_{r−1} ≫ ··· ≫ h_1 forces the disturbances to decay rapidly (Khalil, 2002), that is, lim_{t→∞} δ(t) = 0 within a short interval of time. A possible choice of the high-gain parameters is:

H(ε) = [a_1/ε  a_2/ε²  ···  a_r/ε^r]^T,   (2.38)

² See (Khalil, 2017b, Chap. 2) for details.

with ε ∈ (0,1], and the a_i chosen such that the roots of

s^r + a_1 s^{r−1} + ··· + a_{r−1} s + a_r = 0   (2.39)

are in the open left-half plane. An alternative observer approach, called the extended high-gain observer, can be developed from (2.35) (Freidovich and Khalil, 2006; Khalil, 2017b). In this approach, an additional state is added to the chain of integrators to lift the closed-loop system to R^{n+1}. In this case, from (2.38)-(2.39), the state observer (2.35) becomes:

ẋ̂ = A_c x̂ + B_c [χ̂ + f_n(x̂) + g_n(x̂)u] + H(ε)(y − C_c x̂),
χ̂̇ = (a_{r+1}/ε^{r+1})(y − C_c x̂).   (2.40)

Therefore, the feedback linearization control law (2.2) can be expressed as:

u = (1/g(x̂)) (v − f(x̂) − χ̂).   (2.41)

The choice v = Kx̂, with K such that the nominal closed-loop system is asymptotically stable, ensures robust closed-loop behavior and leads to exponentially stable closed-loop trajectories.
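A minimal sketch of the high-gain observer (2.35) with the gain choice (2.38) is given below, for a hypothetical pendulum whose nominal model f_n has a slightly wrong damping term. All numerical values are assumptions used only to illustrate the mechanism:

```python
import numpy as np

eps = 0.05
a1, a2 = 2.0, 1.0                  # s^2 + 2s + 1: roots in the open left-half plane
H = np.array([a1/eps, a2/eps**2])  # high-gain vector, eq. (2.38)

def f_n(x):                        # nominal model: damping 0.8 instead of 1.0
    return -10.0*np.sin(x[0]) - 0.8*x[1]

x  = np.array([0.5, 0.0])          # true state (unknown to the observer)
xh = np.array([0.0, 0.0])          # observer state
dt = 1e-4
for k in range(int(2.0/dt)):
    u = 0.1*np.sin(2*np.pi*k*dt)   # arbitrary bounded input
    y = x[0]                       # only the output is measured
    x  = x  + dt*np.array([x[1], -10.0*np.sin(x[0]) - 1.0*x[1] + u])
    xh = xh + dt*(np.array([xh[1], f_n(xh) + u]) + H*(y - xh[0]))  # eq. (2.35)

print(np.abs(x - xh))              # estimation error stays small despite mismatch
```

The model mismatch enters the error dynamics (2.36) as the disturbance δ, and the high gains 1/ε, 1/ε² attenuate its effect, at the cost of a short peaking transient after initialization.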

2.4 Summary

This chapter has addressed the methods and algorithms that are essential for the development of the subsequent chapters. We reviewed the notions of feedback linearization, granular computing, evolving participatory learning, and state observers. Particular emphasis was given to the evolving Takagi-Sugeno – eTS and the evolving participatory learning – ePL modeling algorithms.

3 Robust Granular Feedback Linearization

The process of linearization by state feedback involves the exact cancellation of the nonlinearities expressed by the nonlinear functions f and g. Its success relies on a precise description of f and g, which is unlikely to happen in practice (Isidori, 1995; Khalil, 2002; Sastry, 1999). This chapter develops the robust granular feedback linearization control scheme. We state the assumptions, and then build the machinery needed to compensate for the effects of modeling mismatches in feedback linearization control.

3.1 Robust Granular Controller

Section 2.1 introduced the notions of exact feedback linearization and closed-loop tracking control. Exact feedback linearization control assumes that a precise model of the process is available. The tracking error is:

e = z_d(t) − r(t) = [e  ė  ···  e^{(n−1)}]^T,   (3.1)

where r(t) = [r  ṙ  ···  r^{(n−1)}]^T is the reference, and e ∈ D_e ⊆ R^n is the error in the coordinate system z_d through the diffeomorphism (2.3). The subset D_e is a vector displacement of D. From (3.1) and (2.7), the error dynamics becomes:

ė = A_c e + B_c γ(x)[u − α(x)] − B_c r^{(n)}(t).   (3.2)

If the control is computed as in (2.2), then using α(x) = −L_f^r h(x) / (L_g L_f^{r−1} h(x)), γ^{−1}(x) = 1 / (L_g L_f^{r−1} h(x)), v = r^{(n)}(t) − Ke, and (3.2), the tracking error dynamics becomes:

ė = (A_c − B_c K)e.   (3.3)

However, if the nonlinearities 훼(x) and 훾(x) are not known precisely, but have their values affected by the disturbances Δ훾(x) and Δ훼(x) such that

훾(x) = 훾푛(x) + Δ훾(x),

α(x) = α_n(x) + Δα(x),   (3.4)

where γ_n(x) and α_n(x) are the known nominal nonlinearities of the model used to design the control law, then replacing (2.2) in (2.7), considering (3.4), and assuming v = r^{(n)}(t) − Ke, gives:

ė = (A_c − B_c K)e + B_c w,   (3.5)

where w = Δγ(x)[u − α_n(x) − Δα(x)] − γ_n(x)Δα(x). Note that w can be viewed as an exogenous disturbance of the closed-loop system that may cause unexpected behavior and closed-loop instability (Franco et al., 2006; Guillard and Boulès, 2000; Leite et al., 2013). The additive uncertainties Δγ(x) and Δα(x) in (3.4) cause mismatches between the tracking error dynamics (3.3) of the actual system and the tracking error dynamics obtained when the nonlinearities of the nominal system are used to design the feedback linearized control. Looking at (3.3) and (3.5), we see that the mismatch is due to w in (3.5). Because w is unknown, the effects caused by Δγ(x) and Δα(x) in w can be reduced if an estimate ŵ is obtained. Therefore, a compensation input signal of the form:

u_c = −γ_n^{−1}(x) ŵ   (3.6)

is added to the control loop, as shown in Figure 3.1. In this case, the compensation signal

u_c is used to mitigate the effects on the closed loop caused by the additive disturbances shown in (3.4). The input signal u_c is computed in the RGFL Algorithm Loop (dashed blue box in Figure 3.1), as follows:

1. Get the current values of 푦, x, and 푟(푡).

2. Compute the tracking error 푒1 = 푟(푡) − 푦.

3. Compute estimate 푤^ using the ePL algorithm of Section 2.2.2.

4. Compute the compensation input signal using (3.6).
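The four steps above can be sketched as a single loop iteration. The function `epl_estimate` below is a hypothetical placeholder standing in for the ePL estimator of Section 2.2.2, not the actual algorithm:

```python
def epl_estimate(x, e1):
    """Hypothetical stand-in for the ePL estimate of the disturbance w."""
    return 0.1 * e1  # a fixed local model, for illustration only

def rgfl_step(y, x, r, gamma_n):
    """One iteration of the RGFL algorithm loop (steps 1-4)."""
    e1 = r - y                   # step 2: tracking error
    w_hat = epl_estimate(x, e1)  # step 3: estimate of w
    u_c = -w_hat / gamma_n       # step 4: compensation input (3.6)
    return u_c
```

In the actual scheme, `epl_estimate` evolves its rule base online from the state and error data, as detailed next.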

The value $\hat{w}$ is estimated by the ePL algorithm, which takes the state vector of the system ($\mathbf{x}$) and the tracking error ($e_1$) as antecedent variables of the fuzzy functional rules learned by ePL. Cluster centers $\mathbf{v}$ in the $\mathbf{z} = [\mathbf{x} \;\; e_1]$ data space are the modal values of the Gaussian membership functions of the fuzzy rule antecedents, with $\mathbf{v} \in [0,1]$ for all $\mathbf{x} \in \mathcal{D} \subseteq \mathbb{R}^n$ and $e_1 \in \mathcal{D}_e \subseteq \mathbb{R}$. These vectors are processed by the ePL algorithm at each sampling time, that is, at each step $k$. Therefore, whenever a new sample $\mathbf{z}^k$ is input at step $k$, we have a collection of $c^k$ fuzzy functional rules:

$$\mathcal{R}_i^k: \text{IF } \mathbf{z}^k \text{ is } \mathcal{A}_i^k \text{ THEN } \hat{w}_i^k = \pi_i^k \chi_i^k + K_p e_1, \quad i = 1, \ldots, c^k \quad (3.7)$$

developed by ePL to produce an estimate $\hat{w}_i^k$ of $w^k$. Here $\pi_i^k$ is the vector of coefficients of the $i$-th consequent affine model, $K_p$ is a proportional gain, and $\chi_i^k$ is a regressor vector built similarly as in autoregressive models with exogenous variables (ARX), namely:

$$\chi^k = \begin{bmatrix} \hat{w}^{k-1} & \cdots & \hat{w}^{k-n_{\hat{w}}} & \mathbf{x}^{k-1} & \cdots & \mathbf{x}^{k-n_x} \end{bmatrix}^T, \quad (3.8)$$

where $n_{\hat{w}}$ and $n_x$ are the orders of the respective terms. Note that the output $\hat{w}_i^k$ of the rule (3.7) is analogous to the output $y_i^k$ in (2.28), and likewise $f(\mathbf{z}^k)$ in (2.28) is analogous

[Figure 3.1 block diagram: the feedback linearization loop computes $u = \alpha(\mathbf{x}) + \gamma_n^{-1}(\mathbf{x})v$ with state feedback $v = -K\mathbf{e} + r^{(n)}$; the RGFL control algorithm loop (dashed) feeds $e_1$ and $\mathbf{x}$ to ePL to produce $\hat{w}$ and the compensation $u_c = -\gamma_n^{-1}(\mathbf{x})\hat{w}$, so the plant $\dot{\mathbf{x}} = f(\mathbf{x}) + g(\mathbf{x})u_s$, $y = x_1$, receives $u_s = u + u_c$.]

Figure 3.1 – Robust granular feedback linearization — RGFL.

to $f(\mathbf{z}^k, \hat{w}_i) = \pi_i^k \chi_i^k + K_p e_1$. The vector of coefficients of the $i$-th consequent affine model is updated using the RLS algorithm with forgetting factor (Ljung, 1999, pp. 363):

$$\Lambda_i^k = \frac{\Phi_i^{k-1}\chi^k}{(\chi^k)^T \Phi_i^{k-1} \chi^k + \zeta},$$
$$\pi_i^k = \pi_i^{k-1} + \Lambda_i^k K_p e_1,$$
$$\Phi_i^k = \frac{1}{\zeta}\left[\Phi_i^{k-1} - \frac{\Phi_i^{k-1}\chi^k (\chi^k)^T \Phi_i^{k-1}}{(\chi^k)^T \Phi_i^{k-1}\chi^k + \zeta}\right], \quad (3.9)$$

where $\Lambda_i^k$ is the Kalman gain and $\Phi_i^k$ is the covariance matrix. The output of the ePL is the weighted average of the local models of the rule consequents, that is:
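A direct transcription of the update (3.9) can be sketched as follows; note that, as written in (3.9), the coefficient update is driven by the term $K_p e_1$:

```python
import numpy as np

def rls_update(pi, Phi, chi, Kp_e1, zeta=0.99):
    """One RLS step with forgetting factor zeta, following (3.9)."""
    denom = chi @ Phi @ chi + zeta
    Lam = Phi @ chi / denom                                      # gain Lambda
    pi = pi + Lam * Kp_e1                                        # coefficients pi
    Phi = (Phi - np.outer(Phi @ chi, chi @ Phi) / denom) / zeta  # covariance
    return pi, Phi
```

Each rule keeps its own pair $(\pi_i, \Phi_i)$ and only the most compatible rule is updated at each step.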

$$\hat{w}^k = \frac{\sum_{i=1}^{c^k} \mu_i^k \hat{w}_i^k}{\sum_{i=1}^{c^k} \mu_i^k} \quad (3.10)$$

with membership values $\mu_i^k$ computed using (2.22). Algorithm 3.1 gives the detailed steps to compute the estimates $\hat{w}_i^k$ and the input signal $u_c = -\gamma_n^{-1}\hat{w}$. Note that if we add $u_c$ in the control loop, the new control input becomes $u_s$ (see Figure 3.1):

$$u_s = \underbrace{\alpha(\mathbf{x}) + \beta v}_{u} \;\underbrace{-\, \gamma^{-1}(\mathbf{x})\hat{w}}_{u_c} \quad (3.11)$$

and, because $u = u_s$ from (3.11), replacing $u$ in (2.7) considering (3.4) with $v = r^{(n)}(t) - K\mathbf{e}$, we have:

$$\dot{\mathbf{e}} = (A_c - B_c K)\mathbf{e} + B_c(w - \hat{w}). \quad (3.12)$$

Therefore, if $\lim_{t\to\infty}\hat{w}(t) = w$, then (3.12) approaches (3.3), which means that the effects of modeling mismatches are successfully circumvented (Khalil, 2002; Lima et al., 2010; Lughofer, 2011). The next section shows that this is, under certain assumptions, the case.

Algorithm 3.1 RGFL control algorithm.
1: Input: $\mathbf{z}^k = [\mathbf{x}^k \;\; e_1^k] \in [0,1]^p$, $\forall k \geq 1$
2: Output: $\hat{u}_c^k$
3: Initialize the RGFL parameters: $\xi, \vartheta, \tau, \lambda, \sigma, \zeta$ and $K_p$
4: Initialize the first cluster center: $\mathbf{v}_1 \leftarrow \mathbf{z}^1$
5: while $k \leq \infty$ do
6:   $\mathbf{z}^k \leftarrow$ read new data
7:   $\chi^k \leftarrow$ update the regressor vector using (3.8)
8:   for $i = 1$ to $c^k$ do
9:     Compute the compatibility index $\rho_i^k$ using (2.24)
10:    Compute the arousal index $a_i^{k+1}$ using (2.25)
11:  end for
12:  if $\min_{j=1,\cdots,c^k}\{a_j^k\} > \tau$ then
13:    $\mathbf{v}_{c^k+1} \leftarrow \mathbf{z}^k$
14:  else
15:    Update the most compatible cluster $\mathbf{v}_s^k$ using (2.26)
16:    Update the parameters $\pi_i^k$ using (3.9)
17:  end if
18:  for $i = 1$ to $c^k - 1$ and $j = i + 1$ to $c^k$ do
19:    Compute the compatibility index $\rho_{ij}^k$ using (2.27)
20:    if $\rho_{ij}^k \geq \lambda$ then
21:      Delete the cluster center $\mathbf{v}_j^k$
22:      Update the cluster number: $c^k = c^k - 1$
23:    end if
24:  end for
25:  for $i = 1$ to $c^k$ do
26:    Compute the firing degree $\mu_i^k$ using (2.22)
27:  end for
28:  Compute the estimate $\hat{w}^k$ using (3.10)
29:  Compute the control signal $\hat{u}_c^k$ using (3.6)
30:  Update the counter: $k = k + 1$
31:  Return $\hat{u}_c^k$
32: end while

3.2 Lyapunov Stability Analysis

In this section, we develop convex optimization procedures and conditions to verify the stability of the closed-loop system (3.12). The conditions can be used to guarantee that, under certain assumptions, the closed-loop control system tracks the reference. First we assume that all trajectories of $\mathbf{e}(t)$ remain inside a set $\mathcal{D}_e \subseteq \mathbb{R}^n$, which is the domain $\mathcal{D} \subseteq \mathbb{R}^n$ of (2.7) shifted by the reference vector $\mathbf{r}(t)$. Moreover, assume that $w = \hat{w} + \delta_w$, where $\delta_w$ is the estimation error of the ePL algorithm, whose values belong to the set
$$\mathcal{W} = \{\delta_w \in \mathbb{R};\; \delta_w^T \delta_w \leq \epsilon_0\}, \quad (3.13)$$
which means that $\delta_w(t)$ is bounded by a quadratic norm (Tarbouriech et al., 2011). Also, assume that the estimation error of ePL vanishes, that is, $\lim_{t\to\infty}\delta_w(t) = 0$ (Khalil, 2002,

Chap. 9). Thus, the error equation (3.12) can be rewritten as follows:

$$\dot{\mathbf{e}} = (A_c - B_c K)\mathbf{e} + B_c \delta_w. \quad (3.14)$$

We note that (3.14) is a linear equation, and that $\delta_w(t)$ can be viewed as a disturbance in the closed-loop system induced by an exogenous signal. The stability of the closed loop must be ensured for $\delta_w \in \mathcal{W}$, both when $\delta_w(t) = 0$ for $t \geq 0$ and when $\delta_w(0) \neq 0$. In addition, the closed-loop system must approach the origin asymptotically. According to the Lyapunov stability principle (Rantzer, 2001), the continuous-time system (3.14) is locally stable (with $\delta_w = 0$ $\forall t \geq 0$ and $\mathbf{e} \in \mathcal{D}_e$) if there exists a function

$V(\mathbf{e})$ and $\kappa_0, \kappa_1, \kappa_2 \in \mathcal{K}$ such that (Khalil, 2002, pp. 144):

1. $\kappa_0(\|\mathbf{e}(t)\|) \leq V(\mathbf{e}(t)) \leq \kappa_1(\|\mathbf{e}(t)\|)$ and
2. $\dot{V}(\mathbf{e}(t)) \leq -\kappa_2(\|\mathbf{e}(t)\|)$,

for all $\mathbf{e}(t) \in \mathcal{D}_e$. Functions $\kappa_i(\|\mathbf{e}(t)\|)$, $i \in \{0, 1, 2\}$, are class $\mathcal{K}$ functions, that is, $\kappa_i : [0, a) \to [0, \infty)$ is strictly increasing, $\kappa_i(0) = 0$, and $\|\mathbf{e}(t)\| \leq a$, $a > 0$, for $\mathbf{e}(t) \in \mathcal{D}_e$. If (3.14) is (locally) stable, then there exists a Lyapunov function $V(\mathbf{e}) = \mathbf{e}^T P \mathbf{e}$ with a matrix $0 < P = P^T \in \mathbb{R}^{n \times n}$. From this function, we can define a compact level set $\mathcal{R}(P, \epsilon_1) \subseteq \mathcal{D}_e$ as follows:
$$\mathcal{R}(P, \epsilon_1) = \{\mathbf{e} \in \mathcal{D}_e : \mathbf{e}^T P \mathbf{e} \leq \epsilon_1\}. \quad (3.15)$$

Because of the hypothesis that $V(\mathbf{e})$ is a Lyapunov function, we can conclude that $\mathcal{R}(P, \epsilon_1)$ is a contractive set with $\epsilon_1 > 0$. Additionally, note that if $\mathbf{e} \in \partial\mathcal{R}(P, \epsilon_1)$, then $\epsilon_1 - \mathbf{e}^T P \mathbf{e} \geq 0$. Similarly, from (3.13), we have $\epsilon_0 - \delta_w^T \delta_w \geq 0$. Observe that the negativity of $\dot{V}(\mathbf{e}(t))$ must be fulfilled whenever $\delta_w \in \mathcal{W}$ and $\mathbf{e} \in \partial\mathcal{R}(P, \epsilon_1)$, yielding, by the S-procedure¹, the following inequality:

$$\dot{V}(\mathbf{e}) + \tau_1(\epsilon_0 - \delta_w^T \delta_w) + \tau_2(\mathbf{e}^T P \mathbf{e} - \epsilon_1) < 0, \quad (3.16)$$

where $\dot{V}(\mathbf{e}) = \dot{\mathbf{e}}^T P \mathbf{e} + \mathbf{e}^T P \dot{\mathbf{e}}$ and $\tau_j > 0$, $j = 1, 2$. Thus, in such a case, all trajectories of (3.14) emanating from $\mathbf{e}(0) \in \mathcal{R}(P, \epsilon_1)$ remain in $\mathcal{R}(P, \epsilon_1)$ for all $\delta_w \in \mathcal{W}$, meaning that the set $\mathcal{R}(P, \epsilon_1)$ is positively invariant.

Assume that the subset $\mathcal{D}_e \subseteq \mathbb{R}^n$ can be described by a polyhedral set given by
$$\mathcal{D}_e = \{\mathbf{e} \in \mathbb{R}^n;\; |\mathcal{V}_{(\ell)}\mathbf{e}| \leq \nu_{(\ell)}\}, \quad (3.17)$$

where $\nu_{(\ell)} > 0$, $\mathcal{V}_{(\ell)} \in \mathbb{R}^{1 \times n}$, $\ell = 1, \ldots, \tilde{m}$, and $\tilde{m}$ is the number of linear constraints required to bound the region $\mathcal{D}_e \subset \mathbb{R}^n$.

¹See the explanation in Appendix A.

Theorem 3.1. Consider the error dynamics (3.14) under assumption (3.13). Assume there are positive real scalars $\epsilon_0$, $\epsilon_1$, $\tau_1$, $\tau_2$, a state feedback gain $K \in \mathbb{R}^n$, and a symmetric positive definite matrix $P \in \mathbb{R}^{n \times n}$. Let the domain $\mathcal{D}_e$ be given by (3.17), and assume that the linear matrix inequalities

$$\begin{bmatrix} (A_c - B_c K)^T P + P(A_c - B_c K) + \tau_2 P & P B_c \\ \star & -\tau_1 \end{bmatrix} < 0, \quad (3.18)$$

$$\epsilon_0 \tau_1 - \epsilon_1 \tau_2 < 0, \quad (3.19)$$
and
$$\begin{bmatrix} P & \mathcal{V}_{(\ell)}^T \\ \star & \nu_{(\ell)}^2/\epsilon_1 \end{bmatrix} \geq 0, \quad \ell = 1, \ldots, \tilde{m}, \quad (3.20)$$
are verified. Then,

1. for 훿푤 = 0, the origin of (3.14) is locally exponentially stable;

2. for $\delta_w \neq 0$, with $\delta_w \in \mathcal{W}$, the trajectories of (3.14) emanating from $\mathbf{e}(0) \in \mathcal{R}(P, \epsilon_1)$ do not leave the set $\mathcal{R}(P, \epsilon_1)$ for all $t \geq 0$;

3. the trajectories starting in $\mathbf{e}(0) \in \mathcal{R}(P, \epsilon_1)$ do not leave $\mathcal{D}_e$.

Proof. Assume that with $P > 0$ the inequality (3.18) is verified. Then $V(\mathbf{e}) = \mathbf{e}^T P \mathbf{e}$ satisfies $\kappa_0(\|\mathbf{e}(t)\|) \leq V(\mathbf{e}(t)) \leq \kappa_1(\|\mathbf{e}(t)\|)$ with $\kappa_0 = \lambda_{\min}(P)\|\mathbf{e}\|^2$ and $\kappa_1 = \lambda_{\max}(P)\|\mathbf{e}\|^2$ for all $\mathbf{e} \in \mathcal{R}(P, \epsilon_1)$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and maximum eigenvalues of $P$, respectively. Left- and right-multiply inequality (3.18) by $[\mathbf{e}^T \;\; \delta_w^T]$ and its transpose to get
$$\dot{\mathbf{e}}^T P \mathbf{e} + \mathbf{e}^T P \dot{\mathbf{e}} - \tau_1 \delta_w^T \delta_w + \tau_2 \mathbf{e}^T P \mathbf{e} < 0. \quad (3.21)$$
This fact, together with the verification of (3.19), means that (3.16) is verified. Moreover, in case of $\delta_w(t) = 0$, inequality (3.21) ensures $\dot{V}(\mathbf{e}) = \dot{\mathbf{e}}^T P \mathbf{e} + \mathbf{e}^T P \dot{\mathbf{e}} \leq -\tau_2 \mathbf{e}^T P \mathbf{e} = -\tau_2 V(\mathbf{e})$, meaning that $V(\mathbf{e}(t)) \leq e^{-\tau_2 t} V(\mathbf{e}(0))$. Therefore, the exponential stability of (3.14) is verified.

It remains to prove that the trajectories emanating from $\mathbf{e}(0) \in \mathcal{R}(P, \epsilon_1)$ do not leave the region defined by (3.17). From the feasibility of (3.20), we apply the Schur complement to get
$$\mathcal{V}_{(\ell)}^T \mathcal{V}_{(\ell)}\, \epsilon_1 \nu_{(\ell)}^{-2} \leq P \;\Leftrightarrow\; \mathcal{V}_{(\ell)}^T \mathcal{V}_{(\ell)}\, \nu_{(\ell)}^{-2} \leq \epsilon_1^{-1} P.
$$

By pre- and post-multiplying by $\mathbf{e}(t)^T$ and its transpose, we get $\mathbf{e}(t)^T \mathcal{V}_{(\ell)}^T \mathcal{V}_{(\ell)} \mathbf{e}(t) \leq \nu_{(\ell)}^2 \epsilon_1^{-1} \mathbf{e}(t)^T P \mathbf{e}(t)$. Since $\epsilon_1^{-1}\mathbf{e}(t)^T P \mathbf{e}(t) \leq 1$, we can conclude that $|\mathcal{V}_{(\ell)}\mathbf{e}(t)| \leq \nu_{(\ell)}$ is ensured and, thus, the trajectories emanating from $\mathbf{e}(0) \in \mathcal{R}(P, \epsilon_1)$ do not leave the subset $\mathcal{D}_e$, completing the proof.

It is worth noting that we can choose $\epsilon_1 = 1$ without loss of generality in the conditions of Theorem 3.1. Therefore, an issue of interest is to verify the maximal allowed ePL error, $\sqrt{\epsilon_0}$, for given values of $\nu$ and $\tau_2$. By replacing $\mu = \epsilon_0 \tau_1$ in (3.19), we can solve the following optimization procedure:

$$\min_{\tau_1, P} \; \tau_1 - \mu \quad \text{subject to } P > 0, \; (3.18)\text{–}(3.20). \quad (3.22)$$

The solution of this procedure leads to the maximization of $\epsilon_0 = \mu/\tau_1$ and, thus, to the maximization of the admissible amplitude of the ePL estimation error.
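For the scalar case ($n = 1$), the feasibility of (3.18) and (3.19) for a candidate solution can be checked numerically; the values below are hypothetical, chosen only to illustrate the test:

```python
import numpy as np

def check_lmis(Acl, Bc, P, tau1, tau2, eps0, eps1):
    """Check the scalar LMIs (3.18) and (3.19) for given values."""
    # (3.18): [[2*Acl*P + tau2*P, P*Bc], [P*Bc, -tau1]] < 0 (negative definite)
    M = np.array([[2.0 * Acl * P + tau2 * P, P * Bc],
                  [P * Bc, -tau1]])
    lmi_318 = bool(np.all(np.linalg.eigvalsh(M) < 0))
    lmi_319 = eps0 * tau1 - eps1 * tau2 < 0  # (3.19)
    return lmi_318 and lmi_319

# Hypothetical scalar closed loop Acl = Ac - Bc*K = -2:
feasible = check_lmis(Acl=-2.0, Bc=1.0, P=1.0, tau1=2.0,
                      tau2=1.0, eps0=1.0, eps1=3.0)
```

In the experiments of Section 3.3 such conditions are solved with a proper semidefinite programming solver instead of a point check.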

3.3 Performance Evaluation

This section evaluates the behavior and performance of robust granular feedback linearization (RGFL) control using simulation and actual experiments concerning the level control of a surge tank, as well as simulation experiments for the angular position control of the knee joint of a lower limb system and the tracking control of an inverted pendulum.

3.3.1 Surge Tank Simulation Experiments

This section addresses the level control problem of a surge tank. The surge tank model, depicted in Figure 3.2, is a benchmark adopted by many authors (Banerjee et al., 2011; Passino and Yurkovich, 1997; Silva et al., 2018; Oliveira et al., 2019) to evaluate feedback linearization control approaches. The following differential equation models the dynamics of the tank of Figure 3.2:


Figure 3.2 – Surge tank.

$$\dot{h} = \frac{-c\sqrt{2gh}}{A(h)} + \frac{1}{A(h)}u \quad (3.23)$$

where $h$ is the tank level (m), $g$ is the gravity constant (m/s²), $c$ is the cross-sectional area of the output pipe (m²), $u$ is the control input (m³/s), and $A(h)$ is the cross-sectional area of the tank (m²), given by $A(h) = ah + b$, where $a = 0.01$ and $b = 0.2$ (Banerjee et al., 2011). To address the level control problem as a reference tracking control problem, we can use the control law (2.2), in discrete form, to obtain:

$$u^k = A(h^k)\left[v^k + \frac{c\sqrt{2gh^k}}{A(h^k)}\right]. \quad (3.24)$$

The state feedback controller $v^k = -Ke^k$ with gain $K = 1.15$ is used to stabilize the closed-loop system. We also assume that the actuator saturates at ±50 m³/s, that is:

$$u_s^k = \begin{cases} 50, & \text{if } u^k > 50 \\ u^k, & \text{if } -50 \leq u^k \leq 50 \\ -50, & \text{if } u^k < -50 \end{cases} \quad (3.25)$$
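A minimal Euler-discretized simulation of this saturated feedback linearization loop can be sketched as below, with $T = 0.1$ s and $c = 1$ as implied by the numerical model in the text; the constant setpoint, the initial level, and the sign convention of the error feedback are hypothetical choices for illustration:

```python
import numpy as np

T, K = 0.1, 1.15

def A(h):
    return 0.01 * h + 0.2                      # tank cross-sectional area

def control(h, r):
    v = -K * (h - r)                           # state feedback (sign chosen so h -> r)
    u = A(h) * (v + np.sqrt(19.6 * h) / A(h))  # feedback linearization (3.24)
    return float(np.clip(u, -50.0, 50.0))      # actuator saturation (3.25)

def step(h, u, s0=0.0, s1=0.0):
    """Euler step of the tank; s0, s1 model the parametric mismatch."""
    return h + T * (-(1 + s0) * np.sqrt(19.6 * h) / A(h)
                    + (1 + s1) * u / A(h))

h_nom = h_mis = 1.0
for _ in range(600):
    h_nom = step(h_nom, control(h_nom, 5.0))
    h_mis = step(h_mis, control(h_mis, 5.0), s0=-0.05, s1=0.10)
# h_nom settles at the setpoint; h_mis shows the steady-state offset
# that motivates the RGFL compensation.
```

The mismatched run exhibits exactly the kind of tracking offset that the compensation signal $u_c$ is designed to remove.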

The simulation experiments use the discrete form of the tank model:
$$h^{k+1} = h^k + T\left[-(1 + \varsigma_0)\frac{\sqrt{19.6\,h^k}}{0.01h^k + 0.2} + (1 + \varsigma_1)\frac{1}{0.01h^k + 0.2}u^k\right] \quad (3.26)$$
where $T$ is the sampling time, and $\varsigma_0$ and $\varsigma_1$ are assumed to be uncertain. Note that $\varsigma_0$ and $\varsigma_1$ play the role of $\Delta\alpha(\mathbf{x})$ and $\Delta\gamma(\mathbf{x})$ in (3.2) considering (3.4). The sampling time is $T = 0.1$ s, a value short enough to approximate reasonably well the continuous dynamics of the tank (Passino and Yurkovich, 1997).

Performance of RGFL was evaluated against EFL for three reference trajectories $r(t)$ with square, sawtooth, and triangular waveforms, respectively. These reference trajectories were suggested in (Banerjee et al., 2011; Passino, 2005). Two simulation scenarios were considered. First, we assume perfect knowledge of the plant, that is, the values of $\varsigma_0$ and $\varsigma_1$ in (3.26) are null. We call it the exact nominal model. Second, we modify the tank model by setting $\varsigma_0 = -0.05$ and $\varsigma_1 = 0.10$ to cause a (parametric) mismatch between the tank and the model used in the feedback linearization law. We call it the unknown model for short. Simulation results are shown in Figures 3.3, 3.4, and 3.5 with the following convention: nominal exact tank model with EFL (blue line) and RGFL controller (green line), and unknown model with EFL (cyan line) and RGFL controller (red line). Reference trajectories $r(t)$ are plotted in black dashed line.

The parameters of the ePL algorithm were chosen according to the guidelines offered in (Lughofer, 2011, Chap. 4): $\xi = 0.125$, $\vartheta = 0.01$, $\tau = 0.00125$, $\lambda = 0.85$, and $\sigma = 0.02$. The RLS algorithm uses forgetting factor $\zeta = 0.99$ and gain $K_p = 7.5$. The LMIs (3.18) and (3.19) of Theorem 3.1 were solved using the CVXOPT solver (Andersen et al., 2012, 2018) for $A_c = [-1.15]$, $B_c = [-1]$, $\tau_2 = 2.25$, and $\delta = 1$, and


Figure 3.3 – RGFL controller tracking a square waveform reference trajectory with nomi- nal exact tank model with EFL (blue line) and RGFL controller (green line), and unknown model with EFL (cyan line) and RGFL controller (red line). Reference trajectories 푟(푡) are plotted in black dashed line.

the value of $\varpi$ has been maximized. The result is $P = [0.0399]$, $\varpi = 1.40$, and $\tau_1 = 1.60$, which shows that the closed-loop control system is stable for $w^T w \leq 1$ (thus the ePL error must be less than or equal to 1) and the trajectory of the error vector $\mathbf{e}$ satisfies $\mathbf{e}^T P \mathbf{e} \leq \varpi$, i.e., the maximum allowed output error in this case is $\sqrt{\varpi P^{-1}} \approx 35.08$ m. Such an error value is larger than the ones usually verified in level control. Figures 3.3, 3.4, and 3.5 give a qualitative view of the RGFL controller behavior. Clearly, RGFL outperforms EFL in all scenarios. In particular, looking at Figure 3.5, it is clear that the tracking error is much smaller than the maximum allowed value computed before (35.08 m). To further evaluate the RGFL approach, we quantify the closed-loop performance using classical control indexes, the integral of absolute error (IAE) and the integral of time-weighted absolute error (ITAE), and compare its performance with the indirect adaptive fuzzy certainty equivalence controller based upon the bacterial foraging fuzzy technique (BFOF) (Banerjee et al., 2011). The bacterial population size is 40. IAE and ITAE are computed as follows:

$$IAE = \sum_k |e(k)|\,T \quad (3.27)$$

$$ITAE = \sum_k |e(k)|\,k\,T^2 \quad (3.28)$$
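The discrete indexes (3.27) and (3.28) can be computed as below (indexing the error sequence from $k = 0$ here):

```python
def iae(errors, T):
    """Integral of absolute error, discrete form (3.27)."""
    return sum(abs(e) * T for e in errors)

def itae(errors, T):
    """Integral of time-weighted absolute error, discrete form (3.28)."""
    return sum(abs(e) * k * T**2 for k, e in enumerate(errors))
```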


Figure 3.4 – RGFL controller tracking a sawtooth reference trajectory with EFL (blue line) and RGFL controller (green line), and unknown model with EFL (cyan line) and RGFL controller (red line). Reference trajectories 푟(푡) are plotted in black dashed line.

where $e(k) = r(k) - h(k)$. Table 3.1 summarizes the results. Note that BFOF incurs a high computational cost, since its number of rules is much larger than that of the proposed approach. The RGFL is noticeably superior to EFL and BFOF, especially for the unknown model case.

3.3.2 Actual Surge Tank Experiments

The RGFL scheme was evaluated in the real world, particularly in a real-time tracking control scenario using the surge tank system depicted in Figure 3.6. This system has four tanks with a nominal capacity of 200 l each, and two water reservoirs with a nominal capacity of 400 l each. To measure the level, each tank is equipped with a pressure sensor model 26PCBFA6D. The control system responds to the control signal through two three-phase 1 HP hydraulic pumps commanded by two WEG CFW09 inverters. The controller is implemented on a low-cost computer, and the data acquisition is done via a Simatic S7-300 programmable logic controller (PLC). The computer and the PLC communicate over Ethernet, and the controller is programmed in Python. Here we are interested in controlling the level of tank T3 because it is highly


Figure 3.5 – RGFL controller tracking a triangular reference trajectory with EFL (blue line) and RGFL controller (green line), and unknown model with EFL (cyan line) and RGFL controller (red line). Reference trajectories 푟(푡) are plotted in black dashed line.

Table 3.1 – Performance indexes of the controller methods.

r(t)        System    Method  IAE      ITAE     # Rules
Square      Nominal   BFOF    34.554   813.88   576
                      EFL     68.186   3589.4   -
                      RGFL    10.861   518.63   5
            Unknown   EFL     511.34   25881    -
                      RGFL    16.878   779.44   5
Saw-tooth   Nominal   BFOF    52.930   1596.9   576
                      EFL     53.075   2570.2   -
                      RGFL    8.8980   317.49   4
            Unknown   EFL     531.59   26759    -
                      RGFL    14.551   618.25   4
Triangular  Nominal   BFOF    54.064   1789.8   576
                      EFL     54.584   2624.3   -
                      RGFL    4.3849   112.44   4
            Unknown   EFL     479.77   24144    -
                      RGFL    9.7100   330.95   4

nonlinear and complicated to manage robustly. Tank T3 has a hard nonlinearity, as Figure 3.7 shows (Franco et al., 2016).

Figure 3.6 – Actual surge tank system.

[Figure 3.7 schematic: tank T3 with inflow $q_{in}$ at the top, outflow $q_{out}$ at the bottom, level $h$, total height 70 cm, and the nonlinear section spanning 62 cm.]

Figure 3.7 – Nonlinearity of the surge tank - T3.

The dynamics of tank T3, determined experimentally, is:
$$\dot{h} = \frac{17.1624\,u}{A(h)} - \frac{11.5228\,h + 508.006}{A(h)} \quad (3.29)$$
where $h \in [8, 70]$ is the level (cm), $u \in [0, 100]$ is the control signal sent to the pump (%), and $A(h)$ is the cross-sectional area of the tank (cm²):

$$A(h) = 1556.82 - 1349.1948\cos\!\big(2.5\pi(0.01(h-8) - 0.4)\big)\,e^{\frac{-(0.01(h-8)-0.4)^2}{0.605}}.$$

The experiments use the reference signal:
$$r(t) = \begin{cases} h_0(i), & \text{if } 0 \leq t \leq \tfrac{3}{4}\,i\,t_{step} \\[4pt] \dfrac{h_0(i+1) - h_0(i)}{5}\,t_{trans} + h_0(i), & \text{if } \tfrac{3}{4}\,i\,t_{step} < t \leq i\,t_{step} \end{cases}$$

where $i = 1, \ldots, 13$ indexes a sequence of 13 setpoint values, $t_{step} = 250$ s is the duration of each step, $t_{trans} \in [0, 62]$ s is the transient time interval between setpoint values, and $h_0 = [15\; 23\; 31\; 39\; 47\; 39\; 31\; 23\; 15\; 44\; 15\; 44\; 15]$ encodes the 13 setpoint values. Two experiments were run using EFL and RGFL, and in both the nominal system (3.29) was used to compute the control signals (2.2) and (3.6). The first experiment is performed under the same operating conditions as those assumed during the controller design. In the second experiment, the tank output flow valve increases its flow by 30%, but the models used to design the controllers remain the same as in the first experiment. Figures 3.8 and 3.9 show the results.


Figure 3.8 – Surge tank level control: EFL (blue line) and RGFL (green line).

The state feedback controller is $v^k = -K\mathbf{e}^k$ with gain $K = 0.05$, and the sampling time is $T = 1$ s. The parameters of the ePL algorithm were chosen by trial and error, following the guidelines found in (Lughofer, 2011, Chap. 3). All parameter values are in the range [0, 1]. Experimental evidence suggests choosing the Gaussian spread ($\delta_r$) values about the same as the standard deviation of the measurements, which in our case is ±2%. Thus, we set: $\xi = 0.05$, $\vartheta = 0.0001$, $\tau = 0.025$, $\lambda = 0.85$, and $\sigma = 0.02$. The local RLS algorithm has forgetting factor $\zeta = 0.98$ and gain $K_p = 0.04$. A first-order digital filter given by $h_f^k = 0.8\,h_f^{k-1} + 0.2\,h^k$ is used to filter high-frequency noise. Using Theorem 3.1, the LMIs (3.18) and (3.19) were solved with the CVXOPT solver (Andersen et al., 2012, 2018) with $A_c = [-0.05]$, $B_c = [-1]$, $\tau_2 = 0.075$, and $\delta = 1$, maximizing the value of $\varpi$; that is, we search for the maximum output tracking error for which Theorem 3.1 ensures the robust stability of the controlled system. The optimal solution found is $P = [0.0799]$, $\varpi = 85.35$, and $\tau_1 = 6.40$. Thus, we have a maximum error of $\approx 32.66$ cm, which is much larger than what is achieved by the closed-loop system.
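The first-order filter $h_f^k = 0.8\,h_f^{k-1} + 0.2\,h^k$ can be sketched as below, initializing the filter state with the first measurement (an assumed choice, since the text does not state the initialization):

```python
def low_pass(samples, alpha=0.8):
    """First-order digital filter h_f[k] = alpha*h_f[k-1] + (1-alpha)*h[k]."""
    h_f = samples[0]  # assumed initialization with the first measurement
    out = []
    for h in samples:
        h_f = alpha * h_f + (1 - alpha) * h
        out.append(h_f)
    return out
```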


Figure 3.9 – Surge tank level control: second experiment with EFL (blue line) and RGFL (green line).

Figure 3.8 gives a qualitative view of the result, and clearly shows that both the RGFL and EFL controllers perform as expected. The output tracking error for this experiment reached a maximum absolute value of 2.7 cm, which is less than 10% of the maximum allowed error. Also note that the maximum ePL error is required to satisfy $w^T w \leq 1$. When the outflow of the surge tank is increased (Figure 3.9), the RGFL performs much better than EFL. The performance of both controllers is quantified using normalized IAE and ITAE indexes:
$$IAE_n = \frac{IAE_x}{IAE_{RGFL}}, \qquad ITAE_n = \frac{ITAE_x}{ITAE_{RGFL}},$$
where $IAE_n$ and $ITAE_n$ are the normalized indexes, and $IAE_x$ and $ITAE_x$ are the indexes of each controller, EFL or RGFL. Table 3.2 summarizes the results and asserts the superior performance of RGFL.

Table 3.2 – Performance indexes for the actual surge tank experiments.

Scenario   Method   IAE_n   ITAE_n   # Rules
1          EFL      2.55    3.24     -
           RGFL     1       1        3
2          EFL      8.46    9.49     -
           RGFL     1       1        3

3.3.3 Knee Joint Simulation Experiments

The focus of this section is on the control of the angular position of a knee joint using functional electrical stimulation (FES). Control of the knee angle is a challenge for control theoreticians, practitioners, and rehabilitation engineers, and is a major benchmark adopted by many authors (Davoodi and Andrews, 1998; Kirsch et al., 2017; Li et al., 2017; Previdi and Carpanzano, 2003). The dynamics of the knee joint can be modeled as an open kinematic chain composed of two rigid segments (Ferrarin and Pedotti, 2000): the thigh and the shank/foot complex, as shown in Figure 3.10.


Figure 3.10 – The lower limb uses functional electrical stimulation of the quadriceps mus- cles to produce knee extension. The system uses electrodes in the surface of the thigh.

During FES stimulation the leg dynamics can be modeled as:
$$J\ddot{\theta} + G + T_s = T_a \quad (3.30)$$
where $J$ is the moment of inertia of the shank/foot complex; $\theta, \dot{\theta}, \ddot{\theta} \in \mathbb{R}$ are the angular position, velocity, and acceleration of the shank/foot complex; $G = mgl\sin(\theta + \theta_{eq})$ is the gravitational torque; $m$ is the mass of the shank/foot complex; $g$ is the acceleration of gravity; $l$ is the distance between the knee joint and the center of mass of the shank/foot complex; and $\theta_{eq}$ is the equilibrium angle between the shank and the vertical axis. $T_s$ is the degenerative torque resulting from the stiffness and damping of the knee joint, and $T_a$ is the input of the system, that is, the torque produced by the quadriceps muscles due to the FES-induced muscle contraction. As in previous studies (de Proença et al., 2012; Franken et al., 1993; Giat et al., 1996; Mansour and Audu, 1986), the stiffness and damping component is an exponential function of $(\theta + \theta_{eq})$:

$$T_s = \lambda e^{-E(\theta + \theta_{eq} + \frac{\pi}{2})}(\theta + \theta_{eq} - \varphi) + \omega\dot{\theta}, \quad (3.31)$$

where $\lambda$ and $E$ are the coefficients of the rigidity and damping components, $\varphi$ is the elastic resting knee angle, and $\omega$ is the coefficient of viscous friction. Note that all parameters needed to compute the degenerative torque have individualized values, unique for each patient. Moreover, as a patient recovers from an injury, these parameters are expected to change with the progress of the treatment. The FES input can be characterized by a first-order transfer function (Ferrarin and Pedotti, 2000):
$$\frac{T_a(s)}{P(s)} = \frac{K_s}{1 + \eta s} \quad (3.32)$$
with $P(s)$ the electrical pulse signal caused by the voltage input (V), $K_s$ the static gain, and $\eta$ the time constant. As for the degenerative torque parameters, the values of the static gain and the time constant are unique for each patient, as are the pulse frequency and stimulation pattern (de Proença et al., 2012; Peckham and Knutson, 2005). Let $x_1 = \theta + \theta_{eq}$ be the angular position, $x_2 = \dot{\theta}$ the angular velocity, and $x_3 = T_a$ the torque necessary to move the leg. Plugging (3.31) and the inverse Laplace transform of (3.32) into (3.30), we get the following state space model of the lower limb:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} x_2 \\ \dfrac{x_3 - G(x_1) - \lambda e^{-E(x_1 + \frac{\pi}{2})}(x_1 - \varphi) - \omega x_2}{J} \\ -\eta^{-1} x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \eta^{-1} K_s \end{bmatrix} P, \quad (3.33)$$
$$y = x_1,$$
where $y$ is the output and $P$ is the electrical pulse directed to the quadriceps muscles to activate the knee. Here $P$ is the control input. The lower limb (3.33) is a nonlinear system of the form shown in (2.1). Using the Euler approximation, the discrete lower limb model (3.33) becomes:

$$\begin{bmatrix} x_1^{k+1} \\ x_2^{k+1} \\ x_3^{k+1} \end{bmatrix} = \begin{bmatrix} x_1^k + T x_2^k \\ x_2^k + \frac{T}{J}\left[x_3^k - G(x_1^k) - T_s(x_1^k, x_2^k)\right] \\ x_3^k + T\eta^{-1}\left[K_s P^k - x_3^k\right] \end{bmatrix} \quad (3.34)$$
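One Euler step of (3.34) can be sketched as below. Parameter values follow the text; $\varphi$ and $\theta_{eq}$ are set to 0 here as an assumption (their numerical values are not fixed in this section), and the constant pulse $P$ used in any simulation with this sketch would be a hypothetical test input:

```python
import math

# Parameters from the text; phi set to 0 as an assumption.
J, m, g, l = 0.362, 4.37, 9.81, 0.238
lam, E, omega, phi = 41.208, 2.024, 2.918, 0.0
eta, Ks, T = 0.951, 42500.0, 0.001

def G(x1):
    """Gravitational torque."""
    return m * g * l * math.sin(x1)

def Ts(x1, x2):
    """Degenerative torque (3.31) in state coordinates."""
    return lam * math.exp(-E * (x1 + math.pi / 2)) * (x1 - phi) + omega * x2

def step(x, P):
    """One Euler step of the discrete lower limb model (3.34)."""
    x1, x2, x3 = x
    return (x1 + T * x2,
            x2 + (T / J) * (x3 - G(x1) - Ts(x1, x2)),
            x3 + (T / eta) * (Ks * P - x3))
```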

From (3.34) and recalling that $y^k = x_1^k$, the Lie derivatives can be used to verify that the system has relative degree $r = 3$. From the control law (2.2) and the diffeomorphism (2.3) we can certify that the system is feedback linearizable², which results in:

$$M(\mathbf{x}^k) = \begin{bmatrix} x_1^k \\ x_2^k \\ J^{-1}\left[x_3^k - G(x_1^k) - T_s(x_1^k, x_2^k)\right] \end{bmatrix}, \quad (3.35)$$

²The specific structural properties that allow feedback linearization are detailed in (Khalil, 2002, Chap. 13).

$$\beta(\mathbf{x}^k) = J\eta K_s^{-1}, \quad (3.36)$$

$$\alpha(\mathbf{x}^k) = \frac{\eta}{K_s}\left\{ x_2^k\left[\lambda e^{-E(\frac{\pi}{2} + x_1^k)} + G(x_1^k) + T_s(x_1^k, x_2^k)\right] + \frac{x_3^k}{\eta} + \frac{\omega}{J}\left[x_3^k - G(x_1^k) - T_s(x_1^k, x_2^k)\right]\right\}, \quad (3.37)$$

The parameters of the model are: $J = 0.362$ kg·m², $m = 4.37$ kg, $l = 0.238$ m, $B = 0.27$ N·m·s/rad, $\lambda = 41.208$ N·m/rad, $E = 2.024$ rad⁻¹, $\omega = 2.918$ rad, $\eta = 0.951$ s, and $K_s = 42500$ N·m/s. These parameter values were obtained from the anthropometric characteristics measured from a patient by Ferrarin and Pedotti (2000). Simulations were conducted to evaluate RGFL in regulating the knee angle at different reference positions, as suggested in (Kirsch et al., 2017). The references are $r(t) = 60°$ for $0 \leq t < 5$, $r(t) = 20°$ for $5 \leq t < 10$, and $r(t) = 40°$ for $10 \leq t \leq 15$. The sampling time is $T = 0.001$ s, which is short enough to approximate reasonably well the continuous dynamics of the lower limb model. We also assume that the model (3.33) has imprecise parameters and neglected dynamics. More precisely, we assume that the mass of the shank/foot complex is in error by 7.5%, and the stiffness and damping terms of the degenerative torque by 15% and −5%, respectively, with respect to the nominal values, which means that $m_u = 1.075m$ and $T_{su} = 1.15\lambda e^{-E(x_1^k + \frac{\pi}{2})}[x_1^k - \varphi] + 0.95\omega x_2^k$. Additionally, we consider that the state $x_3$ is disturbed by an unmodeled signal $d^k$ whose value is uniformly distributed in $[-0.02, 0.02]$. The disturbance $d^k$ induces a fatigue effect into the lower limb model. The feedback gain of the linear closed loop was set as $K = [251.975 \;\; 61.68 \;\; 7.9]$. The ePL parameters are: $\xi = 0.002$, $\vartheta = 0.001$, $\tau = 0.01$, $\lambda = 0.875$, and $\sigma = 0.02$, chosen as suggested by (Lughofer, 2011, Chap. 4). Local linear models of the fuzzy functional rules of ePL are updated using the RLS algorithm with forgetting factor $\zeta = 0.98$ and gain

$K_p = 0.13$. A fuzzy functional Takagi-Sugeno (TS) controller was also designed, following (Gaino et al., 2017).

The optimization problem (3.22) was solved with $\nu = 50$ for $\tau_2 \in [0.05, 2.10]$. The value of $\nu$ was chosen as the maximal tracking error assumed for the closed-loop system. The volume of the estimated region of attraction is proportional to $1/\sqrt{\det(P)}$. Figure 3.11 shows the values of $\epsilon_0$ and $\text{Vol}(P) = 1/\sqrt{\det(P)}$ as a function of $\tau_2$. The maximal allowed bound on the ePL error is achieved at $\tau_2 = 1.50$, indicating the optimal convergence rate that maximizes the acceptable ePL error ($\epsilon_0 = 955$). On the other hand, the volume of the estimated region of attraction reduces as $\tau_2$ increases. Once the stability bounds were found, four simulations were performed. The results are shown in Figure 3.12: the nominal exact lower limb model with EFL control is depicted by the continuous cyan line, the uncertain model with EFL control by the continuous

Figure 3.11 – Maximal squared bound of the ePL error $\epsilon_0$ (left axis), and $\text{Vol}(P) = 1/\sqrt{\det(P)}$ (right axis) for given values of $\tau_2$ (guaranteed exponential convergence of the tracking error $\mathbf{e}$).

red line, the uncertain model with TS fuzzy control is shown by the dashed green line, and the uncertain model with RGFL control is depicted by the dotted-dashed blue line. The dashed black line is the reference signal $r(t)$. As we can note, the maximum tracking error occurs at the initial step, within the bound $\nu = 50$. Figure 3.12 gives a qualitative overview of the behavior of the controllers. Under ideal conditions, that is, when the model used during design fits the lower limb of the patient perfectly, the EFL behaves as intended (continuous cyan line) and is successful. However, when the lower limb model used in design differs from the actual system, the behavior of the closed loop depends on the control approach. We see that the EFL (continuous red line) and the TS fuzzy controller (dashed green line) show an offset error during the simulation period. Contrary to the EFL and TS controllers, the RGFL controller (dotted-dashed blue line) behaves closely to the ideal case. The price paid by RGFL to surpass the EFL and TS controllers is a small increase in the torque needed to banish the offset error. The ePL algorithm developed six fuzzy rules, as the lower part of Figure 3.13 shows. It is interesting to notice the adaptive nature of RGFL by looking at the first 1.5 s, when the algorithm develops five fuzzy rules. The adaptation in this period occurs because the angular position changes quickly and the ePL is learning from the lower limb model states, as the upper part of Figure 3.13 shows. At 2.0 s, the algorithm has learned enough to eliminate one rule and keep canceling the current nonlinear effects. Next, when the reference changes from 60° to 20°, two new rules are created to counteract the corresponding nonlinear changes.
Overall, RGFL control needs at most six fuzzy rules to ensure performance over the whole range of knee joint angles.

Figure 3.12 – Behavior of closed-loop system with: nominal model using EFL (continuous cyan line), actual system using EFL (continuous red line), TS Fuzzy con- troller (dashed green line), RGFL (dotted-dashed blue line), and reference (dashed black line).

Table 3.3 – Min and max values used to normalize the ePL algorithm input.

         x_1    x_2      x_3   e_1
x_max    π/2    0.325    15    π/4
x_min    0      −0.325   0     −π/4
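The bounds of Table 3.3 feed the min-max normalization (3.38); a minimal sketch:

```python
import math

# Min-max normalization using the Table 3.3 bounds for (x1, x2, x3, e1).
X_MIN = [0.0, -0.325, 0.0, -math.pi / 4]
X_MAX = [math.pi / 2, 0.325, 15.0, math.pi / 4]

def normalize(x):
    """Map x = (x1, x2, x3, e1) componentwise into [0, 1]."""
    return [(xi - lo) / (hi - lo) for xi, lo, hi in zip(x, X_MIN, X_MAX)]
```

Note that a zero tracking error maps to 0.5, since $e_1$ can be either negative or positive.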

The illustrative video shows how the evolving participatory learning works, and how robust granular feedback linearization behaves. The video exhibits the knee position and the active fuzzy rules at each time step. We remark the following: 1) the input of the fuzzy rules z at each step 푘 is composed by the state vector and the tracking error, 4 and hence z ∈ R , but the video displays states 푧1 and 푧4 only. 2) The vector z ∈ [0,1] is normalized using:

푥푘 − 푥 푧푘 = 푖 min푖 푖 = 1, ··· , 4 (3.38) 푑푖 푥max푖 − 푥min푖 with the minimum and maximum values given in the Table 3.3. Because the tracking error can be either negative or positive, the normalized error is null for 푧4 = 0.5. 3) The zone of influence (spread) of the 푖-th fuzzy rule is constant and chosen by the designer when the value of 휎 is specified. Note that all rules may contribute to produce an output at each step, and that ePL only updates the most compatible rule, or equivalently, the most Chapter 3. Robust Granular Feedback Linearization 54

Figure 3.13 – Number of fuzzy rules built by the RGFL controller to control the angular position of the knee.

Table 3.4 – Performance indexes of the controllers.

         IAE      ITAE     RMSE     IVU
EFL_n    1.0485   4.4537   0.1698   0.1538
EFL      2.3959   16.080   0.1964   0.1531
TS       2.0326   12.799   0.1768   0.1871
RGFL     1.0350   4.8150   0.1604   0.1559

compatible cluster at each step. The video shows the updates of the cluster centers v_s^k as in (2.26). To quantify and compare the performance of the RGFL controller with the EFL and TS fuzzy controllers, we measure the integral of absolute error (IAE), the integral of time-weighted absolute error (ITAE), the root mean square error (RMSE), and the integral of the time-weighted variability of the control signal (IVU). Computation of the values of IAE, ITAE, RMSE, and IVU follows (Oliveira et al., 2017). Table 3.4 summarizes the results. The performance of RGFL is noticeably superior to the EFL and TS fuzzy controllers, and its behavior is the closest to the ideal performance of EFL under the nominal system, denoted EFL_n in Table 3.4.
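The normalization (3.38), with the bounds of Table 3.3, amounts to a componentwise min-max map; a minimal sketch:

```python
import math

# Bounds from Table 3.3 for z = [x1, x2, x3, e1] (min-max normalization, Eq. 3.38)
X_MIN = [0.0, -0.325, 0.0, -math.pi / 4]
X_MAX = [math.pi / 2, 0.325, 15.0, math.pi / 4]

def normalize(x):
    """Map each component of x into [0, 1] via z_i = (x_i - x_min_i) / (x_max_i - x_min_i)."""
    return [(xi - lo) / (hi - lo) for xi, lo, hi in zip(x, X_MIN, X_MAX)]

# A zero tracking error (e1 = 0) maps to the midpoint z4 = 0.5
z = normalize([math.pi / 4, 0.0, 7.5, 0.0])
```

As noted in the text, a null tracking error lands exactly at z₄ = 0.5 because the error bounds are symmetric.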

3.3.4 Evaluation of RGFL Control with Evolving Takagi-Sugeno Modeling

This section evaluates the behavior and performance of the RGFL controller when ePL modeling is replaced by the eTS modeling algorithm of Section 2.2.1. The eTS modeling is detailed in Algorithm 3.2. Several MODIFY and UPGRADE conditions were used to evaluate the eTS algorithm. These conditions were suggested in the literature as C1 (Angelov and Filev, 2004), C2 (Ramos and Dourado, 2003), and C3 (Angelov et al., 2004a), and are summarized in Table 3.5.

Algorithm 3.2 Computation of û_c^k using the eTS algorithm.
1: Input: z^k ∈ [0,1]^n, k = 1, ···
2: Output: û_c^k
3: Choose the parameters K_p and σ
4: Set the first input data as the first focal point
5: while k ≤ ∞ do
6:   z^k ← Read new data
7:   Compute potential P_z^k using (2.19)
8:   for i = 1 to c^k do
9:     Compute potential P_{x_i*}^k using (2.21)
10:    if MODIFY condition holds then
11:      x_s^{*k} ← x^k
12:    else if UPGRADE condition holds then
13:      x_{c^k+1}^{*k} ← x^k
14:    else
15:      Ignore input data
16:    end if
17:  end for
18:  for i = 1 to c^k do
19:    Compute activation degree μ_i^k using (2.22)
20:  end for
21:  Compute estimate ŵ^k using (3.10)
22:  Compute control signal û_c^k using (3.6)
23:  Update the counter: k = k + 1
24:  Return û_c^k
25: end while

Table 3.5 – MODIFY and UPGRADE conditions of the eTS algorithm.

Source   Scenario   UPGRADE condition                        MODIFY condition
C1       A          P_z > 0.5 P*                             P_z > 0.15 P*  and  d_min/r + P*/P_z < 1
C2       B          P_z > P_m                                P_z > P_m  and  d_min/r + P*/P_z < 1
         C          P_z > P*                                 P_z > P*  and  d_min/r + P*/P_z < 1
         D          P_z > P*                                 P_z > P*  and  d_min/r < 0.5
C3       E          P_z > P*  or  0.5P* < P_z < 0.675P*      (P_z > P*  or  0.5P* < P_z < 0.675P*)  and  d_min/r < 0.5
         F          P_z > P*  or  P_z < P_*                  (P_z > P*  or  P_z < P_*)  and  d_min/r < 0.5
         G          P_z > P*  or  P_z < P_*                  (P_z > P*  and  d_min/r < 0.5)  or  (P_z < P_*  and  d_min/r < 0.85)
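As an illustration of how one such rule is checked at each step, scenario A (source C1) could be coded as below. This is a sketch assuming the thresholds read as P_z > 0.5 P* (UPGRADE) and P_z > 0.15 P* with d_min/r + P*/P_z < 1 (MODIFY); the potentials P_z, P* and distance d_min come from (2.19)-(2.21) and are passed in here as plain numbers:

```python
def ets_decision_A(P_z, P_star, d_min, r):
    """Scenario A (C1): decide whether the new sample modifies the closest
    focal point, upgrades the rule base with a new focal point, or is ignored.
    MODIFY is tested first, matching the order in Algorithm 3.2."""
    if P_z > 0.15 * P_star and d_min / r + P_star / P_z < 1.0:
        return "MODIFY"
    if P_z > 0.5 * P_star:
        return "UPGRADE"
    return "IGNORE"
```

A sample with high potential that is also close to an existing focal point replaces it (MODIFY); one with high potential but far away creates a new rule (UPGRADE).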

RGFL with the eTS algorithm was evaluated using an inverted pendulum, as considered by many authors (Slotine and Li, 1991; Wang, 1994; Park et al., 2003) to assess adaptive feedback linearization control approaches. The inverted pendulum is shown in Figure 3.14.

Figure 3.14 – Inverted pendulum (x₁ = θ, x₂ = θ̇).

The dynamics of the inverted pendulum is:

$$ \dot{x}_1 = x_2, \qquad \dot{x}_2 = \frac{g \sin x_1 - \dfrac{m l x_2^2 \cos x_1 \sin x_1 - \cos x_1\, u}{m_c + m}}{l\left(4/3 - \dfrac{m \cos^2 x_1}{m_c + m}\right)} \qquad (3.39) $$

where x₁ = θ [rad] is the angle with respect to the vertical axis, x₂ = θ̇ [rad/s] is the angular speed, g = 9.8 m/s² is the gravity acceleration, m_c [kg] is the cart mass, m [kg] is the pole mass, l [m] is the pole half-length, and u [N] is the control input. The inverted pendulum is an instance of (2.1), with the diffeomorphism (2.3) M(x) = [x₁ x₂]ᵀ. The control law (2.2) in discrete form is:

$$ u_e^k = \frac{1}{g_n(\mathbf{x}^k)}\left[ v^k - f_n(\mathbf{x}^k) - g_n(\mathbf{x}^k)\, u_r^k \right], \qquad (3.40) $$

with

$$ f_n(\mathbf{x}^k) = \frac{g \sin x_1^k - \dfrac{m l \cos x_1^k \sin x_1^k (x_2^k)^2}{m_c + m}}{l\left(4/3 - \dfrac{m \cos^2 x_1^k}{m_c + m}\right)}, \qquad g_n(\mathbf{x}^k) = \frac{\cos x_1^k}{l\,(m_c + m)\left(4/3 - \dfrac{m \cos^2 x_1^k}{m_c + m}\right)}, $$

$$ u_r^k = \frac{r^{(n)} - f_n(\mathbf{r}^k)}{g_n(\mathbf{r}^k)}. $$

Simulations use the discrete form of the pendulum model:

$$ x_1^{k+1} = x_1^k + T x_2^k, \qquad x_2^{k+1} = x_2^k + T\left[ f(\mathbf{x}^k) + g(\mathbf{x}^k)\, u^k \right] \qquad (3.41) $$

where T is the sampling time. Functions f(x^k) and g(x^k) are considered imprecise because we assume that the parameters are m = m_n(1 + ς₀) and l = l_n(1 + ς₁), where m_n and l_n are the nominal values and ς_i are uncertain deviations. The nominal parameters were chosen as suggested by Park et al. (2003), namely m_c = 1 kg, m_n = 0.1 kg, and l_n = 0.5 m.
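With the nominal parameters above, one Euler step of (3.41) under a feedback linearizing input can be sketched as follows; the regulation law shown (K = [2 1] acting on the state, no reference term) is an illustration only, not the full RGFL controller:

```python
import math

G = 9.8                      # gravity [m/s^2]
MC, M, L = 1.0, 0.1, 0.5     # cart mass, pole mass, pole half-length (nominal values)

def f(x1, x2):
    """Drift term of the inverted pendulum, Eq. (3.39)."""
    num = G * math.sin(x1) - M * L * x2**2 * math.cos(x1) * math.sin(x1) / (MC + M)
    den = L * (4.0 / 3.0 - M * math.cos(x1)**2 / (MC + M))
    return num / den

def g(x1):
    """Input gain of the inverted pendulum, Eq. (3.39)."""
    den = L * (MC + M) * (4.0 / 3.0 - M * math.cos(x1)**2 / (MC + M))
    return math.cos(x1) / den

def step(x1, x2, u, T=0.01):
    """One Euler step of the discrete model (3.41)."""
    return x1 + T * x2, x2 + T * (f(x1, x2) + g(x1) * u)

# Feedback linearizing input u = (v - f)/g makes x2 evolve like the new input v
x1, x2 = 0.1, 0.0
v = -2.0 * x1 - 1.0 * x2
u = (v - f(x1, x2)) / g(x1)
x1n, x2n = step(x1, x2, u)
```

After cancellation, the second state follows x₂ᵏ⁺¹ = x₂ᵏ + T v, which is exactly what the exact feedback linearization of (3.40) achieves when the model is perfect.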

The reference signal to be tracked is r(t) = (π/30) sin(t). The state feedback gain is set as K = [2 1]. The Gaussian membership functions consider spreads σ = 0.3, 0.4, or 0.5. Two scenarios were considered in the simulation experiments. The first scenario considers the ideal, precise model with no uncertainty. The second scenario assumes that parameters m and l have deviations ς₀ = −0.2 and ς₁ = −0.15, respectively. The results for σ = 0.3 are shown in Figure 3.15: EFL controller (blue dashed line), RGFL controller using the eTS algorithm with conditions A and C (red continuous line), with D and E (green dash-dotted line), with B (grey dotted line), and with conditions F and G (black continuous line). The reference signal is shown by the black dashed line.


Figure 3.15 – RGFL with the eTS algorithm and σ = 0.3: EFL controller (blue dashed line), RGFL controller using the eTS algorithm with conditions A and C (red continuous line), with D and E (green dash-dotted line), with B (gray dotted line), and with conditions F and G (black continuous line). The reference signal is shown by the black dashed line.

Figure 3.15 shows that EFL behaves as expected, that is, if the model fits the process perfectly (no uncertainty), then lim_{t→∞} e(t) = 0. However, when the parameters deviate from their nominal values, the output shows an offset error, e(t) ≠ 0. RGFL with conditions D and E behaves similarly to EFL in the ideal case. Notice that conditions F and G are not sufficient to drive the tracking error to zero. The performance of the controllers was also quantified using (Dorf and Bishop, 2000) the integral of absolute error, the variability of the control signal, and the variability of the error:

$$ IAE = \int_{t_0}^{t_f} |e(t)|\, dt, \qquad IVU = \sqrt{\frac{1}{t_f - t_0}\int_{t_0}^{t_f} \left(u(t) - \bar{u}\right)^2 dt}, \qquad IVE = \sqrt{\frac{1}{t_f - t_0}\int_{t_0}^{t_f} \left(e(t) - \bar{e}\right)^2 dt}. $$

Here the indexes are normalized by the EFL values as follows:

$$ I_{norm} = \frac{I_x}{I_{EFL}}, $$

where I_norm is the normalized index value, I_x is the index to be normalized, and I_EFL is the index produced by the EFL controller. A controller performs better if its normalized index is such that I_norm < 1; otherwise, it performs worse. The results are summarized in Table 3.6 and Figure 3.16.
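On sampled data, these indexes reduce to simple sums; a sketch using plain Riemann-sum approximations of the integrals:

```python
import math

def iae(e, T):
    """Integral of absolute error: IAE ~ sum(|e_k|) * T."""
    return sum(abs(ek) for ek in e) * T

def variability(s):
    """Root-mean-square deviation around the mean; used for both IVU
    (control signal) and IVE (error signal)."""
    sbar = sum(s) / len(s)
    return math.sqrt(sum((sk - sbar) ** 2 for sk in s) / len(s))

def normalized(index_x, index_efl):
    """I_norm = I_x / I_EFL; values below 1 mean better than EFL."""
    return index_x / index_efl
```

With uniform sampling, the 1/(t_f − t₀) factor in IVU and IVE reduces to an average over the samples, which is what `variability` computes.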

Table 3.6 – Performance of the RGFL controller with eTS modeling.

Nominal System
Scenario   σ      Rules   IAE     IVE     IVU
A/C        0.03   24      0.948   0.761   1.407
A/C        0.04   23      0.973   0.768   1.435
A/C        0.05   23      0.998   0.78    1.484
B          0.03   122     2.077   1.498   4.243
B          0.04   102     2.612   1.903   4.581
B          0.05   92      3.158   2.34    4.805
D/E        0.03   1       2.58    1.768   1.87
D/E        0.04   1       3.348   2.278   2.342
D/E        0.05   1       3.957   2.703   2.803

Uncertain System
Scenario   σ      Rules   IAE     IVE     IVU
A/C        0.03   28      0.651   0.697   1.583
A/C        0.04   27      0.655   0.701   1.585
A/C        0.05   26      0.665   0.705   1.586
B          0.03   119     1.526   1.422   4.975
B          0.04   111     2.048   1.928   5.550
B          0.05   95      2.451   2.347   5.798
D/E        0.03   1       1.681   1.514   2.047
D/E        0.04   1       2.330   2.076   2.607
D/E        0.05   1       2.808   2.494   3.179

From Table 3.6, we verify that the RGFL controller provides superior performance under conditions A and C. However, a significant number of rules was built: 24 for the nominal system and 28 for the uncertain system. At the same time, Figure 3.16 shows that the RGFL controller acts in the closed loop to mitigate the disturbances caused by uncertain parameters. Further, the IVU index with the RGFL control scheme is always higher than one.

Figure 3.16 – Performance indexes of the RGFL controller with eTS modeling, σ = 0.3.

3.4 Summary

Feedback linearization has continuously faced the problem of how to achieve robustness and adaptation in closed-loop feedback linearizable control systems. Exact feedback linearization control fails whenever the actual system dynamics differ from the model dynamics used to design the controller. A novel robust granular feedback linearization approach was developed in this chapter to improve the robustness and adaptability of feedback linearization control loops. The approach uses the state and the tracking error as inputs of the evolving participatory learning algorithm. These inputs are employed to estimate disturbances and counteract their effects in the control loop. Lyapunov stability theory guarantees closed-loop stability under mild conditions, such as bounded tracking errors and disturbances. Level control of a surge tank and angular position control of a knee joint were considered to evaluate the behavior and performance of RGFL under strong nonlinearities and parametric deviations. RGFL was also evaluated when ePL is replaced by evolving Takagi-Sugeno modeling. Simulations and actual experiments with surge tank control suggest that the robust granular feedback linearization approach RGFL outperforms the remaining approaches. RGFL control is capable of adapting its behavior and guaranteeing performance when disturbances are introduced in the control loop.

4 Robust Evolving Granular Feedback Linearization

The granular feedback linearization controller of Chapter 2 was designed for a class of nonlinear systems of form (2.1). In practice, however, the functions f(x) and g(x) of (2.1) are rarely precisely known. This chapter addresses nonlinear systems of the form (2.1), but assumes that the functions f(x) and g(x) are unknown. The controller design requires the output of the process to track a reference r(t).

4.1 Introduction

Strategies have been conceived to add robustness and adaptability to feedback linearizing control of unknown processes and systems. For instance, Wang (1996) develops an indirect adaptive fuzzy rule-based approach to estimate and compute online the control input to track a reference signal. Alternatively, Park et al. (2003) suggest an indirect adaptive fuzzy control algorithm that is robust against reconstruction errors for single-input single-output nonlinear dynamical systems with unknown nonlinearities. In (Passino, 2002), the biomimicry of the social bacterial foraging approach is used to develop an indirect adaptive controller. Recently, a scheme built upon the notion of model reference adaptive control and the evolving fuzzy participatory learning algorithm, ePL (Lima et al., 2010), was developed by Oliveira et al. (2017). A difficulty in achieving tracking control is that the output y depends only indirectly on the input u, through the state x. This chapter develops the robust evolving granular feedback linearization (ReGFL) using the notion of indirect adaptive control, similarly to (Park et al., 2003; Wang, 1996). Differently from the previous approaches, ReGFL employs the ePL algorithm to estimate online the values of the functions f(x) and g(x) needed by the control law. ReGFL relies on the certainty equivalence principle (de Water and Willems, 1981). ReGFL control is based on the direct relationship between the system output and input, as dictated by the idea of feedback linearization, explained in Section 4.2. Section 4.3 develops the ReGFL controller, and Section 4.4 compares the performance of ReGFL and EFL in surge tank level control.

4.2 Input-Output Linearization Idea

Input-output linearization is an approach to nonlinear control design that has attracted considerable interest in the nonlinear control community during the last decade (Wang, 1994). Input-output feedback linearization can be summarized as follows (Isidori, 1995; Khalil, 2002): differentiate the output y repeatedly until the input u appears; next, design u to cancel the nonlinearities; finally, design a controller based on linear control methods. This scheme aims to cancel the nonlinearities similarly to the full state feedback linearization of Section 2.1, but using the system input and output instead of the state in the control law (Sastry, 1999). For instance, consider a magnetic suspension system (Khalil, 2002):

$$ \dot{x}_1 = x_2 $$
$$ \dot{x}_2 = g - \frac{k}{m} x_2 - \frac{L_0 a x_3^2}{2m(a + x_1)^2} $$
$$ \dot{x}_3 = \frac{1}{L(x_1)}\left( -R x_3 + \frac{L_0 a x_2 x_3}{(a + x_1)^2} + u \right) $$

$$ y = x_1, \qquad (4.1) $$

where x₁ is the vertical position of the ball, x₂ is the linear velocity, x₃ is the electrical current of the coil, u is the voltage applied to the electrical circuit of the coil, g is the gravitational acceleration, m is the ball mass, L₀ is the inductance, k and a are positive constants, R is the series resistance of the circuit, and L(x₁) is the magnetic flux linkage. To produce a relationship between y and u, we differentiate the output:

$$ \dot{y} = \dot{x}_1 = x_2. \qquad (4.2) $$

Because ẏ is not directly related to u, (4.2) is differentiated again:

$$ y^{(2)} = \dot{x}_2 = g - \frac{k}{m} x_2 - \frac{L_0 a x_3^2}{2m(a + x_1)^2}. \qquad (4.3) $$

As in (4.2), the input does not yet appear in (4.3). Differentiating the output once more gives:

$$ y^{(3)} = F - \frac{L_0 a x_3}{m L(x_1)(a + x_1)^2}\, u, \qquad (4.4) $$

where

$$ F = -\frac{k}{m}\dot{x}_2 + \frac{L_0 a x_3^2}{m(a + x_1)^2}\left[ \frac{1}{L(x_1)}\left(R - \frac{L_0 a x_2}{(a + x_1)^2}\right) + \frac{x_2}{a + x_1} \right]. $$

Now the input signal appears directly in the output derivative, which means that the relative degree is 3, and we choose the control input:

$$ u = -\frac{m L(x_1)(a + x_1)^2}{L_0 a x_3}\left[ v - F \right] \qquad (4.5) $$

to obtain

$$ y^{(3)} = v, \qquad (4.6) $$

where v is the new input of the linearized system. Proceeding similarly to Section 2.1, the closed-loop control system can asymptotically drive the output to the origin by an appropriate design of v.
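The chain of differentiations (4.2)-(4.4) can be checked numerically: propagating ÿ along the flow of (4.1) for a small step should reproduce the analytic y⁽³⁾. The sketch below uses arbitrary illustrative parameter values and assumes the flux-linkage model L(x₁) = L₁ + L₀a/(a + x₁), a common choice for this system (both are assumptions, not thesis data):

```python
# Illustrative parameters (assumed values, not from the thesis)
g0, m, k, a, L0, L1, R = 9.81, 0.1, 0.01, 0.05, 0.01, 0.1, 1.0

def L(x1):
    """Assumed flux-linkage model L(x1) = L1 + L0*a/(a + x1)."""
    return L1 + L0 * a / (a + x1)

def dyn(x, u):
    """Magnetic suspension dynamics, Eq. (4.1)."""
    x1, x2, x3 = x
    dx1 = x2
    dx2 = g0 - (k / m) * x2 - L0 * a * x3**2 / (2 * m * (a + x1)**2)
    dx3 = (-R * x3 + L0 * a * x2 * x3 / (a + x1)**2 + u) / L(x1)
    return [dx1, dx2, dx3]

def ydd(x):
    """Second output derivative, Eq. (4.3)."""
    x1, x2, x3 = x
    return g0 - (k / m) * x2 - L0 * a * x3**2 / (2 * m * (a + x1)**2)

def y3(x, u):
    """Third output derivative, Eq. (4.4): y3 = F - L0*a*x3/(m*L(x1)*(a+x1)^2) * u."""
    x1, x2, x3 = x
    dx = dyn(x, u)
    F = -(k / m) * dx[1] + (L0 * a * x3**2) / (m * (a + x1)**2) * (
        (R - L0 * a * x2 / (a + x1)**2) / L(x1) + x2 / (a + x1))
    return F - L0 * a * x3 / (m * L(x1) * (a + x1)**2) * u

x, u, h = [0.02, 0.1, 1.0], 0.5, 1e-7
xh = [xi + h * di for xi, di in zip(x, dyn(x, u))]
fd = (ydd(xh) - ydd(x)) / h   # finite-difference d/dt of y'' along the flow
```

The agreement between `fd` and `y3` confirms that the expression for F is consistent with the dynamics (4.1).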

4.3 Robust Evolving Granular Feedback Control with Input-Output Linearization

This section develops a robust evolving granular feedback linearization controller inspired by the certainty equivalence principle (de Water and Willems, 1981) and input-output feedback linearization (Slotine and Li, 1991). The idea is to use estimates f̂(x,e) and ĝ(x,e) of f(x) and g(x) in (2.1), produced by the ePL algorithm, and to apply the control input:

$$ u = \frac{1}{\hat{g}(\mathbf{x},\mathbf{e})}\left[ v - \hat{f}(\mathbf{x},\mathbf{e}) \right]. \qquad (4.7) $$

The system model (2.1) and control input (4.7) give:

$$ x^{(n)} = v + [f(\mathbf{x}) - \hat{f}(\mathbf{x},\mathbf{e})] + [g(\mathbf{x}) - \hat{g}(\mathbf{x},\mathbf{e})]\, u. \qquad (4.8) $$

To develop a state feedback control law such that the output 푦 asymptotically tracks a smooth reference signal 푟(푡), we may choose the linear control law:

$$ v = r^{(n)} - \mathbf{K}\mathbf{e} \qquad (4.9) $$

where r^(n) is the n-th derivative of r(t), e = r − x = [e₁ ė₁ ··· e₁^(n−1)]ᵀ, r = [r ṙ ··· r^(n−1)]ᵀ is the vector of reference values for each state x_i, and K ∈ ℝⁿ is a vector with the coefficients of a Hurwitz polynomial. Therefore, because e = r(t) − y and e^(n) = r^(n) − y^(n), from (4.8) we get the following expression for the error dynamics:

$$ e^{(n)} = \mathbf{K}\mathbf{e} + [\hat{f}(\mathbf{x},\mathbf{e}) - f(\mathbf{x})] + [\hat{g}(\mathbf{x},\mathbf{e}) - g(\mathbf{x})]\, u \qquad (4.10) $$

Let the function estimates be such that f̂(x,e) = f(x) + w_f and ĝ(x,e) = g(x) + w_g, where w_f and w_g are the estimation errors of the ePL algorithm. Thus, the error dynamics can be expressed as:

$$ e^{(n)} = \mathbf{K}\mathbf{e} + w \qquad (4.11) $$

where w = w_f + w_g u. Expression (4.11) can be rewritten as:

$$ \dot{\mathbf{e}} = A_c \mathbf{e} + B_c w \qquad (4.12) $$

where

$$ A_c = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ k_n & k_{n-1} & \cdots & \cdots & \cdots & k_1 \end{bmatrix} \qquad \text{and} \qquad B_c = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}. $$
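The companion-form pair A_c, B_c above can be assembled for any order n; a sketch (the gain values passed in are placeholders for illustration):

```python
def companion(k):
    """Build A_c (companion form, last row [k_n, ..., k_1]) and B_c = [0,...,0,1]^T
    for the error dynamics (4.12), given the gain list k = [k_1, ..., k_n]."""
    n = len(k)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0           # superdiagonal of ones
    A[n - 1] = list(reversed(k))    # last row: k_n, k_{n-1}, ..., k_1
    B = [[0.0] for _ in range(n)]
    B[n - 1][0] = 1.0
    return A, B

A, B = companion([2.0, 1.0])  # n = 2 with illustrative gains k_1 = 2, k_2 = 1
```

For n = 2 this yields A_c = [[0, 1], [k_2, k_1]] and B_c = [0, 1]ᵀ, matching the structure displayed above.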

The ReGFL control approach is shown in Figure 4.1. Notice that the plant state x, the error vector e, and the linear control signal 푣 are inputs to the ePL algorithm which produces estimates 푓^(x,e) and 푔^(x,e) to compute the control signal 푢.


Figure 4.1 – ReGFL control.

More specifically, the control input 푢 is produced by the ePL algorithm using fuzzy functional rules of the form:

IF z^k is 𝒜_i^k THEN f̂_i^k(x,e) = (γ_{f_i}^k)ᵀ χ^k AND ĝ_i^k(x,e) = (γ_{g_i}^k)ᵀ χ^k

where γ_{f_i}^k and γ_{g_i}^k are vectors of parameters, χ^k = [(x^k)ᵀ (e^k)ᵀ 1]ᵀ, and z^k = [x^k e₁^k]ᵀ. The fuzzy rules use affine functions as local approximators of f and g. As mentioned in Section 2.2.2, the RLS can be used to estimate the parameters γ_{f_i}^k and γ_{g_i}^k. The estimates f̂^k(x,e) and ĝ^k(x,e) of f̂(x,e) and ĝ(x,e) at each step k are computed from f̂_i^k(x,e) and ĝ_i^k(x,e), i = 1, ···, c^k. We use the RLS algorithm with forgetting factor (Ljung, 1999):

$$ \Upsilon_{f_i}^k = \frac{\Phi_{f_i}^{k-1} \chi^k}{(\chi^k)^T \Phi_{f_i}^{k-1} \chi^k + \zeta} $$

$$ \gamma_{f_i}^k = \gamma_{f_i}^{k-1} + \Upsilon_{f_i}^k \left[ K_p e_1^k + K_{i_f} \sum_{j=0}^{k} e_1^j - (\chi^k)^T \gamma_{f_i}^{k-1} \right] $$

$$ \Phi_{f_i}^k = \frac{1}{\zeta}\left[ \Phi_{f_i}^{k-1} - \frac{\Phi_{f_i}^{k-1} \chi^k (\chi^k)^T \Phi_{f_i}^{k-1}}{(\chi^k)^T \Phi_{f_i}^{k-1} \chi^k + \zeta} \right] \qquad (4.13) $$

and

$$ \Upsilon_{g_i}^k = \frac{\Phi_{g_i}^{k-1} \chi^k}{(\chi^k)^T \Phi_{g_i}^{k-1} \chi^k + \zeta} $$

$$ \gamma_{g_i}^k = \gamma_{g_i}^{k-1} + \Upsilon_{g_i}^k \left[ K_{i_g} \sum_{j=0}^{k} e_1^j - (\chi^k)^T \gamma_{g_i}^{k-1} \right] $$

$$ \Phi_{g_i}^k = \frac{1}{\zeta}\left[ \Phi_{g_i}^{k-1} - \frac{\Phi_{g_i}^{k-1} \chi^k (\chi^k)^T \Phi_{g_i}^{k-1}}{(\chi^k)^T \Phi_{g_i}^{k-1} \chi^k + \zeta} \right] \qquad (4.14) $$

where Υ_{q_i}^k is the Kalman gain of the i-th rule at the k-th step (q = f or g), Φ_{q_i}^k is the covariance matrix, and ζ is the forgetting factor. Thus, at each step k, the estimates f̂^k(x,e) and ĝ^k(x,e) are computed using (4.13), (4.14) and the weighted average of the local estimates, similarly to (2.28). Next, the control input is computed from (4.7). Algorithm 4.1 details the robust evolving granular feedback linearization control algorithm.

Algorithm 4.1 ReGFL control algorithm.
1: Input: v^k, e^k, x^k, k = 1, ···
2: Output: u^k
3: Choose initial values for ξ, ϑ, τ, λ ∈ [0,1]
4: Choose K_p, K_{i_f}, K_{i_g}, σ and ζ
5: Set the initial clusters: c^1 ≥ 2, v_i^1 ← [x^1 e_1^1]^T, i = 1, ···, c^1
6: Set the initial arousal indexes: a_i^1 ← 0, i = 1, ···, c^1
7: while k ≤ ∞ do
8:   [v^k e^k x^k] ← Read new data
9:   z^k ← [x^k e_1^k]^T
10:  χ^k ← [(x^k)^T (e^k)^T 1]^T
11:  for i = 1 to c^k do
12:    Compute the compatibility index ρ_i^k using (2.24)
13:    Compute the arousal index a_i^{k+1} using (2.25)
14:  end for
15:  if min_{j=1,···,c^k} a_j^k > τ then
16:    v_{c^k+1}^k ← z^k
17:    c^k ← c^k + 1
18:  else
19:    Update the most compatible cluster v_s^k using (2.26)
20:    Update the parameter vectors γ_{f_i}^k and γ_{g_i}^k using (4.13) and (4.14)
21:  end if
22:  for i = 1 to c^k − 1 and j = i + 1 to c^k do
23:    Compute the compatibility index ρ_{ij}^k using (2.27)
24:    if ρ_{ij}^k ≥ λ then
25:      Delete the cluster center v_j^k and rule R_j
26:      c^k ← c^k − 1
27:    end if
28:  end for
29:  for i = 1 to c^k do
30:    Compute the firing degree μ_i^k using (2.22)
31:  end for
32:  Compute the estimates f̂^k(x,e) and ĝ^k(x,e) using (2.28)
33:  Compute the control signal u^k using (4.7)
34: end while
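The parameter updates (4.13)-(4.14) are built on recursive least squares with a forgetting factor. Stripped of the proportional/integral error weights K_p, K_{i_f}, K_{i_g}, the standard RLS core they share can be sketched as follows (a scalar-output linear regression, not the full rule-based estimator):

```python
def rls_update(gamma, Phi, chi, y, zeta=0.98):
    """One RLS step with forgetting factor zeta:
    gain:       Y     = Phi chi / (chi^T Phi chi + zeta)
    parameters: gamma += Y (y - chi^T gamma)
    covariance: Phi   = (Phi - Y chi^T Phi) / zeta
    """
    n = len(chi)
    Pchi = [sum(Phi[i][j] * chi[j] for j in range(n)) for i in range(n)]
    denom = sum(chi[i] * Pchi[i] for i in range(n)) + zeta
    Y = [p / denom for p in Pchi]
    err = y - sum(chi[i] * gamma[i] for i in range(n))
    gamma = [gamma[i] + Y[i] * err for i in range(n)]
    chiP = [sum(chi[i] * Phi[i][j] for i in range(n)) for j in range(n)]
    Phi = [[(Phi[i][j] - Y[i] * chiP[j]) / zeta for j in range(n)] for i in range(n)]
    return gamma, Phi

# Fit y = 2 x + 1 from noiseless samples using regressor chi = [x, 1]
gamma = [0.0, 0.0]
Phi = [[1000.0, 0.0], [0.0, 1000.0]]
for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    gamma, Phi = rls_update(gamma, Phi, [x, 1.0], 2.0 * x + 1.0)
```

In the thesis setting the same recursion runs per rule, with the affine regressor χ^k and the weighted error term of (4.13)-(4.14) in place of the plain residual.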

4.4 Performance Evaluation

In this section, we revisit the surge tank control problem of Section 3.3.1. The tank has the dynamics:

$$ \dot{h} = \frac{-c\sqrt{2 g h}}{A(h)} + \frac{1}{A(h)}\, u \qquad (4.15) $$

where h is the level (m), g is the gravity constant (m/s²), c is the cross-sectional area of the output pipe (m²), u is the input (m³/s), and A(h) is the cross-sectional area of the tank at h (m²), given by A(h) = a h + b, where a and b are known constants. Assuming Euler approximation, the discrete model of the tank is:

$$ h^{k+1} = h^k + T\left[ -\frac{c\sqrt{2 g h^k}}{A(h^k)} + \frac{1}{A(h^k)}\, u^k \right] \qquad (4.16) $$

where T is the sampling time. The tank simulation uses (4.16) and a state feedback controller v^k = −Ke with gain set at K = 1.25 to stabilize the closed-loop system. The constants are a = 0.01, b = 0.2, and c = 0.05, and the sampling time is T = 0.1 s. Notice that the sampling time is short enough to approximate reasonably well the continuous dynamics of the tank (Passino and Yurkovich, 1997). The ePL parameters are set as ξ = 0.005, ϑ = 0.000125, τ = 0.0075, λ = 0.85, σ = 0.25, and ζ = 0.98. The values of these parameters were chosen as in (Lughofer, 2011, Chap. 3). The parameter update formulas use K_p = 0.55, K_{i_f} = 0.01, and K_{i_g} = 0.04. Moreover, the actuator saturates at ±50 m³/s, that is, the control inputs are constrained as follows (Slotine and Li, 1991):

$$ u^k = \begin{cases} 50 & \text{if } u^k > 50 \\ u^k & \text{if } -50 \leq u^k \leq 50 \\ -50 & \text{if } u^k < -50 \end{cases} \qquad (4.17) $$
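The discrete tank model (4.16) with the saturation (4.17) can be sketched as below; g = 9.81 m/s² is an assumed value, since the text quotes only "the gravity constant":

```python
import math

A_COEF, B_COEF, C_COEF = 0.01, 0.2, 0.05   # a, b, c of the tank
G, T = 9.81, 0.1                            # gravity (assumed 9.81), sampling time

def area(h):
    """Cross-sectional area A(h) = a h + b."""
    return A_COEF * h + B_COEF

def sat(u, lim=50.0):
    """Actuator saturation (4.17): clamp u to [-lim, lim]."""
    return max(-lim, min(lim, u))

def tank_step(h, u):
    """One Euler step of the tank level dynamics (4.16)."""
    u = sat(u)
    return h + T * (-C_COEF * math.sqrt(2 * G * h) / area(h) + u / area(h))
```

With u = 0 the level drains through the output pipe; a sufficiently large (saturated) input raises it, which is the behavior the ReGFL loop has to regulate.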

To evaluate the performance of ReGFL, we consider three scenarios characterized by three reference signals: a square, a sawtooth, and a triangular waveform, as in (Banerjee et al., 2011). Simulation results for each scenario are depicted in Figures 4.2, 4.3, and 4.4. The following convention is adopted: control performance with the nominal tank parameter and EFL (green line); control performance with the value of the tank parameter c 50% smaller than the nominal value with EFL (red line) and with ReGFL (blue line). The reference is depicted as a black dashed line. The EFL uses the same state feedback gain as ReGFL. Looking at Figures 4.2, 4.3, and 4.4, we can visually verify that when a precise tank model is used, the EFL behaves as expected (green line). However, when there is a modeling mismatch (due to a 50% variation in the value of c), one clearly observes the online adaptivity and superior performance of ReGFL. Alternatively, the closed-loop system behavior can be quantified using the integral absolute

Figure 4.2 – Feedback linearization in surge tank level control, square waveform: nominal tank parameter with EFL (green line); tank parameter c 50% smaller than the nominal value with EFL (red line) and with ReGFL (blue line); the reference is depicted as a black dashed line.

error (IAE) and integral of time-weighted absolute error (ITAE), as usual in process control. Table 4.1 summarizes the results and shows that the ReGFL outperforms EFL. In Table 4.1, the lower the index value, the better the performance.
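For reference, the three waveforms used as reference signals can be generated as in the sketch below (unit amplitude and period are a simplification; the amplitudes used in the simulations are not restated here):

```python
def square(t, period=1.0):
    """+1 for the first half of each period, -1 for the second."""
    return 1.0 if (t % period) < period / 2 else -1.0

def sawtooth(t, period=1.0):
    """Ramp from -1 to +1 over each period."""
    return 2.0 * (t % period) / period - 1.0

def triangular(t, period=1.0):
    """Ramp up then down between -1 and +1."""
    phase = (t % period) / period
    return 4.0 * phase - 1.0 if phase < 0.5 else 3.0 - 4.0 * phase
```

The square wave stresses the loop with step changes, while the sawtooth and triangular waves probe ramp tracking, which is why the three scenarios are evaluated separately.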

Table 4.1 – Performance indexes of the controllers.

Waveform     Method           IAE     ITAE    Rules
Square       EFL Nominal      164.4   22500   –
             EFL Mismatch     326.6   50850   –
             ReGFL Mismatch    48.0    6720   2
Sawtooth     EFL Nominal      132.6   18190   –
             EFL Mismatch     323.3   49780   –
             ReGFL Mismatch    72.1    7090   3
Triangular   EFL Nominal      142.3   19300   –
             EFL Mismatch     334.5   37890   –
             ReGFL Mismatch    46.3    6315   3

Figure 4.3 – Feedback linearization in surge tank level control, sawtooth waveform: nominal tank parameter with EFL (green line); tank parameter c 50% smaller than the nominal value with EFL (red line) and with ReGFL (blue line); the reference is depicted as a black dashed line.

Figure 4.4 – Feedback linearization in surge tank level control, triangular waveform: nominal tank parameter with EFL (green line); tank parameter c 50% smaller than the nominal value with EFL (red line) and with ReGFL (blue line); the reference is depicted as a black dashed line.

4.5 Summary

This chapter has developed a robust granular feedback linearization adaptive control approach based on evolving participatory learning, the certainty equivalence principle, and input-output feedback linearization. The evolving granular feedback controller was evaluated using level control of the classic surge tank benchmark. Its performance was compared against exact feedback linearization. Simulation results have shown that robust evolving granular feedback linearization outperforms exact feedback linearization. Future research should compare the controller developed herein with alternative adaptive controllers, pursue convergence analysis of the algorithm, and derive estimates of performance bounds under load disturbances and parameter drifts.

5 Robust Evolving Granular Feedback Linearization with Observers

Feedback linearization control of nonlinear systems is a simple and practical design approach whenever a reasonably good, faithful model of the system is available. Otherwise, feedback linearization may suffer from unpredictable, eventually unstable behavior if there is a mismatch between the model used during design and the actual system. In addition, feedback control often assumes that the system state is accessible, which may not be the case in many practical circumstances. This chapter introduces a high-gain observer-based approach to handle situations where the states of the controlled process are not available for measurement and feedback control.

5.1 Introduction

In practice, nonlinearities affect the control of processes and machines due to structural and parametric uncertainties (Dinh et al., 2018). Robust and adaptive control schemes became fundamental to handle uncertain systems, and research has been conducted over the last decades to improve the effectiveness of control loops (de Jesús Rubio, 2018). For instance, Freidovich and Khalil (2006); Khalil (2017a) suggest an extended high-gain observer that guarantees that the system follows a reference signal by augmenting the state with a tracking error integrator. Alternatively, Chaji and Sani (2015) use a high-gain observer with input-output feedback linearization to control the position of an electro-hydraulic servo. A control scheme combining a high-gain observer with the super-twisting algorithm to control the angular position of a DC motor is reported in (Guermouche et al., 2015). Recently, Chen et al. (2016); Kayacan and Fossen (2019) proposed high-gain observers to estimate unknown feedback linearization errors and cancel their effects in the control loop. Motivated by the nonlinear methods discussed in Chapters 2 and 4, this chapter introduces an observer-based approach to overcome situations in which, differently from (Oliveira et al., 2019), the states of the controlled process are not available for input-output feedback linearization. High-gain observers are used to estimate the state needed as input by the ReGFL. The combination of state estimation and the granular robust and adaptive control strategy ReGFL results in the robust evolving granular control with high-gain observers (RegHGO). Two benchmark control problems are used to evaluate the performance of the RegHGO controller. The first concerns a fan and plate process (Kungwalrut et al., 2011; Simas et al., 1998; Dincel et al., 2014), and the second addresses the angular position control of an arm driven by a DC motor (Guermouche et al., 2015; Morán and Viera, 2017; Freidovich and Khalil, 2006; Khalil, 2017a). We exploit the fan and plate process to evaluate the closed-loop performance of RegHGO in regulation mode. The tracking characteristics of RegHGO are evaluated by controlling the angular position of a rigid arm. We pay attention to the reference and observer errors during regulation and tracking control. In both cases, the simulations consider models with neglected dynamics and parameter uncertainties. We compare the closed-loop performance of the RegHGO controller with exact FL with full-state feedback and a full-state observer (Chaji and Sani, 2015), and with the extended HGO (Khalil, 2017a). Quantitative indexes measured by the root mean square error (RMSE), the integral absolute error (IAE), the integral of time-weighted absolute error (ITAE), and the Euclidean norm of the control signal (‖u‖₂) are used to evaluate and compare performance. The results suggest that RegHGO outperforms the remaining controllers.

5.2 Robust Feedback Linearization Control with Observers

Assume a nonlinear system described by (2.1), and that the following assumption is verified:

Assumption 1: 푓(x) and 푔(x) are unknown continuously differentiable functions with locally Lipschitz derivatives as in (Gauthier and Kupka, 2001).

From (2.35) we can design a state observer for system (2.1) as follows:

$$ \dot{\hat{\mathbf{x}}} = A_c \hat{\mathbf{x}} + B_c\left[ \hat{f}(\hat{\mathbf{x}}, e_1) + \hat{g}(\hat{\mathbf{x}}, e_1)\, u \right] + H(\epsilon)\left( y - C_c \hat{\mathbf{x}} \right) $$
$$ y = C_c \mathbf{x}. \qquad (5.1) $$

For smooth reference signals r(t), the feedback linearization control law (4.7) can be computed using the estimate x̂ of the state as:

$$ u = \frac{1}{\hat{g}(\hat{\mathbf{x}}, e_1)}\left[ v - \hat{f}(\hat{\mathbf{x}}, e_1) \right]. \qquad (5.2) $$

Replacing (5.2) in (2.1), the closed-loop dynamics becomes:

$$ \dot{\mathbf{x}} = A_c \mathbf{x} + B_c v + B_c \Delta w, \qquad (5.3) $$

with Δw = [f(x) − f̂(x̂, e₁)] + [g(x) − ĝ(x̂, e₁)] u. We also assume the following:

Assumption 2: Δw, caused by the ePL estimation error, can be viewed as an exogenous bounded and vanishing disturbance.

Replacing the control law (5.2) in the observer (5.1) results in:

$$ \dot{\hat{\mathbf{x}}} = A_c \hat{\mathbf{x}} + B_c v + H(\epsilon)\left( y - C_c \hat{\mathbf{x}} \right). \qquad (5.4) $$

Because the reference is r(t) = [r ṙ r̈ ··· r^(n−1)]ᵀ, a natural choice for the linear control law is v = r^(n) − Ke, as in (Chen, 2013), with the tracking error vector e(t) = r(t) − x̂(t).

Thus, applying the linear feedback control in (5.4), and recalling that 푦 = 퐶푐x, the dynamics of the closed-loop error is:

$$ \dot{\mathbf{e}} = (A_c - B_c \mathbf{K})\mathbf{e} - H(\epsilon) C_c \tilde{\mathbf{e}}. \qquad (5.5) $$

The feedback gain K can be chosen using any linear control design method, such as pole placement or the linear quadratic regulator (LQR), to make the closed-loop system matrix (A_c − B_c K) Hurwitz (Freidovich and Khalil, 2006). It is worth mentioning that the stability of the resulting closed-loop system can be verified with the aid of Lyapunov theory. A Lyapunov candidate function is V(e) = eᵀPe, where P = Pᵀ > 0. The time derivative of V(e) is negative whenever:

$$ P(A_c - B_c \mathbf{K}) + (A_c - B_c \mathbf{K})^T P < 0. $$

Consequently, the observer error dynamics are obtained by taking the difference between (5.3) and (5.4):

$$ \dot{\tilde{\mathbf{e}}} = \left( A_c - H(\epsilon) C_c \right)\tilde{\mathbf{e}} + B_c \Delta w. \qquad (5.6) $$

The same idea used to choose the feedback gain K can be adopted to design the observer gain H(ε). Hence, choosing H(ε) as indicated in (2.38) makes the observer matrix (A_c − H(ε)C_c) Hurwitz. The observer error decays exponentially, which can be proved using Lyapunov theory similarly to the tracking error. Expressions (5.5) and (5.6) describe the closed-loop system with the proposed controller, called robust evolving granular high-gain observer (RegHGO). The dynamics of the controller and the observer can be combined as follows:

$$ \begin{bmatrix} \dot{\mathbf{e}} \\ \dot{\tilde{\mathbf{e}}} \end{bmatrix} = \underbrace{\begin{bmatrix} (A_c - B_c \mathbf{K}) & -H(\epsilon) C_c \\ 0 & \left( A_c - H(\epsilon) C_c \right) \end{bmatrix}}_{\mathbf{A}} \begin{bmatrix} \mathbf{e} \\ \tilde{\mathbf{e}} \end{bmatrix} + \begin{bmatrix} 0 \\ B_c \end{bmatrix} \Delta w, \qquad (5.7) $$

where 0 is the null matrix with appropriate dimensions. Notice that the closed-loop matrix A is block triangular. Therefore, the separation principle applies, and the design of the state feedback gain K and of the observer gain H(ε) can be done independently. The RegHGO control approach is shown in Figure 5.1. Differently from the ReGFL control of Figure 4.1, the RegHGO controller works with the estimated state x̂ instead of the actual state. Thus, the ePL modeling algorithm shown in Algorithm 4.1 remains applicable for RegHGO as well, provided that the actual state is replaced by the estimated state.
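A minimal numerical illustration of the separation property of (5.7): with Δw = 0 and both diagonal blocks Hurwitz, the combined error decays even though the observer transient drives the tracking error. The second-order matrices and gains below are assumed for illustration only (A_c = [[0,1],[0,0]], B_c = [0,1]ᵀ, C_c = [1,0], K = [2,3], H = [10,25]ᵀ):

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

K_BLOCK = [[0.0, 1.0], [-2.0, -3.0]]    # (A_c - B_c K), poles -1 and -2
H_BLOCK = [[-10.0, 1.0], [-25.0, 0.0]]  # (A_c - H C_c), double pole at -5
HC = [[10.0, 0.0], [25.0, 0.0]]         # H C_c coupling term of (5.5)

e, et = [1.0, 0.0], [1.0, 0.0]          # tracking and observer errors
dt = 1e-3
for _ in range(20000):                  # simulate 20 s of (5.7) with Delta_w = 0
    d_e = [a - b for a, b in zip(matvec(K_BLOCK, e), matvec(HC, et))]
    d_et = matvec(H_BLOCK, et)
    e = [ei + dt * d for ei, d in zip(e, d_e)]
    et = [ei + dt * d for ei, d in zip(et, d_et)]
```

Because the system matrix is block triangular, the observer error evolves independently of the tracking error, so K and H(ε) can indeed be tuned separately.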


Figure 5.1 – RegHGO control.

5.3 Performance Evaluation

In this section, we analyze the behavior and evaluate the performance of the proposed RegHGO controller applied to two systems. The first is a fan and plate system, where the purpose is to address the position regulation of the plate. The second concerns the angular position tracking of a rigid arm driven by a DC motor. To quantify performance, we use the root mean square error (RMSE), the integral absolute error (IAE), the integral of time-weighted absolute error (ITAE), and the Euclidean norm of the control signal (‖u‖₂). To facilitate comparison, we normalize these indexes using index_normalized = index_x / index_RegHGO, where index can be IAE, ITAE, RMSE, or ‖u‖₂, and the subscript x indicates the name of the control method used in the comparison.
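For concreteness, the indexes and the normalization can be computed from sampled signals as follows; this is an illustrative sketch with discrete-time approximations of the integrals (the function names are ours, not from the thesis):

```python
import numpy as np

def performance_indexes(e, u, T):
    """Discrete-time approximations of RMSE, IAE, ITAE and ||u||2
    from sampled tracking error e and control signal u (period T)."""
    t = np.arange(len(e)) * T
    return {
        "RMSE":   np.sqrt(np.mean(e ** 2)),
        "IAE":    np.sum(np.abs(e)) * T,
        "ITAE":   np.sum(t * np.abs(e)) * T,
        "||u||2": np.linalg.norm(u),
    }

def normalize(index_x, index_reghgo):
    # index_normalized = index_x / index_RegHGO; values > 1 favor RegHGO
    return {k: index_x[k] / index_reghgo[k] for k in index_x}
```

A normalized value greater than one then means the compared method performed worse than RegHGO on that index.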

5.3.1 Fan and Plate System

The fan and plate system is an important and hard benchmark used to evaluate many control system alternatives (Kungwalrut et al., 2011). The system has two main features: high nonlinearity and susceptibility to environmental disturbances (Simas et al., 1998). The plate behaves like a pendulum: the position of its surface is set by the thrust force of the airflow. These facts act on the dynamics of the system in a nonlinear way, yielding a different dynamic behavior for each angular position of the plate. The forward dynamics (positive direction) differs from the backward one (negative direction) because the gravitational force acts favorably in the backward movement (Dincel et al., 2014). The fan and plate system is depicted in Figure 5.2. According to Figure 5.2, the system has two essential parts, the fan driver and the plate. The aim is to keep the angular position θ of the plate with respect to the normal axis N at given angular positions by changing the airflow. We adopt the fan and plate model proposed in (Oliveira, 2015), namely:

    ẋ = A_c x + B_c [a₀ sin(x₁) + a₁ cos²(x₁) u],
    y = C_c x,   (5.8)

[Diagram: plate at angle x₁ = θ from the normal axis N, deflected by the airflow; x₂ = θ̇.]

Figure 5.2 – Fan and plate system.

where x₁ is the angular position (rad), and x₂ the angular speed (rad/s). The control input is u, the airflow speed (m/s). Constants a₀ and a₁ depend on mechanical properties of the plate such as its size, mass, and the position of the center of mass. Comparing (2.6) and (5.8), we note that f(x) = α(x) = a₀ sin(x₁) and g(x) = γ(x) = a₁ cos²(x₁). Therefore, there exists a feedback linearization control law (2.2) that linearizes the system (5.8). We run the numerical experiments with the discrete-time model of the system (5.8) obtained by Euler approximation, yielding

    x^{k+1} = x^k + T [A_c x^k + B_c (a₀ sin(x₁^k) + a₁ cos²(x₁^k) u^k)],   (5.9)

where T = 0.001 s is the sampling time, a₀ = −1471.5, and a₁ = 26.81. We adopt the state feedback gain suggested in (Oliveira, 2015), K = [17.0668 6.1538]. The high-gain observer was designed by pole placement with H(ε) = [200 29600]ᵀ and ε = 0.01. The matrix A of the closed-loop system (5.7) with these K and H(ε) has all its eigenvalues in the left half-plane, which means that the closed-loop system is asymptotically stable. Three reference values were chosen for the angular position, π/4, π/3, and π/6, respectively; each is held for 3 seconds. The ePL parameters are set as φ = 0.003, ϑ = 0.002, τ = 0.030, λ = 0.875, σ = 0.02, and ζ = 0.99. These values were chosen by trial and error following the guidelines (search range) found in (Lima et al., 2006; Lughofer, 2011). The RLS algorithm uses K_p = 1, K_f = 0.7, and K_g = 0.01. The initial conditions are x(0) = [π/20 0]ᵀ for the fan and plate system and x̂(0) = [0 0]ᵀ for the observer. We also assume neglected dynamics in the form of a sinusoidal disturbance d(t) = π sin(2πt) applied during testing from t = 2.0 up to t = 7.0 seconds. The performance of the RegHGO controller is compared with the input-output feedback linearization with high-gain observer (FLHGO) of (Khalil, 2002; Chaji and Sani, 2015). Simulation results are summarized in Figure 5.3: from the bottom up we see d(t), the control input u, and the states x₂ and x₁.
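As an illustration of this setup, the sketch below simulates the Euler model (5.9) under the exact linearizing law u = (v − a₀ sin x₁)/(a₁ cos² x₁) with v = Ke, for a single step reference π/4 and without observer, disturbance, or ePL adaptation — a deliberately idealized baseline, not the RegHGO loop:

```python
import numpy as np

# Fan-and-plate parameters and exact feedback linearization (thesis values)
a0, a1 = -1471.5, 26.81
K = np.array([17.0668, 6.1538])
T = 0.001

x = np.array([np.pi / 20, 0.0])   # initial state [angle, angular speed]
r = np.pi / 4                      # step reference for the plate angle

for _ in range(3000):              # 3 seconds of simulation
    e = np.array([r - x[0], -x[1]])                          # tracking error
    v = K @ e                                                 # outer linear law
    u = (v - a0 * np.sin(x[0])) / (a1 * np.cos(x[0]) ** 2)    # linearizing law
    xdot = np.array([x[1], a0 * np.sin(x[0]) + a1 * np.cos(x[0]) ** 2 * u])
    x = x + T * xdot                                          # Euler step (5.9)
```

With exact cancellation the error dynamics are linear with the poles placed by K, so the plate settles at the reference within the 3-second hold.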


Figure 5.3 – Behavior of the fan and plate system with the RegHGO controller (continuous blue line), the FLHGO controller (dashed red line), and the reference signal (black dash-dotted line).

It is clear from Figure 5.3 that the RegHGO controller (continuous blue line) achieves better performance than FLHGO (dashed red line): the controlled variable x₁ attains the reference value faster and remains there despite the disturbance d(t). Notice that the disturbance is introduced in the closed-loop system at t = 2 seconds and that the RegHGO controller quickly counteracts its effects (Figure 5.3(c)). Contrary to RegHGO, the FLHGO controller produces a control signal that tries to mitigate the disturbance, but it is not effective, as the state x₁ clearly oscillates around the reference value. Similar behavior is observed at t = 7 seconds, when the disturbance signal ceases to act on the control loop. Here, the FLHGO controller acts to bring the angular position back to the desired value. The RegHGO acts consistently on the state x₂ when the reference is changed (Figure 5.3(b)), bringing the angular position to the reference value in a shorter time than the FLHGO controller.

The tracking error e₁ and the observer error ẽ₁ for the same simulation period are shown in Figure 5.4. It is easy to see that RegHGO drives the tracking error to zero in a shorter time than the FLHGO controller. It is interesting to note the chattering that appears in the observer error. The first chattering occurs when a rule is added to the rule base. The second occurs around t = 6 seconds due to an abrupt change in the reference value.

Figure 5.4 – Tracking and observer errors of the RegHGO controller (continuous blue line) and the FLHGO controller (dashed red line).

The normalized performance indexes, computed over the entire experiment time interval, are shown in Figure 5.5. We recall that a normalized index greater than one means that RegHGO performs better than the control method used in the comparison.

Besides, we can see that the FLHGO has only one index, ‖u‖₂, similar to that of the RegHGO controller. All the remaining normalized indexes are greater than one, meaning that the results presented in Figure 5.5 confirm the better performance of RegHGO.


Figure 5.5 – Performance indexes of the RegHGO controller (blue continuous line) and the FLHGO controller (red dashed line).

5.3.2 Rigid Arm Driven by DC Motor

The direct current (DC) motor is an electromechanical device that converts electrical energy into mechanical energy (Morán and Viera, 2017). This motor has been an object of study by the scientific and industrial communities because it has a wide range of applications, including robotic manipulators, positioning systems, industrial process drives, and electric vehicles such as trains, airplanes, cars, and bikes, among others (Rizzoni, 2003). However, DC motor dynamics exhibit strong nonlinearities such as friction, backlash, dead zone, or, in some cases, time-varying parameters (Guermouche et al., 2015). The design of robust, adaptive controllers remains a challenge for many researchers (Guermouche et al., 2015; Morán and Viera, 2017; Bento et al., 2018; Beltran-Carbajal et al., 2014; Freidovich and Khalil, 2006; Khalil, 2017a). The dynamic model of a DC motor attached to a rigid arm is as follows (Freidovich and Khalil, 2006):

    J θ̈ = k_c u − τ_F,   (5.10)

where J is the moment of inertia of the arm, θ̈ is the angular acceleration, k_c is the motor constant, u is the control input, and τ_F is an unknown friction torque. We assume that the friction torque can be modeled by the dynamic LuGre model (Khalil, 2017a), that is:

    τ_F = z + σ₁ (θ̇ − (|θ̇| / g(θ̇)) z) + σ₂ θ̇,   (5.11)

where z is the zero-dynamics state, θ̇ is the angular velocity, σ₁ is the damping coefficient of the bristles, σ₂ is the viscous friction coefficient, and g(θ̇) is the Stribeck curve:

    g(θ̇) = τ_{Fc+} + (τ_{Fs+} − τ_{Fc+}) e^{−(θ̇/v_s)²},   if θ̇ > 0,
    g(θ̇) = τ_{Fc−} + (τ_{Fs−} − τ_{Fc−}) e^{−(θ̇/v_s)²},   if θ̇ < 0,
    g(0)  = (g(0⁺) + g(0⁻)) / 2,

where τ_{Fc±} is the Coulomb friction, τ_{Fs±} is the static friction, and v_s is the Stribeck velocity. As suggested in (Freidovich and Khalil, 2006; Khalil, 2017a), we assume that the zero-dynamics satisfy the condition |z(0)| ≤ min{τ_{Fs−}, τ_{Fc+}}. The system (5.10) can be rewritten in state space form as follows:

    ẋ₁ = x₂,
    ẋ₂ = (k_c / J) u − τ_F / J,   (5.12)

    y = x₁,

where x₁ = θ is the angular position of the rigid arm. This system has the same form as (2.1). Thus, according to the linearizing feedback control design of Section 4.2, the control law (2.2) is:

    u = (J / k_c) v.   (5.13)

Here the aim of the RegHGO controller is to track the reference signal r(t) = sin(2t). The simulation experiments consider that the values of the parameters k_c and J are imprecise by −10% (Freidovich and Khalil, 2006), and use the discrete dynamic model of (5.10) approximated by the Euler method:

    x₁^{k+1} = x₁^k + T x₂^k,
    x₂^{k+1} = x₂^k + T ((0.81 k_c / (0.9 J)) u^k − τ_F^k / J),
    u^k = (J / k_c) v^k,   (5.14)

with sampling time T = 0.0001 seconds (Freidovich and Khalil, 2008). We use the feedback control law v^k = r̈^k + K e^k, with K = [14.0625 2.625], H(ε) = [2/ε 2.96/ε²]ᵀ, and ε = 0.01. The values of the remaining parameters are given in Table 5.1 (Freidovich and Khalil, 2006).
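For concreteness, the Stribeck curve and the LuGre torque (5.11) can be coded directly from the expressions above, using the Table 5.1 values; this is an illustrative sketch, and the function names are ours:

```python
import numpy as np

# Friction parameters (Table 5.1, Freidovich and Khalil, 2006)
tau_Fc_p, tau_Fc_m = 0.023, 0.021   # Coulomb friction (+/- directions)
tau_Fs_p, tau_Fs_m = 0.058, 0.052   # static friction
v_s = 0.01                           # Stribeck velocity
sigma1, sigma2 = 1.5, 0.004          # bristle damping, viscous friction

def g(w):
    """Stribeck curve for angular velocity w."""
    if w > 0:
        return tau_Fc_p + (tau_Fs_p - tau_Fc_p) * np.exp(-(w / v_s) ** 2)
    if w < 0:
        return tau_Fc_m + (tau_Fs_m - tau_Fc_m) * np.exp(-(w / v_s) ** 2)
    return 0.5 * (tau_Fs_p + tau_Fs_m)   # (g(0+) + g(0-)) / 2

def friction_torque(z, w):
    """LuGre torque (5.11); z is the internal bristle (zero-dynamics) state."""
    zdot = w - abs(w) / g(w) * z
    tau_F = z + sigma1 * zdot + sigma2 * w
    return tau_F, zdot
```

Integrating zdot alongside the arm dynamics reproduces the friction disturbance the controllers must reject.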

Table 5.1 – Constants and parameters for simulation of the rigid arm.

    Parameter   Value   | Parameter   Value
    k_c         2.5     | φ           0.005
    J           0.095   | ϑ           0.001
    σ₁          1.5     | τ           0.02
    σ₂          0.004   | λ           0.875
    τ_{Fc+}     0.023   | σ           0.01
    τ_{Fc−}     0.021   | ζ           0.98
    τ_{Fs+}     0.058   | K_p         131.45
    τ_{Fs−}     0.052   | K_if        7.0
    v_s         0.01    | K_ig        1.06
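Under the simplifying assumption τ_F = 0 (friction neglected), the mismatched discrete model (5.14) with the outer law v^k = r̈^k + Ke^k can be simulated directly. The sketch below (our illustration, not the full RegHGO loop) shows that the −10% parameter mismatch alone leaves a small bounded sinusoidal tracking error — the kind of residual the ePL compensation in RegHGO is meant to remove:

```python
import numpy as np

# Discrete rigid-arm model (5.14), -10% parameter mismatch, friction neglected
kc, J = 2.5, 0.095
K = np.array([14.0625, 2.625])
T = 0.0001

x = np.zeros(2)
err = []
for k in range(100_000):                 # 10 seconds at T = 1e-4
    t = k * T
    r, rdot, rddot = np.sin(2 * t), 2 * np.cos(2 * t), -4 * np.sin(2 * t)
    e = np.array([r - x[0], rdot - x[1]])
    v = rddot + K @ e                     # outer linear control law
    u = (J / kc) * v                      # nominal linearizing law (5.13)
    # plant uses the mismatched gain 0.81*kc/(0.9*J) = 0.9*(kc/J)
    x = x + T * np.array([x[1], (0.81 * kc / (0.9 * J)) * u])
    err.append(e[0])
```

The imperfect cancellation leaves a forced error term proportional to r̈, so the tracking error settles into a small sinusoid rather than vanishing.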

Two scenarios are considered to evaluate the performance of the RegHGO controller. First, we assume that the arm works continuously, and the aim is to evaluate controller performance during the transient and steady-state responses. A batch process of a manufacturing line is simulated in the second scenario. Each batch cycle lasts 1.5π seconds, with 0.5π seconds of sleep time between cycles. In this case, we are interested in the transient response at the beginning of each cycle. The RegHGO controller is compared with two feedback linearization-based controllers. The first is the exact feedback linearization controller with a high-gain observer (FLHGO), as proposed in (Chaji and Sani, 2015). The second is an extended high-gain observer associated with the linearizing feedback (EFLHGO), developed in (Khalil, 2017a). The simulation results are summarized in Figures 5.6 and 5.7. In Figure 5.6 we notice that during the transient response, from t = 0 to t = 1.5 seconds, the RegHGO controller is more effective than the remaining controllers. This leads to the saturation of the control signal at high values of the angular speed. The presence of the extended state in the EFLHGO controller changes the dynamics of the feedback linearization with observer (FLHGO), slowing it down. Looking at the steady-state response from t = 1.5 to t = 10.0 seconds, RegHGO and EFLHGO


Figure 5.6 – Behavior of the system (5.14) during continuous operation (first test scenario) with the RegHGO controller (continuous blue line), FLHGO (dash-dotted black line), EFLHGO (dashed red line), and the reference signal (dashed gray line).

behave similarly, but the FLHGO controller shows an offset error while RegHGO does not. Figure 5.7 also shows different behavior between the initial transient response, from t = 0 up to t = 1.5 seconds, and the second transient response, from t = 2π up to t = 7.78 seconds. The best performance is achieved by the RegHGO controller. Because of its learning ability, ePL gives a faster and more precise response, since knowledge about the system dynamics has been acquired during learning. The EFLHGO achieved the worst performance due to the additional state needed in the closed-loop system. Quantitative measures using normalized indexes are given in Table 5.2.

Table 5.2 – Performance indexes of the controllers in the rigid arm simulation.

    Time Interval [s]   Controller   RMSE    IAE     ITAE    ‖u‖₂
    0 − 1.5             RegHGO       1.000   1.000   1.000   1.000
                        EFLHGO       1.642   1.821   2.168   0.294
                        FLHGO        1.541   1.579   1.563   0.281
    2π − 7.78           RegHGO       0.348   0.635   4.231   0.682
                        EFLHGO       1.719   3.935   27.05   0.296
                        FLHGO        1.516   3.230   21.87   0.277
    1.5 − 10.0          RegHGO       1.000   1.000   1.000   1.000
                        EFLHGO       1.750   1.238   0.974   0.935
                        FLHGO        9.214   10.35   11.43   0.900

The results of Table 5.2 show that the RegHGO controller performs best. Notice, however, that the norm of the RegHGO control input is higher than that of the remaining methods. This is expected because RegHGO starts control with no information about the system dynamics, and continuously counteracts the effects of disturbances and reference value changes.

Figure 5.7 – Behavior of the rigid arm during batch operation of a manufacturing line (second test scenario) with the RegHGO controller (continuous blue line), FLHGO (dash-dotted black line), EFLHGO (dashed red line), and the reference signal (dashed gray line).

5.4 Summary

Feedback linearization associated with high-gain observers has given rise to an adaptive and robust approach to control a class of nonlinear processes. The extended high-gain and full state observers for feedback linearizable systems show robust closed-loop behavior, but they cannot adapt quickly to changes in the process dynamics. This chapter has introduced a novel control approach, called robust evolving granular with high-gain observer, which is effective for an important class of feedback linearizable systems. The new controller adapts to changes in process dynamics and improves the robustness of the closed-loop control system. These features are due to the capabilities of the evolving participatory learning algorithm combined with a high-gain observer. A fan and plate system was used to evaluate the performance of the proposed controller in regulation, while tracking control performance was examined using a DC motor attached to a rigid arm. In both cases, the closed-loop control system was disturbed by simulated neglected dynamics and parameter variation, and the new control scheme, RegHGO, outperformed the alternative feedback linearizing controllers with state observers. The most significant property of the controller is its ability to adapt to changes in the dynamics of the controlled processes despite the lack of precise knowledge of the processes and their models. Future work shall address the use of the robust evolving granular with high-gain observer in complex, distributed control architectures with communication constraints and noisy channels.

6 Conclusion

Feedback linearization control within the framework of control theory, evolving intelligent systems, and machine learning was the central subject of this thesis. It is well known that feedback linearization is a powerful tool when a precise model of the controlled process is available and all state variables of the process can be measured. Also well known is the fact that, for most industrial systems and processes, this is not the case: models are imprecise, and state measurements may be hard or too costly to obtain. This thesis has introduced a novel control approach based on feedback linearization, evolving participatory learning, and high-gain observers. The approach uses the notion of data granulation and granular computing to cluster the data space and learn functional rule-based fuzzy models. Functional fuzzy rules use clusters to find the membership functions of the rule antecedents, and affine models in the rule consequents. Each cluster corresponds to a functional fuzzy rule. The output of functional fuzzy rule-based models is given by the weighted average of the affine functions of the rules in the rule base. The idea is to use evolving functional fuzzy modeling with participatory learning to estimate the imprecision and neglected dynamics that can degrade closed-loop control performance. Stability analysis was done from the point of view of Lyapunov stability theory. Three robust, adaptive feedback linearization control approaches were developed. The first is a robust granular feedback linearization (RGFL) control scheme that uses a known but imprecise model of the process, affected by additive modeling mismatches. In this case, evolving participatory learning is used to estimate the modeling errors caused by the difference between the actual plant and the model. The idea is to reduce the effects of these errors in closed-loop control.
The second is a robust evolving granular feedback linearization (ReGFL) approach, an extended version of RGFL. ReGFL assumes that the model of the process is unknown or unavailable. The feedback linearization control is derived from participatory learning and is inspired by the certainty equivalence principle. The third approach adds a high-gain observer to the ReGFL controller. The robust evolving granular with high-gain observer (RegHGO) control approach inherits all the features of the RGFL and ReGFL controllers but uses the process output to estimate the states. All three approaches were evaluated using benchmark systems and processes available in the literature: the level control of a surge tank, the angular position control of a fan and plate system, the knee joint control using functional electrical stimulation, and a DC motor-driven rigid arm. Experimental tests were also done with the RGFL controller on an actual surge tank system. The results were quantified using classic process control indexes, and they indicate that the RGFL, ReGFL, and RegHGO controllers improve the robustness and adaptability of feedback linearizable control loops. Despite its contributions, some issues deserve future investigation. For instance, there is a need to develop a systematic mechanism to select the thresholds and learning rates of the participatory learning algorithm. The idea here is to make the feedback linearizable controllers as autonomous as possible. Another issue concerns feature selection and the analysis of participatory learning algorithms in high dimensional data spaces. Simulations have shown small differences in closed-loop performance when some components of the error vector are omitted. The use of granulation of the data space as a means of dimensionality reduction remains an open issue. The thesis addressed single-input single-output nonlinear systems only.
Thus, all the approaches developed could be extended to multiple-input multiple-output nonlinear systems as well. Further extensions should consider the higher-dimensional data space issues mentioned in the previous paragraph, and eventually new online clustering and modeling procedures. Finally, because numerous real-world processes and systems are large scale, there is a need to develop similar robust adaptive approaches to tackle the distributed, and eventually hierarchical, structure of large scale systems such as smart grids, transportation, logistics, and chemical processes, to mention but a few.

References

Andersen, M. S., Dahl, J., Liu, Z., and Vandenberghe, L. (2012). Interior-point methods for large-scale cone programming, pages 55–83. MIT Press, 1푠푡 edition.

Andersen, M. S., Dahl, J., and Vandenberghe, L. (2018). CVXOPT: A python package for convex optimization, version 1.2.2.

Angelov, P. (2013). Autonomous Learning Systems: From Data Streams to Knowledge in Real-time. John Wiley & Sons, Inc., 1푠푡 edition.

Angelov, P., Victor, J., Dourado, A., and Filev, D. (2004a). On-line evolution of Takagi- Sugeno fuzzy models. IFAC Proceedings Volumes, 37(16):67 – 72.

Angelov, P., Xydeas, C., and Filev, D. (2004b). On-line identification of MIMO evolv- ing Takagi- Sugeno fuzzy models. In 2004 IEEE International Conference on Fuzzy Systems, volume 1, pages 55–60.

Angelov, P. P. and Filev, D. P. (2004). An approach to online identification of Takagi- Sugeno fuzzy models. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(1):484–498.

Arabie, P., Hubert, L. J., and De Soete, G. (1996). Clustering and Classification. WORLD SCIENTIFIC, Singapore, 1푠푡 edition.

Banerjee, S., Chakrabarty, A., Maity, S., and Chatterjee, A. (2011). Feedback linearizing indirect adaptive fuzzy control with foraging based on-line plant model estimation. Applied Soft Computing, 11(4):3441 – 3450.

Bargiela, A. and Pedrycz, W. (2003). Granular Computing: An Introduction. Kluwer Academic Publishers, Boston, Dordrecht, London, 1st edition.

Beltran-Carbajal, F., Favela-Contreras, A., Valderrabano-Gonzalez, A., and Rosas-Caro, J. C. (2014). Output feedback control for robust tracking of position trajectories for DC electric motors. Electric Power Systems Research, 107:183 – 189.

Bennett, S. (1996). A brief history of automatic control. IEEE Control Systems Magazine, 16(3):17–25.

Bento, A. V., Oliveira, L. S., Scola, I. R., and Oliveira, A. C. (2018). State estimator appli- cation in DC motors of a robot mobile. In XXII Brazilian Conference on Automation, pages 1–6. References 84

Boyd, S., El Ghaoui, L., Feron, E., and Balakrishnan, V. (1994). Linear Matrix Inequalities in System and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA.

Chaji, A. and Sani, S. K. H. (2015). Observer based feedback linearization control for electro-hydraulic servo systems. In 2015 International Congress on Technology, Com- munication and Knowledge (ICTCK), pages 226–231.

Chakraborty, U. K. (2008). Advances in Differential Evolution. Springer-Verlag, Berlin, Heidelberg, 1푠푡 edition.

Chen, C. (2013). Linear System Theory and Design. Oxford University Press, Inc., New York, NY, USA, 4푡ℎ edition.

Chen, Z., Sun, W., Zhang, X., and Wang, Z. (2016). Output feedback control for a class of uncertain non-affine nonlinear systems. In 2016 35th Chinese Control Conference (CCC), pages 842–846.

Ciccarella, G., Dalla Mora, M., and Germani, A. (1993). A Luenberger-like observer for nonlinear systems. International Journal of Control, 57(3):537–556.

Davoodi, R. and Andrews, B. J. (1998). Computer simulation of FES standing up in paraplegia: a self-adaptive fuzzy controller with reinforcement learning. IEEE Transactions on Rehabilitation Engineering, 6(2):151–161.

de Jesús Rubio, J. (2018). Robust feedback linearization for nonlinear processes control. ISA Transactions, 74:155 – 164.

de Proença, D., Telles, D. N., Bueno, L. H. R., Covacic, M. R., and Gaino, R. (2012). Modelo fuzzy Takagi-Sugeno para controle do ângulo de articulação do joelho de pacientes paraplégicos. Semina: Ciências Exatas e Tecnológicas, 33(2):215–228.

de Water, H. V. and Willems, J. (1981). The certainty equivalence property in stochastic control theory. IEEE Transactions on Automatic Control, 26(5):1080–1087.

Denny, M. (2002). Watt steam governor stability. European Journal of Physics, 23(3):339– 351.

Dincel, E., Yal¸cın, Y., and Kurtulan, S. (2014). A new approach on angular position control of fan and plate system. In 2014 International Conference on Control, Decision and Information Technologies (CoDIT), pages 545–550.

Dinh, T., Marco, J., Yoon, J., and Ahn, K. (2018). Robust predictive tracking control for a class of nonlinear systems”. Mechatronics, 52:135 – 149. References 85

Dorf, R. C. and Bishop, R. H. (2000). Modern Control Systems. Prentice-Hall, Inc., Upper Saddle River, NJ, 9푡ℎ edition.

Ellis, G. (2002). Observers in Control Systems: A Practical Guide. Academic Press, Orlando, FL, USA, 1푠푡 edition.

Esfandiari, F. and Khalil, H. K. (1992). Output feedback stabilization of fully linearizable systems. International Journal of Control, 56(5):1007–1037.

Farza, M., M’Saad, M., Triki, M., and Maatoug, T. (2011). High gain observer for a class of non-triangular systems. Systems & Control Letters, 60:27 – 35.

Ferrarin, M. and Pedotti, A. (2000). The relationship between electrical stimulus and joint torque: a dynamic model. IEEE Transactions on Rehabilitation Engineering, 8(3):342– 352.

Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American statistical Association, 97(458):611–631.

Franco, A. E. O., Oliveira, L. S., and Leite, V. J. S. (2016). Síntese de ganhos para compensação robusta de sistemas linearizados por realimentação. In Congresso Brasileiro de Automática, pages 2695 – 2700, Vitória, ES.

Franco, A. L. D., Bourlès, H., De Pieri, E., and Guillard, H. (2006). Robust nonlinear control associating robust feedback linearization and H∞ control. IEEE Transactions on Automatic Control, 51:1200 – 1207.

Franken, H. M., Veltink, P. H., Tijsmans, R., Nijmeijer, H., and Boom, H. B. K. (1993). Identification of passive knee joint and shank dynamics in paraplegics using quadriceps stimulation. IEEE Transactions on Rehabilitation Engineering, 1(3):154–164.

Freidovich, L. B. and Khalil, H. K. (2006). Robust feedback linearization using extended high-gain observers. In Proceedings of the 45푡ℎ IEEE Conference on Decision and Con- trol, pages 983–988.

Freidovich, L. B. and Khalil, H. K. (2008). Performance recovery of feedback-linearization- based designs. IEEE Transactions on Automatic Control, 53(10):2324–2334.

Gaino, R., Covacic, M., Teixeira, M., Cardim, R., Assunção, E. A., de Carvalho, A., and Sanches, M. (2017). Electrical stimulation tracking control for paraplegic patients using T-S fuzzy models. Fuzzy Sets and Systems, 314:1 – 23.

Gauthier, J. P. and Kupka, I. (2001). Deterministic Observation Theory and Applications. Cambridge University Press, Cambridge,UK, 1푠푡 edition. References 86

George, J. K. and Yuan, B. (1995). Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1푠푡 edition.

Giat, Y., Mizrahi, J., and Levy, M. (1996). A model of fatigue and recovery in para- plegic’s quadriceps muscle subjected to intermittent FES. Journal of Biomechanical Engineering, 118(3):357–366.

Guardabassi, G. O. and Savaresi, S. M. (2001). Approximate linearization via feedback: an overview. Automatica, 37(1):1 – 15.

Guermouche, M., Ali, S., and Langlois, N. (2015). Super-twisting algorithm for DC motor position control via disturbance observer. IFAC-PapersOnLine, 48:43–48.

Guillard, H. and Bourlès, H. (2000). Robust feedback linearization. In Proceedings of the 14th International Symposium on Mathematical Theory of Networks and Systems, pages 1–6, Perpignan, France.

Hand, D. J. (2013). Data Mining Based in part on the article “Data mining” by David Hand, which appeared in the Encyclopedia of Environmetrics. American Cancer Society.

Isidori, A. (1995). Nonlinear Control Systems. Springer-Verlag, Berlin, Heidelberg, 3푡ℎ edition.

Kayacan, E. and Fossen, T. I. (2019). Feedback linearization control for systems with mis- matched uncertainties via disturbance observers. Asian Journal of Control, 21(3):1064– 1076.

Khalil, H. K. (2002). Nonlinear systems. Prentice-Hall, Upper Saddle River, NJ, 3푡ℎ edition.

Khalil, H. K. (2017a). Extended High-Gain observers as disturbance estimators. SICE Journal of Control, Measurement, and System Integration, 10(3):125–134.

Khalil, H. K. (2017b). High-Gain Observers in Nonlinear Feedback Control. SIAM - Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1푠푡 edition.

Kirsch, N., Alibeji, N., and Sharma, N. (2017). Nonlinear model predictive control of functional electrical stimulation. Control Engineering Practice, 58:319 – 331.

Krener, A. J. and Respondek, W. (1985). Nonlinear observers with linearizable error dynamics. SIAM Journal on Control and Optimization, 23(2):197 – 216.

Kungwalrut, P., Thumma, M., Tipsuwanporn, V., Numsomran, A., and Boonsrimuang, P. (2011). Design MRAC PID control for fan and plate process. In SICE Annual Conference 2011, pages 2944–2948. References 87

Lavergne, F., Villaume, F., Jeanneau, M., Tarbouriech, S., and Garcia, G. (2005). Non- linear robust autoland. In AIAA Guidance, Navigation, and Control Conference and Exhibit, pages 1–16.

Leine, R. I. (2009). The historical development of classical stability concepts: Lagrange, Poisson and Lyapunov stability. Nonlinear Dynamics, 59(1):173.

Leite, D., Palhares, R. M., Campos, V. C. S., and Gomide, F. (2015). Evolving granular fuzzy model-based control of nonlinear dynamic systems. IEEE Transactions on Fuzzy Systems, 23(4):923–938.

Leite, V., Tarbouriech, S., and Garcia, G. (2013). Energy-peak evaluation of nonlinear control systems under neglected dynamics. In 9th IFAC Symposium on Nonlinear Control Systems, volume 46, pages 646 – 651.

Li, M., Meng, W., Hu, J., and Luo, Q. (2017). Adaptive sliding mode control of functional electrical stimulation (FES) for tracking knee joint movement. In 2017 10th Interna- tional Symposium on Computational Intelligence and Design (ISCID), volume 1, pages 346–349.

Lima, E., Gomide, F., and Ballini, R. (2006). Participatory evolving fuzzy modeling. In 2006 International Symposium on Evolving Fuzzy Systems, pages 36–41.

Lima, E., Hell, M., Ballini, R., and Gomide, F. (2010). Evolving Fuzzy Modeling Using Participatory Learning, pages 67–86. Wiley-IEEE Press, 1푠푡 edition.

Ljung, L. (1999). System Identification: Theory for the User. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2푛푑 edition.

Lughofer, E. (2011). Evolving Fuzzy Systems: Methodologies, Advanced Concepts and Applications. Springer-Verlag, Berlin, Heidelberg, 1푠푡 edition.

Lyapunov, A. M. (1992). The general problem of the stability of motion. International Journal of Control, 55(3):531–534.

Mansour, J. and Audu, M. (1986). The passive elastic moment at the knee and its influence on human gait. Journal of Biomechanics, 19(5):369 – 373.

Mayr, O. (1970). The origins of feedback control. Scientific American, 223(4):110–119.

Morán, M. E. F. and Viera, N. A. P. (2017). Comparative study for DC motor position controllers. In 2017 IEEE Second Ecuador Technical Chapters Meeting (ETCM), pages 1–6.

Oliveira, L., Bento, A., Leite, V., and Gomide, F. (2019). Robust evolving granular feedback linearization. In Kearfott, R. B., Batyrshin, I., Reformat, M., Ceberio, M., and Kreinovich, V., editors, Fuzzy Techniques: Theory and Applications, pages 442–452, Cham. Springer International Publishing.

Oliveira, L., Franco, A., and Leite, V. (2015). Estratégia para síntese do ganho da malha de controle robusto em sistemas com realimentação linearizante via algoritmo diferencial evolutivo. In Simpósio Brasileiro de Automação Inteligente, pages 1824 – 1829, Natal, RN.

Oliveira, L., Leite, V., Silva, J., and Gomide, F. (2017). Granular evolving fuzzy robust feedback linearization. In 2017 EAIS Evolving and Adaptive Intelligent Systems, pages 1–8.

Oliveira, L. S. (2015). Compensação por inversão dinâmica robusta aplicada a sistemas linearizados por realimentação. Master's thesis, Federal Center for Technological Education of Minas Gerais and Federal University of São João del-Rei.

Park, J., Seo, S., and Park, G. (2003). Robust adaptive fuzzy controller for nonlinear system using estimation of bounds for approximation errors. Fuzzy Sets and Systems, 133(1):19 – 36.

Passino, K. and Yurkovich, S. (1997). Fuzzy Control. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1푠푡 edition.

Passino, K. M. (2002). Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Systems Magazine, 22(3):52–67.

Passino, K. M. (2005). Biomimicry for Optimization, Control, and Automation. Springer-Verlag, London, UK, 1st edition.

Peckham, P. H. and Knutson, J. S. (2005). Functional electrical stimulation for neuro- muscular applications. Annual Review of Biomedical Engineering, 7(1):327–360.

Pedrycz, W. (2013). Granular Computing: Analysis and Design of Intelligent Systems. CRC Press, Boca Raton, FL, 1st edition.

Pedrycz, W. and Chen, S. M. (2011). Granular Computing and Intelligent Systems: Design with Information Granules of Higher Order and Higher Type. Springer-Verlag, Berlin, Heidelberg, 1st edition.

Pedrycz, W. and Gomide, F. (2007). Fuzzy Systems Engineering: Toward Human-Centric Computing. Wiley-IEEE Press, Hoboken, New Jersey, 1st edition.

Petersen, I. R., Ugrinovskii, V. A., and Savkin, A. V. (2000). Robust Control Design Using 퐻∞ Methods. Springer London, London.

Previdi, F. and Carpanzano, E. (2003). Design of a gain scheduling controller for knee- joint angle control by using functional electrical stimulation. IEEE Transactions on Control Systems Technology, 11(3):310–324.

Ramos, J. V. and Dourado, A. (2003). Evolving Takagi-Sugeno fuzzy models. Technical report, Centre Inf. Syst. Adaptive Comput. Group, Univ. of Coimbra, Coimbra, Portugal.

Rantzer, A. (2001). A dual to Lyapunov's stability theorem. Systems & Control Letters, 42(3):161–168.

Rizzoni, G. (2003). Principles and Applications of Electrical Engineering. McGraw-Hill, New York, USA, 4th edition.

Sander, J., Ester, M., Kriegel, H., and Xu, X. (1998). Density-based clustering in spatial databases: The algorithm GDBSCAN and its applications. Data Mining and Knowledge Discovery, 2(2):169–194.

Sastry, S. (1999). Nonlinear Systems: Analysis, Stability and Control. Springer-Verlag, New York, NY, 1st edition.

Silva, A. M., Caminhas, W. M., Lemos, A. P., and Gomide, F. (2013). Evolving neo- fuzzy neural network with adaptive feature selection. In 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence, pages 341–349.

Silva, J., Oliveira, L., Gomide, F., and Leite, V. (2018). Avaliação experimental da linearização por realimentação granular evolutiva. In Proceedings of the 5th Brazilian Conference on Fuzzy Systems, pages 359–370, Fortaleza, CE, Brazil.

Simas, H., Bruciapaglia, A. H., and Coelho, A. A. R. (1998). Advanced control using three low-cost models: Experiments and design issues. In Proceedings of the 12th Automatic Control Conference (CBA), volume 2, pages 431–436.

Simon, D. (2006). Optimal State Estimation: Kalman, 퐻∞, and Nonlinear Approaches. John Wiley & Sons, Inc., Hoboken, NJ, USA, 1st edition.

Slotine, J. E. and Li, W. (1991). Applied Nonlinear Control. Pearson, Upper Saddle River, NJ, 1st edition.

Soares, S., Nepomuceno, E., and Leite, V. (2011). Controle de um robô móvel omnidirecional baseado em linearização por realimentação robusta. In X Simpósio Brasileiro de Automação Inteligente, pages 815–820.

Tarbouriech, S., Garcia, G., Gomes da Silva Jr., J. M., and Queinnec, I. (2011). Stability and Stabilization of Linear Systems with Saturating Actuators. Springer-Verlag, London, 1st edition.

Villaça, M. V. M. and Silveira, J. L. (2013). Uma breve história do controle automático. Revista Ilha Digital, 4:3–12.

Wang, L. (1994). Adaptive Fuzzy Systems and Control: Design and Stability Analysis. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1st edition.

Wang, L. (1996). Stable adaptive fuzzy controllers with application to inverted pendulum tracking. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 26(5):677–691.

Yano, K. (2015). The Theory of Lie Derivatives and Its Applications. Scholar's Choice, New York, US, 1st edition.

Appendix A: S-Procedure

Theorem A.1. Let 풳 be a real linear vector space and let 퐺0(푥) and 퐺1(푥) be quadratic functionals on 풳 , that is, functionals of the form

퐺0(푥) = 𝑄0(푥,푥) + 푔0(푥) + 훾0

퐺1(푥) = 𝑄1(푥,푥) + 푔1(푥) + 훾1 (A.1)

where 𝑄0(·,·) and 𝑄1(·,·) are bilinear forms on 풳 × 풳 , 푔0 and 푔1 are linear functionals on 풳 , and 훾0 and 훾1 are constants. Assume that there exists a vector 푥0 ∈ 풳 such that 퐺1(푥0) > 0. Then, the following conditions are equivalent:

1. 퐺0(푥) ≥ 0 for all 푥 such that 퐺1(푥) ≥ 0;

2. There exists a constant 휏 ≥ 0 such that

퐺0(푥) − 휏퐺1(푥) ≥ 0 (A.2)

for all 푥 ∈ 풳.

The proof can be found in (Petersen et al., 2000, Chap. 4) and (Boyd et al., 1994, Chap. 2).
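As a minimal numerical illustration of Theorem A.1, consider the hypothetical scalar quadratics 퐺1(푥) = 1 − 푥² (so the constraint 퐺1(푥) ≥ 0 means |푥| ≤ 1) and 퐺0(푥) = 4 − 푥². Condition 1 clearly holds on the constrained set, and condition 2 is certified by the multiplier 휏 = 1, since 퐺0(푥) − 휏퐺1(푥) = 3 ≥ 0 for every 푥. The sketch below (an illustrative check, not part of the thesis) verifies both conditions on a grid of points:

```python
# S-procedure illustration with scalar quadratics (hypothetical example):
#   G1(x) = 1 - x^2  (constraint set: |x| <= 1)
#   G0(x) = 4 - x^2
# The multiplier tau = 1 certifies condition 2 of Theorem A.1, since
# G0(x) - tau*G1(x) = 3 for all x.

def G0(x):
    return 4.0 - x**2

def G1(x):
    return 1.0 - x**2

tau = 1.0  # S-procedure multiplier

# Sample grid standing in for the whole real line.
grid = [i / 10.0 for i in range(-50, 51)]

# Condition 2: G0(x) - tau*G1(x) >= 0 everywhere.
assert all(G0(x) - tau * G1(x) >= 0.0 for x in grid)

# Condition 1: G0(x) >= 0 on the set where G1(x) >= 0.
assert all(G0(x) >= 0.0 for x in grid if G1(x) >= 0.0)

print("certificate tau =", tau, "verified on the grid")
```

The converse direction (condition 2 implies condition 1) is immediate: if 퐺0(푥) − 휏퐺1(푥) ≥ 0 with 휏 ≥ 0, then 퐺1(푥) ≥ 0 forces 퐺0(푥) ≥ 휏퐺1(푥) ≥ 0.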