Controlled Autonomous Vehicle Drift Maneuvering

Thesis

Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the Graduate School of The Ohio State University

By

Mohamed Lamine Kaba, B.S.

Graduate Program in Electrical and Computer Engineering

The Ohio State University

2019

Thesis Committee: Dr. Wei Zhang, Advisor

Dr. Lisa Fiorentini

© Copyright by

Mohamed Lamine Kaba

2019

Abstract

Drifting is an extreme maneuver which brings a vehicle into a relatively unstable and often hard-to-control configuration. Yet it is a maneuver routinely completed by professional race car drivers and, to a limited extent, by regular drivers aiming to regain control of their vehicle in bad road/weather conditions (e.g. a vehicle skidding in snow). The study of autonomous vehicles during drifting can give insights into how to improve standard safety systems for both modern automobiles and future autonomous vehicles, which must maneuver themselves through such "obstacles".

As complex as vehicle drifting is to perform, it is equally complex (from a mathematical standpoint) to design drift trajectories/sequences for autonomous vehicles to track. The dynamic behavior of a vehicle during drifting is highly nonlinear, and nonlinear systems in general are typically hard to control. In this work, a set of scheduled linear controllers is used to give a vehicle the ability to autonomously drift about a globally planned path built from clothoids. This allows the use of control methods such as LQR (Linear Quadratic Regulation) and PID (Proportional Integral Derivative) control in designing drift controllers capable of simultaneous drifting and path following. The autonomous vehicle drifting algorithms are simulated using MATLAB®/Simulink®. They are also tested on a modified 1:28 scale car, where a motion capture system is used for location tracking. Lastly, system identification of the scaled vehicle is covered in this work as well.

This thesis is dedicated to my loving family.

Acknowledgments

I would like to express my gratitude to Prof. Zhang for having been my advisor over the course of this research, and for his teachings. His State-Space controls course solidified my interest in controls, and his approach to teaching the course (and in general) has fundamentally changed the way in which I learn new information.

I would also like to thank Prof. Fiorentini for taking time out of her schedule during a very busy week to be part of my thesis committee. Her feedback gave me a lot to think about in regards to future applications of this work.

I would also like to thank Hao Yan for his collaboration on this project while he was completing his master’s project. I enjoyed having a like-minded person to swap ideas with.

Finally, I thank my mom, Oumou, for her endless sacrifices over the years, which have allowed me to get to where I am today, and the rest of my family for their endless support of my endeavors.

Vita

December 2016 ...... B.S., Electrical and Computer Engineering, Ohio State University, Columbus, Ohio

February 2017 - January 2018 ...... Systems Engineer, Tata Consultancy Services, Auburn Hills, Michigan

June 2018 - August 2018 ...... Research Assistant, Ohio State University, Columbus, Ohio

Fields of Study

Major Field: Electrical and Computer Engineering

Table of Contents

Page

Abstract ...... ii
Dedication ...... iii
Acknowledgments ...... iv
Vita ...... v
List of Figures ...... viii
List of Tables ...... x
List of Abbreviations ...... x

Chapters

1 Introduction ...... 1
1.1 Background on Autonomous Vehicles ...... 1
1.1.1 The Need for Autonomous Vehicles ...... 1
1.1.2 Levels of Autonomy ...... 2
1.2 Motivation ...... 4
1.3 Consulted Literature and Objective ...... 6
1.4 Thesis Work ...... 9
1.5 Thesis Overview ...... 10

2 Background Theory ...... 11
2.1 Systems and Modeling Theory ...... 11
2.1.1 Systems Theory ...... 11
2.1.2 Systems Modeling ...... 13
2.2 Modeling of Dynamical Systems ...... 14
2.2.1 Discretization ...... 16
2.3 Linear Systems ...... 17

3 Vehicle Model ...... 24
3.1 Vehicle Modeling Overview and Assumptions ...... 27
3.2 Nonlinear Single Track Model Dynamics ...... 29
3.3 Tire Modeling ...... 32
3.3.1 Dugoff Tire Model ...... 35
3.3.2 Modified Fiala Tire Model ...... 36
3.4 Linear Vehicle Model ...... 40

4 Control Systems Planning and Design ...... 42
4.1 Path Trajectory Creation ...... 42
4.2 Vehicle Dynamics Equilibrium ...... 46
4.2.1 Equilibrium Analysis Overview ...... 46
4.2.2 Phase Portrait ...... 51
4.3 Controls Overview ...... 56
4.3.1 Tracking and Regulation ...... 56
4.4 Deviation Definitions and Control ...... 61
4.4.1 Deviation Definitions ...... 61
4.4.2 Vehicle Controller ...... 66

5 System Architecture and Identification ...... 70
5.1 Vehicle Specifications ...... 71
5.2 Vehicle Architecture ...... 73
5.3 System Identification ...... 78
5.3.1 Physical Parameters Identification ...... 81
5.3.2 Tire and Road Parameters Identification ...... 83
5.3.3 and Force Identification ...... 88

6 Experimentation ...... 99
6.1 Simulation Overview ...... 99
6.2 Steady State Drifting ...... 101
6.3 Transient Drifting ...... 103

7 Conclusion and Future Work ...... 107

Bibliography ...... 108

List of Figures

Figure  Page

1.1 Example of vehicle during drifting (Courtesy of: Borna Bevanda on Unsplash) ...... 5
1.2 Mixed open and closed loop transient drift [41] ...... 7
1.3 Minimum time cornering through friction-circle acceleration planning [20] ...... 8
1.4 Steady state drifting (courtesy of [38]) ...... 9

3.1 Vehicle inertial frame. Dashed line is parallel to x-axis of coordinate frame ...... 25
3.2 Vehicle's body frame ...... 26
3.3 Vehicle's tire frame contact patch ...... 26
3.4 Friction circle of longitudinal/lateral force coupling ...... 33
3.5 Rear tire Fiala lateral force as the critical slip, αcr, is varied. αcr,max corresponds to maximum critical slip, e.g. when ξ = 1 ...... 38

4.1 Generic clothoid map adapted from [24]. (Left) Path in x-y plane starting from origin. (Right) Curvature, κ, of path as arc length, s, increases ...... 44
4.2 r and β over a sweep of steering angles, at Vx = 1.05 m/s ...... 48
4.3 F_y^f and F_y^r over a sweep of steering angles, at Vx = 1.05 m/s ...... 48
4.4 Longitudinal force equilibria over a sweep of steering angles, at Vx = 1.05 m/s ...... 49
4.5 r−β phase dynamics when Vx = 1.1 and δ = 5°, with marked equilibria. Red: Cornering, Blue: Counterclockwise drift, Green: Clockwise drift ...... 53
4.6 r and β for Vx = 0.3, 0.6, 0.9 and 1.1 m/s ...... 55
4.7 Combined path error (e_la) and its components: orientation error (∆ψ) and position error (e) ...... 65
4.8 Arbitrary clothoid map from 3 types of curvatures (straight, spiral and arc) ...... 67
4.9 Vehicle control architecture ...... 68

5.1 Project's scaled vehicle (side view) ...... 71
5.2 Project architecture ...... 73
5.3 Motive screenshot of OptiTrack system during object selection ...... 75
5.4 Three of the 6 cameras of our OptiTrack system ...... 76
5.5 Vehicle balancing ...... 82
5.6 Raw OptiTrack data of vehicle location during a sample ramp steer maneuver ...... 85
5.7 Yaw rate (r) estimated from OptiTrack yaw (ψ) data. Blue: finite difference estimation. Brown: Kalman filter estimation (using the former) ...... 85
5.8 Identified absolute front/rear lateral forces overlaid with Modified Fiala Model (yellow), and cornering stiffness line (orange) ...... 87
5.9 Identified force coefficients over multiple w_r runs. Vertical lines correspond to the trimmed average of a given coefficient ...... 92
5.10 Vehicle trajectories at fixed steering signals versus fitted radius of trajectory ...... 94
5.11 Radius fits for multiple input Arduino signals (hence steering angles) superimposed ...... 94
5.12 Fit of Arduino signal w_s to steering angle (degrees) using piece-wise linear and quadratic fit, respectively ...... 95
5.13 Less accurate linear fit of Arduino signal w_s to steering angle (degrees) ...... 96

6.1 Vehicle dynamics simulation ...... 100
6.2 Vehicle dynamics inner control system overview ...... 100
6.3 Vehicle model started near drifting equilibrium at open loop ...... 101
6.4 Induced vehicle drift ...... 102
6.5 Project's clothoid testing map ...... 103
6.6 Resulting vehicle drift in simulation ...... 104
6.7 Drifting test on scaled vehicle ...... 105

List of Tables

Table Page

3.1 Vehicle variables and parameters definitions...... 32

5.1 Tire and vehicle parameter values ...... 78
5.2 Identified steering coefficient values ...... 97

Chapter 1 Introduction

1.1 Background on Autonomous Vehicles

1.1.1 The Need for Autonomous Vehicles

In autonomous vehicle research literature, a commonly given reason for the need for autonomous vehicles is the inherent issue of human error in driving. This concern is warranted. In the U.S. alone, according to a 2015 report by the U.S. Department of Transportation, driver errors still account for 94% of vehicle crashes on U.S. roads [35]. It is important to note that this figure includes most human-caused factors (e.g. non-vehicle or non-environment caused malfunctions): from speeding and driving under the influence to distracted driving. The U.S. Department of Transportation also states that 35,092 people were killed (and 2.4 million injured) in U.S. road crashes in 2015 alone, and the yearly crash fatality count was typically higher than 40,000 deaths per year from 1975 to around 2008 [1]. Besides the loss of human life, car crashes cost the U.S. economy an estimated $277 billion in 2010 alone (the latest available figure), which takes into account factors such as the cost of lost productivity, property damage, traffic congestion costs, etc. [9].

Taking into account other indirect costs like "harm from the loss of life and the pain and decreased quality of life due to injuries", the total figure is almost $1 trillion [9]. As such, from a safety standpoint, vehicle autonomy which limits drivers' control of vehicles can greatly lower the loss of life and the negative economic impact of accidents.

Vast improvements in vehicle safety over the past few years have already started mitigating the number of vehicle crash fatalities in the U.S. For instance, the number of road-crash related deaths dipped below 40,000 in 2008 for the first time in decades, and steadily decreased between 2008 and 2014 (though there was a slight increase in 2015 [1] and 2016). The safety improvements correlated with these lower car-crash related deaths are attributable to a range of things: from the use of much stronger steel in vehicle frames and better crash testing capabilities to the addition of semi-autonomous car features like active avoidance systems and anti-lock braking systems [8]. According to the U.S. National Highway Traffic Safety Administration, from 2000 to 2010, advanced safety features such as blind spot detection, electronic stability control and lane departure warning were added to many newer vehicles. From 2010 onward, ADAS (Advanced Driver Assistance Systems) features such as automatic emergency braking and lane centering assist started being added to vehicles. The year 2016 saw the introduction of partially automated safety features such as lane keeping assist, adaptive cruise control and self-parking [26]. Hence, at the current trend, fully automated vehicles seem like an inevitable next step toward much safer cars.

1.1.2 Levels of Autonomy

Many of the newer vehicle safety features added from 2010 onward are so-called level 1 and 2 automation features. The Society of Automotive Engineers (SAE) has defined 6 levels of vehicle autonomy [26, 28]:

Level 0: Vehicle has no autonomy.

At this level the driver has full control of the vehicle. This encompasses the vast majority of pre-2010 vehicles.

Level 1 and 2: Vehicle has partial autonomy.

These two levels encompass some of the aforementioned safety features from 2010 onward, such as automatic braking and stability control. In level 1, standard safety features (e.g. ESC, or Electronic Stability Control) only aid the driver (who has full control) in safely maneuvering the vehicle. Level 2 vehicles can have multiple autonomous and/or safety features working simultaneously to keep the driver safe, such as Automatic Emergency Braking (AEB) and Lane Keeping Assist (LKA). Both of these can automatically take appropriate actions when danger is detected, without the driver's express input [26, 28].

Tesla, Inc.'s current Autopilot feature can be said to be in the level 2 category. In very specific situations (e.g. highway driving) it is capable of speed adjustments, and even of following the road while also taking actions to prevent the aforementioned dangers. Tesla's system technically falls between level 2 and level 3 (as discussed next). The Mercedes-Benz Drive Pilot and Volvo Pilot Assist also fall within this category [14].

Level 3: Vehicle automation in certain situations.

At this level, the vehicle has almost ”full automation” under certain conditions. For in- stance: during highway driving which requires no substantial changes in vehicle speeds (also no intersections or pedestrians). In such cases the vehicle has full control of driving func- tions, including: steering, braking, accelerating etc. The only caveat is that in unfamiliar situations, the driver is warned to retake control. There may be partial autonomy in some of the excluded condition (e.g. pedestrian situations).

Despite a lot of hyperbole surrounding the state of autonomous vehicles, level 3 is the highest level of autonomy achieved to date. Waymo’s autonomous vehicle falls in this category. Audi also has a production vehicle (called the A8) in this category.

Level 4: Nearly fully-autonomous

At this level, the vehicle can handle itself in most situations, e.g. even complex traffic situations involving pedestrians, vehicles, bicycles, etc. Options still exist for a passenger to assume control; however, it is not expressly required (as it is in level 3).

Level 5: Full automation

At this point, all of the actuators (e.g. steering, throttle and brake) can work together so that the vehicle can handle itself in all situations [26, 28].

Waymo is arguably ahead of the herd when it comes to autonomous vehicle design and testing. As of today, it has clocked 8 million miles driven on public roads, and 5 billion miles in simulation. This is according to John Krafcik (the current (2019) CEO of Waymo) on his Twitter profile.

1.2 Motivation

Most research (and implementations) in autonomous driving focuses on autonomous vehicle control during "regular" driving conditions (e.g. urban driving, highway driving), which is understandable. Looking into the future, one of the hurdles of bringing autonomous vehicles to level 5 (or even level 4), and giving them the ability to handle most situations, will involve the control of autonomous vehicles at their "limits of handling". For instance, during adverse weather conditions (e.g. snow, rain) the physics involved in the control of the vehicle are much different than under regular driving conditions. The main actuators of a vehicle are its throttle/accelerator (including braking) and steering wheel; these control the acceleration (hence speed) and steering of the vehicle. They become harder to use when the vehicle's tires lose grip with the road, e.g. when the vehicle skids on an icy surface. In current automotive systems, active safety features such as Anti-lock Braking Systems (ABS) and Electronic Stability Controllers (ESC) are often used to keep the vehicle from entering a state of uncontrollability/instability [41], but for a truly fully-autonomous vehicle (e.g. level 4 and 5), more will be required. The vehicle will need to control/handle itself in case the above inevitably happens.

This thesis and most of the supporting literature aim to understand vehicle behavior and control during extreme maneuvers such as drifting. Drifting is a complex maneuver characterized by tire "saturation" (e.g. loss of tire grip) and large deviations between the vehicle's longitudinal body axis (i.e. the axis parallel to the drive shaft) and its actual direction of travel.

It is a complex maneuver to both physically complete and mathematically model. An example of drifting is shown in figure 1.1 (courtesy of Borna Bevanda on Unsplash), where the vehicle appears to be traveling sideways. Vx, in the diagram, represents the velocity of the vehicle along its own longitudinal axis, while Vy represents the so-called lateral velocity (the additional velocity component not along the longitudinal axis). β, which is defined as β = tan⁻¹(Vy/Vx), is the so-called slip angle. It gives a quantitative measure of the aforementioned deviation in motion. Under normal driving conditions (even at relatively high speeds), Vy is typically near 0 (and therefore so is β), as the vehicle travels in the same direction the car is oriented (e.g. along its longitudinal axis). When the vehicle is brought to its limits of handling, however (e.g. skidding on an icy surface), where the tires lose grip with the road, the vehicle typically moves at a different angle than its body's orientation. Hence Vy becomes non-zero, resulting in a slip angle. The work in this thesis will focus on controlling the vehicle in the presence of large Vy.

Figure 1.1: Example of vehicle during drifting (Courtesy of: Borna Bevanda on Unsplash).

Generally speaking, there are two types of drifts: static (or steady state drifting), and transient drifting [41].

Steady state drifting, as its name implies, is characterized by the vehicle drifting about fixed states (e.g. fixed velocity, slip angle, etc.) and fixed actuation (steering/throttle). The fixed states correspond to the vehicle drifting around a fixed-radius circular trajectory, where the rear tires are saturated and the slip angle is non-zero. As shown in figure 1.1, sustained drifting also often involves "counter steering" (e.g. steering in the opposite direction of the path). This is done to help stabilize the yaw (i.e. orientation) of the vehicle in order to sustain the drift and stop the vehicle from spinning out of control. Steady-state drifting is a so-called unstable equilibrium: any small disturbance can lead to the vehicle exiting the drift. Hence a controller is needed to continuously sustain the drift.

The second type of drifting is so-called transient drifting. In transient drifting, the vehicle’s states and inputs need not necessarily be fixed. It involves planned drifting (e.g. high side slip, and tire saturation) whereby the vehicle can have changing inputs and states in order to accomplish a complex behavior, e.g. drifting around a corner, or drift parking [41]. The work in this thesis can be seen as a hybrid between steady-state and transient drifting, whereby steady-state drifting is used to achieve otherwise transient drifting maneuvers.

1.3 Consulted Literature and Objective

As previously discussed, much of autonomous vehicle research involves "normal condition" driving (e.g. urban driving, highway driving), which encompasses the vast majority of the use-cases for vehicles. To the author's knowledge, research into autonomous vehicle control during extreme maneuvers is mostly led by a few labs, such as the Stanford University Dynamic Design Lab. Research in this field ranges from the control of small-scale vehicles to full-size ones, during conditions ranging from high speed racing (minimum lap times, where drifting often arises) to steady state and transient drifting. The work in this thesis was mainly inspired by three works:

Autonomous Drift Cornering with Mixed Open/Closed-loop Control [41, 11]:

This paper stems from the UC Berkeley BARC project. It is one of the first papers read by the author on drifting. The authors of this paper design a control scheme capable of autonomously drifting a vehicle about an oval track, while driving regularly on the straight parts of the track (e.g. lane keeping). They accomplish this using a multi-level planning and control scheme. Experiments are first run on a given track by an expert drifter to collect data. This data is then used to build stochastic maps of parametric inputs. In a high fidelity simulator these maps are sampled multiple times until a successful drift is obtained. Then data from the successful run is used as the desired state (e.g. position, slip angle, etc.) for a vehicle to track. What complicates the planning of drifting sequences is the nonlinearity of the mathematical model of a vehicle during drifting (as presented in chapter 3), and the fact that even this nonlinear model does not adequately capture the transient behavior of a drifting vehicle. Linear models are much easier to control (as discussed more in chapter 2). Hence the authors do a multitude of other things, such as linearizing the vehicle model about each of the successful reference trajectory points.

These linear models are used to find closed-loop LQR gains, which penalize any deviation of the vehicle from the successful trajectory. Then online, they use these gains and the original mathematical model of the vehicle at every time-step to determine whether to use closed-loop or open-loop control. Open-loop control works best in certain cases because they capture unmodeled transient behavior. The resulting trajectory is shown in the image below:

Figure 1.2: Mixed open and closed loop transient drift [41].

Autonomous Vehicle Control At The Limits Of Handling [20]:

This dissertation is from Stanford University's Dynamic Design Lab. It was less concerned with drifting specifically, and more concerned with minimum-time cornering about a given track by maximizing the combination of lateral/longitudinal accelerations at each point of a reference track. It uses insights gained over the years from racing professionals, such as the use of racing lines and their division into techniques/segments such as trail-braking or throttle-on-exit, to always maximize the vehicle's combined acceleration.

Other insights are derived from this dissertation, such as the creation of racing lines via clothoid maps, which are known to have smooth and linearly varying curvatures compared to other map types (e.g. polynomial maps). These racing lines and planned states (based on clothoid planning) are tracked via a set of "decoupled" controllers designed based on vehicle model state errors (e.g. section 4.4.1).

Figure 1.3: Minimum time cornering through friction-circle acceleration planning [20].

Dynamics and Control of Drifting in Automobiles [31]:

This is another dissertation from Stanford University’s Dynamic Design lab. It gives an extensive mathematical study/breakdown of the dynamic behavior of a vehicle during cor- nering (e.g. low speed circular drive) and drifting, as well as the stability analysis of each.

Much of the insight from this dissertation will be discussed in this thesis (see section 4.2).

Figure 1.4: Steady state drifting (courtesy of [38]).

Our work is inspired by an attempt at transient drifting (e.g. [41]) using path planning and steady-state equilibria, as opposed to using open loop maneuvers to bring the vehicle into the proper conditions to initiate drifting (discussed more in chapter 4). Hence, the work in this thesis combines clothoid planning (e.g. [20]) for smooth path planning, and equilibrium analysis for drift planning (e.g. [31]), to accomplish simultaneous path tracking and drifting about an arbitrary track. It is important to note that [41] built their entire project on low cost onboard sensors, which makes their achieved results all the more impressive. Our project does not have this self-imposed limitation: a motion tracking system is used to accurately track and relay the position and orientation of the vehicle.

1.4 Thesis Work

The main work done in this thesis is the development of control algorithms for non-steady-state drifting about globally-planned clothoid-based maps, and the use of equilibrium analysis to find linearized vehicle models capable of sustaining drift. These are then combined to give our vehicle the ability to simultaneously drift and track our path. The final control algorithms are tested both in a simulation environment (Simulink®) and on a 1:28 scaled vehicle (using a motion capture system for location tracking). Some system identification work is done as well.

1.5 Thesis Overview

Chapter 2 gives preliminary background on systems theory and modeling, as well as linear systems. Chapter 3 introduces the so-called nonlinear single track vehicle model, which will be used extensively in the remaining work. Chapter 4 discusses clothoid-based planning, equilibrium analysis, and control design for our drifting problem. Chapter 5 discusses the hardware architecture (e.g. the built vehicle, tracking system, etc.) and system identification. System identification allows us to fit our vehicle's mathematical model to our scaled vehicle. Chapter 6 puts our algorithms to the test. Lastly, chapter 7 concludes this work.

Chapter 2 Background Theory

The language of systems is paramount to the design of autonomous vehicle control algorithms, especially during drifting. Hence it is briefly introduced in this chapter. The provided information will be somewhat "abstract" (with respect to the original problem), but will be made clearer once applied/referenced in subsequent chapters. It is assumed that the reader is already somewhat familiar with these topics, but for the sake of having a self-contained document they are reiterated. Systems theory and modeling are first discussed, then the modeling of dynamical systems and linear systems are briefly covered.

2.1 Systems and Modeling Theory

2.1.1 Systems Theory

"A system is an organized entity made up of interrelated and interdependent parts" [19, 21, 34]; it is defined by its boundaries and is not merely a superposition of its individual components [34]. General Systems Theory, a field proposed by biologist Ludwig von Bertalanffy in the 1920s, inspired the definition of a system given above. In order to fully understand the behavior of a complex system, the system must be perceived as a whole, and the effects of each element on the whole system (and conversely the system's effects on individual elements) must be closely studied [34]. In an autonomous vehicle, for instance, if its desired elements of study are its position, orientation and velocity, then these three quantities cannot be treated as separate entities. They are interdependent on each other. That is, any change in one should have some effect on the others. Hence, in any representation of the autonomous vehicle system, this interdependence and interrelatedness must be accounted for. General Systems Theory sought to find fundamental mathematical structures shared by systems, and tools to explore these structures regardless of the original functionality of the system (e.g. be it an autonomous vehicle, the human body, or the interaction between people in a society).

Nowadays, there are many (not necessarily mutually exclusive) categories under which systems are classifiable. For instance, a system can be stable (or non-stable), linear (or nonlinear), open (or closed), etc. Regardless of the system (and its functionality), given an adequate mathematical representation of it, there are precise mathematical definitions to determine whether the system belongs to a certain category. One such important category is whether a system is dynamic (or static). Broadly speaking, dynamic systems are systems whose behavior evolves over time, and whose next "states" can depend on both current and non-current (e.g. past, or future) states or inputs. Static systems, on the other hand, are only dependent upon current "states" or inputs. For instance, the future position (i.e. state) of a vehicle is highly dependent upon its current location, velocity and other states, as well as its inputs (e.g. throttle and steering); hence in most realistic applications it is a dynamic system. An example of a static system is a passive resistor in a circuit. The voltage which it drops is only dependent upon its own resistance and the current passing through it, not past voltage drops or past input currents.

Lastly, the second part of the definition of a system given above (i.e. regarding boundaries) is important due to systems typically being contained within larger systems. The boundaries of a system establish it by distinguishing its internal elements from the "environment", and establish what can be considered as inputs and outputs of the system [27]. For instance, a vehicle itself could be a system, or its interaction with other cars on the road could be a system. Hence it is important to establish what is considered as part of the system.

2.1.2 Systems Modeling

As aforementioned, obtaining a mathematical description of a system is imperative to the study and understanding of the system. Modeling is the process of obtaining a mathematical representation of a system. The model of a system aims to emulate the behavior of the system by describing its inner workings, often as a response to inputs. Modeling is an especially useful tool for the simulation and analysis of the original system [4]. It can allow for repeated testing of the system under various conditions without risking the destruction of the original system. Most systems are, however, far too complex for accurate mathematical descriptions of them to be obtained over a wide range of conditions. As such, a model can mainly only emulate a subset of the behavior of the system, under prescribed conditions [4]. Therefore many assumptions are made in modeling a system, and it is important to always keep these assumptions in mind while analysing the model to avoid making erroneous conclusions about the system as a whole.

For control design, a model is typically created with certain caveats. Besides capturing the essence of a system (i.e. capturing the aforementioned interrelatedness and interdependence), a good model for control design trades off between fidelity and simplicity. Fidelity refers to how accurately the model replicates the behavior of the system over a wide range of conditions. Simplicity, on the other hand, refers to how "simple" the mathematical model of the original system is. For the most part, these are opposite and competing requirements.

Hence, the model must do a good enough job of replicating the behavior of the system over a wide range of conditions, yet at the same time be simple enough that the analysis and control of the system remain feasible. For instance, in modeling a vehicle, if the only information of interest is its position and velocity due to applied throttle/steering inputs, a very accurate model could include the fluid mechanics of the gasoline in the vehicle's tank and its effects on the velocity, but such inclusion over-complicates the model while providing little to no additional useful information, at least compared to simply abstracting it away (e.g. assuming the vehicle always has a full tank of gas). As such, in modeling (for control design) it is important to identify the aspects of the system that are most relevant to the problem at hand, and to be ready to either ignore or abstract aspects which provide no additional useful information for the control problem being solved. Furthermore, any such abstraction/assumption must be kept in mind so as to not erroneously come to a conclusion about the system where such an abstraction may not hold.

2.2 Modeling of Dynamical Systems

As has been established, a vehicle (with respect to its high-level states, e.g. position, velocity, etc.) is a dynamic system. This means that it evolves over time depending on established rules and is highly dependent on initial conditions. Differential equations have historically been a popular tool for the modeling of dynamical systems. By definition, a differential equation is an equation which relates functions with their derivatives (i.e. the functions' rates of change). There are two types of differential equations: ODEs (Ordinary Differential Equations) and PDEs (Partial Differential Equations). ODEs are differential equations that contain derivatives of one (or more) dependent variables with respect to a single independent variable (e.g. time) [6]. PDEs involve the derivatives of our dependent variables with respect to more than one independent variable. ODEs are typically more than enough for the modeling of physical systems, and many of the tools from control theory are built upon ODEs.

$$\frac{dx(t)}{dt} = f(x(t), u(t))$$

The equation above is an example of a first order ordinary differential equation. That is, it is a function of the dependent variable x(t), and it relates x(t) to its first derivative (hence "first order") with respect to the independent variable t, where u(t) is a forced input to the model. In differential equations the ultimate goal is typically to find x(t) such that the relationship above holds; this is what it means to solve a differential equation. Special attention is brought to this form, as many systems (especially autonomous vehicle systems) have mathematical models which either fit this form or can be brought to this form. Once in this form, there is a plethora of mathematical tools for the analysis and control of the model.

The differential equation above, in this form, goes by another name: state-space form. The word "state" has been used thus far without a formal introduction. By definition, "a state is the minimum set of variables that fully describe the system and its response to any set of inputs" [33]. Hence, x(t) is a collection of our interdependent state variables. For an autonomous vehicle at high level (as alluded to), these are things such as the vehicle's position, orientation, speed, etc. That is, descriptions/abstractions of the internal behaviors of the system. Inputs u(t) are typically the actuators of the system, e.g. steering/throttle in vehicles. From knowing the state of the system at some initial time t0, using the map f(x(t), u(t)) and input u(t), we can predict the future states and outputs of the system for t > t0 [33]. A fuller description of a system is given as:

$$\dot{x}(t) := \frac{dx(t)}{dt} = f(x(t), u(t)) \tag{2.1a}$$
$$y(t) = h(x(t), u(t)) \tag{2.1b}$$
$$x(t) \in \chi \subseteq \mathbb{R}^n, \quad \dot{x}(t) \in \mathbb{R}^n, \quad u(t) \in \mathcal{U} \subseteq \mathbb{R}^m, \quad y(t) \in \mathbb{R}^p \tag{2.1c}$$

By appropriately choosing the functions f and h, most higher order problems can be brought down to the first order differential equation form given above [4]. Many tools exist in the literature for the analysis of first order systems. The notation in equation 2.1c merely means that x(t) is a collection of n variables represented in vector form. These variables are interdependent on each other. The derivative of x(t) is merely the derivative of each of its elements with respect to t (typically time). x(t) is typically composed of real-valued state functions, but ones typically constrained to a set χ of feasible states, while the inputs are also from a set of feasible inputs. In an autonomous vehicle, for instance, the velocity is typically bounded (e.g. depending on size); hence |xi(t)| ≤ xi,max could characterize the constraint on the velocity, if the velocity is the i-th element of x(t). In the same vein, there is typically a maximum steering angle that the vehicle can achieve; the corresponding element of u(t) in U would also characterize such a constraint. Lastly, the output y(t) refers to measurable characteristics of the system. Often the outputs y(t) are only a function of the states; in fact, in most applications the outputs are the states themselves. From a system standpoint, the output is any quantity of interest (usually a state) measurable through a sensor.
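As a simple concrete instance of this form (a planar point-mass used purely for illustration, not the vehicle model developed in chapter 3), one may take positions and velocities as the states and accelerations as the inputs:

$$x(t) = \begin{bmatrix} p_x & p_y & V_x & V_y \end{bmatrix}^\top, \quad u(t) = \begin{bmatrix} a_x & a_y \end{bmatrix}^\top, \quad \dot{x}(t) = f(x(t), u(t)) = \begin{bmatrix} V_x \\ V_y \\ a_x \\ a_y \end{bmatrix}$$

so that n = 4 and m = 2, and y(t) could simply be the measured position, y(t) = (p_x, p_y).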

2.2.1 Discretization

One thing to note is that the state-space form given in equation 2.1 is a continuous-time equation. This means that solving it requires a solution at every instant in time. Differential equations are often hard (if not impossible) to solve analytically. Hence, their solutions are often numerically (e.g. digitally) approximated. To solve them on a digital device, the differential equation must be sampled/discretized at an (often fixed) sampling time. That is, let t = kT, where T is the sampling time (hopefully very small), and k = {0, 1, ...} is the counting index. As such, x(kT) := x(t) is used to represent our state over time. This (i.e. x(kT)) is often alternatively represented as xk, with the understanding that xk+1, for instance, corresponds to x((k + 1)T). With this said, the derivative in equation 2.1 can be approximated using Euler's forward approximation [22], as shown in equation 2.2, since by definition the derivative is a measure of the change in a function over a small change of its independent variable (e.g. T). The smaller T, the more accurate the approximation:

$$\dot{x}(t) \approx \frac{x_{k+1} - x_k}{T} \tag{2.2}$$

Hence, equation 2.1a can be numerically approximated at every time step as given below:

$$x_{k+1} = x_k + T \, f(x_k, u_k) \tag{2.3}$$

There is no change to the output function, since it involves no differential term [22]:

$$y_k = h(x_k, u_k) \tag{2.4}$$

Equation 2.3 gives a convenient way to numerically solve our differential equation, as the goal of a differential equation is to obtain the next state as a response to the current state and current input, e.g. the next position of the vehicle given the current position and input throttle/steering.
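To make the forward-Euler update of equation 2.3 concrete, a minimal Python sketch is given below. The point-mass dynamics, sampling time and horizon are illustrative assumptions only; the thesis's own simulations are carried out on the vehicle model of chapter 3 in MATLAB®/Simulink®.

```python
import numpy as np

def euler_step(f, x_k, u_k, T):
    """One forward-Euler step of x_dot = f(x, u), i.e. equation 2.3."""
    return x_k + T * f(x_k, u_k)

# Placeholder dynamics: a unit point mass with states [position, velocity]
# and a force input (not the thesis's single-track vehicle model).
def f(x, u):
    return np.array([x[1], u[0]])

T = 0.01                      # sampling time in seconds; smaller T is more accurate
x = np.array([0.0, 0.0])      # initial state
for k in range(500):          # simulate 5 seconds
    u = np.array([1.0])       # constant unit force
    x = euler_step(f, x, u, T)
print(x)                      # position ~= 12.475 m, velocity = 5.0 m/s after 5 s
```

The same loop applies unchanged to any f(x, u) written in the state-space form of equation 2.1a.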

As a closing note, it is important to note that there are other tools for modeling systems, such as PDEs (as discussed) and state diagrams [4]. The former is commonly used in thermodynamics, where quantities of interest depend on changes in both location and time, while the latter is widely used in hybrid systems with logic, e.g. traffic lights.

2.3 Linear Systems

The dynamical system (or model) given in equation 2.1a is general. The mapping function f(·) (or h(·)) can be arbitrarily complex and nonlinear (with respect to the inputs and states). Crudely speaking, a linear system (with respect to our states x(t) and inputs u(t)) is a system in which inputs/states are only scaled by constant coefficients, and are only combined with each other via addition. Furthermore, if all the states/inputs are zero, f(0, 0) = 0 must hold. An example of a linear system (component) is ẋ_2(t) = 2x_1(t) + 3x_2(t) + 4u_3(t); note that each state/input is only scaled by a fixed constant, there are no nonlinear transformations applied to inputs/states (e.g. no cos(u_3) or x_1^2), and lastly, if all states/inputs x_1, x_2, u_3 are zero, the entire expression is zero. If all other elements of the system are of this form, then it is linear. A more formal definition of a linear system is given in sources such as Chapters 4 and 5 of [4]. For brevity, a system is linear if it meets the so-called superposition principle (i.e. additivity and homogeneity). Any other system is nonlinear, and this rather broad definition of a nonlinear system is part of why there are relatively few general tools for dealing with nonlinear systems.

Linear systems are an important class of systems. There is a plethora of theory/tools for controlling/analysing linear systems (e.g. frequency domain analysis). In fact, often one of the main ways of analysing a nonlinear system is to find its linear counterpart and study this linear counterpart in an attempt to get a glimpse of the behavior of the nonlinear system. The process by which a linear counterpart/approximation of the nonlinear system is found is called linearization. An arbitrary system defined by equation 2.1a (i.e. ẋ(t) = f(x(t), u(t))) is linearized by focusing on f(x(t), u(t)) and finding a local approximation of it near "fixed" inputs/states x_f(t) ∈ R^n, u_f(t) ∈ R^m. This is done via the Taylor series expansion of a function, a concept typically covered in a single variable calculus course.

Let g(x) be a one-dimensional function of a scalar variable x, i.e. x, g(x) ∈ R. Furthermore, if g(·) is differentiable n times, then g(x) can be approximated by the formulation below, assuming that x is sufficiently close to a chosen point a:

$$g(x) = g(a) + g'(a)(x - a) + \frac{g''(a)}{2!}(x - a)^2 + \dots + \frac{g^{(n)}(a)}{n!}(x - a)^n$$

Linearization of g(x) refers to approximating g(x) by the first two terms: g(x) ≈ g(a) + g'(a)(x − a). x must be sufficiently close to a for this relationship to hold; that is, x = a + ∆x, where ∆x is hopefully small.
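For a quick worked instance (added here for illustration; it is not from the original text): linearizing $g(x) = x^2$ about $a = 1$ gives

$$g(x) \approx g(1) + g'(1)(x - 1) = 1 + 2(x - 1) = 2x - 1,$$

which is a good approximation only while $\Delta x = x - 1$ stays small (at $x = 1.1$ the error is $0.01$; at $x = 2$ it has already grown to $1$).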

In order to linearize our dynamic system in equation 2.1a, a higher dimensional analogue of the Taylor series approximation is needed. Not only is f(x(t), u(t)) a vector-valued function of n functions, but each of these n functions is a multivariate function of our states and inputs. The linearization of such functions is defined as given below. In the same way in which the single variable Taylor series is approximated around a single point a, f(x(t), u(t)) is linearized about "fixed" vectors x_f(t), u_f(t) (as discussed before):

$$f(x, u) = f(x_f, u_f) + \left.\frac{\partial f}{\partial x}\right|_{\substack{x = x_f \\ u = u_f}} (x - x_f) + \left.\frac{\partial f}{\partial u}\right|_{\substack{x = x_f \\ u = u_f}} (u - u_f) \tag{2.5}$$

Where ∂f/∂x and ∂f/∂u evaluated at x(t) = x_f(t) and u(t) = u_f(t) are called Jacobian matrices. Note that the vectors are "fixed" in the sense that the linearization (i.e. the differentiation) is done with respect to our states and inputs themselves (not time). That is, x_f(t) and u_f(t) are only fixed with respect to x(t) and u(t); x_f(t) and u_f(t) are still functions of time. Also, the same scheme as equation 2.5 can be used to find a linear map of our output (e.g. y = h(x, u)), but often we mainly care about the state's linearization, and the output map is defined by us. The time dependence in equation 2.5 is also often dropped for compactness. The Jacobian matrices are defined as:

$$A(t) = \left.\frac{\partial f}{\partial x}\right|_{\substack{x = x_f \\ u = u_f}} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}_{\substack{x = x_f \\ u = u_f}}, \qquad
B(t) = \left.\frac{\partial f}{\partial u}\right|_{\substack{x = x_f \\ u = u_f}} = \begin{bmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_m} \\ \frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \cdots & \frac{\partial f_2}{\partial u_m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_m} \end{bmatrix}_{\substack{x = x_f \\ u = u_f}} \tag{2.6}$$
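Where an analytic Jacobian is tedious to derive, the entries of equation 2.6 can also be approximated numerically. The Python sketch below uses simple finite differences about a chosen (x_f, u_f); the pendulum-like dynamics are a made-up example, and this is only an illustration of the idea, not the specific linearization carried out on the vehicle model later in the thesis.

```python
import numpy as np

def numerical_jacobians(f, x_f, u_f, eps=1e-6):
    """Finite-difference approximation of A = df/dx and B = df/du at (x_f, u_f),
    i.e. the Jacobian matrices of equation 2.6."""
    n, m = len(x_f), len(u_f)
    f0 = f(x_f, u_f)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):                 # perturb each state in turn
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x_f + dx, u_f) - f0) / eps
    for j in range(m):                 # perturb each input in turn
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x_f, u_f + du) - f0) / eps
    return A, B

# Illustrative nonlinear system: x = [angle, angular rate], input torque u
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A, B = numerical_jacobians(f, np.array([0.0, 0.0]), np.array([0.0]))
# A is approximately [[0, 1], [-1, 0]] and B is approximately [[0], [1]]
```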

Often, the linear system above is represented in terms of so-called perturbations, especially in tracking problems (see chapter 4); that is, in terms of the behavior of our system in the vicinity of our desired "fixed" point. If we let ∆x(t) = x(t) − x_f(t) and ∆u(t) = u(t) − u_f(t), then ∆ẋ(t) = ẋ(t) − ẋ_f(t). Note that by definition ẋ(t) = f(x(t), u(t)) (e.g. see equation 2.1a), and therefore ẋ_f(t) = f(x_f(t), u_f(t)). As such, equation 2.5 is alternatively represented in terms of our perturbations below:

$$\Delta\dot{x} = A(t)\Delta x + B(t)\Delta u \tag{2.7a}$$
$$\Delta y = C(t)\Delta x + D(t)\Delta u \tag{2.7b}$$

As aforementioned, the output map (y(t) = h(x(t), u(t))) can be linearized the same way (with respect to inputs and states) to give us its equivalent linear form near our fixed states/inputs, where this time h is differentiated instead of f. Again, the time dependence is dropped for compactness, but each of our states, inputs and outputs is time-dependent.

In certain cases A(t), B(t) are fixed, e.g. A := A(t), B := B(t) (time-invariant). This may be because the "fixed" states about which our system was linearized were truly fixed (i.e. with respect to time as well), or because the system is such that its linearization about our fixed states/inputs leads to constant A, B matrices. In that case our perturbation representation is given as:

$$\Delta\dot{x} = A\Delta x + B\Delta u \tag{2.8a}$$
$$\Delta y = C\Delta x + D\Delta u \tag{2.8b}$$

At other times, the system may truly be linear to begin with, as given in equation 2.9. In that case the "∆" is often dropped from equation 2.8a to give the general form of a linear system below, though this depends on the application.

$$\dot{x} = Ax + Bu \tag{2.9a}$$
$$y = Cx + Du \tag{2.9b}$$
$$x(t) \in \chi \subseteq \mathbb{R}^n, \quad \dot{x}(t) \in \mathbb{R}^n, \quad u(t) \in \mathcal{U} \subseteq \mathbb{R}^m, \quad y(t) \in \mathbb{R}^p$$
$$A \in \mathbb{R}^{n \times n}, \quad B \in \mathbb{R}^{n \times m}, \quad C \in \mathbb{R}^{p \times n}, \quad D \in \mathbb{R}^{p \times m}$$

The linear system above is known as an LTI (linear time-invariant) system, because the matrices A, B, C, D are fixed: the system, as it evolves, always modifies our inputs and states in the same way. As aforementioned, due to their relative simplicity compared to nonlinear systems, linear systems are well studied. For instance, given an initial condition x(t = 0), the entire solution of a linear system is known, and given by:

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau \tag{2.10}$$

Where e^{At} ∈ R^{n×n} is known as the matrix exponential. It is defined as e^{At} = I + At + (A^2 t^2)/2! + (A^3 t^3)/3! + ..., where I is the identity matrix of size n. The integral is defined as the element-wise integration of each element of the e^{A(t−τ)}Bu(τ) matrix with respect to τ.

Lastly, just as was done for the general (and possibly) nonlinear f(x, u) in equation 2.3, the linear system above (equation 2.9) can be discretized (e.g. zero order hold). The linear system above is still continuous-time due to its dependence on a derivative. As was done in equation 2.3, since f(x, u) is defined by the linear system above (where T is the sampling time), then:

$$x_{k+1} = x_k + T(Ax_k + Bu_k) = (I + TA)x_k + (TB)u_k = A_d x_k + B_d u_k$$

Hence A_d = (I + TA) and B_d = TB denote the discretized approximations of A, B. Note that this is mainly accurate when T (the sampling time) is relatively small. One can notice that this definition of A_d bears a resemblance to the first terms of e^{At} with t = T. For reasons not discussed here (for brevity), but which can be explored in depth in a typical Linear Systems Theory textbook, the technically correct definitions of A_d, B_d are:

$$A_d = e^{AT}, \qquad B_d = \int_0^T e^{A\tau} B \, d\tau \tag{2.11}$$

Where A, B are the continuous-time matrices of the linear system. Furthermore, if A is invertible, then B_d = A^{-1}(e^{AT} − I)B. Otherwise the integral above has to be solved in order to obtain B_d, and, as stated before, by definition it is the element-wise integration of the matrix's entries with respect to τ.
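As a numerical illustration of equation 2.11 (a sketch under assumed values, not a computation taken from the thesis), both A_d and B_d can be obtained from a single matrix exponential of an augmented matrix:

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, T):
    """Zero-order-hold discretization: Ad = e^{AT}, Bd = (integral_0^T e^{A*tau} d tau) B,
    evaluated via the exponential of the augmented matrix [[A, B], [0, 0]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]   # Ad, Bd

# Double-integrator example with T = 0.1 s (illustrative values only)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = discretize(A, B, 0.1)
# Ad = [[1, 0.1], [0, 1]],  Bd = [[0.005], [0.1]]
```

For this A (which is not invertible), the closed form B_d = A^{-1}(e^{AT} − I)B does not apply, which is exactly the situation the augmented-matrix evaluation handles.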

One big note: the "properties" of (A_d, B_d) are typically tested differently than those of their continuous counterparts (A, B). Aside from the output matrices, these matrices uniquely define a linear system. Due to the large body of study of these systems, these matrices alone can be used to find out whether a system has certain properties. For instance, a continuous-time linear system is said to be "asymptotically stable" if all of the eigenvalues of A have negative real parts. However, this check only applies to the continuous system's matrix A; the equivalent test for A_d would be whether its eigenvalues have magnitude smaller than 1. This is related to the fact that A_d = e^{AT}. In the same vein, the test is always consistent: if the eigenvalues of A have negative real parts, it must be that the eigenvalues of A_d have magnitude smaller than 1. The properties of the system do not somehow change, but the way of testing for a property does. Lastly, again, since y(t) does not depend on a differential, the (C, D) matrices are exactly the same as (C_d, D_d).

Some Properties of Linear Systems:

There are two properties of linear systems which will be used in subsequent chapters, hence they are briefly introduced here. These are: stability and controllability.

As aforementioned, for linear systems, much can be learned about its properties from looking at its (A, B, C, D) matrices. A continuous-time linear system is said to be asymptotically stable if all of the real parts of the eigenvalues of A are negative, and it is unstable if at least one eigenvalue of A is positive. That is, let the eigenvalues of A be λi, then:

$$\text{Asymptotically Stable:}\quad \forall\, i:\ \Re(\lambda_i) < 0 \tag{2.12a}$$
$$\text{Unstable:}\quad \exists\, i:\ \Re(\lambda_i) > 0 \tag{2.12b}$$
$$\text{where } i \in \{1, \dots, n\}$$

Stability looks at the internal behavior of the system in the absence of inputs. The reason why eigenvalues with negative real parts lead to asymptotically stable systems is related to equation 2.10. In the absence of inputs, only the first term remains. If our eigenvalues have negative real parts, the matrix exponential "decays", leading to x(t) → 0 as t → ∞. If at least one eigenvalue has positive real part, then for at least one entry of our state vector xi(t) → ∞ as t → ∞. Hence we say that the entire system is unstable, because one of its elements is unbounded. In certain cases (e.g. if one of the eigenvalues is 0), the system can be said to be stable but not asymptotically stable; these are edge cases not relevant to us in this work. Also, as mentioned before, this stability test is only for continuous-time systems. For the discretized version, we check whether the eigenvalues have magnitude less than 1 for asymptotic stability.

Crudely speaking, controllability determines our ability to use the inputs u(t) to reach any desired state x(t) in finite time. This ability is also determined/limited by equation 2.10. Its derivation is relatively involved, but a continuous-time linear system is said to be controllable if:

$$\operatorname{rank}\left(\begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}\right) = n \tag{2.13}$$

Where: n is the size (e.g. number of rows=columns) of A.

Unlike the stability test, the controllability of a discretized system can be checked using equation 2.13 as well (i.e. by using the discrete system matrices).
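Both checks are one-liners numerically. The sketch below (illustrative only; the matrix values are assumptions, not identified vehicle matrices from chapter 5) applies the eigenvalue test of equation 2.12 and the Kalman rank test of equation 2.13:

```python
import numpy as np

def is_asymptotically_stable(A, discrete=False):
    """Continuous time: every eigenvalue of A has negative real part (equation 2.12).
    Discrete time: every eigenvalue of A has magnitude less than 1."""
    eig = np.linalg.eigvals(A)
    return bool(np.all(np.abs(eig) < 1)) if discrete else bool(np.all(eig.real < 0))

def is_controllable(A, B):
    """Kalman rank test of equation 2.13: rank([B, AB, ..., A^{n-1}B]) = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Double-integrator example (assumed values, for illustration only)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_asymptotically_stable(A))   # False: both eigenvalues are 0 (an edge case, not unstable)
print(is_controllable(A, B))         # True: rank([B, AB]) = 2
```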

Chapter 3 Vehicle Model

The concepts of systems and system modeling were introduced in chapter 2. In this chapter, the mathematical model (e.g. in the form of equation 2.1a) of a vehicle is obtained, while keeping in mind the general discussion on modeling in chapter 2.

In vehicle modeling, there are typically three frames of references (or coordinates) of interest.

These are: the inertial frame of reference, vehicle body frame and tire frame.

Inertial Frame: The inertial frame is also the so-called ”global coordinate frame”. It may be defined arbitrarily, or may often be dependent on the method (e.g. GPS, tracking system) by which the object is localized in space. The vehicle’s location and orientation are defined relative to the chosen origin of this coordinate frame. Where, the vehicle’s location is denoted as: (x, y). We also typically care for both the vehicle’s orientation, and the rate of change of this orientation. These are denoted as: ψ and r = ψ˙, respectively. Figure 3.1 shows the vehicle in its inertial frame. This ”lumped” tire representation of a vehicle is more formally discussed in proceeding sections.

Figure 3.1: Vehicle inertial frame. The dashed line is parallel to the x-axis of the coordinate frame.

Body Frame: The body frame is typically chosen to be at the center of gravity of the vehicle. The "x-axis" of this coordinate frame is chosen to be along the vehicle's longitudinal axis (i.e. along the length of the vehicle). Hence, naturally, its "y-axis" is perpendicular to the longitudinal axis, i.e. it is the vehicle's lateral axis (along its width). As was discussed in section 1.2, the so-called longitudinal and lateral velocities of the vehicle (denoted Vx and Vy, respectively) are defined along the body frame of reference. The aforementioned vehicle slip angle (denoted β), as described in section 1.2, is also defined in this coordinate frame. Lastly, the steering angle of the vehicle (denoted δ) is technically also defined in this frame of reference, as it is the angle between the tire's orientation and the body's longitudinal axis. The vehicle body frame is shown in figure 3.2. The other quantities in the figure are defined in subsequent sections. The axes of Vx and Vy are this frame's aforementioned longitudinal and lateral axes, respectively. Vg is the vehicle's net velocity.

Figure 3.2: Vehicle's body frame.

Tire Frame: The vehicle's tire frame is defined at the tire's center-point (e.g. the origin of its radius). Often, however, our interest is in the tire's contact patch, that is, the part of the tire which has contact with the ground. Many of our vehicle's forces, like the longitudinal (Fx) and lateral (Fy) forces, are created at this contact patch. The latter leads to a tire slip angle α. Like the vehicle slip angle, the tire slip angle measures any angular deviation between the tire's direction of travel (i.e. the dashed line in figure 3.3) and its longitudinal axis (the axis of Fx).

Figure 3.3: Vehicle’s tire frame contact patch.

One of the starting points for modeling the high-level behavior of a vehicle is to treat the vehicle as a rigid body, and to find the balances of forces and moments acting on it. For our purposes, we mainly only care about the motion of the vehicle in the x-y inertial frame, and rotational motion about this frame (i.e. yaw about the z-axis), as shown in figure 3.1. Any other motion (e.g. roll, pitch, z-axis position change) is mostly either neglected, or taken into account in other ways, as expanded upon in our later discussions on tire modeling. As such, Newton's second law of motion (i.e. "F = ma") and Euler's rotation equations (for moments) are used to find the force balances (along x and y) and moment balance (about the yaw axis), respectively:

$$m\ddot{x} = \sum_{i=1}^{n_x} F_{x,i}, \qquad m\ddot{y} = \sum_{i=1}^{n_y} F_{y,i}, \qquad J_z\dot{r} = \sum_{i=1}^{n_r} M_{z,i} \tag{3.1}$$

There are multiple forces and moments acting on the vehicle, and equation 3.1 gives an "abstract" representation of their net sum. In order to obtain the individual forces and moments in equation 3.1, we must delve into the longitudinal, lateral and tire dynamics of a vehicle, as expanded upon in subsequent sections.

3.1 Vehicle Modeling Overview and Assumptions

The Multiple Vehicle Models:

There are multiple ways in which to model a vehicle. Arguably, the simplest vehicle model is the so-called point-mass model. As its name implies, it is a model which assumes that the vehicle is a single point in space with mass. Using equations such as equation 3.1, its state-space form can be obtained to represent the vehicle's location (x, y) and velocities (Vx, Vy) as a result of input acceleration. Another vehicle model is the kinematic vehicle model, which is a mostly geometry-based model that derives the motion of the vehicle from the geometric relationships between the different components of the vehicle.

It further assumes little to no vehicle slip [3], meaning that it does not accurately reflect the behavior of a vehicle during extreme maneuvers. Both of these models (i.e. point-mass and kinematic) are far too simplistic for anything beyond simple motion planning [3]. On the opposite end of the spectrum, there are also higher fidelity models such as the so-called full-body model and multi-body model. The full-body model is a 4-wheel version of the single track vehicle model (e.g. figure 3.2). The multi-body model (like that in [3]) considers a full range of vehicle motions such as pitch/roll, as well as a full range of nonlinear dynamic motion of the vehicle. Both the full-track model and the multi-body model may be fairly accurate, but as discussed in chapter 2, a good model (for control design) must trade off between fidelity and simplicity. As such, a popular vehicle model known as the nonlinear single-track model will be used for the control design in this thesis.

It is a model which lumps the pair of front tires and the pair of rear tires into a single front tire and a single rear tire, as shown in figure 3.2. Hindiyeh [31] showed that, on top of being able to account for a wide range of vehicle behavior, this model is also able to predict the behavior of a vehicle during extreme maneuvers such as drifting, and it does so about as well as the higher fidelity full-track models.

More on the point-mass and kinematic vehicle models can be found in [3, 29], where the former also includes the multi-body vehicle model. The four-track (or full-track) model is discussed in a few works, such as the theses [11, 32].

Nonlinear Single Track Model Assumptions:

As shown in figures 3.1 and 3.2, the nonlinear single-track model is a vehicle model which lumps our tires together. It assumes that the left and right tires behave approximately the same way; that is, they have the same steering angles, slips, force distribution, etc. Aside from these, other assumptions about the single-track model are given below:

• Vehicle only moves about the x-y inertial frame (no z-axis, or vertical movements)

• Vehicle only rotates about the yaw axis (no pitch or roll), e.g. no load transfer

• The vehicle is only steered from the front tire, and only rear wheel driven (RWD)

• For most angles, the counterclockwise (or left) direction is the positive direction.

Most of these assumptions are due to control design. The single-track model can be modified to incorporate some of them. As stringent as they are, these assumptions do not severely impede the drifting ability of our model.

3.2 Nonlinear Single Track Model Dynamics

The nonlinear single-track model is formally derived using the force (and moment) balance equations discussed in equation 3.1. For convenience, the inertial-frame longitudinal and lateral accelerations (i.e. ẍ and ÿ, respectively) can be alternatively restated in terms of body-frame parameters. This is as done in [32]. The following is obtained as a result:

m(\dot{V}_x - r V_y) = \sum_{i=1}^{n_x} F_{x,i} \qquad m(\dot{V}_y + r V_x) = \sum_{i=1}^{n_y} F_{y,i} \qquad J_z \dot{r} = \sum_{i=1}^{n_r} M_{z,i} \qquad (3.2)

After solving for the body-frame velocities and the yaw rate, we get:

\dot{V}_x = \frac{1}{m}\sum_{i=1}^{n_x} F_{x,i} + r V_y \qquad \dot{V}_y = \frac{1}{m}\sum_{i=1}^{n_y} F_{y,i} - r V_x \qquad \dot{r} = \frac{1}{J_z}\sum_{i=1}^{n_r} M_{z,i} \qquad (3.3)

The sum of forces and moments can be given as shown in equation 3.4. The matrices are change of coordinate transformation matrices, more popularly known as ”rotation matrices”.

They serve as a way of transferring physical quantities between different frames of reference. The quantities Fy^f, Fy^r, Fx^f and Fx^r are the lateral and longitudinal forces in the front and rear tires' frames of reference (as partly shown in figure 3.3). Yet in this work (and in the final vehicle model), our frame of interest is the body frame. Hence the coordinate transformations below are done in order to bring these quantities to the body frame.

\begin{bmatrix} \sum F_x \\ \sum F_y \\ \sum M_z \end{bmatrix} = \begin{bmatrix} -\sin(\delta_f) & -\sin(\delta_r) \\ \cos(\delta_f) & \cos(\delta_r) \\ l_f\cos(\delta_f) & -l_r\cos(\delta_r) \end{bmatrix} \begin{bmatrix} F_y^f \\ F_y^r \end{bmatrix} + \begin{bmatrix} \cos(\delta_f) & \cos(\delta_r) \\ \sin(\delta_f) & \sin(\delta_r) \\ l_f\sin(\delta_f) & -l_r\sin(\delta_r) \end{bmatrix} \begin{bmatrix} F_x^f \\ F_x^r \end{bmatrix} + \begin{bmatrix} F_d \\ 0 \\ M_{zd} \end{bmatrix} \qquad (3.4)

As discussed in the preceding section, the vehicle of interest is only steered through the front tire and is rear-wheel driven (only driven through a rear motor). As such, δr = 0 (i.e. δ = δf) and Fx^f = 0; the latter is so because Fx^f refers to the driving force provided by the front tire. Furthermore, note that if we were dealing with a full-track model (i.e. a four-tire model), the equation above could still be used to derive our dynamics by replacing the front and rear tire forces with the sum of the left and right tire forces, e.g. Fy^f becomes Fy^{fL} + Fy^{fR} in equation 3.4. The yaw moment term would then also require the inclusion of terms related to the vehicle's width.

Fd and Mzd are treated as disturbance forces and moments; therefore, for this project, Fd = 0 and Mzd = 0. This is often done to simplify control design. However, as will be seen in the longitudinal force identification in chapter 5, disturbance forces will be reincorporated into our controller. Disturbance forces are friction forces, such as aerodynamic drag due to wind, which impede our input longitudinal driving force Fx^r and hence (often) our longitudinal velocity Vx as well. Mzd, on the other hand, captures effects such as the self-aligning moment. More on these additive forces is discussed in [16, 20].

Taking all of these assumptions into account, once the sum of forces (and yaw moments) from equation 3.4 are substituted into equation 3.3, we get our final nonlinear single track model:

\dot{r} = \frac{1}{J_z}\left(l_f F_y^f \cos(\delta) - l_r F_y^r\right)
\dot{V}_y = \frac{1}{m}\left(F_y^f \cos(\delta) + F_y^r\right) - r V_x \qquad (3.5)
\dot{V}_x = \frac{1}{m}\left(F_x^r - F_y^f \sin(\delta)\right) + r V_y

Equations 3.5 describe how the vehicle states (yaw rate and longitudinal/lateral velocities) evolve dynamically in response to the input steering and force. Sometimes, however, for either visualization of the vehicle's trajectory in the global coordinate frame or for path tracking applications, we also care about the equations which describe the evolution of the vehicle's position and orientation over time. These are given below in equation 3.6. The first two equations (more or less) bring the body-frame longitudinal/lateral velocities to the global frame of reference (i.e. a coordinate transformation). By solving these differential equations, the position (x, y) and orientation (ψ) of the vehicle in the global frame are obtained, depending on the evolution of the other states in equation 3.5. Appropriately (by definition), the yaw rate r is the first derivative of ψ, as given in the last line of the equation:

\dot{x} = V_x\cos(\psi) - V_y\sin(\psi)
\dot{y} = V_x\sin(\psi) + V_y\cos(\psi) \qquad (3.6)
\dot{\psi} = r

Equations 3.5 and 3.6 together are often taken to be our state-space equation, which models the "full" behavior of interest for our vehicle, as given in equation 3.7. This is much like the state-space form discussed in chapter 2. However, as will be the case in chapter 4, they are sometimes left separated as either equation 3.5 or 3.6 for control design or analysis purposes.

\dot{z} = f(z, u), \quad z \in \mathbb{R}^6, \qquad z = [x,\ y,\ \psi,\ r,\ V_x,\ V_y]^T, \quad u = [\delta,\ F_x^r]^T \qquad (3.7)

Not represented in equation 3.7 are the two lateral forces Fy^f and Fy^r. These are not fixed quantities; they are "static" functions of our other states (e.g. Vx, Vy, r) and inputs (e.g. δ, Fx^r) introduced by the tires, as will be discussed in the following section.
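As a concrete illustration, equations 3.5 and 3.6 can be collected into a single MATLAB function suitable for simulation with ode45. This is only a minimal sketch under the assumptions of this chapter: the functions frontLateralForce and rearLateralForce are hypothetical placeholders standing in for the tire models of section 3.3, and p is an assumed parameter struct, not the identified values of chapter 5.

function zdot = singleTrackDynamics(z, u, p)
% z = [x; y; psi; r; Vx; Vy], u = [delta; Fxr]
% p: struct of vehicle parameters (m, Jz, lf, lr, ...)
psi = z(3); r = z(4); Vx = z(5); Vy = z(6);
delta = u(1); Fxr = u(2);

% Tire lateral forces (hypothetical placeholders for section 3.3 models)
Fyf = frontLateralForce(Vx, Vy, r, delta, p);
Fyr = rearLateralForce(Vx, Vy, r, Fxr, p);

% Body-frame dynamics, equation 3.5
rdot  = (p.lf*Fyf*cos(delta) - p.lr*Fyr)/p.Jz;
Vydot = (Fyf*cos(delta) + Fyr)/p.m - r*Vx;
Vxdot = (Fxr - Fyf*sin(delta))/p.m + r*Vy;

% Global kinematics, equation 3.6
xdot   = Vx*cos(psi) - Vy*sin(psi);
ydot   = Vx*sin(psi) + Vy*cos(psi);
psidot = r;

zdot = [xdot; ydot; psidot; rdot; Vxdot; Vydot];
end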

Table 3.1: Vehicle variables and parameters definitions

F_x^r: Rear tire longitudinal force (tire frame).
F_y^f, F_y^r: Front and rear tire lateral forces (tire frame).
V_x, V_y: Longitudinal and lateral velocity (body frame).
β: Vehicle slip angle, β = arctan(V_y/V_x) (body frame).
x, y: Vehicle's center-of-gravity location along the x-axis and y-axis (inertial frame).
ψ: Yaw angle (inertial frame).
r: Yaw rate, r = \dot{\psi} (inertial frame).
δ: Front wheel steering angle.
l_f, l_r: Distances of the front and rear tire to the vehicle's center of gravity.
F_z^f, F_z^r: Front and rear tire static normal forces.
m: Mass of vehicle.
J_z: Yaw moment of inertia of vehicle.
C_{α,f}, C_{α,r}: Front and rear tire cornering stiffnesses.
α_f, α_r: Front and rear tire slip angles.
μ_f, μ_r: Front and rear tire coefficients of friction with the road.

3.3 Tire Modeling

From supporting the weight of the vehicle to providing traction for vehicle movement (or braking and turning) [32], tires are some of a vehicle's most important elements. As such, tire behavior is very important to capture in our vehicle model. Tires play an important role in the dynamics of a vehicle, especially during extreme maneuvers such as drifting. As mentioned in the previous section, most of the tire's behavior enters our vehicle model through the longitudinal and lateral forces acting at the contact patch of the tire (as shown in figure 3.3).

Hence, our longitudinal (Fx^r) and lateral (Fy^f and Fy^r) forces are merely the result of deformations (i.e. grip) of the contact patch in either the longitudinal or lateral direction of the tire frame. The longitudinal deformation is what leads to the forward driving force (Fx^r), and the lateral deformation (Fy^f and Fy^r) is necessary for turning. Since the rear tire has both a longitudinal and a lateral force, Fx^r and Fy^r respectively, there is a "coupling" between these forces at the contact patch: the grip (or deformation) in one direction limits the maximum available grip in the other. The relationship between the two is shown in figure 3.4:

Figure 3.4: Friction circle of longitudinal/lateral force coupling

This is the so-called friction circle (sometimes Kamm's circle). As seen from the dashed outer radius, the combination of our longitudinal and lateral forces is limited by the normal force (i.e. Fz^r for the rear tire, in our case) after being derated by the coefficient of friction (μr in our case), both of which are concepts discussed later. Hence, at the "limits of handling" (i.e. near the boundary/radius of the circle), the maximum achievable force in either the longitudinal or lateral direction depends on the magnitude of the other. Fc in the figure corresponds to a case where there is remaining grip in both directions (e.g. low-acceleration driving and turning). It is important to note that this set-up assumes that the maximum possible longitudinal and lateral forces are the same (hence a circle). This is not always so, making the friction circle more of an "ellipse". Such a case is not relevant to our problem, and for control design we may assume the circle holds; our final controller can compensate for discrepancies.

The friction circle gives us an idea of how the longitudinal and lateral forces are related, but tells us nothing about the behavior of the individual forces. These two forces are typically dependent on the so-called tire slip ratio and slip angle, respectively. When a lateral force is created at the tire's contact patch (e.g. during turning), this results in a slip angle (as shown in figure 3.3) between the tire's heading and its direction of travel. In the same vein, a longitudinal force at the contact patch creates a slip ratio, which measures any deviation between the tire's rotational velocity and the tire axle's longitudinal velocity. These are defined as follows:

Slip Angle:

\alpha_f = \arctan\!\left(\frac{V_y + r l_f}{V_x}\right) - \delta \qquad (3.8a)
\alpha_r = \arctan\!\left(\frac{V_y - r l_r}{V_x}\right) \qquad (3.8b)

Slip Ratio:

s_x = \frac{v_{x,t} - \omega r_{eff}}{\max(v_{x,t},\ \omega r_{eff})} \qquad (3.9)

The front and rear wheel slip angles in equation 3.8 are given in terms of body-frame variables, because these are easier to obtain than the wheel's own longitudinal and lateral velocities; this form is specific to the single-track vehicle model (e.g. figure 3.2). Here Vy, Vx and r are our vehicle's dynamic states from equation 3.5, and the other parameters are defined in table 3.1. As for the slip ratio: v_{x,t} is the tire axle's longitudinal velocity (very close to Vx), ω is the angular speed of the tire, and r_{eff} is the so-called effective or rolling radius of the tire. This is not quite the same as the tire's physical radius; it is typically smaller, especially for pneumatic tires, which flatten near the contact patch. It measures the radius the tire would effectively have during pure rolling. More on the effective radius, and its relationship to our longitudinal force, is explained in chapters 4 and 13 of [30].

Crudely speaking, for relatively low longitudinal and lateral forces (as is typical during normal driving conditions), the tire forces are linearly related to these slip quantities, as given in equation 3.10:

F_x = -C_s s_x \qquad (3.10a)
F_y = -C_\alpha \alpha \qquad (3.10b)

Here Cs and Cα are the longitudinal and lateral (cornering) stiffnesses, respectively. These parameters are typically identified through experimentation. Note that the approximations in equation 3.10 only hold for small slip angles and ratios. At higher longitudinal and lateral forces, the relationship between the slips and the forces becomes highly nonlinear. Many tire models exist in the literature to describe this behavior, ranging from numerical methods such as finite element analysis to data-driven empirical methods such as the Pacejka model. The Pacejka tire model is arguably one of the most accurate ones, as it depends on multiple parameters which are entirely determined by experimentation and data collection of vehicle attributes on the testing surface. Other models which will be discussed in this work are the Dugoff tire model and the Fiala tire model.

3.3.1 Dugoff Tire Model

The Dugoff tire model is a fairly analytical tire model. It models both our longitudinal (Fx) and lateral (Fy) forces over a wide range of tire slip angles (α) and slip ratios (sx). The main assumption made by this model is that the tire's pressure is uniformly distributed over the contact patch. In reality, the tire pressure at the contact patch has more of a parabolic distribution (e.g. as in the Pacejka model) [30]. However, compared to the Pacejka model, which is more data-dependent, this analytical tire model only depends on a few parameters, such as the stiffness coefficients (Cs, Cα) and the friction coefficient (μ), which are easily obtainable or estimable. Lastly, and more importantly, it accounts for the coupling between our longitudinal and lateral tire forces. The longitudinal and lateral tire forces of the Dugoff tire model are given as:

F_x(\alpha, s_x) = -C_s \frac{s_x}{1 - s_x} f(\lambda) \qquad (3.11a)
F_y(\alpha, s_x) = -C_\alpha \frac{\tan(\alpha)}{1 - s_x} f(\lambda) \qquad (3.11b)

\lambda = \frac{\mu F_z (1 - s_x)}{2\sqrt{(C_s s_x)^2 + (C_\alpha \tan(\alpha))^2}} \qquad (3.12a)
f(\lambda) = \begin{cases} (2 - \lambda)\lambda & \text{if } \lambda < 1 \\ 1 & \text{if } \lambda \geq 1 \end{cases} \qquad (3.12b)

For a given tire and road surface, the parameters to be identified are the longitudinal/lateral stiffnesses Cs/Cα and the coefficient of friction μ. Fz is the normal force acting on the tire; it is discussed more thoroughly later in this chapter. Knowing these parameters, the tire's lateral and longitudinal forces in equation 3.11 can be found at any given tire slip angle and slip ratio (e.g. equations 3.8 and 3.9).
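A direct MATLAB transcription of equations 3.11 and 3.12 is sketched below. It is a minimal illustration only; the stiffnesses and friction coefficient passed in are assumed to come from the identification of chapter 5.

function [Fx, Fy] = dugoffTire(alpha, sx, Cs, Calpha, mu, Fz)
% Dugoff tire model, equations 3.11 and 3.12
lambda = mu*Fz*(1 - sx) / (2*sqrt((Cs*sx)^2 + (Calpha*tan(alpha))^2));
if lambda < 1
    f = (2 - lambda)*lambda;
else
    f = 1;
end
Fx = -Cs     * sx/(1 - sx)         * f;   % longitudinal force, eq. 3.11a
Fy = -Calpha * tan(alpha)/(1 - sx) * f;   % lateral force, eq. 3.11b
end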

3.3.2 Modified Fiala Tire Model

As useful as the analytical Dugoff tire model is (e.g. its ability to account for longitudinal/lateral tire coupling), it is still relatively "complex". For one, it depends on an accurate measurement of the tire's velocity (as seen in equation 3.9) and/or an overall slip ratio measurement (which may require expensive sensors). Related to this is the fact that we desire our longitudinal force Fx^r to be a system input (as seen in equation 3.7) which can be "arbitrarily" assigned based on control needs. In order to use the Dugoff tire model in such a case, the desired tire force would need to be mapped back to a slip ratio (as seen in equation 3.12), which adds a layer of unnecessary complexity.

A way to take care of these "issues" is through the use of another tire model known as the Fiala tire model. The Fiala tire model makes essentially the same uniform tire pressure distribution assumption as the Dugoff model. Unlike the Dugoff model, however, this model only depends on the tire's slip angle (α), not its slip ratio. The original Fiala model is only a lateral tire model, which only accounts for force generation in the lateral direction. Its modified version, appropriately named the modified Fiala tire model, adds new terms which take into account longitudinal/lateral coupling by directly using the tire's longitudinal force Fx. This modified Fiala tire model is given as:

F_y(\alpha) = \begin{cases} -C_\alpha z + \dfrac{C_\alpha^2}{3\xi\mu F_z}|z|z - \dfrac{C_\alpha^3}{27\xi^2\mu^2 F_z^2} z^3 & |\alpha| < \alpha_{cr} \\ -\xi\mu F_z\,\mathrm{sign}(\alpha) & |\alpha| \geq \alpha_{cr} \end{cases} \qquad (3.13a)

z = \tan(\alpha) \qquad \alpha_{cr} = \arctan\!\left(\frac{3\xi\mu F_z}{C_\alpha}\right) \qquad \xi_F = 1 \qquad \xi_R = \frac{\sqrt{(\mu_r F_z^r)^2 - (F_x^r)^2}}{\mu_r F_z^r} \qquad (3.13b)

Most of the parameters are as given before, and are defined in table 3.1. The slip angle α is as defined in equation 3.8. ξ is the so-called derating factor; it is why this model is called "modified", and it takes care of the longitudinal/lateral tire force coupling. As assumed in this project, only the rear tire generates a driving/longitudinal force Fx^r (RWD). Hence the longitudinal and lateral force dependence from the friction circle (figure 3.4) is normalized to give the rear tire derating factor ξR. Since the front tire has no driving force, ξF = 1; that is, essentially all of its grip is available in the lateral direction. αcr is the critical slip angle; it determines at what slip angle the lateral force saturates.

The critical slip angle αcr is appropriately dependent on friction, normal force and the derating factor. A decrease in any of these leads to the lateral force saturating earlier, due to less grip between the tire and the ground in the lateral direction (e.g. a slippery road, less vehicle weight pressing the tire to the ground, or more longitudinal grip being used). This is shown in figure 3.5. Lastly, sometimes two different coefficients of friction are used in equation 3.13a: the so-called peak friction and sliding friction between the tire and the ground. This takes care of any temporary "peak" in the lateral force before saturation (or slow decay). They are discussed more in sources such as [32]. This thesis, and much of the work done in this area, assumes that we only have a sliding friction. Our road/tire parameter identification in chapter 5 shows that this assumption holds for our vehicle (e.g. no "peak" in lateral force).
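A minimal MATLAB sketch of the modified Fiala lateral force in equations 3.13a and 3.13b is given below; it assumes a single (sliding) friction coefficient, as in the rest of this work, and the input values are illustrative assumptions.

function Fy = fialaLateral(alpha, Fz, Calpha, mu, Fx)
% Modified Fiala lateral tire force, equations 3.13a and 3.13b
% Fx is the tire's longitudinal force (0 for the front tire, giving xi = 1)
xi   = sqrt((mu*Fz)^2 - Fx^2)/(mu*Fz);   % derating factor
z    = tan(alpha);
a_cr = atan(3*xi*mu*Fz/Calpha);          % critical slip angle
if abs(alpha) < a_cr
    Fy = -Calpha*z + Calpha^2/(3*xi*mu*Fz)*abs(z)*z ...
         - Calpha^3/(27*xi^2*mu^2*Fz^2)*z^3;
else
    Fy = -xi*mu*Fz*sign(alpha);          % saturated lateral force
end
end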

Figure 3.5: Rear tire Fiala lateral force as the critical slip angle, αcr, is varied. αcr,max corresponds to the maximum critical slip, i.e. when ξ = 1.

Normal Force:

A step back is needed to discuss our tires' normal forces, as they can affect the vehicle's dynamic behavior. The normal force is, by definition, the force that supports the gravitational force acting on the vehicle; it acts perpendicular to the plane on which the vehicle rests (i.e. our contact patch). On a flat surface, as is the focus of this thesis, the total normal force is:

F_z^T = mg \qquad (3.14)

where m is the vehicle's mass and g is the gravitational acceleration, 9.81 m/s^2. In this project the vehicle (a rigid body) is separated into a single front and rear wheel (e.g. figure 3.2). Since there is no assumption of mass symmetry about the center of gravity (i.e. lf ≠ lr), the normal forces of the front and rear tires can be obtained as follows:

F_z^r = mg\,\frac{l_f}{l_f + l_r} \qquad F_z^f = mg\,\frac{l_r}{l_f + l_r} \qquad (3.15)

These are the so-called static loads. Note that if the center of gravity were at the geometric center of the vehicle, each tire would carry a normal force of F_z^T/2. Also, the normal force is expected to be larger at the tire nearer the center of gravity (i.e. the shorter of lf or lr), since the vehicle's mass is more concentrated in that area. This is why the ratios above are defined as they are.

For the sake of control design, we will assume only static loads. In reality, especially during extreme maneuvers, the weight of the vehicle often shifts under high accelerations (e.g. leading to pitch and roll motion); for example, when braking is applied to a fast-traveling vehicle, weight shifts to the front. These shifts change the load distribution between the tires. Hence, as seen in figure 3.5, a change in the normal force also affects the maximum possible lateral force. There are ways of taking these weight shifts into account:

\Delta F_z^x = \frac{m\,a_x\,h_{cg}}{l_f + l_r} \qquad (3.16a)
F_{z,ch}^r = F_z^r + \Delta F_z^x \qquad (3.16b)
F_{z,ch}^f = F_z^f - \Delta F_z^x \qquad (3.16c)

For the single-track model considered in this work, the biggest source of load transfer is longitudinal acceleration. This is better seen in the full-track vehicle model formulations in sources such as [11, 32]. The equations above give the final instantaneous normal forces at the front and rear tires of the single-track model. They are a combination of the static normal forces in equation 3.15 and the weight transfer due to longitudinal acceleration, ax. h_cg is the height of the vehicle's center of gravity, i.e. the distance between the ground and the center of gravity. The lower this distance (relative to the vehicle's length), the less the vehicle tends to shift weight. This is why racing cars, and even regular sedans, have a low center of gravity compared to SUVs (which flip over much more easily). All that said, as discussed, the normal forces which will ultimately be used are the static ones in equations 3.15, as they make control design much more feasible.
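As a small worked sketch, the static loads of equation 3.15 and the longitudinal weight transfer of equation 3.16 can be computed as follows; the numerical values are placeholders chosen for illustration, not the identified parameters of chapter 5.

m   = 0.1;    % vehicle mass [kg]               (placeholder value)
lf  = 0.05;   % CG-to-front-tire distance [m]   (placeholder value)
lr  = 0.06;   % CG-to-rear-tire distance [m]    (placeholder value)
hcg = 0.02;   % CG height above ground [m]      (placeholder value)
g   = 9.81;   % gravitational acceleration [m/s^2]
ax  = 0.5;    % longitudinal acceleration [m/s^2]

Fzr = m*g*lf/(lf + lr);      % static rear load, eq. 3.15
Fzf = m*g*lr/(lf + lr);      % static front load, eq. 3.15
dFz = m*ax*hcg/(lf + lr);    % longitudinal load transfer, eq. 3.16a
Fzr_ch = Fzr + dFz;          % instantaneous rear load, eq. 3.16b
Fzf_ch = Fzf - dFz;          % instantaneous front load, eq. 3.16c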

3.4 Linear Vehicle Model

The previous sections discussed the nonlinear single-track model, which is a relatively complex (e.g. due to the nonlinearities introduced by the tires) vehicle model capable of modeling extreme maneuvers such as drifting. This section discusses a simpler "linearized" model of the vehicle's lateral dynamics (e.g. of equation 4.5). The linear vehicle model only holds for relatively small slip angles (e.g. α < αcr in figure 3.5, per the assumption in equation 3.10b). Hence, most regular driving conditions (e.g. simple path following) can be represented by this model.

When the slip angle is small (see equation 3.10b), the vehicle dynamics in equation 3.5 can be simplified much further to give the so-called linear model of our lateral dynamics. This vehicle model holds under most "normal" driving conditions, but breaks down at the limits of handling. Nonetheless, much can be learned from it. A few assumptions are made in deriving this model, most of them related to small angle approximations:

• cos(δ) ≈ 1.

– As will be discussed in chapter 5, |δ| ≤ 22°. Even for full-sized vehicles, the steering angle typically only goes up to about 30°. This approximation therefore introduces an error of at most about 8% (e.g. cos(0°) = 1, cos(22°) ≈ 0.927).

• Vx is fixed, i.e. $\dot{V}_x = 0$.

– As seen in equation 3.17, this is necessary for the final system to be linear. This means that the longitudinal velocity is treated as a scheduled model parameter, and not as something which dynamically varies.

• The slip angles are approximated as: αf ≈ (Vy + r lf)/Vx − δ and αr ≈ (Vy − r lr)/Vx.

– These are small angle approximations, and only hold for small slip angles.

Taking these assumptions into account, along with the linear lateral tire model in equation 3.10b, the dynamics in equation 3.5 simplify to give the following linear vehicle model:

\begin{bmatrix} \dot{r} \\ \dot{V}_y \end{bmatrix} = \begin{bmatrix} -\dfrac{l_f^2 C_{\alpha,f} + l_r^2 C_{\alpha,r}}{J_z V_x} & \dfrac{-l_f C_{\alpha,f} + l_r C_{\alpha,r}}{J_z V_x} \\ \dfrac{-l_f C_{\alpha,f} + l_r C_{\alpha,r}}{m V_x} - V_x & -\dfrac{C_{\alpha,f} + C_{\alpha,r}}{m V_x} \end{bmatrix} \begin{bmatrix} r \\ V_y \end{bmatrix} + \begin{bmatrix} \dfrac{l_f C_{\alpha,f}}{J_z} \\ \dfrac{C_{\alpha,f}}{m} \end{bmatrix} \delta \qquad (3.17)

Using the linear model above, a lot of basic properties of our vehicle can be obtained. For instance, is there a maximum possible velocity at which this model remains stable? By studying the eigenvalues of the model (i.e. its A matrix, as discussed in chapter 2), such questions (and more) can be answered. Rajamani [30] uses an alternative stability test, Lyapunov stability theory, to study the model.
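For example, the stability of the linear model in equation 3.17 at a given scheduled velocity can be checked numerically in MATLAB; the parameter values below are placeholders, not the identified parameters of chapter 5.

% Linear lateral/yaw model of eq. 3.17, state = [r; Vy], input = delta
Vx = 1.0;  m = 0.1;  Jz = 1e-4;  lf = 0.05;  lr = 0.06;   % placeholder values
Caf = 1.0;  Car = 1.2;                                    % placeholder cornering stiffnesses

A = [ -(lf^2*Caf + lr^2*Car)/(Jz*Vx),  (-lf*Caf + lr*Car)/(Jz*Vx);
      (-lf*Caf + lr*Car)/(m*Vx) - Vx,  -(Caf + Car)/(m*Vx) ];
B = [ lf*Caf/Jz;  Caf/m ];

eig(A)   % all eigenvalues with negative real parts => stable at this velocity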

Chapter 4 Control Systems Planning and Design

This chapter gives an overview of our state-trajectory design for path and drift tracking. Section 4.1 discusses clothoid-based global path planning by introducing the basic methodology for the creation of clothoid-based maps. Section 4.2 discusses the equilibrium analysis of our 3 degree-of-freedom dynamic vehicle model (e.g. equation 3.5), which is imperative for obtaining drift trajectories. The remainder of the chapter discusses controller design for simultaneous path and drift tracking, where section 4.3 gives a brief overview of general control tools/methods and section 4.4 gives the final control design.

4.1 Path Trajectory Creation

Since the goal of this project is to drift about a planned trajectory, the desired path needs to be designed. The path (in our case) for the most part refers to the set of (x, y) points which make up our desired trajectory in the inertial frame of reference (as discussed in chapter 3). The work in this thesis involves global path planning, meaning that the path is assumed to be known and generated offline. In applications such as those involving obstacle avoidance, the trajectories are typically generated online. Regardless of the case, path planning methods such as polynomial-based maps, Bezier curves, spline methods, etc. can be used; all of these have been extensively covered in the literature. Polynomial-based planning is arguably the simplest, as polynomials are well-studied functions, and polynomial-based methods allow for the fitting of an arbitrary path given a set of desired trajectory points (x, y). As useful (and typically easy to implement) as polynomial-based maps are, they often suffer from abrupt changes (e.g. sharp corners) between connected path segments [20]. This makes it harder for a vehicle to follow such a path, let alone attempt to drift about it. As such, for our work, clothoid-based maps are used for global path planning. Clothoid-based maps have a guaranteed smoothness to them because, by definition, clothoids have linearly varying curvature. This ensures that no sharp points are present between connected segments, making path following at the limits of handling feasible.

Theodosis [37] extensively studies how to fit clothoid-based maps to existing tracks. He does so especially as it relates to racing, whereby the fitted path is optimized for minimum-time travel around the track.

The basic formulation for clothoid-maps may differ depending on the referenced literature, but they are all variations of the so-called Fresnel integral. The clothoid definition used in this thesis was adapted from [24]. Using the notation from the same source, we obtain equations 4.1 and 4.2, where figure 4.1 gives a generic representation of the equations’ parameters.

Figure 4.1: Generic clothoid map adapted from [24]. (Left) Path in the x-y plane starting from the origin. (Right) Curvature, κ, of the path as the arc length, s, increases.

x_{i+1} = x_i + \int_{s_i}^{s_{i+1}} \cos\!\left(\theta_i + \kappa_i(s - s_i) + c_{i+1}\frac{(s - s_i)^2}{2}\right) ds
y_{i+1} = y_i + \int_{s_i}^{s_{i+1}} \sin\!\left(\theta_i + \kappa_i(s - s_i) + c_{i+1}\frac{(s - s_i)^2}{2}\right) ds \qquad (4.1)

\theta_{i+1} = \theta_i + \kappa_i L_{i+1} + c_{i+1}\frac{L_{i+1}^2}{2}
\kappa_{i+1} = \kappa_i + c_{i+1} L_{i+1} \qquad (4.2)

where i = 1, ..., N, and s_i = \sum_{k=0}^{i} L_k, s_{i+1} = s_i + L_{i+1}, as seen in figure 4.1 (right).

Equations 4.1 and 4.2 give formulae for the location, orientation and curvature of our map's end-points (e.g. the red points in figure 4.1). The locations (denoted xi and yi) determine the position in space (i.e. the inertial frame) of a given end-point (e.g. the location of si). The orientation of the path (denoted θi) measures the tangent (angle) of the curve at a given end-point; that is, the "yaw" orientation of the map. The segment length Li gives the length of the path between two end-points (e.g. from xi to xi+1). Since by definition clothoids have linearly varying curvature, the parameter c measures the slope of a segment's linear curvature. These parameters are made clearer in figure 4.1. Lastly, the arc length s refers to the accumulated path length up to the chosen point; in clothoid planning, s parametrizes our position on the path. The equations above give us subsequent end-points of our segments based on the previous end-points and the aforementioned parameters (e.g. segment length, curvature slope, etc.).

Though the end-points defined in 4.1 and 4.2 are important in determining the "overall" structure of our map, for path following we care about the individual points along a given segment. For instance, let q be some arbitrary point in a given path segment (i.e. between end-points si and si+1); what are the coordinates (x(q), y(q)), orientation (θ(q)) and curvature (κ(q)) of q?

x(q) = x_i + \int_{s_i}^{q} \cos\!\left(\theta_i + \kappa_i(s - s_i) + c_{i+1}\frac{(s - s_i)^2}{2}\right) ds
y(q) = y_i + \int_{s_i}^{q} \sin\!\left(\theta_i + \kappa_i(s - s_i) + c_{i+1}\frac{(s - s_i)^2}{2}\right) ds \qquad (4.3)

\theta(q) = \theta_i + \kappa_i(q - s_i) + c_{i+1}\frac{(q - s_i)^2}{2}
\kappa(q) = \kappa_i + c_{i+1}(q - s_i) \qquad (4.4)
R(q) = \frac{1}{\kappa(q)}

where i \in \{i \mid s_i < q \leq s_{i+1}\}.

Note that if we let q = si+1 in the equations above, we re-obtain equations 4.1 and 4.2, i.e. our end-points. As also discussed, since the curvature segments are piecewise linear, the curvature at q is defined based on the segment in which the point is located, i.e. between si and si+1. R(q) is the radius of the path; by definition the curvature is the inverse of the radius.

Given what has been discussed so far, we can now create reference trajectories for our autonomous vehicle to follow; that is, the first three states in equation 3.7: our position (x and y) and yaw (i.e. the orientation θ in path following applications).
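The point-wise evaluation of equations 4.3 and 4.4 only involves two Fresnel-type integrals, which can be computed numerically. A minimal MATLAB sketch for evaluating a single clothoid segment (with assumed starting values xi, yi, thetai, kappai and curvature slope c) is shown below; the function name is illustrative, not part of the thesis code.

function [x, y, theta, kappa] = clothoidPoint(q, si, xi, yi, thetai, kappai, c)
% Evaluate a clothoid segment (eqs. 4.3 and 4.4) at arc length q, with si < q <= s_{i+1}
phi   = @(s) thetai + kappai*(s - si) + c*(s - si).^2/2;   % tangent angle along the segment
x     = xi + integral(@(s) cos(phi(s)), si, q);
y     = yi + integral(@(s) sin(phi(s)), si, q);
theta = phi(q);
kappa = kappai + c*(q - si);
end

Sweeping q from si to si+1 traces out the segment, and setting q = si+1 recovers the end-point relations of equations 4.1 and 4.2.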

4.2 Vehicle Dynamics Equilibrium

4.2.1 Equilibrium Analysis Overview

Equilibrium analysis of our 3 DOF (degree-of-freedom) vehicle dynamics model in equation 4.5 is explored in this section. Equilibrium analysis is often used to gain insight into complex/nonlinear systems. As we will learn later in this section, some of the equilibria of our 3 DOF vehicle dynamics correspond to drifting (e.g. large slip angle, yaw rate, etc.). Therefore drifting equilibria can be used to generate reference yaw rates (r) and lateral velocities (Vy) for our vehicle to track. Hindiyeh [31] did an extensive study of the 3 DOF vehicle model in equation 3.5, and his analysis lays the basis for the analysis covered in this chapter. Hindiyeh also established that the dynamic behavior of the 3 DOF nonlinear single-track model is similar to that of higher fidelity (e.g. four-track) models during drifting, assuming of course that the tire lateral dynamics used are able to account for drifting behavior.

The equilibria of our states Vy, r and Vx are obtained by setting equation 3.5 to zero; that is, $\dot{r} = \dot{V}_y = \dot{V}_x = 0$. In the literature, drifting is often characterized in terms of the vehicle slip. Hence, the vehicle slip angle β is often used to study the equilibria, as opposed to Vy. It makes no big difference, because by definition β = arctan(Vy/Vx), so one can simply go back and forth between the two. The dynamics of β are obtained by approximating it as β ≈ Vy/Vx. Assuming Vx to be fixed (constant) allows for $\dot{\beta} = \dot{V}_y/V_x$. Alternatively, the quotient rule (of calculus) can be used; given that $\dot{V}_x = 0$, the same conclusion is reached. In certain cases the vehicle slip angle β may be larger than in regular driving conditions, but it is still small enough for the drifting considered here (e.g. |β| < 22° (0.38 rad)) for the approximations to hold, e.g. arctan(0.38) = 0.36 (at most 5.5% error, much smaller than the modeling error [31]).

Using the discussed $\dot{\beta} = \dot{V}_y/V_x$ approximation, the $\dot{V}_y$ equation in 3.5 is simply divided by Vx to obtain the dynamics of β. Furthermore, the Vy term in the Vx dynamics (of the same equation) can be replaced by substituting Vy = Vxβ (or even Vy = Vx tan(β)). Hence, the equilibrium of our system in equation 3.5 is given as:

\dot{r} = \frac{1}{J_z}\left(l_f F_y^f \cos(\delta) - l_r F_y^r\right) = 0
\dot{\beta} = \frac{1}{m V_x}\left(F_y^f \cos(\delta) + F_y^r\right) - r = 0 \qquad (4.5)
\dot{V}_x = \frac{1}{m}\left(F_x^r - F_y^f \sin(\delta)\right) + r V_x \beta = 0

where Fy^f(αf) and Fy^r(αr) are the tire lateral forces from the modified Fiala tire model in equations 3.13a and 3.13b.

Let our state space be represented by $\dot{x} = f(x, u)$ where x = [r β Vx]^T and u = [δ Fx^r]^T. The equilibria are given by $\dot{x} = f(x_{eq}, u_{eq}) = 0$, where x_eq = [r_eq β_eq Vx]^T and u_eq = [δ_eq F_{x,eq}^r]^T.

All non-state and non-input parameters (e.g. Jz, m, lf, μr, etc.) are assumed to be known; for our vehicle, see table 5.1. Equations 4.5 are a set of nonlinear equations. There are 3 equations and 5 unknowns (3 states, 2 inputs). As such, to be able to solve these equations, two of the unknown values must be fixed [31] (analogous to solving a matrix system of equations). As discussed before, we assume that Vx is fixed; secondly, δ is also chosen to be fixed. Fy^f(αf) and Fy^r(αr) are state/input dependent functions which are calculated using equations 3.13a and 3.13b. Therefore, the only remaining variables are r_eq, β_eq and F_{x,eq}^r. For a fixed δ_eq and Vx, we are able to find the three remaining variables. As made clearer by further analysis below, equation 4.5 is solved for multiple values of δ_eq in order to gauge the system's equilibrium behavior over a large range of input steering. Our range is chosen to be −20° ≤ δ_eq ≤ 20°, which is near the physical limitation of our vehicle's steering (see chapter 5). Finally, equation 4.5 was solved in MATLAB using its fsolve() method (a sketch of this computation is given below), and the following plots are obtained:
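As a sketch of the equilibrium computation: for a fixed (δ_eq, Vx) the three equations in 4.5 can be solved for (r_eq, β_eq, F_{x,eq}^r) with fsolve. The function driftDynamics3DOF below is a hypothetical wrapper around equations 4.5 and the Fiala tire model (it is not the thesis code), and the initial guess determines which of the multiple equilibria is found.

Vx  = 1.05;                      % scheduled longitudinal velocity [m/s]
dEq = deg2rad(5);                % fixed equilibrium steering angle

% w = [r; beta; Fxr]; residual of the three equations in 4.5
resid = @(w) driftDynamics3DOF(w(1), w(2), Vx, dEq, w(3), params);

w0  = [3; -0.5; 0.2];            % initial guess near a counterclockwise drift
wEq = fsolve(resid, w0, optimoptions('fsolve', 'Display', 'off'));

Repeating this over a sweep of δ_eq values (with initial guesses near each equilibrium branch) produces the curves plotted in the figures below.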

Figure 4.2: (a) Yaw rate equilibria. (b) Slip angle equilibria. r and β over a sweep of steering angles, at Vx = 1.05 m/s.

Figure 4.3: (a) Front tire lateral force at equilibria. (b) Rear tire lateral force at equilibria. Fy^f and Fy^r over a sweep of steering angles, at Vx = 1.05 m/s.

Figure 4.4: Longitudinal force equilibria over a sweep of steering angles, at Vx = 1.05 m/s.

There are a few key features to note about these plots. First, most steering angles (especially center angles) are mapped to 3 different equilibrium values. For instance, in figure 4.2, the yaw rates at δ = 5° are −2.83, 0.81 and 3. This stems from the fact that equations 4.5 are a set of nonlinear equations; therefore, for a fixed δ_eq and Vx, there are multiple configurations of r, β, Fx^r which result in the equations simultaneously being 0. The main source of this nonlinearity is the lateral tire model in equation 3.13a. In fact, the three colored lines in the figures depend on the saturation level of the rear wheel lateral force Fy^r. In obtaining the figures above, cases where the front lateral force (Fy^f) is saturated (αf > αcr) are not considered, for reasons explained more clearly later. In drifting applications, it is typical to only consider rear tire saturation.

The three different colors establish the types of equilibria possible. As discussed in chapter 3, normal driving conditions are characterized by relatively small tire slip angles (and therefore lateral forces) for both the front and rear tires; therefore neither tire is saturated (e.g. αr < αcr). Drifting, on the other hand, is characterized by saturation of the rear tire. The first case (neither tire saturated) is shown by the red lines (cornering) in the figures above. The case where the rear tire is saturated is shown by the blue/green lines. Note that when the states (r_eq, β_eq, Vx) and inputs (δ, Fx^r) are fixed, the resulting vehicle trajectory is, for the most part, going to be circular. This is regardless of cornering or drifting.

Cornering:

In figures 4.2, 4.3 and 4.4, the cornering equilibria are denoted in red. As previously discussed, these correspond to conditions where neither the front nor the rear tire is saturated. In this region, the vehicle corners in circles at relatively low longitudinal forces (compared to the drifting driving force). In fact, for the most part, the slip angle, yaw rate and lateral forces are smaller in magnitude than the equivalent drifting parameters at the same steering angles.

Due to the relatively low values of our input (Fx^r) and states in this region, the cornering equilibria are so-called stable equilibria (as shown in subsequent sections). In this configuration, the vehicle has the natural tendency to keep cornering without additional control. This fact will prove useful in the identification of our vehicle parameters in chapter 5.

Clockwise and Counterclockwise Drifting:

In figures 4.2, 4.3 and 4.4, the "drift equilibria" are denoted by the green and blue lines. In chapter 3 we made the assumption that the positive angle direction is the counterclockwise direction (e.g. as on a unit circle). The blue line corresponds to a counterclockwise drifting direction (i.e. positive yaw rate), while the green line corresponds to clockwise (i.e. negative yaw rate) drifting. Furthermore, the counterclockwise drift is characterized by a negative vehicle slip angle (opposite in sign to the yaw rate), while the clockwise drift is characterized by a positive vehicle slip angle β. The absolute yaw rates and slip angles in these equilibrium regions are larger than those of the cornering equilibria. Added to this, the rear tire's lateral force is saturated (as previously discussed). This is why these equilibria are referred to as drift equilibria: they hold all of the same characteristics which we know drifting to hold. Also, while not easily seen in the figures above, at absolute steering angles beyond 20° (on both sides, for Vx = 1.05 m/s for our vehicle) the cornering equilibria cease to exist. The only remaining equilibria are drift equilibria, which all involve countersteering. For instance, in figure 4.2, at steering angles beyond −20° only counterclockwise equilibria remain; since the yaw rate is positive while the steering angle is negative, this indicates countersteering. At "center" steering angles (i.e. angles less than 20° in magnitude) there is some overlap, especially at lower velocities.

Lastly, the relatively high input (Fx^r) and state values in these two (green and blue) regions, as well as the sometimes unusual vehicle configuration (e.g. countersteering), lead to the drift equilibria being unstable. This is discussed more formally in the following section.

4.2.2 Phase Portrait

The equilibria in figures 4.2, 4.3 and 4.4 have been referred to as stable (red) and unstable (green and blue). This section formally discusses why this is so. The red curves correspond to fairly regular driving conditions where neither the front nor the rear tire is saturated; therefore the vehicle travels at a relatively low slip angle, yaw rate, lateral force, etc. The remaining equilibria (blue and green) correspond to rear tire saturation, where the vehicle has a relatively high yaw rate, slip angle, etc. This configuration of states is partly why the vehicle is said to be unstable in these regions, as the vehicle is unable to naturally sustain them without additional control.

For the remainder of this section, the steering angle δ = 5° (0.087 rad) is taken as an example. The velocity remains fixed at Vx = 1.1 m/s. We can generate a phase portrait (as discussed later) of our dynamics for r and β (the only remaining states) at this fixed angle and velocity. This is done using the 3 DOF dynamics equations in 4.5; since we assume that Vx is fixed, only the 2 DOF dynamics are used. The dynamics become:

\begin{bmatrix} \dot{r} \\ \dot{\beta} \end{bmatrix} = \begin{bmatrix} \dfrac{1}{J_z}\left(l_f F_y^f \cos(\delta) - l_r F_y^r\right) \\ \dfrac{1}{m V_x}\left(F_y^f \cos(\delta) + F_y^r\right) - r \end{bmatrix} \qquad (4.6)

For the sake of simplicity, the formulation above ignores Fx^r (i.e. it assumes ξR = 1 in calculating Fy^r). This is only done to simplify obtaining figure 4.5; Fx^r is considered in subsequent work. For a given steering angle, the magnitudes of r and β may be slightly higher under this assumption, because the lateral force Fy^r (at saturation) has not been derated (via ξR) by the longitudinal force. The overall behavior, however, remains the same.

A phase portrait, or vector field diagram, is a convenient way of visualizing (up to 3 dimensions) the behavior of a dynamical system for multiple initial conditions. For our 2-state system in equation 4.6, we generate the phase diagram by obtaining the derivative vectors v = [ṙ β̇]^T (i.e. equation 4.6) for multiple r and β points; a sketch of this procedure is given below. The resulting r vs. β phase plane is given in figure 4.5. Curves were overlaid on top of the vector fields to give a better representation of the trajectories, and the vectors themselves are normalized.
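A phase portrait like figure 4.5 can be reproduced in MATLAB by evaluating equation 4.6 on a grid of (β, r) points and plotting the normalized derivative vectors. In this sketch, twoStateDynamics is a hypothetical helper returning [ṙ; β̇] from equation 4.6, and Vx, delta and params are assumed to be defined.

[beta, r] = meshgrid(linspace(-0.8, 0.8, 25), linspace(-4, 4, 25));
rdot = zeros(size(r));  betadot = zeros(size(r));
for k = 1:numel(r)
    v = twoStateDynamics([r(k); beta(k)], Vx, delta, params);  % [rdot; betadot], eq. 4.6
    rdot(k)    = v(1);
    betadot(k) = v(2);
end
L = hypot(rdot, betadot);                     % normalize the vector lengths
quiver(beta, r, betadot./L, rdot./L);
xlabel('\beta [rad]');  ylabel('r [rad/s]');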

Figure 4.5: r − β phase dynamics when Vx = 1.1 m/s and δ = 5°, with marked equilibria. Red: cornering, Blue: counterclockwise drift, Green: clockwise drift.

As seen in the figure above, at δ = 5° there are 3 equilibria. These are the exact same equilibria obtained in the r and β plots at the same angle in figure 4.2. As aforementioned, there may be slight differences in the numbers since Fx^r was ignored (for simplicity) in obtaining the plot above.

At the cornering (red) equilibrium, the vector field converges to it from all sides. Hence any small deviation of the vehicle's states (yaw rate and slip angle) from this point will inevitably see the states naturally restored to the cornering equilibrium; as such, it is a stable equilibrium. As for both drift equilibria (blue and green), the vector field in one direction goes towards them, while in the other direction it goes away from them. This is referred to as a saddle point. For the case of the negative yaw rate (green) equilibrium, for instance, yaw rates larger than the equilibrium yaw rate have the tendency to diverge; hence constant control would be required to keep the vehicle at that particular equilibrium.

General Stability of Equilibria:

The discussion of equilibria thus far has mostly been qualitative and visual. The stability conditions can be more formally checked by linearizing our dynamics in equation 4.6 (accounting for Fx^r) about a given equilibrium. The linearization is done as extensively discussed in chapter 2, more specifically equations 2.5 and 2.6. Since both inputs Fx^r and δ are fixed, we only care about the internal system matrix (A in equation 2.6). Let equation 4.6 be represented by f(z) = f(r, β) = [f1(r, β) f2(r, β)]^T. Since the inputs are fixed, the linearization is given in equation 4.7.

\begin{bmatrix} \Delta\dot{r} \\ \Delta\dot{\beta} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1(r,\beta)}{\partial r} & \dfrac{\partial f_1(r,\beta)}{\partial \beta} \\ \dfrac{\partial f_2(r,\beta)}{\partial r} & \dfrac{\partial f_2(r,\beta)}{\partial \beta} \end{bmatrix}_{\substack{r = r_{eq},\ \beta = \beta_{eq} \\ \delta = \delta_{eq},\ F_x^r = F_{x,eq}^r}} \begin{bmatrix} \Delta r \\ \Delta\beta \end{bmatrix} = A\Delta z \qquad (4.7)

This gives our perturbation states (as discussed in chapter 2), where A and Δz are the Jacobian matrix and the perturbation state vector, respectively. Per the discussion around equation 2.12, our continuous-time linear system is asymptotically stable if the eigenvalues of A all have negative real parts, and unstable if there is at least one eigenvalue with a positive real part.

For the same Vx = 1.1 m/s and δ = 5° as in figures 4.2, 4.3 and 4.4, we get the following eigenvalues when our system is linearized at each equilibrium: [7.8, −20.5] (counterclockwise drift), [−16.88, −38.14] (cornering), [7.75, −19] (clockwise drift). As expected from the previous discussion, the cornering equilibrium has all negative eigenvalues (stable). The drift equilibria each have one positive and one negative eigenvalue (i.e. saddle points), and are ultimately unstable due to the single positive eigenvalue.
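In practice, the Jacobian in equation 4.7 can be approximated numerically by central finite differences about a computed equilibrium, and its eigenvalues inspected. The sketch below assumes a hypothetical helper driftDynamics2DOF returning [ṙ; β̇] of equation 4.6 (including the Fx^r derating), and that rEq, betaEq, FxrEq, Vx, dEq and params have already been obtained.

f  = @(w) driftDynamics2DOF(w, Vx, dEq, FxrEq, params);   % w = [r; beta]
w0 = [rEq; betaEq];                                       % equilibrium to linearize about
h  = 1e-6;
A  = zeros(2, 2);
for j = 1:2
    e = zeros(2, 1);  e(j) = h;
    A(:, j) = (f(w0 + e) - f(w0 - e)) / (2*h);            % central finite difference
end
eig(A)   % any eigenvalue with positive real part => unstable equilibrium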

It is important to note that this analysis was done (for demonstrative purposes) at a single steering angle δ and velocity Vx. But the general idea holds: cornering equilibria are stable, and drift equilibria are unstable. Hindiyeh [31] discusses more general stability analysis. Other works, such as [32], plot the eigenvalues for a fixed velocity over a large range of steering angles, which clearly shows the same expected behavior.

Varied Velocity and Equilibria:

What happens to our equilibria as the fixed longitudinal velocity parameter Vx is varied? As seen in figure 4.6, the yaw rate and slip angle (at a fixed steering angle) become smaller as the longitudinal velocity is increased. The same happens to all of the other parameters (e.g. the lateral and longitudinal forces). This means that the vehicle starts drifting at smaller yaw rates and slip angles for the same steering angle. The region of vehicle cornering also starts to shrink: at higher velocities, the vehicle is only capable of drifting beyond certain angles, because the rear tire saturates much earlier. [11] briefly discusses the effect of changing velocity on the equilibria.

Figure 4.6: (a) Yaw rate equilibria for different Vx. (b) Slip angle equilibria for different Vx. r and β for Vx = 0.3, 0.6, 0.9 and 1.1 m/s.

The discussion above briefly brings up another point not yet touched upon. It has already been discussed that the equilibria (whether cornering or drifting) are essentially all circular trajectories in space, given the fact that all of our states and inputs are fixed at equilibrium. A natural question one might ask is: what is the radius of cornering or drifting of the vehicle? The following relationship was derived:

\frac{2\pi}{r} = \frac{2\pi R}{V_{tot}} \implies R = \frac{\sqrt{V_x^2 + V_y^2}}{r} \qquad (4.8)

Vy is the same as before, and it is related to β through β = arctan(Vy/Vx). R is the radius of drift or cornering, and r is the vehicle's yaw rate. The denominator of the right-hand equation is merely the vehicle's total speed (i.e. V_tot = √(Vx² + Vy²)). In deriving the relationship above, we are essentially equating the time it takes the vehicle to complete one yaw revolution with the time it takes to travel once around the circle. Hence, for a fixed longitudinal velocity Vx, the radius of drifting is limited by the range of possible yaw rates and drift angles.

4.3 Controls Overview

In this section an overview of control systems is given as it pertains to tracking and regulation. Two control methods are overviewed: PID (Proportional Integral Derivative) control and LQR (Linear Quadratic Regulator) control.

Crudely speaking, our aim in control is typically to track desired states. This is done by choosing appropriate control actions which minimize any deviation between the current value of a state and its desired value (i.e. feedback control). For instance, if we desire that our vehicle travels at a given velocity Vd, we would aim to minimize (i.e. drive near zero) our velocity error ΔVx := e_{Vx}(t) := Vx − Vd, where Vx is our current velocity. Control is done by defining the control action as a function of our error (e.g. u(ΔVx)), which automatically increases/decreases our base input depending on the error level. The structure of u(ΔVx) (in our example) depends on the type of controller (i.e. LQR, PID, MPC, etc.), as detailed below.

4.3.1 Tracking and Regulation

Proportional Integral Derivative (PID) Control:

PID control is the most widely used control method [4]. Given a feedback error e(t) (as previously discussed), it acts on this error in 3 ways:

• Proportional Action: $K_p e(t)$

– Scales the error by a constant gain factor and feeds the result back into the system as input. In tracking applications, our goal is to get e(t) to 0; hence the input automatically adjusts depending on the error level. Larger gains (Kp) tend to lead to faster convergence of the error to 0, but if too large they can lead to large overshoots (larger e(t) before settling), or may even become oscillatory (e.g. when even smaller errors are amplified) [40].

• Integral Action: $K_i \int_0^t e(\tau)\,d\tau$

– Sums over all past errors. Good for decreasing any steady-state error left by a proportional-only controller (e.g. e(t) = a where a ≠ 0 as t → ∞); hence it helps the error settle to zero. But gains that are too large can lead to the same stability/oscillation issues [40], to where the error grows out of control.

• Derivative Action: $K_d \frac{de(t)}{dt}$

– Penalizes instantaneous (quick) changes in the error. Hence, it can decrease oscillations (i.e. quick changes in error) and overshoots. At the same time, since quick changes in error are penalized, it may take longer for the error to settle to zero. It also has another notable flaw: it has the tendency to amplify high-frequency noise (e.g. small sensor noise). Even small-amplitude sensor noise can have a rate of change large enough to destabilize the control.

The full PID controller is given as:

u = K_p e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d \frac{de(t)}{dt} \qquad (4.9)

There is a lot more to a PID controller, as discussed in sources such as [4, 40]. For certain types of systems, these gains hold information which gives better physical intuition on how to drive the error to zero. In order to apply PID control digitally, a discretized version of the PID controller is needed. This essentially means that, for instance, the derivative of the error can be estimated with a finite difference (e.g. equation 2.2), and the integral can be estimated with a rectangular/trapezoidal approximation.
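A minimal discrete-time implementation of equation 4.9 under these approximations might look as follows (a sketch only, with sampling time T and the running values eint and eprev persisting between calls):

function [u, eint, eprev] = pidStep(e, eint, eprev, Kp, Ki, Kd, T)
% One sampling step of a discretized PID controller (eq. 4.9)
eint  = eint + e*T;          % rectangular approximation of the integral
edot  = (e - eprev)/T;       % finite-difference approximation of the derivative
u     = Kp*e + Ki*eint + Kd*edot;
eprev = e;
end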

Linear Quadratic Regulator (LQR):

As simple as PID-based control is to implement, certain types of systems have controllers better suited to them. The discussion of PID control in equation 4.9 only deals with a single error, i.e. it assumes a single-input single-output (SISO) situation, whereas our system may be a multiple-input multiple-output (MIMO) system. In such a case, the errors of multiple states (which may be highly interdependent) need to be regulated to zero, and PID control becomes relatively non-trivial. Though, as done in subsequent sections, if the errors are carefully defined then PID control is feasible. However, for MIMO systems there are better suited controllers, depending on the system.

If our system is known to be linear (i.e. of the forms of equations 2.8a and 2.9) then it may be controlled using LQR. LQR is an optimal controller which penalizes any deviation of our states (or errors) from zero, and it can also penalize changes from some nominal input. The feedback input resulting from LQR is guaranteed to be the best (most optimal) possible input for a chosen set of "tuning" parameters (see equation 4.11).

In tracking problems (as is of interest in our work), we typically care about the perturbation (i.e. error) around some nominal state. As introduced in equation 2.8a, the perturbation of our linear system has the following dynamics:

\Delta\dot{x} = A\Delta x + B\Delta u

Its discretized version (with sampling time T) is as explained in equation 2.11:

\Delta x_{k+1} = A_d \Delta x_k + B_d \Delta u_k

In LQR (discrete-time version) our aim is to find Δu_k (for times k = 0, ..., N−1) such that the following cost function is minimized:

\min_{\Delta u_0, \ldots, \Delta u_{N-1}}\ \Delta x_N^T Q_f \Delta x_N + \sum_{k=0}^{N-1}\left(\Delta x_k^T Q \Delta x_k + \Delta u_k^T R \Delta u_k\right) \qquad (4.10)

Q, Qf and R above are tuning parameter matrices for the state, terminal state and input deviations, respectively. They determine the level to which input/state deviations are penalized. Q and Qf are assumed to be symmetric positive semidefinite matrices, that is, all of their eigenvalues must be greater than or equal to zero. R must be symmetric positive definite, meaning that its eigenvalues are strictly positive.

The LQR formulation given in equation 4.10 is the so-called discrete finite-horizon formulation. "Finite horizon" stems from the fact that N does not go to infinity; this means that there is a terminal state (or deviation) which it is imperative to reach at the end of our control. In the continuous-time equivalent (i.e. for equation 2.8a), the cost function above is an integral over time, as opposed to a summation. In this work we are not concerned with a particular final state, hence we want the infinite-horizon LQR (i.e. N → ∞), and Qf might as well be Q. The matrices are typically defined as given in equation 4.11; if Qf were considered, it would have much the same structure as Q.

Q = \begin{bmatrix} q_1 & 0 & \dots & 0 \\ 0 & q_2 & \dots & 0 \\ \vdots & \vdots & \ddots & 0 \\ 0 & 0 & \dots & q_n \end{bmatrix} \qquad R = \begin{bmatrix} r_1 & 0 & \dots & 0 \\ 0 & r_2 & \dots & 0 \\ \vdots & \vdots & \ddots & 0 \\ 0 & 0 & \dots & r_m \end{bmatrix} \qquad (4.11)

With these definitions, it is easy to see that equation 4.10 penalizes the squared error of each state and input. q_i and r_i determine by how much to penalize the i-th state and input, respectively.

The question remains of how to minimize the cost function in equation 4.10. There are multiple approaches, which are covered in sources such as [5]. When the optimization problem is solved, we get the following relationships:

P = A_d^T P A_d - (A_d^T P B_d)(R + B_d^T P B_d)^{-1}(B_d^T P A_d) + Q \qquad (4.12a)
K = (R + B_d^T P B_d)^{-1} B_d^T P A_d \qquad (4.12b)
\Delta u = -K \Delta x \qquad (4.12c)

Here P is given by a recursive relationship commonly referred to as the (discrete) algebraic Riccati equation; there is also a continuous-time version with slightly different terms. The third equation indicates that the optimal input for a linear system, with respect to the cost function in equation 4.10, is a proportional-gain state feedback, where the gain K is found based on the system matrices, the tuning parameter matrices, and the aforementioned P from the algebraic Riccati equation.

The obtained input Δu_k (i.e. Δu_k = u_k − u_k^{ref}) is that of the perturbation around our nominal input. The input to the full system is given in equation 4.13, where u_k^{ref} is the desired (nominal) input:

u_k = -K\Delta x_k + u_k^{ref} \qquad (4.13)

One thing is important to note: a sufficient condition for using LQR on a linear system (A_d, B_d) is that the system is controllable (as discussed around equation 2.13); obviously, we want to be able to use inputs to reach any arbitrary state. However, this is not a necessary condition for using LQR. If the uncontrollable parts of the system are stable (so-called stabilizability), then LQR can still be used to stabilize the system through the remaining controllable part of the system.

Lastly, the gains K of the continuous-time system (A, B) and of its discretized counterpart (A_d, B_d) tend to be vastly different. Hence, it is important to distinguish between the two in solving a problem. MATLAB has methods, lqr() and dlqr(), which can directly calculate the gains K.
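As a sketch, the discrete-time gain of equations 4.12 and 4.13 can be obtained directly with these methods. Here A and B are assumed to be a continuous-time linear model (e.g. the linearization in equation 4.7), T is the sampling time, and x, xref, uref, Q and R are assumed to be defined already.

sysd   = c2d(ss(A, B, eye(size(A)), 0), T);   % discretize the continuous-time model
[K, P] = dlqr(sysd.A, sysd.B, Q, R);          % gain and Riccati solution, eq. 4.12
u = -K*(x - xref) + uref;                     % full control input, eq. 4.13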

Some limitations of LQR are important to note, though these were not limiting for our project. LQR makes no assumption about limitations on our inputs; though large inputs can be penalized through the R matrix in the cost function, LQR is a relatively unconstrained optimization problem. Also, as discussed, it only works for linear systems. Another type of control, MPC (Model Predictive Control), can take care of these problems. It has a cost function similar to that of LQR, but state and input constraints can be specified. It is more of an online controller, which optimizes the tracking on the fly and applies the first input of the optimal sequence. MPC is discussed more formally in [2]. The Multi-Parametric Toolbox [12] was used early on in this project, as was MATLAB's Adaptive MPC (which allows for the updating of fixed states, e.g. velocity in our case). In the end, a combination of LQR and PID control was more than adequate for our final control.

4.4 Deviation Definitions and Control

4.4.1 Deviation Definitions

The work in this thesis is concerned with simultaneous path tracking (see section 4.1) and drift tracking (see section 4.2). From a controls perspective, these are typically vastly different problems. The differences between them are laid out in this section.

As discussed in both sections 3.4 and 4.2, it is preferable that the velocity be treated as a fixed parameter which is scheduled, as opposed to its dynamically varying form given in equation 3.5. Any application requiring a change in velocity would merely be velocity-scheduled. As such, most of the control work for this thesis involves the design of active steering controllers. Since the velocity is fixed, so is Fx^r in our full dynamics in equation 3.7. As will be shown more formally in the subsequent section, this is feasible during both regular path tracking and drifting. For drifting, Hindiyeh and Voser [31, 39] showed the vehicle's ability to track a given drift equilibrium at a fixed velocity, where a steering-only controller is used to sustain the drift. That said, the vehicle was shown to have improved steady-state drift tracking when both steering and throttle control were coordinated. This is especially useful when the front tire is at risk of saturating (which limits steering control). Due to the added complexity of such a scheme, however, it is not considered in this work. The steering-only controller is able to achieve our end goal, and it allows the use of linear controllers for simultaneous path tracking and drifting.

Velocity Control:

As seen from our dynamic model in equation 3.5, the force actuation Fx^r only directly affects our longitudinal velocity Vx. Hence, the longitudinal force input is often given as follows:

F_x^r = -K_{V_x}\Delta V_x \qquad (4.14)

Kritayakirana [20] showed that this simple proportional state-feedback controller is capable of tracking the velocity of a full-sized vehicle at "the limits of handling". In his work, a single gain value K_{Vx} was shown to track a continuously changing reference velocity. Hence, it is more than adequate for tracking our single (possibly scheduled) reference velocity.

During cornering (normal driving) the feedback scheme above is adequate, and often used. During drifting, however, we know the equilibrium longitudinal force at a fixed velocity from the equilibrium analysis in section 4.2 (i.e. the $\dot{V}_x$ equation in 4.5). Hence, for a steady-state drift, the input longitudinal force can include a feedforward of this equilibrium force:

F_x^r = -K_{V_x}\Delta V_x + F_{x,eq}^r \qquad (4.15)

Here F_{x,eq}^r is the aforementioned steady-state equilibrium force. A pure feedback controller would be too aggressive, hence the feedforward term helps. Lastly, as well as the simple proportional feedback scheme has been shown to work, a PID controller (e.g. equation 4.9) was used to slightly improve the velocity tracking in both cornering conditions and drifting, still accounting for the feedforward longitudinal force (e.g. PID(ΔVx) + F_{x,eq}^r).

Path Control:

The path following control is based on the vehicle position (x, y) and orientation (ψ) errors. The clothoid map scheme discussed in section 4.1 lays the foundation for our path in this work; hence the desired path trajectories are obtained from the built clothoid map. Aside from position information, by construction, clothoid-based planning gives us information such as the path orientation (θ) and curvature (κ), which will come in handy.

Many path following control schemes for typical (e.g. cornering) driving conditions exist in the literature. Snider [36] surveys a wide range of path following controllers, including the so-called pure pursuit controller and Stanley's method, which are popular in mobile robotics. He also discusses dynamic path following controllers based on the derivatives of the path error and orientation error (relative to the tangent at the closest path point). Such dynamic path controllers can be appended to equation 3.17 for a "full-state" vehicle control at fixed (or scheduled) velocities.

As for the "limits of handling": center-of-percussion based steering control has been shown to have strong path tracking abilities [20]. This is a dynamic path and orientation following controller based on the center of percussion of the vehicle, as opposed to its center of gravity. This simplifies the controller scheme because the center of percussion is unaffected by the rear tire lateral dynamics, which are typically large at the limits of handling. It is a somewhat linear controller, allowing for the use of optimal control tools such as LQR. The path following scheme in equation 4.16 was used in this work; it was shown to do as good a job of path following during cornering as near the limits of handling [20].

\delta_{lk} = -K_{lk}\left(e + x_{la}\sin(\Delta\psi)\right) + \delta_{FFW} \qquad (4.16)

Here K_lk is our path deviation gain; x_la is a look-ahead distance from the vehicle's center of gravity, as seen in figure 4.7, and is a tunable parameter. δ_FFW is our feedforward input, as defined in equation 4.18. Δψ is our path orientation error; in typical driving conditions it is the difference between the vehicle's yaw (ψ) and the path orientation (θ := ψd). Lastly, e is our path error, as defined in equation 4.17: it measures the (Euclidean) distance between our current location (x, y) and the closest point (xd, yd) on the path.

e = \sqrt{(x - x_d)^2 + (y - y_d)^2} \qquad (4.17a)

\Delta\psi = \psi - \psi_d \qquad (4.17b)

\delta_{FFW} = \left(L + \frac{K_{ug} V_x^2}{g}\right)\kappa(q) \qquad (4.18a)

K_{ug} = \frac{F_z^f}{C_{\alpha,f}} - \frac{F_z^r}{C_{\alpha,r}} \qquad (4.18b)

K_ug is our so-called understeer gradient. During turning, it determines whether the vehicle has a natural tendency to drive away from the turn (understeer) or towards the turn (oversteer). κ(q) is the curvature at a given point of our path, as defined in equation 4.4. This feedforward (δ_FFW) term is derived from eliminating the steady-state lateral position error from our path [30, 36].

Figure 4.7: Combined path error (e_la) and its components: orientation error (Δψ) and position error (e).

As useful as the feedback-feedforward scheme given in equation 4.16 is, [18] shows that it may not always work as intended, especially in the presence of a non-negligible vehicle slip angle β (as is typical outside of normal cornering conditions, e.g. during drifting). For instance, a zero e_la (the feedback term in equation 4.16) can be obtained while both the path error and the orientation error are non-zero (e.g. of equal magnitude and opposite sign). Equation 4.16 is modified to include a steady-state slip angle to take care of this:

\delta_{lk} = -K_{lk}\left(e + x_{la}\sin(\Delta\psi + \beta_{ss})\right) + \delta_{FFW} \qquad (4.19)

Under cornering conditions: β_ss = ( m l_f V_x² / ((l_f + l_r) C_α,r) − l_r ) κ(p) [18]. During steady-state drifting, our desired slip angle is the one obtained from our desired drift radius in equation 4.8.

Also, during drifting, δ_FFW must not be included in the feedforward; it is instead replaced with the desired steering angle (typically countersteer) obtained from the desired drift equilibrium, as discussed in the following section.
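A minimal sketch of the steering law in equation 4.19 is given below. The structure pathPt, its fields, and the function name are placeholders (not the project's code); β_ss and δ_FFW are assumed to come from the cornering or drift planner as described above.

```matlab
function delta = lookaheadSteering(x, y, psi, pathPt, Klk, xla, beta_ss, delta_ffw)
% Minimal sketch of the look-ahead steering law in equation 4.19.
% pathPt is assumed to carry the closest path point (xd, yd) and its
% orientation psid; all names here are illustrative placeholders.
e     = sqrt((x - pathPt.xd)^2 + (y - pathPt.yd)^2);    % path error, equation 4.17a
dpsi  = psi - pathPt.psid;                              % orientation error, equation 4.17b
delta = -Klk*(e + xla*sin(dpsi + beta_ss)) + delta_ffw; % equation 4.19
end
```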

Lateral Control:

Most of the analysis in section 4.2 was based on the steady-state lateral behavior of the vehicle. During typical cornering conditions, the lateral dynamics are minute and the linear model given in equation 3.17 is enough to represent the behavior of the vehicle. As shown in section 4.2, the vehicle is already relatively stable in that region, but LQR can further be used to track a particular desired slip angle (or lateral velocity) and yaw rate. In steady-state drifting, however, equation 4.7 becomes more suitable. Regardless, in either case:

\delta_{dr} = -K_{V_y}\,\Delta V_y - K_r\,\Delta r + \delta_d \qquad (4.20)

Voser [39] used this feedback-feedforward scheme for steady-state drifting through LQR.

Equation 3.17 could be changed back to depend on V_y instead of β using the same assumptions made in deriving equation 4.5. The reason given by Voser for using V_y instead of β is that β has more terms dependent on V_x, which may affect our longitudinal dynamics (ideally fixed). δ_d in drifting is the steering angle obtained from the desired steady-state drift equilibrium (typically countersteer). In cornering it is typical to want zero vehicle slip, and only the yaw (e.g. equation 4.2) is tracked as opposed to the yaw rate; the feedforward steering in that case remains the original δ_FFW in equation 4.18 (i.e. no new feedforward is added). Hindiyeh [31] discusses a lateral controller for drifting with a more complex loop and error structure than the one in equation 4.20. It was shown to improve drifting ability, especially in cases where the vehicle was at risk of exiting the drift. For our work, Voser's original scheme (equation 4.20) is more than adequate, and it allows for the use of linear control, LQR.
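As an illustration of how the gains in equation 4.20 can be obtained, the sketch below calls MATLAB's lqr(). The matrices A and B stand in for the Jacobians of the lateral dynamics (equation 4.7) linearized about the chosen drift equilibrium; the numerical values and weights are placeholders, not identified quantities.

```matlab
% Minimal sketch of computing the drift tracking gains in equation 4.20 via LQR.
A = [-5  -1;        % d(Vy_dot)/d[Vy r] at the equilibrium (assumed values)
      2  -3];       % d(r_dot)/d[Vy r]  at the equilibrium (assumed values)
B = [ 4;            % d(Vy_dot)/d(delta) (assumed)
      8];           % d(r_dot)/d(delta)  (assumed)
Q = diag([1 10]);   % weight yaw-rate error more heavily (tuning choice)
R = 1;              % steering effort weight
K = lqr(A, B, Q, R);            % K = [K_Vy, K_r]
% delta_dr = -K*[dVy; dr] + delta_d, per equation 4.20.
```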

4.4.2 Vehicle Controller

One of the biggest advantages of using clothoid-based maps for global path planning is that the curvature at any point is one of only three types (as shown in figure 4.8). These are: zero (zero slope and value), linear (increasing/decreasing slope), and constant (zero slope and non-zero value). The first (zero) corresponds to straight parts of our path, the second (linear) corresponds to spirals of changing curvature, and the third (constant) corresponds to fixed-radius circles (of radius R = 1/κ, as shown in equation 4.4). As shown by [37], almost any arbitrary track can be fitted with a clothoid map by fitting optimal linear curvatures through the path's original curvature. Since by definition a clothoid's curvature can only be linear, any arbitrary track can be made through the combination of these three types of curvature, as shown in figure 4.8. In global planning applications, this can simplify control design, as (depending on the application) one would only need to specify the desired behavior in one of these three curvature regions of our track.

Figure 4.8: Arbitrary clothoid map from 3 types of curvatures (straight, spiral and arc) (Courtesy of [10]).

Figure 4.9: Vehicle control architecture.

Our project has the control structure above (figure 4.9). As discussed in section 4.4.1, the definition of our path planning and lateral control changes depending on whether the vehicle is cornering or drifting, and these lay the basis of our control. For reasons discussed in chapter 6, our clothoid-based test path is given in figure 6.5. To use steady-state drifting, the path we desire to drift about must either be circular (fixed radius) or approximable as a semi-circle. As such, the entry and exit points of the circle would then be spirals (e.g. the blue regions in figure 6.5). Knowing the curvature of the desired drifting circle (hence its radius), equation 4.8 is used to find the reference yaw rate and slip angle to make this possible at a fixed velocity. The appropriate steering angle, equilibrium longitudinal force, and other equilibrium parameters are found for such a circle, or for multiple circles if the trajectory is made from multiple turns. Using those, our control in figure 4.9 is carried out. Our longitudinal force F_x^r is fed back from our velocity error: during regular driving (cornering) equation 4.14 is used, and during drifting equation 4.15 is used, with PID control used to track our desired velocity. In path tracking, equation 4.19 is used during both drifting and cornering, with the aforementioned appropriate changes made. As discussed, PID control is also used to track the feedback term in path tracking. The lateral control in equation 4.20 uses LQR (based on equation 4.7) about our obtained equilibrium states during drifting, while during cornering LQR of equation 3.17 can be used to find our tracking gains. The switch between the regular driving and drifting controllers happens in the transition region leading into the circle (the blue region in figure 6.5). As opposed to using an open-loop scheme to bring our vehicle into drifting conditions, the transition region is used for this. If the vehicle's rear tires are unable to saturate enough for drifting to occur in that transition region, then the vehicle merely drives regularly to clear the path. Hence the entire path has a "regular driving" planner, but the designated circular parts have a drift planner, which is abandoned in favor of regular path tracking if the vehicle is unable to enter drifting conditions. A minimal sketch of this mode-switching logic is given below.
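The sketch is illustrative only; the flags and names (segmentType, rearSaturated) are hypothetical stand-ins for the planner output and the tire-saturation check, not the project's implementation.

```matlab
function mode = controllerMode(segmentType, rearSaturated)
% Sketch of the cornering/drift mode switch in figure 4.9 (placeholder names).
% segmentType:   'straight' | 'spiral' | 'arc', from the clothoid planner.
% rearSaturated: true when the rear tire slip angle is past saturation.
if rearSaturated && (strcmp(segmentType,'arc') || strcmp(segmentType,'spiral'))
    mode = 'drift';    % LQR about the drift equilibrium (equations 4.7, 4.20)
else
    mode = 'corner';   % regular path tracking (equations 3.17 and 4.19)
end
end
```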

Chapter 5 System Architecture and Identification

Prior chapters introduced our vehicle model, as well as theories pertaining to drift maneuver planning and control design. This chapter focuses more on our "system" (as discussed in chapter 2), or hardware. The first two sections discuss our vehicle specifications and project architecture, respectively. The final section delves into system identification: this is how the vehicle parameters from chapter 3, as well as our vehicle inputs, are estimated.

The vehicle of choice for our project was a 1:28 scale vehicle. It was originally a radio controlled vehicle which was modified to include components more suited to our project.

The physical vehicle testing space was too compact for larger-scale vehicles (e.g. 1:10 scale). Beyond that, due to its relatively low cost and ease of modification/repair (in case of testing accidents), the 1:28 scale vehicle was more than adequate for our project. That said, most of the discussion in this work should apply to larger-scale vehicles as well, with appropriate modifications.

5.1 Vehicle Specifications

Figure 5.1: Project’s scaled vehicle (Side view).

Figure 5.1 shows our project's vehicle. It is a 1:28 scale car. The base vehicle is an Atomic DRZ RWD (Rear Wheel Drive) vehicle. It was modified by adding hardware/circuitry to enable the actuation of the vehicle through Bluetooth commands (as opposed to radio). As such, communication modules were also added, which enable the vehicle to communicate with our computer (where our algorithms were hosted). Our vehicle is composed of the following parts:

• Servo Motor: Controls the vehicle's front steering angle. Brand: SG90 9G Servo. Operating Voltage: 4.8-6 volts (low enough to be powered by an Arduino pin). Speed: no-load running speed of 0.09 seconds/60° at 4.8 volts. Angle Range: 120°. Dimensions: 23*12*29 millimeters (Length*Width*Height), 9 grams (Weight). Pulse Width Range: 900-2100 µs. Running Current: 0.4 amps at 4.8 volts.

• BL-DC (Brushless DC) Motor: Provides the vehicle's rear-wheel drive torque/force. Brand: 8700KV Brushless Inrunner. Rated Values: 5.8 volts (Max Voltage), 12 amps (Max Current), 80 watts (Max Power), 50,000 rotations per minute (Max RPM). Dimensions: [13/20]-[6/1.5] ([Diameter/Length]-[Shaft Length/Shaft Diameter]).

• Arduino Nano: Small development board which controls our driving motor (through the ESC shield) and servo motor (steering). Receives commands from the computer through Bluetooth. Voltages: 5 volts (Operating Voltage), 7-12 volts (Input Voltage, i.e. Vin pin). Pin Currents: 0.04 amps (Input/Output pins). Dimensions: 45*18 millimeters (Length*Width), 7 grams (Weight).

• LiPo Battery Pack: Provides power to the BL-DC motor and Arduino board. Brand: Crazepony. Voltage: 2-cell, 7.4 volts. Battery Capacity: 400 milliamp-hours. Discharge Rate: 30C. Dimensions: 38*19*19 millimeters (Length*Width*Height), 22 grams (Weight).

• ESC (Electronic Speed Control) Shield: Device (i.e. shield) which allows for the control of our BL-DC motor through an Arduino. Running Current: 30 amps. Dimensions: 68*25*8 millimeters (Length*Width*Height), 39 grams (Weight).

• Bluetooth Module: Allows for serial communication between the computer (MATLAB) and the Arduino. Brand: DSD TECH HC-06. Baud Rate: 9600 bits per second. Operating Voltages: 3.3-6 volts (Power/Transmit Signal), 3.3 volts (Maximum for Receive Signal). Current: 0.03 amps (Unpaired), 0.01 amps (Paired). Dimensions: 28*15*3 millimeters (Length*Width*Height), 13.6 grams (Weight). Mode: Slave (only receives data). Range: 9 meters.

The values above were provided by the manuals/datasheets of the individual items. To be noted additionally: as seen in figure 5.1, a small (half-size) breadboard was used to connect the different electrical components of our project. The board was well suited to our vehicle's and Arduino Nano's size. It was fastened to the vehicle using a small velcro cut-out. Its dimensions are 84*56*8 millimeters (Length*Width*Height), 10 grams (Weight).

5.2 Vehicle Architecture

Figure 5.2: Project architecture.

Figure 5.2 gives an overview of the project’s hardware/communication architecture set-up.

There were 3 main components (with overlap):

OptiTrack:

Natural Point's OptiTrack system [15] acted as this project's main sensor. The OptiTrack system is a set of high-accuracy (millimeter-level accuracy) and low-latency motion-capture infrared cameras. It captures information (e.g. location, orientation) about an object from infrared light reflected off special markers placed on the object of interest (e.g. the grey round balls in figure 5.1). The OptiTrack was well suited to our indoor tracking set-up.

Our OptiTrack system has 6 cameras in total. Each camera captures 2D images, which are then stitched together and reconstructed into a 3D representation of our object as a rigid body [15]. The cameras must be placed in such a way that a given marker is always visible to at least two cameras [15]. In our case, the reconstruction occurs in Natural Point's Motive:Tracker (version 2.1) software, which receives information from the cameras via Ethernet. The cameras are mounted on stationary mounts (as shown in figure 5.4). Any movement of the cameras requires that they be re-calibrated through the Motive software. After calibration, we typically mark/specify our inertial frame coordinate system (as discussed in chapter 3). Then the object (e.g. the vehicle) is placed on the testing surface, which allows us to create a rigid body from the markers placed around the vehicle. Around 8 markers were used in our case; this ensured that no frames were dropped.

The current version (version 2.1) of the Motive software allows for 6 degree-of-freedom tracking of our object: the 3 translational locations (X, Y, Z) and rotational orientations (yaw, pitch, roll). But it is important to note that, depending on how the coordinate system is set after calibration, the software's denotations of these values (e.g. X, Y, Z) may not necessarily correspond to their conventional definitions. For instance, in our case the ground plane's vertical axis was X (as opposed to Y), the horizontal axis was Z (as opposed to X), and the axis perpendicular to the ground plane was Y (as opposed to Z). The pitch, yaw and roll were also rotated to correspond to these new axes, e.g. yaw about the Y-axis. To avoid the phenomenon of gimbal lock, the software separates the different orientation axes into quadrants. For instance, in one full rotation of our object, the pitch/roll went from 0° to 180° and then from -180° to 0° (normally: 181° to 360°). The yaw was compartmentalized into four 90° quadrants. This proved relatively inconvenient for remapping our values back to 0° to 360°, since the yaw is the only orientation of interest in this work. In our project, this was handled by switching the yaw axis with the pitch (or roll). This allowed the "new yaw" to be separated into 2 quadrants of 180°, which was simpler to map back to 0° to 360°, as needed.

To the author's knowledge, the three aforementioned location and orientation quantities are the only things provided by the current version (version 2.1) of the Motive:Tracker software. Of those, we only care about the conventional X-Y information and the conventional yaw (after the aforementioned changes have been made). All other information, such as our longitudinal/lateral velocities, yaw rate and body-slip angle, was estimated based on those three. The OptiTrack is accurate enough that a finite difference (e.g. equation 2.2) can do a decent job of approximating our other vehicle states (i.e. equation 3.7). For obvious reasons, the instantaneous estimations may sometimes be noisy; as such, an Extended Kalman filter was used to refine these state estimations. Other works such as [32] use an IMU (Inertial Measurement Unit) to obtain the yaw rate and body-frame velocity/acceleration information.

Figure 5.3: Motive screenshot of OptiTrack system during object selection.

Figure 5.4: Three of the 6 cameras of our OptiTrack system.

MATLAB:

MathWorks' MATLAB development environment acted as the "brain" of this thesis's work. All algorithms from the theories developed in previous chapters were implemented in MATLAB. NaturalPoint, Inc. [15] provides an SDK (Software Development Kit) which can connect the OptiTrack system and Motive software directly to MATLAB. It is called the NatNet SDK, and it allows location/orientation information of our object to be streamed directly to MATLAB in real time. For this project a sampling rate of up to 50 Hz was achieved directly in the MATLAB IDE.

MATLAB used the information streamed from the OptiTrack system to make the desired modifications/calculations in implementing our control algorithms. MATLAB's Bluetooth module in the Instrument Control Toolbox allowed data to be sent to the Arduino. After MATLAB did all the controls work and obtained the desired steering angle and driving force, this information would be converted into pre-configured values which the Arduino could understand (e.g. Pulse Width Modulation signals). Then, these values would be sent from MATLAB to the Arduino via Bluetooth, which would in turn take the values and convert them into the necessary voltage values to steer and drive our vehicle in the desired way.

Vehicle:

As discussed in section 5.1, the final vehicle included various components. The base 1:28 vehicle came mainly equipped with its chassis and a servo motor (for front-wheel steering). A 3-phase BL-DC motor was later added for rear-wheel drive. In order to connect a BL-DC motor to the Arduino, an ESC (Electronic Speed Control) shield (i.e. a hardware add-on) is needed, from both a safety and a basic operational perspective. Since only one vehicle was being controlled in this project, a Bluetooth module was adequate (communication-wise) to connect our Arduino to our computer (i.e. MATLAB). Lastly, the entire vehicle was powered from a single 2-cell 7.4 volt battery. The vehicle had no component with significant power requirements, hence a 7.4 volt 2-cell battery was more than adequate.

There is one important thing to note about the Bluetooth module (HC-06) used in this project: though it can be powered by up to 6 volts (and transmit up to a similar voltage to the Arduino), its "receiver" pin can only take up to 3.3 volts. This holds even if this pin is not explicitly used, as by default the Arduino's pins sit at 5 volts (i.e. high, via pull-up resistor). Hence two resistors in a voltage divider configuration must be used to bring these 5 volts down to the 3.3 volts suitable for the Bluetooth module's receiver pin.
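As a quick check, with hypothetical resistor values R_1 = 1 kΩ (in series from the Arduino pin) and R_2 = 2 kΩ (to ground) — a common choice, not necessarily the values used in this project — the divider output is:

V_{RX} = V_{pin}\,\frac{R_2}{R_1 + R_2} = 5\ \text{V} \cdot \frac{2\,\text{k}\Omega}{1\,\text{k}\Omega + 2\,\text{k}\Omega} \approx 3.3\ \text{V}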

Table 5.1: Tire and vehicle parameter values

Parameter    Value         Units
m            0.226         kg [Kilograms]
lf           0.057         m [Meters]
lr           0.063         m [Meters]
Jz           3.8 · 10^-4   kg · m^2
µf           0.37          No units
µr           0.33          No units
Cα,f         2.4           N/rad [Newtons/radian]
Cα,r         2.72          N/rad [Newtons/radian]

5.3 System Identification

In order to use any of the insights gained from previous chapters, our vehicle model (i.e. equation 3.7) needs to be fitted to our actual vehicle (i.e. figure 5.1). This is done by determining the (thus far) unknown model parameters through data collection. This process is referred to as "system identification". System identification (sometimes referred to as "modeling") roughly falls into three categories: white-box, grey-box and black-box identification.

White/black-box identification: In black-box identification (or modeling), the system's underlying model is not assumed to be known [25] (i.e., in our case, as if we had no knowledge of equation 3.7). Some "basis function" is chosen, and data is used to find parameter values which best fit our basis function. The fitted basis function is hoped to model the behavior of our system given the same inputs/initial conditions. This does not, however, mean that nothing is known of the potential structure of the system. For instance, if the system is observed over a long period and is found to exhibit linear behavior (e.g. equation 2.9), we can find the best-fit matrices for our linear system. White-box identification (or modeling) is the complete opposite. It makes the assumption that the system's model is almost completely known, and that prior knowledge alone is adequate to completely model/identify the system [25]. For obvious reasons, for certain systems this is infeasible.

Grey-box identification: Grey-box identification, on the other hand (much like our problem), can be seen as a mix between the previous two (black and white-box). That is, the underlying model/structure of the system is known (e.g. equation 3.7), but the model depends on parameters which must be estimated through collected data. The nonlinear single-track vehicle model discussed in chapter 3 depends upon a few parameters, which can be broadly categorized into: physical parameters, tire/road parameters and vehicle input parameters. Each of these will be discussed in subsequent sections.

Least Squares:

Least squares will be used repeatedly in this work to carry out the identification of our model parameters, hence it is briefly introduced here. As previously discussed, in grey-box identification/modeling, our aim is to collect data from the system and use this data to estimate unknown model parameters. This is done in the following sections. At any given time we typically know the inputs (and states) of our system, as well as outputs (typically a function of a given state). By the method of "curve fitting", we hope to find a relationship between our inputs/states and the desired output (somewhat similar to a supervised machine learning problem).

We typically have pairs of collected input (possibly a combination of states and inputs) and output data points, {x_i, y_i} (i = 1, ..., N). Here N is the number of collected data points. We also know a map between the input and output pairs, which may depend on a set of unknown parameters φ = [φ_1 ... φ_L]. Our goal is to find φ such that the difference between our known map, g(φ, X), and the collected output, y, is minimized.

There are many ways to choose to minimize this difference (i.e. many possible loss functions). If we choose to minimize the squared difference between our output and the map, then this is referred to as least squares. The formulation for least squares can be given as:

\min_{\phi} \left\| y - g(\phi, x) \right\|_2^2 = \min_{\phi} \sum_{i=1}^{N} \left( y_i - g(\phi, x_i) \right)^2 \qquad (5.1)

‖·‖₂² is the squared 2-norm (squared Euclidean norm). {x_i, y_i} (i = 1, ..., N) are the aforementioned collected data points. One of the main ways to solve optimization problems of this sort is through iterative methods, whereby we start with an initial guess of the parameters (φ) and have an algorithm iteratively find the φ which minimizes our cost function above. One of the simplest yet most widely used of these iterative methods is the so-called gradient descent method. It is based on the fact that, by definition, the gradient (or derivative) of a function always points in the direction of greatest ascent (of that function). Hence, by updating the parameters in the negative direction of the gradient (with a sufficiently small step size) we decrease the cost function in equation 5.1 at every iteration, until the gradient is near 0. We usually hope for that point to be our cost function's minimum. But if the map g(·) is nonlinear, we often cannot guarantee that this minimum is the true global minimum (i.e. the gradient may be zero at points that do not attain the smallest value of the cost function). Optimization is an entire field of its own with many nuances; more on optimization may be learned from sources such as "Nonlinear Programming" by Bertsekas, a popular book on the subject. MATLAB has many tools/methods for solving problems of this form, most notably lsqnonlin() and lsqcurvefit().
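Purely as an illustration of the iteration described above, a minimal gradient descent loop for a small least-squares problem might look like the following sketch; the model, its Jacobian, the step size, and the synthetic data are all placeholders.

```matlab
% Minimal gradient descent sketch for min_phi sum_i (y_i - g(phi, x_i))^2.
g      = @(phi, x) phi(1) + phi(2)*x;        % example (placeholder) model
dgdphi = @(phi, x) [ones(size(x)), x];       % Jacobian of g w.r.t. phi

x = (0:0.1:1)';  y = 2 + 3*x + 0.01*randn(size(x));  % synthetic data
phi   = [0; 0];                              % initial guess
alpha = 0.05;                                % step size (tuned by hand)
for k = 1:500
    r    = y - g(phi, x);                    % residuals
    grad = -2 * dgdphi(phi, x)' * r;         % gradient of the cost
    phi  = phi - alpha*grad;                 % step against the gradient
end
```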

For certain classes of problems, there are simpler methods of obtaining our unknown parameters. For instance, if the map g(·) is linear with respect to our unknown parameters φ, then the cost function in equation 5.1 has the important property of convexity, which guarantees that our optimization problem has a global minimum (unique when the data matrix X below has full column rank). In such a case, the parameters can simply be found via the following equation:

\phi = (X^T X)^{-1} X^T y \qquad (5.2)

Where:

y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} \qquad
\phi = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_L \end{bmatrix} \qquad
X = \begin{bmatrix} \gamma_1(x_1^1) & \gamma_2(x_1^2) & \dots & \gamma_L(x_1^L) \\ \gamma_1(x_2^1) & \gamma_2(x_2^2) & \dots & \gamma_L(x_2^L) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_1(x_N^1) & \gamma_2(x_N^2) & \dots & \gamma_L(x_N^L) \end{bmatrix} \qquad (5.3)

Note that the notation x_i^j means the j-th input variable at time (or data point) i. That is, our input data (as defined before), x_i, is a vector, i.e. x_i = [x_i^1 x_i^2 ... x_i^L]. In this work y_i is assumed to be a scalar (single output). γ(·) is any nonlinearity that the input might exhibit. We only made the assumption that g(·) is linear with respect to our unknown parameters φ; the inputs themselves can enter nonlinearly, and since they are known (i.e. our collected data), their nonlinearity makes no difference to the optimization problem.
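For illustration, equation 5.2 can be evaluated in MATLAB as sketched below on synthetic data; the backslash operator is used in place of the explicit inverse, which is numerically preferable but equivalent in exact arithmetic.

```matlab
% Linear least squares sketch for y = X*phi (equation 5.2), synthetic data.
N  = 100;
x1 = rand(N,1);  x2 = rand(N,1);           % two "input" variables
X  = [x1, x2.^2, ones(N,1)];               % basis functions gamma(.): x1, x2^2, 1
phi_true = [1.5; -0.8; 0.3];               % parameters used to generate the data
y  = X*phi_true + 0.01*randn(N,1);         % noisy outputs

phi_hat    = (X'*X)\(X'*y);                % normal equations, as in equation 5.2
phi_hat_bs = X\y;                          % equivalent, better-conditioned solve
```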

5.3.1 Physical Parameters Identification

The first set of parameters to identify are our vehicle's physical parameters. The methods of identifying these parameters may differ slightly depending on the vehicle's scale (e.g. from an RC vehicle to a full-sized car). These parameters are things such as the vehicle mass, various vehicle length attributes, the yaw moment of inertia, etc.

Vehicle Mass:

For the scaled vehicle in figure 5.1, a kitchen scale was adequate to obtain its mass. Due to the vehicle's added components (on top of the base model), the final vehicle mass was more than twice the base vehicle's mass. The total assembly came out to around m = 0.226 kilograms. Note that units of kilograms were used for consistency, as the forces in our vehicle model are assumed to be in Newtons (i.e. kg·m·s⁻²).

For heavier vehicles (e.g. full-scale cars), instruments such as vehicle weighing beams are needed.

Center of Gravity and Lengths:

The center of gravity of the vehicle is identified next, as two of our vehicle model parameters, l_f and l_r, are measured with respect to the center of gravity. Crudely speaking, the center of gravity measures the average location of our object's mass. Hence, one of the simplest ways to identify the center of mass is to try balancing the vehicle on a relatively thin surface (as shown in figure 5.5). For obvious reasons, this method is mainly suited to smaller-scale vehicles like ours. For larger vehicles, experiments or estimation methods can be found in the literature.

Figure 5.5: Vehicle center of mass balancing.

A measuring tape was used to obtain the distance from the center of gravity to the vehicle's front tire (l_f), as well as that from the center of gravity to the rear tire (l_r). Both l_r and l_f are typically provided in the manufacturer's manual, but for obvious reasons the addition of multiple components (motor, battery, etc.) changes our vehicle's mass distribution (hence its center of gravity); therefore, they were re-obtained. These parameters were roughly l_f = 0.057 meters and l_r = 0.063 meters. The unit of meters was used for the same reason that kilograms was used for the mass: to maintain consistency with the units of our forces (and Earth's gravitational constant). The vehicle's width was approximately W = 0.076 meters.

Yaw Moment of Inertia:

The yaw moment of inertia, J_z, is obtained next. The estimation of this parameter is well covered in the literature, e.g. [13, 17, 11]. These sources describe a way of obtaining this parameter by suspending the vehicle above the ground and letting it oscillate about its yaw axis. The frequency of oscillation is then used to estimate the vehicle's yaw moment of inertia. For the purposes of this thesis, a more basic alternative approach is used. The moments of inertia of standard geometries are well known. The vehicle can be approximated as a cuboid, and the yaw moment component of a cuboid is approximated by equation 5.4. This approximation is also discussed in [17].

J_z \approx \frac{1}{12}\, m \left( W^2 + L^2 \right) \qquad (5.4)

Where W is the width of the vehicle (as identified before), L is the length of the vehicle (L = l_f + l_r), and m is the mass of the vehicle. Our vehicle's width was approximately 0.076 meters, hence J_z ≈ 3.8 · 10⁻⁴ kg·m².
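As a quick check, plugging in the identified values from this chapter (m = 0.226 kg, W = 0.076 m, L = l_f + l_r = 0.120 m) reproduces the quoted figure:

J_z \approx \frac{1}{12}(0.226)\left(0.076^2 + 0.120^2\right) \approx 3.8 \cdot 10^{-4}\ \text{kg·m}^2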

5.3.2 Tire and Road Parameters Identification

In the previous section, the physical vehicle parameters were obtained. Next to be identified are the vehicle's road/tire parameters, which are especially important to our vehicle tire model in equations 3.13a and 3.13b. They are parameters specific to our vehicle's tires and the road testing surface. These parameters are our front and rear cornering stiffnesses (C_α,r and C_α,f) and coefficients of friction (µ_r and µ_f).

The aforementioned parameters are typically obtained by collecting data through a method called a ramp-steer maneuver. As its name implies, it is a method where the vehicle's steering angle input is linearly increased over time (typically at around 1°/sec or less). This allows the vehicle to sweep through a range of steering angles (and hopefully tire slip angles), as well as lateral forces. Using our recorded/estimated states we can obtain a map between our lateral forces (F_y^r and F_y^f) and slip angles (α_r and α_f). A slight alternative to the ramp-steer maneuver is described in [32], which may be useful if the testing area is too small for a full ramp-steer maneuver. In that work, the steering angle is fixed and the vehicle is allowed to run in a fixed circle (i.e. steady-state cornering). This is done for multiple angles, and the data is combined to estimate our lateral forces. It is very similar to the test which will be used to identify our steering angles in later sections.

As discussed in chapter 3, our tire model is a steady-state tire model. Hence, during our ramp-steer experimentation the steady-state assumptions must hold: V̇_x = 0, V̇_y = 0 (or β̇ = 0) and ṙ = 0. From the equilibria discussion in chapter 4, these (for the most part) hold by default during cornering. Control (i.e. equation 4.9) may be used to sustain a fixed velocity if it changes too much. From equation 3.5, ṙ = 0 and V̇_y = 0 (or β̇ = 0) can be used to estimate the front/rear lateral forces (at steady-state) as:

F_y^f = \frac{m\, l_r V_x r}{(l_r + l_f)\cos(\delta)} \qquad (5.5a)

F_y^r = \frac{m\, l_f V_x r}{l_r + l_f} \qquad (5.5b)

At steady-state, the term ”Vxr” from equation 5.5 is our lateral acceleration [32]. That is: ay = Vxr. Hence, our lateral forces at steady-state can be more accurately calculated if ay can be accurately measured through a sensor.

However, in the absence of lateral acceleration sensors, our forces will need to be calculated from our estimated states. For instance:

Figure 5.6: Raw OptiTrack data of vehicle location during a sample ramp-steer maneuver.

Figure 5.7: Yaw rate (r) estimated from OptiTrack yaw (ψ) data. Blue: finite difference estimation. Brown: Kalman filter estimation (using the former).

The raw yaw rate (r) in figure 5.7 was estimated from a finite difference (e.g. equation 2.2) of the yaw data (ψ). This finite difference was used along with a Kalman filter to give better estimates of our yaw rates. The (Riccati-based) Extended Kalman filter from [23] was later adapted in our work to give a better real-time estimation of all of our states. For instance, the longitudinal velocity (V_x) may be estimated through a finite difference of the X-axis location over time. But, since that is the global-coordinate X-velocity, the final local-coordinate longitudinal velocity is found through a coordinate transformation (i.e. solving the first two terms of equation 3.6 for V_x and V_y). That data can then be used as our "prior" in the Kalman filter.

One important thing to note is that the vehicle's yaw is measured in (or brought to) an interval of 0° to 360°. This means that any yaw change from near 360° to near 0° (or vice versa) will produce large swings in the yaw rate, even though these are technically equivalent orientations. MATLAB's unwrap() method is used to take care of this problem: once the yaw angle is near 360°, for instance, instead of resetting it to 0° it keeps increasing it (e.g. 361°) so that the yaw-rate values are continuous.
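As an illustration only, a minimal sketch of this estimation chain (unwrapped yaw, finite differences, a rotation into the body frame, and the steady-state forces of equation 5.5) is given below. The logged data is replaced by synthetic stand-ins, and the 50 Hz sampling time and variable names are assumptions.

```matlab
% Sketch: estimate r, Vx, Vy from logged position/yaw data, then the
% steady-state lateral forces of equation 5.5. Data below is synthetic.
T = 1/50;  t = (0:T:10)';                    % 50 Hz samples (assumed rate)
xGlobal = cos(0.5*t);  yGlobal = sin(0.5*t); % stand-ins for OptiTrack logs
psi_deg = mod(rad2deg(0.5*t) + 90, 360);     % yaw wrapped to [0, 360) degrees
delta   = deg2rad(10)*ones(size(t));         % stand-in for the ramp-steer input

psi = unwrap(deg2rad(psi_deg));              % continuous yaw [rad]
r   = gradient(psi, T);                      % yaw rate by finite difference
vX  = gradient(xGlobal, T);                  % global-frame velocities
vY  = gradient(yGlobal, T);
Vx  =  cos(psi).*vX + sin(psi).*vY;          % rotate into the body frame
Vy  = -sin(psi).*vX + cos(psi).*vY;

m = 0.226; lf = 0.057; lr = 0.063;           % identified values (table 5.1)
Fyf = m*lr.*Vx.*r ./ ((lr+lf)*cos(delta));   % equation 5.5a
Fyr = m*lf.*Vx.*r ./ (lr+lf);                % equation 5.5b
```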

Now that the estimation of our states has been established, after the ramp-steer maneuver we can use equation 5.5 to calculate our steady-state lateral forces. Furthermore, the slip angles from equation 3.8 may be used as well, though the aforementioned adapted Kalman filter does a better job of estimating these. We obtain the following graphs for our front and rear lateral forces:

Figure 5.8: Identified absolute front/rear lateral forces overlaid with the Modified Fiala Model (yellow) and the cornering stiffness line (orange). (a) Rear lateral force. (b) Front lateral force.

The front and rear lateral forces are obtained above. Note that the plots above are those of our absolute forces; depending on the slip angle convention (i.e. equation 3.8) used, the slip angles and lateral forces may have the same or opposite signs. The magnitudes of the identified parameters remain the same regardless. Another thing to note about the graphs above is that the obtained slip angles of our test points must be sorted in increasing order to get the plot above; for obvious reasons, the corresponding lateral forces must also be re-sorted in that new order. Works such as [30] find a linear relationship between the input slip angles and the steering angle to map the horizontal axis of figure 5.8. Lastly, since the Fiala tire model is a steady-state tire model, there would be cases of transients (e.g. F_y ≈ 0 at many slip angles); these would have to be filtered from the data.

The cornering stiffnesses (C_α,r and C_α,f) are identified from the slope of the linear region (the orange line in figure 5.8). In our case, they are approximately C_α,r = 2.72 N/rad and C_α,f = 2.4 N/rad.

In order to identify the coefficients of friction µ_r and µ_f, the Modified Fiala tire model from equation 3.13a was in our case fit to our data (i.e. the yellow curve in figure 5.8). This was done by a nonlinear least squares fit of our model to the collected data in MATLAB (as of the 2018 version: lsqnonlin() or lsqcurvefit()). Since the cornering stiffnesses were previously identified, they are fixed in our optimization problem; the coefficients of friction become our only parameters to estimate. This simplifies the problem and lessens the chance of local minima. Since the vehicle tests were done during cornering, the longitudinal force can be assumed to be negligible (per the discussion in section 4.2). Hence, the derating factor (ξ_r) from the Modified Fiala tire model in equation 3.13b can be assumed to be near 1, to further simplify our optimization problem. The final coefficients of friction were identified to be roughly µ_r = 0.33 and µ_f = 0.37.
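A fit of this kind might look like the sketch below. The fiala() function written here is the standard Fiala brush-model lateral force, which may differ slightly from the thesis's Modified Fiala model (equation 3.13a, not reproduced in this section); the normal load and the data vectors are synthetic placeholders.

```matlab
% Sketch: fit the rear friction coefficient with the cornering stiffness fixed.
% fiala() is the standard Fiala brush-model lateral force; the thesis's
% Modified Fiala model may differ slightly (e.g. the derating factor).
Ca_r = 2.72;                    % identified rear cornering stiffness [N/rad]
Fz_r = 0.5*0.226*9.81;          % rough rear normal load, placeholder [N]
fiala = @(mu, a) (abs(a) < atan(3*mu*Fz_r/Ca_r)) .* ...
        (-Ca_r*tan(a) + Ca_r^2./(3*mu*Fz_r).*abs(tan(a)).*tan(a) ...
         - Ca_r^3./(27*mu^2*Fz_r^2).*tan(a).^3) ...
      + (abs(a) >= atan(3*mu*Fz_r/Ca_r)) .* (-mu*Fz_r*sign(a));

alpha_data = linspace(0, 0.4, 40)';      % stand-in for the sorted slip angles [rad]
Fy_data = abs(fiala(0.33, alpha_data)) + 0.01*randn(size(alpha_data));  % synthetic forces

mu0  = 0.5;                              % initial guess
mu_r = lsqcurvefit(@(mu, a) abs(fiala(mu, a)), mu0, alpha_data, Fy_data, 0.05, 1.0);
```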

One last thing to note is that, in doing ramp-steer experiments, it may be a good idea to try different longitudinal velocities until an adequate velocity is found. That said, tests from multiple velocities must not be mixed with each other (due to the V̇_x = 0 assumption). For instance, in this project low velocities often resulted in lateral forces too low (compared to the noise) to extract any useful information. Hence the fixed velocity must be low enough for the vehicle to be mostly cornering (as opposed to drifting) in circles, but not so low as to make identification infeasible.

5.3.3 Steering and Force Identification

It is finally time to identify our inputs. Most of the system identification completed in previous sections relied on knowledge of our steering angle input δ. Our full vehicle model (e.g. equation 3.7) also depends on our input longitudinal force F_x^r. But these are not technically "known" up to this point. In our case, our driving motor and steering are actuated by pre-set Arduino commands; hence, δ and F_x^r must be "identified" with respect to our vehicle (figure 5.1). These Arduino signals will be denoted as w_r (driving motor) and w_s (steering) throughout this section. The identification of our inputs involves the mapping of w_s to δ and w_r to F_x^r. In a full-size vehicle, w_s may correspond to the angular position of the in-car steering wheel, for instance. The basic mapping ideas remain the same, with slight potential modifications depending on the type of vehicle.

Drive Force:

The longitudinal drive force, F_x^r, will be the first input parameter to identify. The identification of this parameter is based on our longitudinal velocity dynamics from equation 3.5. To reiterate:

\dot{V}_x = \frac{1}{m}\left( F_x^r - F_y^f \sin(\delta) \right) + r V_y

In identifying the lateral forces in previous sections, a method (the ramp steer) was used which allowed the vehicle to go through multiple slip angles (and lateral forces), yet in such a way that the longitudinal force influence was minimized (i.e. cornering). In the same vein, in identifying the vehicle driving force, F_x^r, we hope to minimize the lateral dynamic influences. This occurs best when the vehicle is driven straight (i.e. δ = 0°, or the corresponding w_s). A straight-driven vehicle has a fixed yaw as well, hence r = ψ̇ = 0 in the equation above. Therefore the only remaining parts are:

\dot{V}_x \approx \frac{1}{m}\, F_x^r

This equation will be used to find the relationship between our Arduino input w_r and F_x^r. As aforementioned, our Arduino's motor input is denoted by w_r. In our case, this signal has the following range: 400 ≤ w_r ≤ 950. That is, our vehicle's rear wheel starts rotating at w_r = 400, and the wheel's rotational speed increases for each increasing w_r until w_r = 950; at that point, additional input does not further increase the wheel speed. We aim to find our longitudinal force as a function of this parameter, i.e. F_x^r(w_r).

In deriving the original vehicle dynamics (e.g. equation 3.4), we had assumed that all the frictional forces along the longitudinal direction were ignored. This was to simplify control design. However, in order to get an accurate value of F_x^r, these must be reconsidered. These are our aforementioned drag force, C_D V_x² (C_D being the drag force coefficient), and other static frictional forces F_s. Hence our actual "F_x^r" reincorporates these to give:

F_x^r = C_{wr} w_r + C_l V_x + C_D V_x^2 + F_s

Since wr is our Arduino’s motor signal, the first term is our actual motor force. The second term takes into account any drive-line friction dependent on the velocity (as done in [23]).

The last two terms are our drag friction and static resistive forces (of our test surface), respectively.

Hence, our drive force input identification becomes a problem of making V̇_x ≈ (1/m) F_x^r hold, as previously discussed. This can be formulated as a least squares problem of obtaining our coefficient parameters such that the difference between these two values is minimized, that is: V̇_x − (1/m) F_x^r ≈ 0. Using the least squares formulation (e.g. equation 5.1), our least squares problem becomes:

\min_{C_{wr}, C_l, C_D, F_s} \left\| \dot{V}_x - \frac{1}{m}\left( C_{wr} w_r + C_l V_x + C_D V_x^2 + F_s \right) \right\|_2^2 \qquad (5.6)

V̇_x is essentially the vehicle's longitudinal acceleration in the body frame. As discussed in [11], if this quantity cannot easily be measured, an estimate of it may be too noisy. Since our data collection is done on a digital system at a fixed sampling time, T, a finite difference (e.g. equation 2.2) of our known longitudinal velocity, V_x, may be used to fix this issue. Henceforth, V̇_x − (1/m) F_x^r ≈ 0 can be converted to (V_{x,k} − V_{x,k−1}) − (T/m) F_x^r ≈ 0. Since our sampling time, T, is fixed, and V_x at time k (i.e. V_{x,k}) is either well known or approximated, this mitigates the noise. Hence, from equation 5.1, our optimization problem becomes:

\min_{C_{wr}, C_l, C_D, F_s} \sum_{k=0}^{N} \left( (V_{x,k} - V_{x,k-1}) - \frac{T}{m}\left( C_{wr} w_{r,k} + C_l V_{x,k} + C_D V_{x,k}^2 + F_s \right) \right)^2 \qquad (5.7)

Now our experimentation is carried out. As discussed before, our steering angle needs to be calibrated to δ = 0° to minimize any lateral dynamics influences. The vehicle was run at multiple w_r values (in our case, the Arduino input). Experiments were run from values of w_r = 400 to w_r = 830, at increments of around 30 (e.g. w_r = 400, 430, ..., 830). Around 14 w_r values were tested in total. For each w_r value, the experiment was repeated 2-3 times; this was to ensure that any outliers in our data were actual outliers. The vehicle was placed on the testing surface and allowed to run at a given w_r for a fixed time, and the resulting longitudinal velocities V_x were tracked.

For each wr the optimization in equation 5.7 was carried out to find our force coefficients

(Cwr,Cl,CD and Fs). The final value of each coefficient was a ”truncated” average of the coefficient over multiple wr runs. A truncated average was used to get rid of any outliers which could skew our average.

One thing to note about equation 5.7 (or the previous equation) is that all our optimization parameters (C_wr, C_l, C_D and F_s) enter linearly; that is, they are merely constant coefficients which scale functions of our states. Therefore, we can use the linear least squares result from equation 5.2 to obtain our optimization parameters for each w_r. Using the formulation in equation 5.2, our linear least squares problem becomes:

y_k = V_{x,k} - V_{x,k-1} \qquad (5.8a)

\left[ \gamma_1(x_k^1)\ \ \gamma_2(x_k^2)\ \ \gamma_3(x_k^3)\ \ \gamma_4(x_k^4) \right] = \frac{T}{m}\left[ w_{r,k}\ \ V_{x,k}\ \ V_{x,k}^2\ \ 1 \right] \qquad (5.8b)

\phi^T = \left[ C_{wr}\ \ C_l\ \ C_D\ \ F_s \right] \qquad (5.8c)

Where the first two equations correspond to the k-th row (or data point) of our output vector (y) and data matrix (X), respectively, and φ^T is the transpose of our unknown parameter vector. Note that w_{r,k} = w_r, since the Arduino input is fixed for every run. Finally, equation 5.2 can be used to solve for our parameters at every w_r (a minimal sketch of this step is given below). Then, as previously discussed, we obtain a trimmed average of our parameters over the multiple runs. The identified coefficients for the multiple w_r tests (some of which are repeated) are shown in figure 5.9.
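The sketch below illustrates one way to set up this solve. Note that it stacks several runs into a single regression so that the w_r column varies (which keeps the regressor matrix well conditioned); this is a variation on the per-run solve and trimmed averaging described above, and the logged velocities are synthetic placeholders.

```matlab
% Sketch: linear least-squares solve of equation 5.8, stacking several w_r runs.
m = 0.226;  T = 1/50;                         % mass [kg], sample time [s] (assumed)
runs = [450 600 750];                         % example Arduino motor signals
yAll = [];  XAll = [];
for wr = runs
    Vx = (wr/950)*2*(1 - exp(-(1:200)'*T));   % synthetic stand-in for a logged run
    yk = Vx(2:end) - Vx(1:end-1);             % output vector (equation 5.8a)
    Xk = (T/m)*[wr*ones(size(yk)), Vx(2:end), Vx(2:end).^2, ones(size(yk))];  % eq. 5.8b
    yAll = [yAll; yk];  XAll = [XAll; Xk];
end
phi = XAll\yAll;                              % [C_wr; C_l; C_D; F_s], as in equation 5.2
```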

Figure 5.9: Identified force coefficients over multiple w_r runs. (a) Motor input coefficient. (b) Velocity resistive force coefficient. (c) Drag force coefficient. (d) Static friction. The vertical line corresponds to the trimmed average of a given coefficient.

Note that, except for the motor force coefficient, all other coefficients are negative. This is because they are resistive forces which impede our motor force. Alternatively, in equation 5.7 all the resistive forces could have been subtracted instead, which would lead to positive coefficients; there is no difference as long as there is consistency. Also note that there are outliers in our data (notably on test 1, w_r = 400). This is because the vehicle's steering angle was not well calibrated to 0° on the first run. However, since for each input signal w_r the test was run twice (redundantly), at the same w_r = 400 (test 2) the identified coefficient is closer to the final trimmed average coefficient.

Steering Angle:

All the previous identifications thus far have relied on our steering angle, δ. In this section the steering angle is identified as a function of the aforementioned (in our case) input Arduino signal w_s. The steering angle identification is rooted in the fact that, at relatively low vehicle speeds, the steering angle of a vehicle can be estimated with equation 5.9 below. This formula and its relation to steering controllers is expanded upon in [36].

\tan(\delta) \approx \delta = \frac{L}{R} \qquad (5.9)

At low vehicle speeds and fixed steering signals, all vehicle states are "stable", as expanded upon in section 4.2. Hence, the vehicle travels a steady-state equilibrium trajectory of radius R. Knowing the vehicle's wheelbase (L = l_r + l_f), we can identify its steering angle using equation 5.9. In our case, the vehicle is run at multiple Arduino steering signals w_s, and the resulting trajectory radius R is used to map w_s to δ.

In our project, the Arduino signal for steering ranges from w_s = 1200 to w_s = 1900. Physically, these values correspond to the Arduino's PWM (Pulse Width Modulation) signal sent to the steering servo motor. But this detail is not important; it might as well have been the voltage, as long as there is a one-to-one map between the actuation signal and the resulting steering angle. For each fixed Arduino input w_s, the vehicle was run at a constant (and low) velocity on the testing surface, and the positions of the vehicle were recorded for each test. This experiment was repeated from w_s = 1200 to w_s = 1860 in increments of (typically) 20. Then a Kasa fit (as done in [32]) is used to find the radius, R, of our circle for a given Arduino input, w_s. Least squares based on the formula of a circle could be used, but the formula of a circle is nonlinearly dependent on the parameters to be identified (the radius of the circle and its center location), hence a nonlinear least squares fit would need to be used. The Kasa fit was much simpler to implement, and was fairly accurate, as shown below:

Figure 5.10: Vehicle trajectories at fixed steering signals versus the fitted trajectory radius. (a) Vehicle trajectory at a small steering angle. (b) Vehicle trajectory at a larger steering angle.

Figure 5.11: Radius fits for multiple input Arduino signals (hence steering angles), superimposed.

As discussed before, the Kasa fit method based on [7] was used to fit the circles. Since our physical testing space is relatively compact, the vehicle cannot travel in a full circle for certain signals (i.e. signals corresponding to smaller steering angles and therefore larger radii). This is shown in figure 5.10. The fitting algorithm is nonetheless capable of finding the best-fit circle approximation from the limited data. At signals corresponding to larger steering angles, the radii are smaller, hence full fits are obtained. For each one of our input steering signals w_s, equation 5.9 is used to calculate the corresponding steering angle in radians. The linear version (without the tangent) was sufficient in this work, as the two are close enough. The following map is obtained as a result:

Figure 5.12: Fit of the Arduino signal w_s to the steering angle (degrees), using (a) a piecewise-linear fit and (b) a quadratic fit.

Figure 5.13: Less accurate linear fit of the Arduino signal w_s to the steering angle (degrees).

As was done for the motor force identification, at each input Arduino signal w_s data was collected multiple times (at least 5). In the most ideal case, our vehicle's steering angle should be fairly linear with respect to our input, and start diverging at larger steering angles. However, our testing surface was slightly uneven (material-wise) and the servo motor used could have been better. Hence, our vehicle had a more piecewise-linear or quadratic steering angle map, as seen in figure 5.12. The left and right steering behaved slightly differently; this was something noticed while running the vehicle as well. The neutral steering position (δ = 0°) was at around w_s = 1440.

The fits above were done with linear least squares fits of three types of functions: a piecewise-linear function, a quadratic function and a linear function, respectively. The best one to pick would depend on the project; in our case either the piecewise-linear or the quadratic map (as seen in figure 5.12) works best. The linear, piecewise-linear and quadratic maps are respectively given as:

\delta = a_l w_s + b_l, \qquad w_{s,min} \le w_s \le w_{s,max} \qquad (5.10)

\delta = \begin{cases} a_{p,1} w_s + b_{p,1}, & w_{s,min} \le w_s < w_0 \\ a_{p,2} w_s + b_{p,2}, & w_0 \le w_s \le w_{s,max} \end{cases} \qquad (5.11)

\delta = a_q w_s^2 + b_q w_s + c_q, \qquad w_{s,min} \le w_s \le w_{s,max} \qquad (5.12)

Where w_s is our input Arduino signal, w_0 is the signal corresponding to neutral steering (i.e. δ = 0), and w_{s,min} and w_{s,max} are the minimum/maximum input signals. These need not necessarily correspond to the minimum/maximum possible inputs, but only to the range that is accurately mapped. a_*, b_*, c_* (where * = l, p, q) correspond to the coefficients of our maps; these are the estimated parameters of our fits in figures 5.12 and 5.13.
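For instance, the quadratic map in equation 5.12 can be fitted with a single polyfit call, as sketched below; the signal and angle vectors are synthetic placeholders for the measured data, and the signal is centered about the neutral value before fitting (a shifted version of equation 5.12) to keep the fit well conditioned.

```matlab
% Sketch: quadratic fit of steering angle (deg) versus centered Arduino signal.
ws_data    = (1260:20:1740)';                        % tested steering signals (assumed spacing)
delta_data = 0.1*(ws_data - 1440) + 0.5*randn(size(ws_data));  % synthetic measured angles
p = polyfit(ws_data - 1440, delta_data, 2);          % coefficients of the shifted quadratic map
delta_fit = polyval(p, ws_data - 1440);              % evaluate the fitted map
```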

Our project’s identified coefficients are given in table 5.2.

Table 5.2: Identified steering coefficient values

Coefficient   Value
a_l           0.1
b_l           −147.32
a_p,1         0.14
a_p,2         0.081
b_p,1         −201.83
b_p,2         −116.5
a_q           −9.74 · 10^-5
b_q           0.39
c_q           −365.78
w_s,min       1260
w_s,max       1740

Regardless of the fit, in our case |δ| < 22° seems to be the accurately mapped angle range. This is because the assumption made in equation 5.9 starts to break down at larger steering angles; hence, this was the chosen maximum range of steering angles for this project. As aforementioned, the linear fit did not do a good job of fitting our Arduino signals to steering angles in this project, but in most cases the data should be fairly linear, as it was in [32].

In control design, we typically have the opposite problem, whereby our feedback algorithm gives us a desired steering angle. Given this steering angle, the corresponding signal (e.g. the Arduino signal, w_s) needs to be found to actuate the system. This is merely a problem of finding the inverses of equations 5.10, 5.11 and 5.12. These are given as:

w_s = \frac{\delta - b_l}{a_l}, \qquad \delta_{min} \le \delta \le \delta_{max} \qquad (5.13)

w_s = \begin{cases} \dfrac{\delta - b_{p,1}}{a_{p,1}}, & \delta_{min} \le \delta < 0\ \text{rad} \\ \dfrac{\delta - b_{p,2}}{a_{p,2}}, & 0\ \text{rad} \le \delta \le \delta_{max} \end{cases} \qquad (5.14)

w_s = \frac{-b_q \pm \sqrt{b_q^2 - 4 a_q (c_q - \delta)}}{2 a_q}, \qquad \delta_{min} \le \delta \le \delta_{max} \qquad (5.15)

Note that the steering angles above must be in radians. For the quadratic inverse map (i.e. equation 5.15), the map will always use either only the + root or only the − root; in our case it was "−b_q + ...". An easy test is to plug in a random δ (e.g. 0.1 rad) and see which root gives back a w_s that is within our range of [w_{s,min}, w_{s,max}]. That determines whether our inverse map uses "+" or "−".
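A sketch of the quadratic inverse map (equation 5.15) is given below, using the identified coefficients from table 5.2 and keeping whichever root falls inside [w_s,min, w_s,max]. The commanded angle is entered in the same units as the fit of figure 5.12 (degrees here); convert first if the controller's output is in radians.

```matlab
% Sketch: invert the quadratic steering map (equation 5.15) for a commanded
% angle, keeping the root that lands inside the valid signal range.
aq = -9.74e-5;  bq = 0.39;  cq = -365.78;      % identified coefficients (table 5.2)
ws_min = 1260;  ws_max = 1740;

delta_cmd = 10;                                 % example commanded steering angle
roots_ws  = (-bq + [1 -1]*sqrt(bq^2 - 4*aq*(cq - delta_cmd))) / (2*aq);
ws_cmd    = roots_ws(roots_ws >= ws_min & roots_ws <= ws_max);  % keep the in-range root
```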

Chapter 6 Experimentation

In this chapter, experiments on the theories introduced in previous chapters are conducted. Due to the groundwork already established in chapters 3 and 4, the experiments are only briefly discussed. The chapter begins by giving a high-level introduction to the vehicle model's simulation in Simulink, then briefly discusses the experiments carried out for both steady-state drifting and transient drifting (i.e. simultaneous drifting and path tracking).

6.1 Simulation Overview

Prior to running the actual vehicle in a "real-world" environment, the vehicle dynamics in equation 3.7 needed to be virtually tested to ensure the feasibility of our algorithms. Simulink was used for this, as it allows for the simulation of the continuous-time (or discretized) dynamics of our model's differential equations. The dynamics were implemented in a Level-2 S-Function, as opposed to building the entire model through Simulink blocks. The image below gives a high-level overview of the system's architecture:

Figure 6.1: Vehicle dynamics simulation.

Figure 6.2: Vehicle dynamics inner control system overview. (a) Input systems. (b) Look-up tables.

Figure 6.1 is the overall model architecture set-up. The "vehicle model" receives our inputs (δ, F_x^r), and Simulink solves the differential equation (i.e. equation 3.7) and gives back our states for a set initial condition. The states are then used as feedback into our control system to produce the proper control action for state tracking. The look-up tables in figure 6.2 correspond to the planned state trajectories (i.e. chapter 4) and gains. The appropriate vehicle state (r, V_x, etc.) to track at any given time is determined by the point (x-y location) closest to us on our desired path, as well as by whether the rear tires are saturated (to determine whether drifting is feasible). Lastly, not shown in the diagram above (but implemented in the vehicle subsystems) are saturation blocks: our real system has a limited steering angle (e.g. 22°) and drive force (µ_r F_z^r), hence these should be reflected in our simulation as well.
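As a small illustration, saturation of this kind can be applied to the commanded inputs just before they enter the model. The limits below are the ones quoted in this work (|δ| ≤ 22° and F_x^r ≤ µ_r F_z^r), with the rear normal load approximated statically from the identified mass and geometry (load transfer neglected); the example commands are arbitrary.

```matlab
% Sketch: saturate the commanded inputs before they reach the vehicle model.
m = 0.226; g = 9.81; lf = 0.057; lr = 0.063; mu_r = 0.33;   % identified values
Fz_r     = m*g*lf/(lf + lr);          % static rear normal load (no load transfer assumed)
deltaMax = deg2rad(22);               % steering limit from the identification range

delta_cmd = deg2rad(35);  Fx_cmd = 1.2;               % example raw controller outputs
delta_sat = min(max(delta_cmd, -deltaMax), deltaMax); % clamp steering
Fx_sat    = min(max(Fx_cmd, -mu_r*Fz_r), mu_r*Fz_r);  % clamp rear drive force
```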

6.2 Steady State Drifting

Before doing path tracking and drifting, we need to test our vehicle's ability to steady-state drift, as sustained drifting is imperative. This mostly involves the linearization of our lateral dynamics about a given equilibrium (i.e. as seen in equation 4.7); that is, using LQR for feedback control of our yaw rate (r) and slip angle (β) about the chosen equilibrium (e.g. equation 4.20). As discussed in section 4.4.1, equation 4.7 can be changed to depend on V_y to reduce our lateral dynamics' dependence on our longitudinal ones (i.e. V_x). In the first test, the vehicle model (see figure 6.2) is initiated at the equilibrium in open loop:

Figure 6.3: Vehicle model started near the drifting equilibrium in open loop. (a) Vehicle model concentrated at equilibrium. (b) Vehicle drift trajectory in space.

Despite "starting at equilibrium", in the absence of control the vehicle is unable to sustain drifting in open loop. As seen in figure 6.3 (b), the vehicle begins to spin out of control, essentially exiting the equilibrium. This was to be expected given the discussion in section 4.2: the vehicle equilibria during drifting are saddle points. Next, the vehicle is brought into a configuration which saturates the rear tire before activating the drift controller, and the behavior in figure 6.4 is obtained.

Figure 6.4: Induced vehicle drift.

As discussed in section 4.4.1, drifting needs to be induced before it can be tracked or sustained. Works such as [39, 11] both discuss a region of attraction before closed-loop drift control can be applied. This typically involves driving the vehicle in a configuration which brings the rear tire to saturation, and brings it into a state suitable for the closed-loop controller to kick in and start tracking our desired equilibrium. This is what was done in figure 6.4: the vehicle was given a moderate velocity and made to turn aggressively, which saturated the rear tire, leading the LQR drift controller to be activated, and in turn leading the vehicle to successfully track our desired drifting trajectory and maintain the drift. The vehicle is brought into the desired drifting behavior (e.g. a fixed drift radius of R = 0.25 meters) from a relatively arbitrary tire saturation not in the vicinity of our desired drift equilibrium.

In the simultaneous path tracking and steady-state drifting set-up, there are more suitable conditions for inducing drift, which makes the transition smoother.

6.3 Transient Drifting

The main testing track for this experiment is given in figure 6.5. As discussed in chapter 5, the relatively uneven testing surface and the physical limits of the testing area made it arduous to test our physical vehicle on a more complex map, but the work in section 4.4.1 (as also discussed before) makes it realizable on an arbitrary map.

Figure 6.5: Project’s clothoid testing map.

The testing was first tried on the simulation of our nonlinear single-track vehicle dynamics in equation 3.7.

Figure 6.6: Resulting vehicle drift in simulation.

In order to give an overall ”unobstructed” view of the final trajectory, our final result was sampled at 5 evenly spaced points (or so) per segment. The experiment in figure 6.6 encompasses our vehicle’s position, orientation (technically mixed with vehicle slip during drift) etc. The vehicle is initiated at around the position of (0, 0) and runs one lap about

the given track. In simulation, the vehicle seems to be able to follow the straight path

with no issues. As discussed in section 4.2, little to no control effort is necessary during

cornering conditions given their relative stability. It also seems that the vehicle is able

to amass the required rear tire saturation to initiate drifting by simply turning a corner

(e.g. near (0.9, 0)). This is because the vehicle slip angle becomes non-zero early-on in

the transition region (blue), giving the vehicle controller ”the go” to activate drifting and

track it. The transition from drifting to the straight path around (0.9, 0.5) also seems to go relatively smoothly, as the drift controller is successfully deactivated for path tracking.

This transition sometimes failed in initial tests due to the vehicle’s tendency to sustain countersteer in the vicinity of this transition region; tuning the path-following feedback controller more aggressively seemed to give the more desired behavior shown in the image.

Another noticeable thing is that there is almost a symmetry across the track (e.g. between the left and right ovals) in regards to how the vehicle enters and exits the drift. This is because the behavior of the simulated nonlinear model is relatively "ideal".

As for the scaled vehicle’s test, multiple experiments were ran, while the controllers were tuned quite a bit to get close to the desired ideal behavior. As was done in the simulation experiment, final experiment data was sampled at evenly spaced intervals of 5 points on the same reference map. In fact, the reference map in figure 6.5 was built specifically for our testing surface (i.e. taking into account the space constraints of the testing space). As expected, simultaneous drifting ability of the vehicle and path tracking was not as it was in the more ”ideal” model simulation case in figure 6.6. Nonetheless the achieved drifting was relatively close to ideal, after multiple tests. From the first straight part exit at (0.9, 0), the vehicle seems be in the proper conditions to drift by simply aggressively turning, leading to the appropriate rear tire saturation necessary for the drift controller to activate. Though, as noticeable on the right side of the track, initiating the drift on the scaled vehicle leads to some tracking performance issues. This was partly due to the uneven testing surface, as the the path tracking and drifting seem to resume to the more ideal case on the second drifting path. There were also slight deviations in the upper straight path during simple

105 path tracking, for partly the same reasons.

106 Chapter 7 Conclusion and Future Work

In this work, an autonomous vehicle control scheme capable of drifting and path tracking was designed. This was accomplished through the study of the nonlinear single track vehicle model (as introduced in chapter3) during equilibrium (in section 4.2), and the use of cloithoid-based maps (in section 4.1) for smooth path trajectories. System identification

(as extensively covered in chapter5) was also instrumental to fitting our final model to our actual vehicle. Though our physical testing space allowed for good system identification results and adequate drift and path tracking, given another iteration of this project, the vehicle would be tested in a wider and more open space. This would allow for the testing of more arbitrary clothoids paths for multiple segment drifting. Also, though the 1:28 scale vehicle gives adequate ”real-world” testing abilities for our algorithms, it would be interesting to see how they fare in on 1:10 scale car or larger (especially one not optimized for drifting).
