
Sensor Fusion for Smartphone-based Vehicle Telematics

JOHAN WAHLSTRÖM

Doctoral Thesis
Stockholm, Sweden, 2017

KTH Royal Institute of Technology
School of Electrical Engineering
Department of Information Science and Engineering
SE-100 44 Stockholm, SWEDEN

TRITA-EE 2017:173
ISSN 1653-5146
ISBN 978-91-7729-628-7

Academic dissertation which, with the permission of KTH Royal Institute of Technology, is presented for public examination for the degree of Doctor of Technology in Electrical and Systems Engineering on Wednesday, December 20, 2017, at 13:15 in Kollegiesalen, Brinellvägen 8, Stockholm.

© Johan Wahlström, December 2017.

Printed by Universitetsservice US AB.

Abstract

The fields of navigation and motion inference have rapidly been transformed by advances in computing, connectivity, and sensor design. As a result, unprecedented amounts of data are today being collected by cheap and small navigation sensors residing in our surroundings. Often, these sensors will be embedded into personal mobile devices such as smartphones and tablets. To transform the collected data into valuable information, one must typically formulate and solve a statistical inference problem.

This thesis is concerned with inference problems that arise when trying to use smartphone sensors to extract information on driving behavior and traffic conditions. One of the fundamental differences between smartphone-based driver behavior profiling and traditional analysis based on vehicle-fixed sensors is that the former is based on measurements from sensors that are mobile with respect to the vehicle. Thus, the utility of data from smartphone-embedded sensors is diminished by not knowing the relative orientation and position of the smartphone and the vehicle.

The problem of estimating the relative smartphone-to-vehicle orientation is solved by extending the state-space model of a global navigation satellite system-aided inertial navigation system. Specifically, the state vector is augmented to include the relative orientation, and the measurement vector is augmented with pseudo observations describing well-known characteristics of vehicle dynamics. To estimate the relative positions of multiple smartphones, we exploit the kinematic relation between the accelerometer measurements from different smartphones. The characteristics of the estimation problem are examined using the Cramér-Rao bound, and the positioning method is evaluated in a field study using concurrent measurements from seven smartphones.

The characteristics of smartphone data vary with the smartphone’s placement in the vehicle. To investigate this, a large set of vehicle trip segments are clustered based on measurements from smartphone-embedded sensors and vehicle-fixed accelerometers. The clusters are interpreted as representing the smartphone being rigidly mounted on a cradle, placed on the passenger seat, held by hand, etc. Trip segments in clusters where the smartphone is believed to be held by hand display low maximum speeds and low correlations between the measurements from smartphone-embedded and vehicle-fixed accelerometers.

Finally, the problem of fusing speed measurements from the on-board diagnostics system and a global navigation satellite system receiver is considered. Estimators of the vehicle’s speed and the scale factor of the wheel speed sensors are derived under the assumptions of synchronous and asynchronous samples.

Sammanfattning

In recent years, research on navigation and positioning has undergone major changes, driven by advances in computing, communication, and sensor design. As a consequence, enormous amounts of data are today collected with small and inexpensive navigation sensors all around us. These sensors are often embedded in mobile devices such as smartphones and tablets. To obtain useful information from the collected data, one must typically formulate and solve a statistical inference problem.

This thesis studies inference problems that arise when trying to use smartphone sensors to extract information on driving behavior and traffic conditions. In contrast to traditional analysis based on vehicle-mounted sensors, smartphone-based analysis of driving behavior is based on measurements from sensors that move relative to the vehicle. Consequently, the utility of data collected with smartphone sensors is limited by the fact that the relative orientation and position of the phone and the vehicle are generally unknown.

The problem of estimating the relative orientation of a smartphone and a vehicle is solved by extending the state-space model of a satellite-navigation-aided inertial navigation system. More specifically, the state vector is extended to include the relative orientation, and the observations are augmented with pseudo measurements that describe typical vehicle dynamics. To estimate the relative positions of two or more smartphones, we exploit the kinematic relation between the accelerometer measurements of the different phones. The characteristics of the estimation problem are examined using the Cramér-Rao inequality, and the positioning method is evaluated in a field study using concurrent measurements from seven different phones.

The characteristics of data recorded from smartphones vary strongly with the phone’s placement in the vehicle. To study this, a large set of vehicle trips are grouped into clusters based on measurements from smartphone sensors and vehicle-mounted accelerometers. The generated clusters are interpreted as representing trips where the phone is rigidly mounted in the vehicle, lies on the passenger seat, is held by hand, etc. Trips from clusters where the phone is believed to be handheld are shown to be associated with low maximum speeds and low correlations between the measurements from the vehicle-mounted and smartphone-embedded accelerometers.

Finally, we focus on the processing of speed measurements from a vehicle’s built-in sensor systems and from satellite navigation systems. Estimators of the vehicle’s speed and of the scale factor of the built-in sensors are presented under assumptions of both synchronous and asynchronous measurements.

Acknowledgements

First and foremost, I would like to express my gratitude to Prof. Isaac Skog for helpful guidance and encouraging support throughout this period. Isaac has had a big influence on my thinking as a researcher, and his suggestions have greatly improved the quality of this thesis. My sincere thanks also go to Prof. Peter Händel, who has not only provided me with excellent comments on my research, but also given me insightful career advice and enabled me to do several valuable research trips.

I would also like to thank Prof. Arye Nehorai for allowing me to spend six months at his research laboratory at Washington University in St. Louis, beginning in August 2015. Arye greatly helped to advance my research while inspiring me with his humour and strong work ethic. I wish to express my gratitude to Prof. Hari Balakrishnan and Prof. Samuel Madden for giving me the opportunity to spend six weeks at Cambridge Mobile Telematics at the end of 2016. A big thanks goes out to Dr. Bill Bradley and everyone else at Cambridge Mobile Telematics for making this a fun stay. In the summer of 2015, I was fortunate to spend one month at Porto University, being hosted by Dr. João G. P. Rodrigues and Prof. Ana Aguiar. The trip was very enjoyable!

I would like to take the opportunity to thank my co-authors: Prof. Joakim Jaldén, Prof. Phyllis K. Stein, Dr. Patricio S. La Rosa, Dr. Farzad Khosrow-khavar, Dr. Kouhyar Tavakolian, and Robin Larsson Nordström. A special thanks also goes to Prof. Joakim Jaldén for great support in my teaching of undergraduate courses.

I am particularly grateful for having been able to do my PhD in a creative and positive work environment. Hence, I would like to thank all the PhD students and other academics that I have encountered at the Department of Information Science and Engineering, at the previous Signal Processing and Communication Theory Departments, at Porto University, and at the Center for Sensor Signal and Information Processing at Washington University in St. Louis. I have had a really good time!

I am thankful to Dr. Erik Gudmundson for taking the time to serve as opponent. I am also thankful to Prof. Jonas Sjöberg, Prof. Thomas Schön, and Prof. Helena Stigson for serving on the grading committee.

I am deeply grateful to my parents and extended family for their encouragement and patience during all these years. Finally, I would like to thank Freyja Steenman-Clark for all her love and support.

Johan Wahlström
Stockholm, December 2017

Contents

1 Introduction
   1.1 Smartphone-based Sensing and Data Inference
   1.2 Contributions and Outline
   1.3 Publications Outside the Thesis

2 Estimation Theory
   2.1 Point Estimation
   2.2 Filtering
   2.3 The Cramér-Rao Bound
   2.4 Expectation-Maximization

3 Alternative EM Algorithms for Nonlinear State-space Models
   3.1 Introduction
   3.2 Problem Formulation
   3.3 Alternative Approximate M Steps
   3.4 Examples
   3.5 Conclusions

4 Smartphone-based Vehicle Telematics
   4.1 Introduction
   4.2 Academic and Industrial Projects
   4.3 System Aspects
   4.4 Services and Applications
   4.5 Summary, Historical Perspective, and Outlook

5 GNSS-aided Inertial Navigation Systems
   5.1 A One-dimensional Example
   5.2 The Three-dimensional Case


6 IMU Alignment for Smartphone-based Automotive Navigation
   6.1 Introduction
   6.2 Problem Formulation
   6.3 State-Space Model
   6.4 EKF-based Navigation System
   6.5 Simulation Study
   6.6 Field Study
   6.7 Conclusions
   6.A Derivation of Measurement Matrix

7 IMU-based Smartphone-to-Vehicle Positioning
   7.1 Introduction
   7.2 Model and Estimation Framework
   7.3 The Cramér-Rao Bound
   7.4 Field Study
   7.5 Summary
   7.A Derivation of Cramér-Rao Bounds
   7.B Method for Left/Right Classification

8 Smartphone Placement within Vehicles
   8.1 Introduction
   8.2 Trip Segmentation
   8.3 Kernel-based k-means Clustering
   8.4 Experimental Results
   8.5 Discussion and Concluding Remarks

9 Fusion of OBD and GNSS Measurements of Speed
   9.1 Introduction
   9.2 Estimation
   9.3 The Cramér-Rao Bound
   9.4 Experimental Study
   9.5 Summary
   9.A Derivation of EM Iterations
   9.B Derivation of Cramér-Rao Bound

10 Conclusions

Bibliography

Chapter 1 Introduction

The advent of modern telecommunications has rapidly changed the fundamental conditions of human life. Consider, for example, the case of Stockholm, where Sweden’s first telephone call was made in August 1877. As illustrated in Figure 1.1, telephones in Stockholm were initially leased in pairs and a new line had to be set up between each new pair. With a subscription price of about half the annual salary of a typical worker, the phones were at first a luxury restricted to the upper class. When the first news about the newly invented telephone arrived, the board of Kungliga Telegrafverket (the Royal Telegraph Agency) expressed their skepticism by quoting the Latin proverb verba volant, scripta manent (“spoken words fly away, written words remain”) [1]. However, in the first quarter of 1878, telegraph builder C. E. Nilsson installed 23 telephone lines in Stockholm, and in the fall of 1880, Stockholms Telefonförening (Stockholm Telephone Association) published a pamphlet listing 156 subscribers in alphabetical order. By 1885, Stockholm had more telephones than any other city, both in absolute numbers and per capita! Only a year later, more than fifty Swedish cities had their own telephone association, and the expression telefonlandet Sverige (Sweden, the telephone country) had become well established [2, 3].

Today, a similar revolution is taking place following the rapid proliferation of commercial smartphones.

1.1 Smartphone-based Sensing and Data Inference

A smartphone is a portable device that incorporates typical features of both telephones and personal computers. In a short period of time, the smartphone has gone from being a technological novelty to becoming all but a necessity in the modern world. As an example, the percentage of mobile phone owners in the US who were also smartphone users went from 2 percent to 79 percent in the ten-year span between 2005 and 2015 [4]. In recent years, smartphone ownership has also skyrocketed in emerging economies such as Turkey, Malaysia, Chile, and Brazil [5].

Figure 1.1: Ad for telephones in Dagens Nyheter (Today’s News), published on October 13, 1877.

As opposed to more simple mobile phones such as feature phones, smartphones have the ability to run third-party software known as applications or “apps”. Normally, a smartphone is also equipped with a wide variety of sensors, including accelerometers, gyroscopes, magnetometers, barometers, satellite positioning receivers, and microphones. Hence, by combining easily upgradeable software with a diverse set of sensors, smartphones can act as versatile and convenient substitutes for computers, personal stereos, navigation devices, compasses, digital voice recorders, remote controls, etc. Additionally, smartphones can be used for at-home medical services, barcode scanning and interactive shopping, augmented reality gaming, wallet services, search and rescue services, tracking of public transport, and driver profiling [6].

To realize these applications, it is typically necessary to formulate and solve a statistical estimation problem. More often than not, this is a sensor fusion problem involving asynchronous measurements gathered from sensors of varying modalities. In other words, there is a great need for algorithms that can efficiently manipulate measurements and data from smartphone sensors and other available data sources. The objective of these algorithms is to supply humans and machines with valuable and easily interpreted information.



Figure 1.2: Smartphone in (a) portrait mode and (b) landscape mode.

Commonly, the algorithms must consider both physical laws and the characteristics of human behavior. As an example, Figure 1.2 illustrates the standard smartphone capability of alternating between landscape and portrait mode depending on how the smartphone is held. This is made possible by utilizing measurements from smartphone-embedded accelerometers. Generally, accelerometers measure specific force, i.e., the acceleration relative to free-fall. Assuming that the smartphone is not accelerating with respect to earth (as is the case at standstill), the measurements gathered from the smartphone’s three-axis accelerometer at a single sampling instance can be modeled as

$$ a = g + b_a + \varepsilon_a. \qquad (1.1) $$

Here, $g$ denotes the normal force (that cancels the gravity force) per unit mass in the coordinate frame of the accelerometers, $b_a$ denotes the accelerometer bias, and $\varepsilon_a$ denotes random noise. Now, let us consider the case when the accelerometer triad has its three coordinate axes pointing to the right (as seen from a user facing the display in portrait mode), pointing upwards along the display, and pointing in the direction of the display. Denoting the magnitude of the gravity force by $g$, it then holds that $g = [0 \;\, g \;\, 0]^{\top}$ and $g = [-g \;\, 0 \;\, 0]^{\top}$ for the smartphones in Figure 1.2 (a) and Figure 1.2 (b), respectively. Hence, assuming that the bias and noise terms can either be modeled in a statistical manner or neglected, it is possible to alternate between different screen orientations by comparing the accelerometer measurements with the measurements that would be expected when the smartphone is in a perfect vertical or horizontal orientation. The described process could also be fine-tuned to consider the user’s preferences with regard to, e.g., response time and sensitivity.

Although accelerometers provide some information about a static smartphone’s orientation, a smartphone’s full three-dimensional orientation can typically not be estimated solely using accelerometer measurements. This is easily realized by observing that the accelerometers will measure the same specific force regardless of the smartphone’s orientation around the axis that is aligned with the gravity force. Thus, to estimate the smartphone’s three-dimensional orientation (these estimates may be of use in, e.g., augmented reality applications), we would have to consider other sensors (possibly in combination with accelerometers). One alternative is to fuse accelerometer and magnetometer measurements. In this context, the interpretation of the magnetometer measurements is analogous to that of the accelerometer measurements. However, instead of measuring the direction of the gravity force with respect to the smartphone frame, the magnetometers measure the direction of the earth’s magnetic field (disregarding local magnetic disturbances) with respect to the smartphone frame. In this way, the magnetometers enable us to resolve the smartphone’s orientation along the dimension that is left unobservable by the accelerometers. To increase the precision when tracking a time-varying orientation, it is possible to also incorporate gyroscope measurements. This is the basis of an attitude and heading reference system (AHRS) [7].
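Returning to the screen-orientation example, the sketch below illustrates the comparison described above. It is an illustration under the stated assumptions (the axis convention, the expected vectors $[0 \;\, g \;\, 0]^{\top}$ and $[-g \;\, 0 \;\, 0]^{\top}$, and a hypothetical distance threshold), not code from the thesis.

```python
import numpy as np

G = 9.81  # magnitude of the gravity force [m/s^2]

def screen_orientation(a, threshold=2.0):
    """Classify a three-axis accelerometer sample a = (a_x, a_y, a_z).

    Returns 'portrait', 'landscape', or None when the sample is too far from
    either of the expected specific-force vectors (e.g., while accelerating).
    The threshold trades off response time against sensitivity.
    """
    a = np.asarray(a, dtype=float)
    expected = {
        "portrait": np.array([0.0, G, 0.0]),    # display pointing up along +y
        "landscape": np.array([-G, 0.0, 0.0]),  # display pointing up along -x
    }
    mode, dist = min(
        ((name, np.linalg.norm(a - g_exp)) for name, g_exp in expected.items()),
        key=lambda item: item[1],
    )
    return mode if dist < threshold else None

# Example: a phone held upright and at rest
print(screen_orientation([0.3, 9.7, -0.2]))  # -> 'portrait'
```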

Next, we describe the sensor fusion problems that are addressed in this thesis.

1.2 Contributions and Outline

This thesis considers sensor fusion problems within smartphone-based vehicle telematics. Chapter 2 reviews estimation theoretical concepts used throughout the thesis, while Chapter 3 is devoted to a general parameter identification problem. In Chapter 4, we review and categorize previous work within smartphone-based vehicle telematics, while Chapter 5 describes the mathematical basis of global navigation satellite system (GNSS)-aided inertial navigation. Chapters 6-8 consider sensor fusion problems that arise due to the smartphone’s mobility within the vehicle. In Chapter 9, we study fusion of measurements from GNSS receivers and in-vehicle sensors to estimate a vehicle’s speed. In addition to its importance for smartphone-based sensor setups, the problem considered in Chapter 9 is also highly relevant for general low-cost telematics solutions. The thesis is concluded in Chapter 10. Overall, the idea is to transition from theoretical analysis (Chapters 2 and 3), to surveys and reviews on telematics and navigation (Chapters 4 and 5), to application-specific contributions within smartphone-based vehicle telematics (Chapters 6-9). Although Chapters 2, 4, and 5 should be seen as an introduction to Chapters 6-9, all chapters can be read independently.

It will be assumed that the reader is familiar with basic linear algebra and probability theory. While the mathematical notation in different chapters is sometimes similar, it does in general not carry over between chapters. In the following, we give a more detailed description of the individual chapters.

Chapter 2: Estimation Theory

Chapter 2 briefly introduces fundamental estimation theoretical concepts that are used in later chapters. This includes point estimators, state-space models, the Cramér-Rao bound (CRB), and the expectation-maximization (EM) algorithm.

Chapter 3: Alternative EM Algorithms for Nonlinear State-space Models

Chapter 3 considers the problem of estimating the parameters of a nonlinear state-space model with additive Gaussian noise. To begin with, we review the well-known EM gradient algorithm, which performs an approximate maximization based on a single iteration of Newton’s method. Unfortunately, for many models, the objective function will not be concave, and Newton’s method will therefore be unsuitable. Thus, we propose alternative maximization steps that instead use combinations of the Gauss-Newton method, trust region methods, and damped methods. The resulting algorithms are shown to outperform the EM gradient algorithm in a simulation study using standard observation models in tracking and localization. The chapter is based on:

J. Wahlström, J. Jaldén, I. Skog, and P. Händel, “Alternative EM Algorithms for Nonlinear State-space Models,” in preparation.

Chapter 4: Smartphone-based Vehicle Telematics

Chapter 4 serves as an introduction to the field of smartphone-based vehicle telematics. The chapter begins with a description of the top-level design of a smartphone-based telematics system and a summary of notable academic and commercial projects. We then review the available sensors and considerations regarding energy consumption and human-machine interfaces. The section on services and applications discusses navigation, transportation mode classification, cooperative intelligent transportation systems, cloud computing, driver behavior classification, and road condition monitoring. All topics are discussed with a focus on the unique challenges of smartphone-based implementations, and references for further reading are provided in an easily accessible manner. The chapter is based on:

J. Wahlström, I. Skog, and P. Händel, “Smartphone-based Vehicle Telematics — A Ten-Year Anniversary,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 10, pp. 2802-2825, Oct. 2017.

Chapter 5: GNSS-aided Inertial Navigation Systems

In Chapter 5, we provide a concise review of GNSS-aided inertial navigation systems (INSs). The chapter begins with an example in one dimension, and then proceeds to examine the full three-dimensional case.

Chapter 6: IMU Alignment for Smartphone-based Automotive Navigation

Chapter 6 considers the problem of implementing a smartphone-based GNSS-aided INS which provides estimates of a vehicle’s dynamic state without any a priori knowledge of the relative orientation of the smartphone and the vehicle. The main contribution lies in extending the state-space model of the standard GNSS-aided INS, thereby enabling simultaneous vehicle navigation and smartphone-to-vehicle alignment. The accuracy of the estimated vehicle attitude is shown to be on the order of 2 [°] for each Euler angle, which is sufficient to enable a reduction in the position error growth during GNSS outages based on the employment of non-holonomic constraints. The chapter is based on:

J. Wahlström, I. Skog, and P. Händel, “IMU Alignment for Smartphone-based Automotive Navigation,” in IEEE Conference on Information Fusion, Washington, DC, Jul. 2015, pp. 1437-1443.

Chapter 7: IMU-based Smartphone-to-Vehicle Positioning

In Chapter 7, we study the problem of using inertial measurements to position one or more smartphones with respect to a vehicle-fixed accelerometer triad. Specifically, we exploit the fact that the specific force, measured by smartphone-embedded accelerometers, varies with the smartphone’s position in the vehicle. The performance of an unscented Kalman filter is compared to the CRB, and analytical CRBs are derived to gain further insight into the estimation problem. A field study is conducted with seven smartphones, resulting in a root-mean-square error (RMSE) below 0.5 [m] in the horizontal plane. The proposed method is the first to enable joint estimation in all spatial directions using an arbitrary number of smartphones. The chapter is based on:

J. Wahlström, I. Skog, P. Händel, and A. Nehorai, “IMU-based Smartphone-to-Vehicle Positioning,” IEEE Transactions on Intelligent Vehicles, vol. 1, no. 2, pp. 139-147, Jun. 2016.

Chapter 8: Smartphone Placement within Vehicles

Chapter 8 uses kernel-based k-means clustering to infer the placement of smartphones within vehicles. Clustering features are computed using data from smartphone sensors and a vehicle-fixed sensor tag equipped with an accelerometer triad. Each data point is associated with a vehicle trip segment, and the segments are clustered into fifteen different states corresponding to the smartphone being rigidly mounted on a cradle, placed on the passenger seat, held by hand, etc. The most distinctive feature distributions are shown to be associated with clusters of segments where the smartphone was held by hand. These segments generally display higher gyroscope variances, lower maximum speeds, lower tag-smartphone correlations, and shorter segment durations. The chapter is based on:

J. Wahlström, I. Skog, P. Händel, B. Bradley, S. Madden, and H. Balakrishnan, “Smartphone Placement within Vehicles,” submitted to IEEE Transactions on Intelligent Vehicles.

Chapter 9: Fusion of OBD and GNSS Measurements of Speed

Chapter 9 deals with the problem of fusing speed measurements from GNSS receivers and in-vehicle sensors accessed through the on-board diagnostics (OBD) system to jointly estimate a vehicle’s speed and the scale factor of the wheel speed sensors. Maximum likelihood and maximum a posteriori estimators are derived under the assumptions of synchronous and asynchronous samples, respectively. Further, it is shown how to estimate model parameters using the EM algorithm. The estimators are evaluated using real-world vehicle dynamics and simulated measurement errors. The experiments align well with the CRB, and the proposed estimators are shown to provide a significantly better performance than can be achieved by either of the sensors individually. The chapter is based on:

J. Wahlström, I. Skog, R. Larsson Nordström, and P. Händel, “Fusion of OBD and GNSS Measurements of Speed,” submitted to IEEE Transactions on Instrumentation and Measurement.

Chapter 10: Conclusions

In the last chapter, we summarize the contributions of the thesis and provide an outlook on future research in the area.

The publications that form the basis of Chapters 3, 4, 6, 7, 8, and 9 were all collaborations with multiple authors. In all cases, I developed the inference algorithms, implemented the algorithms, and wrote the papers. My co-authors contributed with proofreading and discussions regarding technical details and the exposition. Moreover, I collected the data that were used in Chapters 6 and 7. The data in Chapter 8 were collected by Cambridge Mobile Telematics, and the first and second data sets in Chapter 9 were collected by Robin Larsson Nordström and myself, respectively.

Table 1.1 illustrates the connections between Chapters 2-9 by summarizing recurring estimation concepts and sensors.

              Estimation Concepts        Sensors
Ch      NF†    CRB    EM      IMU‡   GNSS   OBD     Smartphone-focused
 2       X      X      X
 3       X             X
 4                              X      X      X            X
 5       X                      X      X
 6       X                      X      X                   X
 7       X      X               X                          X
 8                              X      X                   X
 9              X      X               X      X

† Nonlinear filtering. ‡ Inertial measurement unit.

Table 1.1: Overview of the estimation concepts and sensors that are focused on in the respective chapters.

1.3 Publications Outside the Thesis

This section lists papers that were completed during my PhD studies, but were excluded from this thesis for thematic or other reasons.

I was involved in the completion of two surveys on smartphone-based insur- ance telematics:

P. Händel, I. Skog, J. Wahlström, F. Bonawiede, R. Welch, J. Ohlsson, and M. Ohlsson, “Insurance Telematics: Opportunities and Challenges with the Smartphone Solution,” IEEE Intelligent Transportation Systems Magazine, vol. 5, no. 4, pp. 57-70, Oct. 2014.

J. Wahlström, I. Skog, and P. Händel, “Driving Behavior Analysis for Smartphone-based Insurance Telematics,” in ACM Workshop on Physical Analytics, Florence, Italy, May 2015.

There were two publications studying the detection of vehicle cornering for insurance purposes using only GNSS measurements:

J. Wahlström, I. Skog, and P. Händel, “Detection of Dangerous Cornering in GNSS Data Driven Insurance Telematics,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3073-3083, Dec. 2015.

J. Wahlström, I. Skog, and P. Händel, “Risk Assessment of Vehicle Cornering Events in GNSS Data Driven Insurance Telematics,” in IEEE Conference on Intelligent Transportation Systems, Qingdao, China, Oct. 2014, pp. 3132-3137.

My stay at Porto University resulted in a paper on map-aided dead-reckoning using data from OpenStreetMap. The results were interpreted from a privacy perspective.

J. Wahlström, I. Skog, J. G. P. Rodrigues, P. Händel, and A. Aguiar, “Map-aided Dead-reckoning using only Measurements of Speed,” IEEE Transactions on Intelligent Vehicles, vol. 1, no. 3, pp. 244-253, Sep. 2016.

Outside of the field of telematics, I had one publication studying estimation in networks from a signal processing perspective: 10 1 Introduction

J. Wahlström, I. Skog, P. S. La Rosa, P. Händel, and A. Nehorai, “The β-model for Random Graphs — Maximum Likelihood, Cramér-Rao Bounds, and Hypothesis Testing,” IEEE Transactions on Signal Processing, vol. 65, no. 12, pp. 3234-3246, Jun. 2017.

There was also one publication in cardiology, which described how to model seismocardiograms using hidden Markov models:

J. Wahlström, I. Skog, P. Händel, F. Khosrow-khavar, K. Tavakolian, P. K. Stein, and A. Nehorai, “A Hidden Markov Model for Seismocardiography,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 10, pp. 2361-2372, Oct. 2017.

Chapter 2 Estimation Theory

The objective of statistical inference is to extract information on some given set of parameters or states on the basis of observations or data. In this chapter, we briefly review point estimators in the Bayesian and classical frameworks, standard state-space models and the filters that are used to solve them, the Bayesian and classical Cramér-Rao bounds (CRB), and the expectation-maximization (EM) algorithm.

2.1 Point Estimation

We will begin by describing standard methods for point estimation, i.e., the computation of some estimate $\hat{x}$ of the quantity $x$ on the basis of the observations $y$.

2.1.1 The Bayesian Approach

In the Bayesian setting, both $x$ and $y$ are considered to be random variables, and one assumes knowledge of the joint probability density function (pdf) $p(x, y)$. Typically, both the likelihood function $p(y|x)$ and the prior $p(x)$ are available. The minimum mean square error (MMSE) estimator is defined as

$$ \hat{x}_{\mathrm{mmse}} \triangleq \arg\min_{\hat{x}} E_{p(x,y)}[(x - \hat{x})^{\top}(x - \hat{x})] \qquad (2.1) $$

where $E_p[\cdot]$ denotes the expectation with respect to the pdf $p$ and the superscript $(\cdot)^{\top}$ denotes the transpose of a matrix. Now, let $\mu_{x|y} = E_{p(x|y)}[x]$. It then holds that

$$
\begin{aligned}
E_{p(x,y)}[(x - \hat{x})^{\top}(x - \hat{x})] &= E_{p(x,y)}[(x - \mu_{x|y})^{\top}(x - \mu_{x|y})] + E_{p(x,y)}[(\mu_{x|y} - \hat{x})^{\top}(\mu_{x|y} - \hat{x})] \\
&\geq E_{p(x,y)}[(x - \mu_{x|y})^{\top}(x - \mu_{x|y})]
\end{aligned}
\qquad (2.2)
$$

where we have used that $E_{p(x,y)}[(x - \mu_{x|y})^{\top}(\mu_{x|y} - \hat{x})] = 0$. Hence, the mean square error (MSE) is minimized by

$$ \hat{x}_{\mathrm{mmse}} = E_{p(x|y)}[x], \qquad (2.3) $$

i.e., the MMSE estimator is the conditional expectation of $x$ given $y$.

The computation of the MMSE estimate often requires high-dimensional integration, and the estimator can therefore easily become intractable. A potential remedy is to instead compute the mode of the conditional pdf. This yields

$$ \hat{x}_{\mathrm{map}} \triangleq \arg\max_{x} p(x|y) = \arg\max_{x} \left( \ln p(y|x) + \ln p(x) \right) \qquad (2.4) $$

which is known as the maximum a posteriori (MAP) estimator. Here, $\ln$ denotes the natural logarithm.

2.1.2 The Classical Approach

If there is no prior information, $x$ can be considered to be an unknown deterministic parameter. In this case, $y$ is assumed to be a random variable distributed according to $p(y|x)$, and the conditional MSE can be written as

$$ E_{p(y|x)}[(x - \hat{x})^{\top}(x - \hat{x})] = \mathrm{tr}(\mathrm{Cov}(\hat{x})) + b^{\top} b \qquad (2.5) $$

where $\mathrm{Cov}(\hat{x}) \triangleq E_{p(y|x)}[(\hat{x} - E_{p(y|x)}[\hat{x}])(\hat{x} - E_{p(y|x)}[\hat{x}])^{\top}]$, $b \triangleq x - E_{p(y|x)}[\hat{x}]$, and $\mathrm{tr}(\cdot)$ denotes the trace of a matrix. In other words, the conditional MSE is the sum of the estimator’s variance and squared bias along all dimensions. An estimator that aims to minimize the conditional MSE will generally depend on $x$ through the bias and therefore be unrealizable. One way to circumvent this problem is to instead constrain the estimator to the class of unbiased estimators, i.e., estimators for which $E_{p(y|x)}[\hat{x}] = x$ for all $x$, while minimizing the variance. This gives the minimum-variance unbiased (MVU) estimator

$$ \hat{x}_{\mathrm{mvu}} \triangleq \arg\min_{\hat{x}} E_{p(y|x)}[(x - \hat{x})^{\top}(x - \hat{x})], \quad \text{subject to } E_{p(y|x)}[\hat{x}] = x \;\; \forall x. \qquad (2.6) $$

When the MVU does not exist, an alternative is to instead use the maximum likelihood (ML) estimator

$$ \hat{x}_{\mathrm{ml}} \triangleq \arg\max_{x} p(y|x). \qquad (2.7) $$

Since $p(x|y) \propto p(y|x) p(x)$, the ML estimator can be seen to be equivalent to the MAP estimator with an uninformative prior. For further details on point estimation, refer to [8] and [9].
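As a simple worked example (not taken from the thesis), consider a scalar observation of a Gaussian quantity in Gaussian noise,

$$ y = x + \varepsilon, \qquad x \sim \mathcal{N}(0, \sigma_x^2), \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2). $$

The posterior $p(x|y)$ is Gaussian, so the MMSE and MAP estimators coincide,

$$ \hat{x}_{\mathrm{mmse}} = \hat{x}_{\mathrm{map}} = \frac{\sigma_x^2}{\sigma_x^2 + \sigma^2}\, y, \qquad \hat{x}_{\mathrm{ml}} = y, $$

and the ML estimate is recovered from the MAP estimate in the limit of an uninformative prior, $\sigma_x^2 \to \infty$.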

2.2 Filtering

A discrete-time state-space model is a model where the time-development of some dynamic states, and the relation between the dynamic states and some observed output, can be described by a system of first-order difference equations. The objective of filtering algorithms is to recursively estimate the state at a given sampling instance using measurements obtained up to the same sampling instance. In this section, we consider filters for linear state-space models and nonlinear state-space models with additive noise.

2.2.1 The Kalman Filter

Consider the linear state-space model

$$ x_{k+1} = F_k x_k + w_k, \qquad (2.8a) $$

$$ y_k = H_k x_k + \varepsilon_k, \qquad (2.8b) $$

where the covariance of our initial estimate $\hat{x}_{0|0}$ is $P_{0|0}$. The process noise $w_k$ and measurement noise $\varepsilon_k$ are assumed to be zero-mean with known covariance matrices $Q_k$ and $R_k$, respectively. All noise terms are assumed to be temporally and mutually uncorrelated. The matrices $F_k$ and $H_k$ are the state transition matrix and the measurement matrix, respectively, while the subindex $k$ is used to denote quantities at sampling instance $k$.

The Kalman filter (KF) recursions provide prior and posterior state estimates with associated error covariance matrices. The prior estimate $\hat{x}_{k+1|k}$ is an estimate of $x_{k+1}$ using all observations up to sampling instance $k$, and the posterior estimate $\hat{x}_{k+1|k+1}$ is an estimate of $x_{k+1}$ using all observations up to sampling instance $k+1$. The associated covariance matrices are denoted $P_{k+1|k}$ and $P_{k+1|k+1}$, respectively. The full set of recursions can be written as

$$
\begin{aligned}
\hat{x}_{k+1|k} &= F_k \hat{x}_{k|k}, \\
P_{k+1|k} &= F_k P_{k|k} F_k^{\top} + Q_k, \\
K_{k+1} &= P_{k+1|k} H_{k+1}^{\top} \left( H_{k+1} P_{k+1|k} H_{k+1}^{\top} + R_{k+1} \right)^{-1}, \\
\hat{x}_{k+1|k+1} &= \hat{x}_{k+1|k} + K_{k+1} \left( y_{k+1} - H_{k+1} \hat{x}_{k+1|k} \right), \\
P_{k+1|k+1} &= \left( I - K_{k+1} H_{k+1} \right) P_{k+1|k}.
\end{aligned}
\qquad (2.9)
$$

Here, $I$ is an identity matrix with the same dimension as the state vector. The matrix $K_{k+1}$ is known as the Kalman gain. The KF is the recursive linear minimum mean square error (LMMSE) estimator. In other words, the KF minimizes the MSE among all filters with estimates that are a linear combination of the measurements. Under the assumption of Gaussian noise, the LMMSE and the MMSE estimators coincide, and the KF provides the MMSE estimates.
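As an illustration, the recursions in (2.9) translate directly into code. The sketch below, which is not part of the thesis, implements one predict/update cycle with NumPy; the variable names mirror the notation above.

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One Kalman filter cycle, following (2.9).

    x, P : posterior estimate and covariance at sampling instance k
    y    : measurement at sampling instance k+1
    Returns the posterior estimate and covariance at sampling instance k+1.
    """
    # Time update (prediction)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Measurement update (correction)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_post = x_pred + K @ (y - H @ x_pred)
    P_post = (np.eye(len(x)) - K @ H) @ P_pred
    return x_post, P_post
```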

2.2.2 Nonlinear Filtering

In practical applications, the state-space model often needs to include nonlinearities. Assuming zero-mean additive noise, the state-space model can then be written as

$$ x_{k+1} = f(x_k) + w_k, \qquad (2.10a) $$

$$ y_k = h(x_k) + \varepsilon_k. \qquad (2.10b) $$

The optimal filter (in the mean-square sense) for this model is generally intractable. One approach is to linearize the transition and measurement models around the posterior and prior estimates, respectively. It is then possible to apply the optimal KF recursions to the resulting linearized model. This is known as the extended Kalman filter (EKF).

An alternative approximation is provided by the unscented Kalman filter (UKF). In this method, the state estimates are propagated through the state-space model by the use of a standardized number of deterministic samples, so-called sigma points. The UKF generally displays better performance than the EKF in the presence of severe nonlinearities [10]. Refer to [11] for a discussion on links between the EKF and the UKF.

Instead of relying on model approximations, the particle filter (PF) numerically approximates the optimal solution to the exact model. Specifically, the filtering density $p(x_1, \ldots, x_k \mid y_1, \ldots, y_k)$ is approximated by the use of stochastic samples, so-called particles. In its simplest form, the PF propagates particles by drawing samples from the state-transition model. The weights of the particles are then updated based on the likelihood of each particle conditioned on the new observations. While the EKF and UKF are limited to unimodal posteriors, the particle representation enables the PF to approximate any posterior distribution. Typically, the PF must include a resampling step that eliminates particles with low weights and multiplies particles with high weights. The number of particles required for a successful PF increases rapidly with the state dimension, and the PF is therefore a poor choice for high-dimensional problems. The KF, EKF, UKF, and PF all have their well-established smoothing counterparts. The smoothing algorithms estimate the state at a given sampling instance using measurements up until a later sampling instance.

In some applications, the nonlinear state-space model has a linear substructure. This can be exploited by marginalizing the corresponding linear state variables and then estimating these using the KF. The remaining state variables are estimated using a PF. This is known as a marginalized particle filter (MPF) or a Rao-Blackwellized particle filter (RBPF) [12]. In comparison to the full PF applied to all state variables, the MPF tends to reduce both the estimation variance and the computational complexity. Refer to [13, 14, 15] for further details on filtering.
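To make the description of the PF concrete, the following sketch (not from the thesis) implements a bootstrap particle filter with multinomial resampling for the model (2.10). The functions f and h, the noise covariances Q and R, and the initial particle set are assumed to be supplied by the user, with f and h vectorized over particles.

```python
import numpy as np

def bootstrap_pf(y, f, h, Q, R, x0_particles):
    """Bootstrap particle filter for x_{k+1} = f(x_k) + w_k, y_k = h(x_k) + e_k.

    y            : measurements, shape (N, ny)
    f, h         : vectorized transition and measurement functions
    Q, R         : process and measurement noise covariances
    x0_particles : initial particles, shape (M, nx)
    Returns the filtered mean at each sampling instance.
    """
    particles = x0_particles.copy()
    M = particles.shape[0]
    R_inv = np.linalg.inv(R)
    estimates = []
    for yk in y:
        # Propagate particles by drawing samples from the state-transition model
        particles = f(particles) + np.random.multivariate_normal(
            np.zeros(Q.shape[0]), Q, size=M)
        # Weight by the measurement likelihood (Gaussian, up to a constant)
        innov = yk - h(particles)
        logw = -0.5 * np.einsum('ij,jk,ik->i', innov, R_inv, innov)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(w @ particles)
        # Multinomial resampling to counteract weight degeneracy
        idx = np.random.choice(M, size=M, p=w)
        particles = particles[idx]
    return np.array(estimates)
```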

2.3 The Cramér-Rao Bound

A common way to assess the performance of an estimator is to compare its MSE matrix to the CRB. The CRB provides a lower bound on the MSE matrix of estimators, and can be computed directly from the likelihood function. What is more, the CRB can often be used to characterize the estimation problem in terms of its underlying parameters. In this section, we review Bayesian and classical CRBs. Special attention is given to CRBs for filtering and CRBs under parametric constraints.

2.3.1 The Bayesian Cramér-Rao Bound

Under weak regularity conditions on $p(y|x)$ and $p(x)$ [16], the Bayesian (posterior) CRB says that any estimator $\hat{x}$ must satisfy

$$ E_{p(x,y)}[(x - \hat{x})(x - \hat{x})^{\top}] \succeq \left( E_{p(x)}[I(x)] + I_{\mathrm{prio}}(x) \right)^{-1} \qquad (2.11) $$

where

$$ I(x) \triangleq E_{p(y|x)}[s_{y|x} s_{y|x}^{\top}], \qquad (2.12a) $$

$$ I_{\mathrm{prio}}(x) \triangleq E_{p(x)}[s_{x} s_{x}^{\top}], \qquad (2.12b) $$

and we have used $A \succeq B$ to denote that $A - B$ is positive semidefinite. Here, $s_{y|x} \triangleq (\partial_x \ln p(y|x))^{\top}$ and $s_x \triangleq (\partial_x \ln p(x))^{\top}$, where $\partial_{(\cdot)}$ is used to denote the Jacobian with respect to the subindex. The matrix $I(x)$ is known as the Fisher information matrix (FIM).

2.3.2 The Classical Cramér-Rao Bound

Under weak regularity conditions on $p(y|x)$ [8], the classical CRB says that any unbiased estimator $\hat{x}$ must satisfy

$$ E_{p(y|x)}[(x - \hat{x})(x - \hat{x})^{\top}] \succeq I(x)^{-1}. \qquad (2.13) $$

Estimators that attain this bound are said to be efficient. An efficient estimator is also the MVU estimator. Since the bound only holds for unbiased estimators, it follows that the left-hand side can be replaced with $\mathrm{Cov}(\hat{x})$. In practice, the CRB is often reduced to the scalar inequality

$$ E_{p(y|x)}[(x - \hat{x})^{\top}(x - \hat{x})] \geq \mathrm{tr}(I(x)^{-1}). \qquad (2.14) $$
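As a standard textbook illustration (not from the thesis), consider $N$ independent observations $y_n \sim \mathcal{N}(x, \sigma^2)$ of an unknown scalar mean $x$. The score and Fisher information are

$$ \partial_x \ln p(y|x) = \frac{1}{\sigma^2} \sum_{n=1}^{N} (y_n - x), \qquad I(x) = \frac{N}{\sigma^2}, $$

so (2.13) states that any unbiased estimator satisfies $\mathrm{Var}(\hat{x}) \geq \sigma^2 / N$. The sample mean attains this bound and is therefore both efficient and the MVU estimator.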

2.3.3 The Cramér-Rao Bound for State-Space Models

Consider the nonlinear state-space model with additive noise from equation (2.10). Further, assume that $w_k$ and $\varepsilon_k$ are white Gaussian noise processes with covariance matrices $Q_k$ and $R_k$, respectively. The classical CRBs for the corresponding filtering and prediction problems are then given by the Riccati recursion for the estimation covariance in the EKF. The linearized state transition matrices and measurement matrices are obtained by linearizing around the true state realization. In other words, it holds that

$$ E_{p(y_{1:k} | x_{0:k+1})}[(x_{k+1} - \hat{x}_{k+1|k})(x_{k+1} - \hat{x}_{k+1|k})^{\top}] \succeq P_{k+1|k}, \qquad (2.15a) $$

$$ E_{p(y_{1:k+1} | x_{0:k+1})}[(x_{k+1} - \hat{x}_{k+1|k+1})(x_{k+1} - \hat{x}_{k+1|k+1})^{\top}] \succeq P_{k+1|k+1}, \qquad (2.15b) $$

where $y_{1:k} \triangleq \{y_1, \ldots, y_k\}$ and $x_{0:k} \triangleq \{x_0, \ldots, x_k\}$ for any $k$, while

$$
\begin{aligned}
P_{k+1|k} &= F_k P_{k|k} F_k^{\top} + Q_k, \\
P_{k+1|k+1} &= P_{k+1|k} - P_{k+1|k} H_{k+1}^{\top} \left( H_{k+1} P_{k+1|k} H_{k+1}^{\top} + R_{k+1} \right)^{-1} H_{k+1} P_{k+1|k}.
\end{aligned}
\qquad (2.16)
$$

The linearized filter matrices are here defined as $F_k \triangleq \partial_{x_k} f(x_k)$ and $H_k \triangleq \partial_{x_k} h(x_k)$. Refer to [17] for discussions on the Bayesian CRB for filtering.
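The recursion in (2.15)-(2.16) can be evaluated numerically along a given state trajectory. The sketch below is an illustration rather than code from the thesis; the Jacobian functions jac_f and jac_h are assumed to be supplied by the user and are evaluated at the true states, as described above.

```python
import numpy as np

def filtering_crb(x_traj, jac_f, jac_h, Q, R, P0):
    """Recursive CRB for the filtering problem, following (2.16).

    x_traj       : true state realization, shape (N+1, nx)
    jac_f, jac_h : functions returning the Jacobians of f and h at a given state
    Returns the sequence of filtering bounds P_{k|k}, k = 1, ..., N.
    """
    P = P0
    bounds = []
    for k in range(len(x_traj) - 1):
        Fk = jac_f(x_traj[k])          # linearization around the true state
        Hk1 = jac_h(x_traj[k + 1])
        P_pred = Fk @ P @ Fk.T + Q     # prediction bound P_{k+1|k}
        S = Hk1 @ P_pred @ Hk1.T + R
        P = P_pred - P_pred @ Hk1.T @ np.linalg.inv(S) @ Hk1 @ P_pred
        bounds.append(P)               # filtering bound P_{k+1|k+1}
    return bounds
```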

2.3.4 The Cramér-Rao Bound under Parametric Constraints

Let us once again consider the classical CRB presented in Section 2.3.2. If there are equality constraints that confine the parameters $x$ to some subset of the parameter space, this can result in a reduction of the CRB. Specifically, assume that some equality constraints

$$ g(x) = 0 \qquad (2.17) $$

are at our disposal. Here, $0$ denotes a zero vector with a dimension smaller than the parameter dimension. The Fisher information for the constrained problem can then be written as [18]

$$ I_{\mathrm{constr}}(x) = \left( U \left( U^{\top} I(x) U \right)^{-1} U^{\top} \right)^{-1} \qquad (2.18) $$

where $I(x)$ is the Fisher information for the corresponding unconstrained problem and the columns in $U$ form an orthogonal basis for the null space of $\partial_x g(x)$. For further details on CRBs, refer to [8] and [19].

2.4 Expectation-Maximization

The likelihood function is in some models dependent on hidden or latent variables. In these situations, the computation of ML or MAP estimates requires marginalization over the latent variables, which easily becomes computationally intractable. This problem is commonly attacked by employing the EM algorithm.

2.4.1 The Expectation-Maximization Algorithm

The EM algorithm separates an estimation problem into two steps: the expectation (E) step and the maximization (M) step. Given some preliminary estimate $\theta^{(i)}$, the E step consists of computing

$$ Q(\theta, \theta^{(i)}) \triangleq E_{p_{\theta^{(i)}}(x|y)}[\ln p_{\theta}(x, y)] \qquad (2.19a) $$

where $\theta$ is a free variable, while $x$ and $y$ denote the latent and observed variables, respectively. In the M step, one attempts to find

$$ \theta^{(i+1)} = \arg\max_{\theta} Q(\theta, \theta^{(i)}). \qquad (2.19b) $$

These two steps are then iterated until convergence. As shown in [20], the EM algorithm always increases or maintains the likelihood function, so that

$$ p_{\theta^{(i+1)}}(y) \geq p_{\theta^{(i)}}(y). \qquad (2.20) $$

If the M step is computationally expensive, it can be replaced with a single iteration of Newton’s method, giving

$$ \theta^{(i+1)} = \theta^{(i)} - \partial_{\theta}^{2} Q(\theta^{(i)}, \theta^{(i)})^{-1} \left( \partial_{\theta} Q(\theta^{(i)}, \theta^{(i)}) \right)^{\top} \qquad (2.21) $$

where $\partial_{\theta}^{2}$ denotes the Hessian with respect to $\theta$. This is known as the EM gradient algorithm.

The EM algorithm outlined above is designed to converge to the ML estimate $\hat{\theta}_{\mathrm{ml}} \triangleq \arg\max_{\theta} p_{\theta}(y)$. If the MAP estimate is sought, the term $\ln p(\theta)$ should be added to the M step.

2.4.2 A Linear State-Space Model

To illustrate the EM iterations, consider the state-space model

$$ x_{k+1} = F x_k + w_k, \qquad (2.22a) $$

$$ y_k = H x_k + \varepsilon_k, \qquad (2.22b) $$

where the process noise $w_k$ and measurement noise $\varepsilon_k$ are assumed to be Gaussian and white with covariance matrices $Q$ and $R$, respectively. Obviously, this is a special case of the linear state-space model from Section 2.2.1.

Now, assume that we wish to estimate $\theta = \{F, H, Q, R\}$ from the measurements $y_{1:N}$, and that our $i$th estimate is $\theta^{(i)} = \{F^{(i)}, H^{(i)}, Q^{(i)}, R^{(i)}\}$. The M step can then be derived as [21]

$$ F^{(i+1)} = \left( \sum_{k=1}^{N} E_{p_{\theta^{(i)}}(x_{k-1}, x_k | y_{1:N})}[x_k x_{k-1}^{\top}] \right) \left( \sum_{k=1}^{N} E_{p_{\theta^{(i)}}(x_{k-1} | y_{1:N})}[x_{k-1} x_{k-1}^{\top}] \right)^{-1}, \qquad (2.23a) $$

$$ H^{(i+1)} = \left( \sum_{k=1}^{N} y_k E_{p_{\theta^{(i)}}(x_k | y_{1:N})}[x_k^{\top}] \right) \left( \sum_{k=1}^{N} E_{p_{\theta^{(i)}}(x_k | y_{1:N})}[x_k x_k^{\top}] \right)^{-1}, \qquad (2.23b) $$

$$ Q^{(i+1)} = \frac{1}{N} \left( \sum_{k=1}^{N} E_{p_{\theta^{(i)}}(x_k | y_{1:N})}[x_k x_k^{\top}] - F^{(i+1)} \sum_{k=1}^{N} E_{p_{\theta^{(i)}}(x_{k-1}, x_k | y_{1:N})}[x_{k-1} x_k^{\top}] \right), \qquad (2.23c) $$

$$ R^{(i+1)} = \frac{1}{N} \left( \sum_{k=1}^{N} y_k y_k^{\top} - H^{(i+1)} \sum_{k=1}^{N} E_{p_{\theta^{(i)}}(x_k | y_{1:N})}[x_k] y_k^{\top} \right). \qquad (2.23d) $$

Since the smoothing densities $p_{\theta^{(i)}}(x_k | y_{1:N})$ and $p_{\theta^{(i)}}(x_{k-1}, x_k | y_{1:N})$ are Gaussian, the expectations can be computed from the means and covariances given by a Kalman smoother [13]. Refer to [20, 22, 23] for further details on the EM algorithm.
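As an illustration (not code from the thesis), the M step in (2.23) reduces to a handful of matrix sums once a Kalman smoother has produced the smoothed means, covariances, and lag-one cross-covariances. A minimal sketch under that assumption:

```python
import numpy as np

def em_m_step(y, xs, Ps, Ps_lag1):
    """M step of the EM algorithm for the linear model (2.22), following (2.23).

    y       : measurements, shape (N, ny); y[k-1] corresponds to y_k
    xs      : smoothed means E[x_k | y_1:N], shape (N+1, nx); index 0 is x_0
    Ps      : smoothed covariances Cov(x_k | y_1:N), shape (N+1, nx, nx)
    Ps_lag1 : smoothed cross-covariances Cov(x_k, x_{k-1} | y_1:N), shape (N, nx, nx)
    """
    N = y.shape[0]
    # Second moments E[x_k x_k^T] and cross moments E[x_k x_{k-1}^T]
    M = Ps + np.einsum('ki,kj->kij', xs, xs)
    C = Ps_lag1 + np.einsum('ki,kj->kij', xs[1:], xs[:-1])

    S_xx_prev = M[:-1].sum(axis=0)                 # sum_k E[x_{k-1} x_{k-1}^T]
    S_xx      = M[1:].sum(axis=0)                  # sum_k E[x_k x_k^T]
    S_cross   = C.sum(axis=0)                      # sum_k E[x_k x_{k-1}^T]
    S_yx      = np.einsum('ki,kj->ij', y, xs[1:])  # sum_k y_k E[x_k^T]
    S_yy      = y.T @ y                            # sum_k y_k y_k^T

    F_new = S_cross @ np.linalg.inv(S_xx_prev)     # (2.23a)
    H_new = S_yx @ np.linalg.inv(S_xx)             # (2.23b)
    Q_new = (S_xx - F_new @ S_cross.T) / N         # (2.23c)
    R_new = (S_yy - H_new @ S_yx.T) / N            # (2.23d)
    return F_new, H_new, Q_new, R_new
```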

Chapter 3 Alternative EM Algorithms for Nonlinear State-space Models

Abstract

The expectation-maximization algorithm is a commonly employed tool for system identification. However, for a large set of state-space models, the maximization step cannot be solved analytically. In these situations, a natural remedy is to make use of the expectation-maximization gradient algorithm, i.e., to replace the maximization step by a single iteration of Newton’s method. We propose alternative expectation-maximization algorithms that replace the maximization step with a single iteration of some other well-known optimization method. These algorithms parallel the expectation-maximization gradient algorithm while relaxing the assumption of a concave objective function. The benefit of the proposed expectation-maximization algorithms is demonstrated with examples based on standard observation models in tracking and localization.

3.1 Introduction

The problem of estimating the parameters of a nonlinear state-space model has received extensive study in the literature, and is of importance for applications within, e.g., biomedicine [24, 25], neuroscience [26], and localization [27]. The main challenge of this problem is that the likelihood function generally is intractable. Although particle-based solutions have received plenty of attention [28, 29], they often come with limitations that require extensive workarounds. For example, standard particle-based methods for evaluating the likelihood function tend to be discontinuous in the parameter vector [30]. Similarly, the option of including the parameter in the state vector of a particle filter is problematic since the augmented process does not possess any forgetting property, and hence, the variance of the particle estimates is bound to diverge. While it is to some extent possible to bypass this problem by introducing artificial dynamics for the parameter vector, these methods require a significant amount of tuning [28]. Moreover, particle-based methods are generally not suitable for high-dimensional estimation problems since their performance degrades quickly with the dimension of the problem [31].

An alternative solution to the parameter identification problem is provided by the expectation-maximization (EM) algorithm [20]. The idea of the EM algorithm is to decompose an estimation problem into two steps: the expectation (E) step, which includes finding the posterior distribution of some hidden states given the current parameter estimate; and the maximization (M) step, where the parameter estimate is updated based on the posterior distribution computed in the E step. For many models, each of these two steps is more tractable than the original problem [32]. Generally, the EM algorithm is considered to be more numerically stable than gradient-based techniques, and tends to be favored by practitioners whenever it is applicable [28].

If the M step cannot be solved analytically, a convenient simplification is to perform an approximate maximization based on a single iteration of Newton’s method. This is known as the EM gradient algorithm [33]. In this chapter, we address the problems that arise when the EM gradient algorithm is applied to a state-space model that is not guaranteed to give a concave objective function. Specifically, we propose EM algorithms that replace the standard M step with a single iteration of some alternative optimization method. For nonlinear state-space models with additive Gaussian noise, the M step can be reformulated as a stochastic nonlinear least-squares problem. This can be attacked using Gauss-Newton type methods. In addition, we explore solutions based on trust region and damped methods. To summarize, our main contribution lies in combining the idea of the EM gradient algorithm with optimization methods that are more generally applicable than Newton’s method. In this way, we maintain the simplicity of the EM gradient algorithm while circumventing problems related to the shape of the objective function. Two numerical examples are used to benchmark the proposed algorithms against the EM gradient algorithm and standard filtering methods.

3.2 Problem Formulation

Consider the state-space model

$$ x_{k+1} = f_{\theta}(x_k) + w_k, \qquad (3.1a) $$

$$ y_k = h_{\theta}(x_k) + \varepsilon_k. \qquad (3.1b) $$

Here, $x_k \in \mathbb{R}^{N_x}$ denotes the state variable and $y_k \in \mathbb{R}^{N_y}$ denotes the measurements. The noise processes $w_k \in \mathbb{R}^{N_x}$ and $\varepsilon_k \in \mathbb{R}^{N_y}$ are assumed to be Gaussian and white with positive definite covariances $Q$ and $R$, respectively. Furthermore, $\theta \in \mathbb{R}^{N_{\theta}}$ denotes a vector of unknown parameters that specifies the transition and measurement functions $f_{\theta}(\cdot)$ and $h_{\theta}(\cdot)$, respectively, and the initial state is distributed according to some distribution $p(x_0)$ that is independent of $\theta$. Throughout the chapter, the subindex $k$ is used to denote quantities at sampling instance $k$. Nonlinear state-space models of the type described by (3.1) are abundant in engineering and signal processing and have been well-studied in the literature [34, 35, 36, 37, 38]. This chapter is concerned with the problem of estimating the parameter vector $\theta$ from a set of measurements $y_{1:N} \triangleq \{y_k\}_{k=1}^{N}$. Next, we briefly review variations of the EM algorithm and describe how these can be used to solve the stated problem. The review will both clarify the relation between the methods proposed in Section 3.3 and state-of-the-art EM algorithms, and provide the necessary background to the EM algorithm.

3.2.1 The Expectation-Maximization Algorithm

The EM algorithm attempts to solve the maximum likelihood problem

$$ \hat{\theta}_{\mathrm{ml}} = \arg\max_{\theta} p_{\theta}(y_{1:N}) \qquad (3.2) $$

by writing the likelihood as if the missing data $x_{0:N} \triangleq \{x_k\}_{k=0}^{N}$ were available. The log-likelihood of the complete data is then integrated with respect to the posterior distribution $p_{\theta^{(i)}}(x_{0:N} | y_{1:N})$, where $\theta^{(i)}$ denotes the current best estimate of $\theta$. Thus, it is possible to alternate between updating the parameter estimate by maximizing the expected log-likelihood of the complete data, and using the new parameter estimate to update the posterior distribution of the missing data [20]. Formally, the E step consists of computing

$$ Q(\theta, \theta^{(i)}) \triangleq E[\ln p_{\theta}(x_{0:N}, y_{1:N})] \qquad (3.3a) $$

where we let $E[\cdot]$ denote the expectation with respect to $p_{\theta^{(i)}}(x_{0:N} | y_{1:N})$. Similarly, the M step amounts to solving

$$ \theta^{(i+1)} = \arg\max_{\theta} Q(\theta, \theta^{(i)}). \qquad (3.3b) $$

These two steps are then repeated until convergence. As shown in [20], each iteration of the EM algorithm is guaranteed to either increase or maintain the likelihood, so that

$$ p_{\theta^{(i+1)}}(y_{1:N}) \geq p_{\theta^{(i)}}(y_{1:N}). \qquad (3.4) $$

In the remainder of this section, we describe variations of the EM algorithm.

3.2.2 Extensions of the Expectation-Maximization Algorithm

There are many examples where either the E step or the M step does not have a closed-form solution [33]. In the former case, we first note that there are several different smoothers that can be used to approximate the posterior $p_{\theta^{(i)}}(x_{0:N} | y_{1:N})$ [35]. Likewise, the expectations in $Q(\theta, \theta^{(i)})$ can be approximated by averaging over samples from the posterior distribution $p_{\theta^{(i)}}(x_{0:N} | y_{1:N})$ [39]. This is known as Monte Carlo EM (MCEM). In applications where maximization is cheaper than simulation, the algorithm can be made more efficient by reusing samples from previous iterations, so-called stochastic approximation EM (SAEM) [40]. If there is no closed-form solution to the M step, it can be replaced by some method for identifying a parameter estimate $\theta^{(i+1)}$ that satisfies

$$ Q(\theta^{(i+1)}, \theta^{(i)}) \geq Q(\theta^{(i)}, \theta^{(i)}). \qquad (3.5) $$

The resulting algorithms are known as generalized EM (GEM) algorithms [20]. An example of a GEM algorithm is the expectation conditional maximization (ECM) algorithm. The ECM algorithm replaces the standard M step with a sequence of conditional maximization steps, each of which maximizes $Q(\theta, \theta^{(i)})$, but with some vector function of $\theta$ held fixed [41]. The strength of the ECM algorithm is that the conditional maximizations often have analytic solutions or at least are simpler to implement than the original M step [42]. To ensure convergence properties similar to those of the EM algorithm, the constraints are chosen so that the complete maximization resulting from a sequence of conditional maximization steps (performed in between two E steps) is over the full parameter space of $\theta$. This is referred to as the space-filling condition. Although the ECM algorithm typically converges more slowly than the EM algorithm in terms of number of iterations, it can be much faster in total computer time [23, p. 159]. The following modifications of the ECM algorithm can be used to improve its convergence properties:

• The multicycle ECM (MCECM) algorithm adds E steps in between some of the conditional maximization steps [41]. Hence, each iteration is divided into different cycles, where each cycle consists of one E step followed by an ordered set of conditional maximizations.

• The expectation conditional maximization either (ECME) algorithm replaces $Q(\theta, \theta^{(i)})$ with the actual likelihood function $p_{\theta}(y_{1:N})$ in some of the maximization steps. In [43], it was concluded that the ECME is nearly always faster than the EM and ECM algorithms in terms of number of iterations. In terms of total computer time, it can be faster by several orders of magnitude.

• The alternating expectation conditional maximization (AECM) algorithm allows the set of hidden states and the constraints to vary within and between iterations [42]. An iteration of the AECM algorithm is considered to consist of the minimal number of cycles (starting after the end of the previous iteration) that are needed to fulfill the space-filling condition. A table that clarifies how the EM, ECM, MCECM, and ECME algorithms can be seen as special cases of the AECM algorithm is provided in [42]. Although an AECM algorithm is not guaranteed to be a GEM algorithm [41], any iteration of an AECM algorithm is guaranteed to either increase or maintain the value of the likelihood function [42].

The drawbacks of the alternative M steps discussed in this subsection are that they are dependent on user design, and can require a significant amount of analytical and implementation effort [44]. Moreover, some of the maximization steps could still need to be solved numerically, and there are no general guarantees on the relative performance of these algorithms and the standard EM algorithm. We conclude this section by considering a closed-form approximate M step that typically is more straightforward to implement.

3.2.3 The Expectation-Maximization Gradient Algorithm

For nonlinear state-space models with additive Gaussian noise, the derivatives of the intermediate function $Q(\theta, \theta^{(i)})$ are readily available. It is then uncomplicated to apply the EM gradient algorithm, i.e., to replace the M step in the EM algorithm with a single iteration of Newton’s method. Specifically, the EM gradient algorithm makes use of the second-order Taylor expansion [33]

$$ Q(\theta, \theta^{(i)}) \approx \widehat{Q}(\theta, \theta^{(i)}) \triangleq Q(\theta^{(i)}, \theta^{(i)}) + \partial_{\theta} Q(\theta^{(i)}, \theta^{(i)}) (\theta - \theta^{(i)}) + \tfrac{1}{2} (\theta - \theta^{(i)})^{\top} \partial_{\theta}^{2} Q(\theta^{(i)}, \theta^{(i)}) (\theta - \theta^{(i)}) \qquad (3.6) $$

where $\partial_{\theta}$ and $\partial_{\theta}^{2}$ denote the Jacobian and Hessian with respect to $\theta$, respectively. All derivatives are taken with respect to the first argument in $Q(\theta, \theta^{(i)})$.

The parameter update is then defined as

$$ \theta^{(i+1)} = \arg\max_{\theta} \widehat{Q}(\theta, \theta^{(i)}) = \theta^{(i)} - \partial_{\theta}^{2} Q(\theta^{(i)}, \theta^{(i)})^{-1} \left( \partial_{\theta} Q(\theta^{(i)}, \theta^{(i)}) \right)^{\top}. \qquad (3.7) $$

The EM gradient algorithm is often favored for computational reasons since it only requires evaluations at $\theta = \theta^{(i)}$ and does not include any line search. At the same time, the EM gradient algorithm has local properties (convergence rate, monotonicity, etc.) that are similar to those of the EM algorithm [33]. A modification of the EM gradient algorithm was proposed in [45], which incorporated the idea of the ECME algorithm by replacing the Hessian of the objective function with a negative definite approximation of the Hessian of the likelihood function.

The Taylor expansion in (3.6) is typically performed under the assumption that the Hessian $\partial_{\theta}^{2} Q(\theta^{(i)}, \theta^{(i)})$ is negative definite [23, 33, 45]. This assumption is not only employed when studying the convergence properties of Newton’s method [46], but is also needed to ensure that the parameter updates are in ascent directions. However, as will be shown in Section 3.4, it is easy to find examples of real-world system identification problems, based on nonlinear state-space models of the type described in (3.1), where this assumption does not hold and where the Newton updates defined by (3.7) easily lead to convergence to a local minimum or divergence. Next, we consider several possible alternatives to the parameter update in (3.7). The resulting algorithms inherit the simplicity of the EM gradient algorithm, while being more suitable for models where the Hessian $\partial_{\theta}^{2} Q(\theta^{(i)}, \theta^{(i)})$ is not guaranteed to be negative definite.
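A minimal sketch of the update (3.7) is given below (an illustration, not code from the chapter); grad_Q and hess_Q are assumed to return the gradient and Hessian of $Q(\cdot, \theta^{(i)})$ evaluated at $\theta^{(i)}$. The explicit definiteness check highlights the assumption that the alternative M steps in Section 3.3 are designed to relax.

```python
import numpy as np

def em_gradient_step(theta, grad_Q, hess_Q):
    """One EM gradient update, following (3.7).

    grad_Q, hess_Q : callables returning the gradient (1-D array) and the
                     Hessian of Q(., theta) evaluated at theta.
    """
    g = np.atleast_1d(grad_Q(theta))
    H = np.atleast_2d(hess_Q(theta))
    # The Newton step is only guaranteed to be an ascent step if H is
    # negative definite; this is the assumption discussed above.
    if np.any(np.linalg.eigvalsh(H) >= 0):
        raise ValueError("Hessian is not negative definite; "
                         "the Newton-based M step may not be an ascent step.")
    return theta - np.linalg.solve(H, g)
```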

3.3 Alternative Approximate M Steps

This section considers the system identification problem discussed in Section 3.2, and describes how the single iteration of Newton's method that is employed in the EM gradient algorithm can be replaced by a single iteration of some other optimization method. We consider approximate M steps based on the Gauss-Newton algorithm, trust region algorithms, and damped algorithms. The resulting EM algorithms are compatible with any methods for approximating the expectations and posteriors that are needed in the E step.

3.3.1 The Gauss-Newton Method

The joint distribution of the state variables and the measurements in the model (3.1) can be decomposed according to

\ln p_\theta(x_{0:N}, y_{1:N}) = \ln p(x_0) + \sum_{k=1}^{N} \ln p_\theta(x_k \mid x_{k-1}) + \sum_{k=1}^{N} \ln p_\theta(y_k \mid x_k).    (3.8)

This further means that

Q(\theta, \theta^{(i)}) = \sum_{k=1}^{N} \mathrm{E}[\mathcal{F}_\theta(x_{k-1}, x_k)] + \sum_{k=1}^{N} \mathrm{E}[\mathcal{H}_\theta(x_k, y_k)]    (3.9)

with

\mathcal{F}_\theta(x_{k-1}, x_k) \triangleq -\tfrac{1}{2}\big(x_k - f_\theta(x_{k-1})\big)^\top Q^{-1}\big(x_k - f_\theta(x_{k-1})\big)    (3.10a)

and

\mathcal{H}_\theta(x_k, y_k) \triangleq -\tfrac{1}{2}\big(y_k - h_\theta(x_k)\big)^\top R^{-1}\big(y_k - h_\theta(x_k)\big),    (3.10b)

where Q and R are the covariance matrices associated with model (3.1). Hence, (3.9) illustrates that the M step in (3.3b) can be cast as a stochastic nonlinear least-squares problem. As a result, the M step can be approximately solved by applying a single iteration of the Gauss-Newton algorithm. The Gauss-Newton algorithm can be seen as making the first-order Taylor expansions

f_\theta(x) \approx f_{\theta^{(i)}}(x) + \partial_\theta f_{\theta^{(i)}}(x)(\theta - \theta^{(i)}),    (3.11a)

h_\theta(x) \approx h_{\theta^{(i)}}(x) + \partial_\theta h_{\theta^{(i)}}(x)(\theta - \theta^{(i)}),    (3.11b)

in Q(θ, θ^(i)), which gives

Q(\theta, \theta^{(i)}) \approx \widetilde{Q}(\theta, \theta^{(i)}) \triangleq Q(\theta^{(i)}, \theta^{(i)}) + \partial_\theta Q(\theta^{(i)}, \theta^{(i)})(\theta - \theta^{(i)}) + \tfrac{1}{2}(\theta - \theta^{(i)})^\top H(\theta^{(i)})(\theta - \theta^{(i)})    (3.12)

where

H(\theta^{(i)}) \triangleq \sum_{k=1}^{N} \mathrm{E}[\mathcal{F}^{H}_{\theta^{(i)}}(x_{k-1})] + \sum_{k=1}^{N} \mathrm{E}[\mathcal{H}^{H}_{\theta^{(i)}}(x_k)],    (3.13)

with

\mathcal{F}^{H}_{\theta^{(i)}}(x_k) \triangleq -\partial_\theta f_{\theta^{(i)}}(x_k)^\top Q^{-1}\,\partial_\theta f_{\theta^{(i)}}(x_k),    (3.14a)

\mathcal{H}^{H}_{\theta^{(i)}}(x_k) \triangleq -\partial_\theta h_{\theta^{(i)}}(x_k)^\top R^{-1}\,\partial_\theta h_{\theta^{(i)}}(x_k).    (3.14b)

Obviously, (3.13) and (3.14) make the assumption that the considered derivatives and expectations can be interchanged. The Gauss-Newton update is then defined as

\theta^{(i+1)} = \arg\max_\theta \widetilde{Q}(\theta, \theta^{(i)}) = \theta^{(i)} - H(\theta^{(i)})^{-1}\big(\partial_\theta Q(\theta^{(i)}, \theta^{(i)})\big)^\top.    (3.15)

One of the main benefits of the Gauss-Newton algorithm is that H(θ^(i)) is always negative semidefinite and typically negative definite. Since Q and R are assumed to be positive definite, a sufficient condition for the latter is, for example, that either ∂_θf_{θ^(i)}(x_k) or ∂_θh_{θ^(i)}(x_k) has full rank. In this case, the Gauss-Newton update in (3.15) is always made in an ascent direction. However, it should be noted that the Gauss-Newton algorithm normally has linear convergence, as opposed to the quadratic convergence of the Newton method [47]. The extension of the parameter update in (3.15) to the case where x_0 is a Gaussian with mean µ_θ is straightforward.

When f_θ(x) and h_θ(x) are linear in the parameter vector θ, the Gauss-Newton update in (3.15) is identical to the Newton update in (3.7). In this case, the M step becomes a linear least-squares problem that is solved analytically with one Newton or one Gauss-Newton iteration. Examples of such linear models include linearized inertial navigation systems where the parameters are considered to be the biases of the inertial sensors [48], autoregressive-moving-average (ARMA) processes [49], and the well-studied training model considered in [35]. However, in the general nonlinear case, the difference between the true Hessian employed in the EM gradient algorithm and the approximation used in (3.15) is

\partial_\theta^2 Q(\theta^{(i)}, \theta^{(i)}) - H(\theta^{(i)}) = \sum_{k=1}^{N} \mathrm{E}[\mathcal{F}^{\delta H}_{\theta^{(i)}}(x_{k-1}, x_k)] + \sum_{k=1}^{N} \mathrm{E}[\mathcal{H}^{\delta H}_{\theta^{(i)}}(x_k, y_k)]    (3.16)

where

[\mathcal{F}^{\delta H}_{\theta^{(i)}}(x_{k-1}, x_k)]_{n,:} \triangleq \big(x_k - f_{\theta^{(i)}}(x_{k-1})\big)^\top Q^{-1}\,\partial_{[\theta]_n}\partial_\theta f_{\theta^{(i)}}(x_{k-1}),    (3.17a)

[\mathcal{H}^{\delta H}_{\theta^{(i)}}(x_k, y_k)]_{n,:} \triangleq \big(y_k - h_{\theta^{(i)}}(x_k)\big)^\top R^{-1}\,\partial_{[\theta]_n}\partial_\theta h_{\theta^{(i)}}(x_k),    (3.17b)

for any n = 1, ..., N_θ, with [A]_{n,:} denoting the nth row of A and [a]_n denoting the nth element of a. Further, note that E_{p_θ(x_{0:N}, y_{1:N})}[x_k − f_θ(x_{k−1})] = 0_{N_x,1} and E_{p_θ(x_{0:N}, y_{1:N})}[y_k − h_θ(x_k)] = 0_{N_y,1} for any k = 1, ..., N, where θ is the parameter that was used to generate the state variables and the measurements, and 0_{ℓ_1,ℓ_2} denotes a zero matrix of dimension ℓ_1 × ℓ_2. As a result, we would expect that ∂²_θQ(θ^(i), θ^(i)) ≈ H(θ^(i)) when θ^(i) ≈ θ, the second derivatives

of f_θ(x_k) and h_θ(x_k) are sufficiently small, and N is sufficiently large. In other words, under these conditions, the Hessian ∂²_θQ(θ^(i), θ^(i)) will tend to be negative definite and the EM gradient algorithm will typically be an adequate alternative. Furthermore, whenever all second derivatives of f_θ(x_k) and h_θ(x_k) are independent of x_k, it holds that E_{p_θ(y_{1:N})}[∂²_θQ(θ, θ) − H(θ)] = 0_{N_θ}, where 0_ℓ denotes a zero matrix of dimension ℓ × ℓ.

3.3.2 Trust Region Methods

The idea of trust region methods is to constrain the updated parameter estimate to be in a region where the approximation of the objective function can be trusted [50]. Applying this idea to the approximations of Q(θ, θ^(i)) defined in (3.6) and (3.12), we obtain

\theta^{(i+1)} = \arg\max_{\|\theta - \theta^{(i)}\| \le d^{(i)}} \widehat{Q}(\theta, \theta^{(i)})    (3.18a)

and

\theta^{(i+1)} = \arg\max_{\|\theta - \theta^{(i)}\| \le d^{(i)}} \widetilde{Q}(\theta, \theta^{(i)}),    (3.18b)

respectively. Here, ‖·‖ denotes the Euclidean norm and d^(i) is known as the radius of the trust region. As discussed in [51], the solutions to the maximization problems in (3.18a) and (3.18b) can be written as

\theta^{(i+1)} = \theta^{(i)} - \big(\partial_\theta^2 Q(\theta^{(i)}, \theta^{(i)}) - \lambda I_{N_\theta}\big)^{-1}\big(\partial_\theta Q(\theta^{(i)}, \theta^{(i)})\big)^\top    (3.19a)

and

\theta^{(i+1)} = \theta^{(i)} - \big(H(\theta^{(i)}) - \lambda I_{N_\theta}\big)^{-1}\big(\partial_\theta Q(\theta^{(i)}, \theta^{(i)})\big)^\top,    (3.19b)

respectively, for some damping parameter λ ≥ 0. Here, I_ℓ denotes the identity matrix of dimension ℓ. Obviously, if the constraint is inactive it holds that λ = 0 and we recover the parameter updates in (3.7) and (3.15). The radius should be continuously updated based on the fit of the function approximation. For the trust region Newton step in (3.18a), we first compute the so-called gain ratio

\rho(\theta^{(i+1)}, \theta^{(i)}) \triangleq \frac{Q(\theta^{(i+1)}, \theta^{(i)}) - Q(\theta^{(i)}, \theta^{(i)})}{\widehat{Q}(\theta^{(i+1)}, \theta^{(i)}) - \widehat{Q}(\theta^{(i)}, \theta^{(i)})}.    (3.20)

The following standard strategy is then applied: If ρ(θ^(i+1), θ^(i)) < 0.25, we set d^(i+1) = d^(i)/2. If ρ(θ^(i+1), θ^(i)) > 0.75, we set d^(i+1) = max(d^(i), 3‖θ^(i+1) − θ^(i)‖) [47]. The analogous updates are made for the trust region Gauss-Newton step in (3.18b). We emphasize that Q(θ^(i+1), θ^(i+1)) is generally not equal to

Q(θ(i+1), θ(i)), and hence, the function values needed to compute the gain ratio ρ(θ(i+1), θ(i)) cannot be reused in the computation of ρ(θ(i+2), θ(i+1)) (as would have been the case when performing multiple trust region iterations with the same objective function).
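The radius-update rule quoted above is simple to express in code. The following sketch encodes it exactly as stated (halve the radius when the gain ratio falls below 0.25, allow the region to grow when the ratio exceeds 0.75); the function and variable names are illustrative placeholders.

import numpy as np

def update_trust_radius(d, rho, theta_new, theta_old):
    """Standard trust-region radius update used with (3.18a) and (3.18b)."""
    if rho < 0.25:
        # Poor agreement between objective and model: shrink the trusted region.
        return d / 2.0
    if rho > 0.75:
        # Good agreement: let the region grow with the accepted step length.
        return max(d, 3.0 * np.linalg.norm(np.asarray(theta_new) - np.asarray(theta_old)))
    return d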

3.3.3 Damped Methods

Instead of setting a strict limit on the distance ‖θ^(i+1) − θ^(i)‖, damped methods add the corresponding penalty term to the objective function. When replacing the standard M step with a single iteration of the damped Newton method or the damped Gauss-Newton method (also known as the Levenberg-Marquardt algorithm), we obtain

\theta^{(i+1)} = \arg\max_\theta \Big(\widehat{Q}(\theta, \theta^{(i)}) - \tfrac{1}{2}\lambda^{(i)}\|\theta - \theta^{(i)}\|^2\Big) = \theta^{(i)} - \big(\partial_\theta^2 Q(\theta^{(i)}, \theta^{(i)}) - \lambda^{(i)} I_{N_\theta}\big)^{-1}\big(\partial_\theta Q(\theta^{(i)}, \theta^{(i)})\big)^\top,    (3.21a)

\theta^{(i+1)} = \arg\max_\theta \Big(\widetilde{Q}(\theta, \theta^{(i)}) - \tfrac{1}{2}\lambda^{(i)}\|\theta - \theta^{(i)}\|^2\Big) = \theta^{(i)} - \big(H(\theta^{(i)}) - \lambda^{(i)} I_{N_\theta}\big)^{-1}\big(\partial_\theta Q(\theta^{(i)}, \theta^{(i)})\big)^\top,    (3.21b)

respectively.

Generally, the damping parameter λ^(i) should be decreased as the parameter estimate approaches the solution and more trust can be put in the approximation of Q(θ^(i), θ^(i)) [52]. We will follow the well-established updating scheme proposed in [53], which makes use of a scaling factor ν^(i). Hence, after obtaining a new parameter estimate θ^(i+1) from (3.21a) or (3.21b), we compute the corresponding gain ratio ρ(θ^(i+1), θ^(i)). There are then two possible cases. If ρ(θ^(i+1), θ^(i)) > 0, the new parameter estimate θ^(i+1) is accepted, the damping parameter is updated according to λ^(i+1) = λ^(i) max(1/3, 1 − (2ρ(θ^(i+1), θ^(i)) − 1)^3), and we reinitialize the scaling factor as ν^(i+1) = 2. If ρ(θ^(i+1), θ^(i)) ≤ 0, the parameter estimate is rejected, and we instead set θ^(i+1) = θ^(i), λ^(i+1) = ν^(i) · λ^(i), and ν^(i+1) = 2ν^(i). The initial scale factor is set to ν^(0) = 2. Although equations (3.19) and (3.21) demonstrate the close relation between trust region and damped methods, there is no simple formula that describes the connection between the trust region radius and the damping parameter that gives the same parameter update [47]. Last, we note that while the M steps involving Q̃(θ, θ^(i)) are dependent on the assumption of additive Gaussian noise, which enables the formulation in (3.9), the M steps based on Q̂(θ, θ^(i)) could be employed also under more general assumptions on the noise terms (assuming that the expressions for

∂_θQ(θ^(i), θ^(i)) and ∂²_θQ(θ^(i), θ^(i)) are modified accordingly). The relation between the M step in the EM gradient algorithm and the five alternative M steps proposed in this section is summarized in Table 3.1.
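As a complement, the damping-parameter bookkeeping described above can be summarized in a few lines. The sketch below is a schematic rendering of the accept/reject rule as stated in this section (accept when the gain ratio is positive, otherwise reject and inflate λ); it is not code taken from the thesis, and the argument names are placeholders.

def update_damping(theta_old, theta_new, rho, lam, nu):
    """Damping update for the damped (Levenberg-Marquardt-type) M steps in (3.21).

    Returns the accepted iterate together with the new damping parameter lam
    and scaling factor nu (initialized as nu = 2).
    """
    if rho > 0:
        # Step accepted: decrease the damping and reset the scaling factor.
        lam = lam * max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3)
        return theta_new, lam, 2.0
    # Step rejected: keep the old iterate, increase the damping, double nu.
    return theta_old, lam * nu, 2.0 * nu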

3.4 Examples

To evaluate the efficiency of the discussed M steps, this section considers two models. Both have measurement functions that are highly nonlinear in the parameter vector, and hence, they do not permit closed-form solutions to the M step in the standard EM algorithm. EM algorithms based on the six different algorithms in Table 3.1 are compared with estimators based on extended Kalman filters (EKF) and unscented Kalman filters (UKF) [54] where the state vector is extended with the sought parameter vector. Neither the filters nor the EM algorithms use any a priori knowledge of θ (this can be incorporated into the EM algorithms as described in [23, p. 26]). We begin by giving some details on the implementation.

3.4.1 Implementation Details

In the E step, we chose to approximate the posteriors p_{θ^(i)}(x_k | y_{1:N}) and p_{θ^(i)}(x_k, x_{k+1} | y_{1:N}) by means of an extended Kalman smoother [55]. The expectations in ∂_θQ(θ^(i), θ^(i)), ∂²_θQ(θ^(i), θ^(i)), and H(θ^(i)) were approximated by applying the cubature transform [11], and using the means and covariances given by the smoother. All in all, this meant that we could perform a complete EM iteration without the need to resort to Monte Carlo methods.

The EM algorithms were iterated until ‖θ^(i+1) − θ^(i)‖ < 10^{-4}, at which point the final estimate was θ̂ = θ^(i+1). An EM algorithm was considered to have diverged when reaching its hundredth iteration or when ‖θ‖ > 1 000. Runs leading to divergence were excluded from computations of root-mean-square errors (RMSEs). Further, the radius and the damping parameter were initialized as d^(0) = 0.2 and λ^(0) = 1, respectively, and each marker in the studied plots was obtained as the result of 10 000 simulations of the full sequence of state vectors and measurements.
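The stopping and divergence rules used in the simulations can be expressed as a simple driver loop. In the sketch below, e_step and m_step are opaque placeholders standing in for an extended Kalman smoother-based E step and one of the approximate M steps in Table 3.1; only the termination logic mirrors the description above.

import numpy as np

def run_em(theta0, e_step, m_step, tol=1e-4, max_iter=100, div_norm=1e3):
    """Iterate approximate EM until the parameter step falls below tol.

    e_step(theta)             -> smoothed posteriors/expectations (opaque object)
    m_step(theta, posteriors) -> updated parameter estimate
    Returns (theta_hat, diverged).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        posteriors = e_step(theta)
        theta_new = m_step(theta, posteriors)
        if np.linalg.norm(theta_new) > div_norm:      # ||theta|| > 1000: treat as divergence
            return theta_new, True
        if np.linalg.norm(theta_new - theta) < tol:   # ||theta^(i+1) - theta^(i)|| < 1e-4
            return theta_new, False
        theta = theta_new
    return theta, True                                # reached the 100th iteration: divergence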

Table 3.1: Relation between methods for M step.

Maximization    Q̂(θ, θ^(i)) (Newton's method)     Q̃(θ, θ^(i)) (the Gauss-Newton method)
Standard        Section 3.2.3, equation (3.7)      Section 3.3.1, equation (3.15)
Trust region    Section 3.3.2, equation (3.18a)    Section 3.3.2, equation (3.18b)
Damped          Section 3.3.3, equation (3.21a)    Section 3.3.3, equation (3.21b)

3.4.2 Bearings-Only Tracking

Consider the setting of a bearings-only tracking problem where a target is traveling in a known direction in two dimensions with the mean speed v. The bearing of the target is measured relative to two stationary tracking platforms positioned along a line that is parallel to the path of the target with a perpendicular distance of r. One of the platforms is known to be located at p as measured along the line, and the other is located at the unknown position θ.

Figure 3.1: The bearings-only tracking scenario with the target at x_k and the tracking platforms at the unknown and known positions θ and p, respectively.

Denoting the position of the target by x, the transition and measurement functions can be written as [11]

f(x_k) = x_k + v,    (3.22a)

h_\theta(x_k) = \begin{bmatrix} \mathrm{arctan2}\big((x_k - p)/r\big) \\ \mathrm{arctan2}\big((x_k - \theta)/r\big) \end{bmatrix}.    (3.22b)

For the simulations, we let r = 1, p = 1, θ = 0, v = 0.5, Q = (0.1)^2, R = σ_r^2 I_2, and N = 15. Further, the initial state was x_0 = 0 with an initial uncertainty of P_0 = 0, and the initial estimate of θ was simulated from a uniform distribution over (θ − δθ, θ + δθ) where δθ = 7.5. The scenario is illustrated in Figure 3.1.

Figure 3.2 (a) now displays the RMSEs obtained when estimating θ from the set of measurements y_{1:N} while varying σ_r. Here, the EM algorithms have been abbreviated based on whether the M step was implemented as a single iteration of Newton's method (N), the Gauss-Newton method (GN), the trust region Newton method (TRN), the trust region Gauss-Newton method (TRGN), the damped Newton method (DN), or the damped Gauss-Newton method (DGN). As can be seen from Figure 3.2 (a), neither the EKF nor the UKF can manage the nonlinearities of the system, while all the EM algorithms display practically equivalent RMSEs. However, Figure 3.2 (b) reveals that the EM gradient algorithm and the EM algorithm using damped Newton steps diverge in approximately 70 % of the runs at small measurement errors, and we also see some divergence when using Gauss-Newton steps. No divergence is experienced with the EM algorithms employing trust region methods or the EM algorithm using damped Gauss-Newton steps.
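To make the simulation setup concrete, the data-generating process of model (3.22) with the parameter values above can be written in a few lines. The sketch below is only an illustration of the model, not the evaluation code behind Figure 3.2; it assumes the bearing is computed as the arctangent of the stated ratio (which is how (3.22b) reads for r > 0), and the random-number generator is an arbitrary choice.

import numpy as np

def simulate_bearings_only(sigma_r, N=15, r=1.0, p=1.0, theta=0.0, v=0.5,
                           q_std=0.1, x0=0.0, rng=None):
    """Simulate the bearings-only model (3.22) with two tracking platforms."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(N + 1)
    y = np.zeros((N, 2))
    x[0] = x0
    for k in range(1, N + 1):
        x[k] = x[k - 1] + v + q_std * rng.standard_normal()   # f(x) = x + v plus process noise
        y[k - 1, 0] = np.arctan((x[k] - p) / r) + sigma_r * rng.standard_normal()
        y[k - 1, 1] = np.arctan((x[k] - theta) / r) + sigma_r * rng.standard_normal()
    return x, y

# Example: one realization of the trajectory and bearings at sigma_r = 0.1.
x_traj, bearings = simulate_bearings_only(0.1)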

Figure 3.2: (a) Errors and (b) divergence rates of the system identification algorithms in the bearings-only tracking scenario, plotted as functions of σ_r.

3.4.3 Log-distance Path Loss Model

In our second example, we consider a two-dimensional localization problem with a receiver positioned at x_k, two transmitters at the known positions p_1 and p_2, and one transmitter at the unknown position θ. Further, we assume the availability of received signal strength (RSS) measurements that can be modeled according to the log-distance path loss model. Assuming that the receiver has the mean speed v, the corresponding transition and measurement functions can be written as [56, 57]

f(x_k) = x_k + v,    (3.23a)

h_\theta(x_k) = -c \cdot \begin{bmatrix} 10\log_{10}(\|x_k - p_1\|^2) \\ 10\log_{10}(\|x_k - p_2\|^2) \\ 10\log_{10}(\|x_k - \theta\|^2) \end{bmatrix}.    (3.23b)

For the simulations, we used c = 1/(10 ln 10), p_1 = [−1 0]^⊤, p_2 = [1 0]^⊤, θ = [2 0]^⊤, v = [0 0.5]^⊤, Q = (0.1)^2 I_2, R = σ_r^2 I_3, and N = 15. Further, the initial state was x_0 = [0 0]^⊤ with an initial uncertainty of P_0 = 0_2, and the initial estimate of θ was simulated from a uniform distribution over ([θ]_1 − δθ, [θ]_1 + δθ) × ([θ]_2 − δθ, [θ]_2 + δθ) where δθ = 1. The scenario is illustrated in Figure 3.3.

Figure 3.3: The localization problem using the log-distance path loss model with a receiver positioned at x_k, two transmitters at the known positions p_1 and p_2, and one transmitter at the unknown position θ.

The RMSEs associated with [θ]_1 and [θ]_2 are displayed in Figures 3.4 (a) and (b), respectively.

Figure 3.4: Errors of the system identification algorithms in the path loss scenario (RMSE of [θ̂]_1 in panel (a) and of [θ̂]_2 in panel (b), both as functions of σ_r).

All the alternative M steps can be seen to lead to lower RMSEs than the standard EM gradient algorithm using Newton updates. While the RMSE is only modestly decreased with damped Newton updates, the performance gain is substantial with both trust region updates and Gauss-Newton updates. At the lowest noise levels, the best performance is achieved with M steps based on a damped Gauss-Newton step, which can be seen to give an RMSE that is about one to two orders of magnitude smaller than that of the original EM gradient algorithm. The poor performance of the UKF can be improved by incorporating a prior on θ, thereby giving RMSEs that are similar to those of the EKF. However, there was no prior that enabled the

UKF to provide better estimates than the EM algorithms with trust region or Gauss-Newton steps¹. Although there is far less divergence than in the bearings-only tracking scenario considered in Section 3.4.2, Figure 3.5 shows that some divergence is still experienced with Newton, damped Newton, and Gauss-Newton updates.

Figure 3.5: Divergence rates of the system identification algorithms in the path loss scenario.

¹Upon closer study of the filter, the high RMSEs of the UKF seem to be related to inadequate variance estimates. As previously noted in, e.g., [58], there are nonlinear functions for which the Taylor transformation in the EKF provides better variance estimates than the unscented transform.

Despite the large RMSEs displayed by the EM algorithms with Newton and damped Newton updates in Figure 3.4 (a), the estimates provided by these algorithms are in most runs acceptable. This is illustrated in Figure 3.6 (a), which shows the distribution of the estimates [θ̂]_1 produced by the EM gradient algorithm at σ_r = 10^{−0.5}. Most of the estimates can be seen to be concentrated around the true value 2. However, there are also several estimates concentrated around the values −2 and 0. The reason for this can be understood by studying Figure 3.6 (b), which displays the objective function Q(θ, θ) in a randomly chosen simulation with the second element of the parameter vector held fixed at its true value [θ]_2 = 0. This function has a local maximum and a local minimum at approximately [θ]_1 = −2 and [θ]_1 = 0, respectively, which explains why the EM gradient algorithm produced estimates close to these values. The shape of the plot in Figure 3.6 (b) can easily be explained based on the geometry in Figure 3.3.

Figure 3.6: (a) The distribution of the estimates [θ̂]_1 produced by the EM gradient algorithm in the path loss model scenario with σ_r = 10^{−0.5}; (b) the objective function Q(θ, θ) as dependent on [θ]_1 with [θ]_2 = 0, in a randomly chosen simulation of the log-distance path loss model.
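For reference, the RSS measurement function (3.23b) with the parameter values used here can be written compactly as below. The snippet is only a hedged illustration of the model; the constant c and the transmitter positions are taken from the simulation values as stated above, assuming c = 1/(10 ln 10).

import numpy as np

C = 1.0 / (10.0 * np.log(10.0))          # constant c as stated for the simulations
P1 = np.array([-1.0, 0.0])               # known transmitter positions
P2 = np.array([1.0, 0.0])

def h_rss(x, theta):
    """Log-distance path loss measurement function (3.23b) for receiver position x
    and unknown transmitter position theta."""
    anchors = [P1, P2, np.asarray(theta, dtype=float)]
    return np.array([-C * 10.0 * np.log10(np.sum((x - a) ** 2)) for a in anchors])

# Example: receiver at the origin, unknown transmitter at its true position [2, 0].
print(h_rss(np.array([0.0, 0.0]), [2.0, 0.0]))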

3.5 Conclusions

This chapter examined EM algorithms for estimating the parameters of nonlinear state-space models with additive Gaussian noise. When the M step cannot be solved by analytical means, the EM gradient algorithm provides an approximate solution that is straightforward to implement. However, the solution is based on a second-order Taylor expansion that makes the assumption of a negative definite Hessian. To expand the applicability of the algorithm, five alternative M steps were derived by following the methodology of the EM gradient algorithm and applying modifications based on combinations of the Gauss-Newton method, trust region methods, and damped methods. The resulting EM algorithms were benchmarked in an experimental study using observation models associated with measurements of bearing and received signal strength. Simulations indicated that the proposed M steps compare favorably to the standard Newton step in the EM gradient algorithm both in terms of estimation accuracy and in terms of convergence properties. The lowest RMSEs were achieved by the EM algorithms where the M step employed trust region steps or damped Gauss-Newton steps (the Levenberg-Marquardt method). Given the close connection between trust region and damped methods, the corresponding differences in performance can be expected to be associated with the chosen strategies for updating the trust region radius and the damping parameter. In summary, the proposed EM algorithms are attractive alternatives to the EM gradient algorithm when estimating the parameters of nonlinear state-space models with nonlinearities in the parameter vector. As such, they have the potential to enable stable and computationally efficient estimators for applications within, e.g., biomedicine and localization.

Chapter 4 Smartphone-based Vehicle Telematics

Abstract

Just like it has irrevocably reshaped social life, the fast growth of smartphone ownership is now beginning to revolutionize the driving experience and change how we think about automotive insurance, vehicle safety systems, and traffic research. This chapter summarizes the first ten years of research in smartphone-based vehicle telematics, with a focus on user-friendly implementations and the challenges that arise due to the mobility of the smartphone. Notable academic and industrial projects are reviewed, and system aspects related to sensors, energy consumption, and human-machine interfaces are examined. Moreover, we highlight the differences between traditional and smartphone-based automotive navigation, and survey the state-of-the-art in smartphone-based transportation mode classification, vehicular ad hoc networks, cloud computing, driver classification, and road condition monitoring. Future advances are expected to be driven by improvements in sensor technology, evidence of the societal benefits of current implementations, and the establishment of industry standards for sensor fusion and driver assessment.

4.1 Introduction

The Internet of Things (IoT) refers to the concept of connecting everyday physical objects to the existing internet infrastructure. In a potential future

scenario, the connected devices could range from household appliances and healthcare equipment, to vehicles and environmental probes [59]. Two main benefits are expected to result from this increased connectivity. First, IoT enables services based on the monitoring of devices and their surroundings. Connected devices can build and send status reports, as well as take appropriate actions based on received commands. Second, IoT will provide massive amounts of data, which can be used for analysis of behavioral patterns, environmental conditions, and device performance. In the year 2020, the number of connected devices is expected to reach 13 billion, which would be an increase of 350 % since 2015 [60]. According to predictions, the main catalyst for this tremendous growth in IoT will be the smartphone [61].

Smartphones utilize software applications (apps) to unify the capabilities of standard computers with the mobility of cellular phones. Beginning with the birth of the modern smartphone about a decade ago (Nokia N95, first-generation iPhone, etc.), the smartphone has come to define social life in the current age, with almost one and a half billion units sold in 2016 [62]. The steadily increasing interest in utilizing the smartphone as a versatile measurement probe has several explanations. For example, smartphones have a large number of built-in sensors, and also offer efficient means of data transfer and social interaction. The use of smartphones for data collection falls under the general field of telematics [63], referring to services where telecommunications are employed to transmit information provided by sensors in, e.g., vehicles (vehicle telematics) or smart buildings.

Thanks to the ever-growing worldwide smartphone penetration, the vehicle and navigation industry has gained new ways to collect data, which in turn has come to benefit drivers, vehicle owners, and society as a whole. The immense number of newly undertaken projects, both in academia and in the industry, illustrates the huge potential of smartphone-based vehicle telematics, and has laid a steady foundation for future mass market implementations [66]. The relation between the IoT, telematics, vehicle telematics, and smartphone-based vehicle telematics is illustrated in Figure 4.1.

Figure 4.1: A Venn diagram illustrating the relation between the IoT, telematics, vehicle telematics, and smartphone-based vehicle telematics. Predicted market values are $263 billion (IoT, by 2020) [60], $138 billion (telematics, by 2020) [64], and $45 billion (vehicle telematics, by 2019) [65].

There are several reasons to prefer smartphone-based vehicle telematics over implementations that only utilize vehicle-fixed sensors. Thanks to the unprecedented growth of smartphone ownership, smartphone-based solutions are generally scalable, upgradable, and cheap [67]. In addition, smartphones are natural platforms for providing instantaneous driver feedback via audiovisual means, and they enable smooth integration of telematics services with existing social networks [68]. Moreover, the replacement and development cycles of smartphones are substantially shorter than those of vehicles, and consequently, smartphones can often offer a shortcut to new technologies [69]. At the same time, the logistical demands on the end-user are typically limited to simple app installations. However, the use of smartphones in vehicle telematics also poses several challenges. Built-in smartphone sensors tend to be of poor quality, and have not primarily been chosen or designed with vehicle telematics applications in mind. As a result, the employed estimation algorithms must necessarily rely on statistical noise models, taking the imprecision of the sensors into account. Further, the smartphone cannot be assumed to be fixed with respect to the vehicle, which complicates the interpretation of data from orientation-dependent sensors such as accelerometers, gyroscopes, and magnetometers [48]. Another issue is that of battery drain. An app that continuously collects, stores, processes, or transmits data will inevitably consume energy, and hence, there is a demand for solutions that balance constraints on performance and energy efficiency [70]. Last, we note that the smartphone tends to follow the driver rather than the vehicle. This is often an advantage since it allows the driver to switch vehicles without losing the functionality of smartphone-based telematics services, and also to access services and information when off the road. However, this can also be a disadvantage when, e.g., collecting data for insurance purposes, since automotive insurance policies in many countries follow the vehicle rather than the driver.


Figure 4.2: Process diagram illustrating the information flow of smartphone-based vehicle telematics.

4.1.1 Conceptual Description of Information Flow

In what follows, we discuss the information flow of smartphone-based vehicle telematics in more detail. The process is illustrated in Figure 4.2. Measurements can be collected from both built-in smartphone sensors and from external, complementary sensor systems. Although vehicle-fixed sensor systems can offer both higher reliability and accuracy than their smartphone equivalents, they are often omitted to avoid the associated increase in monetary costs and logistical demands. Vehicle-fixed sensors are, however, indispensable in the more general field of vehicle telematics, where they can provide updates on the status of the vehicle's subsystems and be used to characterize driver behavior [71].

After sensing and processing measurements, data is communicated from the smartphone to a central data storage facility. Vehicle data aggregated at a central point can be used in, e.g., traffic state estimation, traffic planning, or comparative driver analysis. Relevant information is sent back to the individual user, who can request, e.g., the optimal route in terms of expected travel time to a given location, or receive feedback on his driving behavior. The data storage is typically managed by a revenue-generating corporation that supports the services offered to the end-users by financing app development and data analysis [72]. The business model pursued by the corporation relies on the extraction of commercially valuable information from the data storage. Generally, data can be processed both at a local level, in the smartphone, and at a higher level, at the central data storage. Clearly, there is an inherent trade-off between the amount of computations required at a local level and the amount of data that must be sent to the central storage unit [73]. (The data set needed for a centralized analysis is often several orders of magnitude smaller in size than the total set of measurements gathered from the smartphones.) The amount of computations that are performed directly in the smartphone and at the central storage unit will depend on, e.g., the requested driver feedback and the interests of the service provider. In addition, privacy regulations may require data pre-processing or anonymization at a local level to ensure that the user's privacy is preserved when transmitting data [74].

When contemplating these issues, designers of telematics systems benefit from the availability of a multitude of dedicated hardware and software products. These include both platforms for fleet management and usage-based insurance, developed by large navigation or telecommunications companies such as TomTom and Verizon, as well as flexible, open-source solutions offered by specialized firms such as Freematics and Openmatics. Due to the large number of potential collaborators, most telematics providers attempt to streamline their operations by developing architectural designs and customizable software development kits (SDKs) that facilitate system interoperability and integration of external modules. An integral part of the discussed telematics platforms is the inclusion of a feedback loop that enables a sensor-equipped end-user to control or change his behavior based on the results of the data analysis. In smartphone-based vehicle telematics, this could be exemplified by a driver who transmits driving data and then receives feedback that can be used to, e.g., improve the safety or fuel-efficiency of his driving [75].
By contrast, telemetry refers to applications where data is collected from remote locations by one-way telecommunications. Refer to [72, 76, 77] for technical details on the design of a telematics platform.

4.1.2 Contributions and Outline

The aim of this chapter is to present an overview of the smartphone-based vehicle telematics field, focusing on the smartphone's role as a measurement probe and an enabler of user-interactive services. In the process, we review recent advances, outline relevant directions for future research, and provide references for further reading. The focus is on topics of fundamental importance for developers of systems for large-scale smartphone-based data collection (sensor characteristics, energy consumption, the human-machine interface (HMI), transportation mode classification) and on applications that are expected to continue to develop and generate widespread interest in the coming years (insurance telematics, cooperative intelligent transportation systems (C-ITS), mobile cloud computing (MCC), road-condition monitoring). For reasons of brevity, we have omitted discussions on vehicle condition monitoring [78], estimation of fuel consumption [79], privacy considerations [80], accident reconstruction [81], and traffic state estimation [82].

The chapter is organized as follows. A selection of academic and industrial projects are discussed in Section 4.2. Then, Section 4.3 reviews system aspects, including sensor characteristics, energy efficiency, and the HMI. Following this, services and applications are covered in Section 4.4, which includes discussions on navigation, transportation mode classification, vehicular ad hoc networks (VANETs), MCC, driver behavior classification, and pothole detection. Finally, the survey is concluded in Section 4.5. A list of recommended reviews and other publications relating to the topics of the different subsections is provided in Table 4.1.

Table 4.1: Recommended publications for further reading.

Section   Literature reviews                   Other notable studies
4.2       -                                    See Table 4.2.
4.3.1     [83, 84, 85]                         [86]
4.3.2     [71, 87]                             [88]
4.3.3     [70, 89, 90, 91, 92]                 [93]
4.3.4     [94, 95]                             [96, 97, 98]
4.4.1     [99, 100]                            [48, 101]
4.4.2     [102, 103]                           [104, 105]
4.4.3     [106, 107, 108]                      [109]
4.4.4     [110, 111, 112]                      [113]
4.4.5     [66, 67, 114, 115, 116, 117, 118]    [119, 120]
4.4.6     [121]                                [122]

The distinguishing features of this survey are that it takes a holistic approach to the field, covers a broad set of applications, and that particular emphasis is placed on the opportunities and challenges that are associated with smartphone-based implementations.

4.2 Academic and Industrial Projects

A large number of academic and industrial projects have been conducted within smartphone-based vehicle telematics. The projects differ in several aspects, and their value for future implementations will depend on the considered application. As an example, the possibilities for information extraction are, to a large extent, determined by the employed sensors. Fine-grained traffic state estimation generally sets high demands on the penetration rate of global navigation satellite system (GNSS)-equipped probe vehicles, whereas detailed driver behavior profiling requires high-rate data from, e.g., accelerometers and gyroscopes. Thus, although there are traffic apps with millions of users, their utility may still be constrained depending on which sensors and sampling rates are employed. By contrast, academic projects are often conducted on a smaller scale and in controlled environments, with high-accuracy sensors used to benchmark the performance of smartphone-based implementations. Next, we present a selected number of academically oriented studies, and then go on to discuss industrial projects. A summary of the considered academic projects is provided in Table 4.2.

Table 4.2: Significant projects related to smartphone-based vehicle telematics (key publications, institution/company, period, geographical area, data quantity, primary sensors, and main field of study for the CarTel, Nericell, Mobile Millennium, Movelo Campaign, Mobile Millennium Stockholm, and Future Cities projects).

4.2.1 Academic Projects

CarTel, introduced at the Massachusetts Institute of Technology in 2006, was a sensing and computing system designed to collect and analyze telematics data. The project resulted in approximately ten articles addressing topics such as traffic mitigation [123] and road surface monitoring [124].

Nericell, initiated by Microsoft in 2007, was a project that specialized in smartphone-based traffic data collection in developing countries. Most of the data were collected in Bangalore, India. The final report included discussions on inertial measurement unit (IMU) alignment, braking detection, speed bump detection, and honk detection [125]. Earlier, similar projects conducted by Microsoft include the JamBayes and ClearFlow projects, which used data collected in the US to study traffic estimation by means of machine learning [126].

The Mobile Millennium project, conducted in collaboration between the University of California, Berkeley, and Nokia, included one of the first large-scale smartphone-based collections of traffic data. The project resulted in more than forty academic publications, primarily in the field of applied mathematics [128, 132, 133]. Several future challenges were identified, all the way from defining the business roles of the involved actors, to delivering efficient driver assistance while minimizing distraction [127]. The project was extended by the Mobile Millennium Stockholm project in Sweden, in which position data were collected at one-minute intervals from 1500 GNSS-equipped taxis [134].

The Movelo Campaign introduced a smartphone-based measurement system designed for applications within insurance telematics. As part of the campaign, If P&C Insurance launched the commercial pilot if Safedrive, in which around 4500 [h] of data were collected from more than 1000 registered drivers [67, 72]. The project was the first large-scale study focusing on local data processing and real-time driver feedback. Published articles discussed, e.g., estimation of fuel consumption [79], driver risk assessment [135], and business innovation [136].

The Future Cities project, conducted by the University of Porto, focused on traffic management in urban environments.
The studied research areas included sustainability, mobility, urban planning, as well as information and communication technology. Data was collected from hundreds of local buses and taxis equipped with vehicle-fixed GNSS receivers and accelerometers. Among other things, the project led to the development of the SenseMyCity app, which can be employed to, e.g., estimate fuel consumption or investigate the possibilities for car sharing [131].

4.2.2 Commercial Projects

The market for insurance telematics, i.e., automotive insurances with a premium that is based on driving data, has grown to include more than fifteen million policyholders [137]. Most insurance telematics programs collect data by the use of in-vehicle sensors or externally installed hardware components, referred to as black boxes, in-vehicle data recorders, or aftermarket devices. The data collection process is often associated with large costs attributed to device installation and maintenance. However, by collecting data from the policyholders' smartphones, it is possible to decrease the logistical costs while at the same time increasing driver engagement and transparency [67]. For example, in 2014, the telematics provider Vehcon launched the app MVerity, intended as a stand-alone insurance telematics solution, for the mere cost of $1 per year and vehicle [138]. As of June 2016, there were thirty-three active smartphone-based insurance telematics programs (including trials) [137].

The proliferation of smartphones has also had a huge impact on the taxi industry. Specifically, companies such as Uber, Ola Cabs, and Careem are providing apps that connect passengers and drivers based on trip requests, while incorporating dynamic pricing, automated credit card payments, as well as customer and driver ratings. In June 2016, Uber was valued at $66 billion [139]. Additional examples of apps specifically designed for vehicle owners include: Waze (acquired by Google in 2013 for $1.15 billion), allowing drivers to share information about, e.g., accidents, traffic jams, and road closures; iWrecked, facilitating easy assemblies of accident reports; Torque, enabling the smartphone to extract data from in-vehicle sensors; FuelLog, logging fuel consumption as well as the maintenance and service costs of the vehicle; and Lyft (valued at $6.9 billion in April 2017 [140]), connecting passengers and drivers for on-demand ridesharing.

To summarize the section, a large number of projects related to smartphone-based vehicle telematics have been conducted both in academia and in the industry. The conducted projects differ in the employed sensors, the number of users, the required logistics, and the considered applications. The primary drivers for future projects are expected to be found in applications related to, e.g., insurance telematics, ridesharing, and driver assistance.

4.3 System Aspects

This section discusses system aspects of smartphone-based vehicle telematics. We review available sensors, battery usage, and the HMI. Many of the covered topics are relevant in a broad range of applications, and hence, this section will serve as a basis for the subsequent section that covers services and applications.

4.3.1 Smartphone Sensors

In the following, we review the exteroceptive (acquiring external information: GNSS, Bluetooth, WiFi-based positioning, cellular positioning, magnetometers, cameras, and microphones) and proprioceptive (acquiring internal information: accelerometers and gyroscopes) sensors and positioning technologies commonly utilized in smartphones.

1) Exteroceptive Sensors: iPhone devices employ a built-in black-box feature called Location Services to provide apps with navigation updates. The updates are obtained by fusing WiFi, cellular, Bluetooth, and GNSS data. WiFi and cellular positioning are mainly used to aid the initialization of the GNSS receiver (the stand-alone accuracy of these positioning systems is generally inferior to that of a GNSS receiver). However, the coverage of WiFi and Bluetooth in urban environments is continuously being improved, enabling position fixes in areas where GNSS signals are often unavailable. Location Services provide three-dimensional position updates with horizontal and vertical accuracy measures, as well as planar speed, planar course, and timestamps. Details on the positioning methods utilized in Location Services and their associated accuracy can be found in [86] and [141]. In contrast to smartphones from the iPhone series, Android devices enable apps to read GNSS messages in the NMEA 0183 standard. The standard includes not only GNSS measurements of position, planar speed, and planar course, but also additional information such as detailed satellite data (refer to [142] for details on NMEA 0183). According to reports, raw GNSS measurements, such as pseudoranges and Doppler shifts, will eventually become accessible from all new Android phones [143]. This will enable both tighter information fusion and more application-dependent GNSS signal processing.

Thanks to its high accuracy and availability, GNSS is often the positioning technology of choice in today's location-based services (LBSs). Nevertheless, the performance of commonly utilized GNSS receivers has been found to be a limiting factor in a number of ITS applications [144]. Performance evaluations of GNSS receivers in several commercial smartphones are presented in [145] and [146]. All of the studied smartphones were found to provide horizontal position measurements accurate to within 10 [m] with a 95 % probability. Since comprehensive performance evaluations of GNSS receivers in real-world scenarios often are both time-consuming and costly, simulation-based methodologies are an attractive complement. Using simulations, test scenarios can be repeated in a controlled laboratory setting, with many possibilities for error modeling [147]. Obviously, the external validity of such experiments will be limited by the simulator's ability to mimic the contributions of the numerous error sources in real-world GNSS scenarios [148]. The standard update rate of smartphone-embedded GNSS receivers is currently 1 [Hz]. A higher sampling rate would allow high-dynamic events to be captured with greater resolution, and could therefore be beneficial for many vehicle telematics applications [149].

A triad of magnetometers measures the magnetic field (magnetic flux density) in three dimensions and is in many applications used as a compass, indicating the direction of the magnetic north in the plane that is tangential to the surface of the earth.
In other words, smartphone-embedded magnetometers can often be used to gain information about the orientation of the smartphone. The accuracy of smartphone-embedded magnetometers has been studied in [150], which reported mean absolute errors in the range of 10–30 [°] when estimating the direction of the magnetic north during pedestrian walking. In vehicle telematics, the use of magnetometers is further constrained by magnetic disturbances caused by the vehicle engine [151, 152, 153]. However, in some cases these disturbances can be used to extract valuable information. For example, magnetometers can be placed on the road side to detect and classify passing vehicles based on the magnetic fields that they induce [154]. Unfortunately, studies on how to extend this work to implementations using vehicle-fixed or smartphone-embedded magnetometers are scarce.

The possibilities for information extraction based on built-in smartphone cameras are continuously increasing as a result of improved image resolutions and increased frame rates. Smartphone cameras can, e.g., be used to aid an inertial navigation system (INS) [155], to detect or track surrounding vehicles [156], to estimate the gaze direction of drivers [157], to detect traffic signals [158], to recognize license plates [159], or to detect traffic signs [160]. Geometric camera calibration is a vast topic encompassing the estimation of intrinsic (describing the internal geometry and optical characteristics of the image sensor) and extrinsic (describing the pose of the camera with respect to an external frame of reference) parameters, as well as the joint calibration of cameras and motion sensors [161]. Calibration approaches specifically designed for smartphone-embedded cameras are presented in [162, 163, 164, 165, 166]. Generally, camera-based implementations are constrained by their high computational cost, and requirements on, e.g., orientation and visibility. Moreover, they also tend to raise privacy concerns from users. As a result, they are often disregarded in telematics solutions that prioritize reliability and convenience.

Although microphones are typically not directly used for purposes of navigation, they may be used to provide information regarding context and human activities [85]. In vehicle telematics, microphones have been used for honk detection [125], emergency vehicle detection [167], and audio ranging inside of the vehicle [168]. The use of smartphone-embedded microphones has brought attention to issues of user privacy, and several related pieces of legislation can be expected in the coming years [169].

2) Proprioceptive Sensors: A triad of accelerometers and a triad of gyroscopes comprise what is known as a 6-degree-of-freedom IMU, which produces three-dimensional measurements of specific force (non-gravitational acceleration) and angular velocity. As shown in [170], the cost of smartphone-embedded IMUs has steadily decreased and is expected to continue to do so during the coming years (a smartphone-embedded IMU can be expected to cost less than $1 for the manufacturer). IMUs in smartphones are considered to belong to the lowest grade of inertial sensors (the commercial grade), and can exhibit significant bias, scale factor, misalignment, and random noise errors [7]. Detailed error analyses of IMUs in ubiquitous smartphone models have been conducted in [171, 172, 173, 174].
The maximum sample rate of IMUs in smartphones is typically in the range of 20–300 [Hz]. By fusing IMU and GNSS measurements, it is possible to obtain high-rate estimates of the vehicle's position, velocity, and attitude [7]. This is discussed in more detail in Section 4.4.1. The use of IMUs for driver behavior profiling and pothole detection is discussed in Sections 4.4.5 and 4.4.6, respectively.
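To illustrate the commercial-grade error sources mentioned above (bias, scale factor, misalignment, and random noise), a generic additive error model for a triad of accelerometers or gyroscopes can be sketched as follows. The numerical values in the example are arbitrary placeholders and do not describe any particular smartphone IMU.

import numpy as np

def corrupt_imu(true_signal, bias, scale_errors, misalignment, noise_std, rng=None):
    """Apply a simple bias/scale-factor/misalignment/noise model to a 3-axis signal.

    true_signal  : (N, 3) ideal specific force or angular velocity samples
    bias         : (3,) constant sensor bias
    scale_errors : (3,) relative scale-factor errors
    misalignment : (3, 3) small misalignment matrix with zeros on the diagonal
    noise_std    : standard deviation of the additive white noise
    """
    rng = np.random.default_rng() if rng is None else rng
    M = np.eye(3) + np.diag(scale_errors) + misalignment
    noise = noise_std * rng.standard_normal(true_signal.shape)
    return true_signal @ M.T + bias + noise

# Example with arbitrary, smartphone-like error magnitudes for a stationary accelerometer.
ideal = np.tile([0.0, 0.0, 9.81], (100, 1))
measured = corrupt_imu(ideal,
                       bias=np.array([0.1, -0.05, 0.2]),
                       scale_errors=np.array([0.01, -0.02, 0.005]),
                       misalignment=np.array([[0.0, 0.002, -0.001],
                                              [0.001, 0.0, 0.002],
                                              [-0.002, 0.001, 0.0]]),
                       noise_std=0.05)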

4.3.2 Complementary Sensors

Information can be obtained not only from built-in smartphone sensors, but also from vehicle-installed black boxes and from the vehicle's on-board diagnostics (OBD) system. The OBD system is an in-vehicle sensor system designed to monitor and report on the performance of a large number of vehicle components. As of 2001, OBD is mandatory for all passenger cars sold in Europe or in North America. OBD data can be sent from an OBD dongle to a smartphone using either WiFi or Bluetooth [175]. The dongles cost from around $10 (the cost of an OBD dongle used for insurance telematics is typically higher), with the exact price dependent on factors such as smartphone compatibility, data output, and hardware configuration, including built-in sensing capabilities. Developers may also benefit from vehicle-independent open source hardware and software application programming interfaces (APIs) such as OpenXC, manufactured by Ford. Following the installation of a small hardware module, OpenXC deploys a user-friendly interface to give Android applications access to real-time data from in-vehicle sensors, and also supports a library for testing and prototyping in Python.

Some of the most commonly used OBD parameters include measurements of vehicle speed, engine revolutions per minute, throttle position, and airflow rate. The utility of OBD measurements of speed has been compared with standalone smartphone solutions in the context of acceleration-based accident detection [175], detection of harsh braking [67], maneuver recognition [176], [177], and vehicle speed estimation [172, 174, 178]. On the one hand, OBD measurements are not subject to multipath errors (unlike GNSS measurements) and only rely on vehicle-fixed sensors (as opposed to smartphone sensors, whose readings often are affected by the dynamics of the smartphone with respect to the vehicle). On the other hand, smartphone-based solutions benefit from the high sampling rate of the embedded IMU and can often provide contextual information related to, e.g., the driver's activities prior to driving [88]. Fusion of OBD and smartphone data for vehicle positioning is discussed in [179]. A method for fusing OBD and GNSS measurements to estimate a vehicle's speed is presented in Chapter 9.
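As a toy illustration of why such a fusion is useful, the sketch below runs a two-state extended Kalman filter that tracks the vehicle speed and a wheel-speed scale factor from synchronous OBD and GNSS speed samples. It is a generic textbook filter under assumed random-walk dynamics and noise levels, not the estimator derived in Chapter 9.

import numpy as np

def fuse_speed_ekf(v_gnss, v_obd, r_gnss=0.5**2, r_obd=0.1**2, q_v=0.5**2, q_s=1e-6):
    """Toy EKF tracking vehicle speed v and wheel-speed scale factor s.

    Assumed model (for illustration only): v and s follow random walks,
    GNSS measures v, and OBD measures s * v. Inputs are equally long arrays
    of synchronous speed samples [m/s].
    """
    x = np.array([max(float(v_gnss[0]), 1.0), 1.0])   # initial speed and scale factor
    P = np.diag([2.0**2, 0.1**2])
    Qp = np.diag([q_v, q_s])                          # process noise covariance
    R = np.diag([r_gnss, r_obd])                      # measurement noise covariance
    estimates = []
    for vg, vo in zip(v_gnss, v_obd):
        P = P + Qp                                    # time update (random walk)
        v, s = x
        H = np.array([[1.0, 0.0],                     # d(v)/d[v, s]
                      [s,   v  ]])                    # d(s * v)/d[v, s]
        innov = np.array([vg - v, vo - s * v])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innov
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)                        # columns: fused speed, scale factor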

4.3.3 Energy Consumption

The energy consumed by technology and hardware in smartphones has been identified as a key factor in the design of new services and applications. Recognizing that a large share of a smartphone's energy consumption can often be attributed to the GNSS receiver, plenty of research has been invested into the design of low-power LBSs. Examples of LBSs include applications for personal tracking, weather or traffic updates, delivery optimization, social services, games, and emergency call localization.

In dynamic tracking, the GNSS receiver is only turned on when GNSS updates are expected to result in a sufficiently large improvement of the navigation accuracy. Dynamic tracking can be roughly categorized into two overlapping branches: dynamic prediction and dynamic selection. In dynamic prediction, the localization uncertainty is quantified using less power-intensive sensors [181]. Accelerometers are, for example, often used to assess if the target is moving or not. In dynamic selection, the navigation system alternates between GNSS and low-power positioning systems (e.g., WiFi-based) by considering the time and location dependence of 1) the accuracy requirements of the application; and 2) the performance of the positioning technologies.

Figure 4.3: (a) The trade-off between localization error [92] and energy consumption [180] at 0.1 [Hz] for cellular positioning, WiFi positioning, and GNSS. (b) Typical figures of energy consumption for smartphone-embedded sensors [70, p. 127].

For example, the localization accuracy requirements for turn-by-turn navigation will typically increase as the vehicle approaches an intersection [182], and GNSS measurements are known to be unreliable in urban canyons [183]. By exploiting such variations in required accuracy, the smartphone can dynamically choose the positioning technology that will optimize the trade-off between localization accuracy and energy efficiency (see Figure 4.3 (a)). Although sampling the GNSS receiver less often typically leads to a decrease in the energy consumed per unit of time, the total energy required per GNSS sample will increase as a result of the cold/warm/hot-start nature of the receiver [182].

Several additional measures can be taken to improve the energy efficiency of smartphone-based LBSs. For example, navigation filters are often complemented by digital map-matching (MM). The resulting increase in localization accuracy will depend on the road density (the performance of MM algorithms generally deteriorates in areas where the road density is high), and hence, this must be considered in the design of energy-efficient sampling schemes [184]. Generally, MM reduces the error growth during inertial navigation and thereby makes it possible to rely only on low-power inertial sensors for a longer period of time. Another option is to reject GNSS measurements completely. One example can be found in [185], which considers map-aided cellular positioning of vehicle-fixed mobile devices. An additional alternative is to employ cooperative localization, i.e., to fuse sensor measurements, e.g., GNSS and ranging measurements, from several nearby smartphones. In this way, it is possible to reduce the total number of GNSS updates, and thereby also the energy consumption, while maintaining sufficient localization accuracy [186].

For easy reference, Figure 4.3 (b) displays typical figures of energy consumption for the smartphone-embedded sensors discussed in Section 4.3.1. Additional parameters characterizing the energy consumption attributed to transmission, computation, and storage can be found in [187, 188, 189]. Since the exact energy characteristics are dependent on many factors, including the device model, sensing specifications, and smartphone settings, the provided figures should be interpreted as order-of-magnitude approximations. As an example, the energy consumption attributed to the IMU will depend on the IMU model, the chosen sampling rate, and eventual duty-cycling schemes [180]. It should also be noted that the inertial sensors and magnetometers themselves generally consume less energy than stated in Figure 4.3 (b). However, since many smartphones use the main processor to directly control the sensors, continuous sensing often incurs a substantial energy overhead. A potential remedy is to let a dedicated low-power processor handle the tasks of duty cycle management, sensor sampling, and signal processing, thereby allowing the main processor to sleep more frequently [190].

In smartphone-based vehicle telematics, the issue of battery drainage can be mitigated by using a battery charger connected to the vehicle's power outlet (cigarette lighter receptacle). Similarly, several telematics solutions combine sensors, battery, and a charger in a single device that connects to either the OBD port or the cigarette lighter receptacle [191, 192].
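The dynamic-selection idea can be caricatured as a simple rule: pick the least power-hungry positioning technology whose typical error still meets the current accuracy requirement. The error and power figures below are rough order-of-magnitude placeholders loosely inspired by Figure 4.3 (a), not values read from the figure, and the rule itself is a simplification of the schemes cited above.

# Choose the cheapest positioning technology that satisfies the current
# accuracy requirement; fall back to the most accurate one otherwise.
TECHNOLOGIES = [
    # (name, typical horizontal error [m], typical power draw [mW]) -- placeholder values
    ("cellular", 1500.0, 50.0),
    ("wifi", 40.0, 150.0),
    ("gnss", 10.0, 400.0),
]

def select_positioning(required_accuracy_m):
    """Return the lowest-power technology whose typical error meets the requirement."""
    feasible = [t for t in TECHNOLOGIES if t[1] <= required_accuracy_m]
    if not feasible:
        return min(TECHNOLOGIES, key=lambda t: t[1])[0]   # nothing qualifies: most accurate
    return min(feasible, key=lambda t: t[2])[0]           # cheapest feasible option

# Example: relaxed requirement on a highway vs. tight requirement near an intersection.
print(select_positioning(2000.0))   # -> "cellular" under the placeholder numbers
print(select_positioning(20.0))     # -> "gnss"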

4.3.4 The Human-Machine Interface The term HMI encompasses all functionalities that allow a human to interact with a machine. Our main interest is the human-smartphone interface in applications of smartphone-based vehicle telematics. Refer to [193] for details on human-vehicle interfaces. The design of a human-smartphone interface can be considered to have two objectives [194]. First, the interface must be efficient. In other words, the tasks of control and monitoring should be performed with as little computational and human effort as possible. Second, the interface should be configured to minimize driver distraction. Hence, all driver-smartphone communication should be dynamically integrated with the driving operations and discourage excessive engagement in secondary tasks. As noted in [195], driver distraction due to smartphone usage has become one of the leading factors in fatal and serious injury crashes. For example, having a conversation on your phone while driving can, in some situations, increase the crash risk by a factor of four [96], with texting typically being even more distracting [196]. Four facets of distracted driving can be identified: cognitive distraction, taking your mind off the road; visual distraction, taking your eyes off the road; auditory distraction, focusing your auditory attention on sounds that are unrelated to the traffic 54 4 Smartphone-based Vehicle Telematics

Table 4.3: Characterization of typical forms of smartphone-related driver distractions [95].

Activity        Cognitive   Visual   Auditory   Manual
Texting         High        High     Low        High
Dialing         Medium      High     Low        High
Phone call      High        Low      High       Low
Voice control   High        Low      Medium     Low

While smartphone usage may contribute to each of these types of distractions, the distraction caused by smartphone calls is mainly cognitive and auditory in nature [197]. In other words, the use of hands-free equipment does little to alleviate this distraction. In what follows, we discuss the current use of audio and video in human-smartphone interfaces, and describe how the interfaces can be designed to not interfere with the task of driving.

Audio communication has proven to be an efficient method for relaying information to the driver while minimizing visual and manual distractions. For example, in [182], respondents generally thought that vehicular navigation benefited more from turn-by-turn voice guidance than from a visual map display. In addition to voice guidance, many driver assistance systems also make use of audio alerts. One example is the iOnRoad app, which provides warning signals when detecting speeding, insufficient headway, or the crossing of a solid line. Although voice control is an integral part of many apps, its use can at times require a significant cognitive workload. Refer to [97] and [98] for details on how different auditory tasks affect, e.g., reaction times and headway distances. Table 4.3 describes how the activities of texting, dialing a phone number, having a conversation on the phone, and delivering voice commands relate to the four facets of distracted driving.

Most human-smartphone interfaces are, to a large extent, based on visual communication utilizing touch-based operations and virtual keyboards. Obviously, a visual display is the most efficient way to deliver complex information such as annotated maps. In addition, displayed information is not intrusive in the same way as, e.g., audio, and therefore, it allows the driver to manually coordinate driving maneuvers and information gathering. The smartphone display may also be used for augmented reality, i.e., to overlay computer-generated graphics onto images of reality. This has been utilized in apps such as iOnRoad, which highlights the vehicle's current lane in a real-time video shown on the smartphone display, and Hudway, which creates a head-up display (HUD), independent of the existing vehicle technology, by reflecting smartphone images in the windshield. The latter technique is particularly valuable in low visibility conditions since it enables the driver to see the curvature of the upcoming road overlaid in sharp lines on the windshield. Smartphone-based HUDs could also be used to spearhead the rollout of targeted and interactive in-vehicle advertising of, e.g., gas stations or restaurants [198]. Similarly, capitalizing on partnerships between original equipment manufacturers (OEMs) and marketers, the vehicle intelligence system could trigger ads for roadside assistance or suitable repair shops [199]. In general, the increasing prevalence of in-vehicle HUDs is expected to both reduce collisions (thanks to automated obstacle detection) and discourage inattentive driver behavior [200]. By using the Google and Apple standards Android Auto and CarPlay, respectively, any smartphone operation can be directly integrated into the HUD, thereby further reducing visual and manual distractions.
However, according to predictions, only 9 % of the automobiles on the road in the year 2020 will have a built-in HUD [201], and consequently, there will continue to be a high demand for inexpensive and flexible aftermarket solutions. As should be evident, the level of concentration required for the task of driving varies with both the vehicle's state and the traffic situation. Hence, to satisfy requirements on communication efficiency while minimizing driver distraction, the driver-smartphone interactions should be adjusted with respect to, e.g., the vehicle's speed and location [202, 203]. This idea is utilized in apps such as DriveID, which enables customized restrictions of smartphone functionality while driving.

4.4 Services and Applications

Next, we cover practical applications within smartphone-based vehicle telem- atics, such as navigation, transportation mode classification, C-ITS, MCC, driver behavior classification, and road condition monitoring.

4.4.1 Navigation

As opposed to navigation utilizing vehicle-fixed sensors, smartphone-based automotive navigation is constrained by the fact that the sensor measurements depend not only on the vehicle dynamics, but also on the orientation, position, and movements of the smartphone relative to the vehicle. (The relation between the smartphone's placement in the vehicle and the smartphone-to-vehicle dynamics is studied in Chapter 8.) In the following, we describe what this means for a designer of a smartphone-based automotive navigation system, and review methods for estimating the orientation and position of the smartphone with respect to the vehicle. Subsequently, we discuss supplementary information sources and how these can be used to improve the navigation solution.

1) Smartphone-based Vehicle Navigation and Smartphone-to-Vehicle Alignment: Low-cost automotive navigation is often performed by employing a GNSS-aided INS, fusing measurements from a GNSS receiver and an IMU. Typically, a GNSS-aided INS will provide three-dimensional estimates of the vehicle's position, velocity, and attitude (orientation) [7]. The estimates have the same update rate as the IMU measurements; refer to [204, 205, 206] for examples of smartphone-based GNSS-aided INSs. The mathematical details of GNSS-aided inertial navigation are delayed until Chapter 5. Since the measurements originate from sensors within the smartphone, a smartphone-based navigation system will provide estimates of the smartphone's, rather than of the vehicle's, dynamics and position. For the position and velocity estimates, the difference is typically negligible (when considering applications such as driver navigation and speed compliance) as long as the smartphone remains fixed to the vehicle. However, the attitude of the smartphone may obviously be very different from the attitude of the vehicle. To construct estimates of the vehicle's attitude based on estimates of the smartphone's attitude, one typically needs some knowledge of the smartphone-to-vehicle orientation. In [204], the smartphones were fixed to the vehicle during the entire field test, and the smartphone-to-vehicle orientation was estimated using measurements from a vehicle-fixed tactical-grade IMU that had already been aligned to the vehicle frame. In most practical applications though, measurements from IMUs that have been pre-aligned to the vehicle will not be available. As a consequence, several stand-alone solutions for smartphone-to-vehicle alignment (estimation of the smartphone-to-vehicle orientation) have been proposed. In Table 4.4, publications discussing smartphone-to-vehicle alignment are categorized based on the employed sensors. Often, the estimation will use IMU measurements and exploit the fact that the vehicle's velocity is roughly aligned with the forward direction of the vehicle frame [48]. Moreover, the smartphone's roll and pitch angles can be estimated from accelerometer measurements during zero acceleration [207], the smartphone's yaw angle can be estimated from magnetometer measurements [208], the vehicle's roll and pitch angles can, in many cases, be assumed to be approximately zero [100], and the vehicle's yaw angle can be estimated from GNSS measurements of course (or consecutive GNSS measurements of position)1 [209]. Alternatively, the smartphone-to-vehicle

1 Obviously, estimates of the smartphone's and the vehicle's attitudes with respect to some chosen navigation frame can be directly transformed into an estimate of the smartphone-to-vehicle orientation.

Table 4.4: Publications discussing smartphone-to-vehicle alignment.

Sensors                                Publications
Accelerometer                          [213, 208, 214, 215, 216, 217]
Accelerometer, GNSS                    [125, 174, 218, 219, 207, 220, 221, 222, 223]
Accelerometer, Magnetometer            [224]
Accelerometer, GNSS, Magnetometer      [100, 153, 225, 226, 227]
Gyroscope                              [211]
IMU                                    [191, 192, 228, 229, 230]
IMU, GNSS                              [48, 101, 231, 210]
IMU, Magnetometer                      [232, 233, 234]
IMU, GNSS, Magnetometer                [235, 209]

orientation can be estimated by applying principal component analysis to 1) accelerometer measurements [210]: most of the variance is concentrated along the longitudinal direction of the vehicle frame, and most of the remaining variance is concentrated along the lateral direction of the vehicle frame; or 2) gyroscope measurements [211]: most of the variance is concentrated along the vehicle's yaw axis, and most of the remaining variance is concentrated along the vehicle's pitch axis. The variances of the accelerometer measurements along the longitudinal and lateral directions are associated with forward acceleration/deceleration and vehicle turns, respectively. The variances of the gyroscope measurements along the yaw and pitch axes are associated with vehicle turns and changes in the inclination of the road, respectively. A data-driven comparison of GNSS, magnetometer, and accelerometer-based methods for smartphone-to-vehicle alignment is presented in [212]. The three coordinate frames and commonly used methods to infer their relation are summarized in Figure 4.4. Chapter 6 presents a method for smartphone-to-vehicle alignment based on non-holonomic constraints and the assumption of a vehicle roll angle close to zero. Once the smartphone-to-vehicle orientation has been estimated, the IMU measurements can be rotated to the vehicle frame, thereby making it possible to estimate, e.g., the acceleration in the vehicle's forward direction directly from the IMU measurements (see Section 4.4.5). Although smartphone-to-vehicle alignment can normally be performed with good precision when the smartphone is fixed with respect to the vehicle (the experimental study presented in Chapter 6 reports estimation errors of the vehicle's Euler angles in the order of 2 [◦] for each angle), the process is complicated by the fact that the smartphone-to-vehicle orientation may change at any time during a trip.

[Figure 4.4: The three coordinate frames of interest in smartphone-based automotive navigation (the smartphone, vehicle, and navigation frames) and methods that can be applied to infer their relation: accelerometer measurements of the gravity vector ⇒ the smartphone's roll and pitch angles; magnetometer measurements of magnetic north ⇒ the smartphone's yaw angle; longitudinal accelerations ⇒ the forward direction in the smartphone frame; gyroscope measurements during turns ⇒ the vehicle's yaw axis in the smartphone frame; GNSS measurements of course ⇒ the vehicle's yaw angle; vehicle on a horizontal surface ⇒ vehicle roll and pitch angles ≈ 0.]
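As a rough illustration of two of the alignment cues summarized in Figure 4.4, the following Python sketch computes the smartphone's roll and pitch angles from a single accelerometer sample taken during zero acceleration, and the vehicle's course from two consecutive GNSS positions. Sign and axis conventions vary between works, and the estimators in the cited publications and in Chapter 6 are considerably more elaborate; this is a sketch only.

import math

def roll_pitch_from_gravity(ax, ay, az):
    """Smartphone roll and pitch [rad] from one accelerometer sample (specific force in the
    smartphone frame), assuming the vehicle is not accelerating and a z-up sensor convention."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def course_from_gnss(lat1, lon1, lat2, lon2):
    """Approximate vehicle course [rad, clockwise from north] from two consecutive GNSS
    positions given in degrees; valid for short distances away from the poles."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(0.5 * (lat1 + lat2)))
    return math.atan2(dlon, dlat)

print(roll_pitch_from_gravity(0.1, 0.2, 9.8))  # a phone lying roughly flat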

Moreover, the accelerations and angular velocities experienced by the smartphone when it is picked up by a user are often orders of magnitude larger than those associated with normal vehicle dynamics [100]. Therefore, the vehicle dynamics observed in measurements collected by smartphone-embedded IMUs tend to be obscured by the dynamics of the hand movements whenever the smartphone is set in motion with respect to the vehicle. Solutions presented in the literature have discarded the IMU measurements during detected periods of smartphone-to-vehicle movements, and then re-initialized the inertial navigation once the smartphone is detected to be fixed with respect to the vehicle again [125]. Obviously, GNSS data can still be used for position and velocity estimation, even when the IMU measurements are discarded. However, stand-alone GNSS navigation suffers from both its low update rate and its inability to provide attitude estimates.

2) Smartphone-to-Vehicle Positioning: The problem of estimating the smartphone's position with respect to the vehicle is closely related to the problem of driver/passenger classification, i.e., determining whether a smartphone belongs to the driver or a passenger of the vehicle. Usually, it is assumed that the smartphone is placed in the vicinity of its owner, and hence, smartphone-to-vehicle positioning makes it possible to assess whether the owner of a given smartphone is also the driver of the vehicle. While neither of these problems is critical to vehicle positioning, driver/passenger classifications are of use in several applications, including distracted driving solutions [192] and insurance telematics [236]. A large number of classification features have been considered. One alternative is to study human motions as measured by the smartphone. Studied motions have included vehicle entries [152], seat-belt fastening, and pedal pressing [236]. The dynamics of the two former motions will typically differ depending on which side of the vehicle the user entered from. Similarly, the detection of pedal pressing always indicates that the user sat in the driver's seat. While these features possess some predictive power, they are susceptible to variations in the motion behavior of each individual. In addition, they are constrained by the assumption that the user carries the smartphone in his or her pocket. A second option is to utilize the fact that the specific force, measured by accelerometers, will vary depending on where in the vehicle the accelerometers have been placed [229]. The effect is most clearly seen during high-dynamic events such as when passing a pothole or during heavy cornering [101]. Typically, the measurements from a smartphone-embedded accelerometer must be compared with measurements from an additional smartphone (that has a different position in the vehicle) or from the OBD system (assuming that the vehicle trajectory can be approximated with a circle [192]). Since the difference in the specific force measured by two accelerometers only depends on their relative position, and not on their absolute positions, methods of this kind can only be used for absolute positioning within the vehicle when the absolute position of one of the sensor nodes is already known [101]. A method for estimating the relative positions of an arbitrary number of smartphones based on their inertial measurements is presented in Chapter 7.
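As a sketch of the kind of kinematic relation that such methods build on (the exact formulation used in Chapter 7 and in the cited works may differ), consider two accelerometers A and B rigidly mounted in the same vehicle. Since the acceleration of the vehicle's reference point and the gravitational contribution are common to both sensors, the difference between their specific force measurements in the body frame satisfies

f_A^b - f_B^b = \dot{\omega}^b \times r_{AB}^b + \omega^b \times (\omega^b \times r_{AB}^b),

where \omega^b denotes the vehicle's angular velocity and r_{AB}^b the position of A relative to B. The difference thus depends only on the relative sensor position and the angular motion, which is what makes relative (but not absolute) positioning possible from accelerometer data alone.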
A third option is to utilize audio ranging. Proposed frameworks have used smartphone-embedded microphones to recognize, e.g., the vehicle's turn signals [236], or high-frequency beeps sent through the car's stereo system [168], [237]. The latter positioning systems are in general very accurate. However, the required vehicle infrastructure (Bluetooth hands-free systems) can only be found in high-end vehicles, and thus, implementations in older vehicle models necessitate expensive aftermarket installations [192]. A fourth option is to estimate the relative position of two devices from their GNSS measurements. However, since the errors are normally heavily correlated in time and often exceed the typical device distance, convergence can be expected to be slow [100]. A fifth option is to utilize vehicle-installed technology to directly identify the driver and the passengers, without making use of any smartphone sensors. If needed, the data or the results from the driver/passenger classification can then be communicated to all personal devices in the vehicle and be used to, e.g., customize their settings based on the design of distracted driving solutions. This of course assumes that each smartphone already has been associated with its owner's identifying characteristics. One way to implement a system of this kind is to record speech using directional microphones attached to each seat [238]. Each smartphone can then compare the recorded audio with its own unique pre-stored voice signature, and can thereby identify the seat of the smartphone owner. Some additional approaches to driver/passenger classification are to identify drivers from their driving characteristics, so-called driver fingerprinting [239], to position smartphones in the vehicle using Bluetooth or near field communication [236], to position smartphones based on photos taken with embedded cameras [240], or to identify the driver based on texting characteristics [241]. A selection of publications related to driver/passenger classification is summarized in Table 4.5.

[Table 4.5: Publications on smartphone-to-vehicle positioning and driver/passenger classification ([236], [242], [152], [191], [192], [229], [101], [168], [237], [238]), listing for each publication the sensors used (IMU, magnetometer, microphone), the classification features (human motion such as vehicle entry, seat-belt fastening, and pedal pressing; vehicle dynamics such as centripetal acceleration and pothole crossings; and other features such as audio ranging and recognition of turn signals), and the associated assumptions and requirements (e.g., access to OBD data or additional smartphones, vehicle-installed microphones or Bluetooth hands-free systems, or the smartphone residing in the driver's pocket).]

3) Supplementary information sources: Several supplementary information sources can be employed to improve the navigation estimates. These include road maps, road features, and dynamic constraints. By using this information to aid an INS, it is often possible to reduce the dependence on GNSS availability. Next, we discuss the three mentioned information sources and how they can be integrated with sensor measurements. MM algorithms use location-based attributes (speed limits, restrictions on travel directions, etc.) and spatial road maps together with sensor measurements to infer locations, links (arcs between nodes in a road map), or the path of a traveling vehicle [99]. The algorithms are broadly categorized as online or offline, with the former
primarily being used for turn-by-turn navigation, and the latter being used as, e.g., input in traffic analysis models. Although GNSS has been established as the dominant source of input data to MM algorithms, cellular positioning has been employed in several instances [243]. In addition, IMU measurements can be used to increase the estimation rate and bridge GNSS signal outages. Since many MM algorithms have been developed for navigation products using data from dedicated GNSS receivers, they often have to be modified to cope with the low temporal density and poor quality of GNSS data from smartphone-embedded receivers [244]. Today, digital road maps are created using both professional methods based on, e.g., satellite imagery, and crowdsourcing by collecting GNSS traces from smartphone-equipped drivers. The gains of crowdsourcing have been magnified by the ubiquity of smartphones, thereby motivating many navigation service providers to replace the commercial maps used in their products with crowdsourced open-license alternatives. One of the most popular crowdsourcing platforms is OpenStreetMap (OSM), which benefits from a large community of contributing users enabling rapid updates to changes in geography or traffic regulations [245]. However, just like many other crowdsourced databases, OSM, to some extent, suffers from logical inconsistency (duplicated lines, dead-ended one-way roads, etc.) [246], and incomplete coverage (missing or simplified objects) [247].

Artificial position measurements can be obtained by detecting road features associated with map locations. Typically, IMUs are used for the detection. Previously studied road features within smartphone-based automotive navigation include turns [182], bridges [185], potholes [228], traffic lights [248], tunnels [249], and speed bumps [250]. As demonstrated in [251], driving events such as turns, lane changes, and potholes can be used to enable smartphone-based lane-level positioning. The problem of building a map of road features was considered in [248], which used GNSS position traces collected from smartphones to detect and map traffic lights and stop signs. In the future, we can expect more implementations to make use of the vast research on simultaneous localization and mapping (SLAM) conducted by the robotics community.

Constraints on the vehicle dynamics are employed to reduce the dimension of possible navigation solutions, and thereby improve the estimation accuracy. For example, the estimated altitude of the vehicle can be constrained when the vehicle is expected to travel on an approximately flat surface or when a topographic map is available [252], the vehicle's speed and angular velocity can be constrained to comply with the minimal turning radius [253], and the estimated velocity can be assumed to be aligned with the forward direction of the vehicle frame [48].
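As a sketch of how the last constraint can be used (the formulation in [48] and in Chapter 6 may differ in details), the vehicle's velocity expressed in the body frame can be modeled as having negligible lateral and vertical components,

(C_b^n)^T v^n \approx [\, v \quad 0 \quad 0 \,]^T,

where v denotes the vehicle's speed and C_b^n the rotation matrix from the body frame to the navigation frame. The second and third components can then be fed to the navigation filter as zero-valued pseudo-measurements with suitably chosen noise variances.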
Last, we note that measurements from smartphone-embedded motion sensors and positioning technologies often display signs of pre-processing, which can affect the navigation capabilities of the device. One potential reason for this is that the sensor measurements have been filtered through built-in filters, designed to, e.g., mitigate sensor failures or increase accuracy. As a result, the measurements may display signs of latency or temporal error correlation [254]. Unfortunately, the transparency of built-in estimation filters is typically low.

4.4.2 Transportation Mode Classification

As opposed to in-vehicle sensors, smartphone-embedded sensors can be used to collect data not only on automotive driving, but also on pedestrian activities, bus rides, train rides, etc. Hence, for those smartphone-based vehicle telematics applications where the primary interest lies in data collected while traveling in a specific vehicle type, accurate transportation mode classification is an essential capability. For example, in insurance telematics and participatory bus arrival time prediction systems, the primary interest typically lies in car trips [100] and bus trips [255], respectively. Smartphone-based transportation mode classification has also been employed for general trip reconstruction, which may be used for health monitoring or in studies of travel behavior [104]. While it would be theoretically possible to let the smartphone users manually label their trips or start and stop the data collection, the resulting burden would discourage many people from using the app to begin with [72].

In the days before the breakthrough of the smartphone, mobile-based transportation mode classification only utilized cellular positioning [256, 257]. Today, the classification primarily relies on GNSS receivers and accelerometers. The classification methods can be divided into heuristic rule-based approaches and machine-learning methods [258], [259]. In some cases, the algorithm will, in a first stage, attempt to detect the time points when the user switches from one mode to another, and in a second stage classify each segment of data between two such identified switches. Obviously, the accuracy of the classification is, in these cases, constrained by the accuracy of the mode-switch detection [258]. Generally, it is comparatively easy to separate motorized modes from non-motorized ones, and hence, many studies will first make a coarse classification by attempting to detect all motorized data segments [260, 261]. The classification benefits from the fact that pedestrians and bicyclists have more freedom to change their direction than automobiles [262], and that the typical speeds achieved when, say, riding in passenger cars and walking tend to be very different [263].

There are several measurement features that can be used to separate different motorized modes. For example, the acceleration and braking frequency of vehicles that move alongside other traffic (i.e., cars, buses, and trams) tends to be vastly different from that of vehicles moving independently of other traffic (i.e., trains) [260]. Similarly, [105] illustrated that the distributions of mean (as taken over a complete trip) absolute accelerations can be expected to be very different for trains, trams, cars, and buses. Moreover, cars are more prone to be involved in quick driving maneuvers than larger vehicles [260], often reach higher speeds on the freeway [263], and typically do not, as opposed to public transport, permit walking inside the vehicle. Of course, public transport also tends to make a larger number of stops per driven kilometer. Furthermore, if information on timetables, bus routes, or rail maps is available, this will clearly simplify the identification of the corresponding transportation modes [261, 264]. It may also be possible to make use of semantic information, such as home and work location, and information on behavioral patterns [105].
Likewise, one may use the fact that if the last recorded car trip ended at a given location, it is likely that the next recorded car trip will start at the same location. In [103], it is also proposed that one could use prior probabilities on different transportation modes that depend on the current day of the week, the current month, or the weather. For example, people are more likely to walk longer distances in clear weather than they are when it is raining.

Proposals on how to reduce the energy consumed by smartphones collecting data for transportation mode classification have included sparse GNSS sampling [258], implementations only utilizing inertial sensors [260, 265], and an increased use of cellular positioning. In the last case, one option is to let changes in the cellular position estimates indicate when the user moves from indoor to outdoor settings. A GNSS fix then only has to be attempted when the user is expected to be outdoors [104]. Last, we note that it is possible to label trips made with a specific vehicle by installing dedicated vehicle-fixed devices that can communicate with the smartphone. Previous devices used for this purpose have included iBeacons [266], smart battery chargers [72], and near-field communication tags mounted on a smartphone cradle [72].
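To illustrate the heuristic rule-based end of the spectrum, the following Python sketch performs a coarse non-motorized/motorized split followed by a crude separation of motorized modes based on speed, stop frequency, and acceleration statistics. The thresholds are invented for illustration and are not taken from any of the cited studies.

def classify_transportation_mode(speeds_mps, stops_per_km, acc_std):
    """Toy rule-based classifier. speeds_mps: GNSS speed samples [m/s];
    stops_per_km: number of stops per driven kilometer; acc_std: standard deviation
    of the horizontal acceleration [m/s^2]. All thresholds are illustrative."""
    v_max = max(speeds_mps)
    if v_max < 3.0:
        return "walking"
    if v_max < 9.0 and acc_std < 1.0:
        return "bicycle"
    # Motorized modes.
    if acc_std < 0.3 and stops_per_km < 0.2:
        return "train"        # smooth ride, few stops, moves independently of road traffic
    if stops_per_km > 1.5:
        return "bus or tram"  # frequent scheduled stops
    return "car"

print(classify_transportation_mode([0.0, 5.0, 25.0, 30.0], stops_per_km=0.4, acc_std=1.2))  # car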

4.4.3 Cooperative Intelligent Transportation Systems

The idea of C-ITS is to fuse measurements and information from vehicles, pedestrians, and local infrastructure within a limited geographical area. Studied applications include collision avoidance [267], emergency vehicle warning systems [268], traffic mitigation [269], and vehicular surveillance for crime investigation [270]. The considered information content can be characterized by its local validity, i.e., its spatial scope of utility, and its explicit lifetime, i.e., its temporal scope of validity [107]. Figure 4.5 displays the local validity and explicit lifetime of several content types typical for applications within C-ITS.

[Figure 4.5: Local validity (in km) and explicit lifetime (ranging from seconds to days) for a number of content types used in applications of C-ITS [107], including road congestion, work zone warnings, accident warnings, and emergency vehicle warnings.]

The communication of information in C-ITS is performed over a VANET, i.e., a type of mobile ad hoc network (MANET) where vehicles are used as mobile nodes [106]. VANETs comprise both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Typically, one assumes the employment of the IEEE 802.11p standard for local wireless communication, exclusively dedicated to vehicular environments. However, despite a vast amount of conducted research, the deployment of real-world VANETs has been slow. This is primarily due to the large costs associated with the required communication transceivers, i.e., the costs of setting up local roadside units and installing on-board units in vehicles [271]. Using smartphones in VANETs is appealing both from a cost perspective and because it facilitates the integration of pedestrians, bicyclists, and motorcyclists into the network. Nevertheless, the use of smartphones also brings about some technical difficulties. Most importantly, commercial smartphones are not equipped with the IEEE 802.11p interface. Moreover, [109] concluded that it is currently not feasible, from either an economical or a practical point of view, to make smartphones compliant with the IEEE 802.11p standard. One solution is to make use of the sensing, computing, and storage capabilities of the smartphone, and then use Bluetooth or WiFi to send data from the smartphone to a low-cost in-vehicle 802.11p device, only equipped with the most indispensable communication technology. This 802.11p device can in turn be used to communicate with roadside units or other on-board units [272], [273]. Implementations have also been proposed where the wireless communication is instead partially or completely based on the IEEE 802.11a/b/g/n standards [271, 274, 275, 276], or on cellular connectivity [108, 277]. However, the link performance of the IEEE 802.11a/b/g/n standards is expected to be worse than under the IEEE 802.11p standard. Whether the performance is sufficient for time-critical safety applications in smartphone-based VANETs is still an open question [278]. The pros and cons of different wireless technologies in VANETs are reviewed in [108]. Refer to [275] and [279] for details on the implementation of a smartphone-based VANET.

Notable real-world deployments of VANETs include the Cooperative ITS Corridor, a planned vehicular network that will initially be deployed on motorways in Austria, Germany, and the Netherlands. The project's main focus will be on utilizing the IEEE 802.11p interface for safe and environmentally friendly driving [280]. In Japan, vehicle drivers are provided with real-time traffic information through the Vehicle Information and Communication System (VICS). Traffic data is collected from road sensors and then sent to vehicles with installed VICS units. As of 2015, the total number of installed VICS units was over 47 million [281]. Japan also holds a 3.5-hectare ITS Proving Ground in Susono, Shizuoka, where Toyota conducts research on C-ITS. Although large-scale real-world implementations of smartphone-based VANETs have yet to be studied, smaller field tests of V2V communications have been conducted for applications within, e.g., platooning [282].

4.4.4 Mobile Cloud Computing

MCC refers to the concept of offloading computation from mobile nodes to remote hosts. By exploiting resources of “the cloud”, mobile users can bypass the computation and storage constraints of their individual devices. Resources can either be provided by a remote cloud, serving all mobile devices, or be provided by a local cloud, utilizing resources from other nearby mobile devices [110]. Lately, MCC applications have been discussed in the context of VANETs, giving rise to the term vehicular cloud computing (VCC). Vehicular clouds differ from internet clouds in several aspects. One distinguishing feature derives from the unpredictability of the vehicles' behavior. Vehicles can unexpectedly leave the VANET, and hence, the computation and storage scheme must be made resilient to such events. Moreover, VCC is often performed using so-called vehicular cloudlets, i.e., clouds utilizing the resources of surrounding vehicles without requiring an active internet connection. The use of vehicular cloudlets will generally both reduce the transmission cost and increase service availability [111]. The network architecture proposed in [113] divides the vehicular cloud into three layers. These are the earlier described vehicular cloudlet, the roadside cloudlet, composed of dedicated local servers and roadside units, and the central cloud (the basic internet cloud)2.

2 The cloud based exclusively on V2V communications, here called the vehicular cloudlet, is sometimes referred to as the vehicular cloud. However, we follow the taxonomy of [111] and reserve the term vehicular cloud for the overarching network architecture.


[Figure 4.6: The three-layered vehicular cloud architecture consisting of the vehicular cloudlet, the roadside cloudlet, and the central cloud. Moving outwards from the vehicular cloudlet towards the central cloud, the available cloud resources increase while the communication delay also increases.]

As illustrated in Figure 4.6, usage of the added resources provided by the outer layers of the vehicular cloud tends to come at the expense of an increased communication delay.

VCC has been applied in industrial projects conducted by Ford, General Motors, Microsoft, and Toyota. The focus has been on providing navigation services and infotainment systems with features such as voice-controlled traffic updates and in-vehicle WiFi hotspots [112]. Academic works have discussed implementations where the smartphone acts as a middle layer that collects data from in-vehicle sensors and simultaneously connects to the internet cloud [283], or where the vehicular cloud acts as a cloud service provider for smartphones [284]. The studied applications have included pedestrian protection [285], collision avoidance [286], cyber parallel traffic worlds [287], and camera-based route planning [288]. MCC has also been applied at the sensing level to, e.g., mitigate the large energy consumption of GNSS receivers. In [289], raw GNSS signals were uploaded from a mobile device to the cloud for post-processing. Leveraging the computational capabilities of the cloud and publicly available data such as the satellite ephemeris and earth elevation, the energy efficiency was increased as a result of reducing both the computational cost and the amount of raw GNSS data acquired for each position fix. Obviously, when offloading computations to the cloud, the energy consumption attributed to data transmission will increase. To reduce this effect, a framework for pre-transmission compression of GNSS signals was presented in [290].

4.4.5 Driver Behavior Classification

The literature on driver behavior classification is vast and encompasses a wide range of sensors and algorithms. Here, the primary focus is on smartphone-based implementations. What separates smartphone-based methods from other methods is that the former primarily rely on GNSS, accelerometer, gyroscope, and magnetometer data, whereas the latter more often will also make use of OBD data extracted from the controller area network (CAN) bus, vehicle-mounted cameras, electroencephalograms (EEGs), or other high-end sensors [66, 117].

Assessments of driver safety are usually performed by detecting driving events that are perceived as dangerous or aggressive. Two of the most commonly studied categories of driving events, which are also of particular interest in insurance telematics [291], are harsh acceleration3 and harsh cornering. Today, it is widely accepted that drivers who frequently engage in harsh acceleration and harsh cornering also tend to be involved in more accidents [216], [292], get more traffic tickets [219], and drive in a less eco-friendly manner [293]. The intimate connection between harsh acceleration and harsh cornering is made clear by the fact that the study of harsh cornering is usually considered to be synonymous with the study of horizontal (longitudinal and lateral) acceleration. As shown in [135], this can be motivated by a kinematic analysis of vehicle rollovers and tire slips. Often, the resulting constraints on a vehicle's acceleration are illustrated using a friction circle as displayed in Figure 4.7. Although both longitudinal and lateral acceleration estimates can be obtained by differentiating GNSS measurements [67, 294], the higher sampling rate of accelerometers generally makes them more suitable for the detection of short-lived acceleration events [295], [296].

The computation of acceleration estimates in the vehicle frame from smartphone-based accelerometer measurements includes three steps [233]: 1) removal of the gravitational component; 2) rotation of the measurements to the vehicle frame (see Section 4.4.1); and 3) compensation for bias and noise terms. One solution that can be used to simultaneously estimate the smartphone's attitude (which may be used to compensate for the gravitational component), the smartphone-to-vehicle orientation (which is needed to rotate the measurements to the vehicle frame), and the sensor biases is presented in Chapter 6. The impact of random noise can be mitigated by the use of low-pass filters4.

3 Here, we use harsh acceleration to refer to both aggressive forward acceleration and harsh braking.
4 Most of a passenger vehicle's horizontal accelerations will be in the frequency spectrum below 2 [Hz] [298].
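The following Python sketch outlines the three steps above, assuming that the gravity vector in the smartphone frame and the smartphone-to-vehicle rotation matrix have already been estimated (e.g., with methods such as those in Chapter 6). The 2 [Hz] cutoff follows the footnote, while the sampling rate, the bias vector, and the filter order are illustrative placeholders.

import numpy as np
from scipy.signal import butter, filtfilt

def vehicle_frame_acceleration(f_s, g_s, R_sv, bias_v, fs_hz=100.0, cutoff_hz=2.0):
    """f_s: N x 3 accelerometer (specific force) samples in the smartphone frame;
    g_s: gravity vector in the smartphone frame; R_sv: rotation matrix from the smartphone
    frame to the vehicle frame; bias_v: accelerometer bias expressed in the vehicle frame.
    Returns N x 3 low-pass filtered acceleration estimates in the vehicle frame."""
    a_s = f_s + g_s                      # 1) remove the gravitational component
    a_v = a_s @ R_sv.T                   # 2) rotate the measurements to the vehicle frame
    a_v = a_v - bias_v                   # 3) compensate for the (estimated) bias, and
    b, a = butter(2, cutoff_hz / (0.5 * fs_hz))
    return filtfilt(b, a, a_v, axis=0)   #    suppress high-frequency noise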


[Figure 4.7: Friction circle (also known as the ellipse of adherence) illustrating the limits of adhesion (LoA) on a vehicle's forward, braking, and lateral (left/right cornering) acceleration, in terms of the gravity g ≈ 9.8 [m/s²], when driving at a velocity of 50 [km/h] on a horizontal surface with worn tires that have a tread depth of 1.6 [mm] [297]. The limits are shown for water depths of 0 [mm] and 1 [mm], corresponding to accelerations of roughly 1 g and 0.4 g, respectively.]

Although much of the literature on smartphone-based detection of harsh acceleration and harsh cornering has been focused on the estimation of horizontal acceleration, several alternative methods have been proposed. For example, harsh cornering may be detected by using magnetometer [299] or gyroscope [120] measurements to estimate the vehicle's angular velocity; in [300], harsh braking and cornering were detected by thresholding the vehicle's pitch and roll angles, respectively; and in [301] and [302], microphones were used to detect whether turn signals were employed during critical steering maneuvers (if not, the maneuver was classified as careless).

Another category of driving events that are of interest in driver assessments is lane changing. In similarity with cornering events, lane changes are characterized by pronounced lateral accelerations. To separate these two types of events using sensor measurements, one may use that 1) the vehicle's yaw angle will be approximately the same before and after completing a lane change along a straight road [232]; and 2) during a lane change, the accumulated displacement along the direction perpendicular to the road is approximately equal to the lane width [301]. In general, neither of these statements will hold for cornering events. When position measurements and map information are available, the separation obviously becomes easier. Sudden or frequent lane changes are often associated with aggressive driving [232], and have been reported to account for up to 10 % of all vehicle crashes [335].

Two dominant approaches to the detection of aggressive driving events have emerged.
In some studies, these will naturally extend to an overall driver risk assessment, whereas others only discuss the classification of individual driving events. The first alternative approach to the detection of driving events is to use thresholding or some type of fuzzy inference [232]. In other words, one would construct rule-based methods based on the validity of a given set of statements regarding the measurements. For example, a harsh braking could be considered to have been detected whenever the estimated longitudinal acceleration falls below (assuming that negative values correspond to decelerations) some predetermined threshold. The second alternative is to study the temporal regularities of the measurements or related test statistics. For example, a harsh braking could be considered to have been detected whenever the IMU measurements over a specified time window are sufficiently similar to some predetermined and manually classified reference templates. A popular method used to assess the similarity between two driving events is dynamic time warping (DTW) [119, 120, 295, 310]. DTW is a time-scale invariant similarity measure, and hence, it enables correct event classifications despite variations in event length. A comparison of DTW and a threshold-based method was conducted in [295], indicating a performance difference in favor of DTW. However, in [310], DTW came up short against a naive Bayesian classifier (NBC) that used driving event features to classify events as aggressive or normal. In addition to NBC [219], driver safety classifications have also been based on machine learning approaches such as artificial neural networks [330], k-nearest neighbors [332], and support vector machines (SVMs) [333]. A summary of studies on smartphone-based classifications of driver safety is provided in Tables 4.6 and 4.7.

[Table 4.6: Publications on smartphone-based driver safety classification, listing for each publication the smartphone sensors used (GNSS, accelerometer, gyroscope, magnetometer), any external sensors (OBD, IMU), the driving events considered (harsh acceleration, harsh cornering, lane changing), and the detection/classification method (predominantly thresholding, in some cases combined with SVMs, hidden Markov models, DTW, or NBC).]

[Table 4.7: Continuation of Table 4.6, covering publications that to a larger extent employ machine-learning methods, e.g., SVMs, k-means clustering, artificial neural networks, k-nearest neighbors, Bayesian classification, adaptive thresholding, symbolic aggregate approximation, maximum likelihood estimation, and pattern matching.]

Notable related studies using black boxes or in-vehicle data recorders instead of smartphones can be found in [336] and [337].
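As a minimal sketch of the first, threshold-based alternative (the threshold and the minimum event duration are illustrative assumptions, not values from the cited studies), a harsh-braking detector operating on longitudinal acceleration estimates in the vehicle frame could be written as follows.

def detect_harsh_braking(acc_fwd, fs_hz=50.0, threshold=-3.0, min_duration_s=0.3):
    """Return (start, end) sample indices of harsh-braking events, i.e., periods where the
    longitudinal acceleration [m/s^2] stays below a negative threshold sufficiently long."""
    events, start = [], None
    min_samples = int(min_duration_s * fs_hz)
    for i, a in enumerate(acc_fwd):
        if a < threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_samples:
                events.append((start, i))
            start = None
    if start is not None and len(acc_fwd) - start >= min_samples:
        events.append((start, len(acc_fwd)))
    return events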
Harsh acceleration, cornering, and lane changing are far from the only driving events of interest in smartphone-based studies of driver safety. For example, swerving or weaving are important events that are similar in nature to lane changes [213, 302, 316], and likewise, speeding has been considered in many related studies [226, 296, 299, 316, 321, 322, 338]. Although speeding detection is straightforward if GNSS measurements are available, IMU-only solutions to speed estimation are of interest for reasons of energy efficiency, availability, and privacy. These solutions can be based on the relation between the vehicle's speed and the vehicle vibrations measured by inertial sensors [339, 340], or use the fact that the vehicle speed is inversely proportional to the time that elapses between when the front and the rear wheel pass a given road anomaly [228, 296]. Moreover, the driver behavior can be further characterized by using accelerometers to estimate the current gear [341].

Smartphone-embedded cameras can be used to assess driver safety by tracking both the road [302] and the behavior of the driver [157]. Moreover, it is possible to combine these capabilities by rapidly switching between the smartphone's front and rear-facing cameras [342]. This idea was used in [343] to detect drowsy and inattentive driving, tailgating, lane weaving, and harsh lane changes. In [301], camera-free and camera-based methods for the detection of lane changing were compared in different weather conditions. The camera-free methods outperformed the camera-based methods both in terms of performance and computational cost.

4.4.6 Road Condition Monitoring

Poor road surface conditions can cause damage to vehicles, increase maintenance costs and fuel consumption, reduce driving comfort, and may even increase the risk of accidents [344]. In the United Kingdom alone, potholes cause more than £1 million worth of damage to cars every day [332]. Due to the high costs associated with identifying and analyzing road deteriorations using specialized vehicles and equipment, the collection of crowdsourced data from smartphones in passenger vehicles has emerged as an attractive alternative [345]. Typically, related studies will threshold the standard deviation of measurements from smartphone-embedded accelerometers to detect potholes, cracks, speed bumps, or other road anomalies (with accompanying position estimates used to mark their location) [217, 228, 232, 307, 315, 346, 347, 348, 349, 350, 351, 352, 353]. Nonetheless, many variations of this approach have been proposed.

Considering the sensor setup, we note that accelerometers have in many instances been complemented with magnetometers [332, 354, 355] and gyroscopes [185, 332, 345]. Other smartphone-based implementations have used 1) GNSS measurements of speed to attempt to remove the speed dependency in the accelerometer features describing a given anomaly [356]; 2) microphones to record pothole-induced sound signals [357]; and 3) OBD suspension sensors to measure pothole-induced compressions [321]. Although cameras have been popular in the general field of pothole detection [358, 359], camera-based approaches are often considered too computationally expensive for smartphone-based implementations [360]. Moving on to the road assessment, we note that there have been several proposals on how to increase the granularity of detection algorithms that simply aim to identify road anomalies in general. For example, [345] used an SVM to differentiate between road-wide anomalies, such as speed bumps and railroad crossings, and one-sided potholes. As noted in [124], one-sided potholes tend to have a larger effect on the vehicle's roll angle than road-wide anomalies. Moreover, studies which have aimed to detect potholes have proposed to threshold the speed to reject apparent anomalies caused by curb ramps (typically, curb ramps are approached at low speeds) [124]. In [361] and [362], it was demonstrated that it is possible to estimate both the depth and length of potholes from accelerometer measurements. This information could then be used to evaluate the priority of specific repairs. Since many drivers will try to avoid driving over potholes, [363] noted that repeated instances of swerving at a given position indicate the presence of a pothole. Although speed bumps are typically not in need of repairs, efforts have still been directed towards detecting and mapping them [122, 364]. One of the motivations for this is the creation of early warning systems that could give the driver sufficient time to slow down also, for example, in low visibility conditions. The estimation of bump height is discussed in [314]. Instead of detecting individual anomalies, it is also possible to assess the overall road roughness [365], [366]. Specifically, [344] and [367] studied how accelerometer measurements relate to the established international roughness index (IRI).
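A minimal Python sketch of the detect-by-variance approach described above; the window length, the threshold on the standard deviation, and the assumption that a position estimate is available for each accelerometer sample are all illustrative.

import numpy as np

def detect_road_anomalies(acc_vert, positions, fs_hz=50.0, window_s=1.0, std_threshold=1.5):
    """Flag windows whose vertical-acceleration standard deviation [m/s^2] exceeds a threshold
    and return the position estimates associated with the window centers."""
    w = int(window_s * fs_hz)
    anomalies = []
    for k in range(0, len(acc_vert) - w + 1, w):
        if np.std(acc_vert[k:k + w]) > std_threshold:
            anomalies.append(positions[k + w // 2])
    return anomalies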
Last, we note that while most algorithms for road condition monitoring rely on thresholding techniques, some studies have instead employed linear predictive coding [365], SVMs [345, 356, 366, 368], k-means clustering [225, 369], decision-tree classifiers [185], or Bayesian networks [332]. Remaining challenges include 1) the development of standardized methods for the detection, classification, and characterization of road anomalies; 2) how to efficiently build a map of anomalies and reject spurious detections; and 3) how to model and consider the effect of vehicle suspension [122, 362].

4.5 Summary, Historical Perspective, and Outlook

The internet of things describes a world where personal devices, vehicles, and buildings are able to communicate, gather information, and take action based on digital data and human input. Some of the most promising applications are found within the field of smartphone-based vehicle telematics, which also was the topic of this survey. As demonstrated by services such as Uber and Waze, the idea of smartphone-based vehicle telematics is to employ the sensing, computation, and communication capabilities of smartphones for the benefit of primary (drivers), secondary (passengers), and tertiary users (bystanders, pedestrians, etc.). The field has been particularly driven by the rapid improvement in smartphone technology seen over the last ten years. This development can be exemplified by the continuous addition of new sensors to devices from the iPhone series, which now come with a GPS receiver (introduced in the iPhone 3G in 2008), a digital compass (iPhone 3GS, 2009), a three-axis gyroscope (iPhone 4, 2010), a GLONASS receiver (iPhone 4S, 2011), and so forth. For comparison, Figure 4.8 shows a pre-smartphone research platform developed at KTH Royal Institute of Technology, Sweden, in 2006.

Figure 4.8: Historical flashback: Pre-smartphone era telematics device developed in 2006 with dimensions of 45 × 32 × 13 [cm] [370]. The device was equipped with a touch-screen, a 2G GSM modem, a USB-connected camera, a GNSS receiver, and an IMU [371]. The display shows the user interface of a non-intrusive vehicle performance application [372].

To begin with, the present chapter described the information flow that is currently utilized by, e.g., smartphone-based insurance telematics and navigation providers, and emphasized the characteristic feedback loop that enables users to change their behavior based on the analysis of their data. Notable academic and industrial projects were reviewed, and system aspects such as embedded and complementary sensors, energy efficiency, and the human-smartphone interface were examined. Moreover, it was concluded that while the smartphone potentially can function as an enabler for low-cost implementations of, e.g., VANETs and distracted driving solutions, there are often technical difficulties that have to be overcome due to the non-dedicated nature of the device. Since the smartphone-embedded sensors are not fixed with respect to the vehicle, we reviewed methods to estimate the smartphone's orientation and position with respect to some given vehicle frame. Similarly, we presented methods for labeling smartphone data according to transportation mode. Specifically, we noted that data collected from cars can be distinguished from public transport data by considering driving characteristics, timetables, bus routes, rail maps, and personal information. Further, publications on smartphone-based driver classification were categorized based on the used sensors, considered driving events, and applied classification methods. Last, methods for road condition monitoring were reviewed. While smartphones offer a cheap, scalable, and easily implementable alternative to current road monitoring methods, several methodological challenges still remain.

The abundance of smartphones in traffic is expected to catalyse new advances in, e.g., crowdsourced traffic monitoring, platooning, ridesharing, car security, insurance telematics, and infotainment. However, although the breadth of the literature is continuously expanding, widespread adoption has so far been limited to a narrow set of applications. One of the most important reasons for this is that potential driving forces are often left unconvinced of the monetary or social value of specific ventures. For example, since few industrial partners have detailed data describing both driving behavior and the associated insurance claims over longer periods of time, the value of collecting driving data for insurance companies is still somewhat obscured. Similarly, there is a need to establish and validate standards for commonly encountered challenges such as transportation mode classification, smartphone-to-vehicle alignment, driver event detection, and driver scoring. Currently, the literature on these topics is very scattered, with many articles detailing ideas that have already been published elsewhere. Moreover, future implementations may also benefit from improvements in sensor technology, communication standards, and road maps.
While some speculate that the increasing number of vehicles being equipped with factory-installed telematics systems will eventually make smartphone-based solutions obsolete, smartphones will continue to offer scalable and user-friendly telematics platforms for many years to come.

Chapter 5

GNSS-aided Inertial Navigation Systems

A global navigation satellite system (GNSS)-aided inertial navigation system (INS) propagates navigation estimates by integrating measurements from inertial sensors, i.e., accelerometers and gyroscopes. GNSS measurements are then used to bound the resulting navigation errors. The navigation system allows the two information sources to operate in a complementary manner. To further illustrate this point, consider the capabilities of a stand-alone INS. By integrating measurements from inertial sensors, a stand-alone INS produces three-dimensional estimates of position, velocity, and orientation (assuming knowledge of an initial dynamic state) based on steady, high-rate updates. However, the estimates degrade with time, and INSs capable of providing acceptable stand-alone navigation solutions for an extended period of time are too expensive for consumer applications. GNSS, on the other hand, produces position measurements with errors limited to a few meters using equipment available at a very low cost. Nevertheless, a low-cost GNSS receiver cannot provide three-dimensional orientation estimates, is sensitive to environmental disturbances, and has an update rate that is substantially lower than that of an INS. By fusing measurements from these two sources, we obtain a complete navigation solution with a high update rate, high accuracy, and few unscheduled interruptions. The GNSS measurements prevent the INS estimates from drifting, while the inertial sensors smooth the GNSS measurements and bridge signal outages. Next, we describe the mathematical basis of a GNSS-aided INS. We begin with a one-dimensional example.


Figure 5.1: The principle of dead-reckoning is to estimate position by integrating measurements or estimates of speed.

5.1 A One-dimensional Example

Figure 5.1 depicts a vehicle whose motion will be described by one-dimensional navigation quantities. Denoting the vehicle's position at time t with r(t), it holds that

r(t) = ∫_{t_0}^{t} v(s) ds + r(t_0)    (5.1)

where v(t) denotes the vehicle's speed at time t. Hence, given some knowledge of r(t_0), we can estimate r(t) by measuring v(s) at a number of time points t_0 ≤ s ≤ t and approximating the integral in (5.1). This is the standard principle of dead-reckoning, which refers to the process of recursively estimating the position of an object by numerically integrating measurements or estimates of velocity. There are several reasons why it might be preferable to integrate measurements of speed rather than measuring position directly. Position measurements may be costly, power consuming, subject to outliers, or altogether unavailable. Additionally, speed measurements are sometimes available at a higher rate than position measurements.

Obviously, perfect integration in (5.1) requires perfect knowledge of the velocity at all time instances between t_0 and t. In practice, this means that stand-alone dead-reckoning systems always will be subject to position errors that accumulate with time. To prevent the position error from growing without bound, additional information is required. This type of information can be provided by, e.g., satellite-based positioning systems, WiFi fingerprinting systems, magnetic fingerprinting systems, terrain measurements, or road maps.

The principle of equation (5.1) can be taken one step further by noting that

v(s) = ∫_{t_0}^{s} v̇(τ) dτ + v(t_0)    (5.2)

where v̇(t) denotes the vehicle's acceleration at time t. Further, by inserting

(5.2) in (5.1), we have that

r(t) = ∫_{t_0}^{t} ∫_{t_0}^{s} v̇(τ) dτ ds + ∫_{t_0}^{t} v(t_0) ds + r(t_0).    (5.3)

Hence, given some knowledge of r(t_0) and v(t_0), we can estimate r(t) by measuring v̇(τ) at a number of time points t_0 ≤ τ ≤ t and approximating the integrals in (5.3). Next, we consider the same concept in three dimensions. In this case, we can obtain information about the vehicle's acceleration from accelerometer measurements. Since the accelerometers measure the vehicle's specific force in the body frame, the measurements must be rotated to the navigation frame. To obtain information about changes in the vehicle's orientation, we employ gyroscope measurements.
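To make the dead-reckoning principle in (5.1)–(5.3) concrete, the following minimal Python sketch integrates noisy speed measurements to a position estimate. It is illustrative only (the thesis provides Matlab code for its own methods); the speed profile, noise level, and trapezoidal integration rule are assumptions made for the example.

import numpy as np

# Illustrative assumptions: 100 Hz sampling, 60 s of driving, and a smoothly
# varying true speed profile; the measured speed adds white noise.
dt = 0.01                                      # sampling interval [s]
t = np.arange(0.0, 60.0, dt)                   # time grid [s]
v_true = 15.0 + 5.0 * np.sin(0.1 * t)          # true speed [m/s]
r_true = 15.0 * t - 50.0 * np.cos(0.1 * t) + 50.0   # exact integral of v_true

rng = np.random.default_rng(0)
v_meas = v_true + rng.normal(0.0, 0.2, t.size)      # noisy speed measurements

# Dead reckoning: approximate the integral in (5.1) with the trapezoidal rule,
# starting from a known initial position r(t0) = 0.
r_est = np.concatenate(([0.0],
                        np.cumsum(0.5 * (v_meas[1:] + v_meas[:-1]) * dt)))

print(f"position error after 60 s: {r_est[-1] - r_true[-1]:.2f} m")

Repeated runs of the sketch show position errors that grow with the integration time, which is exactly the drift that the aiding information discussed above is meant to bound.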

5.2 The Three-dimensional Case

To start with, let us consider the three-dimensional version of (5.3), which can be written as

r^n(t) = ∫_{t_0}^{t} ∫_{t_0}^{s} v̇^n(τ) dτ ds + ∫_{t_0}^{t} v^n(t_0) ds + r^n(t_0).    (5.4)

Here, we use the superscript n to denote quantities in the navigation frame, which is some chosen coordinate frame that is fixed with respect to earth. Similarly, we will use the superscript b to denote quantities in the body frame, i.e., a coordinate frame that is fixed to the vehicle of interest. All sensors will be assumed to be fixed to the origin of the body frame, with their sensitive axes aligned with the respective axes of the body frame.

5.2.1 Inertial Navigation

The accelerometer measurements of specific force can be written as

a^b = v̇^b − g^b    (5.5)

where g denotes the gravity force. This further means that the vehicle's acceleration in the navigation frame can be obtained as

v̇^n = C^n_b a^b + g^n    (5.6)

where C^n_b denotes the rotation matrix from the body frame to the navigation frame. We emphasize that rotation matrices are only one way to represent three-dimensional orientations. Other popular alternatives include Euler angles and quaternions.

Now, consider the quantities in the right-hand side of equation (5.6). Here, a^b is measured and g^n is known (assuming that we have knowledge of the approximate position). Thus, to perform the integration in (5.4), it only remains to identify the vehicle's orientation with respect to the navigation frame. For this purpose, we will make use of the relation

C^n_b(t) = C^n_b(t_0) exp( ∫_{t_0}^{t} Ω^b(s) ds )    (5.7)

where Ω^b ≜ [ω^b]_× and ω^b is the angular velocity measured by the gyroscopes¹. We have here used [c]_× to denote the skew symmetric matrix defined such that [c_1]_× c_2 is equal to the cross product of c_1 and c_2. In the same way as equation (5.1) allows us to recursively estimate a vehicle's position by integrating speed measurements, (5.7) allows us to recursively estimate the vehicle's three-dimensional orientation by integrating gyroscope measurements.

¹Gyroscopes do not only measure the vehicle's rotation with respect to earth, but also the earth's rotation around its own axis (i.e., with respect to the inertial frame). However, this contribution is small in comparison to the noise of commercial-grade gyroscopes, and can therefore safely be neglected in low-cost applications.

5.2.2 The Discrete-time Navigation Equations

Before we can implement an INS based on (5.4), (5.6), and (5.7), the equations need to be discretized. Hence, by assuming constant quantities over each integration interval in (5.4) and (5.7) while making a linearization of the exponential in (5.7), we obtain

r^n_{k+1} ≈ r^n_k + ∆t_k v^n_k,    (5.8a)
v^n_{k+1} ≈ v^n_k + ∆t_k (C^n_{b,k} a^b_k + g^n),    (5.8b)
C^n_{b,k+1} ≈ C^n_{b,k} (I_3 + ∆t_k [ω^b_k]_×).    (5.8c)

Here, (5.4) has been divided into two steps and we have used the subindex k to denote quantities at time t_k. Further, I_ℓ and ∆t_k denote the identity matrix of dimension ℓ and the sampling interval between the sampling instances k and k + 1, respectively. Generally, the accuracy of the discretization will increase with the sampling rate of the inertial sensors. In low-cost applications, the errors of the inertial measurements tend to be so high that the discretization errors can be neglected. Hence, we will from now on consider the relations described by equations (5.8) to be exact. Refer to [7] for a discussion on more precise discretizations and orthonormalization of the rotation matrix. As may be realized, the gyroscope measurements are integrated three times before

they are used to update the position estimates. Consequently, stand-alone inertial navigation has cubic error growth. The equations (5.8) are known as the navigation equations and are more efficiently written as

z̄_{k+1} = f^{NE}_k(z̄_k, u_k)    (5.9)

where

z̄ ≜ [(r^n)^T (v^n)^T (ψ^n_b)^T]^T,    (5.10a)
u ≜ [(a^b)^T (ω^b)^T]^T,    (5.10b)

and ψ^n_b denotes the Euler angle representation of C^n_b. An alternative approach to inertial navigation is to omit the input vector u and instead include a^n and ω^b in the state vector z̄. In this case, motion models are needed for the acceleration and the angular velocity. On the one hand, this may smooth out the navigation solution and enable improved estimates of a^n and ω^b. On the other hand, the resulting navigation system typically has higher computational complexity and tends to be slower in responding to changes in the inertial measurements. Obviously, the benefit of including motion models is limited by the models' ability to describe real-world dynamics.
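As an illustration of the discretized navigation equations (5.8), the following Python sketch propagates position, velocity, and the body-to-navigation rotation matrix one sampling instance. It is a simplified stand-alone example rather than the thesis implementation; the gravity vector, sampling interval, measurement values, and the SVD-based re-orthonormalization step are assumptions made for the example.

import numpy as np

def skew(c):
    """Skew-symmetric matrix [c]x such that [c1]x @ c2 = cross(c1, c2)."""
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

def ins_step(r_n, v_n, C_nb, a_b, w_b, dt, g_n=np.array([0.0, 0.0, 9.81])):
    """One iteration of (5.8a)-(5.8c): propagate position, velocity, and the
    body-to-navigation rotation matrix using IMU measurements (NED frame assumed)."""
    r_next = r_n + dt * v_n                          # (5.8a)
    v_next = v_n + dt * (C_nb @ a_b + g_n)           # (5.8b)
    C_next = C_nb @ (np.eye(3) + dt * skew(w_b))     # (5.8c), linearized exponential
    # Re-orthonormalize the rotation matrix to counteract the linearization error.
    u_svd, _, vt_svd = np.linalg.svd(C_next)
    return r_next, v_next, u_svd @ vt_svd

# Made-up example: vehicle accelerating forward while turning slightly.
r, v, C = np.zeros(3), np.array([10.0, 0.0, 0.0]), np.eye(3)
a_meas = np.array([1.0, 0.0, -9.81])      # specific force in the body frame [m/s^2]
w_meas = np.array([0.0, 0.0, 0.05])       # angular velocity [rad/s]
r, v, C = ins_step(r, v, C, a_meas, w_meas, dt=0.05)
print(r, v)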

5.2.3 GNSS-aided Inertial Navigation Systems

To prevent the error of the navigation solution from growing without bound, we will consider the GNSS measurements

y_k = h^{GNSS}(z̄_k) + ε_k.    (5.11)

Here, ε_k denotes measurement errors. The observation vector y includes GNSS measurements of position, and possibly also of speed and bearing. By combining equations (5.9) and (5.11), we can define the state-space model

z̄_{k+1} = f^{NE}_k(z̄_k, u_k),    (5.12a)
y_k = h^{GNSS}(z̄_k) + ε_k.    (5.12b)

It is now possible to implement a GNSS-aided INS, i.e., to estimate the vehicle's dynamics by solving the nonlinear filtering or smoothing problem defined by (5.12). This is commonly done by first linearizing the navigation equations around the current estimates and the current inertial measurements (giving the navigation error equations), and then employing an extended Kalman filter. However, as described in Section 2.2.2, many other possible solutions also exist. Often, the state vector is augmented with the bias of the inertial measurements. The corresponding bias estimates can then be used to correct the inertial measurements that drive the navigation equations. The noise terms involved in the system model are typically modelled as white Gaussian noise processes. The initial position and velocity estimates can be defined based on the initial GNSS measurements. Similarly, the initial roll and pitch angles can be extracted from the accelerometer measurements at standstill. The initial yaw angle can be obtained from magnetometer measurements or from GNSS measurements of bearing. It is also possible to implement a marginalized particle filter where each particle represents a unique initialization of the yaw angle. When sufficient weight has been distributed to one of the particles, the particle filter is terminated and the navigation system can continue with the identified initial state. Refer to [7] and [373] for further details on GNSS-aided inertial navigation. In the next chapter, we describe how a GNSS-aided INS based on measurements from smartphone-embedded sensors can be modified to provide the same capabilities as a standard GNSS-aided INS based on vehicle-fixed sensors.

Chapter 6

IMU Alignment for Smartphone-based Automotive Navigation

Abstract

Recent years have seen an increasing interest in making use of the smartphone as a cheap and viable navigation device for land-vehicles. However, smartphone-based automotive navigation suffers from the fact that the orientation of the smartphone's inertial measurement unit, with respect to the vehicle, in general is unknown. In this chapter, we present a method for simultaneous vehicle navigation and smartphone-to-vehicle alignment. In addition to the state estimates obtained from applying a standard global navigation satellite system-aided inertial navigation system to estimate the smartphone dynamics, this will also provide us with estimates of the vehicle's attitude. These estimates are used to improve the navigation solution, and also enable the estimation of additional vehicle, road, and driver characteristics that require knowledge of the vehicle's attitude. The performance of the proposed method is evaluated with both simulations and experimental data.

6.1 Introduction

The navigation of land-vehicles using vehicle-fixed sensors is a mature technology with numerous commercial applications [298]. One of the current

challenges of the intelligent transportation systems society lies in extending these navigation systems to implementations that can be based solely on measurements from cheap and readily accessible devices such as smartphones. The processing power of commercial smartphones has in the past years increased at a fast pace. In addition, smartphones have been equipped with more and more features, and today typically include both a low-cost global navigation satellite system (GNSS) receiver, and an inertial measurement unit (IMU). All in all, the modern smartphone is capable of functioning as a portable and user-friendly navigation device or measurement probe for vehicles in many settings and environments. Promising applications can be found in the fields of insurance telematics [67], traffic state estimation [374], fleet management [375], and advanced driver-assistance systems [376].

As opposed to when using vehicle-fixed sensors, smartphone-based vehicle navigation suffers from the fact that the orientation of the smartphone's IMU, with respect to the vehicle, in general is unknown. In this chapter, we show how to align the smartphone with respect to the vehicle, without the use of reference data. The motivation for this is twofold. First, knowing the relative orientation of the smartphone and the vehicle, it is possible to utilize constraints on the vehicle dynamics to improve the navigation solution. Second, this also enables the estimation of vehicle, road, or driver characteristics that require knowledge of the vehicle's attitude.

Reproducible research: The simulated and real-world data used in this chapter is available at www.kth.se/profile/jwahlst/ together with a Matlab implementation of the proposed method.

6.2 Problem Formulation

The implementation of a GNSS-aided inertial navigation system (INS) for smartphone-based automotive navigation poses several challenges. Typically, relevant design choices involve a trade-off between ease of use, which pays off in terms of commercial viability, and navigation performance. Many of the smartphone-based GNSS-aided INSs presented in the literature put restrictions on both the smartphone usage while driving [208], and on the vehicle motion during initialization [218]. Some implementations also make use of additional data from more expensive and logistically demanding sources (such as the vehicle's on-board-diagnostics system or vehicle-aligned IMUs) to e.g., estimate the smartphone-to-vehicle orientation [204]. In the following, we discuss these issues in more detail.

One of the most challenging aspects of smartphone-based vehicle navigation is the question of how to separate the dynamics of the smartphone from the dynamics of the vehicle. So far, published studies have generally assumed that the smartphone is fixed with respect to the vehicle (see e.g., [208, 218, 204]). A simple way to generalize frameworks based on this assumption is to implement a detector that attempts to detect when the smartphone is non-stationary with respect to the vehicle [377]. All data obtained during the detected period is then discarded, after which the relative attitude of the smartphone and the vehicle must be re-estimated. The success of an implementation of this kind to a large extent depends on the navigation system's ability to perform an in-motion smartphone-to-vehicle alignment. An obvious disadvantage of this approach is that all data will be lost during the detected period, which in e.g., insurance telematics applications might encourage fraudulent behavior through excessive smartphone interaction when the driver does not wish to share driving data.

The smartphone-to-vehicle orientation is typically estimated by first assuming that the vehicle is horizontally aligned during the initialization period, so that the vehicle's roll and pitch angles relative to a tangent frame both are zero. Assuming that the vehicle does not experience any acceleration, the smartphone's roll and pitch angles can be estimated from accelerometer measurements of the gravity vector. The smartphone's yaw angle can then be identified using magnetometer measurements, while the vehicle's yaw angle is estimated using GNSS measurements of planar course [235]. Alternatively, one can directly estimate the relative yaw angle of the smartphone and the vehicle by studying accelerometer measurements during pronounced acceleration [208] or deceleration [377]. The method for smartphone-to-vehicle alignment presented in this chapter has several advantages over the approach described above. For example, it is integrated with the standard method for GNSS-aided inertial navigation, it does not make the approximation that the vehicle is traveling on a flat surface, and it considers bias terms in the IMU measurements.

Measurements from IMUs that are rigidly attached to the vehicle, with their axes aligned to the vehicle frame, have been used for many purposes besides navigation. These include driver recognition and maneuver classification [120], detection of road bumps [378], and detection of additional roof load [379]. Relaxing the assumption of a vehicle-aligned IMU, it is possible to expand the application area of any of these frameworks.
To estimate the smartphone-to-vehicle orientation, we will make use of navigation constraints. Generally, if constraints on the true navigation state are known, these can be used to reduce the vector space of possible navigation solutions, which will improve the performance of the navigation system. In low-cost automotive navigation, the most commonly applied constraints restrict the vehicle's velocity to be approximately zero in directions perpendicular to the forward direction of the vehicle frame [380]. These are often referred to as non-holonomic constraints (NHCs), i.e., constraints on the vehicle's velocity.

In this chapter, we propose a method for simultaneous vehicle navigation and smartphone-to-vehicle alignment, which can be seen as an extension of the standard GNSS-aided INS. The alignment is performed by applying NHCs with the smartphone-to-vehicle orientation as an unknown state element. The relative orientation can then be recursively estimated in a Kalman filter. The navigation system is well-suited for a real-time implementation, and requires no pre-calibration of the smartphone sensors.

The proposed method is evaluated in a simulation study which randomizes the smartphone-to-vehicle orientation and the sensor errors. In addition, we show the results of a field study where driving data from several ubiquitous devices are compared to reference data. Convergence of the estimated smartphone-to-vehicle orientation is shown to occur within minutes, and the accuracy of the estimated vehicle attitude is in the order of 2 [°] for each Euler angle. It is further demonstrated that the method reduces the position error growth during GNSS outages.

6.3 State-Space Model

In this section, the navigation problem discussed in the preceding section is formulated as a constrained nonlinear filtering problem. First, we present the navigation equations and measurements used in the standard GNSS-aided INS. The navigation state is then augmented with the relative smartphone-to-vehicle orientation, after which constraints on the vehicle motion are employed to introduce a coupling between the smartphone dynamics and the relative smartphone-to-vehicle orientation.

Values of the generic variable c will throughout the chapter be denoted as measured c̃, estimated ĉ, and developing in discrete time c_k, where k (often omitted for notational convenience) is the index of the sampling instance.

Further, ĉ_{k_1|k_2} denotes the estimate of c_{k_1} using GNSS measurements up until sampling instance k_2. Following the notation in [7], we let c^{κ_3}_{κ_2κ_1} denote the physical quantity c of frame κ_1 with respect to frame κ_2, resolved in frame κ_3. Naturally, c^{κ_3}_{κ_2κ_1,k} refers to c^{κ_3}_{κ_2κ_1} at sampling instance k. The coordinate frames (see Figure 6.1) are denoted by b (forward-right-down body frame, also known as the vehicle frame), s (smartphone frame), n (north-east-down tangent frame), and i (earth-centered inertial frame).

¹Since this chapter introduces a coordinate frame that is not used in standard GNSS-aided inertial navigation, we have for increased clarity chosen to use a more explicit notation than in Chapter 5.

Figure 6.1: The employed coordinate frames: body frame b; smartphone frame s; tangent frame n; and earth-centered inertial frame i.

6.3.1 Smartphone-based GNSS-aided Inertial Navigation

We begin by presenting the navigation equations and measurement equations commonly employed in low-cost GNSS-aided INSs. These navigation systems propagate the navigation solution using high-rate IMU measurements, and then bound the resulting navigation errors using GNSS measurements available at a lower rate. Refer to Chapter 5 for details. Let the navigation state be defined as

z̄ ≜ [(r^n_{ns})^T (v^n_{ns})^T (ψ^n_{ns})^T]^T    (6.1)

where r and v denote three-dimensional position and velocity, respectively. Further, the roll, pitch, and yaw angle are elements in ψ^n_{ns} ≜ [φ^n_{ns} θ^n_{ns} ψ^n_{ns}]^T. The input vector u is given by

u ≜ [(a^s_{is})^T (ω^s_{is})^T]^T    (6.2)

where a^s_{is} and ω^s_{is} denote specific force and angular velocity, respectively. The state-space model describing the time development of the navigation state, and the GNSS measurements, is typically formulated as

z̄_{k+1} = f^{NE}_k(z̄_k, u_k),    (6.3a)
y_k = h^{GNSS}(z̄_k) + ε_k.    (6.3b)

Here, f^{NE}_k are known as the navigation equations [7]. (In practice, process noise has to be considered in (6.3a) since the input u cannot be measured perfectly.) The GNSS measurements of position, speed, and planar course are modeled as

h^{GNSS}(z̄) ≜ [(r^n_{ns})^T   v̄^n_{ns}   atan2([v^n_{ns}]_2, [v^n_{ns}]_1)]^T    (6.4)

where the horizontal speed is defined as

v̄^n_{ns} ≜ \sqrt{([v^n_{ns}]_1)^2 + ([v^n_{ns}]_2)^2}    (6.5)

and [c]_i denotes element i in c. The measurement noise ε is assumed to be white with covariance matrix [135]

R^{GNSS} ≜ Cov(ε_k) = blkdiag(σ_r^2 I_2, σ_{r,vert}^2, σ_{v̄}^2, σ_{v̄}^2 / (v̄^n_{ns})^2).    (6.6)

Here, σ_r^2, σ_{r,vert}^2, and σ_{v̄}^2 describe the accuracy of the GNSS measurements, blkdiag(·, ..., ·) denotes the block diagonal matrix with block matrices given by the arguments, and I_ℓ is the identity matrix of dimension ℓ. As previously outlined in Chapter 5, (6.3) can be used to implement a GNSS-aided INS that recursively estimates the smartphone dynamics.
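For concreteness, the GNSS measurement model (6.4)–(6.6) can be evaluated as in the Python sketch below. The sketch is illustrative only; the default standard deviations are borrowed from the simulation study in Section 6.5 and the example state vector is made-up.

import numpy as np

def h_gnss(z_bar):
    """GNSS measurement function (6.4): position, horizontal speed, and planar
    course, computed from the navigation state z_bar = [r; v; psi]."""
    r, v = z_bar[0:3], z_bar[3:6]
    v_bar = np.hypot(v[0], v[1])              # horizontal speed, eq. (6.5)
    course = np.arctan2(v[1], v[0])           # planar course
    return np.concatenate((r, [v_bar, course]))

def r_gnss(z_bar, sigma_r=1.0, sigma_r_vert=4.0, sigma_v=0.5):
    """Measurement noise covariance (6.6): the course variance scales with the
    inverse squared horizontal speed."""
    v = z_bar[3:6]
    v_bar = np.hypot(v[0], v[1])
    return np.diag([sigma_r**2, sigma_r**2, sigma_r_vert**2,
                    sigma_v**2, sigma_v**2 / v_bar**2])

z_bar = np.array([10.0, -3.0, 0.5, 12.0, 2.0, 0.0, 0.0, 0.0, 0.1])  # made-up state
print(h_gnss(z_bar))
print(np.diag(r_gnss(z_bar)))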

6.3.2 Augmenting the State-space Model

Augmenting the navigation state with the Euler angles relating the smartphone frame to the vehicle frame, we obtain the augmented navigation state

z = [z̄^T (ψ^s_{sb})^T]^T.    (6.7)

To make the corresponding augmented system observable, the GNSS measurements need to be complemented with additional measurements or constraints. One possibility is to utilize that the vehicle's velocity typically is close to zero in the lateral and up/down directions of the vehicle frame [380]. To this end, we introduce the NHCs

g^v(z) ≤ d^v    (6.8)

where d^v is some constant, while g^v(z) ≜ |A v^b_{nb}| and A ≜ [0_{2,1} I_2], with 0_{ℓ_1,ℓ_2} denoting the zero matrix of dimension ℓ_1 × ℓ_2. Further, ≤ and |·| are used to denote component-wise inequality and absolute value, respectively.

Moreover, we note that the vehicle's roll angle typically varies around zero without any large deviations, leading us to introduce the additional constraint

g^φ(z) ≤ d^φ    (6.9)

where d^φ is some constant, while g^φ(z) ≜ |b^T ψ^n_{nb}| and b ≜ [1 0 0]^T.

The state-space model (6.3) and the dynamic constraints (6.8) and (6.9) can be summarized by the new system model

z_{k+1} = f_k(z_k, u_k) + w_k,    (6.10a)

y_k = h(z_k) + ε_k,    (6.10b)

and the constraints

g(z_k) ≤ d.    (6.10c)

We have here assumed that f_k(z_k, u_k) ≜ [(f^{NE}_k(z̄_k, u_k))^T (f^ψ(z_k, u_k))^T]^T and h(z) ≜ h^{GNSS}(z̄), where f^ψ is some function describing the time development of ψ^s_{sb,k}. Further, we have made use of the definitions g(z) ≜ [(g^v(z))^T g^φ(z)]^T and d ≜ [(d^v)^T d^φ]^T, while w denotes process noise. The problem of estimating z from (6.10) defines a constrained nonlinear filtering problem which can be approached by several methods. One alternative is to formulate the constraints (6.10c) as pseudo observations and then linearize the system in an extended Kalman filter (EKF) [54]. This method is considered next.
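Before turning to the EKF, the following Python sketch illustrates how the quantities constrained by (6.8) and (6.9) can be evaluated from an augmented state estimate. The ZYX Euler-angle convention, the helper function, and the numerical values are assumptions made for the example, not excerpts from the thesis implementation.

import numpy as np

def euler_to_dcm(roll, pitch, yaw):
    """Rotation matrix that resolves a vector from the reference frame in the
    rotated frame, for ZYX Euler angles (e.g., C^s_n for psi^n_ns)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def pseudo_observations(v_n, psi_ns, psi_sb):
    """h^{ps,v}(z) = A v^b_nb and h^{ps,phi}(z) = b^T psi^n_nb: the lateral and
    vertical body-frame velocity components, and the vehicle roll angle."""
    C_sn = euler_to_dcm(*psi_ns)             # navigation -> smartphone
    C_bs = euler_to_dcm(*psi_sb)             # smartphone -> body
    v_b = C_bs @ C_sn @ v_n                  # velocity resolved in the body frame
    h_v = v_b[1:3]                           # A = [0_{2,1} I_2] selects rows 2 and 3
    C_bn = C_bs @ C_sn
    h_phi = np.arctan2(C_bn[1, 2], C_bn[2, 2])   # roll of the vehicle frame
    return h_v, h_phi

v_n = np.array([15.0, 1.0, -0.2])            # made-up navigation-frame velocity
h_v, h_phi = pseudo_observations(v_n, psi_ns=(0.02, -0.01, 0.7),
                                 psi_sb=(0.3, 0.1, -1.2))
print(h_v, h_phi)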

6.4 EKF-based Navigation System

In what follows, we present an EKF-based solution to the constrained nonlinear filtering problem presented in the preceding section. First, the navigation equations are linearized around the estimated motion dynamics and the IMU measurements. The state constraints are then re-formulated as pseudo observations that extend the measurement equation, after which also the measurement equation is linearized. The resulting state-space model forms the basis of an EKF which provides estimates of the vehicle dynamics. The section concludes with a discussion on filter initialization.

6.4.1 Model of the System Error Dynamics

We begin by defining the error vector

x ≜ [(δz)^T (δu)^T]^T    (6.11)

where δz ≜ [(δr^n_{ns})^T (δv^n_{ns})^T (δψ^n_{ns})^T (δψ^s_{sb})^T]^T. In addition to the estimation errors δr^n_{ns} ≜ r̂^n_{ns} − r^n_{ns}, δv^n_{ns} ≜ v̂^n_{ns} − v^n_{ns}, δψ^n_{ns}, and δψ^s_{sb} (the last two of which are defined by (6.19) in Appendix 6.A), the error vector also includes the IMU bias δu. Note that x can be obtained as a straightforward extension of the 15-state error vector in the standard GNSS-aided INS framework (see e.g., [7]).

Linearizing (6.10a) around ẑ and ũ, the time development of x can be described by

x_{k+1} = Φ_k x_k + G_k w_k    (6.12)

where the process noise covariance is Cov(w_k) = Q_k (see e.g., [373] for example derivations and definitions of filter matrices such as Φ_k, G_k, and Q_k). The implementation considered here models δψ^s_{sb,k} as a random walk.

We now reformulate the dynamic constraints (6.10c) as the pseudo observations

0_{2,1} = h^{ps,v}(z_k) + ε^{ps,v}_k,    (6.13a)
0 = h^{ps,φ}(z_k) + ε^{ps,φ}_k,    (6.13b)

where h^{ps,v}(z) ≜ A v^b_{nb} and h^{ps,φ}(z) ≜ b^T ψ^n_{nb}. The measurement errors ε^{ps,v} and ε^{ps,φ} are modeled as white with covariance matrices R^{ps,v} ≜ σ^2_{ps,v} I_2 and R^{ps,φ} ≜ σ^2_{ps,φ}, respectively. For notational convenience, the measurement function and the measurement noise covariance matrix for the complete set of pseudo observations are denoted by h^{ps}(z) ≜ [(h^{ps,v}(z))^T h^{ps,φ}(z)]^T and R^{ps} ≜ blkdiag(R^{ps,v}, R^{ps,φ}), respectively.

Linearizing the GNSS measurement equations (6.10b) and the pseudo observations (6.13) around ẑ and ũ, we obtain the measurement equation (see Appendix 6.A)

δy_k = H_k x_k + e_k    (6.14)

where we use the notation

δy ≜ [ (y − h^{GNSS}(\hat{\bar{z}}))^T   (0_{3,1} − h^{ps}(ẑ))^T ]^T,    (6.15)

H ≜ [(H^{GNSS})^T (H^{ps})^T]^T, and e_k ≜ [(ε_k)^T (ε^{ps,v}_k)^T ε^{ps,φ}_k]^T, with H^{GNSS} and H^{ps} defined in Appendix 6.A. The measurement noise covariance matrix is

R ≜ Cov(e_k) = blkdiag(R^{GNSS}, R^{ps}).    (6.16)

Algorithm 1 displays one iteration of the filtering algorithm resulting from implementing an EKF based on (6.12) and (6.14), with P_{k_1|k_2} denoting the state covariance of x̂_{k_1|k_2}. The computational cost of the algorithm, not considering e.g., symmetry properties, will be dominated by a term in the order of 6d^3 per iteration, where d denotes the dimension of the state vector [381]. Running the algorithm at 20 [Hz] would then require 20 · 6 · 18^3 [flops] ≈ 0.7 [Mflops], which is several orders of magnitude smaller than the maximum number of flops performed by standard smartphones [187].

Algorithm 1: EKF-based navigation algorithm.

Perform a Kalman filter time update using (6.3a) and (6.12). The navigation state errors are set to zero as the navigation solution is updated in the last step of the algorithm.

    ẑ_{k+1|k} = f_k(ẑ_{k|k}, ũ_k − δû_{k|k}),
    P_{k+1|k} = Φ_k P_{k|k} Φ_k^T + G_k Q_k G_k^T,
    x̂_{k+1|k} = [0_{1,12} (δû_{k|k})^T]^T.

If there are GNSS measurements at sampling instance k + 1, use (6.14) to perform a Kalman filter measurement update utilizing GNSS measurements, and pseudo observations of velocity and roll:

    K_{k+1} = P_{k+1|k} H_{k+1}^T (H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1})^{-1},
    x̂_{k+1|k+1} = x̂_{k+1|k} + K_{k+1} δy_{k+1},
    P_{k+1|k+1} = (I_{18} − K_{k+1} H_{k+1}) P_{k+1|k}.

If there are no GNSS measurements at sampling instance k + 1, use (6.13) to perform a Kalman filter measurement update utilizing pseudo observations of velocity and roll:

    K_{k+1} = P_{k+1|k} (H^{ps}_{k+1})^T (H^{ps}_{k+1} P_{k+1|k} (H^{ps}_{k+1})^T + R^{ps}_{k+1})^{-1},
    x̂_{k+1|k+1} = x̂_{k+1|k} − K_{k+1} h^{ps}(ẑ_{k+1|k}),
    P_{k+1|k+1} = (I_{18} − K_{k+1} H^{ps}_{k+1}) P_{k+1|k}.

Correct the navigation solution using the estimated errors. The rotation matrix update is given by (6.18) in Appendix 6.A. The Euler angle estimates ψ̂^{κ_1}_{κ_1κ_2} are uniquely defined by the rotation matrix Ĉ^{κ_1}_{κ_2}.

    [ẑ_{k+1|k+1}]_{1:6} = [ẑ_{k+1|k}]_{1:6} − [δẑ_{k+1|k+1}]_{1:6},
    Ĉ^n_{s,k+1|k+1} = (δĈ^n_{s,k+1|k+1})^T Ĉ^n_{s,k+1|k},
    Ĉ^s_{b,k+1|k+1} = (δĈ^s_{b,k+1|k+1})^T Ĉ^s_{b,k+1|k}.

It can be noted that the pseudo observations of roll (6.13b) are needed to fade out the effect of the initial smartphone-to-vehicle roll angle (the value of φ^s_{sb} does not have any effect on the error estimates x̂ if only GNSS measurements and NHCs are employed). The design parameters σ^2_{ps,v} and σ^2_{ps,φ} describe the trade-off between allowing for vehicle dynamics which deviate from those implied by perfect measurements in (6.13), and increasing the ability to utilize the added information provided by the same equations.

The assumption of approximately zero relative velocity between the smartphone and the vehicle is employed when utilizing the NHCs in the derivation of (6.14). Hence, the estimate of the smartphone's velocity will also be our best estimate of the vehicle's velocity. Although nothing is assumed about the relative smartphone-to-vehicle position, in practice, the same statement could be made regarding this quantity. To estimate the smartphone-to-vehicle position using GNSS and IMU measurements, one typically needs reference data or measurements from multiple smartphones. This is discussed further in Chapter 7.
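The linear-algebra core of Algorithm 1 can be sketched in Python as follows. This is a schematic outline under the assumption of an 18-dimensional error state; the matrices Φ_k, G_k, Q_k, H_k, R_k and the innovation δy are taken as given (the made-up values below merely exercise the code), since constructing them from the navigation equations and (6.14) is what the rest of the chapter describes.

import numpy as np

def ekf_time_update(P, Phi, G, Q):
    """Kalman filter time update of the error-state covariance (Algorithm 1, first step)."""
    return Phi @ P @ Phi.T + G @ Q @ G.T

def ekf_measurement_update(x, P, dy, H, R):
    """Kalman filter measurement update. The same code is used with the full
    (GNSS + pseudo-observation) model or with the pseudo observations only,
    by passing the corresponding H, R, and innovation dy."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ dy
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new

# Made-up matrices just to exercise the two updates for an 18-state filter.
rng = np.random.default_rng(1)
d = 18
P = np.eye(d) * 0.1
Phi, G, Q = np.eye(d), np.eye(d), np.eye(d) * 1e-4
H = rng.normal(size=(8, d))                    # 5 GNSS rows + 3 pseudo-observation rows
R = np.eye(8) * 0.5
x = np.zeros(d)
P = ekf_time_update(P, Phi, G, Q)
x, P = ekf_measurement_update(x, P, rng.normal(size=8), H, R)
print(x[:3], np.trace(P))

Note how the pseudo-observation-only branch of Algorithm 1 corresponds to calling the same measurement update with the three NHC/roll rows alone, which is what keeps the error growth bounded during GNSS outages.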

6.4.2 Initialization

The smartphone's position and velocity are initialized using the first available GNSS measurements, with the initial velocity in the vertical direction of the tangent frame set to zero. All sensor biases are initialized as zero.

It is convenient to express the initial attitudes as ψ^n_{ns,0} and ψ^n_{nb,0} and then compute ψ^s_{sb,0} from C^s_{b,0} = C^s_{n,0} C^n_{b,0}, where C^{κ_2}_{κ_1} denotes the rotation matrix from frame κ_1 to frame κ_2. The first two elements in ψ^n_{ns,0} can be estimated from accelerometer measurements during presumed zero acceleration [373]. Moreover, the first two elements in ψ^n_{nb,0} are generally close to zero, and the yaw angle of the vehicle can be initialized from the first GNSS measurement of planar course. The initial yaw angle of the smartphone typically has to be estimated by nonlinear methods. We choose to marginalize the navigation system in a marginalized particle filter (MPF), where each particle is associated with a unique initial state. Similar methods for other applications have previously been discussed in [35] and [382]. When sufficient weight has been distributed to one of the particles, the particle filter is terminated and the navigation system can continue with the identified initial state.
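A minimal Python sketch of the attitude initialization described above is given below. It assumes a forward-right-down body frame, a north-east-down navigation frame, standstill, and the specific-force model a^b = v̇^b − g^b; the sign conventions and numerical values are assumptions for the example rather than the thesis implementation.

import numpy as np

def init_roll_pitch(acc_standstill):
    """Roll and pitch from accelerometer specific-force measurements at standstill.
    At rest the accelerometer measures a^b = -g^b, so the gravity direction in the
    body frame determines the first two Euler angles."""
    ax, ay, az = acc_standstill
    roll = np.arctan2(-ay, -az)
    pitch = np.arctan2(ax, np.hypot(ay, az))
    return roll, pitch

def init_yaw_from_course(v_north, v_east):
    """Yaw (planar course) from the first GNSS velocity measurement."""
    return np.arctan2(v_east, v_north)

# Example: smartphone lying roughly flat, vehicle initially heading north-east.
acc = np.array([0.3, -0.2, -9.78])           # averaged specific force [m/s^2]
print(np.degrees(init_roll_pitch(acc)))
print(np.degrees(init_yaw_from_course(8.0, 8.0)))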

6.5 Simulation Study

We will now present the results of the conducted simulation study. True navigational data emulating typical vehicle dynamics was generated along the trajectory shown in Figure 6.2, where t denotes the time that has passed since the start of the simulation. The corresponding horizontal speed is shown in Figure 6.3. When generating the trajectory, the NHCs were utilized as hard equality constraints, so that the true vehicle dynamics satisfied h^{ps,v}(z_k) = 0_{2,1} for all k. Hence, the vehicle's pitch and yaw angles were defined by the velocity vector. The vehicle's roll angle was simulated as a zero mean, auto-regressive process of first order. Each run of the simulation produced a stationary smartphone-to-vehicle orientation (generated from uniform distributions over each Euler angle) and a constant IMU bias (generated from uniform distributions over (−0.5, 0.5) [m/s²] and (−0.2, 0.2) [°/s]). See [7, 173, 383] for typical bias values. White Gaussian measurement noise was generated with standard deviations (SDs) of 0.01 [m/s²] and 0.75 [°/s] for the accelerometers and gyroscopes, respectively, while σ_r = 1 [m], σ_{r,vert} = 4 [m], σ_{v̄} = 0.5 [m], σ_{ps,v} = 10 [m/s], and σ_{ps,φ} = 180 [°]. The update rates of the IMU and the GNSS receiver were set to 20 [Hz] and 1 [Hz], respectively. Further, the filter modeled the relative smartphone-to-vehicle orientation with a process noise of 0.01 [°] for each Euler angle. The MPF used eight particles, with the best initial state identified after 100 [s].

Figure 6.2: Vehicle trajectory generated from Monte Carlo simulations (axes: [r^n_{nb}]_1 and [r^n_{nb}]_2 in [m]; elapsed distance: 1.49 [km]; elapsed time: 120 [s]).

Figure 6.3: Vehicle speed generated from Monte Carlo simulations.

The root-mean-square error (RMSE) and the theoretical SD of each of the vehicle's Euler angles are shown in Figure 6.4. The displayed figures are the result of 100 Monte Carlo realizations. The SDs were obtained by employing the approximation (readily obtained from (6.27) in Appendix 6.A)

Cov(δψ^n_{nb}) ≈ Cov(δψ^n_{ns}) + Ĉ^n_s Cov(δψ^s_{sb}, δψ^n_{ns}) + (Ĉ^n_s Cov(δψ^s_{sb}, δψ^n_{ns}))^T + Ĉ^n_s Cov(δψ^s_{sb}) Ĉ^s_n    (6.17)

in each simulation, and then computing the average of the corresponding SDs at each sampling instance. Here, Cov(·) and Cov(·, ·) denote the covariance matrix of a random vector and the cross-covariance matrix of two random vectors, respectively. Note that all covariance and cross-covariance matrices in the right-hand side of (6.17) can be obtained as sub-matrices of the state covariance matrix P.

Figure 6.4: Estimation errors of the vehicle's Euler angles in the simulation study: (a) error of roll estimates; (b) error of pitch estimates; (c) error of yaw estimates. Each panel shows the RMSE and the theoretical SD in [°] versus t [s].

As indicated in Figure 6.4, convergence typically occurs within 60 seconds, with a steady-state RMSE in the order of 2 [°] for each Euler angle. For comparison, it can be noted that estimating the roll and pitch angles as 0 [°] at each sampling instance would result in a time-averaged RMSE of 5.74 [°] and 4.29 [°], respectively. The simulations employed the same filter as was used for the experimental data in Section 6.6, and hence, the theoretical SDs generally exceed the RMSEs. This is a consequence of filtering with a nonzero measurement noise covariance in the pseudo observations of velocity, and nonzero process noise covariance in the time development of ψ^s_{sb,k}. The exact error graphs in Figure 6.4 will be highly dependent on the vehicle dynamics. As an example, the estimates of the roll and pitch angles can be seen to exchange information uncertainty as the vehicle changes course (consider e.g., the change in theoretical SD of the roll and pitch estimates when the vehicle corners at t = 72 [s]). This can be attributed to the fact that the system's observability varies with the vehicle dynamics. Refer to e.g., [384] for details on the observability of GNSS-aided INSs during different vehicle maneuvers.
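Since all terms in (6.17) are sub-blocks of the filter covariance P and the estimated rotation matrix Ĉ^n_s, the approximation is cheap to evaluate. A minimal Python sketch, assuming the error-state ordering of (6.11) (attitude errors in elements 7–9 and smartphone-to-vehicle orientation errors in elements 10–12):

import numpy as np

def vehicle_attitude_cov(P, C_ns):
    """Approximate Cov(dpsi^n_nb) from the filter covariance P via (6.17).
    Assumed ordering: [dr(0:3), dv(3:6), dpsi_ns(6:9), dpsi_sb(9:12), du(12:18)]."""
    P_ns = P[6:9, 6:9]                     # Cov(dpsi^n_ns)
    P_sb = P[9:12, 9:12]                   # Cov(dpsi^s_sb)
    P_sb_ns = P[9:12, 6:9]                 # Cov(dpsi^s_sb, dpsi^n_ns)
    cross = C_ns @ P_sb_ns
    return P_ns + cross + cross.T + C_ns @ P_sb @ C_ns.T

# Made-up covariance and rotation matrix, only to exercise the code.
rng = np.random.default_rng(2)
A = rng.normal(size=(18, 18))
P = A @ A.T * 1e-4                         # a valid (positive definite) covariance
C_ns = np.eye(3)                           # identity rotation for simplicity
print(np.sqrt(np.diag(vehicle_attitude_cov(P, C_ns))))   # theoretical SDs [rad]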

6.6 Field Study

The algorithm presented in Section 6.4 was applied to 25 minutes of driving data collected from a Samsung S3, a Samsung S4, and an iPhone 5, all fixed to the vehicle but with unknown orientations. Simultaneously, reference data was collected using a Microstrain 3DM-GX3-35 aligned to the vehicle's coordinate frame. The update rates of the IMUs were set to 20 [Hz], while the GNSS receivers of the smartphones and the reference system were operating at update rates of 1 [Hz] and 4 [Hz], respectively. The reference data was processed in a standard GNSS-aided INS. Due to limitations in the accuracy of the reference system, the presented results should be considered as an indication of achievable performance rather than as exact measures. Although the presented framework does not explicitly require the smartphone to be fixed with respect to the vehicle, convergence is, according to the authors' experience, difficult to achieve when the smartphone is rotating with respect to the vehicle. However, since the angular velocity measured when a user picks up the smartphone generally is much larger than any angular velocity caused by vehicle maneuvers, sporadic user-initiated movements should be possible to filter out as described in Section 6.2.

Table 6.1: Root-mean-square errors of vehicle angles in field study.

                 Samsung S3   Samsung S4   iPhone 5
  φ^n_nb [°]        0.86         0.96        1.49
  θ^n_nb [°]        0.87         0.86        1.29
  ψ^n_nb [°]        2.36         2.08        3.21

The time-averaged RMSEs of the vehicle's Euler angle estimates are shown in Table 6.1, and can be seen to be in the same order as the RMSEs obtained in the simulation study in Section 6.5. Further, Figure 6.5 displays the position drifts during simulated GNSS outages, in terms of the root-mean-square horizontal position error RMSE([r̂^n_{nb}]_{1:2}). Outages were simulated by removing GNSS data from sequential periods of 60 seconds, starting 240 seconds after initialization and continuing up until the end of the data set. The proposed IMU alignment method was run several times with GNSS measurements removed from one of the periods in each run. The displayed RMSEs were then obtained by averaging over the different runs. For comparison, the corresponding position drifts resulting from applying the standard 15-state GNSS-aided INS to each smartphone are also shown. As seen from Figure 6.5, the IMU alignment method reduces the position error growth considerably for each of the three smartphones. This confirms that the pseudo observations of velocity and roll provide the navigation system with valuable information despite small errors in the estimated smartphone-to-vehicle orientation. Noteworthy is that the error drift is still quite large due to the poor quality of the inertial sensors in the smartphones.

6.7 Conclusions

This chapter has presented a method for simultaneous vehicle navigation and smartphone-to-vehicle alignment, which can be seen as an extension of the standard GNSS-aided INS. The alignment is performed by applying NHCs with the smartphone-to-vehicle orientation as an unknown state element. The accuracy of the estimated vehicle attitude is in the order of 2 [°] for each Euler angle, with convergence occurring within minutes. It was further shown that the proposed method reduces the position error growth during GNSS outages.

With reliable estimates of the relative smartphone-to-vehicle orientation, measurements from the smartphone's IMU can be utilized in the same way as if the smartphone had been manually aligned to the vehicle frame. Hence, the proposed method opens up a wide range of applications utilizing smartphone-based inertial measurements for automotive navigation, without setting any requirements on the smartphone's orientation with respect to the vehicle.

Figure 6.5: Position drifts in the field study during GNSS outage as dependent on the time length of the outage: (a) Samsung S3; (b) Samsung S4; (c) iPhone 5. Each panel compares the standard GNSS-aided INS with the EKF-based IMU alignment method, in terms of RMSE([r̂^n_{nb}]_{1:2}) [m] versus t [s].

6.A Derivation of Measurement Matrix

This appendix derives the linearized measurement matrix H_k and the measurement error δy_k in (6.14) by linearizing the measurements (6.10b) and (6.13) around ẑ. We will use that

δC^{κ_1}_{κ_2} ≜ Ĉ^{κ_1}_{κ_2} C^{κ_2}_{κ_1}    (6.18)

where Ĉ^{κ_1}_{κ_2} and C^{κ_2}_{κ_1} are uniquely defined by ψ̂^{κ_1}_{κ_1κ_2} and ψ^{κ_1}_{κ_1κ_2}, respectively, while δC^{κ_1}_{κ_2} defines δψ^{κ_1}_{κ_1κ_2} through the small angle approximation

[δψ^{κ_1}_{κ_1κ_2}]_× ≜ δC^{κ_1}_{κ_2} − I_3.    (6.19)

Here, [c]× is the skew symmetric matrix defined such that [c1]×c2 is equal to the cross product of c1 and c2. First, we approximate the measurement function in (6.10b) as

h^{GNSS}(z̄) ≈ h^{GNSS}(\hat{\bar{z}}) + ∂_{z̄} h^{GNSS}(\hat{\bar{z}})(z̄ − \hat{\bar{z}})
            = h^{GNSS}(\hat{\bar{z}}) + ∂_x h^{GNSS}(\hat{\bar{z}}) x    (6.20)

which gives

y − h^{GNSS}(\hat{\bar{z}}) ≈ H^{GNSS} x + ε    (6.21)

where

H^{GNSS} ≜ ∂_x h^{GNSS}(\hat{\bar{z}}) = [ −∂_{z̄} h^{GNSS}(\hat{\bar{z}})   0_{5,9} ]    (6.22)

         = − [ I_3       0_{3,1}                                 0_{3,1}                                 0_{3,13}
               0_{1,3}   [v̂^n_{ns}]_1 / \hat{\bar{v}}^n_{ns}       [v̂^n_{ns}]_2 / \hat{\bar{v}}^n_{ns}       0_{1,13}
               0_{1,3}   −[v̂^n_{ns}]_2 / (\hat{\bar{v}}^n_{ns})^2   [v̂^n_{ns}]_1 / (\hat{\bar{v}}^n_{ns})^2   0_{1,13} ].

Moving on to the pseudo observations of velocity in (6.13a) and assuming v^{(·)}_{bs} ≈ 0, we have

v^b_{nb} = C^b_s C^s_n v^n_{nb}
        ≈ C^b_s C^s_n v^n_{ns}    (6.23)
        = Ĉ^b_s δC^s_b Ĉ^s_n δC^n_s (v̂^n_{ns} − δv^n_{ns})
        = Ĉ^b_s (I_3 + [δψ^s_{sb}]_×) Ĉ^s_n (I_3 + [δψ^n_{ns}]_×)(v̂^n_{ns} − δv^n_{ns})
        ≈ Ĉ^b_s Ĉ^s_n (v̂^n_{ns} − δv^n_{ns} + [δψ^n_{ns}]_× v̂^n_{ns}) + Ĉ^b_s [δψ^s_{sb}]_× Ĉ^s_n v̂^n_{ns}
        = v̂^b_{nb} − Ĉ^b_s Ĉ^s_n (δv^n_{ns} + [v̂^n_{ns}]_× δψ^n_{ns}) − Ĉ^b_s [Ĉ^s_n v̂^n_{ns}]_× δψ^s_{sb}

and hence, we can write

0_{2,1} − h^{ps,v}(ẑ) ≈ H^{ps,v} x + ε^{ps,v}    (6.24)

where

H^{ps,v} ≜ −A [ 0_{3,3}   Ĉ^b_s Ĉ^s_n   Ĉ^b_s Ĉ^s_n [v̂^n_{ns}]_×   Ĉ^b_s [Ĉ^s_n v̂^n_{ns}]_×   0_{3,6} ].    (6.25)

For the pseudo observations of roll in (6.13b) we note that

[δψ^n_{nb}]_× = δC^n_b − I_3
             = Ĉ^n_b C^b_n − I_3
             = Ĉ^n_s Ĉ^s_b C^b_s C^s_n − I_3    (6.26)
             = Ĉ^n_s C^s_n + Ĉ^n_s [δψ^s_{sb}]_× C^s_n − I_3
             ≈ [δψ^n_{ns}]_× + Ĉ^n_s [δψ^s_{sb}]_× Ĉ^s_n
             = [δψ^n_{ns}]_× + [Ĉ^n_s δψ^s_{sb}]_×

which gives [7]

δψ^n_{nb} ≈ δψ^n_{ns} + Ĉ^n_s δψ^s_{sb}.    (6.27)

We then have

ψ^n_{nb} ≈ ψ̂^n_{nb} − δψ^n_{nb}
         ≈ ψ̂^n_{nb} − δψ^n_{ns} − Ĉ^n_s δψ^s_{sb}    (6.28)

where the first approximation can be derived from (6.19) while assuming [φ^n_{nb} θ^n_{nb}] ≈ [0 0] (which is reasonable for passenger vehicles on most roads), and the second approximation follows from (6.27). This results in

0 − h^{ps,φ}(ẑ) ≈ H^{ps,φ} x + ε^{ps,φ}    (6.29)

where

H^{ps,φ} ≜ −b^T [ 0_{3,6}   I_3   Ĉ^n_s   0_{3,6} ].    (6.30)

The linearized measurement matrix H in (6.14) can now be obtained from

H^{ps} ≜ [(H^{ps,v})^T (H^{ps,φ})^T]^T.    (6.31)
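For concreteness, the pseudo-observation measurement matrices (6.25) and (6.30) can be assembled as in the Python sketch below, which assumes the 18-dimensional error-state ordering [δr, δv, δψ^n_{ns}, δψ^s_{sb}, δu]; the rotation matrices and velocity passed in the example are made-up values.

import numpy as np

def skew(c):
    """Skew-symmetric matrix [c]x with [c1]x @ c2 = cross(c1, c2)."""
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

def pseudo_measurement_matrices(C_bs, C_sn, v_ns):
    """Assemble H^{ps,v} (6.25) and H^{ps,phi} (6.30) from the current estimates
    C^b_s, C^s_n, and v^n_ns. The 18 error states are ordered as
    [dr, dv, dpsi_ns, dpsi_sb, du]."""
    A = np.hstack((np.zeros((2, 1)), np.eye(2)))       # selects lateral/vertical rows
    b = np.array([1.0, 0.0, 0.0])                      # selects the roll angle
    C_bn = C_bs @ C_sn
    H_ps_v = -A @ np.hstack((np.zeros((3, 3)),
                             C_bn,
                             C_bn @ skew(v_ns),
                             C_bs @ skew(C_sn @ v_ns),
                             np.zeros((3, 6))))
    H_ps_phi = -b @ np.hstack((np.zeros((3, 6)), np.eye(3), C_sn.T, np.zeros((3, 6))))
    return H_ps_v, H_ps_phi

H_v, H_phi = pseudo_measurement_matrices(np.eye(3), np.eye(3),
                                         np.array([15.0, 1.0, -0.2]))
print(H_v.shape, H_phi.shape)   # (2, 18) and (18,)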

Chapter 7

IMU-based Smartphone-to-Vehicle Positioning

Abstract

In this chapter, we address the problem of using inertial measurements to position smartphones with respect to a vehicle-fixed accelerometer. Using rigid body kinematics, this is cast as a nonlinear filtering problem. Unlike previous publications, we consider the complete three-dimensional kinematics, and do not approximate the angular acceleration to be zero. The accuracy of an estimator based on the unscented Kalman filter is compared with the Cramér-Rao bound. As is illustrated, the estimates can be expected to be better in the horizontal plane than in the vertical direction of the vehicle frame. Moreover, implementation issues are discussed and the system model is motivated by observability arguments. The efficiency of the method is demonstrated in a field study which shows that the horizontal RMSE is in the order of 0.5 [m]. Last, the proposed estimator is benchmarked against the state-of-the-art in left/right classification. The framework can be expected to find use in both insurance telematics and distracted driving solutions.


7.1 Introduction

There is a steadily growing interest in smartphone-based traffic data collection. Today, smartphones are used for, e.g., navigation [204], traffic state estimation [385], vehicle condition monitoring [386], and driver assistance [387]. Smartphone-based measurement systems benefit not only from their versatile set of sensors, typically including both a global navigation satellite system (GNSS) receiver and an inertial measurement unit (IMU), but also from their low cost, transparency, and ease of use [83]. The need to estimate the position of a smartphone with respect to a vehicle (see Figure 7.1) can arise for two reasons. First, the position of the smartphone can be used as one of many possible features that provide information about a driving trip. An accurate point estimate of the smartphone's position could for example help answer questions such as "Who is most likely to have driven the vehicle?". This is particularly relevant for the industry of smartphone-based insurance telematics where vehicle data, collected using smartphones, is used to adjust premiums and provide driver feedback [67]. In insurance telematics, the insurer can for example use GNSS measurements to infer the vehicle's position, or use IMU measurements to detect harsh braking. Typically, a substantial amount of statistical signal processing is required to enhance and monitor the quality of the data [66]. Second, the position of the smartphone can be used to minimize driver distraction by enabling location-dependent limitations of smartphone functionality while driving. This idea is utilized in apps such as DriveID, which uses patented Bluetooth technology to infer which side (right or left) of the car the smartphone is placed in, and then locks the smartphone whenever it is on the driver's side during vehicle movement. While the classification of the smartphone's side in the vehicle is rather accurate, the cost of the required Bluetooth-equipped device is almost $100, thereby limiting the possibilities for large-scale deployment.

Figure 7.1: Illustration of the problem of smartphone-to-vehicle positioning.

7.1.1 Smartphone-to-Vehicle Positioning — State-of-the-Art

Many modern cars can perform smartphone-to-vehicle positioning (or the closely related task of driver identification) by utilizing original equipment manufacturer (OEM) technology based on e.g., Bluetooth positioning [168], near-field communication [236], audio ranging [237], or voice recognition [238]. Although these systems typically work very well, full market penetration is not expected within the near future, and aftermarket installations are often expensive. As a consequence, several low-cost methods for smartphone-to-vehicle positioning have been proposed. One alternative is to detect human motions using the smartphone's IMU and thereby try to infer the owner's seat in the vehicle. Studied motions have included vehicle entries [152], seat-belt fastening, and pedal press [236]. The exact patterns of the two former motions will typically differ depending on which side of the vehicle the user entered from. Similarly, the detection of pedal press always indicates that the user sat in the driver's seat. While these features possess some predictive power, they are susceptible to variations in the motion behavior of each individual. In addition, they are constrained by the assumption that the user carries the smartphone in his pocket. Another option is to estimate the relative position of two spatially separated devices from their GNSS measurements. However, since the errors normally are heavily correlated in time and often exceed the typical device distance, convergence can be expected to be slow [135]. It is also possible to perform driver identification by studying individual driving styles and travel behaviors [388].

In this chapter, we will use that the specific force, measured by smartphone-embedded accelerometers, varies with the smartphone's position in the vehicle. The effect is most clearly seen during high-dynamic events. For example, in [152] and [229], the relative longitudinal placement of two smartphones was inferred by comparing their accelerometer measurements when the vehicle passed a pothole. Similarly, in [191] and [229], the relative lateral placement of two smartphones was inferred from their centripetal acceleration. The acceleration was computed under the assumption that the vehicle's angular velocity was constant (i.e., that the angular acceleration was zero) and only nonzero along the yaw axis of the vehicle frame. In the following section, we will generalize this method by considering the complete three-dimensional kinematic relation between the specific forces. Thereby, the previously employed assumptions on the angular velocity are avoided. Obviously, the proposed method may be used in conjunction with any of the previously mentioned methods utilizing e.g., human motions, GNSS measurements, driving styles, etc.

7.1.2 Implementation Challenges

The presented estimation framework requires access to measurements from at least one gyroscope triad and at least two spatially separated accelerometer triads. Hence, if each smartphone is equipped with a 6-degrees-of-freedom IMU, we either need measurements from at least two smartphones, or from at least one smartphone and one external vehicle-fixed accelerometer triad. Since we are estimating the relative positions of the accelerometer triads, absolute positioning within the vehicle is typically only possible in the latter case and under the assumption that the absolute position of the external accelerometer triad is known. However, accurate estimates of relative positions are often sufficient input for reliable driver classifications, i.e., determining which of the smartphones belongs to the driver (assuming that such a smartphone exists). Technical solutions where smartphone sensors are enhanced with external accelerometer triads [72], embedded into tags or on-board-diagnostics (OBD) dongles, are currently utilized by telematics providers such as Cambridge Mobile Telematics and Mojio. The cost of a mass-produced accelerometer triad can be expected to be less than half a dollar [170], and hence, the cost of producing and installing an accelerometer tag is typically negligible.

7.1.3 Insurance Telematics

Now, we go into more detail on the practical use of smartphone-to-vehicle positioning within smartphone-based insurance telematics. While most automotive insurances follow the vehicle, data collected using a smartphone normally follows the user. (Since most insurance telematics apps are implemented with an autostart feature, data will be sent to the insurer whenever the smartphone is situated in a motor vehicle [104].) Generally, this will cause issues of data association. For example, we can imagine scenarios where you lend your car to someone who is not a registered driver within your telematics policy (data will be lost), or where you take a cab or ride along in a friend's car (data can be mistaken as having emerged from a trip with your car). In the latter case, smartphone-to-vehicle positioning might be used to detect which trips were made with a third-party vehicle. More specifically, one would use that the probability that the trip was made with the insured car is higher conditioned on the data having been recorded from the driver's seat, and vice versa.

One practical issue in IMU-based smartphone-to-vehicle positioning is that of data sharing, i.e., even if there are multiple smartphones in a vehicle, they might not be configured to share data in real time or to send data to the same server [76]. Moreover, problems could arise due to the implicit assumption that the smartphone is placed in the vicinity of its user. Installing an accelerometer tag solves the issues of data association (data from the tag immediately reveals whether the insured vehicle was driven) and of data sharing (the insurer will have access to data from both the tag and the smartphone-embedded accelerometer). Even though the problem of data association would be solved by the use of tags, there would still be reasons to be interested in identifying the driver in each trip. For example, this information could be used to confirm accident reports where the insurance coverage differs depending on who was driving during the accident (liability insurance often follows the driver).

7.1.4 Outline

In this chapter, we examine the possibilities of using IMU measurements to position a smartphone with respect to a vehicle. The achievable accuracy is evaluated with both simulations and experimental data. The basis of the proposed estimation framework is presented in Section 7.2. Section 7.3 studies classical Cramér-Rao bounds (CRBs) using both numerical and analytical methods. Last, Section 7.4 presents the results of the conducted field study and Section 7.5 summarizes the chapter.

Reproducible research: The experimental data used in this chapter is available at www.kth.se/profile/jwahlst/ together with a Matlab implementation of the proposed method.

7.2 Model and Estimation Framework

Consider a moving vehicle with one gyroscope triad and N accelerometer triads. All sensors are assumed to be fixed to the vehicle. Our aim will be to fuse the measurements from the gyroscopes and the accelerometers to estimate the relative positions of the accelerometer triads in the vehicle frame. We begin by presenting kinematic and dynamic models for the vehicle, and then describe the sensor model. Following this, the estimation problem is formulated as a nonlinear filtering problem where the vehicle dynamics and the sensor biases are treated as nuisance parameters. Last, relevant filtering methods are reviewed and the observability properties of the presented system are analyzed.

7.2.1 Kinematic Model

Let us, without loss of generality, order the accelerometer triads from 1 to N and define the origin of the vehicle frame to be at the position of the first accelerometer triad. It then holds that [7]

a^{(n)} = a^{(1)} + ([ω]_× [ω]_× + [α]_×) r^{(n)}    (7.1)

where a^{(n)} and r^{(n)} denote the specific force and the position in the vehicle frame of the nth accelerometer triad, respectively. Further, ω and α denote the vehicle's angular velocity and acceleration with respect to inertial space, respectively. The skew symmetric matrix [c]_× is defined such that [c_1]_× c_2 is equal to the cross product of c_1 and c_2.
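To illustrate (7.1), the following Python sketch computes the specific force that would be sensed at an accelerometer triad mounted one meter to the right of the reference triad during a turn. All numerical values are made-up; the sketch is not part of the thesis implementation.

import numpy as np

def skew(c):
    """Skew-symmetric matrix [c]x with [c1]x @ c2 = cross(c1, c2)."""
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

def specific_force_at(a1, omega, alpha, r_n):
    """Equation (7.1): specific force at a triad offset r_n from the reference triad."""
    return a1 + (skew(omega) @ skew(omega) + skew(alpha)) @ r_n

# Made-up example: cornering at 0.3 rad/s with some angular acceleration.
a1 = np.array([1.0, 2.5, -9.8])       # specific force at the reference triad [m/s^2]
omega = np.array([0.0, 0.0, 0.3])     # angular velocity [rad/s]
alpha = np.array([0.0, 0.0, 0.05])    # angular acceleration [rad/s^2]
r2 = np.array([0.0, 1.0, 0.0])        # second triad 1 m to the right [m]
print(specific_force_at(a1, omega, alpha, r2))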

7.2.2 Dynamic Model

The dynamics of the vehicle are modeled according to

ω_{k+1} = ω_k + ∆t_k α_k,    (7.2a)

α_{k+1} = α_k + w_{α,k}.    (7.2b)

Here, ∆t_k denotes the sampling interval between the sampling instances k and k + 1, and the subindex k is used to denote quantities at sampling instance k. Further, the noise term w_{α,k} is assumed to be white Gaussian with covariance ∆t_k σ_α^2 I_3, where I_ℓ denotes the identity matrix of dimension ℓ.

7.2.3 Sensor Model

The measurements from the gyroscope triad and the nth accelerometer triad are modeled as

ω̃_k = ω_k + b_{ω,k} + ε_{ω,k},    (7.3a)
ã^{(n)}_k = a^{(n)}_k + b^{(n)}_{a,k} + ε^{(n)}_{a,k},    (7.3b)

where ε_{ω,k} and ε^{(n)}_{a,k} are white Gaussian with covariances σ_ω^2 I_3 and σ_a^2 I_3, respectively. Further, (7.3) includes gyroscope and accelerometer biases which are assumed to develop according to the random walk models

b_{ω,k+1} = b_{ω,k} + w_{ω,k},    (7.4a)
b^{(n)}_{a,k+1} = b^{(n)}_{a,k} + w^{(n)}_{a,k}.    (7.4b)

Here, w_{ω,k} and w^{(n)}_{a,k} are assumed to be white Gaussian noise processes with covariances ∆t_k σ_{b,ω}^2 I_3 and ∆t_k σ_{b,a}^2 I_3, respectively.
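A minimal Python simulation of the sensor model (7.3)–(7.4), for the gyroscope triad and one accelerometer triad, might look as follows; the noise levels, biases, and true signals are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
dt, n_steps = 0.05, 200                       # 20 Hz for 10 s (illustrative)
sigma_w, sigma_a = 0.01, 0.05                 # measurement noise SDs
sigma_bw, sigma_ba = 1e-3, 1e-3               # bias random-walk SDs

b_w = np.array([0.01, -0.02, 0.005])          # initial gyroscope bias [rad/s]
b_a = np.array([0.1, -0.05, 0.02])            # initial accelerometer bias [m/s^2]
omega_true = np.array([0.0, 0.0, 0.3])        # constant true angular velocity
a_true = np.array([1.0, 2.5, -9.8])           # constant true specific force

gyro_meas, acc_meas = [], []
for _ in range(n_steps):
    # Measurements according to (7.3a)-(7.3b).
    gyro_meas.append(omega_true + b_w + sigma_w * rng.standard_normal(3))
    acc_meas.append(a_true + b_a + sigma_a * rng.standard_normal(3))
    # Bias random walks according to (7.4a)-(7.4b): covariance dt * sigma^2 * I_3.
    b_w = b_w + np.sqrt(dt) * sigma_bw * rng.standard_normal(3)
    b_a = b_a + np.sqrt(dt) * sigma_ba * rng.standard_normal(3)

print(np.mean(gyro_meas, axis=0), np.mean(acc_meas, axis=0))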

7.2.4 State-space Model

Using (7.1) – (7.4), the vehicle dynamics and the measurements can be described by the state-space model

x_{k+1} = f(x_k) + w_k,    (7.5a)

y_k = h(x_k) + ε_k.    (7.5b)

The state vector is defined as¹

x ≜ [r^T ω^T α^T b_ω^T δb_a^T]^T    (7.6)

where we use that r ≜ [(r^{(2)})^T ... (r^{(N)})^T]^T, δb_a ≜ [(δb^{(2)}_a)^T ... (δb^{(N)}_a)^T]^T, and δb^{(n)}_a ≜ b^{(n)}_a − b^{(1)}_a. It can be noted that r defines the relative position of any two accelerometer triads. Using (7.2) and (7.4), the state transition model can be shown to be linear, i.e., f(x_k) ≜ F x_k, where F is defined as

F ≜ I_{6N+3} + [ 0_{3(N−1),3N}   0_{3(N−1),3}   0_{3(N−1),3N}
                 0_{3,3N}        ∆t_k I_3       0_{3,3N}
                 0_{3(N+1),3N}   0_{3(N+1),3}   0_{3(N+1),3N} ].    (7.7)

We have here used 0_ℓ and 0_{ℓ_1,ℓ_2} to denote zero matrices of dimensions ℓ × ℓ and ℓ_1 × ℓ_2, respectively. Similarly, it can be shown that w_k is a white Gaussian noise process with covariance G Q G^T where

Q ≜ ∆t_k · blkdiag(σ_α^2 I_3, σ_{b,ω}^2 I_3, σ_{b,a}^2 I_{N,3}),    (7.8a)
G ≜ [0_{3(N+1),3N}  I_{3(N+1)}]^T,    (7.8b)

and I_{n,m} ≜ (I_{n−1} + 1_{n−1}) ⊗ I_m. Here, blkdiag(·, ..., ·) denotes the block diagonal matrix with block matrices given by the arguments, 1_ℓ denotes the matrix of dimension ℓ × ℓ with all elements equal to one, and ⊗ denotes the Kronecker product. Now, defining δã^{(n)} ≜ ã^{(n)} − ã^{(1)} and h^{(n)}(x) ≜ δb^{(n)}_a + ([ω]_× [ω]_× + [α]_×) r^{(n)}, equations (7.1) and (7.3) yield the measurement vector and the measurement function

y ≜ [ω̃^T (δã^{(2)})^T ... (δã^{(N)})^T]^T,    (7.9a)

and

h(x) ≜ [ω^T (h^{(2)}(x))^T ... (h^{(N)}(x))^T]^T,    (7.9b)

respectively. Moreover, it follows that ε_k will be a white Gaussian noise process with covariance R ≜ blkdiag(σ_ω^2 I_3, σ_a^2 I_{N,3}). It can be noted that the measurement vector (7.9a) includes the difference in the measurements from N − 1 specific pairs of accelerometer triads. The same information can be provided to the system by including the measurement difference of any N − 1 pairs of accelerometer triads, as long as all accelerometer triads appear at least once in the total set of pairs. Any measurement defined by an additional Nth pair of accelerometer triads can be obtained as a linear combination of the measurements defined by the original N − 1 pairs, and is therefore redundant.

¹The choice of variables to include in x is motivated by the observability properties of the corresponding system. This is discussed in more detail in Section 7.2.6.
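The matrices in (7.7)–(7.8) can be assembled programmatically for an arbitrary number of accelerometer triads N, as in the Python sketch below (the parameter values are illustrative).

import numpy as np
from scipy.linalg import block_diag

def build_F_G_Q(N, dt, sigma_alpha, sigma_bw, sigma_ba):
    """State-space matrices (7.7)-(7.8) for N accelerometer triads.
    State ordering: [r (3(N-1)), omega (3), alpha (3), b_omega (3), delta_b_a (3(N-1))]."""
    d = 6 * N + 3
    F = np.eye(d)
    F[3 * (N - 1):3 * N, 3 * N:3 * N + 3] = dt * np.eye(3)   # omega <- omega + dt*alpha
    G = np.vstack((np.zeros((3 * N, 3 * (N + 1))), np.eye(3 * (N + 1))))
    I_N3 = np.kron(np.eye(N - 1) + np.ones((N - 1, N - 1)), np.eye(3))
    Q = dt * block_diag(sigma_alpha**2 * np.eye(3),
                        sigma_bw**2 * np.eye(3),
                        sigma_ba**2 * I_N3)
    return F, G, Q

F, G, Q = build_F_G_Q(N=3, dt=0.05, sigma_alpha=0.5, sigma_bw=1e-3, sigma_ba=1e-3)
print(F.shape, G.shape, Q.shape)    # (21, 21), (21, 12), (12, 12)

Note how the state dimension 6N + 3 grows linearly with the number of triads, in line with the computational-cost discussion in the next section.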

7.2.5 Filters for Nonlinear Systems

Due to the nonlinearity of the system (7.5), no optimal (in the mean-square sense) and finite-dimensional estimator of x is available, and we will have to resort to suboptimal implementations. Several alternatives are discussed in Section 2.2.2. In this chapter, we will confine ourselves to unscented Kalman filter (UKF)-based implementations. Typically, a UKF has a computational cost of the same order as that of the corresponding extended Kalman filter (EKF). In the most straightforward implementation, the computational cost is dominated by a term in the order of 6d³ per iteration, where d = 6N + 3 denotes the dimension of the state vector [381]. Hence, running the algorithm with, e.g., N = 3 at 20 [Hz] would require 20 · 6 · 21³ [flops] ≈ 1.1 [Mflops], which is several orders of magnitude smaller than the maximum number of flops performed by standard smartphones [187]. If particle-based implementations are considered, it can be noted that the system is conditionally linear in both the positions r and the bias terms b_ω and δb_a. Hence, it suffices to employ a marginalized particle filter (MPF) where only the posteriors of the vehicle dynamics ω and α need to be represented by particles [12].

7.2.6 Observability

We will address issues related to the observability of (7.5) by studying the observability properties of the corresponding linear system. (This is a common approach for systems which are not subject to, e.g., multimodal or discontinuous error distributions [384].) The linearized system is said to be observable over the interval [t_1, t_k] if and only if the rank of the observability matrix O_{1,k} ≜ [H_1^⊤ (H_2 F)^⊤ ... (H_k F^{k−1})^⊤]^⊤ is equal to the dimension of x [389].

Here, H ≜ ∂_x h(x) denotes the linearized measurement matrix. As can easily be demonstrated numerically, observability is normally attained already for small k in all but degenerate cases where, e.g., the angular velocities and accelerations are only nonzero in one spatial dimension.

Several additional remarks can be made regarding the observability of the system and the chosen system model. For example, we note that the presented framework can easily be extended to include measurements from multiple gyroscope triads. A straightforward observability analysis shows that each separate gyroscope bias can be made observable. However, if the measurement noises of different gyroscope triads are assumed to be independent and identically distributed, it is usually more convenient to average the measurements before they are fed to the filter, and then only estimate the average bias. Moreover, notice that neither the vehicle's specific force nor any absolute accelerometer bias has been included in the state vector. To investigate this further, imagine that we extend the state vector with the specific force and the accelerometer bias at the first accelerometer triad, and model both quantities as developing according to a random walk. (Obviously, it suffices to extend the state vector with the specific force and the accelerometer bias at the first accelerometer triad to implicitly estimate the corresponding quantities at any accelerometer triad.) It is then easily confirmed that these quantities are mutually unobservable, even if (7.3b) is included in the measurement equation for n = 1. (Both quantities only enter the observability matrix through this measurement, and do so linearly.) However, if the bias or the specific force is known at an arbitrary accelerometer triad, this will make both quantities observable.
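As an illustration of the numerical rank test referred to above, the sketch below stacks the observability matrix O_{1,k} and compares its rank with the state dimension. The linearized measurement matrices are obtained by finite differences of a user-supplied measurement function h(·); both the helper names and the step size are illustrative assumptions.

import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    # Finite-difference approximation of H = dh/dx evaluated at x.
    y0 = np.atleast_1d(h(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(h(x + dx)) - y0) / eps
    return J

def is_observable(F, H_list):
    # Rank test on O_{1,k} = [H_1; H_2 F; ...; H_k F^{k-1}].
    d = F.shape[0]
    blocks, Fk = [], np.eye(d)
    for H in H_list:
        blocks.append(H @ Fk)
        Fk = Fk @ F
    return np.linalg.matrix_rank(np.vstack(blocks)) == d

# Usage (illustrative): H_list = [numerical_jacobian(h, x_k) for x_k in linearization_points]
#                       print(is_observable(F, H_list))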

7.3 The Cramér-Rao Bound

Given some statistical estimation problem, the classical CRB provides a lower bound on the mean square estimation error of any unbiased estimator. There are two primary reasons to study this bound. First, it can be used to evaluate the performance of suboptimal estimators. If the mean-square error of an estimator is in the vicinity of the bound, the estimator is typically considered to be adequate. Second, it may help to discover inherent limitations of the problem at hand, or give hints on how to design experiments to increase estimator performance. In this section, we will make use of both these traits.

7.3.1 Definitions and Setup for Studies of the Parametric CRB

For a system model such as (7.5), both classical and Bayesian CRBs are available. To make sure that we are able to capture all the characteristics of real-world vehicle dynamics, our focus will be on the classical CRB, using dynamics from real-world data. (The same approach has previously been taken in, e.g., [390].) This bound considers the classical framework with deterministic and unknown parameters, and must therefore be given with respect to some specified realization of the relevant state elements. Since the parametric CRB for the system model (7.5) is given by the Riccati recursion for the covariance in the EKF (where the state-space model is linearized around the true realization), it will suffice to specify realizations of r, ω, and α (the linearized filter matrices only depend on these state elements). For ease of illustration, we will reduce the Cramér-Rao inequality to scalar form, and compare the empirical scalar root-mean-square error (RMSE) of the position errors with the trace of the corresponding submatrix of the inverse Fisher information matrix (FIM). The inequality can be written as [19]

√( (1/M) Σ_{i=1}^{M} ‖r̂_i − r‖² ) ≳ √(tr(P_r))   (7.10)

and holds for each sampling instance (the time dependence has been suppressed for brevity). Here, M denotes the total number of simulations over the complete realization, ‖·‖ denotes the Euclidean norm, r̂_i denotes the estimated relative positions during simulation i, P_r denotes the submatrix of the inverse FIM providing the lower bound on the mean-square error of r̂, tr(·) denotes the trace of a matrix, and ≳ denotes inequality in the limit M → ∞. For ease of notation, we define

RMSE(r̂) ≜ (1/√dim(r)) · √( (1/M) Σ_{i=1}^{M} ‖r̂_i − r‖² )   (7.11a)

and

CRB(r) ≜ (1/√dim(r)) · √(tr(P_r)).   (7.11b)

Here, dim(r) ≜ 3(N − 1) is the length of the vector r. The factor 1/√dim(r) has been included to facilitate a comparison between scenarios with a different number of accelerometers.
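Given M Monte Carlo estimates and the corresponding block of the inverse FIM, the two scalar quantities in (7.11) can be computed as in the sketch below; the array layout is an assumption for illustration.

import numpy as np

def scalar_rmse(r_hat_runs, r_true):
    # RMSE(r_hat) in (7.11a); r_hat_runs has shape (M, dim(r)).
    err = np.linalg.norm(r_hat_runs - r_true, axis=1)
    return np.sqrt(np.mean(err**2)) / np.sqrt(r_true.size)

def scalar_crb(P_r):
    # CRB(r) in (7.11b); P_r is the position block of the inverse FIM.
    return np.sqrt(np.trace(P_r)) / np.sqrt(P_r.shape[0])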

7.3.2 Simulation-based Evaluation of the UKF

Inertial data was collected from a Microstrain 3DM-GX3-35 placed in a vehicle performing harsh cornering maneuvers.


Figure 7.2: The angular velocity along the x-axis (roll), y-axis (pitch), and z-axis (yaw) in a subset of the employed data set.

Table 7.1: Simulation parameters.

Parameter                  Value
Positions†    r^(2)        [0 1 0.5] [m]
              r^(3)        [1 0 0.5] [m]
              r^(4)        [1 1 1] [m]
Sensor noise  σ_ω          1 [°/s]
              σ_a          0.035 [m/s²]
Bias drift    σ_{b,ω}      0.0015 [°/s²/√Hz]
              σ_{b,a}      5·10⁻⁵ [m/s³/√Hz]
Dynamics      σ_α          8 [°/s³/√Hz]

† The positions are given along the vehicle's longitudinal, lateral, and up direction, respectively.

It should be intuitively clear already from equation (7.1) that a certain level of angular rotation (e.g., cornering) is required to estimate the relative positions. This is discussed in more detail in Section 7.3.3. The data included 32 cornering maneuvers collected over approximately 6 minutes on an empty parking lot. The sampling rate was 20 [Hz]. Figure 7.2 serves to illustrate the studied dynamics (the true dynamics provided by the reference system) by showing a 30-second snippet of the vehicle's angular velocity. The simulation parameters are given in Table 7.1. As can be seen, the simulated measurements were assumed to be gathered from four 6-degrees-of-freedom IMUs, placed in the corners of a rhombus. Further, the RMSEs were computed from M = 100 simulations using a standard UKF implementation [13].


Figure 7.3: The (a) horizontal and (b) vertical RMSE of the UKF evaluated against the CRB.

In each simulation run, we generated constant accelerometer and gyro biases from uniform distributions on the intervals (−0.5, 0.5) [m/s²] and (−0.25, 0.25) [°/s], respectively. See [7, 173, 383] for typical bias values. The angular velocity and acceleration were initialized from the measurements, and the relative positions and the bias terms were initialized as zero.

As the larger part of the angular velocity and the angular acceleration was along the yaw axis of the vehicle frame (as is typical for vehicle dynamics), the errors of the estimated relative position will also be larger in this direction. (Note that r^(n) is locally unobservable in the direction of ω as long as ω and α are parallel and constant.) To illustrate the resulting effect on the estimates, we study the horizontal (along the longitudinal and lateral dimensions of the vehicle frame) and vertical (in the up direction of the vehicle frame) contributions to RMSE(r̂) and CRB(r) separately. For example, when studying the horizontal RMSE(r̂), only the lateral and longitudinal elements of r were considered in (7.11a).

Figures 7.3 (a) and (b) show the CRB together with the RMSE for the horizontal position estimates r̂_hor and vertical position estimates r̂_vert, respectively. As can be seen in Figure 7.3 (a), the estimates of relative position reach decimeter-level accuracy in the horizontal plane after about twelve seconds (corresponding to one cornering maneuver). The CRB is attained after about six minutes. By contrast, Figure 7.3 (b) shows that the estimates do not converge to the true values in the vertical direction. However, it should be noted that it is unclear whether the CRB is attainable at all. Either way, the vertical distance between the accelerometers will typically be of secondary importance.

7.3.3 Analytical CRBs

To gain some insight into how the estimation accuracy depends on the vehicle dynamics, let us consider the case where the vehicle's angular velocity and angular acceleration can be assumed to be known. Since all gyroscopes measure the same angular velocity, this is usually a feasible approximation for sensor setups with a large number of IMUs [391]. In addition, it is assumed that the estimation procedure has been preceded by a bias calibration. Hence, we may disregard all bias terms. Under these assumptions, the measurement equation providing information about r at a given sampling instance can be written as

y_a = H_a r + ε_a.   (7.12)

Here, we have defined y_a ≜ [(δã^(2))^⊤ ... (δã^(N))^⊤]^⊤ and H_a ≜ I_{N−1} ⊗ Ω_a, where the angular acceleration tensor is Ω_a ≜ [ω]_× [ω]_× + [α]_×. Moreover, the covariance of ε_a is given by R_a ≜ σ_a² I_{N,3}. Now, assuming that [ω]_× α ≠ 0_{3,1}, the parametric CRB for r (which is achieved by the maximum likelihood estimator r̂_ml = (H_a^⊤ R_a^{-1} H_a)^{-1} H_a^⊤ R_a^{-1} y_a) given one observation of y_a is

Cov(r̂) ⪰ I_a(r)^{-1}   (7.13)

where we have used A ⪰ B to denote that A − B is positive semidefinite. The inverse FIM is given by

I_a(r)^{-1} = σ_a² (I_{N−1} + 1_{N−1}) ⊗ (Ω_a^⊤ Ω_a)^{-1}.   (7.14)

As expected, the bound will typically decrease as the angular velocities and accelerations increase. A derivation of (7.14) is provided in Appendix 7.A.

Several conclusions can be drawn from the CRB presented in (7.13). First, we note that the estimator performance is independent of the number of accelerometers. This is a consequence of assuming perfect knowledge of angular velocity and acceleration. Under this assumption, each accelerometer only provides information about its own position (relative to the first accelerometer), and so, increasing the total number of accelerometers will not improve the estimates, only increase the dimension of the state vector. Second, we note that the correlation of the estimated relative positions of two accelerometers in some given direction is always equal to σ_α² / √(2σ_α² · 2σ_α²) = 1/2, independent of the vehicle dynamics. In other words, the estimates will always be heavily correlated, and this cannot be mitigated by any choice of vehicle dynamics. Third, we study the variance in the direction that is perpendicular to the angular velocity and angular acceleration. That is, we consider the case N = 2, and look at the CRB for the estimates of r_{ω⊥} ≜ (ω_⊥)^⊤ r, where ω_⊥ ≜ [ω]_× α / ‖[ω]_× α‖. In Appendix 7.A, it is shown that the corresponding Cramér-Rao inequality can be written as

Var(r̂_{ω⊥}) ≥ 2σ_a² / (sin(θ) ‖α‖)²   (7.15)

where θ ≜ cos⁻¹(ω^⊤ α / (‖ω‖ ‖α‖)) is the angle between ω and α. As may be realized, (7.15) indicates that the position estimates, in the direction that is perpendicular to ω and α, get worse as the angular rotations are concentrated to one direction, i.e., when θ tends to zero. This illustrates that the horizontal position estimates benefit from rotations along the roll and pitch axes (so that the angular velocity and acceleration are not both constantly pointing in the vertical direction or close to it).
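The closed-form bound (7.15) can be checked numerically against the general expression (7.14) for N = 2; a sketch with arbitrary illustrative dynamics is given below.

import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def inv_fim_position(omega, alpha, sigma_a, N=2):
    # Inverse FIM (7.14) for known angular velocity and angular acceleration.
    Om = skew(omega) @ skew(omega) + skew(alpha)        # Omega_a
    return sigma_a**2 * np.kron(np.eye(N - 1) + np.ones((N - 1, N - 1)),
                                np.linalg.inv(Om.T @ Om))

def bound_perpendicular(omega, alpha, sigma_a):
    # Closed-form bound (7.15) in the direction perpendicular to omega and alpha.
    cos_t = omega @ alpha / (np.linalg.norm(omega) * np.linalg.norm(alpha))
    sin_t = np.sqrt(1.0 - cos_t**2)
    return 2.0 * sigma_a**2 / (sin_t * np.linalg.norm(alpha))**2

omega = np.array([0.1, 0.2, 0.8])     # [rad/s], illustrative
alpha = np.array([0.3, -0.1, 0.5])    # [rad/s^2], illustrative
P = inv_fim_position(omega, alpha, sigma_a=0.035)
w_perp = np.cross(omega, alpha)       # direction [omega]_x alpha
w_perp /= np.linalg.norm(w_perp)
print(w_perp @ P @ w_perp, bound_perpendicular(omega, alpha, sigma_a=0.035))  # the two agree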

7.4 Field Study

The field study used data from the same event as in Section 7.3.2. In addition, the same filter parameters were used. However, instead of simulating measurement errors and the effects of spatial separation, real-world measurements were employed. As a result, the estimates were affected by several errors that were not modeled in Section 7.2. These include scale factor, cross-coupling, inter-misalignment, and time synchronization errors [7]. Although these error contributions can be expected to degrade the estimation accuracy, the presented framework is still shown to be useful in practical applications.

7.4.1 Setup

Data was collected from seven smartphones rigidly attached to the vehicle. The placement of the smartphones is illustrated in Figure 7.4. The employed smartphone models were (1) Samsung S3, (2) iPhone 4, (3) Samsung S3, (4) Samsung S3, (5) iPhone 5, (6) Samsung S4, and (7) Samsung S3. The three smartphones in the front seat were placed 0.1 [m] higher up (in the up direction of the vehicle frame) than the four smartphones in the back seat. The relative positions were measured by hand with an uncertainty in the order of 5 [cm] in any given direction.


Figure 7.4: The placement of the smartphones in the longitudinal and lateral directions of the vehicle frame. For clarity, the figure is not drawn to a consistent scale.

The measurements from the different smartphones were time synchronized by the use of GNSS time. Moreover, all IMU measurements were aligned to the vehicle frame by estimating the smartphone-to-vehicle orientation as described in Chapter 6.

7.4.2 Results

We now present the results of three experiments. In the first experiment, we attempt to simultaneously estimate the relative positions of the seven smartphones using all available data. The second experiment considers the same problem but with the measurements processed from two smartphones at a time. Last, we study the accuracy of left/right classification by using measurements from two smartphones at a time, and processing the measurements from each cornering maneuver separately.

Defining the origin of the vehicle frame to be at the position of the Samsung S3 denoted by (1) in Figure 7.4, Figure 7.5 shows the true and estimated relative positions in the horizontal plane after processing all available measurements in the same filter. The true and estimated positions associated with a given smartphone are, for visibility, connected by a straight black line.


Figure 7.5: The estimated and true smartphone positions. All smartphone measurements were processed simultaneously.

Although we are not able to reach the sub-centimeter accuracy indicated in Figure 7.3 (a), the results are still promising as they show that the overall smartphone geometry is preserved. The horizontal RMSE was 0.46 [m]. However, in accordance with what could be expected from the results in Section 7.3.2, the estimates in the up direction (not illustrated by any figure) were worse and had an RMSE of 2.40 [m].

In practice, it is often more relevant to study the accuracy that can be obtained by processing measurements from two smartphones at a time, i.e., when N = 2. Figure 7.6 shows the relative errors (r̂ − r)/r of estimated distances in a given direction (longitudinal or lateral) as dependent on the true distance. Only the cases where the true distance exceeded 0.3 [m] are considered. The measurements were processed from two smartphones at a time, and all 21 possible smartphone combinations were studied (i.e., {(1, 2), (1, 3), ..., (1, 7), (2, 3), (2, 4), ..., (6, 7)}). As can be seen, correct left/right or front/back classifications were made in all of the 22 considered cases (longitudinal and lateral directions for 21 pairs, excluding cases with a true distance below 0.3 [m]), i.e., the relative error always exceeded −1. In one case, attributed to the longitudinal distance between the two iPhones, the relative error exceeded 1. For the cases (not illustrated by any figure) when the true distance was below 0.3 [m], the RMSE was 0.51 [m]. Moreover, correct left/right or front/back classifications were made in 7 out of 11 instances (not counting the instances where the true distance was equal to zero).


Figure 7.6: The relative errors as dependent on the true longitudinal and lateral distance. The measurements were processed from two smartphones at a time.

Table 7.2: Accuracy in left/right classification.

                   UKF [%]    Benchmark [191] [%]
r > 0.3 [m]        91.1       88.5
r < 0.3 [m]        50.5       50.5

Last, the proposed framework was benchmarked against the method for left/right classification presented in [191] (the method is detailed in Appendix 7.B). To obtain a sufficient sample size, each of the 32 recorded cornering maneuvers was handled separately. A cornering maneuver was registered whenever the absolute value of the vehicle's angular velocity along the yaw axis exceeded 30 [°/s] and was considered to have ended when the angular velocity fell below 10 [°/s] for at least 0.5 [s]. Table 7.2 shows the classification accuracy obtained when aggregating the results from all possible smartphone pairs. The results are shown separately for the two cases when the true lateral distance exceeded or fell short of 0.3 [m]. In the latter case, neither of the methods performs better than random guessing. However, for larger lateral distances, both methods reach a classification accuracy of about 90 %, with the UKF performing marginally better.
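The cornering-maneuver registration rule used above (start when the absolute yaw rate exceeds 30 [°/s], end when it has stayed below 10 [°/s] for at least 0.5 [s]) can be sketched as follows; the function name and interface are illustrative assumptions.

import numpy as np

def detect_cornering(yaw_rate, fs, start_thr=30.0, end_thr=10.0, end_hold=0.5):
    """Return (start, end) sample indices of registered cornering maneuvers.

    yaw_rate: yaw angular velocity in [deg/s]; fs: sampling rate in [Hz].
    """
    maneuvers, in_turn, below_since = [], False, None
    hold = int(round(end_hold * fs))
    for k, w in enumerate(np.abs(np.asarray(yaw_rate))):
        if not in_turn:
            if w > start_thr:
                in_turn, start, below_since = True, k, None
        elif w < end_thr:
            below_since = k if below_since is None else below_since
            if k - below_since + 1 >= hold:
                maneuvers.append((start, k))
                in_turn, below_since = False, None
        else:
            below_since = None
    return maneuvers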

7.5 Summary

We have presented an IMU-based method for positioning a smartphone with respect to a vehicle. The problem was formulated as a nonlinear filtering problem, where the vehicle dynamics and the sensor biases were treated as nuisance parameters. While previous studies on smartphone-to-vehicle positioning have always estimated the lateral and longitudinal distances separately, this study employed three-dimensional kinematics, thereby enabling the smartphone's position to be estimated jointly in all directions. Moreover, analytical CRBs were derived, and the system model was motivated by observability arguments. As was confirmed by both simulations and real-world experiments, the estimates in the horizontal directions of the vehicle frame will typically be better than in the up direction.

Real-world measurements were employed in three experiments. Simultaneously processing measurements from seven different smartphones gave a horizontal RMSE in the order of 0.5 [m]. When processing measurements from two smartphones at a time, correct left/right or front/back classifications were made in all of the 22 considered cases. Studying each cornering maneuver independently, correct left/right classifications were made in about 90 % of the cases when the lateral distance between the devices exceeded 0.3 [m]. To summarize, the proposed method enables accurate smartphone-to-vehicle positioning with minimal requirements on vehicle infrastructure. As such, it can be expected to find use in both smartphone-based insurance telematics and distracted driving solutions. Future work should focus on how to incorporate information from human motions and GNSS measurements in a robust manner, and how to handle situations where the smartphones' positions change with time.

7.A Derivation of Cramér-Rao Bounds

This appendix first derives the CRB in (7.13). Equation (7.12) describes a standard linear measurement equation with Gaussian errors and an invertible measurement matrix Ha. Under these conditions, the inverse FIM can be expressed as [8]

I_a(r)^{-1} = H_a^{-1} R_a (H_a^⊤)^{-1}
 = σ_a² (I_{N−1} ⊗ Ω_a)^{-1} ((I_{N−1} + 1_{N−1}) ⊗ I_3) ((I_{N−1} ⊗ Ω_a)^⊤)^{-1}
 = σ_a² (I_{N−1} ⊗ Ω_a^{-1}) ((I_{N−1} + 1_{N−1}) ⊗ I_3) (I_{N−1} ⊗ (Ω_a^⊤)^{-1})   (7.16)
 = σ_a² ((I_{N−1} + 1_{N−1}) ⊗ Ω_a^{-1}) (I_{N−1} ⊗ (Ω_a^⊤)^{-1})
 = σ_a² (I_{N−1} + 1_{N−1}) ⊗ (Ω_a^⊤ Ω_a)^{-1}

and we are done. Here, it has been used that (A ⊗ B)^⊤ = A^⊤ ⊗ B^⊤, (A ⊗ B)(A′ ⊗ B′) = (AA′) ⊗ (BB′), and, assuming that A and B are nonsingular, (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, for any matrices A, B, A′, and B′ of appropriate dimensions [392]. Next, we derive the CRB in (7.15). Using that [393]

Ω_a^{-1} ≜ ( (ωω^⊤)² + αα^⊤ − [[ω]_× [ω]_× α]_× ) / ( α^⊤ [ω]_× [ω]_× α )   (7.17)

it can be seen from (7.13) that

Var(r̂_{ω⊥}) ≥ 2σ_a² (ω_⊥)^⊤ (Ω_a^⊤ Ω_a)^{-1} ω_⊥
 = 2σ_a² (ω_⊥)^⊤ ( −[[ω]_× [ω]_× α]_× / (α^⊤ [ω]_× [ω]_× α) )² ω_⊥
 = 2σ_a² (sin(θ) ‖ω‖² ‖α‖)² / (sin(θ) ‖ω‖ ‖α‖)⁴   (7.18)
 = 2σ_a² / (sin(θ) ‖α‖)².

Here, the first equality uses that ω^⊤ ω_⊥ = 0 and α^⊤ ω_⊥ = 0, while the second equality uses that (ω_⊥)^⊤ ω_⊥ = 1 and that ‖[ω]_× α‖ = sin(θ) ‖ω‖ ‖α‖.

7.B Method for left/right Classification

Here, we outline the method for left/right classification [191] of two smartphones that was used as a benchmark in Section 7.4.2. Making the approximation that the angular velocity is constant and only nonzero along the yaw axis of the vehicle frame, we may, without loss of generality, use that ω ≈ ω[0 0 1]^⊤ and α ≈ 0_{3,1}. Inserting this into (7.1) then yields the scalar equation

a_lat^(n) ≈ a_lat^(1) − ω² r_lat^(n)   (7.19)

where (·)_lat denotes quantities in the lateral direction of the vehicle frame. Now, using (7.19) in the limit ω → 0, we first estimate the relative accelerometer bias as the constant

δb̂_{a,lat}^(n) = Σ_{k: |ω̃_k| < γ} ( ã_{lat,k}^(n) − ã_{lat,k}^(1) )   (7.20)

r̂_lat^(n) = Σ_{k=N_ω−N′}^{N_ω+N′} ( ã_{lat,k}^(n) − ã_{lat,k}^(1) − δb̂_{a,lat}^(n) ).   (7.21)

Here, N_ω is the sampling index where |ω̃_k| reaches its maximum, and N′ is some chosen integer. In Section 7.4.2 we used γ = 10 [°/s] and chose N′ so that the sum in (7.21) was taken over samples from an interval of length 1 [s].

Chapter 8

Smartphone Placement within Vehicles

Abstract

Smartphone-based driver monitoring is quickly gaining ground as a feasible alternative to competing in-vehicle and aftermarket solutions. Today, the main challenges for data analysts studying smartphone-based driving data stem from the mobility of the smartphone. In this chapter, we use kernel-based k-means clustering to infer the placement of smartphones within vehicles. All in all, trip segments are mapped into fifteen different placement clusters. As part of the presented framework, we discuss practical considerations concerning, e.g., trip segmentation, cluster initialization, and parameter selection. The proposed method is evaluated on more than 10 000 kilometers of driving data collected from about 200 drivers. To validate the interpretation of the clusters, we compare the data associated with different clusters and relate the results to real-world knowledge of driving behavior. The clusters associated with the label "Held by hand" are shown to display high gyroscope variances, low maximum speeds, low correlations between the measurements from smartphone-embedded and vehicle-fixed accelerometers, and short segment durations.

8.1 Introduction

One of the primary challenges of the intelligent transportation systems (ITS) society lies in modifying current implementations so that they can be based on

the sensing, computation, and connectivity capabilities of mobile devices such as smartphones. This field within ITS is referred to as smartphone-based vehicle telematics, and emerged after the birth of the modern smartphone, about a decade ago. Smartphones enable end-user centric and vehicle-independent telematics solutions that complement vehicle-centric factory-installed telematics systems. Generally, smartphone-based implementations benefit from seamless software updates, intuitive user interfaces, short development cycles, and low development costs [72, 82, 100, 323, 394, 395]. One example application where the involvement of smartphones is expected to lead to a major industry disruption is insurance telematics. In insurance telematics, driving data is used to adjust automotive insurance premiums and provide various value-added services to policyholders. By utilizing smartphone-embedded sensors rather than, e.g., in-vehicle sensors, many insurers expect to both lower maintenance costs and increase customer engagement [67]. As of June 2016, there were thirty-three active smartphone-based insurance telematics programs (including trials) [137]. Smartphones may also provide a cheap alternative to current data collection methods for road-condition monitoring [124], and can act as an enabler for services within real-time ridesharing [396] and vehicular ad hoc networks (VANETs) [275]. A comprehensive review of the state-of-the-art in smartphone-based vehicle telematics was presented in Chapter 4.

Smartphone-based driver behavior profiling is, due to the mobility of the smartphone, fundamentally different from traditional sensor fusion using vehicle-fixed sensors. As a consequence, recent studies have focused on how to, e.g., classify data from smartphone-embedded sensors into classes associated with different transportation modes (see Chapter 4.4.2), as well as how to estimate the orientation (see Chapter 6) and the position (see Chapter 7) of the smartphone with respect to the vehicle frame. Further, the utility and characteristics of vehicle data recorded from smartphones can be expected to depend on whether the smartphone is rigidly mounted on a cradle, placed on the passenger seat, held by hand, etc.

Despite an increasing interest in smartphone-based driver behavior profiling, there is still a big gap between academia and industry: While most academic studies assume that the smartphone is fixed to the vehicle, anyone who studies large-scale industry data collected from smartphone-embedded and orientation-dependent sensors such as accelerometers, gyroscopes, and magnetometers will inevitably have to consider the effects of, e.g., users picking up their smartphones during trips. As illustrated in Figure 8.1, this chapter attempts to bridge this gap by examining the problem of unsupervised inference on the smartphone's conceptual placement¹ within a vehicle.

¹Note that we talk of placement in a more abstract sense than simply referring to the smartphone's position with respect to the vehicle.

Figure 8.1: We consider the problem of inferring the smartphone's placement in a vehicle, i.e., whether it is placed a) in a car mount; b) on the passenger seat; c) on the center console; d) in the driver's pants (not illustrated); or e) held by hand (not illustrated).

Knowledge of the smartphone's placement is important for several reasons. For instance, it can be used to assess the reliability of collected data (inertial measurements collected from smartphones held by hand are, e.g., less suitable for the detection of harsh acceleration events since hand motions may interfere with the measurements). Moreover, it may be used for accident reconstructions (to answer, e.g., "Was the driver interacting with his smartphone before the accident?") and assessments of driver distraction (drivers interacting with their mobile phone are at a significantly increased risk of being involved in an accident [96]).

The following approach is taken in this study: Trip segments are mapped into fifteen different states using kernel-based k-means clustering. The features are computed using data from smartphone sensors and a vehicle-fixed sensor tag equipped with an accelerometer triad. Within the inference framework we make use of classical mechanics, established methods for inertial measurement unit (IMU) alignment, knowledge of the relation between the smartphone-to-vehicle orientation and the smartphone's placement in the vehicle, and behavioral statistics published by the National Highway Traffic Safety Administration (NHTSA). The results are interpreted by studying the feature distributions of the different clusters and relating this to typical driver behavior characteristics. Last, we demonstrate the effect that the smartphone's placement has on the efficiency of accelerometer-based detection of harsh accelerations.

8.2 Trip Segmentation

Assume that the smartphone at each time instant can be said to be in one and only one of a discrete number of states describing its conceptual placement. Instead of jointly estimating the smartphone placement at all time instants in a given trip, we will first attempt to identify the time points at which the state can change (such as when the driver pulls out the smartphone from his pocket and mounts it on the dashboard). Assuming that all true state changes have been detected, all driving segments stretching from one detected state change to the next can then be clustered by means of standard clustering methods.

8.2.1 Gyroscope-based Segmentation

Periods at which the smartphone state can change were considered to start when the Euclidean norm of the gyroscope measurements exceeded ω̄. Likewise, these periods were considered to end a time period t̄ after the last sampling instance at which the gyroscope measurements exceeded the same threshold. All data within these periods was then discarded, and the data before and after was used to form two new independent trip segments. Segments that were shorter than t̄, or where the speed did not exceed s̄, were also discarded. The segmentation algorithm is illustrated in Figure 8.2.
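A minimal sketch of this segmentation rule, assuming gyroscope data in [°/s], speed in [km/h], and a common sampling rate, is given below; the interface is illustrative.

import numpy as np

def segment_trip(gyro, speed, fs, w_bar=100.0, t_bar=10.0, s_bar=10.0):
    """Split one trip into (start, end) index pairs with assumed fixed placement."""
    K = gyro.shape[0]
    discard = np.zeros(K, dtype=bool)
    pad = int(round(t_bar * fs))
    for k in np.flatnonzero(np.linalg.norm(gyro, axis=1) > w_bar):
        discard[k:k + pad + 1] = True     # exceedance period extended by t_bar seconds
    segments, start = [], None
    for k in range(K):
        if not discard[k] and start is None:
            start = k
        if (discard[k] or k == K - 1) and start is not None:
            end = k if discard[k] else K
            # keep segments that are long enough and where the vehicle actually moved
            if (end - start) / fs >= t_bar and speed[start:end].max() > s_bar:
                segments.append((start, end))
            start = None
    return segments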

8.2.2 Segmentation Parameter Selection

The threshold of ω̄ = 100 [°/s] was chosen so that it will typically not be reached during normal vehicle maneuvers² (when the smartphone is fixed to the vehicle), but so that it will be reached in most of the cases when the smartphone is, by hand, moved from one place to another within the vehicle.

²One way to assess the range of angular velocities that are encountered in everyday driving is to consider the lateral acceleration of a vehicle driving in a horizontal circle with constant speed. Thus, assuming a minimum turning radius of 5 [m] and a coefficient of friction of 1, it follows that the vehicle cannot drive with an angular velocity higher than √(9.8/5) · 180/π [°/s] ≈ 80 [°/s] without losing its grip of the road surface. Here, we use that the no-sliding condition under the stated assumptions is ω²r/g < µ, where ω, r, g, and µ denote the vehicle's angular velocity, the turning radius, the gravitational acceleration, and the coefficient of friction, respectively [135].

Figure 8.2: Illustration of the trip segmentation described in Section 8.2.1 based on typical gyroscope measurements dependent on time t. The true placements over the considered trip are assumed to be c2, c3, and c1.

Further, the lower limits on the duration and speed of a segment of interest were set to t¯= 10 [s] and s¯ = 10 [km/h], respectively.

8.2.3 Accelerometer-based Segmentation

For the trips where gyroscope measurements were unavailable (some older smartphones are not equipped with gyroscopes), we instead used smartphone-based accelerometer measurements to estimate the angular velocity. First, the accelerometer measurements were low-pass filtered using a Butterworth filter of order five and with a cutoff frequency of 2 [Hz] (to suppress noise and make the number of trip segments similar to that obtained with gyroscope measurements). Second, instantaneous estimates of the smartphone's roll and pitch angles were computed by assuming that the filtered signals only reflect the accelerometer measurements due to gravity³ [7]. Obviously, this approach also relies on the assumption that any sensor biases can be neglected. Third, we used the estimated roll and pitch angles to compute the matrix δC_k ≜ C_{k+1} C_k^⊤ at each sampling instance k. Here, C_k is the rotation matrix corresponding to the roll and pitch estimates at sampling instance k and a zero yaw angle (the yaw angle can be estimated if magnetometer measurements

are available). Under the small-angle approximation, δC_k · f_s, where f_s is the sampling rate, is equal to a skew-symmetric matrix from which estimates of the smartphone's angular velocity can be extracted (see equation (5.8c)). Once these estimates have been extracted, the segmentation can be performed in the same way as when gyroscope measurements are available.

³The smartphone frame was defined to have its three coordinate axes pointing to the right (as seen from a user facing the display), pointing upwards along the display, and in the direction of the display.
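The gyroscope-free angular-rate estimation can be sketched as follows. The Butterworth low-pass filtering is omitted (the input is assumed to be already filtered), the roll/pitch formulas follow one common leveling convention, and the angular velocity is read off from the skew-symmetric part of δC_k rather than via equation (5.8c); the names and conventions are illustrative assumptions.

import numpy as np

def rotation_from_roll_pitch(roll, pitch):
    # Rotation matrix for the given roll and pitch and a zero yaw angle.
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Ry @ Rx

def angular_rate_from_accelerometer(acc_lp, fs):
    """Angular-rate estimates [rad/s] from low-pass-filtered accelerometer data (K, 3)."""
    omega = np.zeros((len(acc_lp) - 1, 3))
    C_prev = None
    for k, a in enumerate(acc_lp):
        roll = np.arctan2(a[1], a[2])                        # leveling, one common convention
        pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        C = rotation_from_roll_pitch(roll, pitch)
        if C_prev is not None:
            dC = C @ C_prev.T                                # delta C_k = C_{k+1} C_k^T
            # small-angle approximation: dC is close to I + [theta]_x; extract theta, scale by fs
            theta = 0.5 * np.array([dC[2, 1] - dC[1, 2],
                                    dC[0, 2] - dC[2, 0],
                                    dC[1, 0] - dC[0, 1]])
            omega[k - 1] = theta * fs
        C_prev = C
    return omega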

8.3 Kernel-based k-means Clustering

The objective of kernel-based k-means clustering is to map the N data points {x_1, ..., x_N} into the k different classes

{Ω_1, ..., Ω_k} ≜ arg min_{Ω_1,...,Ω_k} Σ_{c=1}^{k} Σ_{x_i ∈ Ω_c} ‖h(x_i) − m_c‖².   (8.1)

Here, m_c ≜ Σ_{x_i ∈ Ω_c} h(x_i)/|Ω_c| and |Ω_c| ≜ Σ_{x_i ∈ Ω_c} 1 denote the centroid and the cardinality of the cth cluster, respectively, while h(·) is the basis function vector. Further, ‖·‖ denotes the Euclidean norm, and the kernel function is defined as

k(x_i, x_j) ≜ h(x_i)^⊤ h(x_j).   (8.2)

Typically, one would employ iterative methods to find one or several local minima of the optimization problem (8.1) [397].
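Since only kernel evaluations are needed, the clustering can be run directly on a precomputed kernel matrix: the squared feature-space distance from point i to the centroid of cluster c is k(x_i, x_i) − (2/|Ω_c|) Σ_{j∈Ω_c} k(x_i, x_j) + (1/|Ω_c|²) Σ_{j,l∈Ω_c} k(x_j, x_l). A minimal Lloyd-type sketch built on this identity is given below; the interface and the iteration cap are illustrative assumptions.

import numpy as np

def kernel_kmeans(K, k, init_labels, n_iter=50):
    """Kernel k-means on a precomputed N x N kernel matrix K."""
    labels = np.asarray(init_labels).copy()
    N = K.shape[0]
    for _ in range(n_iter):
        dist = np.full((N, k), np.inf)
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size == 0:
                continue                                  # empty clusters keep infinite distance
            within = K[np.ix_(members, members)].mean()
            dist[:, c] = np.diag(K) - 2.0 * K[:, members].mean(axis=1) + within
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels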

8.3.1 Features

This subsection describes the features that were extracted from the segments. The features were derived from smartphone-based inertial and global navigation satellite system (GNSS) measurements, information on the smartphone's screen state, and accelerometer data from a vehicle-fixed sensor tag. The measurements from the tag were only used to compute the fifth feature, the tag-smartphone correlation.

1) Smartphone-to-vehicle orientation: Methods for estimating the relative orientation of a smartphone and a vehicle have previously been reviewed in Section 4.4.1 and Chapter 6 (see also [398]). In this study, we first computed the median values (over the segment) of the accelerometer measurements in each direction. The roll and pitch angles of the smartphone-to-vehicle orientation were then estimated by assuming that these median values only reflect the accelerometer measurements due to gravity, and that the vehicle's roll and pitch angles could be approximated as zero⁴ [7]. Finally, the yaw angle was estimated by finding the direction in the horizontal plane (perpendicular to the direction of the vector with the median accelerometer measurements) that had the largest variance in the accelerometer measurements, and assuming that this was the vehicle's forward/backward direction [210]. We then separated the vehicle's forward and backward directions by assuming that the accelerometer measurements in the forward direction were positively correlated with the differentiated GNSS measurements of speed. As should be obvious, the computations rely on the assumption that while the smartphone may vibrate or be affected by minor hand movements, the smartphone-to-vehicle orientation is not subject to any significant changes during the course of a given segment (i.e., that the segmentation described in Section 8.2 has been successful).

2) Accelerometer variance: The accelerometer variance was first computed separately along each of the three spatial dimensions. We then computed the sum of the three variance measures and took the logarithm of the resulting value to compress order-of-magnitude variations.

3) Gyroscope variance: The gyroscope variance was computed in the same way as the accelerometer variance.

4) Maximum speed: The maximum speed of the vehicle during the segment as indicated by GNSS measurements.

5) Tag-smartphone correlation: The correlation between the accelerometer measurements from the external accelerometer tag and the smartphone (both rotated to the vehicle frame) was first computed separately along each spatial dimension. The feature was defined as the sum of the three correlations.

6) Segment duration: The logarithm of the temporal length of a segment.

7) Screen state: The percentage of time that the screen was on during the segment.

8) Initial screen state: The percentage of time that the screen was on during the first ten seconds of the segment.

All features except the smartphone-to-vehicle orientation were normalized over all segments by means of linear rescaling, and after the normalization took on values between −1 and 1. If a feature could not be computed for a given segment due to missing data (for example, some vehicles did not have a sensor tag installed), the feature value was set to the median value (as taken over the remaining segments) of the same feature.

⁴The vehicle frame was defined as a forward-right-down frame. The Euler angles describing the smartphone-to-vehicle orientation were defined to be the angles of rotations that, applied to the yaw-pitch-roll axes (in that order), rotated the vehicle frame to the smartphone frame.
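A rough sketch of how the smartphone-to-vehicle orientation feature can be computed for one segment is given below. It folds the median-gravity roll/pitch step into a single down direction, takes the principal horizontal direction of the accelerometer variance as the forward/backward axis, and resolves the sign with differentiated GNSS speed; these simplifications, the sign conventions, and the sampling-rate handling are assumptions for illustration.

import numpy as np

def orientation_feature(acc, gnss_speed, fs_acc=15.0, fs_gnss=1.0):
    """Rotation matrix mapping smartphone coordinates to an (approximate) vehicle frame.

    acc: (K, 3) smartphone accelerometer samples; gnss_speed: (L,) GNSS speed [m/s].
    """
    g_dir = np.median(acc, axis=0)
    down = -g_dir / np.linalg.norm(g_dir)               # assumed sign convention
    P = np.eye(3) - np.outer(down, down)                # projector onto the horizontal plane
    acc_h = (acc - acc.mean(axis=0)) @ P
    _, _, Vt = np.linalg.svd(acc_h, full_matrices=False)
    forward = Vt[0]                                     # largest-variance horizontal direction
    # resolve the forward/backward ambiguity with differentiated GNSS speed
    step = int(round(fs_acc / fs_gnss))
    acc_fwd = (acc_h @ forward)[::step]
    acc_gnss = np.diff(gnss_speed) * fs_gnss
    n = min(acc_fwd.size, acc_gnss.size)
    if np.corrcoef(acc_fwd[:n], acc_gnss[:n])[0, 1] < 0:
        forward = -forward
    right = np.cross(down, forward)
    return np.vstack([forward, right, down])            # rows: vehicle axes in smartphone coords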

8.3.2 Kernel

The clustering was made using the kernel

k(x_i, x_j) = exp( −( ‖x̌_i − x̌_j‖² + (2ν_{i,j}/π)² ) / (2σ²) )   (8.3)

where x̌ is the vector of normalized features (all except the smartphone-to-vehicle orientation) and

σ ∈ (0, ∞)   (8.4)

is the kernel width. Further, ν_{i,j} is the rotation angle corresponding to the rotation matrix C_i C_j^⊤ [399], and C_i is the rotation matrix for the smartphone-to-vehicle orientation associated with data point i. The calibration of the kernel width is described in Section 8.3.4. The factor 2/π was included since (2ν/π)² ∈ [0, 4], i.e., (2ν/π)² will take on values in the same range as the contribution from each individual feature to ‖x̌_i − x̌_j‖². Note the similarity between the employed kernel and the well-known radial basis function (RBF) kernel [400].
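A direct sketch of the kernel (8.3), with the rotation angle ν_{i,j} obtained from the trace of C_i C_j^⊤, could look as follows; the function names are illustrative.

import numpy as np

def rotation_angle(Ci, Cj):
    # Angle of the relative rotation Ci Cj^T, clipped for numerical safety.
    c = 0.5 * (np.trace(Ci @ Cj.T) - 1.0)
    return np.arccos(np.clip(c, -1.0, 1.0))

def kernel(xi, xj, Ci, Cj, sigma):
    # The kernel (8.3): normalized features plus the scaled orientation term.
    nu = rotation_angle(Ci, Cj)
    d2 = np.sum((np.asarray(xi) - np.asarray(xj))**2) + (2.0 * nu / np.pi)**2
    return np.exp(-d2 / (2.0 * sigma**2))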

8.3.3 Initialization

The iterative methods that are used to find local minima of the k-means problem often make use of some initialization based on available information on the problem at hand. We made an initialization based on the features describing the smartphone-to-vehicle orientation and the initial screen state. All in all, fifteen states were constructed. The expected typical orientations and initial screen states of the first fourteen states are specified in Table 8.1. The second column describes the aggregated and abbreviated placement labels that will be used for the experimental study in Section 8.4, and the third column gives more details on the placements associated with the individual states. The expected orientations for most of the states are illustrated in Figure 8.3. The fifteenth state is a dummy state "Unknown" for unexpected orientations. This state will tend to attract segments that are difficult to interpret, thereby preventing such segments from having a distorting influence on the remaining states. Initial trials showed that including a dummy state made the differences between the feature distributions associated with the "Held by hand" label and those of the other labels more pronounced.

We arrived at the fifteen states described in the previous paragraph by manually fine-tuning the set of states and their specification to make the clustering results comply with the behavioral statistics described in Section 8.3.4. States that were included during initial clustering trials but then removed due to having too few associated segments included φ = π, θ = π/4, ψ = 0, with the screen on. This could be interpreted as handheld interaction with the display in landscape mode.

Table 8.1: Initialization for k-means clustering.

State   Placement label    Description of (likely) placement      Initial screen state
1       1) Seat            Lap or driver/passenger seat           off
2       1) Seat            Lap or driver/passenger seat           off
3       1) Seat            Lap or driver/passenger seat           off
4       1) Seat            Lap or driver/passenger seat           off
5       2) Cup holder      Cup holder or car mount                off
6       2) Cup holder      Cup holder                             off
7       2) Cup holder      Cup holder                             off
8       2) Cup holder      Cup holder or car mount                off
9       2) Cup holder      Cup holder                             on
10      3) Pants           Pants pocket                           off
11      4) Console         Dashboard or center console            on
12      5) Held by hand    Held by hand (call, left ear)          on
13      5) Held by hand    Held by hand (call, right ear)         on
14      5) Held by hand    Held by hand (e.g., text messaging)    on

Smartphone orientations for states 1–11 as seen when looking in the vehicle's forward direction; the orientation for state 14 as seen when looking to the left in the vehicle's lateral direction.

Figure 8.3: The smartphone orientations that were used for the initialization. The numbers above the images refer to the states described in Table 8.1. The orientations for states twelve and thirteen are not shown due to their more complicated nature.

Each segment was initialized in the state in Table 8.1 with the nearest orientation and an initial screen state matching that of the segment. In mathematical terms, all trip segments i where the screen was on more than 50 % of the first ten seconds were initialized in the state

c_i = arg min_{c ∈ Ω_on} ν_{c,i}   (8.5)

where ν_{c,i} denotes the rotation angle corresponding to the rotation matrix C_c C_i^⊤, C_c is the rotation matrix for the smartphone-to-vehicle orientation specified for state c in Table 8.1, and Ω_on denotes the set of states for which the initial screen state is specified as "on" in Table 8.1. This should be interpreted as choosing to initialize segment i in the state with the "closest" expected smartphone-to-vehicle orientation, among the states for which the screen is initially expected to be on. The only exceptions were the trip segments where

min_{c ∈ Ω_on} ν_{c,i} > γ   (8.6)

with

γ ∈ (0, π)   (8.7)

being a fixed threshold parameter. In these cases, the smartphone-to-vehicle orientation of segment i was not considered to be close enough to any of the orientations specified for the states c ∈ Ω_on in Table 8.1, and hence, these segments were initialized in the state "Unknown". The same approach was taken for trip segments where the screen was on less than 50 % of the first ten seconds. However, in this case, the minimization was taken over Ω_off, i.e., the states for which the screen state is specified as "off" in Table 8.1. The calibration of the threshold parameter is described in Section 8.3.4.

The choice of using multiple separate states for a single conceptual placement (all of the first four states in Table 8.1 are, for example, associated with the smartphone being placed on the "lap or driver/passenger seat") is natural as we expect the set of smartphone-to-vehicle orientations and initial screen states associated with some given placements to be non-convex⁵. This is clearly understood when studying, e.g., the smartphone-to-vehicle orientations for states twelve, thirteen, and fourteen, all associated with the placement "Held by hand" (see Table 8.1). The use of multiple separate states for a single conceptual placement is further motivated by the fact that some orientations can be expected to be more common than others. For example, we would expect the clusters of states six and eight (both associated with the smartphone being placed in a cup holder) to have different characteristics since the smartphone display will more often be facing the driver or the passengers (state six) than not (state eight). We do not expect the clusters associated with a given placement to be non-convex in any of the features other than the smartphone-to-vehicle orientation and the two screen state features.
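The initialization rule (8.5)–(8.6) can be sketched as follows, with the dummy state "Unknown" represented by one extra index; the interface is an illustrative assumption.

import numpy as np

def initial_state(C_i, screen_on_initially, C_states, screen_states, gamma):
    """Index of the initial state for one segment according to (8.5)-(8.6)."""
    def angle(Cc):
        cos_nu = 0.5 * (np.trace(Cc @ C_i.T) - 1.0)
        return np.arccos(np.clip(cos_nu, -1.0, 1.0))
    # only states whose specified initial screen state matches that of the segment
    candidates = [c for c, on in enumerate(screen_states) if on == screen_on_initially]
    best = min(candidates, key=lambda c: angle(C_states[c]))
    # orientations further than gamma from every candidate go to the dummy state "Unknown"
    return best if angle(C_states[best]) <= gamma else len(C_states)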

8.3.4 Clustering Parameter Selection

It now remains to tune the kernel width σ and the threshold parameter γ from equations (8.4) and (8.7), respectively. Methods for parameter selection in cluster analysis include manual tuning, optimization of internal evaluation measures quantifying some chosen clustering objective, optimization of external evaluation measures using ground truth data, optimization of robustness measures using cross-validation [402], or choosing parameter values based on

⁵Standard k-means clustering can only find clusters that are convex in the basis function vector [401].


Figure 8.4: Block diagram illustrating the clustering framework described in Sections 8.2 and 8.3.

the density of feature values [403]. We compared the obtained clustering distribution with previously published behavioral statistics (unfortunately, data was only available for a subset of the locations we considered). The kernel width and the threshold were chosen as the solution to the least-squares problem

arg min_{σ,γ} Σ_{d=1}^{L} (ŷ_d(σ, γ) − y_d)².   (8.8)

Here, ŷ_d(σ, γ) denotes the percentage of segments that were clustered into one of the states associated with placement label d given the parameters σ and γ. Similarly, y_d denotes the percentage of "reaching instances" where a given set of drivers reached for their smartphone from placement d, as reported in a study published by the NHTSA [404]. The parameter d traverses the first four placement labels in Table 8.1, giving L = 4 with y_1 = 48.9 %, y_2 = 42.0 %, y_3 = 5.7 %, and y_4 = 3.4 %. Since the NHTSA study did not report on the percentage of segments where the smartphone remained held by hand for a substantial period of time, and, in addition, included some less common reaching locations such as "purse", both ŷ_d(σ, γ) and y_d were normalized so that Σ_{d=1}^{L} ŷ_d(σ, γ) = 1 and Σ_{d=1}^{L} y_d = 1. An overview of the clustering framework presented in Sections 8.2 and 8.3 is given in Figure 8.4.
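The parameter selection thus amounts to a grid search over the cost (8.8). In the sketch below, cluster_fn is a placeholder that is assumed to run the segmentation, initialization, and clustering for given σ and γ and to return the normalized shares ŷ_d of the first four placement labels.

import numpy as np

def tune_parameters(cluster_fn, y_ref, sigmas, gammas):
    """Grid search over the kernel width and threshold, cf. (8.8)."""
    best, best_cost = None, np.inf
    for sigma in sigmas:
        for gamma in gammas:
            cost = np.sum((np.asarray(cluster_fn(sigma, gamma)) - y_ref)**2)
            if cost < best_cost:
                best, best_cost = (sigma, gamma), cost
    return best

# Normalized NHTSA reaching percentages for the labels Seat, Cup holder, Pants, Console.
y_ref = np.array([48.9, 42.0, 5.7, 3.4])
y_ref /= y_ref.sum()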


Figure 8.5: The empirical distribution functions of the screen state feature before and after the segmentation.

8.4 Experimental Results

The studied data consists of 1000 trips collected from 194 drivers in South Africa in May 2016 as part of the partnership between Cambridge Mobile Telematics and Discovery Insurance⁶. The mean trip duration and the mean trip length were about 20 minutes and 11 kilometers, respectively. All signals were sampled at 15 [Hz], with the exception of the GNSS data, which was sampled at 1 [Hz]. Accelerometer, GNSS, and screen state measurements were available from all trips, while gyroscope and tag measurements were available for all but 64 and 137 trips, respectively.

8.4.1 Trip Segmentation

The trips were segmented as described in Section 8.2, thereby dividing the 1000 trips into 3218 segments. Figure 8.5 illustrates the segmentation by displaying the empirical distribution function (edf) of the screen state feature, contrasting the distributions across entire trips versus trip segments. As can be seen from Figure 8.5, the percentage of trips or segments where the screen state feature takes on values far from 0 % and 100 % decreases after the segmentation. This is intuitive since a change in the instantaneous screen state (i.e., the screen is turned on or off) in the middle of a trip implies that the placement of the smartphone is likely to have changed over the trip.

⁶One of the outcomes of the partnership has been a contest where drivers earned points for safe driving. Over time, the contest had the effect of reducing the participants' speeding, hard braking, hard cornering, and phone usage while driving [405].


Figure 8.6: Distribution of the "reaching locations" as estimated by the clustering algorithm (ŷ_d(σ, γ)) and as specified in a report from NHTSA (y_d) [404].

In other words, the edf shows that the segmentation "polarizes" phone use: typical usage over a trip segment is mostly near 0 % or 100 %. This polarization suggests that the segmentation successfully divides the trips into segments with distinct smartphone placements.

8.4.2 Clustering Parameter Selection

The parameter selection was performed as described in Section 8.3.4, and the optimal values σ = 5·10⁻³ and γ = 2.1 were found by means of a grid search with step sizes 2·10⁻⁴ and 0.05 in the directions of σ and γ, respectively. The distributions of ŷ_d(σ, γ) and y_d are illustrated in Figure 8.6. Although the distributions agree quite well, one should be careful not to overinterpret the results. For example, note that the NHTSA data and the clustered data were collected in the US and South Africa, respectively. As a result, the underlying distributions may be different, which could mean that we are, to some extent, distorting the clustering results when trying to fit the distribution of the clustered data to that of the NHTSA data. In addition, selection biases may be present in the studied populations (the NHTSA, e.g., only used data from drivers who reported using their cell phone at least once per day while driving), and we should also expect the clustering algorithm to misinterpret some segments. The clustering distribution is detailed in Table 8.2.

Table 8.2: The clustering distribution.

State                        1     2     3     4     5     6     7     8     9    10    11    12    13    14    15
# of trip segments         192   385   357   280   205   306   196   189   114   129   174   170   125   316    80
pct. of trip segments [%]  6.0  12.0  11.1   8.7   6.4   9.5   6.1   5.9   3.5   4.0   5.4   5.3   3.9   9.8   2.5

Placement label              1     2     3     4     5     6
# of trip segments        1214  1010   129   174   611    80
pct. of trip segments [%] 37.7  31.4   4.0   5.4  19.0   2.5

The clustering distribution over the fifteen states described in Table 8.1, and over the aggregated placement labels 1) Seat, 2) Cup holder, 3) Pants, 4) Console, 5) Held by hand, and 6) Unknown.

8.4.3 Feature Comparison

Since there is no ground truth available, the results of the clustering will be analyzed by studying the edfs of the different features that were used for the clustering.


Figure 8.7: The distributions of the features (a) accelerometer variance, (b) gyroscope variance, and (c) maximum speed.

The edfs are computed separately for the aggregated placement labels specified in the second column of Table 8.1. Figures 8.7, 8.8, and 8.9 display the distributions of all features except the smartphone-to-vehicle orientation.


Figure 8.8: The distributions of the features (a) tag-smartphone correlation and (b) segment duration.

The figures show that the most distinct distributions originate from the clusters associated with the smartphone being held by hand. For example, Figures 8.7 (b) and 8.8 (a) illustrate that segments where the smartphone is held by hand generally have higher gyroscope variance and lower tag-smartphone correlation. This is, of course, expected since the smartphone in these segments often will be subject to movements originating from hand motions. Similarly, segments where the smartphone is held by hand tend to be associated with lower speeds (Figure 8.7 (c)) and shorter segment durations (Figure 8.8 (b)). Most likely, this reflects that users tend to interact more with their smartphones in urban areas when they, e.g., briefly need to use navigation apps or call someone who they will meet. Alternatively, the speed difference may be caused by increased phone use when drivers are in slow traffic. Finally, we note that Figure 8.7 (a) indicates that the accelerometer variance is not significantly affected by the smartphone being held by hand. Presumably, this is because most of the variance stems from vehicle dynamics rather than from hand movements.


Figure 8.9: The distributions of the features (a) screen state and (b) initial screen state.

As can be seen from Figure 8.9 (b), the clustering is, to a large extent, driven by the initial screen state. By comparing Figures 8.9 (a) and (b), it can also be seen that there are many segments where the screen is on only during the initial part of the segment. This could happen when e.g., the smartphone goes to sleep mode due to inactivity (console), or when the screen is shut off during a call (held by hand). Moreover, Figure 8.9 (b) shows that the initial screen state was on for about 10 % of the segments mapped to the label “Cup holder”. Obviously, these segments primarily derive from state 9.

8.4.4 Classification

The presented clustering framework can easily be modified to enable classification of new segments. The simplest way to do this is to construct a k-nearest neighbor (k-NN) classifier based on the presented kernel and the collected data [400]. To illustrate this, we first extracted the feature values (other than the roll and pitch angles) from a randomly chosen data point that was mapped to state fourteen (held by hand; e.g., text messaging) during the clustering. A classifier was then constructed by clustering all data except the data point that was used to set the feature values, and then assuming that the mappings from the clustering were correct. Figure 8.10 (a) shows the resulting classification margin of a k-NN classifier with k = 10 as a function of the roll and pitch angles describing the smartphone-to-vehicle orientation. Here, the classification margin is defined as the score (the number of votes among the k-NN) of the true state minus the largest score among the false states. The classification margin can be seen to reach its maximum value in an area that is roughly centered around φ = 5π/4 and θ = 0, i.e., the roll and pitch angles that were specified for state fourteen in Table 8.1. The exact values of the classification margin will obviously depend heavily on features other than the roll and pitch angles.

We would expect that the smartphone display is facing upwards in the vehicle and has its up direction pointing upwards in the vehicle when a user interacts with his smartphone with the screen orientation set to portrait. To demonstrate this, Figure 8.10 (b) displays the smartphone orientations at different points in the roll-pitch plane when the yaw angle is equal to zero. We note that the smartphone display is facing upwards in the vehicle when [C_s^b e_3]_3 = cos(φ) cos(θ) < 0, or equivalently, when π/2 < φ < 3π/2 and −π/2 < θ < π/2. Similarly, the smartphone display has its up direction pointing upwards in the vehicle when [C_s^b e_2]_3 = sin(φ) cos(θ) < 0, or equivalently, when π < φ < 2π and −π/2 < θ < π/2. Here, we have used e_i and [e]_i to denote the ith three-dimensional unit vector and the ith element of the vector e, respectively. As can be seen from Figure 8.10, the studied data point is indeed most likely to be classified into state fourteen in the interval π < φ < 3π/2, i.e., when the smartphone display is facing upwards in the vehicle and has its up direction pointing upwards in the vehicle.
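A sketch of the kernel k-NN classification margin described above is given below. It assumes that the kernel values between the new data point and all clustered data points have already been computed with (8.3), and that the cluster assignments are treated as ground truth; the interface is illustrative.

import numpy as np

def knn_margin(k_values, labels, true_state, k=10):
    """Votes for the true state minus the largest vote count among the other states."""
    labels = np.asarray(labels)
    neighbors = np.argsort(k_values)[-k:]          # largest kernel value = nearest neighbor
    votes = np.bincount(labels[neighbors], minlength=labels.max() + 1)
    margin_true = votes[true_state]
    votes[true_state] = -1
    return margin_true - votes.max()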

8.4.5 Smartphone-based Detection of Harsh Accelerations

As outlined in Chapter 4, accelerometer-based detection of harsh accelerations has been the focus of a wide range of academic studies on smartphone-based driver safety classification. It is also considered to be an important element in many smartphone-based telematics programs. However, the effect that the smartphone's placement in the vehicle has on the detection of harsh accelerations still remains to be investigated. To remedy this, accelerometer data from both the tag and the smartphone (from each segment where tag data was available) were processed as follows. To begin with, the accelerometer measurements were rotated to the vehicle frame as described in Section 8.3.1.

[Figure: panel (a) shows the classification margin Margin(x) over the roll-pitch plane, with θ between −π/2 and π/2 [rad] and φ between 0 and 2π [rad]; panel (b) shows the corresponding smartphone orientations.]

Figure 8.10: (a) Example of how the classification margin can depend on the roll and pitch angles of the smartphone-to-vehicle orientation. The true state was state fourteen in Table 8.1. The point φ = 5π/4 and θ = 0, associated with state fourteen during the initialization, is marked with a black cross. (b) Smartphone orientations (up to yaw rotations) as seen from a passenger or driver looking at the smartphone in the vehicle's forward direction.

Then, all sampling instances where 1) the absolute value of the accelerometer measurements in the vehicle's forward direction exceeded the threshold $a_{\mathrm{thres}} = 2\ [\mathrm{m/s^2}]$ (similar thresholds have previously been considered in [302, 305, 314]); and 2) the previously detected acceleration event from the same sensor occurred more than $\delta t \stackrel{\Delta}{=} 5\ [\mathrm{s}]$ ago, were noted as detected acceleration events. The second condition was checked by traversing all samples from the first to the last. This produced sets of time points for detected acceleration events $\{t_n^{\mathrm{tag},i}\}$ and $\{t_n^{\mathrm{smph},i}\}$ for each segment $i$. Next, treating the events detected using the tag data as the ground truth, the number of false alarms associated with a given placement label $d$ was defined as the sum of events $t_n^{\mathrm{smph},i}$ for which

$$\min_m |t_n^{\mathrm{smph},i} - t_m^{\mathrm{tag},i}| > \delta t \quad (8.9)$$
where the minimization is taken over all events $\{t_m^{\mathrm{tag},i}\}$ detected using the tag data from segment $i$, and where the summation is taken over all events $\{t_n^{\mathrm{smph},i}\}$ from segments $i$ that were mapped to placement label $d$ during the clustering. Expressed in words, each detected event $t_n^{\mathrm{smph},i}$ for which no event $t_m^{\mathrm{tag},i}$ that satisfies $|t_n^{\mathrm{smph},i} - t_m^{\mathrm{tag},i}| \leq \delta t$ exists, is counted as a false alarm associated with the placement label of segment $i$. Similarly, the number of missed detections and true positives associated with placement label $d$ were defined as the sum of events $t_n^{\mathrm{tag},i}$ for which

$$\min_m |t_m^{\mathrm{smph},i} - t_n^{\mathrm{tag},i}| > \delta t \quad (8.10)$$
and
$$\min_m |t_m^{\mathrm{smph},i} - t_n^{\mathrm{tag},i}| \leq \delta t, \quad (8.11)$$
respectively, with the same approach to the minimization and summation as for the false alarms. The sensitivity (recall) and precision for a given label $d$ can now be defined as $\mathrm{Sensitivity} \stackrel{\Delta}{=} \mathrm{TP}/(\mathrm{TP} + \mathrm{MD})$ and $\mathrm{Precision} \stackrel{\Delta}{=} \mathrm{TP}/(\mathrm{TP} + \mathrm{FA})$, where MD, FA, and TP denote the associated number of missed detections, false alarms, and true positives, respectively.

As suggested by Figure 8.8 (a) (and as should be expected from intuition), Table 8.3 shows that segments where the smartphone is believed to be held by hand have a lower detection accuracy than segments of most of the other labels. The only exception is the dummy state "Unknown". Several factors may have influenced the results for this state. For example, we note that when removing the four segments assigned to "unknown" with the largest number of false alarms and missed detections from the computations, the sensitivity and precision take on the values 83.2 % and 47.2 %, respectively. In other words, only four segments need to be removed from the computations to make the detection accuracy of the dummy state higher than that of the handheld state.

Table 8.3: Accuracy of detection of harsh acceleration.

Placement label d      Sensitivity [%]    Precision [%]
Seat                   80.6               59.2
Cup holder             80.8               51.0
Pants                  73.8               66.5
Console                77.3               47.3
Held by hand           73.8               46.3
Unknown                70.1               28.6

Since the total number of segments assigned to "unknown" is 80 (see Table 8.2), this corresponds to 5 % of the segments in this cluster. Given the size of the studied data set, we cannot exclude the possibility that this is caused by inadequate trip segmentations or other problems related to anomalous data. Since the dummy state is designed to collect segments with unexpected smartphone-to-vehicle orientations, segments that are subject to e.g., inadequate trip segmentation (which can lead to deficient IMU alignments) are likely to be mapped to this state.

Last, we emphasize that the results in Table 8.3 only illustrate one way in which smartphone interaction interferes with the detection of harsh accelerations. An additional source of interference is related to the removal of data during the trip segmentation described in Section 8.2: As illustrated in Figure 8.8 (b), the typical segment duration is much shorter when the smartphone is held by hand. As a consequence, drivers who frequently hold their smartphone in their hand while driving will generally have more periods where the high dynamics of their hand movements make the inertial measurements from their smartphone useless for purposes of analyzing their driving behavior.
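As an illustration of the detection and matching procedure above, the following sketch (hypothetical code, not the implementation used in the study) detects events from a forward-acceleration signal using the threshold and minimum-separation rules, and then counts true positives, missed detections, and false alarms by matching smartphone events to tag events within δt.

import numpy as np

def detect_events(t, a_forward, a_thres=2.0, dt_min=5.0):
    """Time points where |a_forward| exceeds a_thres, separated by more than dt_min."""
    events = []
    for ti, ai in zip(t, a_forward):
        if abs(ai) > a_thres and (not events or ti - events[-1] > dt_min):
            events.append(ti)
    return np.array(events)

def sensitivity_precision(t_tag, t_smph, dt_min=5.0):
    """Sensitivity and precision with the tag events treated as ground truth."""
    def matched(t_a, t_b):
        # True for each event in t_a that has an event in t_b within dt_min.
        if len(t_a) == 0 or len(t_b) == 0:
            return np.zeros(len(t_a), dtype=bool)
        return np.abs(t_a[:, None] - t_b[None, :]).min(axis=1) <= dt_min
    tp = np.sum(matched(t_tag, t_smph))                 # matched tag events
    md = len(t_tag) - tp                                # missed detections
    fa = len(t_smph) - np.sum(matched(t_smph, t_tag))   # unmatched smartphone events
    return tp / (tp + md), tp / (tp + fa)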

8.5 Discussion and Concluding Remarks

Smartphone-based collection of driving data is continuing to gain in popularity, and has found widespread use in applications such as insurance telematics, driver assistance, and traffic state estimation. Many of the distinguishing features of data collected using smartphone-embedded sensors derive from the mobility of the smartphone. On the one hand, this mobility is one of the big impediments to utilizing smartphone-based data in the same way as data from vehicle-fixed sensors. On the other hand, it also makes it possible to extract information on e.g., driver distraction, directly from the smartphone sensors. In this chapter, we examined the problem of inferring the placement of a smartphone within a vehicle.

The considered trips were first segmented at time points where the smartphone was expected to have been picked up by a user. Kernel-based k-means clustering was then used to group trip segments into fifteen different clusters, and the results indicated good agreement with behavioral statistics published by the NHTSA. Given that the NHTSA data was collected by manual labeling of video recordings, the clustering method presented here can be seen as an alternative solution with more favorable scalability properties. The most distinctive feature distributions were associated with clusters of segments where the smartphone was held by hand. These segments generally displayed high gyroscope variances, low maximum speeds, low tag-smartphone correlations, and short segment durations. Moreover, we evaluated the extent to which the smartphone being held by hand impairs the ability to detect harsh accelerations, and illustrated how the clustering framework can be reformulated to enable classification.

Several extensions can be made to attempt to increase the accuracy of the presented clustering method. Since a driver's behavior can be expected to be heavily correlated with his or her position, one natural idea would be to include one or several features based on GNSS position measurements (this was not done in this study due to privacy issues). Similarly, by using data collected over a long time period it may be possible to profile the behavior of each driver and thereby extract individual patterns that relate the smartphone's placement to driving characteristics, the time of day or week, and the vehicle's position. Although one could try to estimate the position of the smartphone with respect to the vehicle and include it as a feature, it is the authors' belief that only smartphone-to-vehicle positioning methods that rely on pre-installed vehicle infrastructure would be accurate enough to provide any useful information in this context [101]. Obviously, it would also be possible to incorporate labeled trips into the presented framework.

To summarize, we have demonstrated the feasibility of inferring the placement of smartphones in vehicles using information from built-in smartphone sensors. This capability could e.g., be used to assess the utility of collected inertial measurements for the detection of harsh accelerations, for accident reconstructions, or to profile drivers in terms of distracted driving.

Chapter 9

Fusion of OBD and GNSS Measurements of Speed

Abstract

There are two primary sources of sensor measurements for driver behavior profiling within insurance telematics and fleet management. The first is the on-board diagnostics system, found in most modern cars. The second is the global navigation satellite system, whose associated receivers commonly are embedded into smartphones or off-the-shelf telematics devices. In this chapter, we present maximum likelihood and maximum a posteriori estimators for the problem of fusing speed measurements from these two sources to jointly estimate a vehicle's speed and the scale factor of the wheel speed sensors. In addition, we analyze the performance of the estimators by use of the Cramér-Rao bound, and discuss the estimation of model parameters describing measurement errors and vehicle dynamics. Last, simulations and real-world data are used to show that the proposed estimators yield a substantial performance gain compared to when employing only one of the two measurement sources.

9.1 Introduction

Driver behavior profiling has received considerable attention in the literature, and is at the heart of many of today's insurance telematics and fleet management programs [66].

[Figure: OBD and GNSS speed measurements and the true speed, s between 90 and 96 [km/h], plotted against t from 0 to 6 [sec].]

Figure 9.1: Illustration of OBD and GNSS measurements of speed s as dependent on time t. Note that all OBD measurements are quantized as multiples of 1 [km/h].

These programs have continuously gained in popularity over the last years, with the smart transportation market predicted to be worth around $220 billion by 2021 [406]. Although there are many examples of trials or research studies using expensive, complex, or logistically demanding sensor setups [117], most of the major industry players (such as Progressive Insurance, Allstate, Metromile, etc.) still primarily rely on speed measurements from the on-board diagnostics (OBD) system, and position and speed measurements from low-cost global navigation satellite system (GNSS) receivers [137, 178]. These implementations tend to be scalable, easy to deploy, and easy to understand for the users. An illustration of OBD and GNSS measurements of speed is provided in Figure 9.1.

Some telematics providers have chosen to extract driving information also from inertial sensors [407]. On the one hand, incorporating inertial sensors into the sensor setup enables estimation of the vehicle's orientation and direct estimation (not requiring differentiation) of the vehicle's acceleration (which can be used to e.g., detect harsh braking events). In addition, inertial sensors tend to increase the estimation rate since commonly used inertial sensors can have sampling rates in the order of 100 [Hz] [173], while GNSS receivers used in telematics applications typically have an update rate of only 1 [Hz]. On the other hand, making use of inertial sensors can be challenging for several reasons. When integrating inertial measurements to estimate for example the vehicle's orientation, velocity, or position, errors will accumulate due to biases and random noise [408]. Moreover, in insurance telematics programs where inertial measurements are collected from smartphone-embedded sensors [67], problems often arise due to the fact that the smartphone is mobile with respect to the vehicle [48, 100]. The high sampling rate also tends to put strain on computational resources. Due to the aforementioned difficulties and the fact that most of the industry still primarily relies on OBD and GNSS measurements, this study will exclusively focus on the scenario where inertial sensors are absent. Furthermore, we note that fusion of measurements from inertial sensors and GNSS receivers is a mature technology that has been well-studied in the literature (see Chapter 5).

In this chapter, we employ OBD and GNSS measurements of speed to construct estimators of a vehicle's speed and of the scale factor of the wheel speed sensors. Specifically, we demonstrate how to capitalize on the complementary characteristics of the OBD (bounded error when the scale factor is known) and GNSS (unbiased errors) measurements. Compared to when using either of these two sensors individually, the proposed sensor fusion method offers both a higher estimation rate and an enhanced estimation accuracy. In addition, speed estimates based only on OBD measurements will become more accurate once an accurate estimate of the scale factor is available. Hence, this implies an increased resilience to GNSS outages or other situations where GNSS measurements are not available (such as when one relies on GNSS measurements from a smartphone that during some drives may be discharged or not located in the car).
The proposed method is appropriate for several sensor setups that are commonly used in current insurance telematics and fleet management programs. These setups include OBD dongles with an embedded GNSS tracker, and OBD dongles connected to GNSS-equipped smartphones. The speed estimates resulting from the proposed sensor fusion could e.g., be differentiated to construct acceleration estimates for the detection of harsh braking or acceleration, be used to detect speeding, or be used as input in traffic flow models. Since the scale factor is slowly time-varying, we also describe how to test for changes in the scale factor. This could be used to assess overall tire pressure and detect flat tires. By maintaining recommended tire inflation pressure, it is possible to not only improve fuel economy and handling, but also to reduce tire wear, noise emissions, braking distances, and rolling resistance [409]. As opposed to standard indirect tire-pressure monitoring systems (TPMSs) [410], the tire-pressure monitor presented here can be used to detect underinflation also when the four tires are equally underinflated.

9.2 Estimation

This section presents maximum likelihood (ML) and maximum a posteriori (MAP) estimators for the problem of using OBD and GNSS measurements of speed to jointly estimate a vehicle’s speed and the scale factor of the OBD wheel speed sensors. We begin by considering the case when the OBD and GNSS samples can be assumed to be time synchronized, and then examine the same problem but with asynchronous samples. While the latter model is more realistic, the former enables both a simpler estimation algorithm and more straightforward Cramér-Rao studies in Section 9.3.

9.2.1 Synchronous Samples

The measurements obtained at sampling instance k are modeled according to¹

$$s_k^o = c\,s_k + q_k, \quad (9.1a)$$
$$s_k^g = s_k + \epsilon_k. \quad (9.1b)$$
Here, $s^o$ and $s^g$ denote measurements of speed obtained from the OBD system and GNSS receiver, respectively. Further, $s$ is the speed of the vehicle, $c$ is an unknown positive scale factor arising due to the uncertainty of the wheel radius² [7], $q$ is the quantization error of the OBD system, and $\epsilon$ is the measurement error of the GNSS receiver. The quantization is made so that $s_k^o = n \cdot \delta s$ for some nonnegative integer $n$ and some positive quantization interval $\delta s$, with $q_k \in [-\delta s/2, \delta s/2]$. In practice, we have $n = 0, 1, \ldots, 255$ and $\delta s = 1$ [km/h] [413]. Although there are other errors present in OBD measurements of speed [414] (such as wheel slips, discussed in Section 9.2.5), the scale factor and quantization errors are typically dominant. The GNSS measurement error $\epsilon_k$ is assumed to be a white Gaussian noise process with variance $\sigma_g^2$ (Gaussian errors are a common assumption for GNSS position measurements along a single spatial dimension [415]).

Let us assume that we have made the measurements $\mathbf{s}^o \stackrel{\Delta}{=} [s_1^o \ldots s_N^o]^{\top}$ and $\mathbf{s}^g \stackrel{\Delta}{=} [s_1^g \ldots s_N^g]^{\top}$ from the OBD system and the GNSS receiver, respectively.

¹It should be noted that the presented model does not explicitly consider GNSS position measurements. Generally, speed estimates derived from GNSS position measurements are not sufficiently accurate to be used to improve upon Doppler-based GNSS measurements of speed [411]. As a result, it is sensible to exclude GNSS position measurements from model (9.1).

²Although it is typically possible to extract speed measurements from each of the wheels individually by using the controller area network (CAN) bus [412], most off-the-shelf telematics devices will extract a scalar speed measurement from the standardized OBD-II parameter IDs (PIDs). Hence, the scale factor $c$ is the "average" scale factor among the wheels that were used to construct the measurement $s^o$.

The ML estimates of the speeds $\mathbf{s} \stackrel{\Delta}{=} [s_1 \ldots s_N]^{\top}$ and the scale factor $c$ are then given by
$$\{\hat{\mathbf{s}}_{\mathrm{ml}}, \hat{c}_{\mathrm{ml}}\} = \arg\max_{\mathbf{s},\,c}\, p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{s}, c) \quad (9.2)$$
where $p(\cdot|\cdot)$ denotes a conditional probability density function. The likelihood function can be factorized as
$$p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{s}, c) = p(\mathbf{s}^o|\mathbf{s}, c)\,p(\mathbf{s}^g|\mathbf{s}) = \prod_{k=1}^{N} p(s_k^o|s_k, c)\,p(s_k^g|s_k). \quad (9.3)$$
Now, noting that $p(s_k^g|s_k)$ is a Gaussian with mean $s_k$ and variance $\sigma_g^2$, while $p(s_k^o|s_k, c) \propto 1\{|c\,s_k - s_k^o| \leq \delta s/2\}$³, where $1\{\cdot\}$ denotes the indicator function, it follows that the ML estimates in (9.2) can be found as the solution to the constrained nonlinear optimization problem

$$\begin{aligned} \underset{\mathbf{s},\,c}{\text{minimize}} \quad & \sum_{k=1}^{N} (s_k - s_k^g)^2 \\ \text{subject to} \quad & |c\,s_k - s_k^o| \leq \delta s/2 \quad \text{for } k = 1, \ldots, N. \end{aligned} \quad (9.4)$$
Although (9.4) is nonconvex, it can be parameterized as the equivalent one-dimensional unconstrained convex optimization problem

$$\underset{d}{\text{minimize}} \quad \sum_{k=1}^{N} (f_k(d) - s_k^g)^2 \quad (9.5)$$
where
$$f_k(d) \stackrel{\Delta}{=} \begin{cases} s_k^g, & |s_k^g/d - s_k^o| \leq \delta s/2 \\ (s_k^o + \delta s/2)\,d, & s_k^g/d - s_k^o > \delta s/2 \\ (s_k^o - \delta s/2)\,d, & s_k^g/d - s_k^o < -\delta s/2. \end{cases} \quad (9.6)$$
The problem (9.5) is readily solved using e.g., Newton's method, and the optimal parameters to (9.4) can then be found as $\hat{s}_{\mathrm{ml},k} = f_k(\hat{d}_{\mathrm{ml}})$ and $\hat{c}_{\mathrm{ml}} = 1/\hat{d}_{\mathrm{ml}}$, where $\hat{d}_{\mathrm{ml}}$ is the optimal solution to (9.5). If there is no $d$ such that $|s_k^g/d - s_k^o| \leq \delta s/2$ for $k = 1, \ldots, N$, the objective function in (9.5) is strictly convex and the ML estimates are unique. The solution to the optimization problem described by (9.5) and (9.6) is visualized in Figure 9.2. It is easy to verify that the equivalent optimization problem parameterized with the optimization variable $c = 1/d$ is not guaranteed to be convex.

³Although assuming e.g., $q_k \in (-\delta s/2, \delta s/2]$ and $p(s_k^o|s_k, c) \propto 1\{-\delta s/2 \leq c\,s_k - s_k^o < \delta s/2\}$ would make the quantization function deterministic and be somewhat more stringent, we choose, for notational reasons, to include both boundaries in the permissible parameter set. For all practical purposes, this will not alter the considered estimation problem. Further, for notational simplicity, we disregard the case when $s_k^o = 0$, which gives $p(s_k^o|s_k, c) \propto 1\{0 \leq c\,s_k - s_k^o \leq \delta s/2\}$.
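The sketch below illustrates, under stated assumptions, how (9.5)-(9.6) could be solved numerically. It is not the thesis implementation: the function name ml_estimate, the bracket (0.5, 2.0) on d = 1/c, and the use of SciPy's bounded scalar minimizer in place of Newton's method are all choices made for the example.

import numpy as np
from scipy.optimize import minimize_scalar

def ml_estimate(s_obd, s_gnss, delta_s=1.0):
    """ML estimates (speeds, scale factor) from synchronous OBD/GNSS speeds."""
    s_obd = np.asarray(s_obd, dtype=float)
    s_gnss = np.asarray(s_gnss, dtype=float)

    def f(d):
        # f_k(d) in (9.6): the admissible speed closest to s_k^g given that
        # |c*s_k - s_k^o| <= delta_s/2 with c = 1/d.
        lower = (s_obd - delta_s / 2.0) * d
        upper = (s_obd + delta_s / 2.0) * d
        return np.clip(s_gnss, lower, upper)

    def cost(d):
        # Objective of the one-dimensional convex problem (9.5).
        return np.sum((f(d) - s_gnss) ** 2)

    d_ml = minimize_scalar(cost, bounds=(0.5, 2.0), method="bounded").x
    return f(d_ml), 1.0 / d_ml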

[Figure: the measurements $s_k^o$ and $s_k^g$, the estimates $\hat{s}_{\mathrm{ml},k}$, and the bounds $(s_k^o + \delta s/2)/\hat{c}_{\mathrm{ml}}$ and $(s_k^o - \delta s/2)/\hat{c}_{\mathrm{ml}}$, plotted against the sampling instances $k = 1, 2, \ldots, N$.]

Figure 9.2: Illustration of the relation between measurements and ML estimates under the assumption of synchronous measurements.

9.2.2 Asynchronous Samples

In general, the OBD and GNSS measurements cannot be assumed to be synchronous. Taking this into account, the measurement model in (9.1) is modified according to

$$s_k^o = c\,s_k + q_k, \quad k \in \mathcal{O}, \quad (9.7a)$$
$$s_k^g = s_k + \epsilon_k, \quad k \in \mathcal{G}. \quad (9.7b)$$
Here, $\mathcal{O}$ and $\mathcal{G}$ denote the sampling instances where there are OBD and GNSS measurements, respectively. It is assumed that $\mathcal{O} \cup \mathcal{G} = \{1, \ldots, N\}$. Further, the measurement errors are assumed to have the same distributions as in (9.1).

Now, let us follow the approach of the previous subsection to find ML estimates of $\mathbf{s} \stackrel{\Delta}{=} [s_1 \ldots s_N]^{\top}$ and $c$ from (9.7). The ML estimates of $\{s_k : k \in \mathcal{O} \cap \mathcal{G}\}$ and $c$ are obtained by applying the method presented for synchronous samples. Obviously, the resulting optimization problem need only consider measurements made at sampling instances in $\mathcal{O} \cap \mathcal{G}$. If $\mathcal{O} \cap \mathcal{G} = \emptyset$, it is not possible to estimate $c$. Since the ML estimate of $s_k$ for $k \in \mathcal{O} \cap \bar{\mathcal{G}}$ (i.e., at sampling instances where only OBD measurements are available) is obtained as $\hat{s}_{\mathrm{ml},k} = \arg\max_{s_k} p(s_k^o|s_k, \hat{c}_{\mathrm{ml}}) = \arg\max_{s_k} 1\{|\hat{c}_{\mathrm{ml}}\,s_k - s_k^o| \leq \delta s/2\}$, there is no ML estimate of $s_k$ for $k \in \mathcal{O} \cap \bar{\mathcal{G}}$ unless there is an ML estimate of $c$. Moreover, even if there exists an ML estimate of $c$, there is no unique

ML estimate of $s_k$ for $k \in \mathcal{O} \cap \bar{\mathcal{G}}$. Here, $\bar{\mathcal{A}}$ denotes the complement of a set $\mathcal{A}$. Further, we note that $\hat{s}_{\mathrm{ml},k} = \arg\max_{s_k} p(s_k^g|s_k) = s_k^g$ for any $k \in \bar{\mathcal{O}} \cap \mathcal{G}$.

To summarize, by relaxing the assumption of synchronous measurements, the following issues arise: 1) if there are no sampling instances with concurrent OBD and GNSS measurements, neither the scale factor nor the speed at sampling instances where only OBD measurements are available can be estimated by means of ML; 2) there is no unique ML estimate of the speed at sampling instances where only OBD measurements are available; and 3) at sampling instances where only GNSS measurements are available, the ML estimates are no better than the GNSS measurements. One way to resolve these issues is to introduce a dynamic model for the vehicle speeds. Here, we use the model

$$s_{k+1} = s_k + w_k \quad (9.8)$$

where $w_k$ is assumed to be a white Gaussian noise process with variance $\Delta t_k\, \sigma_s^2$, and $\Delta t_k$ denotes the sampling interval between sampling instances $k$ and $k+1$. By incorporating this dynamic model as a prior $p(\mathbf{s})$ on the vehicle speeds, the MAP estimates of $\mathbf{s}$ and $c$ are obtained as

$$\{\hat{\mathbf{s}}_{\mathrm{map}}, \hat{c}_{\mathrm{map}}\} = \arg\max_{\mathbf{s},\,c}\, p(\mathbf{s}, c\,|\,\mathbf{s}^o, \mathbf{s}^g) \quad (9.9)$$

where $\mathbf{s}^o \stackrel{\Delta}{=} \{s_k^o\}_{k\in\mathcal{O}}$ and $\mathbf{s}^g \stackrel{\Delta}{=} \{s_k^g\}_{k\in\mathcal{G}}$. The posterior can be factorized as
$$\begin{aligned} p(\mathbf{s}, c\,|\,\mathbf{s}^o, \mathbf{s}^g) &\propto p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{s}, c)\,p(\mathbf{s}, c) \\ &= p(\mathbf{s}^o|\mathbf{s}, c)\,p(\mathbf{s}^g|\mathbf{s})\,p(\mathbf{s}) \\ &= \prod_{k\in\mathcal{O}} p(s_k^o|s_k, c)\, \prod_{k\in\mathcal{G}} p(s_k^g|s_k)\, \prod_{k=1}^{N-1} p(s_{k+1}|s_k) \end{aligned} \quad (9.10)$$
where we have used the uninformative priors $p(c) \propto 1$ and $p(s_1) \propto 1$ on the scale factor and the initial speed, respectively. It now follows that the MAP estimates can be found as the solution to the constrained nonlinear optimization problem

$$\begin{aligned} \underset{\mathbf{s},\,c}{\text{minimize}} \quad & \frac{1}{\sigma_g^2}\sum_{k\in\mathcal{G}} (s_k - s_k^g)^2 + \frac{1}{\sigma_s^2}\sum_{k=1}^{N-1} \frac{1}{\Delta t_k}(s_{k+1} - s_k)^2 \\ \text{subject to} \quad & |c\,s_k - s_k^o| \leq \delta s/2 \quad \text{for } k \in \mathcal{O}, \end{aligned} \quad (9.11)$$
which can be solved using e.g., interior-point methods [416].
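As a hedged illustration of how (9.11) could be set up numerically, the sketch below stacks the speeds and the scale factor into one decision vector and hands the quantization bounds to a general-purpose constrained solver. The function name map_estimate, the index arguments idx_obd and idx_gnss (0-based versions of the sets O and G), the interpolation-based initialization, and the use of SciPy's SLSQP solver instead of an interior-point method are all assumptions made for the example.

import numpy as np
from scipy.optimize import minimize

def map_estimate(s_obd, idx_obd, s_gnss, idx_gnss, dt, sigma_g, sigma_s,
                 delta_s=1.0):
    """MAP estimates (speeds, scale factor) for asynchronous OBD/GNSS speeds."""
    s_obd = np.asarray(s_obd, float)
    s_gnss = np.asarray(s_gnss, float)
    idx_obd = np.asarray(idx_obd, int)
    idx_gnss = np.asarray(idx_gnss, int)
    dt = np.asarray(dt, float)                 # sampling intervals, length N-1
    N = len(dt) + 1

    # Initialize the speeds by interpolating the GNSS measurements; c = 1.
    x0 = np.append(np.interp(np.arange(N), idx_gnss, s_gnss), 1.0)

    def cost(x):
        # Negative log-posterior of (9.11), up to constants.
        s = x[:N]
        j_gnss = np.sum((s[idx_gnss] - s_gnss) ** 2) / sigma_g ** 2
        j_prior = np.sum((s[1:] - s[:-1]) ** 2 / dt) / sigma_s ** 2
        return j_gnss + j_prior

    def quantization(x):
        # Nonnegative when |c*s_k - s_k^o| <= delta_s/2 for all k in O.
        s, c = x[:N], x[N]
        return delta_s / 2.0 - np.abs(c * s[idx_obd] - s_obd)

    res = minimize(cost, x0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": quantization}])
    return res.x[:N], res.x[N]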

9.2.3 Estimation of Model Parameters

To solve the optimization problem (9.11), we need the two model parameters $\sigma_g^2$ and $\sigma_s^2$. While (9.11) can be seen to only depend on the ratio $\sigma_g^2/\sigma_s^2$, identifying the parameters $\sigma_g^2$ and $\sigma_s^2$ will be easier if we estimate the two parameters separately. Hence, when $\sigma_g^2$ and $\sigma_s^2$ are considered to be unknown, we will make use of the expectation-maximization (EM) algorithm with $\theta \stackrel{\Delta}{=} [\sigma_g^2\ \sigma_s^2]^{\top}$ as the parameter vector to be estimated, $\{\mathbf{s}, c\}$ as the missing data, and $\{\mathbf{s}^g, \mathbf{s}^o\}$ as the observed data. Consequently, $\theta$ is iteratively estimated according to

$$\hat{\sigma}_{g,i+1}^2 = \frac{1}{|\mathcal{G}|}\sum_{k\in\mathcal{G}} \mathbb{E}_{p_{\theta^{(i)}}(s_k|\mathbf{s}^o,\mathbf{s}^g)}\big[(s_k - s_k^g)^2\big], \quad (9.12a)$$
$$\hat{\sigma}_{s,i+1}^2 = \frac{1}{N-1}\sum_{k=1}^{N-1} \frac{1}{\Delta t_k}\,\mathbb{E}_{p_{\theta^{(i)}}(s_k,s_{k+1}|\mathbf{s}^o,\mathbf{s}^g)}\big[(s_{k+1} - s_k)^2\big], \quad (9.12b)$$

where $\theta^{(i)} \stackrel{\Delta}{=} [\hat{\sigma}_{g,i}^2\ \hat{\sigma}_{s,i}^2]^{\top}$ and $\mathbb{E}_p[\cdot]$ is the expectation with respect to the probability density function $p$. A derivation of (9.12) is given in Appendix 9.A. Since we do not know $p_{\theta^{(i)}}(s_k|\mathbf{s}^o, \mathbf{s}^g)$ or $p_{\theta^{(i)}}(s_k, s_{k+1}|\mathbf{s}^o, \mathbf{s}^g)$, we will use the standard approximation [39]

$$\mathbb{E}_{p_{\theta^{(i)}}(s_k|\mathbf{s}^o,\mathbf{s}^g)}\big[(s_k - s_k^g)^2\big] \approx (\hat{s}_{\mathrm{map},k} - s_k^g)^2, \quad (9.13a)$$
$$\mathbb{E}_{p_{\theta^{(i)}}(s_k,s_{k+1}|\mathbf{s}^o,\mathbf{s}^g)}\big[(s_{k+1} - s_k)^2\big] \approx (\hat{s}_{\mathrm{map},k+1} - \hat{s}_{\mathrm{map},k})^2, \quad (9.13b)$$
where $\hat{s}_{\mathrm{map},k}$ is the MAP estimate of $s_k$ given $\theta^{(i)}$.
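A minimal sketch of these EM recursions is given below, assuming a callable map_estimator(sigma_g, sigma_s) (for instance wrapping the MAP sketch above) that returns the MAP speed estimates for the current parameter values; the function name, argument names, and default starting values are all hypothetical.

import numpy as np

def em_parameters(map_estimator, s_gnss, idx_gnss, dt,
                  sigma_g0=0.5, sigma_s0=1.0, max_iter=10, tol=0.01):
    """Iterate (9.12) with the approximation (9.13); returns (sigma_g, sigma_s)."""
    s_gnss, dt = np.asarray(s_gnss, float), np.asarray(dt, float)
    theta = np.array([sigma_g0 ** 2, sigma_s0 ** 2])
    for _ in range(max_iter):
        s_map, _ = map_estimator(np.sqrt(theta[0]), np.sqrt(theta[1]))
        var_g = np.mean((s_map[idx_gnss] - s_gnss) ** 2)          # (9.12a)/(9.13a)
        var_s = np.mean((s_map[1:] - s_map[:-1]) ** 2 / dt)       # (9.12b)/(9.13b)
        theta_new = np.array([var_g, var_s])
        if np.linalg.norm(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    return np.sqrt(theta[0]), np.sqrt(theta[1])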

9.2.4 Testing for Changes in the Scale Factor

For the purpose of developing a TPMS, let us consider the setting with synchronous samples described in Section 9.2.1. The speeds $\mathbf{s} \stackrel{\Delta}{=} \{\mathbf{s}^{(1)}, \mathbf{s}^{(2)}\}$ are measured as $\mathbf{s}^o \stackrel{\Delta}{=} \{\mathbf{s}^{o1}, \mathbf{s}^{o2}\}$ and $\mathbf{s}^g \stackrel{\Delta}{=} \{\mathbf{s}^{g1}, \mathbf{s}^{g2}\}$, where $\mathbf{s}^{(1)} \stackrel{\Delta}{=} \{s_k\}_{k\in\mathcal{S}^{(1)}}$, $\mathbf{s}^{(2)} \stackrel{\Delta}{=} \{s_k\}_{k\in\mathcal{S}^{(2)}}$, $\mathbf{s}^{o1} \stackrel{\Delta}{=} \{s_k^o\}_{k\in\mathcal{S}^{(1)}}$, $\mathbf{s}^{o2} \stackrel{\Delta}{=} \{s_k^o\}_{k\in\mathcal{S}^{(2)}}$, $\mathbf{s}^{g1} \stackrel{\Delta}{=} \{s_k^g\}_{k\in\mathcal{S}^{(1)}}$, and $\mathbf{s}^{g2} \stackrel{\Delta}{=} \{s_k^g\}_{k\in\mathcal{S}^{(2)}}$, while $\mathcal{S}^{(1)} = \{1, \ldots, N_1\}$ and $\mathcal{S}^{(2)} = \{N_1+1, \ldots, N\}$. To test whether the scale factor is different in $\mathbf{s}^{o1}$ and $\mathbf{s}^{o2}$, let us formulate the null hypothesis
$$\mathcal{H}_0: c^{(1)} = c^{(2)} \quad (9.14)$$
where $c^{(1)}$ and $c^{(2)}$ denote the scale factors in $\mathbf{s}^{o1}$ and $\mathbf{s}^{o2}$, respectively. The null hypothesis is conveniently tested by computing the generalized likelihood ratio (GLR) [417]

$$\Lambda(\mathbf{s}^o, \mathbf{s}^g) = \frac{\max_{\mathbf{s},\, c^{(1)}=c^{(2)}}\, p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{s}, c^{(1)}, c^{(2)})}{\max_{\mathbf{s},\, c^{(1)},\, c^{(2)}}\, p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{s}, c^{(1)}, c^{(2)})} \quad (9.15)$$
and deciding on $\mathcal{H}_0$ whenever $\Lambda(\mathbf{s}^o, \mathbf{s}^g) > \eta$ for some chosen threshold $\eta$. The optimal parameters in the numerator are the ML estimates of $\mathbf{s}$ and $c^{(1)} = c^{(2)}$ obtained from $\mathbf{s}^o$ and $\mathbf{s}^g$, while the optimal parameters in the denominator are the ML estimates of $\mathbf{s}^{(1)}$ and $c^{(1)}$ obtained from $\mathbf{s}^{o1}$ and $\mathbf{s}^{g1}$, and the ML estimates of $\mathbf{s}^{(2)}$ and $c^{(2)}$ obtained from $\mathbf{s}^{o2}$ and $\mathbf{s}^{g2}$. The generalization to the case with asynchronous samples is straightforward.
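For the Gaussian GNSS errors in (9.1b), the logarithm of (9.15) reduces to a difference of the constrained least-squares costs under the two hypotheses, which the sketch below exploits. It is an illustrative assumption of how the test could be coded (reusing the one-dimensional parameterization of (9.5)), not the implementation used in the thesis; the bracket (0.5, 2.0) on d = 1/c is likewise an assumption.

import numpy as np
from scipy.optimize import minimize_scalar

def _constrained_ls_cost(s_obd, s_gnss, delta_s):
    """Residual cost of the synchronous ML fit (9.5) for one hypothesis."""
    s_obd, s_gnss = np.asarray(s_obd, float), np.asarray(s_gnss, float)
    def cost(d):
        lower = (s_obd - delta_s / 2.0) * d
        upper = (s_obd + delta_s / 2.0) * d
        return np.sum((np.clip(s_gnss, lower, upper) - s_gnss) ** 2)
    return minimize_scalar(cost, bounds=(0.5, 2.0), method="bounded").fun

def log_glr_scale_change(s_obd1, s_gnss1, s_obd2, s_gnss2, sigma_g, delta_s=1.0):
    """log Lambda in (9.15); decide on H0 when it exceeds a chosen log-threshold."""
    j_null = _constrained_ls_cost(np.concatenate([s_obd1, s_obd2]),
                                  np.concatenate([s_gnss1, s_gnss2]), delta_s)
    j_alt = (_constrained_ls_cost(s_obd1, s_gnss1, delta_s)
             + _constrained_ls_cost(s_obd2, s_gnss2, delta_s))
    return -(j_null - j_alt) / (2.0 * sigma_g ** 2)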

9.2.5 Additional Error Sources

We will now briefly discuss two error sources that have been disregarded in the measurement models (9.1) and (9.7). Firstly, during wheel slips, the speed of the vehicle differs from the tangential speed of the wheels. As a result, the OBD measurements will be distorted. The effect is typically small during smooth driving on a dry surface. However, if the wheel slips need to be compensated for, one may use that there is an approximately linear relation between the vehicle's acceleration and the ratio of the tangential wheel speed and the longitudinal speed of the vehicle [418]. Secondly, most low-cost GNSS receivers only provide speed measurements in the two-dimensional plane that is tangential to the surface of the earth. Consequently, the speed measurements obtained from a GNSS receiver will tend to be lower than the true three-dimensional speed when travelling along steep inclines. This effect may be compensated for by using data from a three-dimensional road map.

9.3 The Cramér-Rao Bound

The Cramér-Rao bound (CRB) will now be used to gain some insight into the estimation problems discussed in the preceding section. For brevity and tractability, the analysis will be focused on the case with synchronous samples investigated in Section 9.2.1. The CRB can be formulated as [8]

$$\mathrm{Cov}(\hat{\mathbf{x}}) \succeq \mathcal{I}(\mathbf{x})^{-1} \quad (9.16)$$
and holds for any unbiased estimator $\hat{\mathbf{x}}$ of $\mathbf{x} \stackrel{\Delta}{=} [\mathbf{s}^{\top}\ c]^{\top}$. Here, $\mathcal{I}(\mathbf{x}) \stackrel{\Delta}{=} \mathbb{E}_{p(\mathbf{s}^o,\mathbf{s}^g|\mathbf{x})}\big[(\partial_{\mathbf{x}} \ln p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{x}))^{\top}\, \partial_{\mathbf{x}} \ln p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{x})\big]$ is the Fisher information matrix (FIM), $\partial_{\mathbf{x}} \ln p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{x})$ is the gradient of the log-likelihood function (the score function) with respect to $\mathbf{x}$, and we have used $A \succeq B$ to denote that $A - B$ is positive semidefinite.

As evident from the optimization problem (9.4), the information gained from the OBD measurements can be transformed into inequality constraints on the ML estimates. Since imposing inequality constraints on parameter estimates does not alter the FIM [419], it follows that to compute the FIM, one only needs to consider the GNSS measurements. The resulting FIM is singular (no Fisher information is gained about the scale factor) and not very interesting since it merely allows us to recover the accuracy of the GNSS measurements, which we obviously can expect to surpass when also OBD measurements are available. One way to obtain more useful results is by instead studying the CRB of a closely related problem. This approach will be considered next.

9.3.1 Bounds under Equality Constraints

Let us assume that some of the inequality constraints originating from the OBD measurements can be transformed into equality constraints. Thus, we consider the problem of estimating $\mathbf{s}$ and $c$ from $p(\mathbf{s}^o, \mathbf{s}^g|\mathbf{s}, c)$ under the constraints $c\,s_k - s_k^o = \gamma_k$ for $k = M+1, \ldots, N$. Here, $\gamma_k \in [-\delta s/2, \delta s/2]$ is a constant and $M$ is the number of samples for which there are no associated equality constraints⁴. It is assumed that $M < N$, i.e., that at least one equality constraint exists. Since we are adding information to the original problem, the CRB for this new problem will still be a lower bound for the original problem. As shown in Appendix 9.B, the inverse FIM for our new constrained estimation problem takes the form

$$(\mathcal{I}_{\mathrm{constr}}(\mathbf{x}))^{-1} = \sigma_g^2 \begin{bmatrix} I_M & \mathbf{0}_{M,N-M+1} \\ \mathbf{0}_{N-M+1,M} & \boldsymbol{\varphi}\boldsymbol{\varphi}^{\top}/\alpha \end{bmatrix} \quad (9.17)$$
where $\boldsymbol{\varphi} \stackrel{\Delta}{=} [\mathbf{s}_{M+1:N}^{\top}\ {-c}]^{\top}$, $\mathbf{s}_{M+1:N} \stackrel{\Delta}{=} [s_{M+1} \ldots s_N]^{\top}$, and $\alpha \stackrel{\Delta}{=} \sum_{k=M+1}^{N} s_k^2$. Moreover, $I_\ell$ and $\mathbf{0}_{\ell_1,\ell_2}$ denote the identity and zero matrices of dimensions $\ell$ and $\ell_1 \times \ell_2$, respectively. Studying the diagonal elements of $(\mathcal{I}_{\mathrm{constr}}(\mathbf{x}))^{-1}$ we obtain the scalar CRBs
$$\mathrm{Var}(\hat{s}_{\mathrm{ml},k}) \geq \sigma_g^2 \quad (9.18a)$$
for $k = 1, \ldots, M$,
$$\mathrm{Var}(\hat{s}_{\mathrm{ml},k}) \geq \sigma_g^2 \frac{s_k^2}{\sum_{k=M+1}^{N} s_k^2} \quad (9.18b)$$
for $k = M+1, \ldots, N$, and
$$\mathrm{Var}(\hat{c}_{\mathrm{ml}}) \geq \sigma_g^2 \frac{c^2}{\sum_{k=M+1}^{N} s_k^2}. \quad (9.18c)$$

⁴It is important to note that the order of the speed measurements in Section 9.2.1 is ambiguous. Hence, assuming that we add the equality constraints $c\,s_k - s_k^o = \gamma_k$ to $N - M$ randomly chosen sampling instances, it is always possible to redefine the indices of the measurements so that only the last $N - M$ speed measurements are subject to equality constraints.

Assuming that the CRBs give an indication of the performance of the estimator derived in Section 9.2.1, this means that estimates of $s_{M+1}, \ldots, s_N$, and $c$ tend to improve as the number of equality constraints increases ($M$ becomes smaller). Relating back to the original problem in Section 9.2.1, an increasing number of equality constraints should be interpreted as a smaller $\delta s$, since this means that the inequality constraints become tighter (obviously, $\delta s = 0$ implies that $|c\,s_k - s_k^o| \leq \delta s/2$ is the equality constraint $c\,s_k - s_k^o = \gamma_k$ with $\gamma_k = 0$). Similarly, estimates of $c$ tend to improve at higher speeds and with smaller scale factors. Naturally, all estimates can be expected to improve as the noise variance of the GNSS measurements decreases. The qualitative conclusions drawn from the CRB studies are validated by means of numerical simulations in Section 9.4.1.
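The scalar bounds in (9.18) are straightforward to evaluate numerically; the small helper below (a hypothetical function, with 0-based indexing so that s[M:] corresponds to s_{M+1}, ..., s_N) can be used to study how the bounds vary with the speed profile, the number of unconstrained samples, and the GNSS noise level.

import numpy as np

def scalar_crbs(s, M, sigma_g, c=1.0):
    """Evaluate the bounds (9.18a)-(9.18c) for a speed profile s."""
    s = np.asarray(s, float)
    alpha = np.sum(s[M:] ** 2)                               # sum of s_k^2, k = M+1,...,N
    var_s_unconstrained = np.full(M, sigma_g ** 2)           # (9.18a)
    var_s_constrained = sigma_g ** 2 * s[M:] ** 2 / alpha    # (9.18b)
    var_c = sigma_g ** 2 * c ** 2 / alpha                    # (9.18c)
    return var_s_unconstrained, var_s_constrained, var_c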

9.3.2 Comments on the Equality Constraints

We will now further illuminate the relation between the estimation problem considered in Section 9.2.1 and the estimation problem with equality constraints considered in Section 9.3.1. To begin with, consider the same estimation problem as in Section 9.2.1 but with an added prior $p(c) \propto 1\{c^{\mathrm{low}} \leq c \leq c^{\mathrm{high}}\}$ on the scale factor. Then, assuming no prior information on the speeds $\mathbf{s}$, the MAP estimates of $\mathbf{s}$ and $c$ can be computed from the optimization problem in (9.4) but with the added constraint $c^{\mathrm{low}} \leq c \leq c^{\mathrm{high}}$. Similarly, one may use the optimization problem in (9.5) but with the added constraint $1/c^{\mathrm{high}} \leq d \leq 1/c^{\mathrm{low}}$. Since $\hat{s}_{\mathrm{ml},k} \in [(s_k^o - \delta s/2)/c^{\mathrm{high}}, (s_k^o + \delta s/2)/c^{\mathrm{low}}]$, it can easily be seen that if $c^{\mathrm{low}} s_k^g - s_k^o > \delta s/2$ for some $k$, we can replace the inequality constraint $|c\,s_k - s_k^o| \leq \delta s/2$ with the equality constraint $c\,s_k - s_k^o = \delta s/2$. Likewise, if $c^{\mathrm{high}} s_k^g - s_k^o < -\delta s/2$ for some $k$, we can replace the inequality constraint $|c\,s_k - s_k^o| \leq \delta s/2$ with the equality constraint $c\,s_k - s_k^o = -\delta s/2$.

To summarize, some of the inequality constraints in the optimization problem (9.4) might, under the assumption that we add constraints (a rectangular prior) on the scale factor, be replaced by equality constraints of the same type that was considered in Section 9.3.1. However, which (if any) constraints that can be reformulated in this way will be dependent on the obtained measurements, and hence, this is not a general reformulation of the estimation problem (9.4) with an added prior on the scale factor.

9.4 Experimental Study

To evaluate the estimators, data from inertial sensors and GNSS receivers were recorded from cars in two different scenarios: The first data set captured high-speed driving on a race track and was recorded using a car-mounted Samsung Galaxy S III.

Table 9.1: Studied data sets.

Data set               1            2
Environment            Race track   Suburban
Top speed [km/h]       181.9        42.5
Average speed [km/h]   94.6         33.3

The second data set reflected driving at modest speeds in a suburban environment and was recorded using a Microstrain 3DM-GX3-35 positioned on the roof of the car. Both data sets consist of about two minutes of data. Summary statistics of the data sets are provided in Table 9.1. The data was fused in a GNSS-aided inertial navigation system (INS) (see Chapter 5), and speed estimates were extracted by taking the norm of the three-dimensional velocity estimates. Hence, both data sets produced two minutes of speed estimates at 20 [Hz] (the same update rate as the inertial sensors). Next, we will investigate the performance of the estimators presented in Section 9.2 by simulating measurements from these speed sequences, which will be treated as the ground truth $\mathbf{s}$. Since the measurement errors are simulated, they will not be affected by wheel slips or steep road inclines (see Section 9.2.5). Unless otherwise stated, we will simulate the scale factor $c$ from $\mathcal{U}(\bar{c} - 0.02, \bar{c} + 0.02)$ with $\bar{c} = 1$, where $\mathcal{U}(a, b)$ denotes the uniform distribution over the interval $(a, b)$⁵. Similarly, we will use the default values $\delta s = 1$ [km/h] and $\sigma_g = 0.2 \cdot 3.6$ [km/h] [420].
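As a hedged sketch of how such measurements could be simulated from a ground-truth speed sequence (in [km/h]), the function below applies the scale factor, quantizes the OBD speed to multiples of δs, and adds white Gaussian noise to the GNSS speed according to (9.1); the function name and interface are assumptions made for the example.

import numpy as np

def simulate_measurements(s_true, c_bar=1.0, delta_s=1.0, sigma_g=0.2 * 3.6,
                          rng=None):
    """Simulate synchronous OBD and GNSS speed measurements from model (9.1)."""
    rng = np.random.default_rng() if rng is None else rng
    s_true = np.asarray(s_true, float)
    c = rng.uniform(c_bar - 0.02, c_bar + 0.02)                # drawn scale factor
    s_obd = np.round(c * s_true / delta_s) * delta_s           # quantized OBD speed
    s_gnss = s_true + rng.normal(0.0, sigma_g, s_true.shape)   # noisy GNSS speed
    return s_obd, s_gnss, c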

9.4.1 Experimental Study — Synchronous Samples

Synchronous OBD and GNSS measurements were simulated with a sampling rate of 1 [Hz]. Figure 9.3 displays root-mean-square errors (RMSEs) of the scale factor estimates obtained from the ML estimator presented in Section 9.2.1. Each marker in the plots was obtained as the result of 100 000 simulations. As can easily be seen, Figure 9.3 confirms the qualitative conclusions drawn from the CRB studies in Section 9.3.1. Specifically, the estimates improve at higher speeds, at higher OBD resolutions (lower $\delta s$), at lower GNSS error variances, and at lower scale factors. Moreover, as suggested by the CRB in (9.18c), the RMSE can be seen to grow approximately linearly with $\sigma_g$ and $c$. Note that the CRB presented in Section 9.3.1 is not the CRB for the model in Section 9.2.1 (as concluded in Section 9.3, the FIM for this estimation problem is singular). Hence, there is no quantitative comparison to be made between the RMSEs and the derived CRB.

⁵Scale factor errors in the order of 1 percent can occur due to changes in pressure, temperature, and load, while tire wear can reduce the scale factor by up to 3 percent [7].

[Figure: three panels showing RMSE($\hat{c}_{\mathrm{ml}}$) (on the order of $10^{-3}$) for data sets 1 and 2 as a function of (a) the OBD resolution $\delta s$ [km/h], (b) the GNSS error $\sigma_g$ [km/h], and (c) the scale factor $\bar{c}$.]

Figure 9.3: Errors of scale factor estimates based on synchronous measurements.

[Figure: panel (a) shows RMSE($\hat{s}_{\mathrm{ml}}$) [km/h] for data sets 1 and 2 as a function of the OBD resolution $\delta s$ [km/h]; panels (b) and (c) compare the ML, OBD, and GNSS estimators as a function of $\sigma_g$ [km/h] for data sets 1 and 2, respectively.]

Figure 9.4: Errors of speed estimates based on synchronous measurements.

As indicated by the CRB in (9.18b), Figure 9.4 (a) shows that the speed estimates improve as the OBD resolution increases, and that their accuracy is independent of the magnitude of the speeds. In Figures 9.4 (b) and (c), the ML estimator is compared to the estimators $\hat{\mathbf{s}} = \mathbf{s}^o$ and $\hat{\mathbf{s}} = \mathbf{s}^g$, from hereon referred to as the OBD estimator and GNSS estimator, respectively. Obviously, the accuracy of the OBD estimator is independent of $\sigma_g$, while the RMSE of the GNSS estimator is exactly equal to $\sigma_g$.

The RMSE of the OBD estimator is larger on the first data set since this data set has higher speeds and thereby produces larger scale factor errors. The ML estimator improves upon the accuracy of the individual measurement sources in both cases. For small GNSS error variances, the OBD measurements do not add much information to the GNSS measurements, and the RMSE of the ML estimator can be seen to approach $\sigma_g$.

9.4.2 Experimental Study — Asynchronous Samples

Asynchronous OBD and GNSS measurements were simulated with sampling intervals of 0.9 [s] and 1 [s], respectively, with the first OBD and GNSS measurements assumed to be synchronous. Each marker in the plots was obtained as the result of 10 000 simulations. We will begin by comparing the MAP estimator presented in Section 9.2.2 to the OBD and GNSS estimators (which only estimate the speed at the sampling instances with OBD and GNSS measurements, respectively). To achieve a fair comparison, the prior $p(\mathbf{s})$ was added to the GNSS estimator. In other words, the GNSS estimator was defined as $\hat{\mathbf{s}} = \arg\max_{\mathbf{s}}\, p(\mathbf{s}^g|\mathbf{s})p(\mathbf{s})$, while the OBD estimator was defined as $\hat{\mathbf{s}} = \mathbf{s}^o$. Attempts to incorporate the prior $p(\mathbf{s})$ into the OBD estimator by defining it according to $\hat{\mathbf{s}} = \arg\max_{\mathbf{s}}\, p(\mathbf{s}^o|\mathbf{s}, c = 1)p(\mathbf{s})$ did not result in any performance gain. For clarity, the RMSE of the MAP estimator has been divided into two parts. These are denoted MAP$^o$ and MAP$^g$, and only account for the errors at sampling instances with OBD and GNSS measurements, respectively.

Figure 9.5 displays the RMSEs when using $\sigma_s = 1.5\ [\mathrm{m/s^2/\sqrt{Hz}}]$ and $\sigma_s = 0.25\ [\mathrm{m/s^2/\sqrt{Hz}}]$ on the first and second data set, respectively. These values of $\sigma_s$ are roughly equal to the dynamic variation $(1/(N-1) \cdot \sum_{k=1}^{N-1} (s_{k+1} - s_k)^2/\Delta t_k)^{1/2}$ when $\Delta t_k = 1$ [s]. In contrast, Figure 9.6 shows the RMSEs that were obtained when $\sigma_g$ and $\sigma_s$ were estimated using the EM algorithm described in Section 9.2.3. The EM recursions were terminated when $\|\theta^{(i+1)} - \theta^{(i)}\| < 0.01$ or when 10 iterations had been completed. As can be seen in Figures 9.5 and 9.6, the MAP estimator outperforms the OBD and GNSS estimators on both data sets; both when fixing the model parameters and when estimating them using the EM algorithm. In similarity with the results displayed for synchronous measurements in Figures 9.4 (b) and (c), the RMSE of MAP$^g$ approaches that of the GNSS estimator for small $\sigma_g$. However, with asynchronous measurements, the MAP and GNSS estimators perform better on the second data set than on the first. This is a consequence of the second data set having lower dynamic variation. Thanks to the prior $p(\mathbf{s})$, the RMSE of the GNSS estimator is significantly lower than $\sigma_g$ in Figure 9.5 (b). However, in Figure 9.6 (b), the added information from the prior is offset by the uncertainty in the estimates of $\sigma_g$ and $\sigma_s$, and the RMSE is once again in the vicinity of $\sigma_g$.

[Figure: panels (a) and (b) show RMSE($\hat{s}_{\mathrm{map}}$) [km/h] as a function of $\sigma_g$ [km/h] for MAP$^o$, MAP$^g$, the OBD estimator, and the GNSS estimator, on data sets 1 and 2, respectively.]

Figure 9.5: Errors of speed estimates based on asynchronous measurements.

As illustrated in Figures 9.5 (a) and 9.6 (a), the RMSEs of MAP$^g$ and the GNSS estimator are roughly equal on the first data set. By comparison, MAP$^o$ outperforms the OBD measurements with a wide margin for all $\sigma_g$.

9.4.3 Experimental Study — Hypothesis Testing

To illustrate the performance of the hypothesis testing described in Section 9.2.4, synchronous measurements were simulated from both data sets as described in Section 9.4.1. In other words, the first and second data sets were associated with the sets $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$, respectively. The true positive rate was computed as follows. In each simulation, a scale factor $c$ was generated for the first data set, while the second data set used the scale factor $c + \delta c$. For a given threshold, the true positive rate was defined as the percentage of simulations where the null hypothesis was rejected. To compute the false positive rate, the simulated scale factor $c$ was used for both data sets. The false positive rate was then defined as the percentage of simulations where the null hypothesis was rejected.

[Figure: panels (a) and (b) show RMSE($\hat{s}_{\mathrm{map}}$) [km/h] as a function of $\sigma_g$ [km/h] for MAP$^o$, MAP$^g$, the OBD estimator, and the GNSS estimator, on data sets 1 and 2, respectively.]

Figure 9.6: Errors of speed estimates based on asynchronous measurements and when estimating the model parameters using the EM algorithm.

Figure 9.7 displays the receiver operating characteristics (ROC) of the likelihood ratio for four different values of $\delta c$, with each true and false positive rate computed based on 100 000 simulations. The detector can be seen to reach excellent performance already at $\delta c = 0.01$.

9.5 Summary

This chapter addressed the problem of fusing OBD and GNSS measurements of speed to estimate a vehicle's speed and the scale factor of the OBD measurements. Under the assumption of synchronous samples, it was shown that the ML estimates can be extracted from the solution to a one-dimensional convex optimization problem. For the case with asynchronous samples, we analyzed the identifiability of the likelihood function and argued for the inclusion of a prior on the vehicle dynamics. Thus, the vehicle speed was modeled as a random walk with Gaussian steps, and the MAP estimates were then derived as the solution to a constrained optimization problem.

[Figure: ROC curves (true positive rate versus false positive rate) when testing for changes in the scale factor, for δc = 0.0025, 0.005, 0.0075, and 0.01.]

Figure 9.7: ROC curve when testing for changes in the scale factor.

Further, we derived the EM recursions for estimating the model parameters that are needed for the optimization giving the MAP estimates, and described how to test for changes in the time-varying scale factor.

It was concluded that the FIM for the model with synchronous measurements is singular, and hence, that the CRB is unavailable. Consequently, we instead considered a similar model, where some of the inequality constraints originating from the OBD measurements are transformed into equality constraints. As expected, the resulting CRB indicated that the speed estimates will tend to improve at higher OBD resolutions and at lower GNSS error variances. Similarly, the scale factor estimates will tend to improve at higher OBD resolutions, at lower GNSS error variances, at higher speeds, and at lower scale factors. An experimental study was conducted by simulating measurement errors from real-world vehicle dynamics. The experiments aligned well with the CRB, and the proposed estimators were shown to provide a significantly better performance than can be achieved by either of the sensors individually.

In summary, this chapter has derived speed and scale factor estimators that exploit the complementary features of OBD and GNSS measurements of speed. The estimators are able to not only increase the estimation accuracy and the estimation rate, but also to strengthen the resilience to GNSS outages. Given the ubiquity of sensor setups utilizing OBD and GNSS measurements for insurance telematics and fleet management, the presented sensor fusion scheme can be expected to be of use in e.g., intelligent speed compliance and the detection of harsh braking. Further, it has been demonstrated that the estimation framework enables low-cost testing of changes in the scale factor. This could e.g., be used to provide tire-pressure monitoring as a value-added service in current telematics programs.

9.A Derivation of EM Iterations

Here, we derive the EM iterations in (9.12). The expectation step consists of computing
$$Q(\theta, \theta^{(i)}) \stackrel{\Delta}{=} \mathbb{E}_{p_{\theta^{(i)}}(\mathbf{s},c|\mathbf{s}^o,\mathbf{s}^g)}\big[\ln p_{\theta}(\mathbf{s}, c, \mathbf{s}^o, \mathbf{s}^g)\big] \quad (9.19)$$
and the maximization step gives the updated estimates as [22]
$$\theta^{(i+1)} = \arg\max_{\theta}\, Q(\theta, \theta^{(i)}). \quad (9.20)$$
Now, note that $\sigma_g^2$ and $\sigma_s^2$ only alter the GNSS measurements and the prior, respectively, so that
$$\ln p_{\theta}(\mathbf{s}, c, \mathbf{s}^o, \mathbf{s}^g) = \ln p_{\sigma_g^2}(\mathbf{s}^g|\mathbf{s}) + \ln p_{\sigma_s^2}(\mathbf{s}) + \ln p(\mathbf{s}^o|\mathbf{s}, c). \quad (9.21)$$
Hence, the updated estimates of $\sigma_g^2$ and $\sigma_s^2$ can be obtained as
$$\begin{aligned}
\hat{\sigma}_{g,i+1}^2 &= \arg\max_{\sigma_g^2}\, \mathbb{E}_{p_{\theta^{(i)}}(\mathbf{s}|\mathbf{s}^o,\mathbf{s}^g)}\big[\ln p_{\sigma_g^2}(\mathbf{s}^g|\mathbf{s})\big] \\
&= \arg\max_{\sigma_g^2}\, \sum_{k\in\mathcal{G}} \mathbb{E}_{p_{\theta^{(i)}}(s_k|\mathbf{s}^o,\mathbf{s}^g)}\big[\ln p_{\sigma_g^2}(s_k^g|s_k)\big] \\
&= \arg\max_{\sigma_g^2}\, -\frac{1}{2\sigma_g^2}\sum_{k\in\mathcal{G}} \mathbb{E}_{p_{\theta^{(i)}}(s_k|\mathbf{s}^o,\mathbf{s}^g)}\big[(s_k - s_k^g)^2\big] - \frac{|\mathcal{G}|}{2}\ln\sigma_g^2
\end{aligned} \quad (9.22a)$$
and
$$\begin{aligned}
\hat{\sigma}_{s,i+1}^2 &= \arg\max_{\sigma_s^2}\, \mathbb{E}_{p_{\theta^{(i)}}(\mathbf{s}|\mathbf{s}^o,\mathbf{s}^g)}\big[\ln p_{\sigma_s^2}(\mathbf{s})\big] \\
&= \arg\max_{\sigma_s^2}\, \sum_{k=1}^{N-1} \mathbb{E}_{p_{\theta^{(i)}}(s_k,s_{k+1}|\mathbf{s}^o,\mathbf{s}^g)}\big[\ln p_{\sigma_s^2}(s_{k+1}|s_k)\big] \\
&= \arg\max_{\sigma_s^2}\, -\frac{1}{2\sigma_s^2}\sum_{k=1}^{N-1} \frac{1}{\Delta t_k}\,\mathbb{E}_{p_{\theta^{(i)}}(s_k,s_{k+1}|\mathbf{s}^o,\mathbf{s}^g)}\big[(s_{k+1} - s_k)^2\big] - \frac{N-1}{2}\ln\sigma_s^2,
\end{aligned} \quad (9.22b)$$
respectively. The EM recursions in (9.12) now immediately follow from (9.22).

9.B Derivation of Cramér-Rao Bound

This appendix will derive the inverse FIM in (9.17). We begin by writing the equality constraints from Section 9.3.1 as

$$g(\mathbf{x}) = \mathbf{0}_{N-M,1} \quad (9.23)$$
where $g(\mathbf{x}) \stackrel{\Delta}{=} c\,\mathbf{s}_{M+1:N} - \mathbf{s}^o_{M+1:N} - \boldsymbol{\gamma}_{M+1:N}$, while $\mathbf{s}^o_{M+1:N}$ and $\boldsymbol{\gamma}_{M+1:N}$ are defined in analogy with $\mathbf{s}_{M+1:N}$. Moreover, the Jacobian of the constraints is
$$G \stackrel{\Delta}{=} \partial_{\mathbf{x}} g(\mathbf{x}) = \begin{bmatrix} \mathbf{0}_{N-M,M} & c\,I_{N-M} & \mathbf{s}_{M+1:N} \end{bmatrix} \quad (9.24)$$
and we can construct the matrix
$$U \stackrel{\Delta}{=} \begin{bmatrix} I_M & \mathbf{0}_{M,1} \\ \mathbf{0}_{N-M+1,M} & \boldsymbol{\varphi} \end{bmatrix} \quad (9.25)$$
whose columns form a basis for the null space of $G$. Now, noting that the FIM for the corresponding unconstrained problem (only considering the GNSS measurements) is
$$\mathcal{I}(\mathbf{x}) = \frac{1}{\sigma_g^2}\begin{bmatrix} I_N & \mathbf{0}_{N,1} \\ \mathbf{0}_{1,N} & 0 \end{bmatrix}, \quad (9.26)$$
it follows that the inverse FIM, after including the equality constraints (9.23), becomes [18]
$$(\mathcal{I}_{\mathrm{constr}}(\mathbf{x}))^{-1} = U(U^{\top}\mathcal{I}(\mathbf{x})U)^{-1}U^{\top} \quad (9.27)$$
which gives (9.17) after straightforward calculations.

Chapter 10

Conclusions

Today, most fleet management and insurance telematics programs collect data using in-vehicle sensors or externally installed hardware components, referred to as black boxes, in-vehicle data recorders, or aftermarket devices. On the one hand, these data collection solutions have a proven track record and tend to benefit from their high continuity of service. Moreover, dedicated solutions can be tailored to fit the needs of individual telematics providers, and have often been designed to comply with industry-specific requirements on energy consumption, accuracy, connectivity, and computing power. On the other hand, the required dongles or black boxes are typically costly for the provider, difficult to install, require maintenance, and may not be compatible with all vehicle types.

An alternative solution that has gained a great deal of attention is to collect data from drivers' personal smartphones. As an example, we highlight Progressive Insurance, the industry leader in insurance telematics with more than three million policyholders signed up to their telematics program Snapshot. Recently, Progressive introduced Snapshot Mobile, a smartphone-based version of their well-established telematics program. The initiative is part of Progressive's collaboration with TrueMotion, one of many telematics providers primarily focused on smartphone-based solutions. Smartphone-based solutions are scalable and readily available, have a familiar and user-friendly interface, and can utilize a diverse set of commonly embedded sensors. However, smartphones cannot be assumed to be fixed to the vehicle, and this inevitably complicates the interpretation of data from embedded sensors. In this thesis, we have focused on how to overcome the challenges arising due to the smartphone's mobility within the vehicle. Moreover, we provided a general overview of the last ten years of related research, and discussed how to fuse measurements gathered from smartphones and in-vehicle sensors. The contributions within sensor fusion for smartphone-based vehicle telematics were presented in Chapter 4 and Chapters 6-9. Next, we summarize these chapters and describe how they relate to previous research and ongoing industrial activities.

Summary

Chapter 4: Smartphone-based Vehicle Telematics

About ten years ago, commercial off-the-shelf smartphones began to be equipped with navigation and motion sensors, such as global navigation satellite system (GNSS) receivers, accelerometers, and gyroscopes. This spawned a whole new area of research, where smartphone-embedded sensors could be used to implement low-cost telematics applications. Chapter 4 presented the first comprehensive survey of the field of smartphone-based vehicle telematics. The chapter reviewed important academic projects and industrial endeavours; the sampling rate, accuracy, energy consumption, and utility of commonly available sensors; and aspects of the human-smartphone interaction. The estimation of the relative orientation of the smartphone and the vehicle (so called inertial measurement unit (IMU) alignment) was identified as a key challenge on the path towards efficient utilization of orientation-dependent sensors. Previously proposed methods for IMU alignment were described in terms of the estimated rotational quantities, the employed sensors, and the utilized characteristics of vehicle dynamics. Similarly, we reviewed methods for estimating the position of the smartphone with respect to the vehicle, and discussed supplementary information sources that can be used to enhance the navigation accuracy.

Moreover, the chapter outlined methods for transportation mode classification. Presented methods have employed features that describe the rate of change in orientation, the distribution of speeds and accelerations, and the frequency of brakings or stops. Additionally, one can make use of seasonal patterns, weather reports, information on bus routes, timetables, and rail maps. We also examined the pros and cons of using smartphones in vehicular ad hoc networks (VANETs), and highlighted the use of mobile cloud computing (MCC) in cooperative Intelligent Transportation Systems (C-ITS). Academic publications on smartphone-based driver safety classification were presented in an exhaustive table that categorized the studies based on the detected driving events, the employed sensors, and the classification methods. Many studies focus on the detection of harsh braking or cornering using accelerometer-based estimates of horizontal accelerations. However, driver characterizations have also been made based on events of speeding, lane changing, and swerving, using measurements from gyroscopes, magnetometers, cameras, and in-vehicle sensors. Finally, we scrutinized the topic of road condition monitoring. In the simplest methods, potholes or speed bumps are detected by thresholding accelerometer measurements in the vehicle's vertical direction. However, the impact of road anomalies can also be detected using e.g., microphones or suspension sensors. Extended capabilities include differentiation between road-wide anomalies and one-sided potholes, rejection of spurious detections caused by e.g., curb ramps, and the estimation of depth, height, or length of anomalies.

Chapter 4 summarizes the last ten years of research in smartphone-based inference of driving behavior and traffic conditions, and thereby fills a major gap in the literature. The review draws upon more than 300 references and provides a solid basis for future research in the area. The industrial relevance of the discussed topics can be confirmed by studying the criteria for the ranking (conducted by PTOLEMUS Consulting Group in 2016) of smartphone-based telematics service providers in [421]. Specifically, the criteria considered the efficiency of methods for driver/passenger classification, transportation mode classification, integration of on-board diagnostics (OBD) sensors, and scoring (driver safety classification), as well as aspects related to energy consumption and driver distraction.

Looking to the future, there is a need for comparative studies on different methods for IMU alignment, smartphone-to-vehicle positioning, driver behavior classification, and road condition monitoring. These could then be used to develop industry standards that are robust to variations in sensor accuracy and user behavior. Moreover, it could prove profitable to explore ways to integrate promising applications (within e.g., insurance telematics or C-ITS) into already established apps (within e.g., ridesharing or driver navigation) with a large number of users. An example of this is the partnership between the global taxi service company Uber, currently operating in more than 600 cities worldwide, and the insurance telematics startup Metromile.

Chapter 6: IMU Alignment for Smartphone-based Automotive Navigation

Fusion of measurements collected from GNSS receivers, accelerometers, and gyroscopes fixed to land-vehicles has gained considerable attention both in the literature and within commercial and military applications. As described in Chapter 5, it is often possible to formulate a state-space model where the vehicle dynamics and the systematic sensor errors that are to be estimated enter into the state vector, while the measurements from inertial sensors and the GNSS receiver are used as control input and observational information, respectively. State estimates can then be obtained by e.g., standard filtering (online) or smoothing (offline) approaches.

Now, consider the implementation of a GNSS-aided inertial navigation system (INS) based on measurements from a vehicle-fixed smartphone. Naturally, this system will estimate the navigation state (e.g., position, velocity, and orientation) of the smartphone, even though our main interest lies in the navigation state of the vehicle. For navigation purposes, the difference in position and velocity is typically small. However, the orientation of the vehicle is completely unrelated to the orientation of the smartphone. Thus, many methods for estimating the relative orientation of the smartphone and the vehicle have been proposed. Often, these will heuristically exploit that the vehicle's velocity tends to be roughly aligned with the forward direction of the vehicle frame (so called non-holonomic constraints).

In Chapter 6, we proposed to augment the state vector of the standard GNSS-aided INS with the relative orientation of the smartphone and the vehicle. To achieve observability of the augmented state, the measurement vector was extended with pseudo measurements based on non-holonomic constraints and the assumption of a vehicle roll angle close to zero. The resulting nonlinear state-space model was solved using an extended Kalman filter. The framework has several advantages over competing methods for smartphone-to-vehicle alignment. For example, since it is an extension of the well-established GNSS-aided INS, it will be easy to understand for analysts that are familiar with standard navigation practices. Moreover, it jointly estimates the navigation state, sensor biases, and the smartphone-to-vehicle orientation, and therefore reduces the need for multiple estimation frameworks. As opposed to most previous methods, it is also statistically optimal under the stated assumptions and approximations, it provides accuracy measures of the estimated smartphone-to-vehicle orientation, and it does not assume that the vehicle is traveling on a flat surface. An experimental study based on data collected from three smartphones was conducted. The resulting errors of the vehicle Euler angles were in the order of 2 [°].

In summary, the presented method allows us to make use of data from smartphone-embedded and orientation-dependent sensors in the same way as if their orientation with respect to the vehicle had been known a priori. This could e.g., be used for accelerometer-based detection of harsh accelerations and potholes.

Chapter 7: IMU-based Smartphone-to-Vehicle Positioning

The capability of positioning a smartphone within a vehicle is of use both in smartphone-based insurance telematics and in distracted driving solutions. Although several accurate methods have been presented, they have been dependent on either vehicle-installed technology only found in modern high-end cars, or on expensive aftermarket infrastructure utilizing e.g., audio ranging or Bluetooth positioning. In Chapter 7, we investigated the possibility of using inertial measurements to position a smartphone with respect to a vehicle-fixed accelerometer triad (or an additional smartphone). Specifically, we exploited the fact that the specific force, measured by the smartphone-embedded accelerometers, varies with the smartphone's position in the vehicle. This enabled a scalable, low-cost method only dependent on inertial sensors. The presented estimation scheme was the first IMU-based method for smartphone-to-vehicle positioning to enable joint estimation in all spatial directions using an arbitrary number of smartphones. The method can be said to generalize earlier heuristically motivated methods that were limited in the sense of 1) only being applicable to a set of two smartphones; 2) only classifying (left/right or front/back) and not estimating the relative positions of two smartphones; and 3) approximating the vehicle's angular acceleration to be zero. Moreover, the Cramér-Rao bound (CRB) was used to relate the vehicle dynamics to the expected estimation accuracy. The method was evaluated in a field study in which the relative positions of seven smartphones were estimated. The resulting root-mean-square error (RMSE) was below 0.5 [m] in the horizontal plane of the vehicle, which is of the same order of magnitude as the typical distance between two people in a passenger vehicle.
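
The kinematic relation underlying the method can be illustrated with a small sketch. Assuming a rigid vehicle, two accelerometer triads that have already been aligned with the vehicle frame (e.g., using the method in Chapter 6), gyroscope-derived angular velocity and angular acceleration, and negligible sensor biases, the difference in specific force between the two triads is linear in the lever arm, which can then be estimated by least squares. The estimator in Chapter 7 is more general (joint estimation for an arbitrary number of smartphones under a statistical noise model); all numbers below are simulated for illustration only.

import numpy as np

def skew(a):
    # Cross-product matrix: skew(a) @ b == np.cross(a, b).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def estimate_lever_arm(f_ref, f_phone, omega, omega_dot):
    # Rigid-body relation, with everything expressed in the vehicle frame:
    #   f_phone(t) - f_ref(t) = (skew(omega_dot(t)) + skew(omega(t)) @ skew(omega(t))) @ r
    # Stacking all time instants gives a linear least-squares problem in the
    # lever arm r (the phone position relative to the reference triad).
    A = np.vstack([skew(wd) + skew(w) @ skew(w) for w, wd in zip(omega, omega_dot)])
    b = (f_phone - f_ref).reshape(-1)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Simulated data: a phone 0.8 m to the right of and 0.5 m behind the reference triad.
rng = np.random.default_rng(0)
r_true = np.array([-0.5, 0.8, 0.0])
N = 500
omega = 0.3 * rng.standard_normal((N, 3))      # rad/s (cornering, pitching, etc.)
omega_dot = 0.5 * rng.standard_normal((N, 3))  # rad/s^2
f_ref = rng.standard_normal((N, 3))            # common specific force at the reference
f_phone = np.array([f_ref[t] + (skew(omega_dot[t]) + skew(omega[t]) @ skew(omega[t])) @ r_true
                    for t in range(N)])
f_phone += 0.05 * rng.standard_normal((N, 3))  # accelerometer noise
print(np.round(estimate_lever_arm(f_ref, f_phone, omega, omega_dot), 2))  # approx. r_true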

Chapter 8: Smartphone Placement within Vehicles

Despite broad awareness of the close connection between a smartphone's placement in a vehicle and the utility and characteristics of vehicle data recorded from the same smartphone, there are few academic studies investigating this relation. In Chapter 8, we used kernel-based k-means clustering to infer the placement of smartphones within vehicles. Each data point was associated with a vehicle trip segment, and features were computed using data from smartphone sensors and a vehicle-fixed sensor tag equipped with an accelerometer triad. All in all, the trips were clustered into fifteen different states corresponding to the smartphone being rigidly mounted on a cradle, placed on the passenger seat, held by hand, etc. To validate the interpretation of the clusters, we compared the data associated with different clusters and related the results to real-world knowledge of driving behavior. The most distinctive feature distributions were associated with clusters of segments where the smartphone was held by hand. These segments generally displayed higher gyroscope variances, lower maximum speeds, lower tag-smartphone correlations, and shorter segment durations. The information extracted from the clustering can be used to assess the reliability of collected data (inertial measurements collected from smartphones held by hand are e.g., less suitable for the detection of harsh acceleration events, since hand motions may interfere with the measurements), for accident reconstruction (to e.g., find out whether a driver was interacting with his smartphone before an accident), and for assessments of driver distraction (phone usage while driving increases the risk of an accident). The study in Chapter 8 was conducted in close collaboration with the telematics startup Cambridge Mobile Telematics.
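
As an illustration of the clustering machinery, the following is a minimal kernel k-means sketch in the spirit of the approach used in Chapter 8 (which employed fifteen clusters and features tailored to the trip segments). Here, the per-segment features, the Gaussian kernel, and the number of clusters are all illustrative assumptions.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix for the feature vectors in the rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_kmeans(K, n_clusters, n_iter=100, seed=0):
    # Lloyd-style kernel k-means: each point is assigned to the cluster whose
    # implicit centroid is closest in the kernel-induced feature space.
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            nc = max(mask.sum(), 1)
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Hypothetical standardized per-segment features, e.g. gyroscope variance,
# maximum speed, tag-to-smartphone accelerometer correlation, and duration.
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(m, 0.3, size=(40, 4)) for m in (-1.0, 0.0, 1.0)])
labels = kernel_kmeans(rbf_kernel(features, gamma=0.5), n_clusters=3)
print(np.bincount(labels))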

Chapter 9: Fusion of OBD and GNSS Measurements of Speed

OBD and GNSS measurements are the primary sources of information in many of today's insurance telematics and fleet management programs. However, there have been no studies on how to fuse measurements from these two sources in a statistically efficient manner. In Chapter 9, we made use of OBD and GNSS measurements of speed to estimate a vehicle's speed and the scale factor of the OBD measurements. Under the assumption of synchronous measurements, it was shown that maximum likelihood (ML) estimates can be extracted from the solution to a one-dimensional convex optimization problem. In the case of asynchronous measurements, we argued for the inclusion of a prior on the vehicle dynamics, and then derived maximum a posteriori (MAP) estimates as the solution to a constrained optimization problem. To characterize the estimation problem in more detail, we considered the CRB for the model with synchronous measurements. Since the Fisher information matrix (FIM) for the original model is singular, we instead derived the CRB for a similar model with added equality constraints. As expected, the resulting CRB indicated that the speed estimates will tend to improve at higher OBD resolutions and at lower GNSS error variances. Similarly, the scale factor estimates will tend to improve at higher OBD resolutions, at lower GNSS error variances, at higher speeds, and at lower scale factors. Lastly, an experimental study was conducted based on real-world vehicle dynamics and simulated measurement errors. The experiments confirmed the qualitative conclusions from the CRB and demonstrated that the proposed estimators provide better performance than can be achieved with either of the sensors individually. The presented sensor fusion scheme is based on a sensor setup that is already very common in current insurance telematics programs. Hence, the estimators could quickly be deployed on a large scale. Although the increased performance might not be too impressive in the context of speed compliance, it may certainly be used to detect harsh braking or the use of . Additionally, the scale factor estimates provide a low-cost method for monitoring tire pressure and detecting flat tires.
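
The structure of the synchronous-measurement estimator can be sketched as follows. Note that this is not the estimator derived in Chapter 9, which accounts for the finite resolution of the OBD speed readings; the sketch below simply assumes zero-mean Gaussian errors with known variances, profiles the per-sample speeds out of the likelihood analytically, and solves the remaining one-dimensional problem in the scale factor by a bounded scalar search. All measurement values are simulated.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical synchronous measurements: y_obd[n] = k * v[n] + e_obd[n] and
# y_gnss[n] = v[n] + e_gnss[n], with assumed known Gaussian error variances.
rng = np.random.default_rng(2)
N, k_true = 200, 1.04
v_true = 15.0 + 5.0 * np.sin(np.linspace(0.0, 6.0, N))  # vehicle speed [m/s]
sigma_obd, sigma_gnss = 0.3, 0.5
y_obd = k_true * v_true + sigma_obd * rng.standard_normal(N)
y_gnss = v_true + sigma_gnss * rng.standard_normal(N)

def profile_nll(k):
    # Joint negative log-likelihood with the per-sample speeds profiled out
    # analytically; only the terms that depend on the scale factor k are kept.
    return np.sum((y_obd - k * y_gnss)**2) / (2.0 * (sigma_obd**2 + k**2 * sigma_gnss**2))

k_hat = minimize_scalar(profile_nll, bounds=(0.8, 1.2), method="bounded").x

# Given k_hat, the speed estimates are precision-weighted combinations.
v_hat = ((k_hat * y_obd / sigma_obd**2 + y_gnss / sigma_gnss**2)
         / (k_hat**2 / sigma_obd**2 + 1.0 / sigma_gnss**2))
print(round(k_hat, 4), round(float(np.sqrt(np.mean((v_hat - v_true)**2))), 3))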

Altogether, this thesis has made several important contributions to the field of smartphone-based vehicle telematics. The proposed inference methods have been motivated and analyzed by theoretical means, and their practical value has been clearly demonstrated using both numerical simulations and field studies. Hopefully, the presented findings can be of help on the path towards widespread adoption of smartphone-based telematics systems.

Future Research

Several directions for future research can be identified based on the work in this thesis.

• As demonstrated in Chapter 6, we can expect to quickly be able to estimate the smartphone-to-vehicle orientation to sufficient accuracy when the smartphone is fixed to the vehicle. The situation is, however, more complicated when the smartphone is not fixed to the vehicle. More research is needed to assess the feasibility of tracking a time-varying smartphone-to-vehicle orientation, and of building a profile of each driver's typical orientations (some drivers may e.g., always keep their smartphone in their pants pocket in approximately the same orientation).

• As discussed in Section 4.4.1 and Chapter 7, a large number of methods have been presented for smartphone-to-vehicle positioning and driver/passenger classification. The methods employ different sensors, have different logistic requirements, and have different capabilities. What is needed now is a description of how to combine these different information sources, and a way to evaluate the marginal value of an added information source. For the insurance telematics industry, it may also be of interest to study the problem of assessing which trips, recorded using only smartphone-embedded sensors, were made with the insured vehicle (and not with e.g., a cab). If this problem can be solved in a satisfactory manner, it would strengthen the case for insurance telematics programs that only rely on smartphone-embedded sensors.

• Speed estimates obtained using the estimation framework presented in Chapter 9 may be differentiated to construct acceleration estimates. These could then in turn be used for the detection of harsh braking. However, in this situation, a more natural approach may be to construct differentiators that use the speed measurements to directly estimate the vehicle's acceleration. Normally, the differentiators would be constructed to e.g., minimize the mean square error (MSE) of the acceleration estimates while using information about the spectral density of typical vehicle dynamics (a simple smoothing-differentiator alternative is sketched below).
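
As a simple point of reference for the last item, the sketch below differentiates noisy speed samples with a Savitzky-Golay smoothing differentiator. This is a common practical alternative, not the MSE-optimal differentiator referred to above, which would additionally exploit the spectral density of typical vehicle dynamics; the sampling rate, window length, and noise level are illustrative assumptions.

import numpy as np
from scipy.signal import savgol_filter

# Hypothetical speed samples at 1 Hz with additive measurement noise.
rng = np.random.default_rng(3)
t = np.arange(0.0, 120.0, 1.0)
speed_true = 15.0 + 3.0 * np.sin(0.2 * t)                    # [m/s]
speed_meas = speed_true + 0.4 * rng.standard_normal(t.size)

# Smoothing differentiator: fit a local quadratic in a 9-sample window and
# evaluate its first derivative (delta is the sampling interval in seconds).
accel_est = savgol_filter(speed_meas, window_length=9, polyorder=2, deriv=1, delta=1.0)
accel_true = 3.0 * 0.2 * np.cos(0.2 * t)                     # [m/s^2]
print(round(float(np.sqrt(np.mean((accel_est - accel_true)**2))), 3))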


Sharing real-time visual traffic information via vehicular clouds,” IEEE Access, vol. 4, pp. 3617–3631, May 2016. [289] J. Liu, B. Priyantha, T. Hart, H. S. Ramos, A. A. F. Loureiro, and Q. Wang, “Energy efficient GPS sensing with cloud offloading,” in Proc. ACM Conf. Embedded Netw. Sensor Syst., Toronto, ON, Nov. 2012, pp. 85–98. [290] P. K. Misra, W. Hu, Y. Jin, J. Liu, A. Souza de Paula, N. Wirström, and T. Voigt, “Energy efficient GPS acquisition with sparse-GPS,” in Proc. Int. Symp. Inf. Process. in Sensor Netw., Piscataway, NJ, Nov. 2014, pp. 155–166. [291] “Linking driving behavior to automobile accidents and insurance rates,” Progressive, Tech. Rep., Jul. 2012. [292] S. G. Klauer, T. A. Dingus, V. L. Neale, J. D. Sudweeks, and D. J. Ramsey, “Comparing real-world behaviors of drivers with high versus low rates of crashes and near-crashes,” National Highway Traffic Safety Administration, Tech. Rep., Feb. 2009. [293] M. Yamakado, J. Takahashi, S. Saito, and M. Abe, “G-vectoring, new vehicle dynamics control technology for safe driving,” Ind. Syst., vol. 58, no. 7, pp. 346–350, Dec. 2009. [294] C. Saiprasert, S. Thajchayapong, T. Pholprasit, and C. Tanprasert, “Driver behaviour profiling using smartphone sensory data in a V2I en- vironment,” in Proc. IEEE Int. Conf. Connected Vehicles Expo, Vienna, Austria, Nov. 2014, pp. 552–557. [295] C. Saiprasert, T. Pholprasit, and S. Thajchayapong, “Detection of driv- ing events using sensory data on smartphone,” Int. J. Intell. Transpor. Syst. Research, Jul. 2015. [296] F. J. Bruwer and M. J. Booysen, “Comparison of GPS and MEMS sup- port for smartphone-based driver behavior monitoring,” in Proc. IEEE Symp. Comput. Intell., Cape Town, South Africa, Dec. 2015, pp. 434– 441. [297] P. Girling, Ed., Automotive Handbook, 4th ed. Robert Bosch GmbH, 1996. [298] I. Skog and P. Händel, “In-car positioning and navigation technologies — A survey,” IEEE Trans. Intell. Transport. Syst., vol. 10, no. 1, pp. 4–21, Mar. 2009. [299] G. Castignani, T. Derrmann, R. Frank, and T. Engel, “Driver behavior profiling using smartphones: A low-cost platform for driver monitoring,” IEEE Intell. Transport. Syst. Mag., vol. 7, no. 1, pp. 91–102, Jan. 2015. 198 BIBLIOGRAPHY

[300] R. Stoichkov, “Android smartphone application for driving style recog- nition,” Dept. Electr. Eng. Inf. Technol., Tech. Rep., Jul. 2013. [301] D. Chen, K.-T. Cho, S. Han, Z. Jin, and K. G. Shin, “Invisible sensing of vehicle steering with smartphones,” in Proc. Int. Conf. Mobile Syst., Appl., Services, Florence, Italy, May 2015. [302] L. Bergasa, D. Almeria, J. Almazan, J. Yebes, and R. Arroyo, “DriveSafe: An app for alerting inattentive drivers and scoring driv- ing behaviors,” in Proc. IEEE Intell. Veh. Symp., Dearborn, MI, Jun. 2014, pp. 240–245. [303] A. Chowdhury, T. Banerjee, T. Chakravarty, and P. Balamuralidhar, “Smartphone based estimation of relative risk propensity for inducing good driving behavior,” in Proc. ACM Int. Conf. Pervasive Ubiquitous Comput. and Proc. ACM Int. Symp. Wearable Comput., Osaka, Japan, Sep. 2015, pp. 743–751. [304] T. Banerjee, A. Chowdhury, and T. Chakravarty, “MyDrive: Drive be- havior analytics method and platform,” in Proc. Workshop Physical An- alytics, Singapore, Singapore, Jun. 2016, pp. 7–12. [305] R. Vaiana, T. Iuele, V. Astarita, M. V. Caruso, A. Tassitani, C. Zaffino, and V. P. Giofrè, “Driving behavior and traffic safety: An acceleration- based safety evaluation procedure for smartphones,” Modern Appl. Sci., vol. 8, no. 1, pp. 88–96, Jan. 2014. [306] J. Wahlström, I. Skog, and P. Händel, “Risk assessment of vehicle cor- nering events in GNSS data driven insurance telematics,” in Proc. IEEE Int. Conf. Intell. Transport. Syst., Qingdao, China, Oct. 2014, pp. 3132– 3137. [307] N. Akhtar, K. Pandey, and S. Gupta, “Mobile application for safe driv- ing,” in Proc. IEEE Int. Conf. Commun. Syst. Netw. Technol., Bhopal, India, Apr. 2014, pp. 212–216. [308] S. Daptardar, V. Lakshminarayanan, S. Reddy, S. Nair, S. Sahoo, and P. Sinha, “Hidden Markov model based driving event detection and driver profiling from mobile inertial sensor data,” in SENSORS, Busan, South Korea, Nov. 2015. [309] J. Engelbrecht, M. J. Booysen, and G.-J. van Rooyen, “Recognition of driving manoeuvres using smartphone-based inertial and GPS mea- surement,” in Proc. Int. Conf. Use of Mobile ICT, Stellenbosch, South Africa, Dec. 2014. [310] J. Engelbrecht, M. J. Booysen, G. J. van Rooyen, and F. J. Bruwer, “Performance comparison of dynamic time warping (DTW) and a max- BIBLIOGRAPHY 199

imum likelihood (ML) classifier in measuring driver behavior with smart- phones,” in Proc. IEEE Symp. Comput. Intell., Cape Town, South Africa, Dec 2015, pp. 427–433. [311] T. Pholprasit, W. Choochaiwattana, and C. Saiprasert, “A comparison of driving behaviour prediction algorithm using multi-sensory data on a smartphone,” in Proc. IEEE/ACIS Int. Conf. Software Eng., Artificial Intell., Netw. Parallel/Distributed Comput., Takamatsu, Japan, Jun. 2015. [312] Z. Liu, M. Wu, K. Zhu, and L. Zhang, “SenSafe: A smartphone-based traffic safety framework by sensing vehicle and pedestrian behaviors,” Mobile Inf. Syst., Sep. 2016. [313] A. Buscarino, L. Fortuna, and M. Frasca, “Driving assistance us- ing smartdevices,” in Proc. IEEE Int. Symp. Intell. Control, Antibes, France, Oct. 2014, pp. 838–842. [314] M. Fazeen, B. Gozick, R. Dantu, M. Bhukhiya, and M. Gonzalez, “Safe driving using mobile phones,” IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1462–1468, Sep. 2012. [315] N. Kalra, G. Chugh, and D. Bansal, “Analyzing driving and road events via smartphone,” Int. J. Comput. Appl., vol. 98, no. 12, pp. 5–9, Jul. 2014. [316] S. Chigurupati, S. Polavarapu, Y. Kancherla, and A. K. Nikhath, “In- tegrated computing system for measuring driver safety index,” Int. J. Emerg. Technol. Advanced Eng., vol. 2, no. 6, Jun. 2012. [317] T. Chakravarty, A. Ghose, C. Bhaumik, and A. Chowdhury, “Mo- biDriveScore: A system for mobile sensor based driving analysis: A risk assessment model for improving one’s driving,” in Proc. Int. Conf. Sens. Technol., Wellington, New Zealand, Dec. 2013, pp. 338–344. [318] S. Khedkar, A. Oswal, M. Setty, and S. Ravi, “Driver evaluation sys- tem using mobile phone and OBD-II system,” Int. J. Comput. Sci. Inf. Technol., vol. 6, no. 3, pp. 2738–2745, May 2015. [319] M. A. Ylizaliturri-Salcedo, M. Tentori, and J. A. Garcia-Macias, Ubiqui- tous Comput. Ambient Intell.. Sensing, Process., Using Environmental Inf. Springer, Dec. 2015, vol. 9454, ch. Detecting Aggressive Driving Behavior with Participatory Sensing, pp. 249–261. [320] Z. Ouyang, J. Niu, Y. Liu, and J. Rodrigues, “Multiwave: A novel vehicle steering pattern detection method based on smartphones,” in Proc. IEEE ICC Ad-hoc Sensor Netw. Symp., Kuala Lumpur, Malaysia, May 2016. 200 BIBLIOGRAPHY

[321] N. AbuAli, “Advanced vehicular sensing of road artifacts and driver behavior,” in Proc. IEEE Symp. Comput. Commun., Larnaca, Cyprus, Jul. 2015, pp. 45–49. [322] J. Carmona, F. García, D. Martín, A. de la Escalera, and J. M. Armin- gol, “Data fusion for driver behaviour analysis,” Sensors, vol. 15, no. 10, pp. 25 968–25 991, Oct. 2015. [323] G. Castignani, T. Derrmann, R. Frank, and T. Engel, “Smartphone- based adaptive driving maneuver detection: A large-scale evaluation study,” IEEE Trans. Intell. Transport. Syst., vol. 18, no. 9, pp. 2330– 2339, Sep. 2017. [324] S. V. Hosseinioun, H. Al-Osman, and A. E. Saddik, “Employing sensors and services fusion to detect and assess driving events,” in Proc. IEEE Int. Symp. Multimedia, Miami, FL, Dec. 2015, pp. 395–398. [325] P. Tchankue, J. Wesson, and D. Vogts, “Using machine learning to pre- dict the driving context whilst driving,” in Proc. South African Institute Comput. Sci. Inf. Technol. Conf., East London, South Africa, Oct. 2013, pp. 47–55. [326] P. Chaovalit, C. Saiprasert, and T. Pholprasit, “A method for driving event detection using SAX on smartphone sensors,” in Proc. Int. Conf. ITS Telecommun., Tampere, Finland, Nov. 2013, pp. 450–455. [327] G. Castignani, T. Derrmann, R. Frank, and T. Engel, “Validation study of risky event classification using driving pattern factors,” in Proc. IEEE Symp. Commun. Vehicular Technol. in the Benelux, Luxembourg- Kirchberg, Luxembourg, Nov. 2015. [328] V. Bhoyar, P. Lata, J. Katkar, A. Patil, and D. Javale, “Symbian based rash driving detection system,” Int. J. Emerging Trends & Technol in Comput. Sci., vol. 2, no. 2, pp. 124–126, Mar. 2013. [329] C. Antoniou, V. Papathanasopoulou, V. Gikas, C. Danezis, and H. Per- akis, “Classification of driving characteristics using smartphone sensor data,” in Proc. Symp. European Association for Research in Transport., Leeds, UK, Sep. 2014. [330] J. Meseguer, C. Calafate, J. Cano, and P. Manzoni, “DrivingStyles: A smartphone application to assess driver behavior,” in Proc. IEEE Symp. Comput. Commun., Split, Croatia, Jul. 2013, pp. 535–540. [331] J. Yu, Z. Chen, Y. Zhu, Y. Chen, L. Kong, and M. Li, “Fine-grained abnormal driving behaviors detection and identification with smart- phones,” IEEE Trans. Mobile Comput., vol. 16, no. 8, pp. 2198–2212, Aug. 2017. BIBLIOGRAPHY 201

[332] A. A. Tecimer, Z. C. Taysi, A. G. Yavuz, and M. E. Karsligil Yavuz, “As- sessment of vehicular transportation quality via smartphones,” Turkish J. Elect. Eng. & Comput. Sci., vol. 23, pp. 2161–2170, Aug. 2013. [333] M. V. Ly, S. Martin, and M. Trivedi, “Driver classification and driving style recognition using inertial sensors,” in Proc. IEEE Symp. Intell. Vehicles, Gold Coast City, Australia, Jun. 2013, pp. 1040–1045. [334] M. Wu, S. Zhang, and Y. Dong, “A novel model-based driving behavior recognition system using motion sensors,” Sensors, vol. 16, no. 10, p. 1746, Oct. 2016. [335] S. E. Lee, E. C. B. Olsen, and W. W. Wierwille, “A comprehensive ex- amination of naturalistic lane-changes,” US Department of Transport., Tech. Rep., Mar. 2004. [336] J. W. Joubert, D. de Beer, and N. de Koker, “Combining accelerometer data and contextual variables to evaluate the risk of driver behaviour,” Transport. Research Part F: Traffic Psychology and Behaviour, vol. 41, Part A, pp. 80–96, Aug. 2016. [337] T. Toledo, O. Musicant, and T. Lotan, “In-vehicle data recorders for monitoring and feedback on driver’s behavior,” Transport. Research Part C: Emerging Technol., vol. 16, no. 3, pp. 320–331, Jun. 2008. [338] J. Whipple, W. Arensman, and M. Boler, “A public safety application of GPS-enabled smartphones and the Android operating system,” in Proc. IEEE Int. Conf. Systems, Man and Cybernetics, San Antonio, TX, Oct. 2009, pp. 2059–2061. [339] T. Chakravarty, A. Chowdhury, A. Ghose, C. Bhaumik, and P. Bal- amuralidhar, “Statistical analysis of road-vehicle-driver interaction as an enabler to designing behavioral models,” Int. J. Model. Simul. Sci. Comput., vol. 5, Oct. 2014. [340] M. Lindfors, G. Hendeby, F. Gustafsson, and R. Karlsson, “Vehicle speed tracking using chassis vibrations,” in Proc. IEEE Intell. Vehicles Symp., Göteborg, Sweden, Jun. 2016, pp. 214–219. [341] P. Händel, “Discounted least-squares gearshift detection using ac- celerometer data,” IEEE Trans. Instrum. Meas., vol. 58, no. 12, pp. 3953–3958, Dec. 2009. [342] S. Singh, S. Nelakuditi, R. Roy Choudhury, and Y. Tong, “Your smart- phone can watch the road and you: Mobile assistant for inattentive drivers,” in Proc. ACM Int. Symp. Mobile Ad Hoc Netw. Comput., Hilton Head, SC, Jun. 2012, pp. 261–262. [343] C.-W. You, N. D. Lane, F. Chen, R. Wang, Z. Chen, T. J. Bao, 202 BIBLIOGRAPHY

M. Montes-de Oca, Y. Cheng, M. Lin, L. Torresani, and A. T. Campbell, “CarSafe App: Alerting drowsy and distracted drivers using dual cam- eras on smartphones,” in Proc. Int. Conf. Mobile Syst., Appl., Services, Taipei, Taiwan, Jun. 2013, pp. 13–26. [344] V. Douangphachanh, “The development of a simple method for network- wide road surface roughness condition estimation and monitoring using smartphone sensors,” Ph.D. dissertation, Tokyo Metropolitan Univer- sity, Sep. 2014. [345] F. Seraj, B. J. van der Zwaag, A. Dilo, T. Luarasi, and P. Havinga, Big Data Analytics in the Social and Ubiquitous Context. Springer, Jan. 2016, vol. 9546, ch. RoADS: A Road Pavement Monitoring System for Anomaly Detection Using Smart Phones, pp. 128–146. [346] G. Strazdins, A. Mednis, G. Kanonirs, R. Zviedris, and L. Selavo, “To- wards vehicular sensor networks with Android smartphones for road surface monitoring,” in Proc. Int. Workshop Netw. Cooperating Objects, Chicago, IL, Apr. 2011. [347] A. Mednis, G. Strazdins, R. Zviedris, G. Kanonirs, and L. Selavo, “Real time pothole detection using Android smartphones with accelerome- ters,” in Proc. Int. Conf. Distrib. Comput. Sensor Syst. Workshops, Barcelona, Spain, Jun. 2011. [348] A. Ghose, P. Biswas, C. Bhaumik, M. Sharma, A. Pal, and A. Jha, “Road condition monitoring and alert application: Using in-vehicle smartphone as internet-connected sensor,” in Proc. IEEE Int. Conf. Pervasive Comput. Commun. Workshops, Lugano, Switzerland, Mar. 2012, pp. 489–491. [349] V. Astarita, M. V. Caruso, G. Danieli, D. C. Festa, V. P. Giofre, T. Iuele, and R. Vaiana, “A mobile application for road surface quality control,” in Proc. Meeting EURO Working Group on Transport., Paris, France, Sep. 2012, pp. 1135–1144. [350] K. Darawade, P. Karmare, S. Kothmire, and N. Panchal, “Estimation of road surface roughness condition from Android smartphone sensors,” Int. J. Recent Trends in Eng. & Research, vol. 2, no. 3, pp. 339–346, Mar. 2016. [351] D. V. Mahajan and T. Dange, “Analysis of road smoothness based on smartphones,” Int. J. Innovative Research in Comput. Commun. Eng., vol. 3, no. 6, pp. 5201–5206, Jun. 2015. [352] P. S. Toke, S. Maheshwari, A. Asawale, A. Dhamale, and N. Paul, “Road quality & valley complexity analysis using Android application,” Int. J. Advanced Research Comput. Commun. Eng., vol. 5, no. 5, pp. 732–736, BIBLIOGRAPHY 203

May 2016. [353] K. Kulkarni, K. V. Prashant, S. Nasikkar, T. Ahuja, and N. Mhetre, “Predicting road anomalies using sensors in smartphones,” Imperial J. Interdisciplinary Research, vol. 2, no. 6, pp. 219–222, 2016. [354] B. Lanjewar, R. Sagar, R. Pawar, J. Khedkar, and K. Gosavi, “Road bump and intensity detection using smartphone sensors,” Int. J. Inno- vative Research Comput. Commun. Eng., vol. 4, no. 5, pp. 9185–9192, May 2016. [355] V. P. Tonde, A. Jadhav, S. Shinde, A. Dhoka, and S. Bablade, “Road quality and ghats complexity analysis using Android sensors,” Int. J. Advanced Research Comput. Commun. Eng., vol. 4, no. 3, pp. 101–104, Mar. 2015. [356] M. Perttunen, O. Mazhelis, F. Cong, M. Kauppila, T. Leppänen, J. Kantola, J. Collin, S. Pirttikangas, J. Haverinen, T. Ristaniemi, and J. Riekki, “Distributed road surface condition monitoring using mobile phones,” in Ubiquitous Intell. Comput., Sep. 2011, vol. 6905, pp. 64–78. [357] A. Mednis, G. Strazdins, M. Liepins, A. Gordjusins, and L. Selavo, Netw. Digit. Technol., Jan. 2010, vol. 88, ch. RoadMic: Road surface monitoring using vehicular sensor networks with microphones, pp. 417– 429. [358] D. Rajamohan, B. Gannu, and K. S. Rajan, “MAARGHA: A prototype system for road condition and surface type estimation by fusing multi- sensor data,” ISPRS Int. J. Geo-Inf., vol. 4, no. 3, pp. 1225–1245, Jul. 2015. [359] F. Orhan and P. E. Eren, “Road hazard detection and sharing with multimodal sensor analysis on smartphones,” in Proc. Int. Conf. Next Generation Mobile Apps, Services Technol., Prague, Czech Republic, Sep. 2013, pp. 56–61. [360] H.-W. Wang, C.-H. Chen, D.-Y. Cheng, C.-H. Lin, and C.-C. Lo, “A real-time pothole detection approach for intelligent transportation sys- tem,” Mathematical Problems in Eng., Aug. 2015. [361] A. Vittorio, V. Rosolino, I. Teresa, C. M. Vittoria, P. G. Vincenzo, and D. M. Francesco, “Automated sensing system for monitoring of road surface quality by mobile devices,” Procedia — Social and Behavioral Sci., vol. 111, pp. 242–251, Feb. 2014. [362] G. Xue, H. Zhu, Z. Hu, W. Zhuo, C. Yang, Y. Zhu, J. Yu, and Y. Luo, “Pothole in the dark: Perceiving pothole profiles with participatory ur- ban vehicles,” IEEE Trans. Mobile Comput., vol. 16, no. 5, pp. 1408– 204 BIBLIOGRAPHY

1419, May 2017. [363] F. Seraj, N. Meratnia, K. Zhang, P. J. Havinga, and O. Turkes, “A smartphone based method to enhance road pavement anomaly detection by analyzing the driver behavior,” in Proc. ACM Int. Conf. Pervasive Ubiquitous Comput., Osaka, Japan, Sep. 2015, pp. 1169–1177. [364] M. Jain, N. I. S. B. Ajeet Pal Singh, JIIT, and S. Kaul, “Speed-breaker early warning system,” in Proc. USENIX/ACM Workshop Netw. Syst. for Developing Regions, Boston, MA, Jun. 2012. [365] G. Alessandroni, L. C. Klopfenstein, S. Delpriori, M. Dromedari, G. Luchetti, B. D. Paolini, A. Seraghiti, E. Lattanzi, V. Freschi, A. Carini, and A. Bogliolo, “SmartRoadSense: Collaborative road sur- face condition monitoring,” in Proc. Int. Conf. Mobile Ubiquitous Com- put., Syst., Services Technol., Rome, Italy, Aug. 2014, pp. 210–215. [366] Y.-c. Tai, C.-w. Chan, and J. Y.-j. Hsu, “Automatic road anomaly detec- tion using smart mobile device,” in Proc. Conf. Technol. Appl. Artificial Intell., Hsinchu, Taiwan, Nov. 2010. [367] L. Forslöf and H. Jones, “Roadroid: Continuous road condition monitor- ing with smart phones,” J. Civil Eng. Architecture, vol. 9, pp. 485–496, 2015. [368] A. Mohamed, M. M. M. Fouad, E. Elhariri, N. El-Bendary, H. M. Zaw- baa, M. Tahoun, and A. E. Hassanien, Advances in Intelligent Systems and Computing. Springer, Sep. 2014, vol. 323, ch. RoadMonitor: An Intelligent Road Surface Condition Monitoring System, pp. 377–387. [369] D. V. Mahajan and T. Dange, “Estimation of road roughness condition by using sensors in smartphones,” Int. J. Comput. Eng. & Technol., vol. 6, no. 7, pp. 41–49, Jul. 2015. [370] I. Skog, A. Schumacher, and P. Händel, “A versatile PC-based platform for inertial navigation,” in Proc. Nordic Signal Process. Symp., Reyk- javik, Iceland, Jun. 2006, pp. 262–265. [371] I. Skog and P. Händel, “Calibration of a MEMS inertial measurement unit,” in Proc. IMEKO XVIII World Congress, Rio de Janerio, Brazil, Sep. 2006. [372] P. Händel, B. Enstedt, and M. Ohlsson, “Combating the effect of chas- sis squat in vehicle performance calculations by accelerometer measure- ments,” Meas., vol. 43, no. 4, pp. 483–488, May 2010. [373] J. Farrell, Aided Navigation: GPS with High Rate Sensors, 1st ed. New York, NY, USA: McGraw-Hill, Inc., 2008. [374] J. C. Herrera, “Assessment of GPS-enabled smartphone data and its BIBLIOGRAPHY 205

use in traffic state estimation for highways,” University of California, Berkeley, Tech. Rep., 2009. [375] R. Zantout, M. Jrab, L. Hamandi, and F. Sibai, “Fleet management au- tomation using the global positioning system,” in Int. Conf. Innovations in Inf. Technol., Dec. 2009, pp. 30–34. [376] A. Corti, V. Manzoni, S. Savaresi, M. Santucci, and O. Di Tanna, “A centralized real-time driver assistance system for road safety based on smartphone,” in Advanced Microsystems for Automotive Applications. Springer Berlin Heidelberg, 2012, pp. 221–230. [377] P. Mohan, V. N. Padmanabhan, and R. Ramjee, “Trafficsense: Rich monitoring of road and traffic conditions using mobile smartphones,” Microsoft Research, Tech. Rep., Apr. 2008. [378] K. Yagi, “Extensional smartphone probe for road bump detection,” in Proc. Intell. Transport. Syst. World Congress, Busan, Korea, Oct. 2010. [379] M. Reineh, M. Enqvist, and F. Gustafsson, “Detection of roof load for automotive safety systems,” in Conf. Decis. Contr., Florence, Italy, Dec. 2013, pp. 2840–2845. [380] S. Sukkarieh, “Low cost, high integrity, aided inertial navigation systems for autonomous land vehicles,” Department of Mechanical and Mecha- tronic Engineering, The University of Sydney, Tech. Rep., Mar. 2000. [381] J. Mendel, “Computational requirements for a discrete Kalman filter,” IEEE Trans. Autom. Control, vol. 16, no. 6, pp. 748–758, Dec. 1971. [382] J.-O. Nilsson and P. Händel, “Recursive Bayesian initialization of local- ization based on ranging and dead reckoning,” in IEEE, Intell. Robots and Syst. Int. Conf., Tokyo, Japan, Nov. 2013, pp. 1399–1404. [383] J.-O. Nilsson, I. Skog, and P. Handel, “Aligning the forces — Eliminating the misalignments in IMU arrays,” IEEE Trans. Instrum. Meas., vol. 63, no. 10, pp. 2498–2500, Oct. 2014. [384] I. Rhee, M. Abdel-Hafez, and J. Speyer, “Observability of an inte- grated GPS/INS during maneuvers,” IEEE Trans. Aerosp. Electron. Syst., vol. 40, no. 2, pp. 526–535, Apr. 2004. [385] R. Ansar, P. Sarampakhul, S. Ghosh, N. Mitrovic, M. Asif, J. Dauwels, and P. Jaillet, “Evaluation of smart-phone performance for real-time traffic prediction,” in Proc. IEEE Int. Conf. Intell. Transport. Syst., Qingdao, China, Oct. 2014, pp. 3010–3015. [386] M. Yaqub and I. Gondal, “Smart phone based vehicle condition moni- toring,” in Prof. IEEE Int. Conf. Ind. Electron. Appl., Melbourne, Aus- tralia, Jun. 2013, pp. 267–271. 206 BIBLIOGRAPHY

[387] W.-J. Guan and L.-K. Chen, “Development of advanced driver assist systems using smart phones,” in Proc. IEEE Int. Conf. Advanced Robot. and Intell. Syst., Taipei, Taiwan, Jun. 2014, pp. 176–181. [388] C. Miyajima, Y. Nishiwaki, K. Ozawa, T. Wakita, K. Itou, K. Takeda, and F. Itakura, “Driver modeling based on driving behavior and its evaluation in driver identification,” Proc. IEEE, vol. 95, no. 2, pp. 427– 437, Feb. 2007. [389] J. L. Crassidis and J. L. Junkins, Optimal Estimation of Dynamic Sys- tems, 2nd ed. Chapman & Hall/CRC Press, 2012. [390] N. Bergman and L. Ljung, “Point-mass filter and Cramér-Rao bound for terrain-aided navigation,” in Proc. IEEE Int. Conf. Decision and Control, San Diego, CA, Dec. 1997, pp. 565–570. [391] P. Schopp, L. Klingbeil, C. Peters, and Y. Manoli, “Design, geome- try evaluation, and calibration of a gyroscope-free inertial measurement unit,” Sensors and Actuators A: Physical, vol. 162, no. 2, pp. 379 – 387, 2010. [392] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge University Press, 1991. [393] D. Condurache and M. Matcovschi, “Computation of angular velocity and acceleration tensors by direct measurements,” Acta Mechanica, vol. 153, no. 3-4, pp. 147–167, 2002. [394] C. Zhou, H. Jia, Z. Juan, X. Fu, and G. Xiao, “A data-driven method for trip ends identification using large-scale smartphone-based GPS tracking data,” IEEE Trans. Intell. Transport. Syst., vol. 18, no. 8, pp. 2096– 2110, Aug. 2017. [395] K. S. Huang, P. J. Chiu, H. M. Tsai, C. C. Kuo, H. Y. Lee, and Y. C. F. Wang, “RedEye: Preventing collisions caused by red-light running scoot- ers with smartphones,” IEEE Trans. Intell. Transport. Syst., vol. 17, no. 5, pp. 1243–1257, May 2016. [396] S. Ma, Y. Zheng, and O. Wolfson, “Real-time city-scale taxi rideshar- ing,” IEEE Trans. Knowledge and Data Eng., vol. 27, no. 7, pp. 1782– 1795, Jul. 2015. [397] I. Dhillon, Y. Guan, and B. Kulis, “A unified view of kernel k-means, spectral clustering and graph cuts,” University of Texas at Austin, Tech. Rep., Feb. 2005. [398] L. Girod, H. Balakrishnan, and S. Madden, “Inference of vehicular tra- jectory characteristics with personal mobile devices,” Sep. 2014, US Patent App. 13/832,456. BIBLIOGRAPHY 207

[399] D. Q. Huynh, “Metrics for 3D rotations: Comparison and analysis,” J. Mathematical Imaging and Vision, vol. 35, no. 2, pp. 155–164, Oct. 2009. [400] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning. Springer, 2008. [401] D. Mihai and M. Mocanu, “Statistical considerations on the k-means algorithm,” Ann. University of Craiova, Mathematics Comput. Sci. Se- ries, vol. 42, no. 2, pp. 365–373, Dec. 2015. [402] N. Milosavljevic and D. Okanovic, “Automatic parameter selection for clustering of correlated high-dimensional datasets,” in Proc. IEEE Int. Conf. Inf. Sci., Technol., Wuhan, China, Mar. 2012. [403] L. Zelnik-Manor and P. Perona, “Self-tuning spectral clustering,” Ad- vances in Neural Inf. Process. Syst., vol. 17, pp. 1601–1608, Dec. 2004. [404] G. M. Fitch, S. A. Soccolich, F. Guo, J. McClafferty, Y. Fang, R. L. Olson, M. A. Perez, R. J. Hanowski, J. M. Hankey, and T. A. Dingus, “The impact of hand-held and hands-free cell phone use on driving per- formance and safety-critical event risk,” National Highway Traffic Safety Administration, Tech. Rep., Apr. 2013. [405] R. Matheson, “Could this app make you a better driver?” Jan. 2016, MIT News. [406] “Smart transportation market by solution type (smart ticketing, park- ing management, passenger information, traffic management, integrated supervision, and insurance telematics), service, and region — Global forecast to 2021,” Tech. Rep., Aug. 2017, Markets and markets, Report Code: TC 2365. [407] G. Andria, F. Attivissimo, A. D. Nisio, A. M. L. Lanzolla, and A. Pelle- grino, “Design and implementation of automotive data acquisition plat- form,” in Proc. IEEE Int. Instrum. Meas. Technol., Pisa, Italy, May 2015, pp. 272–277. [408] J. Xu, H. He, F. Qin, and L. Chang, “A novel autonomous initial align- ment method for strapdown inertial navigation system,” IEEE Trans. Instrum. Meas., vol. 66, no. 9, pp. 2274–2282, Sep. 2017. [409] A. E. Kubba and K. Jiang, “A comprehensive study on technologies of tyre monitoring systems and possible energy solutions,” Sensors, vol. 14, no. 6, pp. 10 306–10 345, Jun. 2014. [410] S. Velupillai and L. Guvenc, “Tire pressure monitoring,” IEEE Control Syst., vol. 27, no. 6, pp. 22–25, Dec. 2007. [411] S. Gaglione and M. Petovello, “How does a GNSS receiver estimate 208 BIBLIOGRAPHY

velocity?” insideGNSS, vol. 10, no. 2, pp. 38–41, Mar. 2015. [412] D. I. Katzourakis, E. Velenis, D. Abbink, R. Happee, and E. Holweg, “Race-car instrumentation for driving behavior studies,” IEEE Trans. Instrum. Meas., vol. 61, no. 2, pp. 462–474, Feb. 2012. [413] “Global OBD vehicle communication software manual,” Snap-on, Tech. Rep., Aug. 2013. [414] F. Gustafsson, “Rotational speed sensors: Limitations, pre-processing and automotive applications,” IEEE Trans. Instrum. Meas., vol. 13, no. 2, pp. 16–23, Apr. 2010. [415] M. Matosevic, Z. Salcic, and S. Berber, “A comparison of accuracy using a GPS and a low-cost DGPS,” IEEE Trans. Instrum. Meas., vol. 55, no. 5, pp. 1677–1683, Oct. 2006. [416] A. Forsgren, P. E. Gill, and M. H. Wright, “Interior methods for nonlin- ear optimization,” SIAM Rev., vol. 44, no. 4, pp. 525–597, Apr. 2002. [417] S. M. Kay, Fundamentals of Statistical Signal Processing: Detection The- ory. Prentice-Hall, 1998. [418] S. L. Miller, B. Youngberg, A. Millie, P. Schweizer, and C. Gerdes, “Cal- culating longitudinal wheel slip and tire parameters using GPS veloc- ity„” in Proc. Int. Conf. American Controls, Arlington, VA, Jun. 2001, pp. 1800–1805. [419] T. J. Moore, “A theory of Cramér-Rao bounds for constrained paramet- ric models,” Ph.D. dissertation, University of Maryland, 2010. [420] DoD, “NAVSTAR GPS — User equipment introduction,” US Depart- ment of Defense, Public Release Version, Tech. Rep., Sep. 1996. [421] “Usage-based insurance supplier ranking 2016,” Jun. 2016, Ptolemus Consulting Group.