Research, Design, and Implementation of Virtual and Experimental Environment for CAV

System Design, Calibration, Validation and Verification

THESIS

Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the

Graduate School of The Ohio State University

By

Shlok Goel, B.S.

Graduate Program in Mechanical Engineering

The Ohio State University

Thesis Committee:

Dr. Shawn Midlam-Mohler, Advisor

Dr. Lisa Fiorentini

Copyright by

Shlok Goel

2020

Abstract

The EcoCAR Mobility Challenge is the current iteration of the Advanced Vehicle Technology Competitions and challenges twelve universities across North America to re-engineer a 2019 Chevrolet Blazer into a connected and automated vehicle. The competition goal is to design, prototype, test, and validate an SAE Level 2 advanced driver assistance system.

This work outlines the development process of an SAE Level 2 perception system. The process began by defining system- and component-level requirements that initiated a rigorous sensor and hardware selection process. Then, to prototype, test, and validate the system, a V-model approach was followed, which included validation and verification of the system in multiple test environments. The role of each test environment in the validation process, along with its advantages and shortcomings, is discussed in detail, followed by the evolution of the perception system throughout Year 1 and Year 2 of the competition. Next, three case studies covering the different subsystems of the perception controller are discussed: the I/O layer, the fault diagnostics, and the sensor calibration. Each of these sub-algorithms used various modeling environments to increase the reliability and accuracy of the perception system.

This work serves as the foundation of the connected and automated vehicle perception system and will be vital in the implementation of advanced driver assistance features such as adaptive cruise control, lane centering control, and lane change on demand in future years of this competition.

Dedication

This work is dedicated to my parents who have constantly strived to inculcate the value of education and supported me in my passion to pursue engineering.

Acknowledgements

I would like to begin by thanking my advisor, Prof. Shawn Midlam-Mohler, for giving me the opportunity to work on the OSU EcoCAR team and for his constant guidance throughout my two years as a graduate student. I would like to thank my CAV Co-Lead, Akshra Ramakrishnan, for her constant support on this document and in this research, and Kristina Kuwabara, without whom the work compiled in this thesis would have been incomplete. I would like to acknowledge Evan Stoddart for being a great mentor and assisting me at the start of my EcoCAR journey.

I would like to thank Michael Schoenleb and Subash Chebolu for their constant help and eagerness to learn and contribute to the development of the CAV perception system. I thank Phillip Dalke and Kerri Loyd for their mechanical and electrical assistance in setting up the mule vehicle for CAV testing.

Finally, I would like to thank General Motors, MathWorks, and Argonne National Laboratory for giving me the opportunity to develop my leadership and engineering skills through the EcoCAR program.


Vita

2014…………………………………………………...Delhi Public School RK Puram, New Delhi

2018…………………………………...B.Tech. Mechanical Engineering, VIT University, Vellore

2020………………………………….…M.S. Mechanical Engineering, The Ohio State University

Fields of Study

Major Field: Mechanical Engineering

Table of Contents

Abstract ...... i

Dedication ...... ii

Acknowledgements ...... iii

Vita ...... iv

Fields of Study ...... iv

List of Tables ...... ix

List of Figures ...... xi

Appendix: List of Symbols and Abbreviations ...... xvi

Introduction ...... 1

1.1 EcoCAR Mobility Challenge ...... 2

1.2 Vehicle Architecture ...... 4

1.3 Project Objective ...... 7

1.4 Project Timeline...... 7

Literature Review ...... 9

2.1 Introduction to Autonomous Vehicles ...... 9

2.2 Advanced Driver Assistance Systems (ADAS) ...... 11

2.3 ADAS Features ...... 12

2.4 Challenges with ADAS ...... 15

2.5 Model Based Design ...... 19

2.6 XIL Testing ...... 19

Experimental Design and Process ...... 21

3.1 System Requirements ...... 21

3.2 Sensor Suite Selection ...... 22

3.3 Compute System Selection ...... 33

3.4 Software Development Process ...... 37

3.5 CAV System Overview ...... 41

XIL Testing Environment ...... 45

4.1 Introduction...... 45

4.2 Model-in-the-Loop (MIL) ...... 47

4.3 Hardware-in-the-Loop (HIL) ...... 54

4.4 Component-in-the-Loop (CIL) ...... 56

4.5 Vehicle-in-the-Loop (VIL) ...... 57

4.6 Evolution of Perception System in XIL...... 58

Input/Output Layer ...... 62

5.1 I/O Layer Requirements ...... 62

5.2 Design of I/O Layer in Simulink ...... 63

5.3 Validation of Simulink I/O in HIL ...... 67

5.4 Shortcoming of Simulink I/O in CIL/VIL ...... 68

5.5 Change in Software Stack ...... 69

5.6 Design of I/O Layer in Python...... 69

5.7 Validation of Python I/O in CIL/VIL ...... 74

5.8 CAV- PCM Interface ...... 75

Fault Diagnostics ...... 77

6.1 Diagnostic Type ...... 77

6.2 Diagnostic Algorithm ...... 79

6.3 Sensor Diagnostic Overview ...... 81

6.4 Diagnostics in XIL ...... 84

6.5 Summary ...... 95

Sensor Calibration ...... 96

7.1 Motivation for Sensor Calibration ...... 96

7.2 Front Camera Calibration ...... 96

7.3 Sensor Calibration for the Front Radar ...... 100

7.4 Summary ...... 112

Conclusion and Future Work ...... 113

8.1 Conclusion ...... 113

8.2 Challenges...... 114

8.3 Future Work ...... 114

Bibliography ...... 115

List of Tables

Table 1.1: CAV Sensor Layout ...... 6

Table 1.2: Compute Hardware Specifications ...... 6

Table 3.1: Overall System-Level Requirements [39] ...... 21

Table 3.2: Sensor Relative Performance [40] ...... 24

Table 3.3. Front Sensor FOV and Range Requirements ...... 25

Table 3.4: Evaluation Metrics [39] ...... 26

Table 3.5: Forward-Facing Sensor Placement Simulation Results [39] ...... 27

Table 3.6: Perception System Architecture [39] ...... 30

Table 3.7: Computational Hardware Solutions ...... 33

Table 3.8: Decision Matrix - Perception Hardware ...... 35

Table 3.9: Decision Matrix - Real-Time Optimization Hardware ...... 36

Table 4.1: Perception System Milestones ...... 59

Table 5.1: I/O Layer Requirements ...... 63

Table 5.2: ROS Messages for Forward-Facing Sensors ...... 73

Table 5.3: ROS Topics for Forward-Facing Sensors ...... 74

Table 6.1: Diagnostic Type Description ...... 77

Table 6.2: Diagnostic Truth Table ...... 78

Table 6.3: Front Camera Diagnostic Check – Interface Level ...... 81

Table 6.4: Front Camera Diagnostic Check – Component Level ...... 82

Table 6.5: Front Camera Diagnostic Check – Perception Level ...... 82

Table 6.6: Front Radar Diagnostic Check – Interface Level ...... 83

Table 6.7: Front Radar Diagnostic Check – Component Level ...... 83

Table 6.8: Front Radar Diagnostic Check – Perception Level ...... 84

Table 6.9: Diagnostics in XIL Environments ...... 85

Table 7.1: Reference Points for Calibration ...... 99

Table 7.2: Error Analysis - Effect of Calibration on Front Radar ...... 104

Table 7.3: Error Analysis - Effect of Calibration on Sensor Fusion ...... 109

List of Figures

Figure 1.1: Driver, Vehicle, and Environmental Related Critical Reason for Crashes [2] ...... 1

Figure 1.2: Vehicle Architecture...... 4

Figure 1.3: OSU CAV Architecture...... 5

Figure 1.4: CAV Project Timeline ...... 8

Figure 2.1: SAE Level of Automation [5] ...... 9

Figure 2.2: Potential Growth of AVs [8] ...... 10

Figure 2.3: Increasing ADAS Demand Over the Years [13] ...... 12

Figure 2.4: Crash data of GM vehicles from 2013 to 2017 [16]...... 13

Figure 2.5: Sensor Comparison [24] ...... 16

Figure 2.6: XIL Testing - V Diagram [37] ...... 20

Figure 3.1: System Selection Process Overview [39] ...... 22

Figure 3.2: Relative Sensor Performance [40] ...... 23

Figure 3.3. Test Case to Evaluate Front Sensors – Merging Scenario [39] ...... 26

Figure 3.4: Highway Scenario - Eastbound on I-670 near John Glenn International Airport [39] ...... 29

Figure 3.5. Longitudinal Distance Error vs. Ground Truth [39] ...... 31

Figure 3.6: Average Percentage Error – Sensor 1 vs Sensor 2 [39] ...... 32

Figure 3.7: CAV Compute Hardware ...... 36

Figure 3.8: CAV Sprint Planning Board on Asana ...... 37

Figure 3.9: Git Repository Structure ...... 38

Figure 3.10: Version Control Process ...... 39

Figure 3.11: Data Logging Setup ...... 40

Figure 3.12: CAV Functional Diagram ...... 42

Figure 3.13: CAV Network Serial Diagram ...... 44

Figure 4.1: CAV Perception System Development in XIL – “V” Diagram ...... 46

Figure 4.2: XIL Testing Environment and Their Uses ...... 47

Figure 4.3: Perception System Design Overview in MIL ...... 48

Figure 4.4: Sensor Models in MIL ...... 49

Figure 4.5: CPC Testbench in Simulink ...... 50

Figure 4.6: Overview of Data Post-Processing in MIL ...... 52

Figure 4.7: Post-Processed Data with Noise ...... 53

Figure 4.8: Filtered Post-Processed Data ...... 54

Figure 4.9: Perception System Overview in HIL...... 55

Figure 4.10: ROS Publishing in Simulink - CAN Replay ...... 56

Figure 4.11: CIL Testbench ...... 57

Figure 4.12: Vehicle Setup Overview in VIL ...... 58

Figure 4.13: Evolution of CAV Perception System...... 61

Figure 5.1: CAV System I/O Overview in Simulink ...... 64

Figure 5.2: CAV Testbench in Simulink ...... 65

Figure 5.3: Acquiring Data using Custom S-Functions in Simulink ...... 66

Figure 5.4: C++ Wrapper for ‘ReceiveCANMsg’ S-Function ...... 66

Figure 5.5: I/O Validation in HIL ...... 68

Figure 5.6: CAV System Overview in Python ...... 70

Figure 5.7: Python Code for decoding CAN messages ...... 71

Figure 5.8: Snippet of Python Code to Import CAN Data ...... 72

Figure 5.9: Custom ROS Message for Front Camera ...... 73

Figure 5.10: CAN Data Acquired from Front Camera in Real-Time using CIL ...... 75

Figure 5.11: CAV-PCM Interface ...... 76

Figure 6.1: Diagnostic Flow Logic ...... 80

Figure 6.2: Virtual Testbench for MIL Testing ...... 86

Figure 6.3: Front Camera Diagnostics Algorithm in HIL ...... 87

Figure 6.4: Front Camera Diagnostic Stateflow ...... 88

Figure 6.5: Sensor Fault Diagnostics Strategy ...... 88

Figure 6.6: Feature-Level Fault Detection Strategy ...... 89

Figure 6.7: ROS Topics for Fault Diagnostics...... 90

Figure 6.8: Fault Diagnostics Validation in HIL for Following Lead-Car Scenario ...... 91

Figure 6.9: Code for Interface-Level Fault Detection in Python ...... 92

Figure 6.10: Fault Diagnostics Validation in CIL – Unplugging Sensor Harness ...... 93

Figure 6.11: Code for Component-Level Fault Detection in Python...... 94

Figure 6.12: Fault Diagnostics Validation in VIL – Disabling Transmission of Vehicle Signals 95

Figure 7.1: Mobileye Calibration Process ...... 97

Figure 7.2: Mobileye Camera Wiring Scheme ...... 97

Figure 7.3: Mobileye Measurements ...... 98

Figure 7.4: Front Radar vs. GPS Ground Truth - Approach Test ...... 100

Figure 7.5: Break Points and Table Data for Longitudinal and Lateral Calibration ...... 101

Figure 7.6: Effects of Longitudinal Calibration on Front Radar ...... 102

Figure 7.7: Boxplots for Longitudinal Error Between Front Radar and GPS ...... 105

Figure 7.8: Boxplots for Lateral Error Between Front Radar and GPS ...... 106

Figure 7.9: Calibration Effect on Sensor Fusion for Approach Test Scenario ...... 107

Figure 7.10: Calibration Effect on Sensor Fusion for Stationary Test Scenario ...... 108

Figure 7.11: Boxplots for Longitudinal Error Between Sensor Fused Target and GPS ...... 110

Figure 7.12: Boxplots for Lateral Error Between Sensor Fused Target and GPS ...... 111

Appendix: List of Symbols and Abbreviations

ACC Adaptive Cruise Control

ADAS Advanced Driver Assistance System

ADT Automated Driving Toolbox

AV Autonomous Vehicle

AVTC Advanced Vehicle Technology Competition

CAD Computer Aided Design

CAN Controller Area Network

CAV Connected and Automated Vehicle

CPC CAV Perception Controller

CIL Component-in-the-Loop

DOE Department of Energy

FOV Field of View

GM General Motors

HEV Hybrid Electric Vehicle

HIL Hardware-in-the-Loop

HSC Hybrid Supervisory Controller

I/O Input/Output

LCC Lane Centering Control

LCOD Lane Change on Demand

MIL Model-in-the-Loop

PCM Propulsion, Control and Modeling

ROS Robot Operating System

SAE Society of Automotive Engineers

V2X Vehicle-to-Everything

VIL Vehicle-in-the-Loop

VNT Vehicle Networking Toolbox

XIL X-in-the-Loop

Introduction

The transportation industry is constantly evolving. Over the years, automotive manufacturers have invested millions to make driving safer and accident-free through the implementation of cutting-edge technology. Human error was reported as the primary reason for 57% of car accidents and was a contributing factor in 95% of them [1]. In 2015, a traffic safety and crash report published by NHTSA showed that drivers were assigned the critical reason for 94% of crashes at the national level [2].

Figure 1.1: Driver, Vehicle, and Environmental Related Critical Reason for Crashes [2]

Advanced Driver Assistance Systems (ADAS) have the potential to prevent more than one-third of all passenger vehicle crashes [3]. To fully achieve the safety benefits associated with ADAS, robust testing and simulation is essential [3]. This work describes the design, calibration, validation, and verification of a perception system using various modeling environments for Year 2 of the EcoCAR Mobility Challenge.

This chapter will discuss the Ohio State University (OSU) EcoCAR vehicle architecture, the Connected and Automated Vehicle (CAV) architecture, the project objective, and the project timeline.

1.1 EcoCAR Mobility Challenge

In 2019, the U.S. Department of Energy (DOE) introduced the EcoCAR Mobility Challenge (ECMC) as the latest addition to the Advanced Vehicle Technology Competition (AVTC) series [4]. This inter-collegiate competition is funded and supported by the DOE, General Motors (GM), and MathWorks. The Argonne National Laboratory (ANL) manages the competition, which challenges 12 teams across the United States and Canada to re-engineer the 2019 Chevrolet Blazer into a Hybrid Electric Vehicle (HEV) with improved energy efficiency using advanced engineering techniques in the areas of electrification, SAE Level 2 automation, and CAV technology. Teams are expected to build a vehicle that is not just energy efficient, but also makes driving safer, reduces emissions, and is acceptable to consumers in the market.

The automotive industry is going through a transformation, with customers all over the world looking to travel from place A to place B more cost-effectively, safely, and conveniently. There is a growing demand for more advanced solutions in transportation mobility. The long-established model of owning a personal vehicle is changing, and people are looking for shared mobility solutions consumed as a service, known as Mobility as a Service (MaaS).

New connected and automated technologies are leading the way for innovative mobility solutions in the carsharing spectrum. These technologies indicate emerging trends in mobility [5].

The Ohio State EcoCAR team has a long-standing history of participating in AVTCs, dating back to 1990. The OSU team has placed in the top five of the previous competitions for the past ten years [6]. The OSU EcoCAR team comprises around 75 graduate and undergraduate students and is a multidisciplinary program with a variety of academic backgrounds, including engineering, business, and communication.

The ECMC is a four-year competition in which teams compete and showcase their work through presentations and dynamic events at the end of each year. In Year 1, the teams designed and selected the most appropriate powertrain and CAV architecture, backed up by simulation results. Year 2 was classified as the build year, where teams re-built the stock vehicle by replacing the stock components with the selected vehicle architecture. Year 3 will focus on calibration and refinement of their respective hybrid control strategies, as well as implementing automated longitudinal control strategies on their 2019 Chevrolet Blazers. In Year 4, the teams will work on the optimization of fuel economy using V2X information and performing SAE Level 2 CAV features such as Lane Centering Control (LCC), Lane Change on Demand (LCOD), and Driver Monitoring.

1.2 Vehicle Architecture

The Ohio State University (OSU) team selected a parallel hybrid electric vehicle (HEV) with a GM 2.0L turbocharged gasoline engine, assisted by a Denso 32 kW belted alternator-starter, to power the front wheels, and a Parker Hannifin GVM210-100 112 kW rear electric machine (REM) mated to a BorgWarner eGearDrive rear-axle single-speed gearbox to power the rear wheels. A 3.5 kWh custom Li-ion battery from Hybrid Design Services (HDS) supplies the high-voltage electricity. The vehicle architecture is shown in Figure 1.2.

Figure 1.2: Vehicle Architecture

The Connected and Automated Vehicle (CAV) system architecture, shown in Figure 1.3, included a Mobileye 6 as the front vision camera, two Delphi Electronically Scanning Radars (ESR) for the front and rear, and four Aptiv Mid-Range Radars (MRR) for the sides and corners to monitor blind spots.

Figure 1.3: OSU CAV Architecture

The sensor layout, with the Field of View (FOV) and range of each sensor, is shown in Table 1.1. The sensor layout provided a 360° FOV around the ego vehicle, which was essential to perform the three SAE Level 2 features: ACC, LCC, and LCOD.

Table 1.1: CAV Sensor Layout

Sensor Name | Field of View (FOV) | Range
Front Vision (Mobileye 6 Series) | Width: 38°, Height: 30° | 150 m
Front Radar (Delphi ESR 2.5) | MRR: 90°, LRR: 22° | MRR: 60 m, LRR: 174 m
Rear Radar (Delphi ESR 2.5) | MRR: 90°, LRR: 22° | MRR: 60 m, LRR: 174 m
Left Corner Radar (Aptiv MRR) | 90° opening angle | 160 m
Right Corner Radar (Aptiv MRR) | 90° opening angle | 160 m
Left Side Radar (Aptiv MRR) | 90° opening angle | 160 m
Right Side Radar (Aptiv MRR) | 90° opening angle | 160 m

The Intel Tank was chosen as the primary perception controller, while the NVIDIA Drive PX2 was selected as the optimization controller. Table 1.2 shows the specifications and primary use of the selected CAV computational hardware. The high compatibility of the Intel Tank with the software workflow and its extensive I/O with 8 CAN channels made it the most appropriate controller for the perception system. The 12 CPU cores and two Pascal GPUs of the NVIDIA Drive PX2 made it suitable for performing the computationally heavy fuel economy optimization using V2X communication.

Table 1.2: Compute Hardware Specifications

Hardware | Specs | Primary Use
Intel Tank | 2 Intel Quad-Core Processors; 8 CAN Channels; 1 TB Solid State Drive | Data Logging; Data Parsing and Filtering; Sensor I/O; Sensor Fusion and Tracking
NVIDIA Drive PX2 | 12 CPU Cores; 2 Pascal GPUs; 2 Tegra SoCs | V2X Communication; Dynamic Programming; Human Machine Interface (HMI)

This work will focus on the design, implementation, validation, and verification of a perception system completed in Year 1 and Year 2 of the EcoCAR Mobility Challenge.

1.3 Project Objective

The primary objective of this project was to develop a robust process for the design, testing, and validation of the CAV perception controller using XIL testing environments. This development process needed to be efficient and adhere to strict competition timelines. The project goals needed to help meet the competition guideline of developing a sophisticated perception system capable of performing sensor fusion of obstacles around the ego vehicle using the forward-facing sensors.

1.4 Project Timeline

This project’s timeline can be broadly categorized into three main sections: initial system design, benchmarking, and testing & development, with intermediate checkpoints shown in Figure 1.4. This timeline also classifies different aspects of the CAV project for Year 1 and Year 2 of the EcoCAR Mobility Challenge.


Figure 1.4: CAV Project Timeline

Literature Review

2.1 Introduction to Autonomous Vehicles

With current advancements in automotive technology, the industry is not far from a day when cars drive themselves. A self-driving car, often referred to as an autonomous vehicle (AV), has the ability to drive itself by sensing its environment without any human input. According to SAE standards, there are currently six levels of automation for on-road vehicles, as shown in Figure 2.1.

Figure 2.1: SAE Level of Automation [5]

As per current legal standards, SAE Level 2 is the maximum allowed level of automated driving available to the public. An SAE Level 2 vehicle requires the driver to monitor the road and environment at all times and be ready to take back control of the vehicle from the driver assistance feature [7]. AV development is a major area of research, and numerous companies are investing heavily in the development and testing of these vehicles, with the aim to get Level 4 and Level 5 AVs on the road as soon as possible. The transportation industry is moving into a new era of smart cars and mobility. AVs have the capability to achieve a world with zero road crashes and no congestion, and it is predicted that by 2050, AVs could become the primary means of transportation [8].

Figure 2.2: Potential Growth of AVs [8]

However, a major challenge is ensuring public acceptance of AVs. Surveys show that less than 30 percent of people feel safe riding in an AV [9]. Therefore, the driver’s perspective needs to change for mass implementation. Until this occurs, AVs will need to be designed and developed to share the road with human drivers.

2.2 Advanced Driver Assistance Systems (ADAS)

Driver assist technologies in production vehicles date back to as early as 1978, when the Mercedes-Benz W116 became the first production car to use an anti-lock braking system (ABS) [10]. There have been several improvements to these systems over the years, which has led to a substantial reduction in road accidents and an increase in driver comfort and safety. Apart from ABS, brake system features such as traction control systems (TCS) and electronic stability control (ESC) have also increased overall road safety. The use of more advanced technology will further increase safety and improve driver comfort and assistance [11].

ADAS is one of the fastest growing areas in the automotive industry. The total revenue of the ADAS market was estimated to be about 20.7 billion USD in 2015 and is expected to increase to 104.4 billion USD by 2022 [12]. With continuous advancements in ADAS, there has been a sharp rise in the demand for and use of software-defined technology in the automotive industry. In 2007, ADAS accounted for only 2% of the global automotive Electronic Control Unit (ECU) demand, but it will increase to almost 18% of all automotive-related ECUs by 2023 [13]. Figure 2.3 showcases the rising trend in global demand for ADAS controllers.


Figure 2.3: Increasing ADAS Demand Over the Years [13]

With the growing popularity of and demand for vehicles equipped with ADAS technology, there has been a noticeable transition in the automotive industry from entirely mechanical systems to systems with electronic software. In the future, ADAS technology is expected to surpass all other software within a vehicle, and by 2030 nearly all cars are predicted to have enhanced ADAS features that are capable of efficiently handling all driver-related tasks [14].

2.3 ADAS Features

Over the years, there has been a growing number of vehicles equipped with ADAS features. As of May 2018, 92.7% of new vehicles in the U.S. were equipped with at least one ADAS feature [15].

The following sub-sections will discuss some of the common ADAS features. The University of Michigan studied GM vehicles' crash data from 2013 to 2017 and found that ADAS significantly reduced vehicle accidents [16]. Reverse automatic braking with rear cross traffic alert, rear vision camera, and rear park assist reduced backing crashes by 81%, while lane departure crashes decreased by 20% due to lane keep assist with departure warning [16]. Figure 2.4 shows the impact of different ADAS features in reducing vehicle crashes.

Figure 2.4: Crash data of GM vehicles from 2013 to 2017 [16]

2.3.1 Adaptive Cruise Control

Adaptive Cruise Control (ACC) is a feature designed to assist drivers in following vehicles in front of them. It allows the vehicle to maintain a safe distance from the lead vehicle and automatically adjusts the ego vehicle's speed to match either the driver-set speed or the lead vehicle's speed, whichever is lower. ACC systems utilize radar sensors to detect and estimate the position and velocity of the lead vehicle. These systems interact with the engine and brake controllers to decelerate the vehicle based on the lead vehicle's behavior [17].
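As a rough illustration of the speed-selection logic described above (and not the controller used in this work), the Python sketch below commands the lower of the driver-set speed and the lead vehicle's speed once the measured gap closes below a desired following distance; the function name, threshold, and numbers are hypothetical.

```python
def acc_target_speed(set_speed_mps, lead_speed_mps, gap_m, desired_gap_m):
    """Illustrative ACC speed selection: once the gap to the lead vehicle
    closes below the desired following distance, command the lower of the
    driver-set speed and the lead vehicle's speed; otherwise resume the
    driver-set cruise speed. All names and thresholds are hypothetical."""
    if gap_m < desired_gap_m:
        return min(set_speed_mps, lead_speed_mps)
    return set_speed_mps

# Example: a 30 m gap with a 40 m desired gap forces following at 24 m/s
print(acc_target_speed(set_speed_mps=29.0, lead_speed_mps=24.0,
                       gap_m=30.0, desired_gap_m=40.0))
```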

2.3.2 Automatic Emergency Braking

Automatic Emergency Braking (AEB) technology enables the ego vehicle to detect a forthcoming crash with the vehicle in front and command the brakes automatically to prevent the collision [18]. These systems first notify and alert the driver of the forthcoming crash. If the driver is unable to take action to prevent the accident, AEB applies the brakes to counteract or lessen the severity of the crash [19].

2.3.3 Forward Collision Warning

Forward Collision Warning (FCW) is a technique that utilizes perception sensors to alert the driver through audible, haptic, or visual feedback if the time to collision with the lead vehicle decreases rapidly. A consumer report by the Insurance Institute for Highway Safety (IIHS) states that around 1.9 million crashes could be avoided if FCW systems were present on all vehicles [20]. With the noticeable advancements and increasing benefits of AEB and FCW systems, both the United States and the European Union have issued regulations to ensure that all vehicles be equipped with these systems by 2020 [21].
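A minimal sketch of the time-to-collision check behind an FCW alert, assuming a constant closing speed; the 2.5 s threshold and function names are illustrative only, not values taken from any cited system.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Constant-velocity time-to-collision estimate; returns None when the
    ego vehicle is not closing in on the lead vehicle."""
    if closing_speed_mps <= 0.0:
        return None
    return gap_m / closing_speed_mps

def fcw_alert(gap_m, closing_speed_mps, ttc_threshold_s=2.5):
    """Raise a forward collision warning when the estimated TTC falls below
    a (hypothetical) threshold."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s

# A 25 m gap closing at 12 m/s gives a TTC of about 2.1 s, so this warns
print(fcw_alert(gap_m=25.0, closing_speed_mps=12.0))
```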

2.3.4 Lane Keep Assist

Lane Keep Assist (LKA), also known as Lane Centering Control (LCC), prevents the ego vehicle from drifting into adjacent lanes by applying torque to the steering wheel. These systems utilize Global Positioning System (GPS) and camera sensors to monitor and estimate the position of the ego vehicle with respect to the lane boundaries.

2.3.5 Lane Departure Warning

The Lane Departure Warning (LDW) feature warns the driver if they attempt to switch lanes without using the turn indicator [22]. These systems utilize camera sensors to identify and detect the lane boundaries and trigger an alert through different feedback mechanisms (haptic, audio, or visual) if the vehicle starts drifting into the adjacent lane.

2.4 Challenges with ADAS

A significant amount of testing is being conducted to make driver assistance systems safer for people around the world. Automotive manufacturers are faced with the challenges of validating and verifying software safety for these complex systems and implementing these technologies on the vehicle in a cost-effective manner that is scalable and works in a variety of operating conditions [23].

2.4.1 Sensor Suite Selection

An appropriate sensor suite must be selected based on the vehicle's ADAS features. Each sensor has different benefits and drawbacks that should be considered during the sensor selection process. A camera, for instance, is better at object detection and lane detection. It can also be used to detect road signs and distinguish between different obstacle types. However, camera performance is impacted severely in low-light/night and bad weather conditions. In such adverse weather conditions, radars and lidars typically perform better. Radars have the ability to measure longitudinal velocity more accurately than the other two sensors, but they do not perform well for lateral detection. Lidars are considerably more expensive than camera and radar sensors, and they are aesthetically less appealing. Automakers are performing extensive simulations to carefully select and deploy sensors on their vehicles to reach higher levels of autonomy [24].

Figure 2.5: Sensor Comparison [24]

2.4.2 Adversity to Bad Weather

A major challenge with ADAS is the performance of perception-related sensors in bad weather conditions. Sensor performance is significantly degraded in conditions of poor visibility, especially fog, heavy rain, and snow. Degraded sensor performance can lead to sensors producing inaccurate information for perception algorithms, which in turn could cause crashes [25]. For instance, rain causes signal attenuation and backscatter in the radar, thereby hindering its performance. Signal attenuation decreases the received power of useful signals, and the clutter effect increases interference at the receiver [26]. Thus, it is vital to select a sensor suite that can perform well in all weather and environmental conditions.

2.4.3 System Complexity

ADAS comprise different components and technologies based on the type of feature and service they offer. This increases the complexity of the vehicle system [27]. With the increase in the level of autonomy, there is a requirement to focus on the reliability of automotive electronics at the system and component level. Thus, systems with dual redundancy have been introduced to enhance reliability and safety. For instance, if the ADAS in a vehicle fails, the driver still needs to be able to control the steering and brakes. This dual redundancy, however, further increases the size, weight, and complexity of ADAS [28]. Traditionally, each ADAS function utilized a discrete electronic control unit (ECU) for its implementation and proper functioning. But this method is not scalable, and simple microcontrollers do not have the computational capability to process the large amounts of data from multiple sensors simultaneously. This further adds to the complexity in designing ADAS [23]. For validation and verification of an ADAS, automotive manufacturers need to collect around 200,000 km to 1 million km of real-world sensor data through ADAS test vehicles equipped with multiple sensors [29]. Therefore, the ability to acquire and store all of this sensor data for replay purposes further adds to the system complexity.

2.4.4 Cost

Due to advancements in ADAS, driving has become safer and vehicles are expected to last longer. However, these systems contribute to added vehicle complexity, leading to increased costs for vehicle repairs. The addition of cameras, radars, lidars, and other perception sensors enhances the accuracy and performance of ADAS, but it also generates new expenses for fleets because these sensors are expensive and need to be calibrated upon every vehicle service or repair. ADAS cameras installed on windshields add to the complexity and cost of windshield replacement and repair. These sensors require advanced diagnostic tools and processes to detect malfunctions, which further increases costs for automotive manufacturers [30]. Original Equipment Manufacturers (OEMs) are constantly striving to reduce the cost associated with ADAS so that more people can afford vehicles equipped with these technologies [27].

2.4.5 Security Threat

Increased vehicle complexity not only raises cost, but also introduces software vulnerabilities. With the advancements in wireless technologies for ADAS, there is a greater risk of compromising the safety and privacy of the vehicle [30]. These systems need to be protected from hackers who could infiltrate the software system and take charge of the vehicle. The different mediums through which a hacker can gain control of the vehicle include Bluetooth, Wi-Fi, and GPS [27]. These security threats pose a significant challenge for automotive manufacturers, and there is a compelling need for strict data governance policies to be enforced to protect these systems from malicious activities [31].

2.5 Model Based Design

With the recent advancements to make ADAS safer and better, there has been an increase in system complexity [32]. Therefore, it is essential to have a robust design process that streamlines the testing and validation of these systems, thus making them safer and easier to deploy onto vehicles. A model-based design (MBD) functions as an 'executable specification', owing to its ability to be simulated as part of the design and to act as documentation of the algorithms [33]. MBD facilitates model simulation, rapid prototyping, exploring design tradeoffs, and system verification through component-level testing. This framework is ideally suited for ADAS as it allows designing an integrated environment of components that have different sampling rates [32]. Testing of these systems allows validation with both real and simulated data through a variety of test cases that exercise the ADAS feature under test [34].

2.6 XIL Testing

In the past few years, automotive manufacturers have invested heavily in virtual testing and X-in-the-Loop (XIL) techniques to make ADAS safer, reduce the associated cost, and decrease the risk involved with testing [35]. Most MBD workflows use XIL testing methods, where 'X' signifies different testing environments: model, component, software, hardware, or vehicle [36]. These testing methods are valuable for developing and testing the system in the initial design phases and allow the simulation of low- to high-fidelity models, with fault insertion within the physical hardware. The primary environments used for ADAS testing are Model-in-the-Loop (MIL), Software-in-the-Loop (SIL), Hardware-in-the-Loop (HIL), and Vehicle-in-the-Loop (VIL); an example of utilizing these environments for ADAS testing via a systems-engineering approach is shown in Figure 2.6.

Figure 2.6: XIL Testing - V Diagram [37]

XIL simulation allows for a gradual transition from testing a component in isolation to incorporating it into the entire system. ADAS need to be tested and verified before they can be deployed on vehicles [38]. Thus, utilizing XIL testing for design, calibration, validation, and verification is essential to ensure that these systems are robust and safe for vehicle deployment.

Experimental Design and Process

The work detailed in this section is driven by competition guidelines to design a perception system capable of performing the three SAE Level 2 features: Adaptive Cruise Control (ACC), Lane Centering Control (LCC), and Lane Change on Demand (LCOD). For this, the team defined overall system-level requirements that could meet these guidelines. The design process focuses on selecting the appropriate sensor suite and compute hardware to meet the system-level requirements. An overview of the software development process for the perception system with various tools will also be provided.

3.1 System Requirements

The overall system-level requirements shown in Table 3.1 were formulated to meet competition guidelines and team goals.

Table 3.1: Overall System-Level Requirements [39]

ID | Requirement Description
1.1 | The ego-vehicle shall use sensors and software to detect and localize target vehicles, pedestrians, and other obstacles present in the 360° driving environment.
1.2 | The ego-vehicle sensors shall be able to measure target trajectories (position, velocity, and acceleration) over time.
1.3 | The ego-vehicle sensors shall update target vehicle measurements to perform automated driving tasks in real-time.
1.4 | The CAV features shall be operable in the Mobility as a Service (MaaS) home region, including downtown Columbus, the airport, and the Ohio State campus area.

3.2 Sensor Suite Selection

The sensor selection process was driven by a model-based design process, as shown in Figure 3.1. A model-based design process was followed for sensor selection to validate the sensor supplier choice and sensor placement on the vehicle. A Computer Aided Design (CAD) space claim was done to ensure that the sensors could be mounted in different mounting positions on the 2019 Chevy Blazer. Simulations were run for different mounting positions and orientations using MATLAB's Automated Driving Toolbox (ADT). The sensors were modeled in Simulink, and more than 20 different scenarios were simulated to evaluate the most appropriate sensor suite per the system-level requirements. To further investigate the performance of each sensor, the team acquired sensor sets for experimental testing and to benchmark them against the supplier-specified Field of View (FOV). The comparison of simulated and experimental data of individual sensors helped to validate the sensor suite's performance in full-vehicle simulated environments.

Figure 3.1: System Selection Process Overview [39]

The sensor suite selection process comprised three stages:

1. Sensor Type Selection: Evaluating the sensor type to be used.

2. Component Evaluation: Comparing supplier models with the mounting positions.

3. Architecture Evaluation: Evaluating the most appropriate components against the must-see region.

3.2.1 Sensor Type Selection

The first stage of the sensor selection process consisted of evaluating the measurement techniques and the relative performance of all available sensors. The team performed initial research into the performance of different sensors. Figure 3.2 shows the relative performance of different sensors.

Figure 3.2: Relative Sensor Performance [40]

To select sensors suited appropriately for the CAV perception system, the team formulated a table using literature and background information to compare the relative performance of different sensor types for the aspects most applicable to the EcoCAR Mobility Challenge. Table 3.2 shows a comparison of sensor types for different performance aspects. Cameras were considered essential to detect lane lines and perform lane centering control, and radars were components donated by the competition and are often used in production vehicles for ADAS applications. Lidars were deemed expensive and surplus to requirements due to the increased complexity in processing lidar data, while ultrasonic sensors did not meet the overall system-level requirements highlighted in Table 3.1. Thus, for these reasons and the inferences from Table 3.2, the team decided that a forward-facing vision system and radars would make up the 360° perception system.

Table 3.2: Sensor Relative Performance [40]

Performance Aspect | Camera | Radar | Lidar | Ultrasonic
Object Detection | Average | Good | Good | Average
Distance Estimation | Average | Good | Good | Good
Lane Tracking | Good | Poor | Poor | Poor
Range | Average | Good | Average | Poor
Resilience to Bad Weather | Poor | Good | Average | Average
Cost | Good | Average | Poor | Good
Visual Appeal | Good | Good | Poor | Good

3.2.2 Component Evaluation

The second stage of the sensor selection process consisted of comparing sensor models in various mounting positions. The overall system-level requirements 1.1 through 1.4 were decomposed into component-level requirements 2.1 through 2.3. Component-level FOV and range requirements for the forward-facing sensors are listed in Table 3.3.

Table 3.3. Front Sensor FOV and Range Requirements

ID | Component-Level FOV and Range Requirements
2.1 | A front sensor shall span the lateral FOV to locate target vehicles in adjacent lanes.
2.2 | A front sensor shall span the vertical FOV to locate obstacles above the host vehicle, such as signs and overpasses.
2.3 | A front sensor shall span the longitudinal FOV to locate target vehicles in the host lane for ACC.

The component virtual validation methods were sub-divided into component modeling and component placement selection.

3.2.2.1 Component Modelling

Each component-level requirement was validated using either a European New Car Assessment Program (Euro NCAP) standard or a team-designed scenario. Figure 3.3 shows a test-case scenario developed in MATLAB's ADT to evaluate the forward-facing sensors when the vehicle in the adjacent lane merges into the host lane in front of the ego vehicle.

Figure 3.3. Test Case to Evaluate Front Sensors – Merging Scenario [39]

Sensor models were developed using MATLAB's 'Radar Sensor Specification' and 'Vision Sensor Specification' data types. Noise error was kept constant across all models for consistency. Test-case scenarios were designed using MATLAB's ADT and selected to evaluate each component's mounting position, range, FOV, and other intrinsic parameters. Each model was simulated in three different mounting positions to evaluate both sensing and positioning parameters. Detection-specific metrics, like hit-rate and miss-rate, are shown in Table 3.4 and were used to evaluate these sensor models.

Table 3.4: Evaluation Metrics [39]

Metric | Formula | Description of Metric
Hit-Rate | TP / (TP + FP) | Evaluates how often the sensors detect actors that exist
Miss-Rate | FP / (TP + FP) | Evaluates how often the sensors detect actors that did not exist

TP - True Positive; FN - False Negative; FP - False Positive
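The metrics in Table 3.4 reduce to two small helper functions; the Python sketch below mirrors those formulas, with made-up detection counts for illustration.

```python
def hit_rate(tp, fp):
    """Hit-rate as defined in Table 3.4: TP / (TP + FP)."""
    return tp / (tp + fp)

def miss_rate(tp, fp):
    """Miss-rate as defined in Table 3.4: FP / (TP + FP)."""
    return fp / (tp + fp)

# Hypothetical detection counts for one sensor placement
tp, fp = 893, 107
print(hit_rate(tp, fp), miss_rate(tp, fp))  # 0.893 0.107
```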

26 3.2.2.2 Component Placement

The component selection was based on virtual simulation results that provided the optimum sensor placement and orientations for all the components. The simulation results for the forward-facing sensors are shown in Table 3.5. The sensor placement with the highest hit-rate was chosen as the optimum sensor placement location and orientation; for both sensors this was Placement 2 in Table 3.5.

Table 3.5: Forward-Facing Sensor Placement Simulation Results [39]

Sensor 1:
Placement 1 - Location: Centered near Chevy Badge; Mounting: (3.85 m, 0 m, 0.8 m, 0°, 0°, 0°); Result: Hit-Rate 0.893, Miss-Rate 0.107
Placement 2 - Location: Offset on Front Bumper; Mounting: (3.85 m, -0.1 m, 0.5 m, 0°, 0°, 0°); Result: Hit-Rate 0.901, Miss-Rate 0.099
Placement 3 - Location: Centered above Windshield; Mounting: (1.5 m, 0 m, 1.7 m, 0°, 0°, 0°); Result: Hit-Rate 0.844, Miss-Rate 0.156

Sensor 2:
Placement 1 - Location: Centered near Chevy Badge; Mounting: (3.85 m, 0 m, 0.8 m, 0°, 0°, 0°); Result: Hit-Rate 0.829, Miss-Rate 0.171
Placement 2 - Location: Offset on Front Bumper; Mounting: (3.85 m, -0.1 m, 0.5 m, 0°, 0°, 0°); Result: Hit-Rate 0.840, Miss-Rate 0.160
Placement 3 - Location: Centered above Windshield; Mounting: (1.5 m, 0 m, 1.7 m, 0°, 0°, 0°); Result: Hit-Rate 0.833, Miss-Rate 0.167

Similar simulations were performed for all other sensors to determine the optimum placement location and orientation.

3.2.3 Architecture Evaluation

The final stage of the sensor selection process involved evaluating the most-appropriate components against the must-see region. The virtual validation method for architecture evaluation was further sub-divided into architecture modelling and architecture selection.

3.2.3.1 Architecture Modelling

Components and mounting positions were selected for the architectures based on their performance in the component-level simulations discussed previously. Although these sensors performed best at the component level, additional validation was needed to ensure that the collection of sensors met the overall system-level requirements 1.1 through 1.4. For this, additional test cases were created based on driving scenarios in Columbus, Ohio. These test cases provide an additional layer of validity, as they validate the architectures in real-world environments. Figure 3.4 shows the ego vehicle driving next to another vehicle as they approach a turn in a highway scenario. This is an example of a highway driving scenario in the home region of Columbus, Ohio.


Figure 3.4: Highway Scenario - Eastbound on I-670 near John Glenn International Airport [39]

3.2.3.2 Architecture Selection

The team constructed two perception architectures, shown in Table 3.6, using sensors discussed in the component-level evaluation. The results of the FOV simulations showed that both architectures covered the must-see region. Therefore, the team decided to conduct experimental tests with the two forward-facing radars.

Table 3.6: Perception System Architecture [39]

(The original table includes a birds-eye plot of each architecture's coverage; legend: ■ Ego-Vehicle ■ Radar ■ Camera ■ Must-See Region.)

Sensor set with mounting positions:
Architecture 1 - Mobileye 6 above windshield with 0° pitch; Bosch MRR Front on front bumper; Bosch MRR Corner at 75° yaw; Aptiv MRR Side at 90° yaw; Bosch MRR Rear on rear bumper
Architecture 2 - Mobileye 6 above windshield with 0° pitch; Delphi ESR 2.5 on front bumper; Aptiv MRR on corners at 60° yaw; Aptiv MRR on sides at 113° yaw; Delphi ESR 2.5 on rear bumper

Results (Highway Scenario):
Architecture 1 - FNR: 0.00, TPR: 1.00
Architecture 2 - FNR: 0.00, TPR: 1.00

The team benchmarked the two front-facing radars through dynamic FOV and range tests by collecting data simultaneously from both forward-facing radar sensors. The relative distance between the target vehicle and the ego vehicle was tracked using a Global Positioning System-Inertial Measurement Unit (GPS-IMU) with 9 mm precision and used as ground truth. The normalized error between each radar's longitudinal distance and the GPS ground truth is shown in Figure 3.5. The data for this analysis was obtained from a straight-line test and has been normalized due to supplier confidentiality agreements. As the relative distance between the target vehicle and the ego vehicle increased, the median error increased in magnitude. Overall, Sensor 1 gave more accurate results.

Figure 3.5. Longitudinal Distance Error vs. Ground Truth [39]

Further testing was done to compare the FOV of the two radars. Figure 3.6 shows the normalized average percentage error for the longitudinal and lateral FOV for both radars. The plots have been normalized due to confidentiality. Sensor 1 had a lower average percentage error and covered a greater FOV.

Figure 3.6: Average Percentage Error – Sensor 1 vs Sensor 2 [39]
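The kind of error analysis behind Figures 3.5 and 3.6 can be reproduced offline with a short script of the following form. This is a hedged sketch, not the team's analysis code: it assumes the radar ranges and GPS ground truth have already been time-aligned, and the sample values are hypothetical.

```python
import numpy as np

def longitudinal_error_stats(sensor_range_m, gps_range_m):
    """Error between a radar's reported longitudinal distance and the
    GPS-IMU ground truth, assuming both arrays are already time-aligned.
    Returns the median error and the average percentage error."""
    sensor_range_m = np.asarray(sensor_range_m, dtype=float)
    gps_range_m = np.asarray(gps_range_m, dtype=float)
    error = sensor_range_m - gps_range_m
    pct_error = 100.0 * np.abs(error) / gps_range_m
    return np.median(error), np.mean(pct_error)

# Hypothetical samples from a straight-line approach test
median_err, avg_pct_err = longitudinal_error_stats(
    sensor_range_m=[10.4, 20.9, 31.5, 42.3],
    gps_range_m=[10.0, 20.5, 31.0, 41.5])
print(median_err, avg_pct_err)
```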

Apart from the above results, Architecture 2 had several advantages not captured in the simulations and experimental analysis. Supplier 1 supplied the team with two full sensor sets: one for the competition Blazer and another for a mule vehicle. This allowed the team to have a dedicated mule vehicle for data collection and calibration, aiding the development process over the four years of the competition. Therefore, due to the improved accuracy, wider FOV, and greater availability, the OSU team chose Architecture 2 as the primary CAV architecture.

3.3 Compute System Selection

The CAV computing functions for the EcoCAR Mobility Challenge can be broadly classified into three main categories: perception, human machine interface (HMI), and energy optimization. The team investigated five automotive-grade controllers with specifications shown in Table 3.7.

Table 3.7: Computational Hardware Solutions

Model | OS | Description | I/O
dSPACE Embedded SPU | Linux for Tegra from NVIDIA | 2 2-core NVIDIA Denver CPUs, 4 ARM A57 CPUs, 1 Pascal GPU | Cameras, 6x Ethernet, WLAN, 2x USB, Bluetooth, BroadR-Reach, CAN, LIN, LTE, GNSS, and mass data storage
dSPACE MicroAutobox | - | 16 GB RAM, 6 MB memory exclusively for MABx and PC communication | CAN, CAN FD, LIN, K-Line/L-Line, FlexRay, Ethernet and Bypass Interfaces
Intel Tank | Ubuntu Linux, Windows 8 Embedded | Quad-core Intel Core i5 or i7, i7 clock: 2.4 GHz | USB, PCIe, RS-232, wireless, and Ethernet
NVIDIA Drive PX2 | 2 separate Linux for Tegra desktops | 8x ARM Cortex-A57 CPUs, 2 Pascal GPUs | 12 cameras, USB, 6x CAN, LIN, FlexRay, Ethernet, BroadR-Reach
NVIDIA Jetson TX2 | Linux for Tegra from NVIDIA | Dual-core Denver CPU, ARM A57, 8 GB memory, 32 GB storage, 1 Pascal GPU | Camera, USB, 6x CAN, FlexRay, Ethernet, BroadR-Reach

The CAV perception system requires the controller to communicate with different sensors via CAN. It should have the ability to read and process large amounts of data and perform mathematical operations such as data filtering. The controller must perform functions such as fusion and tracking to detect objects in real-time. The optimization controller shall have parallel computing capabilities to perform real-time optimization using Vehicle-to-Everything (V2X) information from the Dedicated Short Range Communication (DSRC) modules. Additionally, the HMI system requires the controller to process digital and analog inputs. The controller platforms were evaluated using the decision matrices shown in Table 3.8 and Table 3.9 for the perception controller and V2X optimization controller, respectively. The team considered factors such as sensor I/O, toolchain compatibility, support, cost, and durability, and then selected the controller with the highest score.

The dSPACE Embedded SPU uses RTMaps as its standalone software suite. RTMaps enables real-time sensor processing and the design of sensor fusion algorithms with a graphical interface. These algorithms are compiled directly onto the controller, enabling supplier code for the sensor I/O. However, the major drawback of this option is the cost: the team would have to purchase the hardware and additional software licenses.

The dSPACE MicroAutobox (MABx) serves as the Hybrid Supervisory Controller (HSC). The MABx has a limit of six CAN channels; however, the CAV perception system requires eight CAN channels, so the controller does not meet the sensor I/O requirements. Also, this option is relatively expensive compared to the other potential CAV controllers.

The Intel Tank AIoT is a powerful machine with Intel-specific tools. If chosen, this component would be donated to the team. With an Intel i7 processor and 8 GB of system memory, models can easily be compiled using Simulink compilation tools with the Catkin toolchain. The controller has high compatibility with the software workflow and extensive I/O for data throughput. Additionally, the 1 TB hard disk drive provides ample space for data storage during operation.

Finally, two NVIDIA controllers were considered. The NVIDIA Jetson TX2 was the primary CAV controller in EcoCAR 3. It consists of a dual-core Denver CPU, 8 GB of memory, and a Pascal GPU. The NVIDIA Drive PX2 is preinstalled with dual Linux desktops and has high compatibility with Linux-based APIs. It uses NVIDIA CUDA for optimized GPU acceleration, supported by most machine-learning and graphics frameworks.

Table 3.8: Decision Matrix - Perception Hardware

Design Criteria | Embedded SPU | dSPACE MicroAutoBox | Intel Tank AIoT | NVIDIA Jetson TX2 | NVIDIA Drive PX2
Sensor I/O | 5 | 1 | 5 | 0 | 4
Toolchain Compatibility | 2 | 5 | 3 | 3 | 3
Support | 3 | 5 | 4 | 2 | 2
Cost | 1 | 1 | 5 | 2 | 5
Durability | 5 | 5 | 4 | 3 | 3
Total | 16 | 17 | 21 | 10 | 17

Table 3.9: Decision Matrix - Real-Time Optimization Hardware

Design Criteria | Embedded SPU | dSPACE MicroAutoBox | Intel Tank AIoT | NVIDIA Jetson TX2 | NVIDIA Drive PX2
Sensor I/O | 4 | 1 | 4 | 0 | 4
Toolchain Compatibility | 3 | 5 | 1 | 3 | 5
Support | 3 | 5 | 4 | 2 | 2
Compute Performance | 4 | 0 | 2 | 3 | 5
Cost | 1 | 4 | 3 | 5 | 1
Total | 15 | 15 | 14 | 13 | 17
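The selection step amounts to summing each controller's criterion scores and picking the maximum. The Python sketch below reproduces the unweighted totals of Table 3.8 for the perception controller; the dictionary layout is illustrative, while the scores themselves are copied from the table.

```python
# Scores copied from Table 3.8; criteria order: Sensor I/O, Toolchain
# Compatibility, Support, Cost, Durability
perception_matrix = {
    "Embedded SPU":        [5, 2, 3, 1, 5],
    "dSPACE MicroAutoBox": [1, 5, 5, 1, 5],
    "Intel Tank AIoT":     [5, 3, 4, 5, 4],
    "NVIDIA Jetson TX2":   [0, 3, 2, 2, 3],
    "NVIDIA Drive PX2":    [4, 3, 2, 5, 3],
}

totals = {name: sum(scores) for name, scores in perception_matrix.items()}
best = max(totals, key=totals.get)
print(totals)  # the Intel Tank AIoT totals 21, the highest score
print(best)
```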

The selected compute systems, along with their primary uses, are shown in Figure 3.7. The Intel Tank AIoT was chosen as the primary perception controller, while the NVIDIA Drive PX2 was chosen as the primary optimization controller.

Figure 3.7: CAV Compute Hardware

3.4 Software Development Process

The software for the perception system was developed by a team of 15 students, comprising undergraduate and graduate students. The following sub-sections discuss in detail the different tools and software that the team used for the software development process.

3.4.1 Asana

The team used Asana to assign tasks and milestones using an agile software development approach. The software development process was iterative and actively managed through bi-weekly sprints. These sprints were tracked using Asana boards, as shown in Figure 3.8. Burn-down charts were created in Asana to track the progress of each student and manage deliverable and milestone timelines.

Figure 3.8: CAV Sprint Planning Board on Asana

3.4.2 Git Version Control

Git was utilized as the version control software to keep track of changes made to the code over time [41]. A separate repository was created for the CAV sub-team on GitHub to facilitate easy access and collaboration among different team members. The repository structure is shown in Figure 3.9.

Figure 3.9: Git Repository Structure

GitHub Issues was used to document issues with the code. Regular version releases were performed to ensure that the team could revert to a previously released software version if required. The version control process is shown in Figure 3.10. These procedures enabled rapid CAV software development.


Figure 3.10: Version Control Process

3.4.3 Software Stack

The team utilized two primary software stacks for the different testing methods of the perception system. MATLAB/Simulink was utilized for Model-in-the-Loop (MIL) and Hardware-in-the-Loop (HIL) testing, while Python 3.5.1.2 was used for Component-in-the-Loop (CIL) and Vehicle-in-the-Loop (VIL) testing. Since the perception system needed to be real-time implementable, Python was utilized as the primary software stack to deploy code onto the CAV controller.

3.4.4 ROS Framework

Robot Operating System (ROS) is a framework used for expediting research surrounding robotics and commercial application development [42]. The team used ROS as the primary framework for the perception system due to its high modularity between different software languages. Since ROS has a very large user base, it is stable and reliable [43]. The use of ROS allowed the team to develop and test algorithms using already-available software packages and tools, which saved development time.

Sensor data logging was essential in the development and testing of the perception system. This was done using Vector CANoe. The software allowed the team to record data at the sensor-specified baud rate in multiple file formats such as .mat, .blf, and .asc. The team made use of a Vector VN1630 log device to acquire data from all sensors and the GM CAN channels simultaneously. Figure 3.11 shows the data logging setup used by the team to collect sensor data.
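For offline playback of the recorded .blf logs outside of CANoe, a reader such as the one in the open-source python-can package can be used. The sketch below is an assumption about tooling (the logging itself was done in Vector CANoe), and the file name is a placeholder.

```python
import can  # python-can, assumed here as an offline reader for .blf logs

def replay_blf(log_path="front_radar.blf"):
    """Iterate over the CAN frames stored in a Vector .blf log for offline
    inspection; the file name is a placeholder."""
    with can.BLFReader(log_path) as reader:
        for msg in reader:
            # Each message carries a timestamp, arbitration ID, and payload
            print("%.3f  0x%X  %s" % (msg.timestamp, msg.arbitration_id,
                                      msg.data.hex()))

if __name__ == "__main__":
    replay_blf()
```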

Figure 3.11: Data Logging Setup

3.5 CAV System Overview

The CAV perception system was designed to meet the overall system-level requirements, as well as the competition guidelines. The following sub-sections will discuss the different functions performed by the CAV perception system along with the communication network used for implementation of the CAV system onto the vehicle.

3.5.1 CAV Functional Diagram

The CAV functional diagram shown in Figure 3.12 provides an overview of the different functionalities performed by the CAV perception system. The input layer consisted of two interfaces: a CAN interface and a ROS interface. The team developed the CAN interface on the perception controller to acquire data from all sensors simultaneously in real-time. This was done using SocketCAN and two Kvaser PCIe CAN adapters on the Intel Tank. ROS allowed the team to propagate data to different layers of the perception system. Each sensor was assigned a unique ROS node that was connected to the ROS network. Once the raw CAN data was propagated to the application layer, diagnostics were performed to ensure the validity of the data transmitted by the sensors. Valid sensor detections were then calibrated to enhance the accuracy of the obstacle positions reported by the sensors. Calibrated sensor messages and signals were then used by the sensor fusion algorithms to perform feature-specific actions. For Year 2 of the competition, the team focused primarily on fusing and tracking vehicles for ACC. Detections from the front-facing camera and radar were fused together to generate tracks. The track information consisted of target position and velocity. This information, along with the active feature status, was propagated to the HSC to alert the driver about the availability of CAV features in real-time.
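A minimal sketch of what one per-sensor I/O node could look like with SocketCAN and ROS is shown below. It is illustrative only: the channel name, topic name, and use of a plain string message are placeholders (the actual implementation used custom ROS messages, as discussed in Chapter 5).

```python
#!/usr/bin/env python
import can      # python-can, used here to access the SocketCAN channel
import rospy
from std_msgs.msg import String  # placeholder; custom messages were used in practice

def front_radar_io_node(channel="can0"):
    """Minimal per-sensor I/O node: read raw frames from a SocketCAN channel
    and publish them for the downstream application layer. The channel name,
    topic name, and message type are placeholders."""
    rospy.init_node("front_radar_io")
    pub = rospy.Publisher("front_radar/raw_can", String, queue_size=100)
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    while not rospy.is_shutdown():
        frame = bus.recv(timeout=0.1)
        if frame is not None:
            pub.publish("0x%X:%s" % (frame.arbitration_id, frame.data.hex()))

if __name__ == "__main__":
    front_radar_io_node()
```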

Figure 3.12: CAV Functional Diagram

3.5.2 CAV Serial Network Diagram

The CAV serial network diagram shown in Figure 3.13 consisted of a CAV Perception Controller (CPC), a CAV Optimization Controller (COC), and a Hybrid Supervisory Controller (HSC). The CAV software was separated from the propulsion software to allow adequate hardware and software resources for the perception system. All perception-related sensors communicated with the CPC via 7 private CAN buses. This was done to ensure that there was no CAN bus overload and no CAN ID conflict. The main task of the CPC was to perform sensor fusion and transmit the sensor-fused information to the HSC. The HSC was responsible for performing the three SAE Level 2 CAV features, namely Adaptive Cruise Control (ACC), Lane Centering Control (LCC), and Lane Change on Demand (LCOD). Data logging is crucial for vehicle development; thus, the raw CAN data as well as the algorithm output will be collected for offline playback and debugging. The CPC will communicate with the rest of the vehicle through the HSC over a private CAN bus. This interface allows the CPC to transmit sensor-fused target information to the HSC to perform all CAV-related features. This interface also monitors the status of the CPC and informs the vehicle and driver if an error has occurred in any of the CAV-related components or software. The NVIDIA Drive PX2 was chosen as the optimization controller. It communicates with the Cohda MK5 DSRC module over a private CAN bus to perform V2X features. The main responsibility of the COC is to transmit V2X information to the CPC via Ethernet, which is then sent to the HSC over the CAV-PCM interface CAN bus to increase fuel economy during ACC.


Figure 3.13: CAV Network Serial Diagram

XIL Testing Environment

This chapter will describe the different modeling environments used for the design, testing, validation and verification of the CAV perception system. Each modeling environment performed specific functions in the development of the CAV system through Year 1 and Year 2 of the EcoCAR Mobility Challenge. This chapter will serve as the foundation for the discussion on the development of different layers of the CAV Perception Controller (CPC) and will conclude with the milestones and the evolution of the CAV perception system in XIL (X-in-the-Loop) testing.

4.1 Introduction

The CAV perception system was designed, validated, and verified using a systems engineering approach. The system development process in different XIL environments is outlined in Figure 4.1. The system-level requirements were designed based on competition guidelines and the team's goal to design a SAE Level 2 perception system for the Mobility as a Service (MaaS) target market in Columbus, OH. These requirements were further broken down into subsystem and component-level requirements. The subsystem-level requirements were outlined during the architecture selection process and aided in selecting the sensor suite. Algorithm-level requirements were derived from the subsystem-level requirements to design perception algorithms to perform the three SAE Level 2 features: Adaptive Cruise Control (ACC), Lane Centering Control (LCC) and Lane Change on Demand (LCOD). A Model Based Design (MBD) approach was followed for the development of the CAV perception system. Sensor models were simulated using synthetic data to design CAV features and algorithms in the Model-in-the-Loop (MIL) environment. The perception system was then tested iteratively in Hardware-in-the-Loop (HIL), Component-in-the-Loop (CIL) and Vehicle-in-the-Loop (VIL) environments to validate and verify the perception system.

Figure 4.1: CAV Perception System Development in XIL – “V” Diagram

Different XIL testing environments were utilized to design, develop, and validate the CAV perception system. The four different testing environments and their uses are shown in Figure 4.2. The CAV perception system used MATLAB and Simulink as the software stack in the MIL and HIL environments, while Python was utilized for CIL and VIL testing.


Figure 4.2: XIL Testing Environments and Their Uses

4.2 Model-in-the-Loop (MIL)

The primary purpose of the MIL environment was to aid in the design and development of CAV features and algorithms. The team utilized MIL simulations in two ways: Perception System Design in Simulink and Offline Data Post-Processing in MATLAB. The following sub-sections will outline the role of each MIL environment in the design, testing and implementation of the CAV perception system.

4.2.1 Perception System Design in Simulink

The overview of the perception system design in MIL is shown in Figure 4.3. The perception system comprised two key environments: the plant and the controller. The MIL plant environment used MATLAB's Automated Driving Toolbox (ADT) to generate 100+ test scenarios using the European New Car Assessment Program (Euro NCAP) and team-defined guidelines. The driving scenario designer in ADT used a cuboid simulation environment where vehicles and other actors were represented as simple box shapes. This environment generated synthetic detections using low-fidelity radar and camera sensors. The scenarios and sensor data were imported into Simulink to test the controller and the perception system.

Figure 4.3: Perception System Design Overview in MIL

Driving scenarios were imported into Simulink using the Scenario Reader block, and information about actors and lane boundaries from the scenario environment was propagated to the sensor models. Individual models were designed for all six radars and the front camera. An overview of the sensor models' organization and structure is shown in Figure 4.4. Sensor models were developed using MATLAB's 'Radar Sensor Specification' and 'Vision Sensor Specification' data types. Detection generator blocks in Simulink were used to generate detections from camera and radar measurements taken by vision and radar sensors mounted on the ego vehicle. A statistical model generated measurement noise, true detections, and false alarms. Noise error was kept constant across all models for consistency. All sensor detections were then propagated to the CPC, where team-developed algorithms performed sensor fusion.

Figure 4.4: Sensor Models in MIL

The CPC was divided into different layers: Sensor I/O, Fault Detection, and Perception. An overview of the CPC testbench designed in Simulink is shown in Figure 4.5. The Sensor I/O and Fault Detection sub-systems are discussed in detail in Chapter 5 and Chapter 6, respectively.

Figure 4.5: CPC Testbench in Simulink

The perception subsystem in MIL consisted of the initial sensor fusion algorithm that concatenated detections from all seven sensors. The concatenated detections were fed to a Multi-Object Tracker (MOT), which was used to track multiple objects simultaneously and assign individual tracks to them. The Simulink MOT block was responsible for performing the initialization, confirmation, prediction, and deletion of tracks for moving objects. A Global Nearest Neighbor (GNN) criterion was used to generate and assign tracks for detections from multiple sensors. An Extended Kalman Filter (EKF) was utilized to estimate the state vector and the state vector covariance matrix for each track. These state vectors predicted a track's location in each frame and determined the likelihood of each detection being assigned to a track. The sensor fused targets and tracks were then visualized using the Birds-Eye Scope in Simulink.

The perception algorithms were evaluated using two industry-standard metrics: hit rate and miss rate. The hit-rate and miss-rate calculations were computed by comparing the object detection signal log from Simulink against the test-case scenario ground truth. Based on the simulation results, the perception algorithms were modified and calibrated to improve the accuracy of sensor fusion.
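Conceptually, the hit-rate computation reduces to counting how many ground-truth objects are matched by at least one detection within a tolerance. The sketch below illustrates this idea in Python; the 2 m association gate and the (x, y) data layout are illustrative assumptions, not the team's exact calibration.

    # Illustrative hit-rate / miss-rate computation against scenario ground truth.
    # The 2.0 m association gate and the (x, y) tuple layout are assumptions.
    import math

    def hit_miss_rate(ground_truth, detections, gate=2.0):
        """ground_truth, detections: lists of (x, y) positions for one frame."""
        hits = 0
        for gx, gy in ground_truth:
            # A ground-truth object counts as a hit if any detection lies
            # within the association gate.
            if any(math.hypot(gx - dx, gy - dy) <= gate for dx, dy in detections):
                hits += 1
        hit_rate = hits / len(ground_truth) if ground_truth else 1.0
        return hit_rate, 1.0 - hit_rate

    # Example: two of three ground-truth vehicles are detected.
    truth = [(20.0, 0.0), (45.0, 3.5), (80.0, -3.5)]
    dets = [(20.4, 0.2), (44.1, 3.9)]
    print(hit_miss_rate(truth, dets))  # -> (0.666..., 0.333...)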

4.2.2 Data Post-Processing

Once the team completed the initial design of the CAV perception system using synthetic data in Simulink, sensor data collected from real-world testing was post-processed using team-designed MATLAB scripts. This allowed the team to understand the Controller Area Network (CAN) data structure of the radars and the front camera. Raw detections from the sensors were visualized using the Birds-Eye Plot in MATLAB. This enabled the team to modify and calibrate the perception system algorithms. Figure 4.6 shows an overview of the data post-processing in the MIL environment.


Figure 4.6: Overview of Data Post-Processing in MIL

The offline data post-processing allowed the team to benchmark the sensors based on their respective Field of View (FOV) and detections. Detections from the radar and front camera were plotted on a Birds-Eye Scope next to raw video footage captured using a USB camera, as shown in Figure 4.7.


Figure 4.7: Post-Processed Data with Noise

An example use case for the MIL post-processing environment was to remove and filter raw sensor data noise before it could be used in the perception algorithm. Figure 4.7 shows the detections from the front camera and the front radar for a following lead car scenario. The mule-vehicle was made to follow the lead car around a curved road with guard rails. The radar detected the metal guard rails and picked up noise from the ground, which is highlighted in Figure 4.7. CAN messages from the supplier-provided .dbc file for the front radar were inspected, and two particular signals, CAN_TX_TRACK_STATUS and CAN_TX_TRACK_ONCOMING, were monitored. Using this MIL environment, it was found that filtering the radar data based on the 'updated targets' track status removed the noise from the front radar. Figure 4.8 shows the filtered detections and raw video footage for the following lead car scenario.


Figure 4.8: Filtered Post-Processed Data
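The filtering described above amounts to keeping only those radar tracks whose status signal indicates an updated target. The sketch below shows this idea in Python for decoded track dictionaries; the status labels are assumptions inspired by the supplier .dbc naming and may differ from the actual file.

    # Hedged sketch: drop radar tracks whose status is not an updated target.
    # The status labels below are assumed, not confirmed .dbc definitions.
    KEEP_STATUSES = {"Updated_Target", "New_Updated_Target"}

    def filter_radar_tracks(decoded_tracks):
        """decoded_tracks: list of dicts produced by .dbc decoding,
        e.g. {"CAN_TX_TRACK_STATUS": "No_Target", "range": 35.2, ...}"""
        return [t for t in decoded_tracks
                if t.get("CAN_TX_TRACK_STATUS") in KEEP_STATUSES]

    # Example: ground clutter and empty tracks are removed before fusion.
    raw = [
        {"CAN_TX_TRACK_STATUS": "Updated_Target", "range": 32.1},
        {"CAN_TX_TRACK_STATUS": "No_Target", "range": 0.0},
    ]
    print(filter_radar_tracks(raw))  # -> only the first track remains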

4.3 Hardware-in-the-Loop (HIL)

MIL simulations used ideal synthetic data for the development and design of the perception algorithms. The primary objective of HIL simulations was to design the sensor I/O layer in Simulink and test the CAN and ROS interface. Recorded sensor data from real-world testing was collected using the radar and camera sensors, and CAN replay was utilized to play back sensor data in Simulink. An overview of the HIL simulation process is shown in Figure 4.9.


Figure 4.9: Perception System Overview in HIL

Recorded playback of sensor data was achieved using MathWorks' Vehicle Networking Toolbox (VNT). This toolbox allowed the team to simulate recorded sensor data via virtual CAN. The CPC contained the I/O layer in Simulink, and this layer managed the CAN and ROS interface. The CAN interface performed the function of decoding the raw CAN data into sensor messages and signals using supplier-specified .dbc files. The ROS interface in Simulink was primarily used for data visualization. ROS enabled the team to plot detections from the sensors on a birds-eye plot. This was extremely useful in the evaluation of the individual sensor performance and the perception system algorithms.

Detections and the status of active faults from both forward-facing sensors were published as ROS topics in Simulink. These ROS topics contained information about the longitudinal and lateral obstacle position, as well as the number of active faults and a fault log for both forward-facing sensors. An example of publishing detections from the forward-facing sensors using ROS in Simulink is shown in Figure 4.10.


Figure 4.10: ROS Publishing in Simulink - CAN Replay

The detections from both forward-facing sensors were plotted using a team-designed Python script. This script utilized ROS to subscribe to the ROS topics published from Simulink and plotted the detections on a birds-eye plot.
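The team's plotting script is not reproduced here, but a minimal sketch of the same pattern, using rospy and matplotlib, is shown below. The topic name and the flat Float32MultiArray layout ([x1, y1, x2, y2, ...]) are illustrative assumptions.

    #!/usr/bin/env python
    # Hedged sketch of a birds-eye plot subscriber; the topic name and the
    # Float32MultiArray layout ([x1, y1, x2, y2, ...]) are assumptions.
    import rospy
    import matplotlib.pyplot as plt
    from std_msgs.msg import Float32MultiArray

    latest = []  # most recent flat detection list

    def callback(msg):
        global latest
        latest = list(msg.data)

    if __name__ == "__main__":
        rospy.init_node("birds_eye_plot")
        rospy.Subscriber("FrontESR_Detections", Float32MultiArray, callback)

        plt.ion()
        fig, ax = plt.subplots()
        rate = rospy.Rate(10)                 # redraw at 10 Hz
        while not rospy.is_shutdown():
            xs, ys = latest[0::2], latest[1::2]   # longitudinal / lateral [m]
            ax.clear()
            ax.set_xlabel("Lateral [m]")
            ax.set_ylabel("Longitudinal [m]")
            ax.set_xlim(-20, 20)
            ax.set_ylim(0, 100)
            ax.plot(ys, xs, "ro")             # plot detections ego-relative
            plt.pause(0.01)                   # let matplotlib process GUI events
            rate.sleep()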

4.4 Component-in-the-Loop (CIL)

CIL testing involved running real sensors on a physical testbench. This provided the team with a platform to use real sensors to transmit information to the CPC in real-time. CIL was primarily used for monitoring data latency issues and validation of sensor fault diagnostics. The CIL testbench comprised the two forward-facing sensors, the Mobileye camera and the Delphi ESR radar, which transmitted data to the Intel Tank as shown in Figure 4.11.

Figure 4.11: CIL Testbench

4.5 Vehicle-in-the-Loop (VIL)

VIL simulations utilized a mule vehicle for testing and validating the perception system through real-world testing in real-time. The two forward-facing sensors transmitted data to the CPC, which was responsible for performing perception-related tasks such as fault diagnostics, sensor fusion, and assigning lanes to detections in front of the ego-vehicle. The CPC was installed in the rear compartment of the mule vehicle, and a monitor was mounted in the back seat for data visualization in real-time. The vehicle setup overview for VIL testing is shown in Figure 4.12.


Figure 4.12: Vehicle Setup Overview in VIL

4.6 Evolution of Perception System in XIL

The team utilized four different modeling environments for the development of the CAV perception system. The four environments were used iteratively to calibrate, validate, and verify different layers of the perception system. The performance of the CAV perception system was assessed by evaluating the sensor fusion and tracking algorithm. True Positive Rate (TPR) was used as a metric to evaluate the accuracy of the sensor fusion and tracking algorithm. Table 4.1 outlines the four milestones that were used for the development of the perception system. The team utilized the MIL environment for the sensor selection process and the initial design of the CAV perception system. Scenarios were generated using MATLAB's ADT, and they were imported into Simulink to develop CAV features and algorithms. To test and validate the I/O layer for the CAV perception system, the team transitioned to the HIL environment, as it allowed for simulation of real sensor data using CAN replay. The third major milestone in the CAV system development was transmitting sensor data in real-time. Real sensors on a physical testbench were used in a CIL environment to validate real-time transmission of sensor data. The last milestone involved testing and validating the deployment of the CAV perception system on the competition vehicle.

The CAV system development involved making major modifications to the CPC in different XIL environments to achieve the perception system milestones, and develop a robust perception controller.

Table 4.1: Perception System Milestones

#   Phase                 Event                                                                                          MIL   HIL   CIL   VIL
1   Development Phase     Initial design and development of CAV features and algorithms using ADT                        ✓
2   Development Phase     CAN and ROS interface in Sensor I/O layer tested                                                     ✓
3   Development Phase     Sensor data transmitted in real time                                                                        ✓
4   Development Phase     Test and validate the accuracy of the CAV perception using in-vehicle testing                                     ✓
A   Major Modification    SF Algorithm – Developed new algorithm to be compatible with team sensor I/O                         ✓
B   Major Modification    Software Stack – Switched from MathWorks products to Python to meet computational time requirements        ✓
C   Major Modification    Calibration – Performed sensor longitudinal and lateral calibration to meet TPR requirements   ✓                 ✓
SF – Sensor Fusion; ADT – Automated Driving Toolbox; TPR – True Positive Rate

Figure 4.13 shows the evolution of the CAV perception system during Year 2 of the EcoCAR Mobility Challenge. In Phase 1, the CAV system met the perception system-level requirement of having a TPR greater than 0.8. This was done in the MIL environment, which utilized synthetic data for the initial design of the CAV features and algorithms. HIL testing was required to validate the sensor I/O using playback of sensor data collected through real-world testing. The CAV perception system did not meet the desired requirements when transitioning from the MIL to the HIL environment, and therefore the CPC I/O layer had to be modified to be compatible with the sensors. This major modification A, outlined in Table 4.1, allowed the system to be HIL compatible. To achieve Milestone 3, the team transitioned from the HIL to the CIL environment, where the sensors initially could not transmit data in real-time. Thus, major modification B occurred, which required the software stack to be switched from MATLAB to Python. By switching the software stack, the perception system was able to meet the computational time requirement and sensor data was transmitted in real time. During Phase 4, the team transitioned from CIL to VIL testing to test the accuracy of the perception system. When the perception system did not meet the TPR requirement, the team performed the final major modification C, which involved the longitudinal and lateral sensor calibration discussed in Chapter 7. This modification improved and validated the accuracy of the perception system and allowed the team to achieve the final milestone.


Figure 4.13: Evolution of CAV Perception System

Input/Output Layer

A primary objective for the development of the CAV system was the design of a reliable Input/Output (I/O) layer for the CAV perception system. This was essential for deploying perception algorithms on the vehicle. This is the first case study that will demonstrate the use of different testing environments to aid in the development of the perception system.

Originally, the I/O layer was designed in Simulink due to prior software experience and the toolchain that MathWorks provides. The team switched its software stack from Simulink to Python in January 2020 due to issues in real-time implementation of the perception algorithms. This chapter will discuss the testing and validation of the I/O layer in different testing environments, including the reasons for changing the software stack.

5.1 I/O Layer Requirements

The design of the CAV Perception Controller (CPC) began with outlining the requirements of the I/O layer, which can be found in Table 5.1. The requirements were derived from component-level specifications and competition guidelines. These requirements were crucial in designing, testing, and validating the I/O layer, which in turn was vital for the development of the CAV perception system.

Table 5.1: I/O Layer Requirements

ID    Description
5.1   The input layer shall acquire CAN data from the front radar and the front camera every 50 ms and 100 ms, respectively.
5.2   The input layer shall acquire CAN data from all seven sensors simultaneously, ensuring no CAN ID conflicts or CAN bus overload.
5.3   The output layer shall follow the CAV-PCM interface guidelines outlined by the competition.

5.2 Design of I/O Layer in Simulink

To design the perception system, the first step was to model the interface used to acquire data from the sensors. The perception system I/O layer performed the functions of acquiring data, decoding raw CAN messages, using ROS to visualize sensor detections, and transmitting the sensor fused targets to the Hybrid Supervisory Controller (HSC). Different tools and add-ons from the Simulink library were used to achieve this functionality; a high-level overview of the CAV perception system I/O is shown in Figure 5.1.


Figure 5.1: CAV System I/O Overview in Simulink

Data was transmitted from the sensors to the CPC using the Linux SocketCAN library and imported into Simulink using team-developed S-functions. MATLAB's Vehicle Networking Toolbox (VNT) was used to decode the sensor data and propagate it to the different layers of the CPC for performing sensor fusion and data visualization. The I/O layer is discussed in detail in the following sub-sections.

To model and simulate the I/O for the CAV system, a CAV Testbench was developed in Simulink. This model, shown in Figure 5.2, was designed to acquire data from all seven sensors simultaneously, process this data, and propagate it to different layers of the perception system. Figure 5.2 shows the different layers of the CPC with the sensor input layer and the CAV-PCM interface highlighted. The sensor input layer comprised the CAN Interface and the ROS Interface and will be discussed in the following sub-sections.


Figure 5.2: CAV Testbench in Simulink

5.2.1 CAN Interface

All sensors communicated with the CPC via CAN. To acquire data from the sensors, MATLAB's VNT was used. The sensors transmitted data to the CPC via SocketCAN. Custom S-functions were developed in Simulink to bring data into the software. Different CAN channels were accessed by specifying the CAN channel number in a constant block. An overview of the Simulink logic is shown in Figure 5.3.


Figure 5.3: Acquiring Data using Custom S-Functions in Simulink

Two custom S-functions written in C++, 'GetSocket' and 'ReceiveCANMsg', were used to obtain data. An example of the wrapper written to receive CAN data from the Mobileye camera is shown in Figure 5.4.

Figure 5.4: C++ Wrapper for ‘ReceiveCANMsg’ S-Function

Once the CAN data was acquired, VNT's CAN Unpack blocks decoded the raw data using supplier-provided .dbc files. These blocks decoded CAN data into sensor messages and signals at each timestep. These sensor messages and signals were then transmitted to different layers of the CPC.

5.2.2 ROS Interface

The team utilized Robot Operating System (ROS) to visualize sensor data and sensor fused tracks. Simulink's ROS toolchain was used, specifically the ROS Blockset. Throughout the model, local ROS networks were called upon by Simulink blocks to send and receive messages. The ROS node was first initialized in Simulink, and then the required sensor messages or controller signals were published to the ROS network. These messages were then monitored and visualized using ROS subscribe.

5.3 Validation of Simulink I/O in HIL

The MIL environment was not used to validate the I/O layer, because it used synthetic data as the input to the system. Therefore, the Simulink I/O layer was first validated in HIL testing. This was done by playing back recorded sensor data using the CAN Replay Blockset in MATLAB’s VNT.

As shown in Figure 5.5, recorded sensor data for an approach test scenario was transmitted using virtual CAN through different layers of the CPC, and the sensor detections were plotted on a Birds-Eye Plot using ROS.


Figure 5.5: I/O Validation in HIL

5.4 Shortcoming of Simulink I/O in CIL/VIL

After validating the I/O layer in HIL, the team transitioned to testing and verifying the I/O layer in CIL and VIL. In CIL, real sensors were deployed on a physical testbench. While performing CIL testing, sensor data could not be acquired in real-time. The I/O layer was transmitting the front radar and camera data slower than the required 50 ms and 100 ms update rates, respectively. Therefore, the Simulink I/O layer did not meet Requirement 5.1. Thus, the CAV perception system designed in Simulink could not be validated in CIL and VIL.

5.5 Change in Software Stack

The primary objective of the CAV perception system was to have the ability to run perception algorithms in real-time. Within the CIL and VIL testing environments, the Simulink I/O layer proved not to be real-time implementable. Thus, there was a need to migrate to a different software stack. The team decided to switch the software stack from Simulink to Python to aid in the real-time implementation of the perception system. The use of ROS in the perception system made the software stack transition easier. The majority of the algorithms for the perception system had already been developed and tested in Simulink in different XIL environments. This further helped the team make the transition within three weeks.

5.6 Design of I/O Layer in Python

The Python I/O layer was similar to the one designed in Simulink. Sensor data was acquired by the CPC via SocketCAN. Since Python is an open-source platform, it comes with a variety of existing libraries for perception applications. The Python library 'cantools' was used to receive sensor data. Once the raw CAN data was decoded using supplier-provided .dbc files, sensor messages were published as ROS topics. These ROS topics were subscribed to by the perception layer in the CPC to perform functions such as fusion and tracking, lane assignment, and velocity estimation of sensor fused targets. The sensor fused target information was then sent to the HSC via the CAV-PCM interface. The system overview of the CPC in Python is shown in Figure 5.6.


Figure 5.6: CAV System Overview in Python

5.6.1 CAN Interface

All sensors communicated with the CPC via CAN. Each sensor was allocated a private CAN bus to ensure that there was no CAN bus overload and no CAN ID conflicts. Two Kvaser PCIe CAN adapters available on the Intel Tank were used to interface with and obtain data from the sensors via SocketCAN in Linux.

5.6.1.1 CAN Decoding

Once raw CAN data from a sensor was accessible, it was decoded using a supplier-provided .dbc file. The .dbc file converted raw CAN data into different CAN messages. These CAN messages contained information about obstacle detection, the status of the sensor, and active faults present in the sensor. Each CAN message had a unique identifier in the form of a particular frame ID. Figure 5.7 shows a snippet of the Python code which uses the frame ID to decode the raw CAN data and associate it with the correct CAN message.


Figure 5.7: Python Code for decoding CAN messages
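Since Figure 5.7 is not reproduced in this text, the sketch below shows what such frame-ID-based decoding could look like with the cantools and python-can libraries. The .dbc file name and CAN channel are placeholders, not the team's exact code.

    # Hedged sketch: decode incoming SocketCAN frames by frame ID using cantools.
    # The .dbc path and CAN channel name are placeholders.
    import can
    import cantools

    db = cantools.database.load_file("front_radar.dbc")   # supplier .dbc (assumed name)
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    known_ids = {m.frame_id for m in db.messages}

    while True:
        frame = bus.recv(timeout=1.0)          # raw CAN frame, or None on timeout
        if frame is None or frame.arbitration_id not in known_ids:
            continue
        # Look up the message definition by frame ID and decode the payload
        # into named signals (e.g. range, azimuth, track status).
        message = db.get_message_by_frame_id(frame.arbitration_id)
        signals = message.decode(frame.data)
        print(message.name, signals)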

5.6.2 ROS Interface

Once data from all the sensors was obtained in the controller, ROS was used to propagate it through different layers of the perception system. Each sensor was assigned a separate ROS node to ease implementation.

5.6.2.1 ROS Initialization

A ROS network was established by initiating the ROS Master using the command 'roscore'. This provided registration and naming services to the rest of the nodes in the ROS system. Subsequently, the Python library 'cantools' was used to import CAN data from SocketCAN. An example of the ROS initialization code and the code to import CAN data into Python is shown in Figure 5.8.


Figure 5.8: Snippet of Python Code to Import CAN Data

5.6.2.2 ROS Message Layout

ROS uses a simplified message description language for describing the data values that ROS nodes publish [44]. These message descriptions assign a data type associated with that ROS message. The different data types that can be used are:

• int8, int16, int32, int64
• float32, float64
• string
• time, duration

For the front radar, a standard float32 multi-array ROS message was used, while a custom ROS message was developed for the front camera. Table 5.2 provides a description of the ROS messages used for the two forward-facing sensors.

Table 5.2: ROS Messages for Forward-Facing Sensors

Sensor         ROS Message           Description
Front Radar    Float32 Multi-Array   Contains obstacle information such as range, azimuth, and velocity
Front Camera   Custom ROS Message    Contains obstacle information such as range, velocity, and obstacle ID, and lane information such as distance to left and right lanes, curvature, lane width, etc.

5.6.2.3 Developing a custom ROS Message

A ROS message can be generated by creating a new file with a .msg file extension. The custom ROS message can include multiple data types. After adding the required data types to the custom ROS message, the ROS environment is sourced. Then, ROS packages are updated and configured using the command 'catkin_make'. Figure 5.9 shows the custom ROS message developed for the front camera. It includes four different data types used to publish sensor information. Use of a custom ROS message allows the system to update and transmit CAN messages with different data types at the same timestep, thus eliminating data latency issues.

Figure 5.9: Custom ROS Message for Front Camera

5.6.2.4 ROS Publishing

Valid CAN messages were then published onto the ROS Master as ROS topics so that they could be retrieved and used by any other ROS node. The different ROS topics for the two forward-facing sensors are specified in Table 5.3.

Table 5.3: ROS Topics for Forward-Facing Sensors

ROS Topic               Description                                                                        Data Type
FrontESR_Detections     Contains all CAN messages and signals related to obstacle detection                Float32 Multi-Array
MBE_Custom_Detection    Contains all CAN messages and signals related to obstacle detection and lane       Float32 Multi-Array
                        information
                        Contains all CAN messages and signals related to error codes                       Int16 Multi-Array
                        Contains all CAN messages and signals related to Mobileye status and active        String Multi-Array
                        faults
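A minimal sketch of publishing decoded radar detections on one of these topics is shown below; the flat (range, azimuth, velocity) packing per target and the 20 Hz rate are illustrative assumptions rather than the team's exact message layout.

    #!/usr/bin/env python
    # Hedged sketch of a radar detection publisher; the flat data packing
    # (range, azimuth, velocity per target) and the 20 Hz rate are assumptions.
    import rospy
    from std_msgs.msg import Float32MultiArray

    def publish_detections(get_targets):
        """get_targets: callable returning a list of (range, azimuth, velocity)."""
        pub = rospy.Publisher("FrontESR_Detections", Float32MultiArray, queue_size=10)
        rospy.init_node("front_radar_node")
        rate = rospy.Rate(20)                      # front radar updates every 50 ms
        while not rospy.is_shutdown():
            msg = Float32MultiArray()
            # Flatten all targets into a single float array for subscribers.
            msg.data = [v for target in get_targets() for v in target]
            pub.publish(msg)
            rate.sleep()

    if __name__ == "__main__":
        # Placeholder target source for illustration only.
        publish_detections(lambda: [(35.2, -0.05, 1.8)])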

5.7 Validation of Python I/O in CIL/VIL

Due to time constraints related to competition deadlines, the I/O layer in Python could not be validated in the MIL or HIL environments. The team transitioned directly to CIL testing to validate and verify the Python I/O layer.

Sensor data was acquired from both forward-facing sensors simultaneously in real-time, and the front radar and the front camera were able to transmit CAN data to the CPC every 50 ms and 100 ms, respectively. This validated the I/O layer using CIL simulation, and both requirements of the input layer were satisfied. Shown in Figure 5.10 is the CIL environment validation, with the raw CAN data obtained from the front camera in real-time and the data updated every 100 ms. The same results were seen on the vehicle during final validation.

Figure 5.10: CAN Data Acquired from Front Camera in Real-Time using CIL

5.8 CAV-PCM Interface

The primary objective of the perception system was to perform CAV features such as Adaptive Cruise Control (ACC), Lane Centering Control (LCC) and Lane Change on Demand (LCOD). These algorithms were designed on the Hybrid Supervisory Controller (HSC). The Intel Tank was the primary CAV perception controller, while the HSC acted as the primary Propulsion Control and Modelling (PCM) controller. In order to perform the above-mentioned features, sensor fused track information was required to be transmitted from the CAV controller to the HSC. The CAV-PCM interface served as the foundation for bringing the CAV and PCM systems together. This interface was responsible for transmitting and receiving data from both controllers simultaneously via CAN. Figure 5.11 shows a high-level overview of the CAV-PCM interface. This interface will be tested and validated in Year 3 of the competition to satisfy Requirement 5.3 concerning the output layer of the CPC.

Figure 5.11: CAV-PCM Interface

Fault Diagnostics

This case study describes the development of fault diagnostics for the CAV perception system. Diagnostics perform two primary functions: fault detection and fault mitigation. This section will discuss the different diagnostic types and the diagnostic algorithm used. An example of sensor diagnostics for the front camera and front radar will be outlined, which will be followed by the testing, validation, and verification of the diagnostic algorithm in different modeling environments.

6.1 Diagnostic Type

To understand the impact of different faults on the functionality and safety of the perception system, a CAV hazard analysis was performed. This analysis was the foundation for the design of the diagnostics. Diagnostics were classified into three types with their severity and description provided in Table 6.1.

Table 6.1: Diagnostic Type Description

Diagnostic Type     Severity   Description                                                                         Examples
Interface-Level     High       Detect and isolate faults caused by hardware interface failures                     Power on/off, CAN disconnected
Component-Level     Medium     Detect and isolate fault flags caused by internal sensor failures                   Validity bits, fault flags, rolling counter
Perception-Level    Low        Detect and isolate faults caused by inaccuracies that are beyond expected errors    Measurement inaccuracies

Interface and component-level diagnostics account for the highest-probability failures in the system. Interface diagnostics detect communication errors between the sensors and the controller. These faults have a high severity since they make the sensor or system inoperable. Component-level diagnostics monitor built-in sensor diagnostics and propagate error warnings. Perception-level diagnostics monitor the accuracy of data produced by the sensors and prevent inaccurate data from propagating through the perception system.

To develop a robust fault mitigation strategy for the perception system, a truth table, shown in Table 6.2, was designed to identify the sensors needed for each CAV feature. If a sensor needed for a specific feature was inoperable, a fault flag was generated for that feature. This information was then transmitted to the Hybrid Supervisory Controller (HSC), and an LED light was used to alert the driver of the unavailable CAV feature.

Table 6.2: Diagnostic Truth Table

CAV Features: Adaptive Cruise Control (ACC), Lane Centering Control (LCC), Lane Change on Demand (LCOD)

Sensors               ACC   LCC   LCOD
Front Camera          R     R     R
Front Radar           R     N     R
Left Corner Radar     N     N     R
Right Corner Radar    N     N     R
Left Side Radar       N     N     R
Rear Radar            N     N     R
R – Required; N – Not Required

6.2 Diagnostic Algorithm

The flow logic for the perception system's fault diagnostic algorithm is shown in Figure 6.1. First, the system checks for interface-level faults by monitoring incoming CAN data from the sensors using an if-else conditional statement. If no data is received, a fault flag for the sensor is generated. Based on the truth table shown previously in Table 6.2, the affected CAV feature is flagged as unavailable. This information is transmitted to the HSC. If data is received from the sensors, then the algorithm checks for a component-level fault based on the sensor's status messages. If an internal sensor fault is reported, a fault flag specific to the sensor is generated, and the same process of flagging the affected CAV feature as unavailable is followed. If there is no active sensor fault present, the sensor data is published to the Robot Operating System (ROS) node. The last fault check is for a perception-level fault. Published sensor data is subscribed to via ROS, and the detections from the front camera and the front radar are compared. If the error in the detections between the two forward-facing sensors is less than 3 m, the perception system performs sensor fusion to generate sensor fused tracks. This information is transmitted to the HSC, which is responsible for performing the CAV features. If the error is greater than 3 m, detections from both sensors are monitored for three consecutive timesteps. During those three consecutive timesteps, the obstacle distance reported by the front camera is considered as the sensor fused target distance. If the error persists for three successive timesteps, a fault flag is generated, and the information about the CAV feature being unavailable is transmitted to the HSC. All detected faults are logged with a timestamp using a ROSbag.
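The perception-level check described above can be summarized as a small debounced comparison between the two forward-facing sensors. The sketch below is an illustrative Python rendering of that logic, not the team's implementation; the function name, the averaged fused range, and the counter handling are assumptions.

    # Hedged sketch of the perception-level fault check described above.
    RANGE_ERROR_THRESHOLD_M = 3.0   # allowed disagreement between camera and radar
    MAX_BAD_TIMESTEPS = 3           # consecutive timesteps before latching a fault

    error_counter = 0

    def perception_level_check(camera_range_m, radar_range_m):
        """Returns (fused_range_m, fault_active) for one timestep."""
        global error_counter
        if abs(camera_range_m - radar_range_m) <= RANGE_ERROR_THRESHOLD_M:
            error_counter = 0
            # Sensors agree: the mean below is only a placeholder for the
            # actual output of the fusion and tracking algorithm.
            return 0.5 * (camera_range_m + radar_range_m), False
        error_counter += 1
        if error_counter < MAX_BAD_TIMESTEPS:
            # Intermittent disagreement: fall back to the camera range.
            return camera_range_m, False
        # Persistent disagreement: raise a fault; the feature is flagged unavailable.
        return camera_range_m, True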


Figure 6.1: Diagnostic Flow Logic

6.3 Sensor Diagnostic Overview

Sensor diagnostics were developed, tested, and implemented for the two forward-facing sensors. This sub-section details an interface-, component-, and perception-level diagnostic for the front camera and the front radar. The sensor diagnostics overview provides an example of the fault with its effect and description. This is followed by the test plan developed to induce the fault. Finally, the pass criteria for the diagnostic test and the system's response to the fault occurring during ACC are specified.

6.3.1 Front Camera Diagnostics

Table 6.3 to Table 6.5 show a detailed breakdown of an interface-, component-, and perception-level diagnostic developed for the front camera.

Table 6.3: Front Camera Diagnostic Check – Interface Level

Fault: Loss of CAN Communication from the HSC
Effect: Loss of CAN communication from the HSC, which will stop the transmission of the vehicle speed signal required for the functioning of the front camera.
Diagnostic Description: Monitoring the signal 'Error_Code'. If 'Error_Code = 32', it signifies that the camera is not receiving vehicle information from the HSC. In the case of a fault, the CAV Perception Controller (CPC) sets 'Mobileye_Fault_Active = 1' and logs the specific fault with a timestamp using ROS.
Test Plan: Unplug the CAN connection from the HSC to the CPC and monitor the signal 'Mobileye_Fault_Active'.
Pass Criteria: When 'Error_Code = 32', the diagnostic algorithm sets 'Mobileye_Fault_Active = 1' and logs the fault with a timestamp.
Response during ACC: The CPC will transmit 'ACC_Unavailable = 1' to the HSC. An interface-level fault would be a latching fault and require a vehicle restart to re-enable ACC.

Table 6.4: Front Camera Diagnostic Check – Component Level

Fault: Obstruction of Camera Lens
Effect: Inability to detect objects and produce valid data for sensor fusion due to the obstruction of the camera lens.
Diagnostic Description: Detecting and isolating the maintenance flag signal. 'Maintenance = 1' signifies that the component's lens is obstructed or there is an internal sensor failure. In the case of a fault, the CPC sets 'Mobileye_Fault_Active = 1' and logs the specific fault with a timestamp using ROS.
Test Plan: Simulate CAN messages from the front camera on a virtual testbench using the Vehicle Networking Toolbox (VNT) in a MIL environment and insert the 'Maintenance = 1' signal to induce a fault.
Pass Criteria: When 'Maintenance = 1', the diagnostic algorithm sets 'Mobileye_Fault_Active = 1' and logs the fault with a timestamp.
Response during ACC: The CPC will transmit 'ACC_Unavailable = 1' to the HSC. The CPC will continue to monitor the 'Maintenance' signal, and if 'Maintenance = 0' for three consecutive timesteps, the CPC will transmit 'ACC_Unavailable = 0' to the HSC. This would allow the driver to re-enable ACC.

Table 6.5: Front Camera Diagnostic Check – Perception Level

Fault: Invalid Lane Line Information
Hazard: Unable to detect lane lines, which is required for classifying whether an obstacle is in the ego-vehicle's path.
Diagnostic Description: Detecting and isolating the 'LDW_Off' signal. 'LDW_Off = 1' signifies that Lane Departure Warning (LDW) is unavailable because the camera cannot detect lane lines. If 'LDW_Off = 1' for < 10 seconds, the fault handler is not triggered, and the GPS/IMU is used along with previously forecasted lane information. If 'LDW_Off = 1' for > 10 seconds, 'Mobileye_Fault_Active' is set to '1'. During vehicle testing, the fault conditions will be further calibrated.
Test Plan: Replay data from an urban scenario with varied road signs and road markings in the HIL environment to evaluate lane line information produced by the camera.
Pass Criteria: When 'LDW_Off = 1' for < 10 seconds, the fault handler is not triggered. When 'LDW_Off = 1' for > 10 seconds, the algorithm sets 'Mobileye_Fault_Active' to '1' and logs the fault with the timestamp.
Response during ACC: The CPC will transmit 'ACC_Unavailable = 1' to the HSC. The CPC will continue to monitor the 'LDW_Off' signal, and if 'LDW_Off = 0' for > 10 s, the CPC will transmit 'ACC_Unavailable = 0' to the HSC. This would allow the driver to re-enable ACC.

6.3.2 Front Radar Diagnostics

Table 6.6 to Table 6.8 show a detailed breakdown of an interface-, component-, and perception-level diagnostic developed for the front radar.

Table 6.6: Front Radar Diagnostic Check – Interface Level

Fault: Sensor Harness Unplugged
Effect: The front radar becomes inoperable due to the loss of CAN communication.
Description: Monitoring CAN data from the front radar to the CPC at each timestep. If there is no incoming CAN data for 60 ms, 'FrontRadar_Interface_Fault' will be set to '1' and a flag for the front radar fault active will be triggered.
Test Plan: On a CIL testbench, unplug the connector between the front radar and the CPC.
Pass Criteria: When the CPC does not receive CAN data from the front radar, the diagnostic algorithm sets 'FrontRadar_Interface_Fault' to '1', generates a flag for the front radar fault active, and logs the fault with the timestamp using ROS.
Response during ACC: The CPC will transmit 'ACC_Unavailable = 1' to the HSC. The CPC requests that the HSC alert the driver to take control and warn of the feature's disabled state. Faults of this type would be latching faults and would require a vehicle restart to re-enable ACC.

Table 6.7: Front Radar Diagnostic Check – Component Level

Fault: Radar Malfunction
Effect: The inability to detect objects leads to invalid data for sensor fusion.
Description: Detect and isolate the 'XCVR_Operational' flag signal. 'XCVR_Operational = 0' signifies that the radar is not operating correctly. In the case of a fault, the CPC sets 'FrontRadar_Fault_Active = 1' and logs the fault with a timestamp.
Test Plan: Simulate front radar CAN messages on a virtual testbench using VNT in a MIL environment and insert 'XCVR_Operational = 0' to induce a fault.
Pass Criteria: When 'XCVR_Operational = 0', the diagnostic algorithm sets 'FrontRadar_Fault_Active = 1' and logs the specific fault with the timestamp.
Response during ACC: The CPC will transmit 'ACC_Unavailable = 1' to the HSC. The CPC requests that the HSC alert the driver to take control and warn of the feature's disabled state. The CPC will continue to monitor the 'XCVR_Operational' signal, and if 'XCVR_Operational = 1' for three consecutive timesteps, the CPC will transmit 'ACC_Unavailable = 0' to the HSC and allow the driver to re-enable ACC.

Table 6.8: Front Radar Diagnostic Check – Perception Level

Fault: Inaccurate Longitudinal Detections
Effect: Inability to produce accurate obstacle detections, which is required to perform sensor fusion.
Description: Comparing the longitudinal detections between both forward-facing sensors. An error threshold region of 3 m is defined. If the error is greater than three meters, detections from both sensors are monitored for three consecutive timesteps. If the error is intermittent, the obstacle distance reported by the front camera is considered as the sensor fused target distance. However, if the error persists for three successive timesteps, a fault flag is generated, the information about the unavailability of the CAV feature is transmitted to the HSC, and the fault is logged with the timestamp. These thresholds and conditional statements will be further calibrated during vehicle testing.
Test Plan: Replay recorded sensor data in HIL from various test scenarios and monitor the longitudinal range values reported by both forward-facing sensors.
Pass Criteria: When the error between the reported range values from both sensors is greater than three meters for three consecutive timesteps, the diagnostic algorithm sets both 'FrontRadar_Fault_Active' and 'Mobileye_Fault_Active' to '1' and logs the specific fault with a timestamp.
Response during ACC: The CPC will transmit 'ACC_Unavailable = 1' to the HSC. The CPC requests that the HSC alert the driver to take control and warn of the feature's disabled state. The CPC will continue to monitor the longitudinal range values reported by the two forward-facing sensors. If the error value is less than three meters for three consecutive timesteps, the CPC will transmit 'ACC_Unavailable = 0' to the HSC and allow the driver to re-enable ACC.

6.4 Diagnostics in XIL

Since CAV is an active safety system, the primary objective was to develop robust diagnostics to ensure driver safety. To achieve a high level of robustness, diagnostics were designed and tested in various XIL environments.

Table 6.9: Diagnostics in XIL Environments

Simulation Environment           Fault Injection Method
Model-in-the-Loop (MIL)          Spoofing Faulty CAN Messages
Hardware-in-the-Loop (HIL)       Recorded Sensor Data Playback
Component-in-the-Loop (CIL)      Unplugging Sensor Harness from Controller
Vehicle-in-the-Loop (VIL)        Disabling Transmission of Sensor Checksums

Table 6.9 specifies the different XIL simulation environments along with the fault injection method. These simulation environments were used iteratively to develop, test, implement, and validate diagnostics.

6.4.1 Diagnostic in MIL

A virtual testbench was developed to design and implement the fault diagnostic requirements for the front sensors. The MIL environment for the front camera is shown as an example in Figure 6.2.

The CAN messages and signals from the front camera were simulated on a virtual bus to the CPC using MathWorks' VNT. Fault diagnostics were implemented after the CAN messages were decoded.

Figure 6.2 highlights the design and implementation of one component-level diagnostic: detecting and isolating the front camera’s maintenance flag signal. The maintenance flag signifies that the component’s lens is obstructed, or the sensor may be damaged or unable to perform properly. The testbench also includes additional component-level diagnostics to illustrate how multiple component faults were organized.


Figure 6.2: Virtual Testbench for MIL Testing

The maintenance signal fault logic is as follows: the maintenance signal is "0" if there are no faults present. In this case, the CPC propagates the camera data to the fusion and application algorithms. In the case of a fault, the signal is set to "1" and a fault handler is triggered, which increments a system fault counter, logs the specific fault with a timestamp, and enters a fail state.
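In Python terms, the same maintenance-flag logic could be captured as below; the counter, log format, and fail-state handling are illustrative assumptions rather than the team's Simulink implementation.

    # Hedged Python rendering of the maintenance-flag fault logic described above.
    import time

    system_fault_counter = 0
    fault_log = []   # (timestamp, description) pairs, standing in for the ROS log

    def handle_maintenance_signal(maintenance, camera_data):
        """Returns camera_data if healthy, or None after entering the fail state."""
        global system_fault_counter
        if maintenance == 0:
            # No fault: propagate camera data to fusion and application layers.
            return camera_data
        # Fault: increment the counter, log it with a timestamp, enter fail state.
        system_fault_counter += 1
        fault_log.append((time.time(), "Mobileye maintenance flag set"))
        return None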

6.4.2 Diagnostic in HIL

After developing the initial sensor diagnostics in MIL, real sensor data was used to perform HIL simulations. Sensor data was collected from various on-road test scenarios and replayed using the VNT in MATLAB. This method allowed the team to improve the diagnostic algorithms developed in MIL in a higher-fidelity environment. Figure 6.3 shows an example of the diagnostics implemented for the front camera in HIL. Sensor faults were detected by monitoring status messages from the front camera, and the detected sensor faults were then logged and visualized using ROS.

Figure 6.3: Front Camera Diagnostics Algorithm in HIL

The fault logic used to detect these faults made use of Stateflow charts. Figure 6.4 shows an example of the Stateflow designed for the front camera diagnostics based on the signal 'Error_Active'. This signal was taken as the input, while Error and No Error were chosen as the two output states. The Stateflow was initialized in the No Error state, corresponding to 'Error_Active = 1' based on the supplier-provided .dbc file.

The Stateflow transitioned into the Error state when 'Error_Active = 0' for three consecutive timesteps, i.e., 300 ms. In this state, the Stateflow output Error = True and No Error = False. This state signified that there was an active fault present in the front camera. The input CAN signal was monitored at all timesteps. The Stateflow remained in the Error state until 'Error_Active = 1' for three consecutive timesteps, which caused the Stateflow to transition back to the No Error state.

Figure 6.4: Front Camera Diagnostic Stateflow
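Outside of Stateflow, the same debounce behavior can be expressed as a small two-state machine. The sketch below is a Python analogue written for illustration; the 100 ms timestep matches the text, while the class and method names are assumptions.

    # Hedged Python analogue of the Error / No Error Stateflow debounce logic.
    class ErrorDebounce:
        """Two-state machine; 'Error_Active = 1' means no error per the .dbc."""
        def __init__(self, debounce_steps=3):
            self.debounce_steps = debounce_steps   # 3 steps * 100 ms = 300 ms
            self.in_error = False
            self.counter = 0

        def update(self, error_active):
            # Count consecutive timesteps that contradict the current state.
            contradicts = (error_active == 0) if not self.in_error else (error_active == 1)
            self.counter = self.counter + 1 if contradicts else 0
            if self.counter >= self.debounce_steps:
                self.in_error = not self.in_error   # transition after 3 timesteps
                self.counter = 0
            return self.in_error

    # Example: three consecutive zeros latch the Error state.
    sm = ErrorDebounce()
    print([sm.update(v) for v in [1, 0, 0, 0, 1, 1, 1]])
    # -> [False, False, False, True, True, True, False]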

If an active fault was present, the fault diagnostic algorithm combined the interface-, component-, and perception-level faults and generated an active fault flag, deeming the sensor inoperable.

Figure 6.5 shows the logic for the fault diagnostics strategy to detect sensor faults.

Figure 6.5: Sensor Fault Diagnostics Strategy

A key objective of the CAV fault diagnostics was to transmit information about the availability of CAV features to the HSC. For this, feature-level fault detection strategies were implemented as shown in Figure 6.6. Based on the truth table shown previously in Table 6.2, active faults from the front camera and the front radar were coupled to determine if ACC was available to the driver. This information was transmitted to the HSC at all timesteps to ensure appropriate actions were taken to alert the driver about the availability of the CAV features.

Figure 6.6: Feature-Level Fault Detection Strategy

MIL simulations used synthetic data, and therefore it was crucial to test and validate the implemented diagnostics in HIL using the playback of recorded real sensor data. These diagnostics were visualized using different ROS topics, as shown in Figure 6.7. These ROS topics included the sensor's active fault status, specifically the number of faults and the description of each fault. Additionally, information about the availability of CAV features was provided.


Figure 6.7: ROS Topics for Fault Diagnostics

6.4.2.1 Validation of Fault Diagnostic in HIL

The fault diagnostics were validated in HIL through the fault logs for the front radar and the front camera, which gave information about the active fault status of these sensors as well as the type and description of active faults. The individual sensor-level faults were combined to generate feature-level faults, which were monitored using separate ROS topics.

Recorded sensor data from a following lead-car scenario was played back, and the detections and fault logs were visualized using ROS. The diagnostic algorithm was tested to observe whether the sensors operated in a normal state and plotted detections accurately. This was done to ensure that the diagnostic algorithm did not cause any false-positive faults and provided accurate information about the availability of CAV features.

Figure 6.8 shows a screenshot of the Birds-Eye Plot (BEP), the raw video footage, the fault log of the two forward-facing sensors, and the availability of longitudinal control. Detections plotted on the BEP closely matched the vehicle position in the raw video footage, and there were no active faults present in the two sensors. This verified that the diagnostic algorithm did not trigger a false positive during replay of recorded sensor data. Thus, the diagnostics were validated using HIL testing.

Figure 6.8: Fault Diagnostics Validation in HIL for Following Lead-Car Scenario

6.4.3 Diagnostic in CIL

The team switched the software stack from MATLAB to Python for real-time implementation of the CAV perception system. The primary objective of CIL testing was to detect data latency and faults in sensor I/O. In CIL, the interface-level diagnostics were tested to validate and verify the fault diagnostics algorithm. The transmission of CAN data from the front camera to the CPC was monitored to detect and isolate interface-level faults with the Python code shown in Figure 6.9.

Figure 6.9: Code for Interface-Level Fault Detection in Python
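Figure 6.9 is not reproduced in this text; the sketch below shows one way such an interface-level timeout check might look in Python with python-can. The 100 ms timeout for the camera, the channel name, and the flag name are assumptions.

    # Hedged sketch of an interface-level fault check: flag the front camera as
    # faulted if no CAN frame arrives within its expected 100 ms update period.
    import time
    import can

    CAMERA_TIMEOUT_S = 0.1                 # expected front camera update period
    bus = can.interface.Bus(channel="can1", bustype="socketcan")  # placeholder channel

    mobileye_interface_fault = False
    last_rx_time = time.monotonic()

    while True:
        frame = bus.recv(timeout=CAMERA_TIMEOUT_S)
        now = time.monotonic()
        if frame is not None:
            last_rx_time = now
            mobileye_interface_fault = False
        elif now - last_rx_time > CAMERA_TIMEOUT_S:
            # No data within the expected period: raise the interface-level fault
            # so that ACC can be reported as unavailable to the HSC.
            mobileye_interface_fault = True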

6.4.3.1 Validation of Fault Diagnostic in CIL

The fault diagnostics algorithm was tested in CIL by connecting the front camera to the CPC on a physical testbench. The incoming CAN data from the sensor was monitored, and a fault flag was generated if new data was not updated every 100 ms. The sensor harness connecting the front camera to the CPC was disconnected and the fault diagnostics log was visualized using ROS as shown in Figure 6.10.


Figure 6.10: Fault Diagnostics Validation in CIL – Unplugging Sensor Harness

Upon unplugging the sensor harness, the front camera fault active flag was generated. This validated that the diagnostic algorithm was able to detect and isolate faults in CIL.

6.4.4 Diagnostic in VIL

The next step in the development of the CAV diagnostics was the validation and verification of the diagnostic algorithm using in-vehicle testing. The 2019 Chevy Blazer was utilized as the mule vehicle to perform VIL testing to validate the diagnostics algorithm. For this, status signals such as Maintenance, Error Active and Error Code were monitored to detect and isolate any active component-level faults in the front camera. The fault diagnostic algorithm in Python is shown in Figure 6.11.


Figure 6.11: Code for Component-Level Fault Detection in Python

6.4.4.1 Validation of Fault Diagnostic in VIL

The front camera requires the transmission of vehicle signals such as vehicle speed, brakes, turn signals, and high beam for its proper functioning. Failure to transmit these signals affects the sensor's performance and prevents it from detecting obstacles, thus making it unusable. To test the fault diagnostic algorithm in VIL, the transmission of these signals was disabled, and the diagnostic log was visualized using ROS as shown in Figure 6.12. The failure to receive vehicle information introduced an error in the Mobileye camera, which was observed by monitoring the Error Code signal. The diagnostics algorithm generated an active fault flag for the front camera, and thus the fault diagnostics algorithm was validated in VIL.


Figure 6.12: Fault Diagnostics Validation in VIL – Disabling Transmission of Vehicle Signals

6.5 Summary

The fault diagnostic algorithm was designed and tested in MIL, and was validated using HIL, CIL, and VIL through different fault injection methods. Since the CAV perception system is an active safety system, verifying that faults were detected and diagnosed robustly in different XIL testing environments was vital in the development of the CAV system. The team will continue to test, develop, and validate more robust diagnostic methods and standards for the CAV perception system to ensure driver safety throughout years 3 and 4 of the competition.

Sensor Calibration

This chapter presents the last case study, which focuses on the sensor calibration algorithm. The calibration process for the two forward-facing sensors and the effect of the calibration process on the fusion and tracking algorithm will be discussed. The purpose of sensor calibration was to increase the accuracy of the CAV perception system and ensure passenger safety when engaging CAV features such as ACC.

7.1 Motivation for Sensor Calibration

The chief objective of utilizing Advanced Driver Assistance Systems (ADAS) is to prevent accidents and ensure driver safety. Thus, it is crucial to calibrate the sensors governing these systems to correctly estimate obstacle distance and velocity, in order to enhance the performance of CAV features and reduce the risk of accidents [45].

7.2 Front Camera Calibration

The calibration of the Mobileye camera ensures that the camera detects obstacles and lane lines accurately. The camera uses a software technique that interprets a video feed in real-time to detect obstacles and lane lines within its field of view (FOV). It then utilizes the detected obstacles and lane lines to estimate distance and velocity that are useful in performing functions such as Lane Departure Warning (LDW) and Forward Collision Warning (FCW).

The seven major steps in the Mobileye calibration process can be found in Figure 7.1.


Figure 7.1: Mobileye Calibration Process

The initial Mobileye calibration step involved setting up the components by following the wiring scheme shown in Figure 7.2. The required vehicle parameters for the calibration process such as speed, turn signals, brakes and high beam were provided to the camera via CAN.

Figure 7.2: Mobileye Camera Wiring Scheme

Once the camera had been wired according to the wiring scheme, the software calibration process was initiated using the Mobileye Setup Wizard. The camera is mounted to the vehicle windshield, while the EyeWatch is attached to the dashboard. The camera's FOV is fixed by adjusting the camera lens position, and measurements from the camera to various reference points on the vehicle are taken as shown in Figure 7.3.

Figure 7.3: Mobileye Measurements

The measurements played a crucial role in the calibration process, and thus the distances to the reference points had to be measured accurately. The distances from the center of the camera lens to various reference points were measured using a measuring tape and are shown in Table 7.1. These reference points included the camera height, distance to the front bumper, vehicle width, left windshield edge, and right windshield edge.

Table 7.1: Reference Points for Mobileye Calibration

Reference Point                    Description                                                     Distance [m]
Camera height                      Vertical distance from the ground up to the camera              1.34
Distance to front bumper           Horizontal distance from camera to the vehicle front bumper     1.85
Vehicle width                      Distance between the outer edges of the front wheel             1.60
Camera to left windshield edge     Lateral distance from camera to left windshield edge            0.82
Camera to right windshield edge    Lateral distance from camera to right windshield edge           0.54

7.2.1 Automatic Calibration

The Mobileye camera’s automatic calibration process utilized physical on-road driving to enhance accuracy of the sensor [46]. The calibration technique allows the system to calibrate itself within

5-10 minutes of on-road driving and is influenced by the road and driving conditions, so it is advised to perform the on-road calibration on a straight road with clear lane line markings [47].

During the calibration process, the EyeWatch produces short beeps every five seconds and displays

“CL” with a calibration competition percentage. Upon reaching 99% calibration, the mobileye system restarts itself and resumes normal operation. The team does not have access to the camera’s computer vision software, and there was not a noticeable offset in the camera detections post calibration, therefore this automated supplier calibration process was deemed acceptable for the perception system.

7.3 Sensor Calibration for the Front Radar

The sensor calibration process for the front radar was done to increase the accuracy of obstacle detection and enhance the performance of sensor fusion. After post-processing the front radar's data, there was a noticeable offset in the longitudinal and lateral obstacle positions reported by the front radar. This offset can be observed in Figure 7.4, which shows the longitudinal and lateral distances reported by the front radar compared with the Global Positioning System (GPS) ground truth during a straight-line approach test.

Figure 7.4: Front Radar vs. GPS Ground Truth - Approach Test

7.3.1 Calibration in MIL

Measurements from the GPS were used to calibrate the front radar data in Model-in-the-Loop (MIL) testing. A 2-D lookup table was developed for longitudinal and lateral distances, as shown in Figure 7.5. The lookup table correlated and transformed the front radar data to the corresponding GPS measurements to remove the error. The lookup table implemented in Simulink had breakpoints and table data. The breakpoints vector corresponded to values reported by the front radar, while the table data defined the associated set of output values reported by the GPS. The range of the input data was based on the perception system's FOV. When the lookup table input values fell between breakpoint values, the output value was interpolated using the neighboring breakpoints.

Figure 7.5: Break Points and Table Data for Longitudinal and Lateral Calibration

7.3.2 Validation of Calibration in MIL

To test and validate the calibration of the front radar in MIL, data collected from various vehicle tests conducted at the Transportation Research Center (TRC) were post-processed using team-developed MATLAB scripts. This involved comparing the longitudinal and lateral distances from the front radar with the GPS ground truth. The following sub-sections will discuss the effects of calibration on the accuracy of the front radar and the sensor fusion algorithm.
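As an illustration of this kind of post-processing comparison, the sketch below aligns logged GPS ground truth with the radar timestamps before computing the error signals. The file names and column layout are assumptions for illustration only, and the team's actual post-processing scripts were written in MATLAB.

import numpy as np
import pandas as pd

# Assumed log layout: columns time_s, long_dist_m, lat_dist_m in each file
radar = pd.read_csv("front_radar_log.csv")
gps = pd.read_csv("gps_ground_truth.csv")

# Interpolate the GPS distances onto the radar timestamps so samples can be compared directly
gps_long = np.interp(radar["time_s"], gps["time_s"], gps["long_dist_m"])
gps_lat = np.interp(radar["time_s"], gps["time_s"], gps["lat_dist_m"])

long_error = radar["long_dist_m"].to_numpy() - gps_long  # signed longitudinal error [m]
lat_error = radar["lat_dist_m"].to_numpy() - gps_lat     # signed lateral error [m]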

7.3.2.1 Effect of Calibration on the Accuracy of Front Radar

The calibration of the front radar increased the accuracy of obstacle detection. The longitudinal and lateral distances obtained from the front radar post-calibration were closer to the distances acquired from the GPS. Figure 7.6 shows the effect of calibration on the longitudinal distances reported by the front radar for a straight-line approach test and a lane cut-in scenario.

Figure 7.6: Effects of Longitudinal Calibration on Front Radar

To further evaluate the effect of calibration, a statistical analysis was performed to measure the error between the longitudinal and lateral range values before and after calibration. The statistical analysis metrics were the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE).

These two metrics are regularly employed in model evaluation studies to draw conclusions about statistical data [48]. The MAE measures the average magnitude of the errors without considering their direction, weighing each timestep error equally. The RMSE also measures the average magnitude of the error, but since the errors are squared before averaging, larger errors are assigned greater weight.

The formulas used for calculating the MAE and the RMSE are shown in Equations (1) and (2):

MAE = \frac{1}{n} \sum_{i=1}^{n} \left| f_i - o_i \right|   (1)

RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( f_i - o_i \right)^2}   (2)

where f is the expected value (GPS), o is the observed value (front radar), and n is the number of samples compared.
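As an illustrative aid, the following Python sketch computes these two metrics for aligned GPS and radar range arrays; it is a minimal example of the equations above, not the team's MATLAB post-processing code, and the sample values are hypothetical.

import numpy as np

def mae(f: np.ndarray, o: np.ndarray) -> float:
    """Mean Absolute Error, Equation (1): average error magnitude, each sample weighted equally."""
    return float(np.mean(np.abs(f - o)))

def rmse(f: np.ndarray, o: np.ndarray) -> float:
    """Root Mean Square Error, Equation (2): squaring assigns greater weight to larger errors."""
    return float(np.sqrt(np.mean((f - o) ** 2)))

# Example with hypothetical values: f = GPS ground truth [m], o = front radar ranges [m]
f = np.array([10.0, 9.5, 9.0, 8.5])
o = np.array([10.6, 10.1, 9.5, 9.1])
print(mae(f, o), rmse(f, o))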

Table 7.2 shows the error analysis for both the longitudinal and lateral calibration for four different test scenarios: a stationary test from 50 m, an approach test from 10 m, a lane cut-in scenario, and a lane cut-out scenario. The data for all these tests were obtained from vehicle testing conducted at TRC with a GPS accurate to ±9 mm. The before-calibration data used raw distances reported by the front radar, while the after-calibration data utilized the 2-D lookup tables to apply an offset based on the GPS data. From Table 7.2, it can be observed that the average mean absolute error decreased by 0.49 m and the average RMS error decreased by 0.30 m for the longitudinal distances, while decreases of 0.83 m and 0.70 m were observed in the MAE and RMSE, respectively, for the lateral calibration.

Table 7.2: Error Analysis - Effect of Calibration on Front Radar

Test Scenario | Long. MAE [m] (Before / After Cal) | Long. RMSE [m] (Before / After Cal) | Lat. MAE [m] (Before / After Cal) | Lat. RMSE [m] (Before / After Cal)
Stationary Test – 50 m | 0.58 / 0.06 | 0.56 / 0.09 | 1.34 / 0.15 | 1.36 / 0.22
Approach Test – 10 m | 0.88 / 0.48 | 0.91 / 0.54 | 0.81 / 0.43 | 0.83 / 0.46
Lane Cut In | 0.59 / 0.09 | 0.63 / 0.44 | 1.03 / 0.06 | 1.08 / 0.31
Lane Cut Out | 0.76 / 0.23 | 1.09 / 0.91 | 0.85 / 0.05 | 0.94 / 0.44
Average Error | 0.70 / 0.21 | 0.79 / 0.49 | 1.00 / 0.17 | 1.05 / 0.35

To further investigate the effect of calibration on the performance of the front radar, the team used boxplots of the longitudinal and lateral error between the radar and the GPS values. These boxplots, shown in Figure 7.7 and Figure 7.8, summarize the median error, the upper and lower quartiles, and the maximum and minimum error, illustrating the impact of calibration.
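A minimal sketch of how such boxplots might be generated is given below. The per-scenario error arrays are assumed to be precomputed, and the figure styling is illustrative rather than a reproduction of Figure 7.7 or Figure 7.8.

import matplotlib.pyplot as plt

def plot_error_boxplots(errors_before, errors_after, scenario_labels, title):
    """Side-by-side boxplots of radar-vs-GPS error before and after calibration.
    errors_before / errors_after: lists of 1-D error arrays, one per test scenario."""
    fig, axes = plt.subplots(1, 2, sharey=True, figsize=(10, 4))
    axes[0].boxplot(errors_before, labels=scenario_labels)
    axes[0].set_title("Before Calibration")
    axes[0].set_ylabel("Error [m]")
    axes[1].boxplot(errors_after, labels=scenario_labels)
    axes[1].set_title("After Calibration")
    fig.suptitle(title)
    plt.show()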


Figure 7.7: Boxplots for Longitudinal Error Between Front Radar and GPS


Figure 7.8: Boxplots for Lateral Error Between Front Radar and GPS

Figure 7.7 and Figure 7.8 show that, after calibration, both the median error and the inter-quartile range decreased for the longitudinal and lateral errors reported by the front radar, quantifying the improvement in the accuracy of the radar's detections due to calibration.

7.3.2.2 Effect of Calibration on the Fusion and Tracking Algorithm

Next, the effect of the longitudinal and lateral calibration on the fusion and tracking algorithm was determined. The fusion and tracking algorithm combined the detections from the front camera and the front radar to produce sensor-fused targets. The longitudinal and lateral distances of the sensor-fused target were compared with the distances from the GPS for an approach test and a stationary test scenario, as shown in Figure 7.9 and Figure 7.10, respectively. The distance to the sensor-fused target was closer to the GPS ground truth values after calibration.

Figure 7.9: Calibration Effect on Sensor Fusion for Approach Test Scenario


Figure 7.10: Calibration Effect on Sensor Fusion for Stationary Test Scenario

To analyze the effect of calibration on the performance of the sensor fusion algorithm, a similar error analysis was performed using the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) as the metrics.

Table 7.3 shows the error analysis to observe the effect of longitudinal and lateral calibration on the fusion and tracking algorithm for four different test scenarios.

Table 7.3: Error Analysis - Effect of Calibration on Sensor Fusion

Test Scenario | Long. MAE [m] (Before / After Cal) | Long. RMSE [m] (Before / After Cal) | Lat. MAE [m] (Before / After Cal) | Lat. RMSE [m] (Before / After Cal)
Approach Test – 10 m | 0.66 / 0.47 | 1.62 / 1.49 | 0.53 / 0.13 | 0.59 / 0.20
Approach Test – 50 m | 0.84 / 0.53 | 2.20 / 1.56 | 0.27 / 0.14 | 0.37 / 0.23
Lane Cut Out | 0.76 / 0.42 | 1.87 / 1.18 | 0.40 / 0.15 | 1.73 / 1.27
Following Lead Car | 1.26 / 0.64 | 2.63 / 1.58 | 0.88 / 0.27 | 0.95 / 0.20
Average Error | 0.88 / 0.51 | 2.08 / 1.45 | 0.52 / 0.17 | 0.91 / 0.47

The average MAE and the average RMSE were reduced by 0.37 m and 0.63 m, respectively, in the longitudinal direction, while decreases of 0.35 m and 0.44 m were observed in the lateral MAE and RMSE values. The MAE and RMSE values were higher for the following lead car scenario, since it represented a more complex real-world scenario with the lead car being tracked around a circular track.

The error analysis of the fusion and tracking algorithm was completed by using boxplots to compare the error between the sensor-fused target distance and the GPS values before and after calibration. These boxplots, shown in Figure 7.11 and Figure 7.12, provide a clear visual representation of the longitudinal and lateral errors from the four test scenarios.


Figure 7.11: Boxplots for Longitudinal Error Between Sensor Fused Target and GPS


Figure 7.12: Boxplots for Lateral Error Between Sensor Fused Target and GPS

From these boxplots, it can be inferred that there was a decrease in both the longitudinal and lateral error of the sensor-fused track as a result of calibration. After calibration, there was a reduction in both the median error and the inter-quartile range. Therefore, the calibration improved the accuracy of the fusion and tracking algorithm, as demonstrated across multiple scenarios in a MIL testing environment.

7.3.3 Validation in HIL, CIL and VIL

Although there was a decrease in the error of the sensor-fused tracks obtained from the fusion and tracking algorithm due to calibration in MIL, it is necessary to test and implement the calibration algorithm in higher-fidelity environments. Validation of this algorithm through HIL, CIL, and VIL testing could not be completed due to project deadlines, and this work will be continued in future years of the competition. The team will continue to improve the accuracy of the fusion and tracking algorithm through more robust testing of the sensor calibration algorithm, which will enhance the performance of the CAV perception system and increase the reliability and accuracy of CAV features such as ACC.

7.4 Summary

Sensor calibration was performed for both forward-facing sensors, which improved the performance of the individual sensors and of the sensor fusion algorithm. This improvement in the fusion and tracking algorithm was crucial for the vehicle deployment of the CAV perception system. The accuracy of the fusion algorithm plays a major role in the overall performance of the CAV perception system, and this increased accuracy will further aid the deployment of the ACC algorithm on the vehicle.

Conclusion and Future Work

8.1 Conclusion

This work outlined the development, testing, and validation of a SAE Level 2 perception system in different XIL simulation environments. An overview of the development process was given. The I/O layer, fault diagnostics, and sensor calibration were used as example algorithms to demonstrate the implementation of the development process. Each of these algorithms was required for the deployment of Advanced Driver Assistance Systems (ADAS).

The literature review summarized the current advancements in the research and development of Autonomous Vehicles (AV) and ADAS. Furthermore, challenges associated with the design, testing, and deployment of common ADAS features were explored.

The experimental design and process was discussed, outlining the requirements for the perception system and the selection process for the CAV system hardware. The development of the CAV perception system in the XIL modeling environments – Model-in-the-loop (MIL), Hardware-in-the-loop (HIL), Component-in-the-loop (CIL), and Vehicle-in-the-loop (VIL) – was discussed in detail, emphasizing the plant testing environments. The three subsequent chapters explored the design, testing, and validation of the I/O, fault diagnostics, and sensor calibration layers in the CAV perception controller in the form of case studies. These case studies provide an in-depth analysis of the CAV perception system development through XIL modeling environments. This was done to validate and verify the system through higher-fidelity simulations, which helped increase the robustness, safety, and accuracy of the system.

8.2 Challenges

Several challenges were faced during the development and validation of the perception controller. The main challenge was transmitting sensor data in real time to the CAV perception controller. This led to switching the software stack from MATLAB to Python in the late stages of the development phase, a transition that was challenging due to the team's lack of experience with Python. Another challenge was constantly adapting to changing competition guidelines and rules for the CAV system, which required the team to modify various layers of the perception controller to meet the competition requirements. These challenges were documented and reflected upon to avoid repeat issues and to find potential solutions for future years of the EcoCAR Mobility Challenge.

8.3 Future Work

This work will serve as a foundation for the implementation of SAE Level 2 features such as Adaptive Cruise Control (ACC) and Lane Centering Control (LCC) in Year 3 of the EcoCAR Mobility Challenge. The team will further refine, calibrate, and improve the CAV perception system algorithms such as fault diagnostics, sensor calibration, and sensor fusion through continuous XIL testing. The development process will be expanded to aid in the development of the CAV-PCM interface, the ACC algorithm, and Vehicle-to-Anything (V2X) algorithms. As the competition continues, the team's focus will shift to employing V2X technology using the Cohda MK5 radios through CIL and VIL testing, and to utilizing this V2X technology to develop an energy management strategy aimed at increasing fuel economy.

Bibliography

[1] A. Sathyanarayana, P. Boyraz, and J. H. L. Hansen, “Information fusion for robust ‘context and driver aware’ active vehicle safety systems,” Inf. Fusion, vol. 12, no. 4, pp. 293–303, 2011, doi: https://doi.org/10.1016/j.inffus.2010.06.004.

[2] National Highway Traffic Safety Administration, U.S. Department of Transportation, “Traffic Safety Facts Crash Stats: Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey,” 2015.

[3] “Advanced Driver Assistance Systems Prediction & Planning | Ansys.” https://www.ansys.com/about-ansys/advantage-magazine/volume-xiv-issue-1- 2020/engineer-perception-adas (accessed Jun. 03, 2020).

[4] “» EcoCar Mobility ChallengeAVTC I Advanced Vehicle Technology Competitions.” https://avtcseries.org/ecocar-mobility-challenge/ (accessed Jun. 03, 2020).

[5] V. Tech, K. Wahl, and A. Org, “WHY IT’S IMPORTANT SPONSORS VISIONARY SUSTAINING CONTRIBUTOR LEADERSHIP.”

[6] “About | Ohio State EcoCAR.” https://ecocar.osu.edu/team (accessed Jun. 03, 2020).

[7] “Active Safety for the Mass Market.” https://www.sia.fr/publications/56-active-safety-for- the-mass-market (accessed Jun. 04, 2020).

[8] “Driverless Cars Could Reduce Traffic Fatalities by Up to 90%, Says Report.” https://www.sciencealert.com/driverless-cars-could-reduce-traffic-fatalities-by-up-to-90- says-report (accessed Jun. 15, 2020).

[9] “Autonomous Vehicles | GHSA.” https://www.ghsa.org/issues/autonomous-vehicles (accessed Jun. 04, 2020).

[10] K. Bengler, K. Dietmayer, B. Färber, M. Maurer, C. Stiller, and H. Winner, “Three Decades of Driver Assistance Systems Review and Future Perspectives,” 2014.

[11] “Indepth article on advanced driver assistance systems (ADAS).” https://www.oxts.com/what-is-adas/ (accessed Jun. 04, 2020).

[12] “This Car Runs on Code - IEEE Spectrum.” https://spectrum.ieee.org/transportation/systems/this-car-runs-on-code (accessed Jun. 04, 2020).

[13] R. C. Lanctot, “ADAS and Autonomous Update.”

[14] A. Shaout, D. Colella, and S. Awad, “Advanced driver assistance systems - Past, present and future,” in ICENCO’2011 - 2011 7th International Computer Engineering Conference: Today Information Society What’s Next?, 2011, pp. 72–82, doi: 10.1109/ICENCO.2011.6153935.

[15] “Car companies have gone overboard naming their high-tech features - The Verge.” https://www.theverge.com/2019/1/29/18200592/driver-assistance-adas-names-aaa-report (accessed Jun. 13, 2020).

[16] “Study Shows Automated Safety Features Prevent Crashes | UMTRI - University of Michigan Transportation Research Institute.” http://www.umtri.umich.edu/what-were- doing/in-the-news/study-shows-automated-safety-features-prevent-crashes (accessed Jun. 04, 2020).

[17] “Top 7 ADAS Technologies that Improve Vehicle Safety.” https://www.einfochips.com/blog/top-7-adas-technologies-that-improve-vehicle-safety/ (accessed Jun. 04, 2020).

[18] “Driver Assistance Technologies | NHTSA.” https://www.nhtsa.gov/equipment/driver- assistance-technologies#technologies-explained-automatic-emergency-braking (accessed Jun. 04, 2020).

[19] “Self-Driving Cars and Advanced Driver Assistance Systems (ADAS) | Teen Driver Source.” https://www.teendriversource.org/learning-to-drive/self-driving-cars-adas- technologies (accessed Jun. 04, 2020).

[20] “Forward-Collision Warning Systems Would Make New Cars Safer - Consumer Reports.” https://www.consumerreports.org/cro/news/2015/06/forward-collision-warning-systems- would-make-new-cars-safer/index.htm (accessed Jun. 04, 2020).

[21] “McKinsey Report Exposes ADAS Future Opportunities and Challenges.” https://www.rsipvision.com/adas-future-opportunities/ (accessed Jun. 04, 2020).

[22] “LDW: what is Lane Departure Warning and how does it work? | RoadSafetyFacts.eu.” https://roadsafetyfacts.eu/lane-departure-warning-ldw-what-is-it-and-how-does-it-work/ (accessed Jun. 04, 2020).

[23] “Complex Trends and Challenges in Designing ADAS Systems - Edge AI and Vision Alliance.” https://www.edge-ai-vision.com/2014/09/complex-trends-and-challenges-in- designing-adas-systems/ (accessed Jun. 04, 2020).

[24] A. Ziebinski, R. Cupek, H. Erdogan, and S. Waechter, “A Survey of ADAS Technologies for the Future Perspective,” Int. Conf. Comput. Collect. Intell., vol. 2, pp. 135–146, 2016, doi: 10.1007/978-3-319-45246-3.

[25] C. Dannheim, C. Icking, M. Mader, and P. Sallis, “Weather detection in vehicles by means of camera and LIDAR systems,” in Proceedings - 6th International Conference on Computational Intelligence, Communication Systems and Networks, CICSyN 2014, Mar. 2014, pp. 186–191, doi: 10.1109/CICSyN.2014.47.

[26] G. P. Kulemin, “Influence of propagation effects on a millimeter-wave radar operation,” in Radar Sensor Technology IV, Jul. 1999, vol. 3704, pp. 170–178, doi: 10.1117/12.354594.

[27] “ADAS Market Size, Share, Growth, Forecast & Trends by 2027 | COVID-19 Impact Analysis | MarketsandMarkets.” https://www.marketsandmarkets.com/Market- Reports/driver-assistance-systems-market-1201.html (accessed Jun. 04, 2020).

[28] “How Advanced Driver-Assistance Systems (ADAS) Impact Automotive Semiconductors.” https://www.wevolver.com/article/how.advanced.driverassistance.systems.adas.impact.automotive.semiconductors (accessed Jun. 04, 2020).

[29] N. Kalra and S. M. Paddock, “Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?”

[30] “ADAS Complexity Will Create New Maintenance Costs and Vulnerabilities for Fleets - Market Trends - Automotive Fleet.” https://www.automotive-fleet.com/318592/adas- complexity-will-create-new-maintenance-costs-and-vulnerabilities-for-fleets (accessed Jun. 04, 2020).

[31] R. Bergau-white paper, “Solving the storage conundrum in ADAS development and validation.”

[32] A. Kim, T. Otani, and V. Leung, “Model-Based Design for the Development and System- Level Testing of ADAS,” Springer, Cham, 2016, pp. 39–48.

[33] M. Anthony and M. Behr, “Model-Based Design for Large High Integrity Systems: A Discussion on Data Modeling and Management,” 2010.

[34] L. Raffaëlli et al., “Facing ADAS validation complexity with usage oriented testing,” Feb. 2016, Accessed: Jun. 04, 2020. [Online]. Available: http://arxiv.org/abs/1607.07849.

[35] J. Zhou, R. Schmied, A. Sandalek, H. Kokal, and L. del Re, “A Framework for Virtual Testing of ADAS,” SAE Int. J. Passeng. Cars - Electron. Electr. Syst., vol. 9, no. 1, pp. 66– 73, May 2016, doi: 10.4271/2016-01-0049.

[36] G. Tibba, C. Malz, C. Stoermer, N. Nagarajan, L. Zhang, and S. Chakraborty, “Testing automotive embedded systems under X-in-the-loop setups,” in IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD, Nov. 2016, vol. 07-10-November-2016, pp. 1–8, doi: 10.1145/2966986.2980076.

[37] “A handle on the future - Virtualized testing and XiL for automated driving.”

[38] P. Kaur and R. Sobti, “Current challenges in modelling advanced driver assistance systems: Future trends and advancements,” in 2017 2nd IEEE International Conference on Intelligent Transportation Engineering, ICITE 2017, Oct. 2017, pp. 236–240, doi: 10.1109/ICITE.2017.8056916.

[39] A. N. Ramakrishnan, S. Goel, T. Kirby, and W. Perez, “Design and Evaluation of Sensor Suite and Algorithms for Semi-Autonomous Vehicle Operation with a Focus on Fuel Economy Improvement,” 2020.

[40] “Tesla & Google Disagree About LIDAR -- Which Is Right? | CleanTechnica.” https://cleantechnica.com/2016/07/29/tesla-google-disagree-lidar-right/ (accessed Jun. 04, 2020).

[41] E. M. Stoddart, S. Midlam-Mohler, and G. Rizzoni, “Computer Vision Techniques for Automotive Perception Systems,” 2019.

[42] C. Guettier, B. Bradai, F. Hochart, P. Resende, J. Yelloz, and A. Garnault, “Standardization of Generic Architecture for Autonomous Driving: A Reality Check,” Springer, Cham, 2016, pp. 57–68.

[43] “ROS, the Robot Operating System, Is Growing Faster Than Ever, Celebrates 8 Years - IEEE Spectrum.” https://spectrum.ieee.org/automaton/robotics/robotics-software/ros- robot-operating-system-celebrates-8-years (accessed Jun. 04, 2020).

[44] “msg - ROS Wiki.” http://wiki.ros.org/msg (accessed Jun. 04, 2020).

[45] “Issues Persist with Uncalibrated ADAS Systems | 2019-04-29 | FenderBender.” https://www.fenderbender.com/articles/12567-issues-persist-with-uncalibrated-adas- systems (accessed Jun. 08, 2020).

[46] “Mobileye 6-Technical Installation Guide,” 2007. Accessed: Jun. 08, 2020. [Online]. Available: www.mobileye.com.

[47] “601-Lesson 6-System Calibration and Configuration procedure Lesson 6 System Calibration and Configuration procedure.”

[48] T. Chai and R. R. Draxler, “Root mean square error (RMSE) or mean absolute error (MAE)? – Arguments against avoiding RMSE in the literature,” Geosci. Model Dev., vol. 7, pp. 1247–1250, 2014, doi: 10.5194/gmd-7-1247-2014.
