
Statistical Issues in Autonomous Vehicles
David Banks, Duke University and SAMSI

1. Introduction

The intersection of statistics with transportation science is extensive. Causal inference, experimental design, risk analysis, deep learning, spatial processes, and many other tools all apply.

Currently, autonomous vehicles are especially topical. They have huge potential benefits, but pose challenges and risks. This talk describes statistical methods for assessing and balancing those features.

2. Benefits

Some facts:
• Americans drove 3.4 trillion miles last year. (US DOT)
• The risk of dying in a vehicle injury is 1/77. Contrast this to firearms (also 1/77), falls (1/83), suicide (1/63), heart disease (1/4), and alcohol and drugs (1/34). (CDC)
• Motor vehicles account for 75% of carbon monoxide pollution, 1/3 of all air pollution, and 27% of greenhouse gases in the US. (EPA)
• Average commute time is 52.2 minutes/day. (US Census Bureau)

The potential gains from autonomous vehicles include:
• Safety: AI driving systems are not distracted or impatient.
• The environment: Better safety means lighter vehicles. Joint control means less braking.
• Congestion: Under joint control, one can have seven times as many vehicles on the road.
• Quality of life: Commuting would become work time or nap time.
• Independence: Seniors and children would have more mobility.

The 2050 problem refers to the fact that in 30 years, the world population will reach its maximum (9.8 to 11.2 billion). We are currently at 7.8 billion, and the carrying capacity of the planet is about 1 billion.

Global warming is harder to forecast, but climate scientists say that in 2050, parts of North Africa, the Middle East, India and South Asia will regularly experience summer temperatures between 120 and 125 degrees Fahrenheit.

Autonomous vehicles are one of the few technologies on the horizon that have the possibility to meaningfully reduce carbon emissions while maintaining relatively high standards of living.

3. Challenges

But there are many legitimate concerns about moving to autonomous vehicles. Some people argue that:
• People won’t want to give up control.
• The “mixed fleet” period will be hazardous.
• The regulatory and insurance implications have not been thought through yet.
• Cybersecurity: if the vehicle’s software can be hacked, then there is a single point of failure.
• It would cause economic disruption.

To complicate the analysis, there are six levels of vehicle automation:
0. No automation. The human has only standard assistance (mirrors, rear-view cameras).
1. Software assistance. Adaptive cruise control, lane-keep assist. Widely available after 2018.
2. Partial automation. The driver must be ready to take control, but the car controls speed and holds its lane. Tesla Autopilot.
3. Conditional automation. Hands off the wheel, but still ready to take control. Useful for limited-access highways and good driving conditions. Experimental.
4. High automation. The driver can sleep after inputting a destination. Waymo is testing such vehicles. Must stay on traditional roads.
5. Full automation. Years away.

Some History. Autonomous vehicles are staples in science fiction, but Red Whittaker at Carnegie Mellon went a long way towards making robot cars a reality. In 1995, he programmed a small truck, Navlab 5, that drove from Pittsburgh to San Diego; 98% of the journey was autonomous. He also built robots for Antarctic exploration, clean-up of Three Mile Island and Chernobyl, and mapping mines.

Sebastian Thrun worked with Whittaker at CMU. He led the development of Google’s self-driving car.

Waymo is owned by Alphabet, and was spun off from Google. It runs a commercial fleet of level 4 vehicles in Phoenix. Volvo, Tesla and Audi are also testing.

29 states have passed laws permitting autonomous vehicles.

4. The Statistical Questions

The first question is how to assess the safety of an autonomous vehicle. Some strategies are:
• Have something like the current DMV test, but more stringent.
• Use empirical data on accidents per 100,000 miles.
• Place the car in accident situations from the SHRP2 Naturalistic Driving Study run by the Transportation Research Board and see if it outperforms humans.
• Stress-test vehicles to see if they can evade deliberate collisions, bad drivers, or deer.

It is likely that the first few generations of autonomous vehicles will need conditional regulation. For example, I would not be surprised if testing found that an autonomous vehicle is always safer than a human driver at night and under good driving conditions, but that when there is rain or snow, the human should take over. This could lead to the vehicle asking Google Home about the weather conditions, and then possibly refusing to drive.
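The accidents-per-mile strategy reduces to comparing two Poisson rates. A minimal sketch, with made-up crash counts and mileages purely for illustration, using the standard conditional binomial test:

```python
import math

def binom_tail(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical counts for illustration only: 3 crashes in 500,000 autonomous
# miles versus 20 crashes in 1,000,000 human-driven miles.
av_crashes, av_miles = 3, 5e5
hu_crashes, hu_miles = 20, 1e6

# Conditional on the total number of crashes, the autonomous count is
# Binomial with success probability equal to the autonomous share of exposure.
n = av_crashes + hu_crashes
p = av_miles / (av_miles + hu_miles)
p_value = binom_tail(av_crashes, n, p)  # evidence that the AV crash rate is lower

print(f"one-sided p-value: {p_value:.3f}")
```

The conditioning trick sidesteps the unknown baseline rate: under the null hypothesis that both fleets crash at the same rate per mile, the split of crashes between fleets depends only on their relative mileage.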

The US DOT needs to put guidance in place to cover autonomous vehicles. Starting in 2017, NHTSA began to produce annual Visions of Safety.

In the US, there are 1.18 fatalities per 10^8 human-driven miles.

There have been four fatalities with level 2 autonomous vehicles (Tesla), one with a level 3 vehicle (Uber), and none with level 4 or 5 vehicles. The number of miles driven by some level of autonomous vehicle is about 10^9.

If autonomous vehicles drove as safely as humans, one would expect 12 deaths rather than 5.
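That comparison can be made precise with a simple Poisson calculation, using the figures quoted above (1.18 fatalities per 10^8 miles, roughly 10^9 autonomous miles, 5 observed deaths):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))

human_rate = 1.18e-8        # fatalities per mile (1.18 per 10^8 miles)
autonomous_miles = 1e9      # approximate total autonomously driven miles
observed_deaths = 5

# Expected deaths if autonomous vehicles matched the human fatality rate.
expected = human_rate * autonomous_miles          # 11.8, i.e. "about 12"

# Probability of seeing 5 or fewer deaths under that null hypothesis.
p_value = poisson_cdf(observed_deaths, expected)

print(f"expected: {expected:.1f}, P(X <= 5): {p_value:.3f}")
```

The tail probability is roughly 0.02, so the observed count of 5 would be unusual if autonomous vehicles were merely as safe as human drivers; though, as noted below, the exposure conditions are not comparable.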

Autonomous vehicles appear to be safer, but there are still concerns about weather and other driving conditions.

All the autonomous vehicle developers use deep learning to train their vehicles. For example, the Tesla Autopilot Hardware v2+ uses Drive PX 2 hardware, input from 8 cameras, and the Inception v1 architecture to train a convolutional neural network.

This requires a lot of training data, and even then one cannot guarantee good performance. There are famous cases in deep learning where changing a handful of pixels can seriously confuse the AI. This implies that training needs to be robust, and that is still an art, not a science. Szegedy et al. (2014) showed that image classification neural networks that perform as well as humans can be deceived by deliberately constructed fakes. These fakes are found by applying an optimization procedure to search for an input x̃ near a training data value x such that the model output is very different from that produced by the neural network at x.
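A minimal sketch of that search, using a toy linear classifier (not any production network) so the input gradient is analytic. The perturbation rule x̃ = x + ε·sign(∇ₓ loss) is the fast gradient sign method of Goodfellow et al.; the weights, input, and step size below are invented for illustration:

```python
import math

# Toy stand-in for a deep network: p(class=1) = sigmoid(w . x + b).
w = [0.5, -0.8, 0.3, 0.9]
b = 0.1
x = [1.0, 0.2, -0.5, 0.4]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# For cross-entropy loss with true label y = 1, the gradient of the loss
# with respect to the input is (p - 1) * w; its sign gives the direction
# that most increases the loss.
p = predict(x)
grad_x = [(p - 1.0) * wi for wi in w]

eps = 2.0  # exaggerated step so the flip is obvious in this toy model
x_tilde = [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad_x)]

print(f"original p(class=1):    {predict(x):.3f}")
print(f"adversarial p(class=1): {predict(x_tilde):.3f}")
```

In a real network the gradient comes from backpropagation and ε is kept small enough that the change is imperceptible to humans, which is what makes the panda/gibbon example below so striking.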

The following figure shows an adversarial example using GoogLeNet with a picture from ImageNet. GoogLeNet classified the left image as a panda with 57.7% confidence. The middle image was added to the left image to produce the right image. GoogLeNet classified the right image as a gibbon with 99.3% confidence.

5. Other Questions

Besides risk assessment and regulation, there are other issues with autonomous vehicles. One concerns insurance. When a vehicle is involved in an accident, who should pay? Presumably the manufacturer is the party responsible for such failures, if the autonomous vehicle is level 4 or 5. But most autonomous vehicles have shared control, and that is complicated. Perfecting automatic emergency braking in level 3 is key.

If the benefit to society is sufficiently great, one might adopt the Vaccine Injury Compensation Program model.

Another consideration is cybersecurity. It is essential that the software in autonomous vehicles gets regular updates and patches. But if a malicious actor can develop a method to upload malware, then there may be major public safety problems.

Cybersecurity is an arms race. Some strategies are:
• Have an uncounterfeitable return address on updates.
• Enforce the Three Laws of Robotics.
• Roll out updates slowly.
• Improve software verification and validation.

The mixed fleet scenario will be a problem. If some cars are autonomous and others are not, then many of the advantages from autonomous vehicles will not be realized.
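The uncounterfeitable return address amounts to authenticating updates before installing them. A minimal sketch using a shared-key MAC from the Python standard library (the key and firmware strings are hypothetical; a real fleet would use asymmetric signatures such as Ed25519 so that vehicles hold no signing secret):

```python
import hmac
import hashlib

# Hypothetical key provisioned at the factory and shared with the manufacturer.
SHARED_KEY = b"factory-provisioned-secret"

def sign_update(firmware: bytes) -> bytes:
    """Manufacturer side: tag the firmware image."""
    return hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: install only if the tag checks out."""
    expected = hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()
    # compare_digest resists timing attacks, unlike ==.
    return hmac.compare_digest(expected, tag)

update = b"v2.3.1 firmware image"
tag = sign_update(update)
print(verify_update(update, tag))              # genuine update: accepted
print(verify_update(b"malware payload", tag))  # forged payload: rejected
```

An attacker who cannot produce a valid tag cannot get malware accepted, which is precisely the single point of failure the bullet list is trying to close.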

Ideally, all cars in an area communicate constantly to ensure balletic coordination. In the mixed fleet, that doesn’t happen.

At a minimum, we should implement a law requiring RFID tags on all vehicles, autonomous or not, and perhaps in all cell phones. That will enable autonomous vehicles to “see” regular vehicles and most pedestrians with even greater reliability.

The economic dislocation caused by self-driving trucks and vehicles will be an issue. Uber is experimenting with autonomous vehicle service, and it may be that one of the many consequences is that people stop owning cars.

AI is predicted to substantially disrupt the economy, but also to create new wealth. The 2018 report by the McKinsey Global Institute forecast that radiologists, cashiers, garbage collectors, recycling workers, and many others will be replaced by AI systems. At the same time, it forecast that AI will boost the U.S. economy by 16%.

6. Conclusions

• Autonomous vehicle technology is transformative, and has the capability to ease the 2050 problem in several critical ways.
• NHTSA and the states need to step up quickly in crafting regulation, assessing safety, and developing relevant law and insurance. That will involve statistics.
• There are nightmare scenarios, and the precautionary principle needs to be considered.
• Autonomous vehicles will change society in many foreseeable ways, and no doubt some that are less obvious.