
EBOOK

AUTONOMOUS VEHICLE ENGINEERING GUIDE
Top Breakthroughs & Resources in Autonomous and Connected Vehicles

Contents

SENSORS
3 For Lidar, MEMS the Word
5 New Performance Metrics for Lidar
7 Detecting Pedestrians

SOFTWARE AND AI/MACHINE LEARNING
9 Software Building Blocks for AV Systems
12 New Mobility’s Mega Mappers
15 Can Autonomous Vehicles Make the Right ‘Decision?’

CONNECTED VEHICLE
17 Expanding the Role of FPGAs
19 Electronic Get Smart

ABOUT
Autonomous Vehicle Engineering covers the constantly evolving field of autonomous and connected vehicles from end to end — covering key technologies and applications including sensor fusion, artificial intelligence, smart cities, and much more. This Autonomous Vehicle Engineering Ebook is a compilation of some of the magazine's top feature articles from thought leaders, and is your guide to designing the next generation of vehicles.

ON THE COVER
In the article Software Building Blocks for AV Systems, Sebastian Klaas explores how Elektrobit's unique software framework is designed to smooth development of automated driving functions. Automated valet parking is an example of a practical application of the software framework. Read more on page 9. (Image: Elektrobit)

AUTONOMOUS VEHICLE ENGINEERING SAE EBOOK

SENSORS

For Lidar, MEMS the Word
by Charles Chung, Ph.D.

Tiny gimballed mirrors on chips are being developed that could improve the form factor and cost of automotive lidar.

As automakers and their partners develop lidar sensors to enable SAE Level 4-5 autonomous driving, some systems designers believe MEMS micromirrors have the potential to reduce overall size and cost—two major hurdles to widespread lidar adoption.

A basic lidar includes a light source, a scanning mechanism, and a light receiver. While the light source and the light detectors use semiconductor components, scanning of the light still relies on traditionally manufactured scanning or rotating mirrors, which are often the bulkiest and costliest lidar components.
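A pulsed lidar of the kind described above measures distance by time of flight: range is half the pulse's round-trip time multiplied by the speed of light. A minimal sketch of that arithmetic (illustrative, not tied to any particular product):

```python
# Time-of-flight ranging: a pulsed lidar measures the round-trip time of a
# laser pulse; the target distance is half the round trip at light speed.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Distance in meters to a target given the measured round-trip time."""
    return C * round_trip_s / 2.0

# A pulse that returns after 1 microsecond left a target about 150 m away.
print(round(tof_range_m(1e-6), 1))  # 149.9
```

The same relation sets the pulse-timing budget: resolving targets a few centimeters apart requires timing the return to within a few hundred picoseconds.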

MEMS—an acronym for micro-electro-mechanical systems—is a type of chip-scale device incorporating non-electronic components. They are used in a plethora of product applications. Automobiles are rich with MEMS devices—new vehicles typically include over 30 MEMS chips. MEMS devices are included in accelerometers for airbag deployment, gas sensors for air-quality monitoring, pressure sensors for tire-pressure monitoring, yaw-rate sensors for vehicle stability control, and many more.

(A typical MEMS micromirror system is composed of the silicon micromirror chip, ASIC (Application Specific Integrated Circuit) electronics to actuate and control the mirror, the package that protects the micromirror and ASIC, and software. Image: Charles Chung)

In the MEMS lidar application, a tiny mirror directs a fixed laser beam in multiple directions. The micromirror, moving rapidly due to its low inertia, can execute a two-dimensional scan in a fraction of a second. It can replace the traditional lidar's bulky scanning component with a chip that measures about 5-mm square and costs on the order of dollars to tens of dollars, developers claim.

In search of a new chip
The ideal MEMS micromirror for lidar is still in development. Existing micromirrors were designed for other applications, such as projection displays or optical switching. They lack lidar's three simultaneous needs: a large (2-4-mm) micromirror, with a wide (30-60-degree) angular range of motion, that can pass rigorous automotive testing and validation.

Typical development times and costs for a new custom MEMS device are 18-24 months and $1 million to $3 million to a final prototype. To reach full production, it typically takes three to five years and $10 million to $20 million, experts explain.

Validating MEMS micromirrors for automotive use includes, as SAE readers know, meeting many durability and reliability requirements dictated by established standards, such as AEC-Q100. These include vibration, temperature, humidity, electrical shock, mechanical shock, and chemical resistance. Moreover, the volume of chips is approximately 100 million per year, with the quality level at parts-per-million defect rates (or lower).

Micromirrors have been in commercial production for more than 20 years. Most development has been in optical switching and displays. Among displays, there are two types, digital and analog. Digital mirrors switch between only two positions. Analog mirrors have a continuous set of positions.

Analog micromirrors for displays bear the closest resemblance to lidar micromirrors. Like lidar micromirrors, these MEMS micromirrors have a single mirror that moves in analog fashion in two directions. But the mirror is smaller (<1 mm) than those typically used in lidar. And perhaps most importantly, display micromirrors were typically engineered for consumer applications, whose qualification requirements are less challenging than automotive's.
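The two-axis analog scanning described above is often implemented by driving both mirror axes sinusoidally at different frequencies, tracing a Lissajous pattern that gradually covers the field of view. The sketch below illustrates the idea; the drive frequencies and the ±15-degree mechanical amplitude are illustrative values, not figures from the article:

```python
import math

def mirror_angles(t, fx=1000.0, fy=1407.0, amp_deg=15.0):
    """Instantaneous (x, y) mechanical tilt, in degrees, of a two-axis
    micromirror driven sinusoidally on both axes. Incommensurate drive
    frequencies trace a Lissajous pattern that gradually covers the
    field of view."""
    return (amp_deg * math.sin(2.0 * math.pi * fx * t),
            amp_deg * math.sin(2.0 * math.pi * fy * t))

# Sample the first millisecond of the scan at 1-microsecond steps.
pts = [mirror_angles(i * 1e-6) for i in range(1000)]
```

Because a reflected beam turns through twice the mirror angle, a ±15-degree mechanical tilt corresponds to roughly a ±30-degree optical scan, in the neighborhood of the 30-60-degree range cited above.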

Design challenges
There are a multitude of implications of the increased mirror size and automotive qualification requirements. Automotive temperature ranges are wide (-40°C to +150°C), and mismatches in the coefficient of thermal expansion (CTE) are a common source of device failure. This implies that a mirror which is composed of a single material, such as silicon, will have the lowest failure rates due to CTE mismatch.

(A hexagonal MEMS micromirror measuring approximately 0.5 mm per side. Image: Lawrence Livermore National Laboratory, LLNL-JRNL-702806)

Mechanical shock and vibration requirements demand that the gimbal be able to withstand high accelerations and reject motion in the unwanted translational and rotational directions, while being pliable enough to rotate about the desired directions.

Automotive humidity requirements demand that the device tolerate 0% to 100% humidity. One implication is that the device must tolerate water droplets. A common cause of failure in MEMS devices is "stiction," which occurs when the moving parts of a MEMS device stick to the stationary parts. When this happens, the moving parts can no longer move, and the MEMS device no longer functions.

The surface tension of a water droplet can bridge small gaps in the MEMS chips. As the water droplet evaporates, it draws together the two sides of the gap, sticking them together, and the micromirror is unable to scan. To prevent this, one method is to avoid small gaps in the design—gaps which electrostatic actuation typically requires. Another method is to reduce the droplet's surface tension with a chemical treatment often used in MEMS microphones. Hermetic packaging is a third solution for keeping moisture out. These methods may be combined to increase overall reliability.

Electronics and software communicate with the system and monitor and control the mirror. The electronics are typically embodied in the ASIC (Application Specific Integrated Circuit) and are packaged together with the MEMS chip. The ASIC is mixed-signal: it has a digital portion for programmable command and communication with the system, and an analog section for direct control and readout of the mirror. The ASIC needs to be engineered for automotive qualification to tolerate adverse conditions such as water intrusion, electrical shock, and electromagnetic engine noise.

(Typical micromirror chip with mirror on two gimbals (torsional springs) that enable rotation about the X and Y axes. An actuator is typically integrated underneath the mirror to avoid interference with the light. Image: Charles Chung)

MEMS lidar packaging and test
After silicon chips are fabricated, they are placed into "packaging" that protects the chip from the environment while allowing electrical and optical signals to pass to and from the MEMS micromirror. The packaging is a critical part of all MEMS devices, particularly so for automotive applications, since it protects the MEMS device from water, dirt, oil, particles, and other contaminants that can cause the device to fail.

While the packaging protects the MEMS chip, it can also adversely affect the chip. Many MEMS device failures are associated with the packaging. For example, packages are typically made of plastic, metal, and/or ceramic, and over an automobile's wide temperature range there can be a strong CTE mismatch between the package and the silicon chip. This can cause inaccuracies, errors, and outright failures.

Ceramic packages have the closest CTE match to silicon but are the most expensive. Plastic packages are the most cost-effective but have the largest CTE mismatch and cannot hermetically protect the MEMS device. The cost sensitivity of consumer applications requires that they use plastic packaging, and many of the techniques used in those applications can be adapted to manage the CTE mismatch for automotive applications.

After the chip is packaged, the device is calibrated and tested to ensure that it meets requirements. With approximately 100 million automobiles produced every year, testing every device is time-consuming and costly. As a result, efficient testing requires a strategy that tests devices at multiple points in the manufacturing process, as well as custom test capabilities that can evaluate every device efficiently. ■

Charles Chung, Ph.D., has over 25 years of experience with MEMS devices and microsystems. He is a member of the University of Pennsylvania's Singh Nanotechnology Center's Advisory Board, and a recipient of the Gates Global Grand Challenges Grant. He has developed multiple MEMS devices, including wireless sensor nodes, DNA sequencing chips, microphones, accelerometers, and pressure sensors.

SENSORS

New Performance Metrics for Lidar
by Indu Vijayan

Frame-rate measurement is so yesterday. Object-revisit rate and instantaneous resolution are more relevant metrics, and indicative of what a lidar system can and should do, argues a revolutionary in the artificial-perception space.

How do you measure the effectiveness of an intelligent, lidar-based perception system for autonomous vehicles? Conventional evaluation metrics favor frame rate and resolution as the ideal criteria. However, experts at Pleasanton, Calif.-based artificial perception company AEye believe that these criteria are inadequate for measuring the unique capabilities of more advanced lidar systems, nor do they explicitly address real-world problems facing autonomous driving, such as hazard detection and tracking.

"Makers of automotive lidar systems are frequently asked about their frame rate, and whether or not their technology has the ability to detect objects with 10% reflectivity at a specified range, frame rate and resolution," said the company's co-founder and senior VP of engineering, Dr. Barry Behnken. "These metrics were established back in 2004 during the first DARPA Grand Challenge and haven't been altered since. Unfortunately, they fail to adequately rate a perception system's ability to perform safely on public roads, as the AVs back then were not being built for widespread use."

As perception technology improves and more real-world challenges present themselves (such as the false-positive detection of other vehicles and signals that plague conventional systems), AEye experts believe new metrics must be established to assess the performance of lidar-based perception systems. The company is recommending revised metrics that it believes are not only more advantageous for AV development but will oblige more robust safety standards across the industry. But what are the new metrics?

(Fusing agile lidar and a low-light HD camera creates a new data type called Dynamic Vixels. By overlaying 2D real-world color data (R,G,B) on 3D data (X,Y,Z) at the sensor, Dynamic Vixels produce a true color point cloud that enables faster, more accurate perception processing at the sensor. These images were taken on Las Vegas Boulevard during CES 2019. Image: AEye)

From frame rate to object-revisit rate
As opposed to the conventional metric of "frame rate of xx Hz," AEye proposes a new measurement of object revisit rate: the time between two shots at the same point or set of points.

"Defining single-point detection range alone is insufficient because a single interrogation point, or shot, rarely delivers enough confidence to ascertain a hazard—it is only suggestive," explained AEye chief scientist and former DARPA chief scientist, Dr. Allan Steinhardt. "Therefore, we need multiple interrogations at the same point to validate or comprehend an object or scene."

But the time it takes to validate an object is dependent on many variables, including distance, interrogation pattern and resolution, reflectivity, and the shape of the objects being interrogated. AEye experts believe what is missing from the conventional metric is a more fine-tuned definition of time.

"The time between the first detection of an object and the second is critical, as shorter object revisit rates can keep processing times low for advanced algorithms that need to correlate between multiple moving objects in a scene," noted Dr. Steinhardt. "Too long of an object revisit rate at fast velocities could be the difference between detecting an object in a timely manner and the loss of life."

Having an accelerated revisit rate increases the likelihood of hitting the same target with a subsequent shot, due to the decreased likelihood that the target has moved significantly in the time between shots. This helps solve the "correspondence problem" (determining which parts of one "snapshot" of a dynamic scene correspond to which parts of another snapshot of the same scene), while simultaneously enabling the quick build-up of statistical measures of confidence, generating aggregate information that downstream processors might require (such as object velocity and acceleration).

"While the correspondence problem will always be a challenge for autonomous systems, the ability to increase revisit rate on points of interest can significantly aid higher level inferencing algorithms, allowing path-planning systems to more quickly determine correct movements," said Dr. Steinhardt.

When you're driving, the world can change dramatically in a tenth of a second. In fact, two vehicles closing at a mutual speed of 124 mph (200 kph) are 18 feet closer after 0.1 seconds. That is why AEye was determined to expedite the process. The achievable object revisit rate of AEye's iDAR system for points of interest currently is microseconds to a few milliseconds; this compares to many tens or hundreds of milliseconds between visits with conventional lidar systems.

Using revisit rate as a standard metric for automotive lidar performance would compel the industry to cut down on latency, ultimately making all perception systems much safer, AEye experts believe.

(Six iDAR-enabled rooftop sensors on a Jaguar I-Pace are used by AEye engineers to gather critical data and to test the performance of their sensors in real-world driving conditions. Image: AEye)

On to instantaneous resolution
In contrast to the conventional metric of fixed (angular) resolution over a fixed Field of View, AEye's proposed second metric, instantaneous (angular) resolution, provides insight into the driving principle behind iDAR: more information, less data.

"The assumption behind the use of resolution as a conventional metric is that it is assumed the Field of View will be scanned with a constant pattern. This makes perfect sense for less intelligent, more 'traditional' sensors that have limited or no ability to adapt their collection capabilities," Dr. Behnken stated. "Also, the conventional metric assumes that salient information within the scene is uniform in space and time, which we know is not true."

Because of these assumptions, conventional lidar systems indiscriminately collect gigabytes of data from a vehicle's surroundings, sending those inputs to the CPU for decimation and interpretation, where an estimated 70-90% of this data is found to be useless or redundant, and thrown out. "It's an incredibly inefficient process," observed Dr. Behnken.

AEye's iDAR technology was developed to break these assumptions and inefficiencies. The AEye team believes that intelligent and agile scanning provides greater safety through faster response times and higher quality information. Additionally, agile lidar, which enables faster object revisit rates, enables dynamic foveation. Foveation is where the target of a gaze is allotted a higher concentration of retinal cones, allowing objects to be seen more vividly.

Humans do this naturally. We don't "take in" everything around us equally. Rather, our visual cortex filters out irrelevant information (such as an airplane flying overhead) while simultaneously focusing the eyes on a particular point of interest, such as a hazard in the road. This allows other, less important objects to be pushed to the periphery. Enabling dynamic foveation in artificial perception can change the instantaneous resolution throughout the Field of View, allowing for the targeted collection of the most relevant data.

iDAR's actionable data
AEye argues that object revisit rate is a more meaningful metric than frame rate alone, as the time between object detections cannot be ignored at the cost of vehicle reaction time to hazards. This is tantamount to ignoring safety. Because multiple detects at the same point are required to fully comprehend an object or scene, measuring object revisit rate is a more useful and critical metric for automotive lidar than static frame rate. Additionally, quantifying fixed (angular) resolution is not enough. It is more important to measure instantaneous resolution, because intelligent and agile resolution in scanning is more efficient and provides greater safety through faster response times.

However, the sum of these two metrics (and the way in which AEye has combined them) is greater than the two parts alone. Object revisit rate and instantaneous resolution are not only more relevant and indicative of what a lidar perception system can and should do, but they are also synergistic in combination, allowing the industry to define new constructs, such as Special Regions of Interest.

For example, when points or objects of interest have been identified, AEye can "foveate" its system in space and/or time to gather more useful information about them. Let's say the system encounters a jaywalking pedestrian directly in the path of the vehicle. Because the pedestrian's path is lateral, current and coherent lidars will have trouble recognizing the threat (e.g., lateral velocity vs. radial velocity vector). However, because iDAR enables a dynamic change in temporal sampling density and spatial sampling density within a Special Region of Interest, it can focus more of its attention on this jaywalker—and less on irrelevant information, such as parked vehicles along the side of the road. Ultimately, this allows iDAR to more quickly, efficiently, and accurately identify critical information about the jaywalker.

"iDAR swiftly provides the most useful, actionable data to the domain controller to help determine the best timely course of action," said Dr. Behnken. "This unique ability is critical to the development and universal adoption of autonomous vehicles."

As more AVs are tested on public roads, it's imperative that technologists help automakers maintain the highest standard of safety. Modernized metrics for assessing the performance of lidar-based perception systems are a valuable start. ■

Indu Vijayan is a specialist in systems, software, algorithms, and perception for self-driving cars. As the technical product manager at AEye, she leads software development for the company's artificial perception system for AVs. Prior to AEye, Indu spent five years at Delphi (later Aptiv) where, as a senior software engineer on the Autonomous Driving team, she played a major role in bridging ADAS sensors and algorithms, and extending them for mobility. She holds a Bachelor of Technology in Computer Science from India's Amrita University, and an MS in Computer Engineering from Stony Brook University.
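The closing-speed figure quoted in the article is simple arithmetic, and the same formula shows what a shorter revisit interval buys. A quick check (illustrative Python):

```python
# How far do two objects close on each other during one revisit interval?
KPH_TO_MPS = 1000.0 / 3600.0  # kilometers/hour to meters/second
M_TO_FT = 3.28084             # meters to feet

def closing_distance_ft(speed_kph: float, interval_s: float) -> float:
    """Distance in feet that a closing speed covers in one revisit interval."""
    return speed_kph * KPH_TO_MPS * interval_s * M_TO_FT

# Two vehicles closing at 200 kph, revisited every 0.1 s (a 10-Hz frame rate):
print(round(closing_distance_ft(200, 0.1), 1))   # 18.2 ft, as in the text
# The same pair revisited after 2 ms (an agile-lidar revisit interval):
print(round(closing_distance_ft(200, 0.002), 2))  # 0.36 ft
```

At a 2-ms revisit interval the scene has barely moved between shots, which is what makes correlating successive detections of the same object tractable.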

SENSORS

Detecting Pedestrians by Terry Costlow

Safety of vulnerable road users is driving new technologies as pedestrian deaths rise worldwide.

Cameras with greater acuity and processing power are vital for fast and accurate pedestrian identification. (Image: ZF)

Automakers and legislators are focusing on pedestrians, aiming to reduce injuries that are rising as more people worldwide migrate to cities. Camera systems have surpassed passive technologies to become the key technology for protecting people on foot who venture into harm's way with cars and trucks on the roadway.

The death rates are already soaring for pedestrians, cyclists and motorcyclists—an aggregate group called Vulnerable Road Users (VRU). The World Health Organization said that nearly half the 1.25 million people killed in traffic accidents in 2016 (the most recent data available) are VRUs. In the U.S., VRU deaths reached the highest levels in more than a decade. NHTSA said that in 2016, pedestrian deaths increased by 9% to 5,987. By comparison, motorcyclist fatalities increased by 5.1% to 5,286 and bicyclist deaths rose 1.3%, reaching 840.

The auto industry is racing to help reverse this trend, leveraging the rapid advances in advanced safety systems. Legislators are also ramping up their efforts.

"We're focused intensely on VRUs—pedestrians, cyclists and motorcyclists," said Kay Stepper, Vice President of Driver Assistance and Automated Driving for Chassis Systems Control at Bosch North America. "European regulations call out VRUs, and in the U.S. they're looking at back-over avoidance legislation."

Government interests are going beyond forcing automakers to react. Many cities are beginning to monitor pedestrian movement as part of their "smart cities" programs. Understanding patterns that cause accidents can help urban planners make changes to enhance safety.

"Downtown Las Vegas is an innovation district; we have lidar systems at intersections to detect whether anyone is in the intersection," said Joanna Wadsworth, Program Manager for the City of Las Vegas. "We'll work with Cisco on dashboarding to make that data available. That data's also helpful to us as planners; it's important to know how many people are crossing where."

(Small pedestrians such as children are more difficult for human drivers and advanced vehicle sensor systems to see and identify. Image: BMW)


Camera systems with higher resolution are evolving rapidly.

The need to look very closely
Automotive engineers and technologists are moving away from passive safety systems that lift the vehicle's hood on impact and deploy exterior airbags that cushion pedestrians. Those systems posed many design challenges but offered minimal benefits.

"Market interest has shifted to avoidance and mitigation instead," said Aaron Jefferson, Director, Product Planning, ZF Global Electronics. "It was difficult to package [exterior] airbags and there was the question of what happened to the pedestrian after the collision."

The change comes as active-safety systems such as automatic emergency braking are becoming commonplace, helping stop vehicles before accidents occur. Tweaking systems to spot VRUs in addition to cars leverages existing technology, reducing costs and space requirements. Camera systems are evolving rapidly as developers increasingly use them to spot people who may be in danger of being hit.

(ZF is developing a system called X2Safe that alerts both drivers and pedestrians, through their mobile phones, when accidents seem likely. Image: ZF)

It's harder to see and identify comparatively small VRUs than cars, so higher resolution is important. Humans move more freely than cars; newer systems are also looking to the side to see people who may drift in front of the vehicle. Those two approaches can be at odds with each other.

"It's always an engineering compromise, looking at range or field of view," Stepper said. "When it comes to VRUs, there's great advantage to increasing resolution. There's talk of a minimum camera resolution of one megapixel, and there's already a lot of effort to go to two megapixels, four megapixels and beyond."

Developers are also finding ways to spot people when cars are turning. Right-hand turns are often dangerous, particularly in busy cities. People may walk in front of the vehicle while the driver is watching traffic coming from the left.

"Today, typical forward-looking cameras have a field of view of around 50 degrees, focusing on vehicles and pedestrians who step out in front of the car," noted Andy Whydell, Vice President, Systems Planning and Strategy at ZF. "Next-generation systems increase that to around 100 degrees, which lets them see pedestrians when cars are turning around corners. As vehicles pull around for a right-hand turn, cameras detect pedestrians and slow the vehicle down."

VRUs are far more unpredictable than vehicles, in terms of their next move, because they have more degrees of freedom. Crowds also pose significant challenges—one person can move in a different direction, breaking away from the others to enter harm's way.

"In urban environments, it's not just one pedestrian or cyclist; they're often in groups," Stepper said. "The trick is to be able to resolve groups of humans down to an individual human being. Another problem is that it's not as obvious to predict the movement of pedestrians; they can move in 360 degrees. The trick here is to use predictions based on recent data sets. We are using artificial intelligence to support us on this research."

Looking forward
As safety and autonomous systems evolve, more technologies will be used to protect VRUs. Artificial intelligence (AI) can help systems discern a human from a pole or perform other complex recognition tasks. Making these decisions in all light conditions can be a problem for current-generation camera systems.

"The technology of image recognition is being improved with AI," said Takayuki Nagai, Director of Advanced Driver Assistant Systems for Denso International America. "Additionally, each sensor's performance is improving every day. For example, we now produce a vision sensor that can recognize pedestrians at night."

(Camera suppliers are developing products that can spot pedestrians in low light, such as this one from Denso. Image: Denso)

Recognizing VRUs quickly and accurately is no simple task. It is difficult to avoid false positives, for example. Avoiding unexpected stops will be important in crowded urban areas, since quick stops may result in rear-end collisions. That's prompting researchers to look at ways to use multiple sensors to ensure that potential safety threats are indeed real. Radar may augment cameras.

"We have proven that it's possible for implementations to use solely radar to detect pedestrians and still meet NCAP requirements," Stepper said.

Developers are also exploring ways to alert pedestrians when they're entering a dangerous area. Cell phones may become part of an alert scheme designed to protect people.

"We have an R&D program for a cloud-based warning system," ZF's Jefferson said. "It allows bidirectional messages to be sent to pedestrians who are about to step into the path of a vehicle. The system will send a message to both the pedestrian and the vehicle." ■
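The resolution-versus-field-of-view compromise described above can be made concrete: at a fixed pixel count, widening the lens reduces the pixels available per degree of scene. The sensor dimensions below are illustrative assumptions, not figures from the article:

```python
# Angular resolution of a camera: horizontal pixels spread across the
# horizontal field of view. Fewer pixels per degree means a distant
# pedestrian covers fewer pixels and is harder to classify.
def pixels_per_degree(h_pixels: int, fov_deg: float) -> float:
    """Horizontal pixels available per degree of field of view."""
    return h_pixels / fov_deg

# A roughly 1-megapixel sensor (1280 pixels wide) behind a 50-degree lens:
print(round(pixels_per_degree(1280, 50), 1))   # 25.6 px/deg
# Widening the lens to 100 degrees halves the angular resolution:
print(round(pixels_per_degree(1280, 100), 1))  # 12.8 px/deg
# A 2-megapixel sensor (1920 pixels wide) recovers some of the loss:
print(round(pixels_per_degree(1920, 100), 1))  # 19.2 px/deg
```

This is the "engineering compromise" in Stepper's quote: doubling the field of view for turning scenarios demands roughly double the pixel count to keep the same detection range.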

Automated valet parking is an example of a practical application of the software framework. (Image: Elektrobit)

Software Building Blocks for AV Systems
by Sebastian Klaas

Elektrobit’s unique software framework is designed to smooth development of automated driving functions.

When the development of ADAS functions or even automated driving up to SAE Level 4 strives for increased functionality, it also increases the complexity of the software environment—and the related development processes. Why?

Typically, the number of involved ECUs is growing. Existing functional and system architectures were in many cases not defined with Level 3 or 4 in mind. Although functions targeted at different automation levels basically need the same or very similar features, reuse is often difficult or even impossible.

Similar problems arise when existing software modules should be reused over multiple vehicle-model generations. Elements like sensors, actuators, or components for fusion, function, and control need to be incorporated from a hardware as well as from a software perspective. All of this leads to a non-linear growth in the functional complexity of such projects. There are additional obstacles. For example, when multiple partners are involved in the development, the necessary interfacing and coordination causes additional overhead.

Consider also the strategic aspect: Automated driving requires software components like an environment model and a highly precise positioning function. But both only allow for a relatively low degree of differentiation over competitors. The actual driver assistance systems and their handling may differ between OEMs, but the underlying basics such as the environment model and the positioning do not.

Still, efforts to bring these components to perfection tie up resources that could otherwise enable development of components that allow for much more differentiation, such as the HMI. This poses a dilemma, especially as the HMI's functions are of increasing importance in cars with Level 3 or Level 4 automation, because the driver will actually spend more time with these functions when handing over the driving task to the vehicle.

But the environment model and positioning functions play an important role in the safety and reliability of automated vehicles—aspects for which the OEMs are directly responsible. So, functional safety as well as security must be respected with an extremely high commitment.

Speeding development within a framework
All described parameters speak clearly in favor of using a software framework in the development of automated driving functions. Such a framework can be viewed as a set of building blocks for system design and development. The typical architecture of such a framework, shown in Fig. 2, is known as EB robinos.

SOFTWARE AND AI/MACHINE LEARNING

Fig. 2—Overall architecture of the EB robinos framework for automated driving. (Image: Elektrobit)

Fig. 3—The environment model consists of four main components: Object fusion, freespace and obstacle fusion, road fusion, and traffic-rules fusion.

The reference architecture shown supports all automation levels from SAE Levels 1 to 4. The corresponding concepts will also play an important role in developing future Level 5 solutions. However, for full Level 5 solutions, questions like system and software architecture might need heavy changes, and the number of highly dynamic situations will substantially increase. Additional technologies must then be refined and facilitated before such projects can become a reality.

The framework is characterized by a consistent modular design which extensively supports reusability and scalability. Standardized modules and clearly defined interfaces also help to reduce the overhead of complex development projects. As these modules are designed to be hardware- and sensor-agnostic, they can be easily and quickly adapted to an OEM's chosen hardware platform. Independently from the actual hardware, the software will support necessary functions such as sensor fusion. This standardized and modular approach defines a functional architecture and open interfaces between software components as well as to external interfaces. New components can be rapidly prototyped on a PC and subsequently be run in environments such as Adaptive AUTOSAR or Linux. This again can greatly reduce complexity, effort, and costs.

In addition to off-the-shelf modules like the environment model, sensor fusion, and positioning, the framework typically includes diagnostic components such as a safety monitor that will constantly check the sensors and software modules and ensure that they are functional and deliver sensible values. Also, a supervisor module can provide and control algorithmic redundancy and oversee the execution of vital functions—for example, checking planned trajectories for freedom of collision or checking the correct operation of motion management.

Software building blocks for AVs can be distinguished between differentiating and non-differentiating functionalities. For the former, OEMs will most likely leverage in-house know-how and their specific look and feel. On the other hand, non-differentiating software elements such as parts of the environment model can greatly profit from standardization and scaling factors.
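The safety-monitor role described above, continually checking that each sensor and software module "delivers sensible values," boils down to plausibility checks on value range and data freshness. A minimal sketch of that idea (all class names and thresholds are illustrative, not Elektrobit APIs):

```python
# Hypothetical plausibility-check sketch of a safety-monitor component.
# Names and thresholds are illustrative, not part of any vendor framework.
import time

class SensorReading:
    def __init__(self, value, timestamp):
        self.value = value
        self.timestamp = timestamp

class SafetyMonitor:
    def __init__(self, valid_range, max_age_s):
        self.valid_range = valid_range  # (lo, hi) plausible value band
        self.max_age_s = max_age_s      # staleness limit in seconds

    def check(self, reading, now=None):
        """Return a list of fault strings; an empty list means plausible."""
        now = time.time() if now is None else now
        faults = []
        lo, hi = self.valid_range
        if not (lo <= reading.value <= hi):
            faults.append("value out of plausible range")
        if now - reading.timestamp > self.max_age_s:
            faults.append("reading is stale")
        return faults

# Example: a wheel-speed signal limited to 0..120 m/s, at most 0.1 s old.
monitor = SafetyMonitor(valid_range=(0.0, 120.0), max_age_s=0.1)
ok = monitor.check(SensorReading(23.5, 100.00), now=100.05)   # plausible
bad = monitor.check(SensorReading(512.0, 100.00), now=100.30) # two faults
```

A supervisor module of the kind the article mentions would consume these fault lists and trigger a degraded mode when vital functions stop reporting clean.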


Fig. 4—The framework also allows upgrading from Level 1 functions up to the more advanced Level 3 or 4 functions. (Image: Elektrobit)

Also, components can be added or removed depending on customers' needs. This makes the reuse of software building blocks possible, while minimizing development, application and testing efforts.

Building blocks for developers
An overview of some typical functional modules will provide a better understanding of the functional scope and interworking of the particular building blocks.
The environment model (Fig. 3) consists of four main components. Object fusion combines objects recognized by the radar and lidar sensors as well as the cameras and possibly additional sensors based on tracked bounding-box models. It outputs modelled objects with their relative position, size, movement, a classification (for example other cars, cyclists, and pedestrians), and additional information.
Freespace and obstacle fusion describes the free space as well as static obstacles such as guard rails or traffic signs around the car based on a polygon curve. For this purpose, this module works with the raw data and modelled objects of the environment sensors and extracts information about where the car could drive.
Road fusion extracts a road or lane geometry both from the camera signals (identifying lane markings) as well as from the road "furniture" delivered by the freespace and obstacle fusion. It combines this information with the trajectories of dynamic objects on the road as well as data coming from digital maps in order to deliver an overall representation of the road and lane geometry.
Traffic-rules fusion takes care of traffic-rules-related information like traffic signs, traffic lights, or pedestrian crosswalks recognized by the camera(s) or available from digital maps. It outputs rule-based information such as speed limits, no passing, or right-of-way rules.
As the environment model is modular and upgradeable, its implementations can be adapted from Level 1 functions up to the more advanced Level 3 or 4 functions (Fig. 4). However, regardless of the supported automation level, it can be used mostly off-the-shelf—this model requires only few adaptions to specific hardware such as sensors as well as other elements of the system.
The positioning module works closely together with the environment model. It provides highly accurate information about the vehicle's position based on odometry, gyroscope, accelerometer, and GPS signals. It outputs the local position of the car and trajectory information and thus is the base of all geo-referenced functions such as lane following, path planning, and automated parking. This module also delivers the required positioning information for other applications such as vehicle-to-vehicle communication or eCall.
An extrapolation component is another important part of the positioning module. It estimates a position in the near future based on the current vehicle movement. This position can be used to compensate for delays in the fusion itself as well as in the application. These basic functions can additionally be complemented by an electronic horizon which provides highly accurate and up-to-date information about the road ahead for predictive driver-assistance functions. This allows the integration of features like curve-speed warnings, adaptive curve lights, traffic-sign display, range determination, lane keeping, or energy-efficient driving and provides useful information for SAE Levels 3 and 4 functions.
The described software modules of the framework will typically run on a vehicle network architecture that is based on multi-processing controllers. Elektrobit's EB corbos software suite combines essential elements for running multi-processing controllers, enabling safe, high-performance computing. Further, these elements provide a runtime environment, software capabilities, and embedded security. It consists of a hypervisor that permits running multiple operating systems such as the AUTOSAR Runtime for Adaptive Applications (AUTOSAR Adaptive) and a high-performance Linux-based operating system.

Evolutionary levels and standardization
In order to facilitate an easy integration into existing or newly designed system architectures, the EB robinos software framework for automated driving includes a software toolchain. It permits the easy configuration and adaption of its software blocks in various development and testing stages including research, pre-development or concept work, the development process, and mass production.
The centralized functional architecture of the framework supports service-oriented and security-supporting development, as well as training of the development staff. With its modular and functional design focusing on the increasing automation levels starting at Level 1 and currently extending to Level 4, the framework also distinctly supports the evolution of functions and designs in the same or multiple generations of vehicle models over time.
Another important aspect of providing easy integration mechanisms and interfaces is the formation of industry-wide standards of interfaces between the functional models of a software framework as well as with external components.
Currently, several standardization initiatives in the automotive industry are targeted at establishing and defining these interfaces and specifications. Elektrobit intensively supports these efforts and participates in the according standardization bodies and organizations.
The further spread of software frameworks will help developers and OEMs to reduce time to market, enable scaling factors for non-differentiating components of automated driving, greatly reduce complexity, effort, and costs, and allows higher competitiveness and thus better functions for the consumer while enabling Level 5 driving capability. ■

Sebastian Klaas is Product Manager EB robinos, Elektrobit Automotive GmbH.
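The extrapolation component described above amounts to short-horizon dead reckoning: project the vehicle's pose a few tens of milliseconds ahead to mask fusion and application latency. A minimal sketch under a constant speed and yaw-rate assumption (function and parameter names are illustrative, not part of EB robinos):

```python
# Dead-reckoning sketch of a positioning extrapolation step.
# Assumes constant speed and yaw rate over the short horizon dt.
import math

def extrapolate_pose(x, y, heading, speed, yaw_rate, dt):
    """Project a 2-D pose dt seconds ahead to compensate pipeline delay."""
    if abs(yaw_rate) < 1e-9:
        # Straight-line motion
        return (x + speed * dt * math.cos(heading),
                y + speed * dt * math.sin(heading),
                heading)
    # Constant-turn-rate arc
    r = speed / yaw_rate
    new_heading = heading + yaw_rate * dt
    return (x + r * (math.sin(new_heading) - math.sin(heading)),
            y - r * (math.cos(new_heading) - math.cos(heading)),
            new_heading)

# 20 m/s straight ahead with 50 ms of fusion delay -> the vehicle is
# about 1 m further along than the last fused position says.
x1, y1, h1 = extrapolate_pose(0.0, 0.0, 0.0, 20.0, 0.0, 0.05)
```

In a production stack this projection would be fed by the odometry, gyroscope and accelerometer signals the positioning module already fuses.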

SOFTWARE AND AI/MACHINE LEARNING
New Mobility's Mega-Mappers
by Stephen Baker

Most believe ultrahigh-definition mapping is crucial to make high-level automated driving possible. Developing these maps is a huge undertaking — one that’s enjoying a massive investment of money and talent.

A yellow sign on a mountain road shows an S-shaped curve. This is a primitive map, and hardly a faithful representation of the road. Instead it delivers a simple signal to the driver: Get ready for turns.
Road cartography has evolved over centuries with a unifying purpose: to guide human beings from point A to point B. Complexity often gets in the way. "You don't want too much detail," says Wei Luo, formerly a product manager at Google Maps and now chief operating officer at Deepmap, a Palo Alto, California-based startup. "That can confuse people."
At the same time, though, the cartographer counts on the map's user to fill in many of the missing pieces—and respond to changes. After all, the user is a fellow human being. Maps, like language, are symbols that bridge human minds.

New-age cartography for autonomy
But this is changing. The newest field of cartography—creating maps for autonomous vehicles—is designed for a different user: a software program. Unlike a person, the navigation program demands specifics—every squiggle, every raised curb, every passing lane, all of them calibrated by the centimeter. At the same time, and far more challenging, automated navigation must adapt to immediate unknowns. How should it provide guidance to the destination if a fallen tree lies in its path? While a human driver might swear under their breath and improvise, most software programs will require detailed guidance.
An entire industry is rising up to create this new breed of map, a fundamental technology for the nascent autonomous industry. After all, the purpose of the vehicle is to reach a destination. The map tells where it is and how to get there, the AV's connection to the physical world.
Creating these maps requires precise three-dimensional recording of every highway and byway—itself no mean feat. But it also requires muscular layers of artificial intelligence (AI) to interpret what it encounters along the way and then to respond appropriately. Often within a fraction of a second.
It's a massive undertaking that feeds this growing field of research. Google's Waymo, the industry's AI behemoth, is developing maps for its autonomous fleets. It's joined by a host of start-ups, including venture-funded Deepmap and Carmera in the U.S. and European-led Here Technologies, which is backed by Daimler, Volkswagen and other automakers. The winners in this market will be positioned to run the world's geo-platforms, tracking and guiding much of the movement on our planet. "It's a very hot field for research," says John Dolan, a professor at Carnegie Mellon University.

Dealing with change
A central challenge for autonomy-centric mapping is adapting to change. "The system actually has to be 4D," says Deepmap's Luo. "That's 3D plus time." To incorporate time into the map, each system must devise a method for harvesting reliable, up-to-the-minute data.

Here Technologies' maps contain detailed roadway information crucial to precision automated navigating. (Image: Here Technologies)
Deepmap's data-point "view" of the world. (Image: Deepmap)


Some, like Waymo, use the sensors on their own fleets of AVs. Others look to crowdsourced data or piggyback on the onboard LIDAR and other sensors.
Once the sensors are in place and sending back streams of reports, the data-gathering part of the job is straightforward. "You start with a very rich base map," says Ro Gupta, founder and CEO of Carmera, a New York start-up. "That's not trivial," he says, "but it's somewhat a solved problem."
It's the flood of data itself that creates immense challenges. Each AV, says Luo, generates about one petabyte per hour of navigational data. Software must sift through this avalanche of data to find the fragments that are meaningful and then "decide" whether to take action. This is an enormous cognitive enterprise—and requires strong doses of AI.
The initial challenge is simply to spot a change. As the data pours in, the base map is certifying that everything is matching. Stop sign? Check. Left-turn lane? Check.
Then it encounters something new: a white space at a street corner where there used to be a pine tree. The system notes a change. But is it more significant than other changes, like falling leaves or fresh puddles? A human being might immediately recognize the white space as a parked car, and not give it a second thought. The software, however, lacking human experience and intuition, must probe for clues. Is there more data to corroborate the observation? How many times have objects, like a tree, gone missing before? Is there any correlation in such cases to accidents or other troubles? Is traffic continuing unimpeded?
In responding to changes, time is of the essence. One logical approach would be to reduce data flows and associated latency by programming the sensor vehicles to report only when they detect changes from the base map. If the traffic is flowing on the usual three lanes on Broad Street, why add to system "noise" by reporting it? The trouble, though, says Carmera's Gupta, is that unperceived changes will be missed. "You lose the false negatives," he says.

Cloud or no cloud?
Updating this new variety of map raises all manner of issues regarding data management. How much of the geo-data, for example,
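The "probe for clues" loop described above (flag a deviation from the base map, then wait for corroborating observations before committing a map update) can be sketched roughly as follows. The structure and names are hypothetical, not any vendor's pipeline:

```python
# Hypothetical change-corroboration sketch: a candidate map change is
# only promoted to a confirmed update after enough independent reports.
from collections import defaultdict

class ChangeDetector:
    def __init__(self, base_map, min_confirmations=3):
        self.base_map = base_map            # {cell_id: set(features)}
        self.min_confirmations = min_confirmations
        self.candidates = defaultdict(int)  # (cell, feature, kind) -> count

    def observe(self, cell_id, features):
        """Diff one sensor report against the base map; return the list
        of changes that just reached the confirmation threshold."""
        confirmed = []
        expected = self.base_map.get(cell_id, set())
        missing = expected - features   # e.g. the vanished pine tree
        added = features - expected     # e.g. an unexplained new object
        for feat, kind in [(f, "missing") for f in missing] + \
                          [(f, "added") for f in added]:
            key = (cell_id, feat, kind)
            self.candidates[key] += 1
            if self.candidates[key] == self.min_confirmations:
                confirmed.append(key)
        return confirmed

det = ChangeDetector({"corner_17": {"pine_tree", "stop_sign"}})
det.observe("corner_17", {"stop_sign"})            # tree missing, report 1
det.observe("corner_17", {"stop_sign"})            # report 2
changes = det.observe("corner_17", {"stop_sign"})  # report 3 -> confirmed
```

Reporting only confirmed diffs is exactly the latency-saving shortcut Gupta warns about: it suppresses noise, but an unperceived change never accumulates reports at all.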

Just say the words: what3words

If you're standing in the middle of New York's Rockefeller Center or on a beachside road outside Cape Town, South Africa, how do you tell a rideshare where to pick you up? Most places on our planet, from fields and beaches to parking lots, lack an address. And sometimes the number is on a front door that's hard to find, or around the block.
Our current address system, developed in the 18th century for postal delivery, is frightfully imprecise.
A decade ago, this imprecision was causing problems for a British concert promoter named Chris Sheldrick. He struggled to tell delivery companies exactly where to drop off drum sets or to direct musicians from their hotel to the correct backstage door. Mapmakers didn't have this problem, Sheldrick knew. With geo-mapping, they could pinpoint any place on earth by its longitude-latitude coordinates—New York's Statue of Liberty, for example, was 40.6892° N, 74.0445° W. But try giving longitude and latitude coordinates to an acid-rock drummer and see if he gets to the show.
The trick, Sheldrick saw, was to create a navigation system for the post-postal economy. This is the almost magical idea of what3words.
The company has mapped the world as a grid of 57 trillion squares. Each one is three meters square and each has its own "tag"—a combination of three words separated by periods. Astonishingly, the whole scheme requires just 38,000 words (less than a quarter of the 171,000 words in the Oxford English Dictionary) to cover the entire planet.
And it's blazingly precise. The torch of the Statue of Liberty, for example, is "toned.melt.." in the what3words universe. The nearby cafe on the island is "puzzle.pies.ties," and the flagpole plaza is "corn.camps.spite." In this scheme, every place on earth is many places, each with its own address.
What3words is positioning itself as the navigation app for the next phase of mobility—whether for locating dockless scooters in Santa Monica, summoning a flying drone taxi in Dubai, delivering food to famine victims in Yemen, even finding a parked car at Disney World.
The key, as in most digital technologies, is for the app to entrench itself as a standard. Already, the company has deployed the app in 26 languages. And what3words already has scored a production-vehicle coup with its installation on the 2018 Mercedes-Benz A-Class, where it's linked to the speech-enabled navigation system. Just say "Gosh.weds.lost," and the vehicle navigation system will direct you to the centerfield gate at Chicago's Wrigley Field.
But in coming years, mapping will need to cover more than 57 trillion squares of global real estate. How to tell a drone, for example, to deliver dim sum to a 14th-story balcony in Hong Kong? "We're looking at vertical," says the company's chief marketing officer, Giles Rhys Jones. The challenge is to add a new dimension of complexity while staying true to the human-friendly formula—still, preferably, with just three words. —SB

Mercedes-Benz is the first automaker to incorporate what3words into its navigation-system programming. Speak the 3-word "address" and the system will direct the driver to that exact 3-square-meter place on the earth.

Developers of what3words believe the system will offer vast new possibilities for navigation of unmanned aerial vehicles (UAVs).
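The arithmetic behind the scheme is easy to check: three slots drawn from a 38,000-word list give 38,000³, roughly 55 trillion combinations, on the order of the 57 trillion 3 m squares. A toy mixed-radix encoder shows the idea; what3words' real mapping is a proprietary, deliberately shuffled algorithm, so everything here is illustrative only:

```python
# Toy 3-word addressing: write a square index in base len(wordlist) and
# use the three digits as word indices. Not the real what3words scheme.
WORDS = 38_000  # size of the word list cited in the article

def index_to_tag(square_index, wordlist):
    n = len(wordlist)
    assert 0 <= square_index < n ** 3
    a, rest = divmod(square_index, n * n)
    b, c = divmod(rest, n)
    return ".".join([wordlist[a], wordlist[b], wordlist[c]])

def tag_to_index(tag, wordlist):
    n = len(wordlist)
    a, b, c = (wordlist.index(w) for w in tag.split("."))
    return (a * n + b) * n + c

demo_words = [f"w{i}" for i in range(1000)]  # small demo list
tag = index_to_tag(987_654, demo_words)      # -> "w0.w987.w654"
roundtrip = tag_to_index(tag, demo_words)    # -> 987654
```

The three-digit positional trick is why a vocabulary of tens of thousands of words, rather than trillions, suffices to cover the planet.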

Map-developer Here Technologies envisions a variety of useful near-term services, such as guidance about parking availability, springing from its rich map data. (Image: Here Technologies)

should the vehicle itself interpret and what proportion should be uploaded to cloud-based AI systems?
On one hand, the cloud can harvest from multiple sources, match them with historical patterns, and provide expanded intelligence. But even with ultra-speedy 5G cellular networks expected to be widespread within three years, the back-and-forth of data transfer raises latency concerns. What's more, since network connections are never guaranteed, autonomous vehicles must be equipped to interpret deviations from the base map for themselves and respond appropriately.
In these early days, most of the mapping companies are focusing on small samples of the earth's roadways. Naturally, many concentrate on the areas where autonomous driving tests and services are underway. Waymo and Deepmap, for example, are busy in parts of Arizona and California. Carmera, which has agreements with companies that operate delivery fleets, is modeling New York City, San Francisco and retirement villages in Florida, where its partner, Voyage, is operating autonomous shuttle services. The exception is Here Technologies, which is harvesting anonymized data throughout much of Europe and North America from sensors on hundreds of thousands of vehicles manufactured by European automakers.

Monetization matters
One problem, particularly for the venture-backed startups, involves timing. While they're making large investments now, the widespread use of fully-automated vehicles (SAE Level 4 and 5) may be a decade away, or perhaps longer. In the meantime, they're searching for intermediate markets for their next-generation maps.
"With this transition taking place, how can we use this data to help the driver [now]?" asks Matthew Preyss, a product marketing manager at Here Technologies. Preyss suggests the new maps will enhance current navigation services, like Waze, Google Maps and TomTom, with more up-to-date road status and course corrections. But the maps could also feed new services, such as augmented reality and parking availability, providing detailed information on the route in both audio and video. The challenge, as always when it comes to maps and human beings, will be to provide helpful data while culling distracting detail.
However, keeping humans in the loop during this period of development also has an advantage: the maps themselves can learn from the drivers' responses to the data—and focus the AI on significant changes along the route—the ones that demand a response. In this

way, we human drivers, over the next decade, will be "educating" the navigation systems poised to replace us. ■

City streetscape represented by Here's HD mapping software. (Image: Here Technologies)

Can Autonomous Vehicles Make the Right 'Decision?'
by Jennifer Dukarski

The famous “Trolley Problem” might not really be the problem automated-vehicle ethics have to solve.

The impact of artificial intelligence (AI) on society is on the rise and many are beginning to question the "ethics" of these systems, leading companies to jump to action.
Microsoft's president met with Pope Francis to discuss industry ethics, Amazon is helping to fund federal research into "algorithm fairness," and Salesforce has hired an "architect" for ethical AI practice and a "chief ethical and human use officer."
The need for introspection in ethical decision-making in the automated-vehicle (AV) space is just as critical. With every AV failure and fatality, the public questions how the vehicle arrives at the "decisions" it makes. Industry watchdogs call for greater transparency as design teams work to apply AI, machine learning and other technologies to AV software in the most appropriate manner.
As we make decisions to employ AI, it's important to think about ethics and the potential legal impact of using AI in design.

Can machines be moral?
As traditional drivers, we regularly encounter moral dilemmas. When you slam on the brakes to avoid hitting a pedestrian who steps in front of your vehicle, you are making a moral decision. We expect AVs to be able to make that same decision. Although the goal of designing these systems would be to avoid all collisions—and the use of sensors and technology seek to make the vehicle far safer than the reaction speed of a human driver—we still are drawn to the question: how is an AV programmed and what set of priorities does it have?

A system, three laws safe!
Isaac Asimov, in his well-known 1942 short story Runaround, introduced the "Three Laws of Robotics," which state:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Many have turned to Asimov's laws as a good framework for driverless-vehicle AI ethics. This approach follows a top-down ethical theory where an "Ethical Governor" determines the ethical policy and


then selects abstract ethical principles. When functioning, the agent (i.e., the vehicle) chooses between the best possible actions arising from the situation by applying the selected rules. Technically, the vehicle would make a decision that is minimally unethical with respect to the ethical policy created by the designers. As an example, base priorities could include:
• Rule 1: Do not harm people
• Rule 2: Do not harm animals
• Rule 3: Do not damage self
• Rule 4: Do not damage property
These rules could be prioritized differently or could have additional rules to create a perceived groundwork for an AI system to make its decisions. And these often form the basis for the most famous discussion in autonomous vehicles.

Ethics and the hypothetical trolley
The modern version of the AV world's now-famous "trolley problem" was created by Philippa Foot in 1967, but the origins of the ethical thought experiment are far older.
In 1905, the University of Wisconsin asked undergraduates to decide whether to sacrifice one person (your child) by pulling a lever to divert a runaway trolley or to let it proceed and kill five people; MIT grabbed this example and established the MIT Moral Machine, where individuals complete a survey to determine what actions an AV should take when different people and situations were encountered. The experiment allowed individuals to decide the survival of different genders, ages, positions (in vehicle or on the road), levels of law-abiding and even species (animals or humans).

One of the "Trolley Problem" AV ethics scenarios presented by MIT's Moral Machine project. (Image: The Moral Machine Team)

One recent study published in Risk Analysis: An International Journal found that survey respondents generally preferred for the vehicle to remain in its lane and attempt to perform an emergency stop, whether or not that was a feasible option. When asked to "stay or swerve," more people (often as many as 85%) chose to stay. This preference itself potentially conflicts at times with the simple "preserve life" directive in Asimov's laws and with many of the base assumptions that Ethical Governors would likely choose.
Another area of interest that can be seen in applying the trolley problem is the influence of culture and geographic differences in decision-making. North American participants, for example, indicated preference for inaction and remaining in the lane. Both Asian and South American respondents had a higher preference for having the vehicle take action.

What happens when the AV fails?
Ultimately, the most direct impact of ethical decisions made by engineers and designers is most felt when things go wrong. When a fatality occurs, investigators and victims will ask why the vehicle made the decision it did and why it failed to take an alternative course of action. Eventually, this will likely find its way to the court system in the construct of a product-liability lawsuit that suggests the vehicle has a design defect in the logic and programming behind the AI software.
Design defects in the automotive sector often are evaluated on whether the foreseeable risk of harm created by a product is greater than that of a product with a reasonable alternative design (RAD). The RAD is such a critical concept that a plaintiff cannot prevail even if the risk exceeds the utility unless there is an alternative design (unless the design itself was so questionable that no reasonable person would ever sell the product).
Engineers and designers in this sector then are left with a challenging question: what is a RAD in an AI system that makes ethical choices impacting life and death?
Is it a design that considers cultural expectancies and biases in programming the logic of decision-making? Perhaps a system that is designed to appreciate the cultural uniqueness of the geographic region it operates in is sufficient to be a reasonable alternative design. Is it a system that was developed by a team that was aware of the nature of gender bias in algorithms and data sets? The decisions made by a diverse team that spans age, gender, and other cultural norms might demonstrate a reasonable, comprehensive design. Or is the true reasonable alternative design—the human mind?
Perhaps, we ourselves are the yardstick to compare ethical decisions against.
Experience would suggest we seek to incorporate many types of diversity into our designs. We need to understand the ethical issues, even if the Trolley Problem is merely a diversion away from seeking a design that makes the best decisions based on information available at the time. And we need to prepare to answer these questions for a general public that is hoping for an autonomous system that is at least "three laws safe." ■

A self-described "recovering engineer" with 15 years of experience in automotive design and quality, Jennifer Dukarski is a Shareholder at Butzel Long, where she focuses her legal practice at the intersection of technology and communications, with an emphasis on emerging and disruptive issues that include cybersecurity and privacy, infotainment, vehicle safety and connected and autonomous vehicles.
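A top-down "Ethical Governor" of the kind the article describes, which picks the candidate action that is minimally unethical under an ordered rule set, can be sketched as a lexicographic comparison. This is a toy illustration of the prioritized rules above, not any production system:

```python
# Toy Ethical Governor: candidate actions are scored by predicted
# violations of the prioritized rules, and the minimally unethical
# action wins. Scenario values are invented for illustration.
RULES = ["harm_people", "harm_animals", "damage_self", "damage_property"]

def violation_vector(outcome):
    """Order predicted violations by rule priority for comparison."""
    return tuple(outcome.get(rule, 0) for rule in RULES)

def choose_action(candidates):
    """candidates: {action_name: predicted-violation dict}.
    Lexicographic min => higher-priority rules dominate lower ones."""
    return min(candidates, key=lambda a: violation_vector(candidates[a]))

scenario = {
    "stay_in_lane_brake": {"harm_people": 0, "damage_self": 1},
    "swerve_into_barrier": {"harm_people": 0, "damage_self": 2,
                            "damage_property": 1},
    "swerve_into_crowd": {"harm_people": 3},
}
best = choose_action(scenario)  # -> "stay_in_lane_brake"
```

Because the comparison is lexicographic, any amount of property damage is accepted before a single predicted harm to a person, which is exactly the kind of hard-coded priority ordering the legal questions above put under scrutiny.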

CONNECTED VEHICLE ELECTRONICS
Expanding the Role of FPGAs
by Terry Costlow

New demands for on-vehicle data processing, and over-the-air updating, are expanding the use of these programmable devices in production vehicles. The recent Daimler-Xilinx linkup shows the way forward.

The increasingly varied nature of data tied to safety systems and connected cars and trucks is altering electronic architectures, putting more emphasis on adaptability during design phases and after new vehicles enter the field. Field Programmable Gate Arrays (FPGAs) are increasingly seeing use in production vehicles, with expectations that usage could grow as artificial intelligence (AI) and over-the-air updating become more commonplace.
FPGAs are semiconductor devices that are based around a matrix of configurable logic blocks, connected via programmable interconnects. They can be reprogrammed to desired application or

FPGAs can be used in a range of vehicle sensors and in safety modules.


functionality requirements after manufacturing. In this way FPGAs differ from Application Specific Integrated Circuits (ASICs), which are custom manufactured for specific design tasks.

FPGAs "provide an extra layer to defense-in-depth protection schemes."
—Willard Tu, Xilinx senior automotive director

In new vehicles going forward, inputs now come from multiple sensors and wireless links, areas where changes occur far more regularly than in conventional automotive systems. AI also requires the ability to adapt to changing patterns. These shifting demands for data processing are helping FPGAs expand their role in production vehicles.
Programmable devices from Xilinx and Intel/Altera migrated beyond prototyping a few years ago, largely in rapidly-changing infotainment systems. Now, the image processing requirements of cameras, radar and lidar provide a boost for FPGAs, as does the looming implementation of AI.
According to Grand View Research, automotive is now the third largest global market for FPGAs, after industrial and telecom. Another analysis firm, MarketsandMarkets, predicts FPGA revenues will rise from $5.83 billion in 2017 to $9.5 billion in 2023, noting that rising vehicle volumes in the Asia-Pacific region will drive rapid FPGA growth in automotive.
Xilinx, which has shipped over 40 million parts to OEMs and Tier 1s, is claiming significant progress in full-run vehicle shipments. In 2013, its chips were in 29 production models made by 14 OEMs. This year, they are in 111 production models from 29 OEMs.
Recently Daimler announced it is teaming up with Xilinx so its deep-learning experts at the Mercedes-Benz Research and Development centers in Germany and India can develop AI algorithms on an adaptable Xilinx platform. "Through this strategic collaboration, Xilinx is providing technology that will enable us to deliver very low latency and power-efficient solutions for vehicle systems which must operate in thermally constrained environments," said Georges Massing, director user interaction and software, Daimler AG.
Xilinx is competing with Nvidia graphical processing units (GPUs), Intel's Mobileye vision processing devices and the FPGAs Intel gained by acquiring Altera. Willard Tu, senior automotive director at Xilinx, said Xilinx devices provide more transparency than Mobileye's black-box approach. If there are problems, that makes it easier to debug. He added that FPGAs can be faster than GPUs.
"GPUs batch parallel tasks, holding some until a set number arrive. That introduces latency," Tu explained. "We do parallelism, running batchless processes where each input is an independent piece of data. There's no queueing, so all elements have the same latency."
He noted that as connectivity brings security concerns, FPGAs provide an extra layer to defense-in-depth protection schemes. Tu compared silicon to a door lock, saying that once hackers find an opening, they can continue to exploit it even after software has been updated.
"Hardware is the lock; once hackers figure out how to defeat that lock, they know how to get in. You can change the software, but they can still get in. With FPGAs, you can change the lock, closing that vulnerability for good," he asserted.
While conventional processors scale by moving to higher clock rates or adding cores, FPGAs can be upgraded without major redesigns. When alterations are needed, programmable logic can be upscaled by adding more fabric, which is simpler than redesigning a processing unit or waiting for faster parts. That is important as more factors change as OEMs move towards autonomy.
"When you look at data aggregation and pre-processing and distribution, it's hard to predict how many cameras, what type of radar and the style of lidar will be used," Tu said. "There are a lot of variabilities in sensors, and they may link to CAN or Ethernet, so there's a real need for programmability." ■
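The MarketsandMarkets forecast implies a compound annual growth rate that is easy to back out: $5.83 billion in 2017 to $9.5 billion in 2023 is six compounding years.

```python
# Implied CAGR of the FPGA revenue forecast cited in the article.
revenue_2017 = 5.83  # $ billion
revenue_2023 = 9.5   # $ billion
years = 2023 - 2017

cagr = (revenue_2023 / revenue_2017) ** (1 / years) - 1
# cagr is roughly 0.085, i.e. about 8.5% per year
```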

CONNECTED VEHICLE ELECTRONICS
Electronic Architectures Get Smart
by Lee Bauer

Upgradable, scalable and powerful new architectures will help enable data-hungry connected, autonomous vehicles.

The mobility industry is experiencing the biggest disruption in its 125-year history, evolving at a pace that demands flexibility. There is an urgent need to remove all barriers in vehicle design and engineering to deliver a holistic, total-system solution to support safe, efficient and affordable products. And that urgency begins with a complete re-thinking of the vehicle's electrical/electronic architecture.
Comprising a 'brain' (software and computational processing) and 'nervous system' (sensors and power/data distribution), today's architectures have become exceedingly complex and inflexible. Many have more than 100 computers per vehicle, each of them added to support a new feature. There are typically 25% more electrical circuits than five years ago—and we expect that number will increase by another 30% five years from now. Wiring and cabling has grown literally by a mile in length, on average, bringing packaging challenges and additional mass.
Due to increasing demand for software-enabled features, vehicle computing requirements in the not-too-distant future are expected to increase drastically compared to those of today. One measure of that is "flops"—the number of complex operations calculated each second. Today's vehicles are completing significantly less than a teraflop, or 1 trillion operations, a second. This equates to the computing power of less than one iPhone 7. That's expected to increase to over 200 teraflops in the future—the computing power

'Smart' architectures engineered to meet the demands of the higher levels of autonomous driving will offer capability far beyond today's configurations. (Image: Aptiv)


of more than 500 iPhone 7's. It's not a stretch to say the typical car is becoming a supercomputer—a data center on wheels.

Data speeds during the past decade have risen steadily from less than 10 megabytes per second to today's range of one gigabyte per second—but much, much faster speeds are needed for self-driving cars and cloud connectivity.

A single connected and autonomous vehicle will generate 40 terabytes of uncompressed data per hour—about 3,000 times the volume of data generated daily by Twitter's 270 million users! This staggering mountain of data will require more than 200 teraflops of processing power. That's 200 trillion complex operations per second.

Put simply, your vehicle exchanges 15,000 pieces of data in the time it takes to blink. By 2020, that number is expected to jump to roughly 100,000. This means the traditional architecture approach will no longer be viable to support the growth in vehicle content and complexity and to run the vehicle's software algorithms. The network infrastructure cannot support the data transfer speeds of the future.

Clearly, in terms of power and speed, current-generation vehicle architectures are not up to the task. The solution is to move to an upgradable and scalable "smart" architecture. These will capture all the electrification, active safety, automation, and connectivity features that consumers increasingly expect, while paving the way for autonomous driving.

Prioritizing safety and affordability

Think of the 'brain' first and foremost as software-enabled vehicle features, including active safety and autonomous systems, infotainment and user experience, and data and services. The exponential increase in compute power of advanced chipsets will effectively transform the vehicle into a server platform capable of delivering the features and services required for smart mobility. Underpinning all of this is the new 'nervous system'—a reliable/resilient data network that gets the right data to the right place at the right time.

More features…more computing power…more data…and more power distribution than ever before require radical changes in architectures. And smart vehicle technology must be safe and affordable. How do we guarantee the system is fail operational? And if there is a failure in either computing or networking, how do we create redundancy?

Unfortunately, if we were to physically try to take the existing available space in today's vehicles and account for redundancy by
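The bandwidth gap described above is easy to quantify. The short calculation below is our illustration of why a link in today's gigabyte-per-second class cannot carry the projected sensor load:

```python
# Rough arithmetic behind the bandwidth claim: 40 terabytes of
# uncompressed sensor data per hour, converted to a sustained rate.
TB_PER_HOUR = 40
bytes_per_second = TB_PER_HOUR * 1e12 / 3600   # bytes generated each second
gb_per_second = bytes_per_second / 1e9

print(f"Sustained rate: {gb_per_second:.1f} GB/s")  # about 11.1 GB/s
# That is an order of magnitude above a ~1 GB/s link, which is why
# the article argues current network infrastructure cannot keep up.
```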

The steady growth of electrical and electronic systems complexity is shown in ghosted illustrations of typical SUVs from 2000 and 2020. (Image: Aptiv)


doubling the affected components to create a fail-operational design, this level of redundancy in its purest form would not fit. (Fail-operational performance, an aerospace term, means that after one failure in a system, redundancy management allows the vehicle to continue its mission. After any failure, the vehicle is capable of returning home safely.)

We then challenged ourselves to answer the question: What is the redundancy needed to guarantee a three-layer fail-operational design? And then, how do we consolidate those features to be affordable and flexible enough to actually fit in the vehicle?

Aptiv is the first technology company to examine the complete system holistically. First, we break it into two buckets—data (used to support the movement of information around the vehicle; think of it as the part of wiring that allows the vehicle to sense) and power, which allows energy to be moved around the vehicle.

Fewer controllers

To enable the SAE Level 4/5 autonomous vehicles of the future, we must ensure the networking capability is in place to move data, in real time, from the vehicle's sensors to its compute platforms. New technologies like HDBaseT and evolving automotive Ethernet help provide the high transfer speeds required to make autonomous driving a reality.

Smart architectures engineered to meet the demands of the higher levels of autonomous driving will offer capability beyond today's configurations. Their multi-domain controller 'brains' can process massive amounts of data and manage multiple electronic sub-systems simultaneously—unlike current domains that control, in most cases, one function at a time. This brings increased computing power in larger (more powerful) but substantially fewer controllers.

Moving forward, Aptiv sees three critical considerations for a smart vehicle architecture. One is a flexible software framework with a tailor-made computing, data and power distribution network designed to support it. Second are software-defined features and compute power that are separated, enabling independent lifecycles. Finally, there's resiliency—the need for the system to contemplate and address multilayer system fault tolerance, while meeting redundancy requirements for the highest level of automotive safety integrity.

Evolution and revolution

There is no one single solution when it comes to how wiring is routed. Rather, there are thousands of flexible approaches to any vehicle platform, depending on the customer's requirements. We see three approaches based on where we are in the lifecycle and the need to capture desired customer feature growth.

One approach is to optimize. While a vehicle's commercial cycle usually lasts six years, we like to look at it during the lifecycle and provide suggestions on how to realize improvements that are minimally invasive. This allows engineers to validate what's working, optimize that output, and determine what can be improved. Optimization also allows the vehicle to remain relevant with respect to functionality, without extensive rewiring.

Another approach is evolutionary: expand existing architecture capability, maintain some legacy elements and allow for new features and functions.

And then we have Smart architecture, which is more of a revolution. It picks up where evolution is no longer feasible, delivering new concepts to address the new mobility services of the future. Autonomous driving requires Automotive Safety Integrity Level D—essentially fail-safe operation. This requirement drives the revolution, or an electrical architecture break. With it come deep implications on each of the three critical system levels—power distribution, networking distribution, and compute. We call this three-layer fail-operational design.

A three-layer fail-operational design approach is about resilience. It considers power failure, network failure and even compute failure. It is the ability to dynamically re-route power, network traffic and even decision making to bring an autonomous car to a safe stop. Our extensive system design expertise in all three layers delivers the necessary IP for fail-safe operation—the foundation for any smart vehicle architecture.

Traditional vehicle architectures never contemplated nor were designed to support the explosion of content and features in an effective, cost-efficient manner. They have become the limiting factor. They must adapt—and become smarter. ■

Lee Bauer is Aptiv's Vice President of Mobility Architecture Group.
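The three-layer fail-operational behavior described above (tolerate the first fault by rerouting, and degrade to a safe stop on a second fault in the same layer) can be sketched in a few lines. The toy supervisor below is purely illustrative; the layer names, API and fallback policy are assumptions made for this sketch, not Aptiv's implementation:

```python
# Illustrative sketch of a three-layer fail-operational supervisor.
# All names and behavior here are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str                 # "power", "network", or "compute"
    primary_ok: bool = True
    backup_ok: bool = True

    def operational(self) -> bool:
        # Fail-operational: the layer still works if either path is alive.
        return self.primary_ok or self.backup_ok

@dataclass
class Supervisor:
    layers: dict = field(default_factory=lambda: {
        n: Layer(n) for n in ("power", "network", "compute")})

    def report_failure(self, layer: str) -> str:
        lyr = self.layers[layer]
        if lyr.primary_ok:
            lyr.primary_ok = False   # first fault: reroute to the backup path
            return f"{layer}: switched to backup, mission continues"
        lyr.backup_ok = False        # second fault in the same layer
        return f"{layer}: redundancy exhausted, executing safe stop"

    def mission_capable(self) -> bool:
        return all(l.operational() for l in self.layers.values())

sup = Supervisor()
print(sup.report_failure("network"))   # first fault: traffic is rerouted
print(sup.mission_capable())           # vehicle continues its mission
print(sup.report_failure("network"))   # second fault: bring to a safe stop
```

The design point the sketch makes concrete is that redundancy is managed per layer: a single fault anywhere degrades gracefully, and only an exhausted layer forces the safe-stop path.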

AUTONOMOUS VEHICLE ENGINEERING SAE EBOOK 21