Passenger Response to Driving Style in an Autonomous Vehicle
Total pages: 16
File type: PDF, size: 1,020 KB
Recommended publications
-
AI Report: An International Survey of the Socio-Economic
REPORT ON ARTIFICIAL INTELLIGENCE: AN INTERNATIONAL SURVEY OF THE SOCIO-ECONOMIC IMPACT, STATUS AS OF THE END OF OCTOBER 2020. 5. AI as an octopus with tentacles in many sectors: an illustration. 5.1 General purpose: sustainable development (the SDGs). AI is a general-purpose technology and can be deployed to advance society's overall well-being.28 The contribution of AI to achieving the UN Sustainable Development Goals (SDGs) in areas including education, health care, transport, agriculture and sustainable cities is the clearest evidence of this. Many public and private organisations, among them the World Bank, several United Nations agencies and the OECD, want to harness AI to make progress towards the SDGs. The McKinsey Global Institute (MGI) database of AI applications29 shows that AI can be deployed for each of the 17 Sustainable Development Goals. The figure below gives the number of AI use cases (applications) in the MGI database that can support each of the UN SDGs. Figure 1: AI use cases supporting the SDGs.30 Footnotes: 28 Steels, L., Berendt, B., Pizurica, A., Van Dyck, D., Vandewalle, J., Artificiële intelligentie. Naar een vierde industriële revolutie?, Standpunt nr. 53, KVAB, 2017. 29 McKinsey Global Institute, Notes from the AI frontier: Applying AI for social good, discussion paper, December 2018; Chui, M., Chung, R., van Heteren, A., Using AI to help achieve sustainable development goals, UNDP, January 21, 2019. 30 Note from McKinsey: This chart reflects the number and distribution of use cases and should not be read as a comprehensive evaluation of AI potential for each SDG; if an SDG has a low number of cases, that is a reflection of our library rather than of AI applicability to that SDG. -
Appearance-Based Gaze Estimation with Deep Learning: a Review and Benchmark
MANUSCRIPT, APRIL 2021. Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark. Yihua Cheng1, Haofei Wang2, Yiwei Bao1, Feng Lu1,2,*. 1State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University, China. 2Peng Cheng Laboratory, Shenzhen, China. [email protected], [email protected]. Abstract: Gaze estimation reveals where a person is looking and is an important clue for understanding human intention. The recent development of deep learning has revolutionized many computer vision tasks, and appearance-based gaze estimation is no exception. However, the field lacks a guideline for designing deep learning algorithms for gaze estimation tasks. In this paper, we present a comprehensive review of appearance-based gaze estimation methods with deep learning. We summarize the processing pipeline and discuss these methods from four perspectives: deep feature extraction, deep neural network architecture design, personal calibration, and device and platform. Since the data pre-processing and post-processing methods are crucial for gaze estimation, we also survey face/eye detection methods, data rectification methods, 2D/3D gaze conversion methods, and gaze origin conversion methods. To fairly compare the performance of various gaze estimation approaches, we characterize all the publicly available gaze estimation datasets, collect the code of typical gaze estimation algorithms, implement these codes, and set up a benchmark. Fig. 1: Deep learning based gaze estimation relies on simple devices and complex deep learning algorithms to estimate human gaze. It usually uses off-the-shelf cameras to capture facial appearance, and employs deep learning algorithms to regress gaze from the appearance. -
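The 2D/3D gaze conversion step this abstract mentions can be made concrete with a small sketch. The example below converts (pitch, yaw) gaze angles to a 3D gaze vector and back; it assumes one common camera-axis convention, so the function names and axis signs are illustrative rather than the survey's exact definitions:

```python
import math

def angles_to_vector(pitch, yaw):
    # One common convention (assumed here): (0, 0) looks down the -z camera
    # axis; yaw rotates about the vertical axis, pitch about the horizontal.
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def vector_to_angles(g):
    # Inverse conversion: unit gaze vector back to (pitch, yaw) angles.
    x, y, z = g
    return (math.asin(-y), math.atan2(-x, -z))
```

Because the vector is unit-length by construction, converting angles to a vector and back recovers the original (pitch, yaw), which is a handy sanity check when comparing methods that report gaze in different forms.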
Stanley: the Robot That Won the DARPA Grand Challenge
STANLEY: Winning the DARPA Grand Challenge with an AI Robot. Michael Montemerlo, Sebastian Thrun, Hendrik Dahlkamp, David Stavens, Stanford AI Lab, Stanford University, 353 Serra Mall, Stanford, CA 94305-9010, fmmde,thrun,dahlkamp,[email protected]; Sven Strohband, Volkswagen of America, Inc., Electronics Research Laboratory, 4009 Miranda Avenue, Suite 150, Palo Alto, California 94304, [email protected]. http://www.cs.stanford.edu/people/dstavens/aaai06/montemerlo_etal_aaai06.pdf
Stanley: The Robot that Won the DARPA Grand Challenge. Sebastian Thrun, Mike Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan Halpenny, Gabriel Hoffmann, Kenny Lau, Celia Oakley, Mark Palatucci, Vaughan Pratt, and Pascal Stang, Stanford Artificial Intelligence Laboratory, Stanford University, Stanford, California 94305. http://robots.stanford.edu/papers/thrun.stanley05.pdf
DARPA Grand Challenge: Final Part 1, Stanley from Stanford (10:54). https://www.youtube.com/watch?v=M2AcMnfzpNg
Sebastian Thrun helped build Google's amazing driverless car, powered by a very personal quest to save lives and reduce traffic accidents (4 minutes). https://www.ted.com/talks/sebastian_thrun_google_s_driverless_car
THE GREAT ROBOT RACE (documentary), published on Jan 21, 2016. The DARPA Grand Challenge: a raucous race for robotic, driverless vehicles sponsored by the Pentagon, which awards a $2 million purse to the winning team. Armed with artificial intelligence, laser-guided vision, GPS navigation, and 3-D mapping systems, the contenders are some of the world's most advanced robots. Yet even their formidable technology and mechanical prowess may not be enough to overcome the grueling 130-mile course through Nevada's desert terrain. From concept to construction to the final competition, "The Great Robot Race" delivers the absorbing inside story of clever engineers and their unyielding drive to create a champion, capturing the only aerial footage that exists of the Grand Challenge. -
Through the Eyes of the Beholder: Simulated Eye-Movement Experience (“SEE”) Modulates Valence Bias in Response to Emotional Ambiguity
Emotion © 2018 American Psychological Association, 2018, Vol. 18, No. 8, 1122–1127, 1528-3542/18/$12.00, http://dx.doi.org/10.1037/emo0000421. BRIEF REPORT: Through the Eyes of the Beholder: Simulated Eye-Movement Experience (“SEE”) Modulates Valence Bias in Response to Emotional Ambiguity. Maital Neta and Michael D. Dodd, University of Nebraska—Lincoln. Although some facial expressions provide clear information about people’s emotions and intentions (happy, angry), others (surprise) are ambiguous because they can signal both positive (e.g., surprise party) and negative outcomes (e.g., witnessing an accident). Without a clarifying context, surprise is interpreted as positive by some and negative by others, and this valence bias is stable across time. When compared to fearful expressions, which are consistently rated as negative, surprise and fear share similar morphological features (e.g., widened eyes) primarily in the upper part of the face. Recently, we demonstrated that the valence bias was associated with a specific pattern of eye movements (positive bias associated with faster fixation to the lower part of the face). In this follow-up, we identified two participants from our previous study who had the most positive and most negative valence bias. We used their eye movements to create a moving window such that new participants viewed faces through the eyes of one of our previous participants (subjects saw only the areas of the face that were directly fixated by the original participants in the exact order they were fixated; i.e., Simulated Eye-movement Experience). The input provided by these windows modulated the valence ratings of surprise, but not fear, faces. -
Modulation of Saccadic Curvature by Spatial Memory and Associative Learning
Modulation of Saccadic Curvature by Spatial Memory and Associative Learning. Dissertation submitted in fulfilment of the requirements for the degree of Doctor of Natural Sciences (Dr. rer. nat.) to the Department of Psychology of Philipps-Universität Marburg by Stephan König from Arnsberg. Marburg/Lahn, October 2010. Accepted as a dissertation by the Department of Psychology of Philipps-Universität Marburg on _________________. First reviewer: Prof. Dr. Harald Lachnit, Philipps-Universität Marburg. Second reviewer: Prof. Dr. John Pearce, Cardiff University. Date of the oral examination: 27 October 2010. Acknowledgment: I would like to thank Prof. Dr. Harald Lachnit for providing the opportunity for my work on this thesis. I would like to thank Prof. Dr. Harald Lachnit, Dr. Anja Lotz and Prof. Dr. John Pearce for proofreading, reviewing and providing many valuable suggestions during the process of writing. I would like to thank Peter Nauroth, Sarah Keller, Gloria-Mona Knospe, Sara Lucke and Anja Riesner, who helped in recruiting participants as well as collecting data. I would like to thank the Deutsche Forschungsgemeinschaft for their support through the graduate program NeuroAct (DFG 885/1) and the project Blickbewegungen als Indikatoren assoziativen Lernens (DFG LA 564/20-1), granted to Prof. Dr. Harald Lachnit. Summary: The way the eye travels during a saccade typically does not follow a straight line but rather shows some curvature instead. Converging empirical evidence has demonstrated that curvature results from conflicting saccade goals when multiple stimuli in the visual periphery compete for selection as the saccade target (Van der Stigchel, Meeter, & Theeuwes, 2006). Curvature away from a competing stimulus has been proposed to result from the inhibitory deselection of the motor program representing the saccade towards that stimulus (Sheliga, Riggio, & Rizzolatti, 1994; Tipper, Howard, & Houghton, 2000). -
A Survey of Autonomous Driving: Common Practices and Emerging Technologies
Accepted March 22, 2020. Digital Object Identifier 10.1109/ACCESS.2020.2983149. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. EKIM YURTSEVER1 (Member, IEEE), JACOB LAMBERT1, ALEXANDER CARBALLO1 (Member, IEEE), AND KAZUYA TAKEDA1,2 (Senior Member, IEEE). 1Nagoya University, Furo-cho, Nagoya, 464-8603, Japan. 2Tier4 Inc., Nagoya, Japan. Corresponding author: Ekim Yurtsever (e-mail: [email protected]). ABSTRACT: Automated driving systems (ADSs) promise a safe, comfortable and efficient driving experience. However, fatalities involving vehicles equipped with ADSs are on the rise. The full potential of ADSs cannot be realized unless the robustness of the state-of-the-art is improved further. This paper discusses unsolved problems and surveys the technical aspect of automated driving. Studies regarding present challenges, high-level system architectures, emerging methodologies and core functions including localization, mapping, perception, planning, and human machine interfaces, were thoroughly reviewed. Furthermore, many state-of-the-art algorithms were implemented and compared on our own platform in a real-world driving setting. The paper concludes with an overview of available datasets and tools for ADS development. INDEX TERMS: Autonomous Vehicles, Control, Robotics, Automation, Intelligent Vehicles, Intelligent Transportation Systems. I. INTRODUCTION: ACCORDING to a recent technical report by the National Highway Traffic Safety Administration (NHTSA), 94% of road accidents are caused by human errors [1]. Against this backdrop, Automated Driving Systems (ADSs) are being developed with the promise of [...]. The Eureka Project PROMETHEUS [11] was carried out in Europe between 1987-1995, and it was one of the earliest major automated driving studies. The project led to the development of VITA II by Daimler-Benz, which succeeded in automatically driving on highways [12]. -
Hierarchical Off-Road Path Planning and Its Validation Using a Scaled Autonomous Car. Angshuman Goswami, Clemson University
Clemson University, TigerPrints, All Theses, 12-2017. Hierarchical Off-Road Path Planning and Its Validation Using a Scaled Autonomous Car. Angshuman Goswami, Clemson University. Follow this and additional works at: https://tigerprints.clemson.edu/all_theses Recommended Citation: Goswami, Angshuman, "Hierarchical Off-Road Path Planning and Its Validation Using a Scaled Autonomous Car" (2017). All Theses. 2793. https://tigerprints.clemson.edu/all_theses/2793 This Thesis is brought to you for free and open access by the Theses at TigerPrints. It has been accepted for inclusion in All Theses by an authorized administrator of TigerPrints. For more information, please contact [email protected]. HIERARCHICAL OFF-ROAD PATH PLANNING AND ITS VALIDATION USING A SCALED AUTONOMOUS CAR. A Thesis Presented to the Graduate School of Clemson University in Partial Fulfillment of the Requirements for the Degree Master of Science, Mechanical Engineering, by Angshuman Goswami, December 2017. Accepted by: Dr. Ardalan Vahidi, Committee Chair; Dr. John R. Wagner; Dr. Phanindra Tallapragada. Abstract: In the last few years, while a lot of research effort has been spent on autonomous vehicle navigation, primarily focused on on-road vehicles, off-road path planning still presents new challenges. Path planning for an autonomous ground vehicle over a large horizon in an unstructured environment, when high-resolution a-priori information is available, is still very much an open problem due to the computations involved. Localization and control of an autonomous vehicle, and how the control algorithms interact with the path planner, is a complex task. -
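As a rough illustration of the kind of global search a hierarchical planner might run on a coarse cost map before local refinement, here is a minimal Dijkstra search over a 2D grid. The thesis's actual planner and cost model are not specified here; `grid_dijkstra`, the 4-connected neighborhood, and the cost convention are assumptions for illustration only:

```python
import heapq

def grid_dijkstra(cost, start, goal):
    # Shortest path on a 2D cost grid (4-connected) via Dijkstra's algorithm.
    # cost[y][x] is the traversal cost of entering cell (x, y).
    h, w = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            break
        if d > dist.get((x, y), float("inf")):
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h:
                nd = d + cost[ny][nx]
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = (x, y)
                    heapq.heappush(pq, (nd, (nx, ny)))
    # Walk predecessors back from the goal to reconstruct the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

On a coarse grid this yields the global route; a hierarchical scheme would then refine each segment against a finer local map.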
Gaze Shifts As Dynamical Random Sampling
GAZE SHIFTS AS DYNAMICAL RANDOM SAMPLING. Giuseppe Boccignone, Dipartimento di Scienze dell'Informazione, Università di Milano, Via Comelico 39/41, 20135 Milano, Italy, [email protected]; Mario Ferraro, Dipartimento di Fisica Sperimentale, Università di Torino, via Pietro Giuria 1, 10125 Torino, Italy, [email protected]. ABSTRACT: We discuss how gaze behavior of an observer can be simulated as a Monte Carlo sampling of a distribution obtained from the saliency map of the observed image. To such end we propose the Lévy Hybrid Monte Carlo algorithm, a dynamic Monte Carlo method in which the walk on the distribution landscape is modelled through Lévy flights. Some preliminary results are presented comparing with data gathered by eye-tracking human observers involved in an emotion recognition task from facial expression displays. Index Terms: Gaze analysis, Active vision, Hybrid Monte Carlo, Lévy flights. [...] so-called “focus of attention” (FOA), to extract detailed information from the visual environment. An average of three eye fixations per second generally occurs, intercalated by saccades, during which vision is suppressed. The succession of gaze shifts is referred to as a scanpath [1]. Beyond the field of computational vision [2], [3], [4], [5] and robotics [6], [7], much research has been performed in the image/video coding [8], [9], [10], [11], [12] and image/video retrieval domains [13], [14]. However, one issue which is not addressed by most models [15] is the “noisy”, idiosyncratic variation of the random exploration exhibited by different observers when viewing the same scene, or even by the same subject along different trials [16] (see Fig. -
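The core idea, drawing gaze shifts as random samples from a saliency-derived distribution with heavy-tailed jumps, can be sketched with a simplified Metropolis-style sampler whose Cauchy-distributed step lengths stand in for Lévy flights. This is not the paper's Lévy Hybrid Monte Carlo algorithm itself; the function names, step-length distribution, and all parameters below are illustrative assumptions:

```python
import math
import random

def levy_step(scale=5.0):
    # Heavy-tailed (Cauchy) step length as a crude stand-in for a Lévy
    # flight; the direction is uniform on the circle.
    r = abs(scale * math.tan(math.pi * (random.random() - 0.5)))
    theta = 2 * math.pi * random.random()
    return r * math.cos(theta), r * math.sin(theta)

def sample_gaze_shifts(saliency, n_fix, start, rng_seed=0):
    # Metropolis-style sampling of a fixation sequence from a 2D saliency
    # map: propose a heavy-tailed jump, accept with prob. min(1, s_new/s_old);
    # on rejection the gaze stays put (a refixation).
    random.seed(rng_seed)
    h, w = len(saliency), len(saliency[0])
    x, y = start
    path = [(x, y)]
    while len(path) < n_fix:
        dx, dy = levy_step()
        nx, ny = int(x + dx), int(y + dy)
        if not (0 <= nx < w and 0 <= ny < h):
            continue  # discard proposals that leave the image
        s_old = max(saliency[y][x], 1e-9)
        if random.random() < min(1.0, saliency[ny][nx] / s_old):
            x, y = nx, ny
        path.append((x, y))
    return path
```

Run on a map with one bright region, the sampled scanpath lingers on the region while occasional long jumps explore the periphery, which is the qualitative behavior the abstract describes.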
New Methodology to Explore the Role of Visualisation in Aircraft Design Tasks
220 Int. J. Agile Systems and Management, Vol. 7, Nos. 3/4, 2014. New methodology to explore the role of visualisation in aircraft design tasks. Evelina Dineva*, Arne Bachmann, Erwin Moerland, Björn Nagel and Volker Gollnick, Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR), German Aerospace Center, Blohmstr. 18, 21079 Hamburg, Germany. E-mail: [email protected] E-mail: [email protected] E-mail: [email protected] E-mail: [email protected] E-mail: [email protected] *Corresponding author. Abstract: A toolbox for the assessment of engineering performance in a realistic aircraft design task is presented. Participants solve a multidisciplinary optimisation (MDO) task by means of a graphical user interface (GUI). The quantity and quality of visualisation may be systematically varied across experimental conditions. The GUI also allows tracking behavioural responses like cursor trajectories and clicks. Behavioural data together with evaluation of the generated aircraft design can help uncover details about the underlying decision making process. The design and the evaluation of the experimental toolbox are presented. Pilot studies with two novice and two advanced participants confirm and help improve the GUI functionality. The selection of the aircraft design task is based on a numerical analysis that helped to identify a ‘sweet spot’ where the task is neither too easy nor too difficult. Keywords: empirical performance evaluation; visualisation; aircraft design; multidisciplinary optimisation; MDO; multidisciplinary collaboration; collaborative engineering. Reference to this paper should be made as follows: Dineva, E., Bachmann, A., Moerland, E., Nagel, B. and Gollnick, V. (2014) ‘New methodology to explore the role of visualisation in aircraft design tasks’, Int. -
A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous
A Self-Supervised Terrain Roughness Estimator for Off-Road Autonomous Driving. David Stavens and Sebastian Thrun, Stanford Artificial Intelligence Laboratory, Computer Science Department, Stanford, CA 94305-9010, fstavens,[email protected]. Abstract: Accurate perception is a principal challenge of autonomous off-road driving. Perceptive technologies generally focus on obstacle avoidance. However, at high speed, terrain roughness is also important to control shock the vehicle experiences. The accuracy required to detect rough terrain is significantly greater than that necessary for obstacle avoidance. We present a self-supervised machine learning approach for estimating terrain roughness from laser range data. Our approach compares sets of nearby surface points acquired with a laser. This comparison is challenging due to uncertainty. For example, at range, laser readings may be so sparse that significant information about the surface is missing. 1 INTRODUCTION: In robotic autonomous off-road driving, the primary perceptual problem is terrain assessment in front of the robot. For example, in the 2005 DARPA Grand Challenge (DARPA, 2004), a robot competition organized by the U.S. Government, robots had to identify drivable surface while avoiding a myriad of obstacles: cliffs, berms, rocks, fence posts. To perform terrain assessment, it is common practice to endow vehicles with forward-pointed range sensors. Terrain is then analyzed for potential obstacles. The result is used to adjust the direction of vehicle motion (Kelly & Stentz, 1998a; Kelly & Stentz, 1998b; Langer et al., 1994; Urmson et al., 2004). When driving at high speed, as in the DARPA Grand Challenge, terrain roughness must also dictate vehicle behavior because rough terrain induces shock propor- -
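The "comparison of sets of nearby surface points" can be illustrated with a toy roughness statistic: bin the laser returns into ground cells and take the vertical spread per cell. This is a deliberate simplification that only loosely follows the paper (the real estimator also models range-dependent laser uncertainty and is trained self-supervised from vehicle shock measurements); the function name and cell size are assumptions:

```python
def roughness(points, cell=0.5):
    # Crude terrain-roughness proxy: per-cell vertical spread of laser hits.
    # `points` is a list of (x, y, z) returns in metres; points are binned
    # into square ground cells of side `cell`, and a cell's roughness is the
    # max-min spread of z inside it.
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        lo, hi = cells.get(key, (z, z))
        cells[key] = (min(lo, z), max(hi, z))
    return {key: hi - lo for key, (lo, hi) in cells.items()}
```

A controller could then slow the vehicle whenever the spread in the cells ahead exceeds a shock-related threshold.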
Human Visual Attention Prediction Boosts Learning & Performance of Autonomous Driving
Human Visual Attention Prediction Boosts Learning & Performance of Autonomous Driving Agents. Alexander Makrigiorgos, Ali Shafti, Alex Harston, Julien Gerard, and A. Aldo Faisal. Abstract: Autonomous driving is a multi-task problem requiring a deep understanding of the visual environment. End-to-end autonomous systems have attracted increasing interest as a method of learning to drive without exhaustively programming behaviours for different driving scenarios. When humans drive, they rely on a finely tuned sensory system which enables them to quickly acquire the information they need while filtering unnecessary details. This ability to identify task-specific high-interest regions within an image could be beneficial to autonomous driving agents and machine learning systems in general. To create a system capable of imitating human gaze patterns and visual attention, we collect eye movement data from human drivers in a virtual reality environment. We use this data to train deep neural networks predicting where humans are most likely to look when driving. We then use the outputs of this trained network to selectively mask driving images using a variety of masking techniques. Finally, autonomous driving agents are trained using these masked images as input. Upon comparison, we found that a dual-branch architecture which processes both raw and attention-masked images substantially outperforms all other models, reducing error in control signal predictions by 25.5% compared to a standard end-to-end model trained only on raw images. Fig. 1: A subject driving in the VR simulation. [...] intuitive human-robot interfaces [18]–[21]. The ability to understand and recreate this attention-focusing mechanism has the potential to provide similar benefits to machine learning systems which have to parse complex visual scenes in order to complete their tasks, reducing computational burden and improving performance by quickly identifying key -
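One simple way to "selectively mask" a frame with a predicted attention map is to zero out pixels below a saliency quantile. The paper compares several masking techniques; the specific thresholding scheme below, the `attention_mask` name, and the `keep` parameter are assumptions for illustration:

```python
def attention_mask(image, saliency, keep=0.25):
    # Zero out every pixel whose predicted saliency falls below the
    # (1 - keep) quantile, retaining roughly the top `keep` fraction.
    # `image` and `saliency` are H x W lists (grayscale for simplicity).
    flat = sorted(v for row in saliency for v in row)
    idx = min(len(flat) - 1, int(len(flat) * (1.0 - keep)))
    thresh = flat[idx]
    return [[px if s >= thresh else 0
             for px, s in zip(img_row, sal_row)]
            for img_row, sal_row in zip(image, saliency)]
```

In the dual-branch setup the abstract describes, an agent would receive both the raw frame and a masked frame of this kind as separate input streams.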
Stanley: the Robot That Won the DARPA Grand Challenge
Stanley: The Robot that Won the DARPA Grand Challenge. Sebastian Thrun, Mike Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan Halpenny, Gabriel Hoffmann, Kenny Lau, Celia Oakley, Mark Palatucci, Vaughan Pratt, and Pascal Stang, Stanford Artificial Intelligence Laboratory, Stanford University, Stanford, California 94305. Sven Strohband, Cedric Dupont, Lars-Erik Jendrossek, Christian Koelen, Charles Markey, Carlo Rummel, Joe van Niekerk, Eric Jensen, and Philippe Alessandrini, Volkswagen of America, Inc., Electronics Research Laboratory, 4009 Miranda Avenue, Suite 100, Palo Alto, California 94304. Gary Bradski, Bob Davies, Scott Ettinger, Adrian Kaehler, and Ara Nefian, Intel Research, 2200 Mission College Boulevard, Santa Clara, California 95052. Pamela Mahoney, Mohr Davidow Ventures, 3000 Sand Hill Road, Bldg. 3, Suite 290, Menlo Park, California 94025. Received 13 April 2006; accepted 27 June 2006. Journal of Field Robotics 23(9), 661–692 (2006) © 2006 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/rob.20147. This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without manual intervention. The robot's software system relied predominately on state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning. This paper describes the major components of this architecture, and discusses the results of the Grand Challenge race. © 2006 Wiley Periodicals, Inc. 1. INTRODUCTION: The Grand Challenge was launched by the Defense Advanced Research Projects Agency (DARPA) in [...]. [...]sult of an intense development effort led by Stanford University, and involving experts from Volkswagen of America, Mohr Davidow Ventures, Intel Research, and a number of other entities.