(12) United States Patent; (10) Patent No.: US 9,332,251 B2; Haisty et al.
Total pages: 16; file type: PDF; size: 1020 KB

Recommended publications
TAO: A Topology-Adaptive Overlay Framework (under the direction of Dr. Khaled Harfoush)
ABSTRACT. KANDEKAR, KUNAL. TAO: A Topology-Adaptive Overlay Framework. (Under the direction of Dr. Khaled Harfoush.) Large-scale distributed systems rely on constructing overlay networks in which nodes communicate with each other through intermediate overlay neighbors. Organizing nodes in the overlay while preserving its congruence with the underlying IP topology (the underlay) is important to reduce the communication cost between nodes. In this thesis, we study the state-of-the-art approaches to match the overlay and underlay topologies and pinpoint their limitations in Internet-like setups. We also introduce a new Topology-Adaptive Overlay organization framework, TAO, which is scalable, accurate and lightweight. As opposed to earlier approaches, TAO compiles information resulting from traceroute packets to a small number of landmarks, and clusters nodes based on (1) the number of shared hops on their path towards the landmarks, and (2) their proximity to the landmarks. TAO is also highly flexible and can complement all existing structured and unstructured distributed systems. Our experimental results, based on actual Internet data, reveal that with only five landmarks, TAO identifies the closest node to any node with 85%-90% accuracy and returns nodes that are, on average, within 1 millisecond of the closest node if the latter is missed. As a result, TAO overlays enjoy very low stretch (between 1.15 and 1.25). Our results also indicate that shortest-path routing on TAO overlays results in shorter end-to-end delays than direct underlay delays in 8-10% of the overlay paths.

TAO: A Topology-Adaptive Overlay Framework, by Kunal Kandekar. A thesis submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the Degree of Master of Science in Computer Science. Raleigh, 2006.
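The abstract above describes TAO's clustering signal: the hops that two nodes share on their traceroute paths toward a handful of landmarks, plus how close each node sits to those landmarks. The Python sketch below illustrates that idea only; it is not the thesis's actual algorithm, and the data layout, function names, and tie-breaking rule are assumptions made for the example.

    # Illustrative sketch (assumed data layout, not the thesis implementation):
    # score how "close" two overlay nodes look using (1) shared traceroute hops
    # toward a small set of landmarks and (2) similarity of their landmark RTTs.
    from typing import Dict, List

    Paths = Dict[str, List[str]]   # landmark id -> router IPs on the path to it
    Rtts = Dict[str, float]        # landmark id -> measured RTT in milliseconds

    def shared_hops(a: Paths, b: Paths) -> int:
        """Count router hops that both nodes traverse toward each landmark."""
        count = 0
        for landmark in a.keys() & b.keys():
            count += len(set(a[landmark]) & set(b[landmark]))
        return count

    def proximity_gap(a: Rtts, b: Rtts) -> float:
        """Sum of per-landmark RTT differences; smaller means more similar placement."""
        return sum(abs(a[l] - b[l]) for l in a.keys() & b.keys())

    def pick_neighbor(node, candidates):
        """Choose the candidate sharing the most hops, breaking ties by RTT gap."""
        return max(candidates,
                   key=lambda c: (shared_hops(node["paths"], c["paths"]),
                                  -proximity_gap(node["rtts"], c["rtts"])))

    # Tiny usage example with two landmarks L1 and L2 (made-up measurements).
    n = {"paths": {"L1": ["r1", "r2"], "L2": ["r3"]}, "rtts": {"L1": 12.0, "L2": 30.0}}
    c1 = {"paths": {"L1": ["r1", "r2"], "L2": ["r9"]}, "rtts": {"L1": 13.0, "L2": 31.0}}
    c2 = {"paths": {"L1": ["r7"], "L2": ["r3"]}, "rtts": {"L1": 40.0, "L2": 80.0}}
    print(pick_neighbor(n, [c1, c2]))   # c1: more shared hops and similar RTTs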
Nuclear Data
CNIC-00810, CNDC-0013, INDC(CPR)-031/L. Communication of Nuclear Data Progress No. 10 (1993). Chinese Nuclear Data Center, China Nuclear Information Centre. Atomic Energy Press, Beijing, December 1993.

EDITORIAL BOARD. Editors-in-Chief: Liu Tingjin, Zhuang Youxiang. Members: Cai Chonghai, Cai Dunjiu, Chen Zhenpeng, Huang Houkun, Liu Tingjin, Ma Gonggui, Shen Qingbiao, Tang Guoyou, Tang Hongqing, Wang Yansen, Wang Yaoqing, Zhang Jingshang, Zhang Xianqing, Zhuang Youxiang. Editorial Department: Li Manli, Sun Naihong, Li Shuzhen.

EDITORIAL NOTE. This is the tenth issue of Communication of Nuclear Data Progress (CNDP), which carries the achievements in the nuclear data field in China since the last year. It includes the measurements of the 54Fe(d,a), 56Fe(d,2n), 58Ni(d,a), (d,an), (d,x)57Ni, 182~184W(d,2n), 186W(d,p), (d,2n) and 58Ni(n,a) reactions; theoretical calculations on n+16O and 197Au, on the 10B(n,n) and (n,n') reactions, and on the channel theory of fission with diffusive dynamics; the evaluations of intermediate-energy nuclear data for the 56Fe, 63Cu, 65Cu(p,n) monitor reactions and of the 180Hf, 181Ta(n,2n) reactions, and the revision of the recommended data of 235U and 238U for CENDL-2; the fission barrier parameters sublibrary; a PC software for EXFOR compilation; and some reports on atomic and molecular data and covariance research. We hope that our readers and colleagues will not spare their comments, in order to improve the publication. Please write to Drs. …
The Atomic Simulation Environment - a Python Library for Working with Atoms
The Atomic Simulation Environment - A Python library for working with atoms.
Authors: Larsen, Ask Hjorth; Mortensen, Jens Jørgen; Blomqvist, Jakob; Castelli, Ivano Eligio; Christensen, Rune; Dulak, Marcin; Friis, Jesper; Groves, Michael; Hammer, Bjørk; Hargus, Cory; and others (total number of authors: 34).
Published in: Journal of Physics: Condensed Matter. DOI: 10.1088/1361-648X/aa680e. Publication date: 2017. Document version: peer-reviewed version.
Citation (APA): Larsen, A. H., Mortensen, J. J., Blomqvist, J., Castelli, I. E., Christensen, R., Dulak, M., Friis, J., Groves, M., Hammer, B., Hargus, C., Hermes, E., C. Jennings, P., Jensen, P. B., Kermode, J., Kitchin, J., Kolsbjerg, E., Kubal, J., Kaasbjerg, K., Lysgaard, S., ... Jacobsen, K. W. (2017). The Atomic Simulation Environment - A Python library for working with atoms. Journal of Physics: Condensed Matter, 29, [273002]. https://doi.org/10.1088/1361-648X/aa680e
NVIDIA Launches Tegra X1 Mobile Super Chip
NVIDIA Launches Tegra X1 Mobile Super Chip
Maxwell GPU Architecture Delivers First Teraflops Mobile Processor, Powering Deep Learning and Computer Vision Applications

NVIDIA today unveiled Tegra® X1, its next-generation mobile super chip with over one teraflops of processing power – delivering capabilities that open the door to unprecedented graphics and sophisticated deep learning and computer vision applications. Tegra X1 is built on the same NVIDIA Maxwell™ GPU architecture rolled out only months ago for the world's top-performing gaming graphics card, the GeForce® GTX 980. The 256-core Tegra X1 provides twice the performance of its predecessor, the Tegra K1, which is based on the previous-generation Kepler™ architecture and debuted at last year's Consumer Electronics Show.

Tegra processors are built for embedded products, mobile devices, autonomous machines and automotive applications. Tegra X1 will begin appearing in the first half of the year. It will be featured in the newly announced NVIDIA DRIVE™ car computers. DRIVE PX is an auto-pilot computing platform that can process video from up to 12 onboard cameras to run capabilities providing Surround-Vision, for a seamless 360-degree view around the car, and Auto-Valet, for true self-parking. DRIVE CX is a complete cockpit platform designed to power the advanced graphics required across the increasing number of screens used for digital clusters, infotainment, head-up displays, virtual mirrors and rear-seat entertainment.

"We see a future of autonomous cars, robots and drones that see and learn, with seeming intelligence that is hard to imagine," said Jen-Hsun Huang, CEO and co-founder, NVIDIA.
Co-Optimizing Performance and Memory Footprint Via Integrated CPU/GPU Memory Management, an Implementation on Autonomous Driving Platform
Co-Optimizing Performance and Memory Footprint Via Integrated CPU/GPU Memory Management, an Implementation on Autonomous Driving Platform
Soroush Bateni*, Zhendong Wang*, Yuankun Zhu, Yang Hu, and Cong Liu
The University of Texas at Dallas

Abstract—Cutting-edge embedded system applications, such as self-driving cars and unmanned drone software, are reliant on integrated CPU/GPU platforms for their DNNs-driven workload, such as perception and other highly parallel components. In this work, we set out to explore the hidden performance implication of GPU memory management methods of integrated CPU/GPU architecture. Through a series of experiments on micro-benchmarks and real-world workloads, we find that the performance under different memory management methods may vary according to application characteristics. Based on this observation, we develop a performance model that can predict system overhead for each memory management method based on application characteristics. Guided by the performance model, we further propose a runtime scheduler. By conducting per-task …

… launches the OpenVINO toolkit for the edge-based deep learning inference on its integrated HD GPUs [4]. Despite the advantages in SWaP features presented by the integrated CPU/GPU architecture, our community still lacks an in-depth understanding of the architectural and system behaviors of integrated GPU when emerging autonomous and edge intelligence workloads are executed, particularly in multi-tasking fashion. Specifically, in this paper we set out to explore the performance implications exposed by various GPU memory management (MM) methods of the integrated CPU/GPU architecture. The reason we focus on the performance impacts of GPU MM methods are two-fold.
DeepPicar: A Low-Cost Deep Neural Network-Based Autonomous Car
DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car
Michael G. Bechtel†, Elise McEllhiney†, Minje Kim‡, Heechul Yun†
† University of Kansas, USA, {mbechtel, elisemmc, heechul.yun}@ku.edu; ‡ Indiana University, USA, minje@indiana.edu

Abstract—We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture—9 layers, 27 million connections and 250K parameters—and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload.

… task may be directly linked to the safety of the vehicle. This requires a high computing capacity as well as the means to guaranteeing the timings. On the other hand, the computing hardware platform must also satisfy cost, size, weight, and power constraints, which require a highly efficient computing platform. These two conflicting requirements complicate the platform selection process as observed in [25]. To understand what kind of computing hardware is needed for AI workloads, we need a testbed and realistic workloads. While using a real car-based testbed would be most ideal, it is not only highly expensive, but also poses serious safety …
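The DAVE-2 network that DeepPicar replicates is a stack of convolutional layers feeding fully connected layers that regress one steering angle per camera frame. The Python sketch below is an illustration only, not the authors' implementation; the 66x200 YUV input size and the layer widths follow the published DAVE-2 description and roughly reproduce the 250K-parameter count quoted in the abstract, but they are assumptions as used here.

    # Illustrative PyTorch sketch of a DAVE-2-style steering network (an assumption
    # for this listing, not DeepPicar's actual code): convolutional layers followed
    # by fully connected layers map one normalized camera frame to a steering value.
    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(            # input: 3 x 66 x 200 (YUV frame)
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, 3), nn.ReLU(),
                nn.Conv2d(64, 64, 3), nn.ReLU(),      # -> 64 x 1 x 18 feature map
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
                nn.Linear(100, 50), nn.ReLU(),
                nn.Linear(50, 10), nn.ReLU(),
                nn.Linear(10, 1),                     # steering angle
            )

        def forward(self, x):
            return self.regressor(self.features(x))

    model = SteeringNet()
    frame = torch.rand(1, 3, 66, 200)                 # stand-in for a preprocessed frame
    steering = model(frame)                           # one steering value per frame
    print(steering.shape)                             # torch.Size([1, 1])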
Developing & Deploying Autonomous Driving Applications with the NVIDIA DRIVE PX Platform
DEVELOPING & DEPLOYING AUTONOMOUS DRIVING APPLICATIONS WITH THE NVIDIA DRIVE PX PLATFORM
Shri Sundaram, Product Manager, DRIVE PX Platform

DRIVE PX: AV Development Platform. AV developers: DRIVE PX as your tool. AV HW/SW ecosystem: DRIVE PX as your platform to reach developers.

NVIDIA DRIVE PX: Open AV Computing Platform for the Transportation Industry. A powerful and scalable AV computer; deep neural network, sensor fusion and computing; extensive I/O to interface with a wide range of sensors and vehicle networks; an open SW stack; Level 3 to Level 5; ASIL-D functional safety.

DRIVE PX DRIVING AV AI. [Chart: DRIVE platform engagements, FY18 Q1 to FY18 Q3.] Launched at CES 2015; a spike in AV AI engagements after we powered on the discrete GPU; more than doubled in the last 6 months; plus >145 AV startups on NVIDIA DRIVE. Source: NVIDIA statistics.

AV DEVELOPMENT: Path from Idea to Production (IDEA, DEVELOPMENT, PROTOTYPE, PRODUCTION). Objectives run from develop (perception, mapping/localization, path planning, feature development, combination/more) through test (validation) to deploy (safety hardening, performance tuning, SW upgrades); tools run from the PC and the DRIVE Platform with automotive sensors to production SW with an OTA framework. Scalable compute with discrete GPUs; an ecosystem of sensors plus other HW/SW peripherals; TensorRT, CUDA, open source frameworks.

AUTONOMOUS VEHICLE DEVELOPMENT FLOW: application development using the DRIVE PX Platform. (1) Data acquired from sensors; (2) curated/annotated training data; (3) deep neural network training; (4) test/drive the trained deep neural network in autonomous vehicle applications, then loop back to data acquisition.
Design of an Electronic Prototype for the Automatic Control of a Vehicle's High Beams Through Intelligent Detection of Other Automobiles
Tecnológico de Costa Rica, Escuela de Ingeniería Electrónica. Graduation project: "Design of an electronic prototype for the automatic control of a vehicle's high beams through intelligent detection of other automobiles." Graduation project report submitted to qualify for the degree of Ingeniero en Electrónica with the academic rank of Licenciatura. Ronald Miranda Arce, San Carlos, 2018.

Declaration of authenticity: I, Ronald Miranda Arce, as a student of the Electronics Engineering program, declare that the contents of this graduation project report are entirely original, authentic, and the exclusive legal and academic responsibility of the author. Ronald Fernando Miranda Arce, Santa Clara, San Carlos, Costa Rica. ID: 207320032.

Abstract: The design and testing of an electronic prototype for the automatic control of a vehicle's high beams through intelligent detection of other automobiles is presented, with the aim of improving the driving experience and safety when driving at night by reducing the risk of glare-related accidents on the road. The use of machine learning to train a neural network for in-depth pattern recognition is described, in this case to identify the pattern that an approaching vehicle describes at night. A neural network was successfully trained that correctly identified up to 89% of the cases occurring in the tests performed, generating the signals to control the light changes automatically.

Summary: The design and testing of an electronic prototype for the automatic control of the vehicle's high light through the intelligent detection of other automobiles is presented.
Contents:
- Introduction
- Acknowledgement
- Problem and Project Statement
- Operational Environment
- Intended Uses and Users
- Assumptions and Limitations
- End Product
- Specifications and Analysis: Proposed Design; Design Analysis
- Implementation Details
- Testing Process and Testing Results
- Placing Your Work in the Context of: Related Products; Related Literature
- Appendix I – Operation Manual: Raspberry Pi 3 Model B; NVIDIA Drive PX2; Software Licensing
- Appendix II – Alternative/Other Initial Versions of the Design: 1) the DSRC; 2) the Mobile Wireless Data; 3) the Arduino & XBEE
- Appendix III – Other Considerations
- Appendix IV – Code: Introduction; External Libraries; Native Libraries; Gather Component Information; Finding and Initializing Devices; Initialize the GPS; Parsing and Transmission

Introduction: As part of our senior design we had to create the means of communication between two vehicles. The information that had to be communicated was the latitude and longitude coordinates and the time. For this we came up with several solutions throughout our design and testing process: we looked into DSRC, cellular communication, Raspberry Pis over a LAN network, and XBee transceivers. All of these are viable options and several can complement the others well. More details are given in the following sections.

Acknowledgement: First off, we would like to formally thank Dr. Hegde for guidance on our project and the insightful advice on how to approach and overcome different obstacles in our progress. We would also like to thank Vishal, our project manager, who made sure we stuck to our milestones and met with us every week to make sure we were working towards our goals.
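The Appendix IV outline above ends with GPS initialization, parsing, and transmission. As a hedged illustration of the parsing step only (the team's actual code is not reproduced here, and the function name and payload format below are assumptions), a Python sketch that pulls time and position out of a standard NMEA GPGGA sentence and packs them into a small comma-separated message could look like this:

    # Illustrative sketch (not the team's actual code): extract time, latitude and
    # longitude from a NMEA GPGGA sentence, the kind of record a serial GPS module
    # on a Raspberry Pi typically emits, and pack them for transmission.
    def parse_gpgga(sentence: str):
        """Return (utc_time, latitude, longitude) in decimal degrees, or None."""
        fields = sentence.strip().split(',')
        if not fields[0].endswith('GGA') or fields[2] == '' or fields[4] == '':
            return None                      # not a GGA sentence, or no fix yet

        def to_degrees(value: str, hemisphere: str) -> float:
            # NMEA encodes ddmm.mmmm (latitude) / dddmm.mmmm (longitude)
            head, frac = value.split('.')
            degrees = float(head[:-2])
            minutes = float(head[-2:] + '.' + frac)
            decimal = degrees + minutes / 60.0
            return -decimal if hemisphere in ('S', 'W') else decimal

        utc_time = fields[1]                          # hhmmss.ss
        latitude = to_degrees(fields[2], fields[3])
        longitude = to_degrees(fields[4], fields[5])
        return utc_time, latitude, longitude

    if __name__ == '__main__':
        sample = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
        parsed = parse_gpgga(sample)
        if parsed:
            utc, lat, lon = parsed
            # A minimal payload another vehicle could receive over XBee/LAN/cellular.
            message = f"{utc},{lat:.6f},{lon:.6f}"
            print(message)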
NorMAS-ML: Supporting the Modeling of Normative Multi-agent Systems
ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, Regular Issue, Vol. 8 N. 4 (2019), 49-81. eISSN: 2255-2863. DOI: http://dx.doi.org/10.14201/ADCAIJ2019844981

NorMAS-ML: Supporting the Modeling of Normative Multi-agent Systems
Emmanuel Sávio Silva Freire (a), Mariela Inés Cortés (b), Robert Marinho da Rocha Júnior (b), Enyo José Tavares Gonçalves (c), and Gustavo Augusto Campos de Lima (b)
(a) Teaching Department, Federal Institute of Ceará, Morada Nova, Brazil; (b) Computer Science Department, State University of Ceará, Fortaleza, Brazil; (c) Computer Science Department, Federal University of Ceará, Quixadá, Brazil

KEYWORDS: Normative Multi-agent System; Norms; Modeling Language; NorMAS-ML

ABSTRACT. Context: A normative multi-agent system (NMAS) is composed of agents whose behavior is regulated by norms. Modeling those elements (agents and norms) together at design time can be a good way to reach a complete understanding of their structure and behavior. The multi-agent system modeling language (MAS-ML) supports the representation of NMAS entities, but its support for concepts related to norms is somewhat limited. MAS-ML is founded on the Taming Agents and Objects (TAO) framework and has a support tool called the MAS-ML tool. Goal: The present work aims to present a UML-based modeling language called normative multi-agent system (NorMAS-ML) able to model the main MAS entities along with the static normative elements.
DeepRCar: An Autonomous Car Model (Student: Bc. David Ungurean)
ASSIGNMENT OF MASTER'S THESIS
Title: DeepRCar: An Autonomous Car Model
Student: Bc. David Ungurean
Supervisor: Ing. Zdeněk Buk, Ph.D.
Study Programme: Informatics
Study Branch: Knowledge Engineering
Department: Department of Applied Mathematics
Validity: Until the end of winter semester 2019/20

Instructions: Search for RC car models suitable for autonomous driving implementation. Focus on models which allow the control software to be run onboard. The autonomous control will be based primarily on visual inputs from an onboard camera. Other sensors such as ultrasonic distance sensors can also be used. Implement the control algorithms using deep neural networks which use the visual information as input and produce control signals for steering and speed control. Design and implement several experiments to test various neural network architectures and image preprocessing methods.

References:
- J. Koutnik, J. Schmidhuber, F. Gomez, Online evolution of deep convolutional network for vision-based RL, SAB 2014: http://people.idsia.ch/ koutnik/papers/koutnik2014sab.pdf
- Zheng Wang, Self Driving RC Car: https://zhengludwig.wordpress.com/projects/self-driving-rc-car/

Ing. Karel Klouda, Ph.D., Head of Department; doc. RNDr. Ing. Marcel Jiřina, Ph.D., Dean. Prague, February 16, 2018.

Master's thesis: DeepRCar: An Autonomous Car Model, Bc. David Ungurean, Department of Applied Mathematics, Supervisor: Ing. Zdeněk Buk, Ph.D., May 9, 2018.

Acknowledgements: First of all, I would like to thank my supervisor Dr. Buk for giving me the chance to explore such an interesting field of machine learning and for always pointing me in the right direction whenever I was starting to get lost. I am also very grateful to Dr. …
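The assignment above calls for control algorithms that take camera frames as input and produce steering and speed signals, tested across different image preprocessing methods. The Python sketch below is an assumption-laden illustration of that per-frame loop, not the thesis implementation; the crop window, downscale factor, and the model's predict interface are all made up for the example.

    # Illustrative sketch (not the thesis code): one control step of a camera-driven
    # RC car -- preprocess a frame, run a trained model, emit steering and throttle.
    import numpy as np

    def preprocess(frame: np.ndarray) -> np.ndarray:
        """Crop away sky/hood, downscale, and normalize pixel values to [0, 1]."""
        cropped = frame[60:180, :, :]          # assumed crop window keeping the road
        small = cropped[::2, ::2, :]           # cheap 2x downscale for the example
        return small.astype(np.float32) / 255.0

    def control_step(model, frame: np.ndarray):
        """Return (steering, throttle) predicted from one camera frame."""
        x = preprocess(frame)[None, ...]       # add a batch dimension
        steering, throttle = model.predict(x)  # assumed model interface
        return float(steering), float(throttle)

    class DummyModel:
        """Stand-in for a trained network: neutral steering, slow constant throttle."""
        def predict(self, x):
            return 0.0, 0.3

    if __name__ == "__main__":
        fake_frame = np.zeros((240, 320, 3), dtype=np.uint8)   # stand-in camera image
        print(control_step(DummyModel(), fake_frame))          # -> (0.0, 0.3)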
Proceedings of CGAMES'2006 (8th International Conference on Computer Games: Artificial Intelligence and Mobile Systems)
Proceedings of CGAMES’2006 8th International Conference on Computer Games: Artificial Intelligence and Mobile Systems 24-27 July 2006 Hosted by Galt House Hotel Louisville, Kentucky, USA Organised by University of Wolverhampton in association with University of Louisville and IEEE Computer Society Society for Modelling and Simulation (SCS-Europe) Institution of Electrical Engineers (IEE) British Computer Society (BCS) Digital Games Research Association (DiGRA) International Journal of Intelligent Games and Simulation (IJIGS) Edited by: Quasim Mehdi Guest Editor: Adel Elmaghraby Published by The University of Wolverhampton School of Computing and Information Technology Printed in Wolverhampton, UK ©2006 The University of Wolverhampton Responsibility for the accuracy of all material appearing in the papers is the responsibility of the authors alone. Statements are not necessarily endorsed by the University of Wolverhampton, members of the Programme Committee or associated organisations. Permission is granted to photocopy the abstracts and other portions of this publication for personal use and for the use of students providing that credit is given to the conference and publication. Permission does not extend to other types of reproduction nor to copying for use in any profit-making purposes. Other publications are encouraged to include 300-500 word abstracts or excerpts from any paper, provided credits are given to the author, conference and publication. For permission to publish a complete paper contact Quasim Mehdi, SCIT, University of Wolverhampton, Wulfruna Street, Wolverhampton, WV1 1SB, UK, [email protected]. All author contact information in these Proceedings is covered by the European Privacy Law and may not be used in any form, written or electronic without the explicit written permission of the author and/or the publisher.