(12) United States Patent (10) Patent No.: US 9,332,251 B2, Haisty et al.

Total Pages: 16

File Type: PDF, Size: 1020 KB

(12) United States Patent
Haisty et al.

(10) Patent No.: US 9,332,251 B2
(45) Date of Patent: *May 3, 2016

(54) SYSTEMS AND METHODS FOR OPTICALLY PROJECTING THREE-DIMENSIONAL TEXT, IMAGES AND/OR SYMBOLS ONTO THREE-DIMENSIONAL OBJECTS

(71) Applicant: Delta Sigma Company, Kennesaw, GA (US)

(72) Inventors: Brett Stanton Haisty, Powder Springs, GA (US); Roger Curtis Richardson, Acworth, GA (US)

(73) Assignee: DELTA SIGMA COMPANY, Kennesaw, GA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 75 days. This patent is subject to a terminal disclaimer.

(21) Appl. No.: 14/109,137

(22) Filed: Dec. 17, 2013

(65) Prior Publication Data: US 2014/0184757 A1, Jul. 3, 2014

Related U.S. Application Data

(63) Continuation of application No. 12/590,500, filed on Nov. 9, 2009, now Pat. No. 8,610,761.

(51) Int. Cl.: H04N 15/00 (2006.01); H04N 13/04 (2006.01); H04N 9/31 (2006.01)

(52) U.S. Cl.: CPC H04N 13/0425 (2013.01); H04N 9/3147 (2013.01); H04N 9/3185 (2013.01)

(58) Field of Classification Search: CPC H04N 9/3147; H04N 9/3185; USPC 382/285; 348/745. See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

6,269,284 B1    7/2001   Lau et al.
6,456,340 B1*   9/2002   Margulis        348/745
6,843,564 B2    1/2005   Putilin et al.
7,184,049 B2    2/2007   Ballin et al.
7,268,893 B2    9/2007   Palmateer
7,399,086 B2    7/2008   Huewel
7,433,796 B2    10/2008  Behan et al.
7,477,252 B2    1/2009   Chun
7,843,449 B2    11/2010  Krah
8,265,376 B2    9/2012   Tsuk et al.
2002/0024640 A1   2/2002   Ioka
2003/0100998 A2   5/2003   Brunner et al.  702/19
2005/0043837 A1   2/2005   Rubbert et al.
2005/0064936 A1   3/2005   Pryor
2007/0038939 A1   2/2007   Challen et al.
2007/0190495 A1   8/2007   Kendir et al.
2007/0247640 A1*  10/2007  Magome et al.   356/609
2008/0095468 A1*  4/2008   Klemmer et al.  382/285
2008/0246943 A1   10/2008  Kaufman et al.
2009/0268095 A1   10/2009  Hoshino et al.
2011/0169924 A1   7/2011   Haisty et al.

* cited by examiner

Primary Examiner — Dustin Nguyen
(74) Attorney, Agent, or Firm — Thomas I Horstemeyer, LLP

(57) ABSTRACT

The present invention provides a method whereby the spatial relationship and orientations of one or more three-dimensional objects being illuminated by an optical projector, and the optical projector itself, can be very accurately defined both quickly and easily. The present invention also provides a novel computerized optical projection system whereby three-dimensional data projected onto three-dimensional objects is not deformed, when viewed by the human eye, as a result of the non-planar projection surface. In one embodiment, the invention provides computerized optical assembly or manufacturing guidance systems, and related methods, that provide step-by-step assembly or manufacturing instructions for instructing technicians how to assemble or manufacture three-dimensional objects or systems. The assembly instructions are in the form of calibrated three-dimensional text, images and/or symbols, and are projected by one or a plurality of optical projectors that are in operable communication with one or a plurality of computers onto the three-dimensional objects or systems, or component parts or skins thereof.

20 Claims, 23 Drawing Sheets
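The abstract describes calibrating the pose of an optical projector relative to a three-dimensional object so that projected text, images and symbols are not distorted by the non-planar surface. As a rough illustration only, and not the patented calibration method, the C# sketch below maps a known 3D surface point into projector pixel coordinates through a generic calibrated pinhole-projector model; all intrinsic and extrinsic values are hypothetical placeholders.

```csharp
// Illustrative sketch only -- NOT the patented calibration method.
// It shows the generic idea of mapping a known 3D point on an object's
// surface into projector pixel coordinates with a calibrated pinhole model,
// so a projected symbol lands at the intended spot on the non-planar surface.
// All parameter values below are hypothetical placeholders.
using System;

static class ProjectorModel
{
    // Intrinsics (focal lengths and principal point, in pixels) -- assumed values.
    const double Fx = 1800, Fy = 1800, Cx = 960, Cy = 540;

    // Extrinsics: projector pose in the object's coordinate frame -- assumed values.
    static readonly double[,] R =
    {
        { 1, 0, 0 },
        { 0, 1, 0 },
        { 0, 0, 1 }
    };
    static readonly double[] T = { 0.0, 0.0, 2.5 };

    // Map a 3D point (object coordinates) to projector pixel coordinates.
    public static (double u, double v) Project(double x, double y, double z)
    {
        // Projector-frame coordinates: Xc = R * X + T
        double xc = R[0, 0] * x + R[0, 1] * y + R[0, 2] * z + T[0];
        double yc = R[1, 0] * x + R[1, 1] * y + R[1, 2] * z + T[1];
        double zc = R[2, 0] * x + R[2, 1] * y + R[2, 2] * z + T[2];

        // Perspective divide, then apply the intrinsics.
        double u = Fx * (xc / zc) + Cx;
        double v = Fy * (yc / zc) + Cy;
        return (u, v);
    }

    static void Main()
    {
        // Example: where should the projector draw a symbol anchored at this surface point?
        var (u, v) = Project(0.10, -0.05, 0.0);
        Console.WriteLine($"Projector pixel: ({u:F1}, {v:F1})");
    }
}
```

In a guidance system of the kind the abstract describes, the rotation, translation and intrinsic parameters would come from the calibration step that defines the spatial relationship between projector and object, rather than being hard-coded as they are in this sketch.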
[U.S. Patent, May 3, 2016, US 9,332,251 B2, Sheets 1-23 of 23. FIGS. 1-23 are drawing sheets showing the authoring and projection software, including geometry and rotation views, symbol type and appearance options (line, rectangle, circle, polyhedron), point and plane definition dialogs, and step-by-step operator instructions such as "Locate and click on the blinking item," "Measure Frame," "Install Frame," and "Calibrate the gage."]

US 9,332,251 B2

SYSTEMS AND METHODS FOR OPTICALLY PROJECTING THREE-DIMENSIONAL TEXT, IMAGES AND/OR SYMBOLS ONTO THREE-DIMENSIONAL OBJECTS

CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of copending U.S. utility application entitled "Systems and Methods for Optically Projecting Three-Dimensional Text, Images And/Or Symbols Onto Three-Dimensional Objects," having Ser. No. 12/590,500, filed Nov. 9, 2009, which is entirely incorporated herein by reference.

COMPUTER PROGRAM LISTING APPENDIX

All of the information, material and source code that is present in the accompanying two compact discs, which are identical, and labeled as Copy 1 of 2 and Copy 2 of 2, is hereby incorporated herein by reference in its entirety.

Each compact disc contains the files that are listed below, which are two types of files (.txt and .cs files). A few of the file names appearing below appear on two lines, rather than on one line, as is indicated if there is no file size or creation date on the second line. The files that end in .txt are identical to the corresponding files (those that have the same file name) that end in .cs, which are true C# source code files. The files that end in .cs can be used by Microsoft Visual Studio, or by a similar software development tool that supports C#, and can be compiled. The creation dates of the C# files are listed hereinbelow. The files that end in .txt are text files that can be opened by Word, Notepad, Wordpad, or any other word processing program that supports the .txt format. For the purposes of viewing the information in the files, the .txt files should be used. Any of the text files on the compact discs can be converted to a source code file by changing its name from XXX.txt to XXX.cs. This may be performed in Windows Explorer by right clicking on the file and selecting rename, then typing the new name of the file in the highlighted area. All of the text files were created on Nov. 6, 2009.

Authoring Assembly Software Tool Files

Size (Bytes)   FileName (C#)          FileName (.TXT)         Creation Date (C#)
2,728          AddDrawingSurface.cs   AddDrawingSurface.txt   Oct.
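The appendix notes that any of the .txt listings can be turned back into compilable C# simply by renaming XXX.txt to XXX.cs, for example through Windows Explorer. As a convenience sketch only (the folder path is a hypothetical placeholder, and the discs' actual layout is not reproduced here), the same bulk conversion could be scripted in C#:

```csharp
// Sketch: convert the appendix .txt listings back to .cs files in bulk,
// mirroring the manual rename described above. The folder path is a
// hypothetical placeholder, not a path taken from the compact discs.
using System;
using System.IO;

class TxtToCs
{
    static void Main()
    {
        string folder = @"D:\AuthoringAssemblySoftwareToolFiles";  // hypothetical path

        foreach (string txtPath in Directory.GetFiles(folder, "*.txt"))
        {
            // XXX.txt -> XXX.cs, keeping the original .txt file in place.
            string csPath = Path.ChangeExtension(txtPath, ".cs");
            File.Copy(txtPath, csPath, overwrite: true);
            Console.WriteLine($"{Path.GetFileName(txtPath)} -> {Path.GetFileName(csPath)}");
        }
    }
}
```

Copying rather than renaming keeps the original .txt files available for viewing, which is what the appendix recommends the text versions be used for.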