The OpenUAV Swarm Simulation Testbed: A Collaborative Design Studio for Field Robotics


Harish Anand, Zhiang Chen, Jnaneshwar Das

*This work was supported by NSF award CNS-1521617. The authors are with the School of Earth and Space Exploration, Tempe, AZ, USA. hanand4, zchen256, [email protected]

Abstract— In this paper, we showcase a multi-robot design studio where simulation containers are browser-accessible Lubuntu desktops. Our simulation testbed, based on ROS, Gazebo, and the PX4 flight stack, has been developed to tackle higher-level challenging tasks such as mission planning, vision-based problems, collision avoidance, and multi-robot coordination for Unpiloted Aircraft Systems (UAS). The new architecture is built around TurboVNC and noVNC WebSockets technology to seamlessly provide real-time web performance for 3D rendering in a collaborative design tool. We have built upon our previous work that leveraged concurrent multi-UAS simulations, and extended it to be useful for underwater, airship, and ground vehicles. This opens up the possibility of both rigorous Monte Carlo-style software testing of heterogeneous swarm simulations and sampling-based optimization of mission parameters. The new OpenUAV architecture has native support for ROS, PX4, and QGroundControl. Two case studies in the paper illustrate the development of UAS missions in the latest OpenUAV setup. The first highlights the development of a visual-servoing technique for UAS descent onto a target. The second, referred to as terrain relative navigation (TRN), involves creating a reactive planner for UAS navigation that keeps a constant distance from the terrain.

I. INTRODUCTION

With an increasing number of use cases for unpiloted aircraft systems (UAS), the need for robust controllers and mission planning algorithms far exceeds available engineer or scientist time. The AI community has demonstrated promising results in this arena by utilizing simulations to learn controllers, often from a model-based reference such as a model-predictive controller [1]. In past years, model-based techniques have successfully demonstrated impressive aerial robot capabilities such as quick navigation through unknown environments. To achieve real autonomy in applications such as indoor or outdoor mapping, disaster response, sensor deployment, and environmental monitoring, UAS software must demonstrate a semantic understanding of the environment [2], [3].

The OpenUAV testbed was developed to reduce the number of field trials and crashes of sUAS systems by implementing a simulation environment that allows extensive testing and mission synthesis for a swarm of aircraft [4]. In this paper, we describe improvements to this testbed, enabling interactive use of cloud resources in a browser. In addition to aerial systems, the OpenUAV framework can model underwater and ground vehicles.

There is a growing demand for sUAS-based imaging technology to provide high-resolution spatial context for data analysis [5, 6]. An example application of sUAS-based imaging technology in geology involves generating an orthomosaic of a region and estimating rock traits such as diameter and orientation. To extend such UAS-based sampling methods to other domains, we need software tools that help in algorithm development and testing. Single-UAS methods encounter limitations in tasks such as wide-area mapping, freight transportation, and cooperative search and recovery. Therefore, there is a need for a simulation framework that provides a flight stack and communication network for developing efficient swarm algorithms.

With our belief that hardware abstraction and end-to-end simulation tools will accelerate innovation and education, we have made the OpenUAV simulator globally accessible and easy to use.
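For readers unfamiliar with the ROS/PX4 stack named in the abstract, the following is a minimal sketch of the kind of mission script such a simulation container typically hosts: an offboard takeoff through MAVROS. The node name, altitude, and loop structure are illustrative assumptions, not code from the OpenUAV paper; the topic and service names follow standard MAVROS conventions.

```python
#!/usr/bin/env python
# Hypothetical offboard takeoff sketch using standard MAVROS interfaces.
import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.srv import CommandBool, SetMode

rospy.init_node('openuav_takeoff_demo')
setpoint_pub = rospy.Publisher('/mavros/setpoint_position/local',
                               PoseStamped, queue_size=10)
rospy.wait_for_service('/mavros/cmd/arming')
rospy.wait_for_service('/mavros/set_mode')
arm = rospy.ServiceProxy('/mavros/cmd/arming', CommandBool)
set_mode = rospy.ServiceProxy('/mavros/set_mode', SetMode)

target = PoseStamped()
target.pose.position.z = 2.0   # hover 2 m above the home position

rate = rospy.Rate(20)          # PX4 expects a steady setpoint stream
for _ in range(100):           # pre-stream setpoints before OFFBOARD
    setpoint_pub.publish(target)
    rate.sleep()

set_mode(custom_mode='OFFBOARD')   # switch PX4 to offboard control
arm(True)                          # arm the vehicle
while not rospy.is_shutdown():
    setpoint_pub.publish(target)
    rate.sleep()
```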
II. RELATED WORK

A rich ecosystem of tools exists for UAS hardware and software development. With improved on-board computational and sensing capabilities, heterogeneous swarms of ground, aerial, and underwater vehicles will enable efficient exploration missions that leverage diversity and heterogeneity [7]. A variety of tools exist to support the design and deployment of single- as well as multi-robot systems.

A. RotorS

RotorS [8] is a Micro Aerial Vehicle (MAV) Gazebo [9] simulator developed by the Autonomous Systems Lab at ETH Zurich. It provides UAV models such as the AscTec Hummingbird, the AscTec Pelican, and the AscTec Firefly. Sensors such as an inertial measurement unit (IMU), an odometry sensor, and a visual-inertial sensor can be mounted on these models [10]. RotorS provides virtual machine images with pre-installed RotorS packages for easy setup and access to the simulator. The OpenUAV framework has goals very similar to RotorS; however, it provides similar capabilities in a containerized desktop environment. The improved OpenUAV Docker image has additional features such as remote web access to a desktop session, support for orchestration tools like Kubernetes, and the mission planning software QGroundControl.

B. FlightGoggles

FlightGoggles [11] is capable of simulating a virtual-reality environment around autonomous vehicles in flight. When a vehicle is simulated in FlightGoggles, the sensor measurements are synthetically rendered in real time, while the vehicle vibrations and unsteady aerodynamics are captured from the natural interactions of the vehicle. FlightGoggles' real-time photorealistic rendering of the environment is produced using the Unity game engine. The exceptional advantage of the FlightGoggles framework is the combination of real physics with Unity-based rendering of the environment for exteroceptive sensors.

Photorealism is an influential component for developing autonomous flight controllers in outdoor environments. Traditional robotics simulators employ a graphics rendering engine along with the physics engine. Gazebo uses the Object-Oriented Graphics Rendering Engine (OGRE) together with the Open Dynamics Engine or the Bullet physics engine. Scenes rendered using Unity are photorealistic because Unity provides the art assets and material properties (shadows, specularity, emissivity, shaders, and textures) needed to fine-tune scene objects to be more realistic. Besides object enhancements, Unity has algorithms for occlusion culling, which disables rendering of objects that are not currently seen by the camera. Here, we recognize a need in Gazebo for a photorealism layer between the physics and rendering engines to supplement computer vision algorithms in robotics simulation. Future versions of OpenUAV hope to provide an option to integrate with game engines to create photorealistic renderings.

Fig. 1: Example of simultaneously accessing OpenUAV simulation containers. On the left web browser, an OpenUAV container with ID 9 is accessible over the URL term-9.openuas.us; Term-9 is executing a multi-UAV leader-follower simulation. On the right web browser, a similar OpenUAV container with ID 6 is accessible over the URL term-6.openuas.us; Term-6 demonstrates a single-UAV system controlled using QGroundControl, which runs inside the container.

C. AirSim

AirSim [12] is an open-source, cross-platform simulator built on Unreal Engine that offers visually realistic simulations for drones and cars. It supports hardware-in-the-loop simulations with flight controllers like PX4 and supports popular protocols (e.g., MAVLink). AirSim provides realistic rendering of scene objects such as trees, lakes, and electric poles. The AirSim approach is useful for developing perception algorithms, especially in outdoor environments. One of the goals of the OpenUAV system is to require minimal code changes when transitioning from simulation experiments to field trials. Although the Gazebo simulator is not capable of photorealistic rendering, it has the advantage of a straightforward simulation description format for creating general robots such as manipulator arms or legged robots. Future versions of OpenUAV hope to utilize concepts from game engines like Unity and Unreal Engine to generate visually realistic simulations.
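For contrast with the Gazebo/ROS workflow OpenUAV builds on, AirSim exposes its vehicles through a Python client. The snippet below is a minimal sketch of that documented connect/takeoff/move pattern; the waypoint coordinates and velocity are illustrative assumptions.

```python
import airsim  # AirSim's Python client package

# Connect to an AirSim instance running inside Unreal Engine.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, then fly to an illustrative waypoint.
# AirSim uses a NED frame, so negative z means "up".
client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, velocity=3).join()
```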
III. DESIGN GOALS

OpenUAV is an on-premise and cloud framework for developing dynamic controllers and swarm algorithms for UAS, remotely operated underwater vehicles (ROVs), and autonomous underwater vehicles (AUVs). OpenUAV utilizes Docker container technology, which replaces the tedious installation of simulation and flight control software and its dependencies with the download of a single pre-built, ready-to-run image on Linux machines [13]. OpenUAV simulation containers are built on the fast and lightweight Lubuntu operating system with the necessary flight control software, communication protocols, and mission planning software. Figure 1 shows a demo of single- and multi-UAS scenarios in the OpenUAV framework.

Fig. 2: An OpenUAV container has simulation software such as Gazebo, ROS, and RViz. In addition, it supports a flight stack like PX4 and mission planning software like QGroundControl. Software such as TurboVNC, noVNC, and SSHFS is added to create interactive containers.

Currently, to develop UAS software or conduct field experiments, the developer or researcher starts with simulation and then moves to real robots. For simulation, they usually work with popular robotics tools like Gazebo and ROS, and a flight stack such as PX4 with QGroundControl. The researcher soon realizes the need for a GPU-enabled machine to render the visualization from Gazebo. An apparent difficulty they face is having to always be at the GPU-enabled computer to work on the simulation. This approach also indirectly
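The container workflow described in this section can be illustrated with a short sketch using the docker-py bindings. The image name, container name, port mapping, and environment variable below are hypothetical placeholders, not the published OpenUAV configuration; the real deployment sits behind TurboVNC/noVNC as described above.

```python
import docker  # docker-py bindings for the Docker Engine API

client = docker.from_env()

# Pull and run a (hypothetical) pre-built simulation desktop image.
# A noVNC port is published so the Lubuntu desktop is reachable in a browser.
container = client.containers.run(
    "openuav/simulation-desktop:latest",      # placeholder image name
    name="openuav-term-demo",                 # placeholder container name
    detach=True,
    ports={"40001/tcp": 40001},               # placeholder noVNC port mapping
    environment={"RESOLUTION": "1920x1080"},  # placeholder desktop resolution
)
print(container.status)  # the browser desktop would now be served over noVNC
```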
Recommended publications
  • NASA-LMCSOC SN and GN Interoperability Testbed
Consolidated Space Operations Contract: NASA/Lockheed Martin-CSOC Ground Network and Space Network Interoperability Testbed. Lindolfo Martinez & Larry Muzny, Lockheed Martin Space Operations, Consolidated Space Operations Contract, PO Box 58980, Mail Code L1C, Houston, TX 77258-8980, (281) 853-3325, [email protected], [email protected]

1.0 Introduction. Lockheed Martin-CSOC, a prime contractor for NASA ground systems, has been supporting the development of plans for the evolution of NASA's Ground Network (GN) and Space Network (SN) and, where possible, synchronizing those plans with plans for the evolution of the Deep Space Network (DSN). Both organizations want those networks to have certain key attributes, among them: (1) a common interface for all NASA networks, allowing users to reduce development and operations costs through common standards-based system interfaces; (2) network interoperability, allowing the sharing of resources among space agencies and other government agencies; and (3) a common interface for integrating commercial ground network providers and network users. A multi-center, multi-contractor study team led by Lockheed Martin-CSOC concluded that the Space Link Extension (SLE) set of Consultative Committee for Space Data Systems (CCSDS) recommendations has the desired attributes. The team recommended that NASA and Lockheed Martin-CSOC set up an interoperability testbed to show how most future NASA missions can use CCSDS SLE and that SLE could meet NASA's interoperability goals. NASA's involvement with SLE started with its support for the development of SLE as the primary interface for future Deep Space Network missions.
  • Getting Started with PX4 for Contributors
Getting Started with PX4 for Contributors. Mark West, PX4 Community Volunteer.

A word about format: limited time plus a big subject means compression, so some subjects are skipped and demo videos are clipped (see the appendices). Who is this for? Anybody, at five levels: L1, operate a PX4 vehicle; L2, build a PX4 vehicle; L3, build the source; L4, modify the source; L5, contribute.

L1, Operate: you want to operate a drivable unit (see Appendix: Operate Vehicle). L2, Build a Vehicle: you want to build a drivable unit (see Appendix: Build Vehicle). L3, Build the Source: you want to build the image from source, a big step up from L2; choose between a toolchain and a container. The toolchain is everything you need to build an image or executable (compilers, linkers, tools, etc.); install it manually if needed, though the convenience scripts are preferred. A container is like a better VM with the toolchain already installed: easy to install, easy to update, no interaction; an image is a container snapshot. Container images are built in layers, for example px4io/px4-dev-ros-melodic on top of px4io/px4-dev-simulation-bionic on top of px4io/px4-dev-base-bionic on top of ubuntu. Containers are isolated, so all interaction is made explicit in the docker run parameters: a file-system mount such as -v ~/src_d/Firmware:/src/firmware/:rw maps a host directory into the container, and a port mapping such as -p 14556:14556/udp exposes a container network port on the host. Toolchain or container? Use a container if possible and a toolchain if no suitable container exists, although you can make your own containers. The toolchain demo builds an image for SITL simulation: 1) install the toolchain, 2) git the PX4 source, 3) build PX4, 4) take off. The container demo: 1) install Docker, 2) git the PX4 source, 3) locate a container, 4) run the container, 5) build PX4, 6) the build is then available on the host computer (a Python sketch of this workflow follows below). L4, Modify the Source: the next step up in complexity. Why? Fix bugs and add features, in an edit-build-debug loop, with an IDE such as Visual Studio, Eclipse, or Qt Creator.
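As a rough illustration of the container-demo steps above, this Python sketch shells out to git and docker. The checkout path is an assumption, the image tag is the one named in the slides, the repository URL reflects the current name of the PX4 source repo, and the make target may differ across PX4 releases.

```python
import subprocess

SRC = "/home/user/src_d/Firmware"        # assumed host checkout path
IMG = "px4io/px4-dev-simulation-bionic"  # container image named in the slides

# 2) Git the PX4 source.
subprocess.run(["git", "clone", "--recursive",
                "https://github.com/PX4/PX4-Autopilot.git", SRC], check=True)

# 4) + 5) Run the container and build PX4 SITL inside it.
# The -v mount makes the build products visible on the host (step 6).
subprocess.run([
    "docker", "run", "--rm",
    "-v", f"{SRC}:/src/firmware:rw",
    "-p", "14556:14556/udp",
    IMG,
    "bash", "-lc", "cd /src/firmware && make px4_sitl_default",
], check=True)
```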
  • Using XML, XSLT, and CSS in a Digital Library
Using XML, XSLT, and CSS in a Digital Library. Timothy W. Cole, William H. Mischo, Robert Ferrer, and Thomas G. Habing, Grainger Engineering Library Information Center, University of Illinois at Urbana-Champaign.

Abstract: The functionality of formats available for the online representation of text continues to evolve. ASCII and bit-mapped image representations of text objects (e.g., journal articles) have been superseded by more functional representations such as application-specific word processing formats and proprietary and non-proprietary information interchange formats (e.g., Adobe PDF, Microsoft RTF, TeX, SGML). Standards like SGML, and now XML, which support the representation of a text as an "ordered hierarchy of content objects," are arguably the best and most sophisticated models available for representing text objects [1]. Ratified as an international standard in 1989, SGML has become a well-established approach for encoding text. However, SGML's complexity and its requirements for specialized and expensive tools to implement SGML-based systems have limited its scope as an information interchange standard, particularly in today's Web-dominated environment. XML, established as a W3 Consortium Recommendation in February 1998, strives to make the best features of SGML more accessible to Web authors and publishers, but the XML specification itself does not explicitly deal with the presentation of content, nor does it address document object transformation. These issues must be addressed through the use of CSS and XSLT (both more recent W3 Consortium Recommendations). It is necessary to use these three technologies in concert to create powerful and robust text applications on the Web. This paper describes how XML, XSLT, and CSS can be used in a digital library application.
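To illustrate the XML-plus-XSLT division of labor the abstract describes, here is a self-contained sketch using Python's lxml bindings; the element names and stylesheet are invented for the example, not taken from the paper.

```python
from lxml import etree

# A toy XML text object: content only, no presentation.
article = etree.XML(
    "<article><title>Digital Libraries</title>"
    "<para>Markup separates structure from rendering.</para></article>"
)

# A minimal XSLT stylesheet that transforms the XML into HTML;
# CSS would then style the resulting HTML in the browser.
stylesheet = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/article">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
      <p><xsl:value-of select="para"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(article)))  # serialized HTML output
```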
  • Introducing Control System Security Center (CSSC)
Control System Security Center: INTRODUCING CONTROL SYSTEM SECURITY CENTER (CSSC). http://www.css-center.or.jp/en/index.html

CSSC promotion video (about 8 minutes): "If Tokyo city falls into a wide-area blackout..." http://www.youtube.com/watch?v=qgsevPqZpAg&feature=youtu.be

Where is Tagajo? "Jo" means castle; the site dates to the 8th century and is a historically famous and important place in Japan. The tsunami (2-4 m in height) caused by the earthquake covered 33% of the city's land (Mar. 11, 2011). After the earthquake, Tagajo city launched a "Research Park for Disaster Reduction" plan: an internationally prominent effort for achieving disaster reduction, the development of distinct technologies and products, and policies for disaster reduction. "The testbed of CSSC truly suits the concept of the Research Park for disaster reduction." (Mayor of Tagajo) Source: http://www.city.tagajo.miyagi.jp/

Industrial Control System Network: the Internet connects maintenance/services, related factories, and sales to the office network through a firewall; below it sits the industrial control system network of the infrastructure (factories, buildings, filter plants, sewage plants, disaster control centers), where a monitoring room (SCADA) and engineering PCs handle parameter configuration and evaluation for DCS and PLC devices that open and close valves and control temperature, pressure, and robots. DCS: Distributed Control System; PLC: Programmable Logic Controller; SCADA: Supervisory Control And Data Acquisition.

PLC and DCS: usually, a DCS configuration comprises three elements: an HMI (Human Machine Interface) used by the operator for control and monitoring and a control network that connects the HMI. A PLC comprises a combination of PC monitoring and control software and performs process monitoring and control; PLCs are used, for example, in assembly plants or for building control.
  • Reinforcement Learning and Trustworthy Autonomy
Reinforcement Learning and Trustworthy Autonomy. Jieliang Luo, Sam Green, Peter Feghali, George Legrady, and Çetin Kaya Koç.

Abstract: Cyber-Physical Systems (CPS) possess physical and software interdependence and are typically designed by teams of mechanical, electrical, and software engineers. The interdisciplinary nature of CPS makes them difficult to design with safety guarantees. When autonomy is incorporated, design complexity and, especially, the difficulty of providing safety assurances are increased. Vision-based reinforcement learning is an increasingly popular family of machine learning algorithms that may be used to provide autonomy for CPS. Understanding how visual stimuli trigger various actions is critical for trustworthy autonomy. In this chapter we introduce reinforcement learning in the context of Microsoft's AirSim drone simulator. Specifically, we guide the reader through the necessary steps for creating a drone simulation environment suitable for experimenting with vision-based reinforcement learning. We also explore how existing vision-oriented deep learning analysis methods may be applied toward safety verification in vision-based reinforcement learning applications.

1 Introduction. Cyber-Physical Systems (CPS) are becoming increasingly powerful and complex. For example, standard passenger vehicles have millions of lines of code and tens of processors [1], and they are the product of years of planning and engineering. J. Luo, S. Green, P. Feghali, and G. Legrady are with the University of California Santa Barbara, Santa Barbara, CA, USA. Ç. K. Koç is with İstinye University, İstanbul, Turkey; Nanjing University of Aeronautics and Astronautics, Nanjing, China; and the University of California Santa Barbara. © Springer Nature Switzerland AG 2018
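The chapter's pairing of AirSim with vision-based reinforcement learning can be sketched as an observation-action loop like the one below; the image preprocessing and the random policy are placeholder assumptions standing in for a trained agent.

```python
import numpy as np
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

for step in range(100):
    # Observation: an uncompressed frame from the front camera.
    response = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
    ])[0]
    frame = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
    frame = frame.reshape(response.height, response.width, -1)

    # Action: a trained policy would map the frame to velocities;
    # here a random policy stands in for illustration.
    vx, vy = np.random.uniform(-1, 1, size=2)
    client.moveByVelocityZAsync(float(vx), float(vy),
                                z=-5, duration=0.5).join()
```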
  • A Simple Platform for Reinforcement Learning of Simulated Flight Behaviors ⋆
A Simple Platform for Reinforcement Learning of Simulated Flight Behaviors. Simon D. Levy, Computer Science Department, Washington and Lee University, Lexington VA 24450, USA.

Abstract: We present work-in-progress on a novel, open-source software platform supporting Deep Reinforcement Learning (DRL) of flight behaviors for Miniature Aerial Vehicles (MAVs). By using a physically realistic model of flight dynamics and a simple simulator for high-frequency visual events, our platform avoids some of the shortcomings associated with traditional MAV simulators. Implemented as an OpenAI Gym environment, our simulator makes it easy to investigate the use of DRL for acquiring common behaviors like hovering and predation. We present preliminary experimental results on two such tasks, and discuss our current research directions. Our code, available as a public GitHub repository, enables replication of our results on ordinary computer hardware. Keywords: deep reinforcement learning; flight simulation; dynamic vision sensing.

1 Motivation. Miniature Aerial Vehicles (MAVs, a.k.a. drones) are increasingly popular as a model for the behavior of insects and other flying animals [3]. The cost and risks associated with building and flying MAVs can, however, make such models inaccessible to many researchers. Even when such platforms are available, the number of experiments that must be run to collect sufficient data for paradigms like Deep Reinforcement Learning (DRL) makes simulation an attractive option. Popular MAV simulators like Microsoft AirSim [10], as well as our own simulator [7], are built on top of video game engines like Unity or UnrealEngine4.
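Because the simulator above is exposed as an OpenAI Gym environment, training code interacts with it through the classic Gym reset/step loop sketched here. The environment ID is a hypothetical placeholder (the abstract does not name one), so running this as-is would require registering such an environment first.

```python
import gym

# Hypothetical environment ID; the platform described above would
# register its hover task under its own name.
env = gym.make("Hover-v0")

obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # stand-in for a DRL policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```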
  • Chapter 3 AirSim Simulator
MASTER THESIS: Structured Flight Plan Interpreter for Drones in AirSim. Francesco Rose, supervised by Cristina Barrado Muxi, Computer Architecture Department, Universitat Politècnica de Catalunya. Master in Aerospace Science & Technology, January 2020.

Abstract: Nowadays, several flight plans for drones are planned and managed taking advantage of the Extensible Markup Language (XML). In the meantime, simulators have become increasingly useful for testing drone performance and behavior. Hence, what it takes to make a simulator capable of receiving commands from an XML file is a dynamic interface. The main objectives of this master thesis are basically three. First, the handwriting of an XML flight plan (FP) compatible with the chosen simulator environment. Then, the creation of a dynamic interface that can read any XML FP and transmit commands to the drone. Finally, using the simulator, it is possible to test both the interface and the flight plan. Moreover, a dynamic interface aimed at managing two or more drones in parallel has been built and implemented as an extra objective of this master thesis. In addition, assuming that two drones will be used to test this interface, the handwriting of two more FPs is required.
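A thesis like this centers on mapping XML waypoint elements to simulator calls; the sketch below shows one minimal way to do that with the standard-library XML parser and AirSim's Python client. The flight-plan schema (a waypoint element with x/y/z/speed attributes) is invented for illustration; the thesis defines its own format.

```python
import xml.etree.ElementTree as ET
import airsim

# Invented mini flight plan; the thesis defines its own XML schema.
PLAN = """
<flightplan>
  <waypoint x="0"  y="0"  z="-10" speed="4"/>
  <waypoint x="20" y="0"  z="-10" speed="4"/>
  <waypoint x="20" y="20" z="-5"  speed="2"/>
</flightplan>
"""

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

# Interpret each waypoint element as a moveToPosition command.
for wp in ET.fromstring(PLAN).iter("waypoint"):
    client.moveToPositionAsync(
        float(wp.get("x")), float(wp.get("y")), float(wp.get("z")),
        velocity=float(wp.get("speed")),
    ).join()
```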
  • An Aerial Mixed-Reality Environment for First-Person-View Drone Flying
Applied Sciences, Article: An Aerial Mixed-Reality Environment for First-Person-View Drone Flying. Dong-Hyun Kim, Yong-Guk Go, and Soo-Mi Choi (corresponding author), Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea. Received: 27 June 2020; Accepted: 3 August 2020; Published: 6 August 2020.

Abstract: A drone must be able to fly without colliding, to preserve both its surroundings and its own safety. In addition, it must incorporate numerous features of interest for drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles, and enables bidirectional interaction between real and virtual worlds using a drone equipped with a stereo camera based on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to effectively and naturally create virtual objects in a real space. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects. Keywords: aerial mixed-reality; drones; stereo cameras; first-person-view; head-mounted display.

1. Introduction. As cameras mounted on drones can easily capture photographs of places that are inaccessible to people, drones have recently been applied in various fields, such as aerial photography [1], search and rescue [2], disaster management [3], and entertainment [4].
  • Peter Van Der Perk
Eindhoven University of Technology, MASTER: A distributed safety mechanism for autonomous vehicle software using hypervisors. van der Perk, P.J. Award date: 2019.

Department of Electrical Engineering, Electronic Systems Research Group. A Distributed Safety Mechanism for Autonomous Vehicle Software Using Hypervisors, graduation project, Peter van der Perk. Supervisors: prof.dr. Kees Goossens and dr. Andrei Terechko. Version 1.0, Eindhoven, June 2019.

Abstract: Autonomous vehicles rely on cyber-physical systems to provide comfort and safety to the passengers. The objective of safety designs is to avoid unacceptable risk of physical injury to people. Reaching this objective, however, is very challenging because of the growing complexity of both the Electronic Control Units (ECUs) and the software architectures required for autonomous operation.
  • Autonomous Mapping of Unknown Environments Using A
Autonomous Mapping of Unknown Environments Using a UAV: Using Deep Reinforcement Learning to Achieve Collision-Free Navigation and Exploration, Together With SIFT-Based Object Search. Master's thesis in Engineering Mathematics and Computational Science, and Complex Adaptive Systems. ERIK PERSSON, FILIP HEIKKILÄ. Department of Mathematical Sciences, Division of Applied Mathematics and Statistics, Chalmers University of Technology, Gothenburg, Sweden 2020. © ERIK PERSSON, FILIP HEIKKILÄ, 2020. Supervisor: Cristofer Englund, RISE Viktoria. Examiner: Klas Modin, Department of Mathematical Sciences. Cover: screenshot from the simulated environment together with an illustration of the obstacles detected by the system.

Abstract: Automatic object search in a bounded area can be accomplished using camera-carrying autonomous aerial robots.
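The SIFT-based object search named in the title can be illustrated with a few lines of OpenCV; the image filenames are placeholders, and the matching threshold follows Lowe's common ratio-test convention rather than anything specified in the thesis.

```python
import cv2

# Placeholder filenames: a reference image of the target object
# and a frame captured by the UAV's camera.
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(target, None)
kp_f, des_f = sift.detectAndCompute(frame, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_t, des_f, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# A large number of good matches suggests the object is in view.
print(f"{len(good)} good matches")
```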
  • Simulating GPS-Denied Autonomous UAV Navigation for Detection of Surface Water Bodies
This may be the author's version of a work that was submitted/accepted for publication in the following source: Singh, Arnav Deo & Vanegas Alvarez, Fernando (2020). Simulating GPS-denied autonomous UAV navigation for detection of surface water bodies. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Institute of Electrical and Electronics Engineers Inc., United States of America, pp. 1792-1800. https://eprints.qut.edu.au/211603/ © IEEE 2020. https://doi.org/10.1109/ICUAS48674.2020.9213927

Simulating GPS-denied Autonomous UAV Navigation for Detection of Surface Water Bodies. Arnav Deo Singh and Fernando Vanegas Alvarez, Queensland University of Technology, Brisbane, Australia.

Abstract: The aim to colonize extra-terrestrial planets has been of great interest in recent years. NASA has been developing a UAV for the exploration of
  • UnrealCV: Virtual Worlds for Computer Vision
Session: Open Source Software Competition, MM '17, October 23-27, 2017, Mountain View, CA, USA. UnrealCV: Virtual Worlds for Computer Vision. Weichao Qiu (1), Fangwei Zhong (2), Yi Zhang (1), Siyuan Qiao (1), Zihao Xiao (1), Tae Soo Kim (1), Yizhou Wang (2), Alan Yuille (1). 1. Johns Hopkins University, 2. Peking University. {qiuwch,zfw1226,edwardz.amg,joe.siyuan.qiao}@gmail.com, {zxiao10,tkim60}@jhu.edu, [email protected], [email protected]

Abstract: UnrealCV is a project to help computer vision researchers build virtual worlds using Unreal Engine 4 (UE4). It extends UE4 with a plugin by providing (1) a set of UnrealCV commands to interact with the virtual world, and (2) communication between UE4 and an external program, such as Caffe. UnrealCV can be used in two ways. The first is using a compiled game binary with UnrealCV embedded; this is as simple as running a game, and no knowledge of Unreal Engine is required. The second is installing the UnrealCV plugin into Unreal Engine 4 (UE4) and using the UE4 editor to build a new virtual world.

... realism, while the newest video games such as GTA do not have good APIs to access their internal data and sometimes have license issues. Second, the modifications of one video game cannot be transferred to others and need to be redone case by case. In order to use a video game as a virtual world for computer vision research, some extensions need to be made: (1) the video game needs to be programmatically accessible through an API, so that an AI agent can communicate with it, and (2) the information in the virtual world needs to be extracted to achieve certain tasks, such as ground-truth generation or giving rewards based on the actions of agents.
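UnrealCV's command set is exposed through a small client API; the sketch below uses the unrealcv Python package's documented connect/request pattern, with the camera ID and output filenames as illustrative choices.

```python
from unrealcv import client

# Connect to a running UE4 game with the UnrealCV plugin embedded.
client.connect()
if not client.isconnected():
    raise RuntimeError("UnrealCV server not running")

# Issue UnrealCV commands: query status, then save a rendered frame
# and its per-pixel object mask (ground truth for vision tasks).
print(client.request("vget /unrealcv/status"))
print(client.request("vget /camera/0/lit lit.png"))
print(client.request("vget /camera/0/object_mask mask.png"))

client.disconnect()
```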