The OpenUAV Swarm Simulation Testbed: A Collaborative Design Studio for Field Robotics

Harish Anand, Zhiang Chen, Jnaneshwar Das

Abstract— In this paper, we showcase a multi-robot design studio where simulation containers are browser-accessible Lubuntu desktops. Our simulation testbed, based on ROS, Gazebo, and the PX4 flight stack, has been developed to tackle higher-level challenging tasks such as mission planning, vision-based problems, collision avoidance and multi-robot coordination for Unpiloted Aircraft Systems (UAS). The new architecture is built around TurboVNC and noVNC WebSockets technology to seamlessly provide real-time web performance for 3D rendering in a collaborative design tool. We have built upon our previous work that leveraged concurrent multi-UAS simulations, and extended it to be useful for underwater, airship and ground vehicles. This opens up the possibility for both rigorous Monte Carlo styled software testing of heterogeneous swarm simulations, as well as sampling-based optimization of mission parameters. The new OpenUAV architecture has native support for ROS, PX4 and QGroundControl. Two case studies in the paper illustrate the development of UAS missions in the latest OpenUAV setup. The first example highlights the development of a visual-servoing technique for UAS descent on a target. The second case study, referred to as terrain relative navigation (TRN), involves creating a reactive planner for UAS navigation that keeps a constant distance from the terrain.

I. INTRODUCTION

With an increasing number of use-cases for unpiloted aircraft systems (UAS), the need for robust controllers and mission planning algorithms is far exceeding available engineer or scientist time. The AI community has demonstrated promising results in this area by utilizing simulations to learn controllers, often from a model-based reference such as a model-predictive controller [1]. In past years, model-based techniques have successfully shown impressive aerial robot capabilities like quick navigation through unknown environments. To achieve real autonomy in applications such as indoor or outdoor mapping, disaster response, sensor deployment and environmental monitoring, UAS software must demonstrate a semantic understanding of the environment [2] [3].

The OpenUAV testbed was developed to reduce the number of field trials and crashes of sUAS systems by implementing a simulation environment that allows extensive testing and mission synthesis for a swarm of aircraft [4]. In this paper, we describe improvements to this testbed, enabling interactive use of cloud resources on a browser. In addition to aerial systems, the OpenUAV framework can model underwater and ground vehicles.

There is a growing demand for sUAS-based imaging technology to provide high-resolution spatial context for data analysis [5, 6]. An example application of sUAS-based imaging technology in geology involves generating an orthomosaic of a region and estimating rock traits such as diameter and orientation. To extend such UAS-based sampling methods to other domains, we need software tools that help in algorithm development and testing. Single-UAS methods encounter limitations in tasks such as wide-area mapping, freight transportation, and cooperative search and recovery. Therefore, there is a need for a simulation framework that provides a flight stack and communication network to develop efficient swarm algorithms.

With our belief that hardware abstraction and end-to-end simulation tools will accelerate innovation and education, we have made the OpenUAV simulator globally accessible and easy to use.

*This work was supported by NSF award CNS-1521617. The authors are with the School of Earth and Space Exploration, Tempe, AZ, USA. hanand4, zchen256, [email protected]

II. RELATED WORK

A rich ecosystem of tools exists for UAS hardware and software development. With improved on-board computational and sensing capabilities, heterogeneous swarms of ground, aerial, and underwater vehicles will enable efficient exploration missions leveraging diversity and heterogeneity [7]. A variety of tools exist to support design and deployment of single as well as multi-robot systems.

A. RotorS

RotorS [8] is a Micro Aerial Vehicle (MAV) Gazebo [9] simulator developed by the Autonomous Systems Lab at ETH Zurich. It provides UAV models such as the AscTec Hummingbird, the AscTec Pelican, or the AscTec Firefly. There are sensors such as an inertial measurement unit (IMU), an odometry sensor and the visual-inertial sensor, which can be mounted on these models [10]. RotorS provides virtual machine images that have pre-installed RotorS packages for easy setup and access to the simulator.

The OpenUAV framework has goals that are very similar to RotorS. However, it provides similar capabilities in a containerized desktop environment. The improved OpenUAV docker image has additional features like remote web access to a desktop session, support for orchestration tools like Kubernetes, and the mission planning software QGroundControl.

B. FlightGoggles

FlightGoggles [11] is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. When a vehicle is simulated in FlightGoggles, the sensor measurements are synthetically rendered in real time while the vehicle vibrations and unsteady aerodynamics are captured from the natural interactions of the vehicle. FlightGoggles' real-time photorealistic rendering of the environment is produced using the Unity game engine. The exceptional advantage of the FlightGoggles framework is the combination of real physics with the Unity-based rendering of the environment for exteroceptive sensors.

Photorealism is an influential component for developing autonomous flight controllers in outdoor environments. Traditional robotics simulators employ a graphics rendering engine along with the physics engine. Gazebo uses the Object-Oriented Graphics Rendering Engine (OGRE) and the Open Dynamics Engine or Bullet physics engine. Scenes rendered using Unity are photorealistic because they provide the art assets, material properties like shadows, specularity, emissivity, shaders and textures to fine-tune scene objects to be more realistic. Besides object enhancements, Unity has algorithms for occlusion culling which disable rendering of objects that are not currently seen by the camera. Here, we realize a need in Gazebo for a photorealism layer between the physics and rendering engines to supplement vision algorithms in robotics simulation. Future versions of OpenUAV hope to provide an option to integrate with game engines to create photorealistic renderings.

C. AirSim

AirSim is an open-source, cross-platform simulator built on Unreal Engine that offers visually realistic simulations for drones and cars [12]. It supports hardware-in-the-loop simulations with flight controllers like PX4 and support for popular protocols (e.g. MAVLink). AirSim provides a realistic rendering of scene objects such as trees, lakes and electric poles. The AirSim approach is useful for developing perception algorithms, especially in outdoor environments. One of the goals of the OpenUAV system is to require minimum code changes when transitioning from simulation experiments to field trials. Although the Gazebo simulator is not capable of photorealistic rendering, it has the advantage of a straightforward simulation description format to create general robots like manipulator arms or legged robots. Future versions of OpenUAV hope to utilize concepts from game engines like Unity and Unreal Engine to generate visually realistic simulations.

Fig. 1: Example of simultaneously accessing OpenUAV simulation containers. On the left, we have an OpenUAV container with ID 9 accessible over the URL term-9.openuas.us. Term-9 is executing a multi-UAV leader-follower simulation. On the right web browser, we have a similar OpenUAV container with ID 6 accessible over the URL term-6.openuas.us. Term-6 demonstrates a single-UAV system controlled using QGroundControl, which is running inside the container.
III. DESIGN GOALS

OpenUAV is an on-premise and cloud framework for developing dynamic controllers and swarm algorithms for UAS, remotely operated underwater vehicles (ROVs), and autonomous underwater vehicles (AUVs). OpenUAV utilizes docker container technology, which replaces the tedious installation of simulation and flight control software and its dependencies with the download of a single pre-built, ready-to-run image [13]. OpenUAV simulation containers are built with a fast and lightweight operating system, Lubuntu, with the necessary flight control software, communication protocols and mission planning software. Figure 1 shows a demo of single and multi UAS scenarios in the OpenUAV framework.

Fig. 2: The OpenUAV container has simulation software such as Gazebo, ROS and RViz. In addition to that, it has support for a flight stack like PX4 and mission planning software like QGroundControl. Software like TurboVNC, noVNC and SSHFS is added to create interactive containers.

Currently, to develop UAS software or conduct field experiments, the developer or researcher starts with the simulation and then moves to real robots. For simulation, they usually work using popular robotics tools like Gazebo and ROS, and a flight stack such as PX4 with QGroundControl. The researcher would soon realize the need for a GPU-enabled machine to render the visualization from Gazebo. An apparent difficulty they face is having to always be at the GPU-enabled computer to work on the simulation. This also indirectly restricts them from providing a live demonstration of their work to remote users, for which they might have to depend on video conferencing applications. Another disadvantage of this setup is that multiple researchers cannot work on simulations in the same GPU machine simultaneously [4]. In the following section, we outline how these necessities guided the design of the improved software architecture for OpenUAV.

A. Goals

As a remotely accessible open-source simulation platform, OpenUAV's main purpose has been to lower the barrier to entry in research and development of autonomous vehicles. Therefore, for OpenUAV to achieve its goals, it has to implement the following requirements.
• Enable remote desktop accessibility through browsers without compromising on performance.
• Provide an easy-to-use software development environment with support for remote code execution.
• Replicate actual devices in the simulation containers, by having similar memory constraints and compute capacity.
• Minimize the risk of data breach through built-in encryption between client and server. The data protection mechanisms should not require additional layers of protection such as virtual private networks (VPNs).
• Provide mechanisms for maintenance through daily build images, regular update releases and data recovery mechanisms.
IV. SYSTEM ARCHITECTURE

This section presents the software components of OpenUAV and the interaction between the user and the system. The updated system design was inspired by the work of Will Kessler and the Udacity team [14]. We can classify the software components into three categories: simulation components, virtualization components and interactive components. An overview of the OpenUAV simulation container is shown in Figure 2.

A. Simulation

Simulation has become a necessity to solve real-world problems in a safe and efficient manner. The software packages that enable simulation in the OpenUAV framework are the following.

1) Gazebo: Gazebo [9] is an open-source robotics simulator that is used to design robots and 3D worlds and to perform realistic rigid body dynamics. Simulated objects have mass, velocity, friction, and other attributes that enable them to behave realistically when forces are applied. Gazebo can simulate more general robots through its links-and-joints architecture, like sophisticated manipulator arms or legged robots. Gazebo provides a web visualization tool called GzWeb that provides visualization of the environment using browsers. An earlier version of OpenUAV used GzWeb to interact with the simulation running in the cloud [4]. GzWeb, being a thin web client, has a limited set of graphical options when compared to the actual Gazebo client. The improved OpenUAV version utilizes remote desktop sharing technologies to enable seamless sharing of the Gazebo client with the users. Moreover, the new OpenUAV provides users with access to other PX4 community developed tools like QGroundControl within the same desktop.

2) ROS: The Robot Operating System (ROS) is a robotics message passing framework that is designed to simplify programming for various robots [15]. ROS is widely used in the robotics community and has various community-developed packages for sensors and actuators. A robotic system publishes itself as a ROS node. From that node, it can subscribe and publish messages to named topics. OpenUAV supports ROS over a separate simulation API like in AirSim [12], so that users can use the packages developed by the robotics community and reduce the code changes during the transition from simulation to real robots.

3) PX4: PX4 is an open-source flight stack for UAVs and other unmanned vehicles [16]. It is used to provide basic navigational functionalities like waypoint navigation, landing, takeoff and hover. The latest versions of PX4 provide higher-level capabilities like collision avoidance in UAVs. MAVROS is a ROS package that simplifies the communication and control between the user code and the UAV by converting ROS messages to the MAVLink protocol [17]. This allows a user to send control commands through ROS topics, thus providing users with a higher level of abstraction.

4) QGroundControl: QGroundControl is a software package used for monitoring and mission planning for any MAVLink-enabled drone [18]. QGroundControl can communicate with both PX4 and ArduPilot powered vehicles [19]. Since QGroundControl is developed using Qt, which is a cross-platform framework for application development, the OpenUAV container base image has support for building and developing Qt applications. QGroundControl can be used to evaluate the simulations, as it provides a visual display of vehicle position, flight track, waypoints and vehicle instruments.
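The publish/subscribe pattern that ROS and MAVROS rely on can be illustrated with a toy, dependency-free message bus. The topic name mirrors a MAVROS velocity-setpoint topic; everything else is an illustrative sketch, not real ROS API.

```python
class TopicBus:
    """Toy stand-in for ROS-style named topics with publish/subscribe."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        """Register a callback for every message published on `topic`."""
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver `message` to all callbacks subscribed to `topic`."""
        for callback in self._subscribers.get(topic, []):
            callback(message)


bus = TopicBus()
received = []
bus.subscribe("/mavros/setpoint_velocity/cmd_vel", received.append)
bus.publish("/mavros/setpoint_velocity/cmd_vel", {"vx": 1.0, "vy": 0.0, "vz": 0.0})
```

In real MAVROS the message would be a typed ROS message and the bridge node would translate it to a MAVLink packet for the flight controller; the decoupling between publisher and subscriber is the same.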
B. Virtualization

Virtualization is the process of creating a software-based representation of something, such as virtual applications, servers, storage and networks [20]. In OpenUAV, we use operating-system-level virtualization to deliver software as packages called containers [21]. Figure 3 shows how each of these components interacts with the others.

1) Docker: Container technology is arising as the preferred means of packaging and deploying applications. Docker is able to take software and its dependencies and package them up into a lightweight container. Out of the box, Docker provides isolation from the host and from other containers, and improves the security of the application by restricting the possible host system calls and running applications in least-privileged mode [13, 22]. Another advantage of containers is the ability to configure the resources allocated to each container. These restrictions are useful for replicating actual vehicle compute power in the simulation containers, by having similar memory constraints and compute capacity.

2) Kubernetes: Kubernetes is an open-source software for automating deployment, scaling, and management of containerized applications [23, 24]. The OpenUAV architecture was tested in both Docker Swarm [25] and Kubernetes. One of the architectural goals of OpenUAV is to seamlessly provide access to multiple simulation containers in the cloud or on a local GPU-enabled machine. Thus, sharing of GPU resources among the pool of deployable containers (pods) is necessary. Kubernetes pods are advantageous when compared to individual containers, since a group of containers (a pod) working together is capable of achieving complex tasks [26]. We found that sharing of GPU resources in pods is a work in progress in the Kubernetes and Docker Swarm communities, because the container isolation guarantees on GPU resources need to be provided by the GPU vendors [27].

3) Nginx: Nginx [28] is a web server which can also be used as a reverse proxy, load balancer and HTTP cache. OpenUAV utilises Nginx as a reverse proxy and load balancer for accessing the remote desktop sessions. Each remote desktop session running inside a container is given a simulation ID (a whole number), and that session is accessed using the URL pattern term-<ID>.openuas.us. Nginx is also used as a TCP streaming proxy for the openssh server running inside the container. Streaming ports are opened on the OpenUAV server based on user requirements. This feature enables users to remotely develop software and execute code.

C. Interactive components

In the following section, we describe the software components that enable the users to interact with the simulations running in containers. Some of these technologies are used in remote computing and high performance computing (HPC) simulation services [29, 30].

1) TurboVNC: TurboVNC, when used with VirtualGL, provides a highly performing and robust solution for displaying 3D applications over all types of networks [31, 32]. OpenUAV utilizes TurboVNC's 3D rendering capability to display the remote frame buffer (the Lubuntu desktop session) associated with the container to any connected viewers.

2) NoVNC: noVNC is a JavaScript VNC application that provides VNC sessions over the web browser [33]. NoVNC follows the standard VNC protocol and has support for persistent connections through WebSockets. We use the noVNC client to connect with the TurboVNC server running inside the containers. NoVNC displays the VNC session over port 40001, which Docker exposes at a 40xx port, where xx is the simulation ID, and Nginx proxies it to the outside world over a unique hostname with the same simulation ID, term-<ID>.openuas.us. NoVNC provides built-in encryption between client and server, so it does not require additional layers of protection such as a virtual private network (VPN). A user with a VNC client can also connect directly to the TurboVNC session; this requires exposing the VNC session over Nginx as a stream proxy. Another advantage of noVNC is that it enables users to easily switch between machines while still being presented with the same desktop, i.e. each application is exactly as they left it.

3) Secure Shell FileSystem (SSHFS): SSHFS [34] allows you to mount a remote file system using the secure shell file transfer protocol (SFTP). Through SSHFS, OpenUAV provides an easy remote file access mechanism to simulation directories, by having them mounted as a remote filesystem on your local machine. An ssh server running inside the container can also act as a remote code execution environment, which is useful for Integrated Development Environments like PyCharm [35]. SSH access to a container is made available on request from users, since it requires exposing a streaming port in Nginx.

Fig. 3: The virtualization components of OpenUAV. Docker port-forwards each container's web interface as a 40xx port, where xx denotes the simulation ID. The HTTP proxy module and stream module are adopted to proxy multiple HTTP sessions and stream ssh sessions to outside users.
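The addressing scheme above (noVNC exposed on a 40xx host port, proxied at term-<ID>.openuas.us) can be captured in a small helper. The port arithmetic and URL pattern come from the text; the function names and the range check are our own.

```python
def novnc_host_port(sim_id):
    """Docker exposes each container's noVNC session at a 40xx host port,
    where xx is the two-digit simulation ID."""
    if not 0 <= sim_id <= 99:
        raise ValueError("simulation ID must be a two-digit whole number")
    return 4000 + sim_id


def session_url(sim_id):
    """Hostname pattern Nginx proxies to the container's desktop session."""
    return "term-{}.openuas.us".format(sim_id)


# The two containers from Figure 1:
print(novnc_host_port(6), session_url(6))  # 4006 term-6.openuas.us
print(novnc_host_port(9), session_url(9))  # 4009 term-9.openuas.us
```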

V. CASE STUDIES

A. Visual servoing in sUAS using Apriltags and YOLO

Our first case study showcases a single UAS performing visual servoing in the OpenUAV framework. The quadrotor (DJI F450) in simulation is equipped with a fixed on-board camera (without gimbal) facing downward, which is used for the controlled descent towards a target in the environment [4]. This simulation was developed as part of the NSF CPS Challenge 2018 [36].

Visual servoing is the fusion of results from many elemental areas, like image processing, dynamics, control systems, and real-time computing [37]. Through visual servoing, we try to control the UAS system using vision and attempt to descend towards the object of interest. Visual servoing has two main approaches: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). In position-based visual servoing, features of the target are extracted from the image and, using the intrinsic and extrinsic parameters of the camera, the pose of the target is estimated in global coordinates. Our approach focuses on image-based visual servoing (IBVS), where the control values are calculated based on image features directly. This approach reduces the computational delay due to transformations and eliminates the errors due to sensor modelling and camera calibration [37].

Our approach is implemented in 3 major modes - SURVEY, SCAN and DESCENT. Initially the UAV does a survey (SURVEY mode) of the area in a lawnmower pattern. Once we have a detection of the target on the ground, the scanning phase (SCAN mode) begins, where we hover over the area to receive continuous detection of the target for 3 seconds. The scanning phase is implemented to improve the robustness of the system against false detections. The third phase is the descent phase (DESCENT mode), where the UAV descends towards the target using the IBVS strategy. In survey mode, we use the PX4 position controller to specify the waypoints for lawnmower-based search navigation.

The target used in the simulation is a UAV frame and an AprilTag, which is a visual fiducial system developed by the April Robotics Laboratory [38]. For the detection of the UAV frame, we collected 60 UAV images from simulation and experiments. The images were augmented and trained on the You Only Look Once (YOLO) object detection algorithm [39]. The results are shown in Figure 4. Our approach used YOLO for object detection from surveying heights of 7-10m and the AprilTag module for precise maneuvering of the UAV descent when the UAV is less than 5m from the ground [40].

We have adopted a ROS software package developed for YOLO object detection that enables us to obtain bounding boxes in image coordinates. The bounding box is further transformed to camera coordinates for the controller. Our PID controller attempts to keep the target at the center of the image plane captured by the camera, to minimize the probability of losing the target from the field of view. The PID parameters for the YOLO bounding-box based descent were tuned using numerous trials in the OpenUAV framework. We use the AprilTag ROS wrapper package for precise descent on the target UAV. The PID parameters for the AprilTag based descent were calculated after similar trials in the OpenUAV framework.
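The image-centering loop used during descent can be sketched as follows: the error is the offset of the detected bounding-box centre from the image centre, and a PID on that error yields a lateral velocity command. The gains and image size below are illustrative placeholders, not the values tuned in our trials.

```python
def bbox_pixel_error(bbox, image_w=640, image_h=480):
    """Offset of a (x_min, y_min, x_max, y_max) bounding-box centre
    from the image centre, in pixels."""
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    return cx - image_w / 2.0, cy - image_h / 2.0


class AxisPID:
    """Minimal PID on one image axis; maps pixel error to a velocity command."""

    def __init__(self, kp=0.005, ki=0.0, kd=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# A detection centred in the image produces no correction:
ex, ey = bbox_pixel_error((300, 220, 340, 260))
print(ex, ey)  # 0.0 0.0
```

One controller instance per image axis keeps the target near the optical centre while a separate loop commands the descent rate.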
In addition to the ease of use, the docker environment in the OpenUAV framework provides the capability of saving the current simulation setup into an image file and replicating the same setup across multiple simulation containers.

Fig. 4: Predictions after YOLO training. (a) shows the detection of a UAS in OpenUAV simulation using a GeForce RTX 2080 Ti. (b) shows the detection of an Intel Aero in an outside environment. (c) shows the detection of a UAS frame during a descent towards the object; it was able to distinguish the UAS frame from noise, such as shadows.

B. Terrain Relative Navigation

UAS systems are widely used for mapping in the geological sciences, urban planning and precision agriculture. Lawnmower patterns are commonly used for an aircraft to survey a large area. As a result, the ground sampling resolution may vary.

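A lawnmower survey of the kind used for large-area coverage can be generated in a few lines; the rectangle dimensions, row spacing and altitude below are illustrative values, not parameters from our missions.

```python
def lawnmower_waypoints(width, height, spacing, altitude):
    """(x, y, z) waypoints sweeping a width x height rectangle in
    alternating-direction rows separated by `spacing` metres."""
    waypoints = []
    y, row = 0.0, 0
    while y <= height:
        xs = (0.0, width) if row % 2 == 0 else (width, 0.0)
        for x in xs:
            waypoints.append((x, y, altitude))
        y += spacing
        row += 1
    return waypoints


for wp in lawnmower_waypoints(10.0, 10.0, 5.0, 8.0):
    print(wp)
```

Feeding such waypoints to a position controller at a fixed altitude is exactly what produces the varying ground sampling distance over uneven terrain that motivates this case study.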
Fig. 5: Terrain Relative Navigation. (a) shows the simulation setup that involved a ramp and a rectangular cuboid; the UAV has to maintain a relatively fixed height with changing terrain. (b) shows the path, pose and point cloud data in RViz; the path taken by the UAS is very similar to the terrain structure.

The inconsistent ground sampling resolution in aerial imagery can cause discrepancies in estimates of rock dimensions. To alleviate this issue, we explore lawnmower navigation with real-time awareness of terrain height. Various approaches have been explored in terrain relative navigation for autonomous landing of planetary rovers [41] [42]. Our approach, which does not require prior knowledge of the environment, is vision-based and adjusts the z component of the pose based on the terrain height. Our algorithm was tested in simulation with the 3DR Iris quadrotor and an Intel RealSense R200 depth camera. We utilized the point cloud library (PCL) [43] to transform point cloud data generated by the depth camera to global coordinates. The transformation from camera coordinates to global coordinates is achieved through a combination of UAV state estimation by the PX4 Extended Kalman Filter (EKF) [44], and a static transform of the pose of the camera to the UAV.
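The height-holding behaviour described in this section can be sketched as two pieces: a terrain-height estimate from the points in a voxel column under the vehicle, and a PI controller on the error (h_t + h_f) − h_current. The 0.5 m voxel and the 3 m offset come from the text; the gains and helper names are illustrative.

```python
def terrain_height(points, x, y, voxel=0.5):
    """Average z of the (px, py, pz) points whose (x, y) fall inside the
    voxel column centred under the vehicle; a crude stand-in for the
    PCL centroid lookup used in the paper."""
    zs = [pz for px, py, pz in points
          if abs(px - x) <= voxel / 2 and abs(py - y) <= voxel / 2]
    return sum(zs) / len(zs) if zs else 0.0


class HeightPI:
    """PI controller on error = (h_t + h_f) - h_current -> z-velocity command."""

    def __init__(self, h_f=3.0, kp=0.8, ki=0.05):
        self.h_f, self.kp, self.ki = h_f, kp, ki
        self.integral = 0.0

    def step(self, h_t, h_current, dt):
        error = (h_t + self.h_f) - h_current
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


cloud = [(0.0, 0.0, 1.0), (0.1, -0.1, 1.0), (5.0, 5.0, 9.0)]
h_t = terrain_height(cloud, 0.0, 0.0)
print(h_t)  # 1.0 -- distant points are ignored
print(HeightPI().step(h_t, 4.0, 0.1))  # 0.0 -- already at h_t + 3 m
```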
Since the navigation software is independent of the sensor, we can also use a stereo camera, better suited for outdoor use, to generate the point cloud.

Throughout our simulation, we try to maintain a fixed height h_f of 3m from the terrain. The UAV simulation aims to reach the goal waypoint in conjunction with terrain relative navigation. The UAV estimates the average height of the current terrain, h_t, by looking up the centroid of a 0.5m 3D voxel grid containing the point cloud. There are 2 PI controllers developed: the first is a simple controller that adjusts for errors in the estimated x and y-direction; the second is a terrain-aware controller that corrects for errors in the z-direction. The controllers adjust the velocity of the UAV based on the error estimates. The first controller calculates the errors in the x and y direction with respect to the goal waypoint. The second controller measures the error using the following equation:

error = (h_t + h_f) − h_current

where h_current represents the current height of the UAV. The parameters of the PI controllers were selected after numerous iterations of UAV simulations. A great advantage of simulating in the OpenUAV framework is the ability to replicate similar setups across multiple containers to fine-tune controller parameters.

Our simulation involves a world with a ramp and a rectangular cuboid on it. The simulation intends to test the UAV in uphill and downhill scenarios. The controller is provided with the x and y position of the goal waypoint. As Figure 5 demonstrates, the UAV can increase and decrease its altitude when it finds the rectangular cuboid on the ramp. Moreover, the UAV achieves fixed-height navigation throughout the uphill and downhill motion.

VI. CONCLUSIONS

In this paper, we presented a simulation testbed that allows collaborative mission development. We leveraged open-source software such as ROS, Gazebo, the PX4 flight stack, Docker, RViz and TurboVNC to provide a rich development environment that scales with available compute. Two case studies are shown: visual servoing and terrain relative navigation. In future, we will enhance the camera rendering to be photorealistic by leveraging the Unity game engine.

ACKNOWLEDGMENT

This work was supported in part by the National Science Foundation award CNS-1521617.

REFERENCES

[1] Aviv Tamar et al. "Learning from the hindsight plan—episodic MPC improvement". In: 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 336–343.
[2] ICRA Workshop 2019 on Algorithms And Architectures For Learning In-The-Loop Systems In Autonomous Flight. 2019. URL: https://uav-learning-icra.github.io/2019/ (visited on 09/14/2019).
[3] Lukas Vacek et al. "sUAS for deployment and recovery of an environmental sensor probe". In: 2017 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2017, pp. 1022–1029.
[4] Matt Schmittle et al. "OpenUAV: A UAV testbed for the CPS and robotics community". In: 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS). IEEE, 2018, pp. 130–139.
[5] Philipp Lottes et al. "UAV-based crop and weed classification for smart farming". In: 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 3024–3031.
[6] Nassim Ammour et al. "Deep learning approach for car detection in UAV imagery". In: Remote Sensing 9.4 (2017), p. 312.
[7] Amanda Prorok, M Ani Hsieh, and Vijay Kumar. "Formalizing the impact of diversity on performance in a heterogeneous swarm of robots". In: 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 5364–5371.
[8] Fadri Furrer et al. "RotorS—A modular Gazebo MAV simulator framework". In: Robot Operating System (ROS). Springer, 2016, pp. 595–625.
[9] Nathan Koenig and Andrew Howard. "Design and use paradigms for Gazebo, an open-source multi-robot simulator". In: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566). Vol. 3. IEEE, 2004, pp. 2149–2154.
[10] ETHZ-ASL. RotorS is a MAV gazebo simulator. 2018. URL: https://github.com/ethz-asl/rotors_simulator.
[11] Winter Guerra et al. "FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality". In: arXiv preprint arXiv:1905.11377 (2019).
[12] Shital Shah et al. "AirSim: High-fidelity visual and physical simulation for autonomous vehicles". In: Field and Service Robotics. Springer, 2018, pp. 621–635.
[13] Paolo Di Tommaso et al. "The impact of Docker containers on the performance of genomic pipelines". In: PeerJ 3 (2015), e1273.
[14] Will Kessler. Building a GPU-enhanced Lubuntu Desktop with nvidia-docker2. 2018. URL: https://github.com/willkessler/nvidia-docker-novnc.
[15] Morgan Quigley et al. "ROS: an open-source Robot Operating System". In: ICRA Workshop on Open Source Software. Vol. 3. 3.2. Kobe, Japan, 2009, p. 5.
[16] Lorenz Meier, Dominik Honegger, and Marc Pollefeys. "PX4: A node-based multithreaded open source robotics framework for deeply embedded platforms". In: 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 6235–6240.
[17] Vladimir Ermakov. MAVLink extendable communication node for ROS with proxy for Ground Control Station. URL: http://wiki.ros.org/mavros (visited on 09/14/2019).
[18] E Zurich. QGroundControl: Ground control station for small air land water autonomous unmanned systems. 2013. URL: http://qgroundcontrol.com.
[19] ArduPilot Dev Team. "ArduPilot". 2016. URL: www.ardupilot.org.
[20] VMware. Virtualization. URL: https://www.vmware.com/solutions/virtualization.
[21] Carl Boettiger. "An introduction to Docker for reproducible research". In: ACM SIGOPS Operating Systems Review 49.1 (2015), pp. 71–79.
[22] Docker. What is a Container? URL: https://www.docker.com/resources/what-container.
[23] David Bernstein. "Containers and cloud: From LXC to Docker to Kubernetes". In: IEEE Cloud Computing 1.3 (2014), pp. 81–84.
[24] Kubernetes community. Kubernetes. URL: https://kubernetes.io/ (visited on 09/14/2019).
[25] Docker community. Docker Swarm mode overview. URL: https://docs.docker.com/engine/swarm/ (visited on 09/14/2019).
[26] CoreOS community. CoreOS. URL: https://coreos.com/kubernetes/docs/latest/pods.html (visited on 09/14/2019).
[27] Is sharing GPU to multiple containers feasible? 2017. URL: https://github.com/kubernetes/kubernetes/issues/52757 (visited on 09/14/2019).
[28] Will Reese. "Nginx: the high-performance web server and reverse proxy". In: Linux Journal 2008.173 (2008), p. 2.
[29] Karin Meier-Fleischer, Niklas Röber, and Michael Böttinger. "Visualization in a Climate Computing Centre". In: EGU General Assembly Conference Abstracts. Vol. 16. 2014.
[30] University of Cambridge. Connecting to CSD3 via TurboVNC. URL: https://docs.hpc.cam.ac.uk/hpc/user-guide/turbovnc.html (visited on 09/14/2019).
[31] Lien Deboosere et al. "Thin client computing solutions in low- and high-motion scenarios". In: International Conference on Networking and Services (ICNS'07). IEEE, 2007, pp. 38–38.
[32] TurboVNC. A Brief Introduction to TurboVNC. URL: https://turbovnc.org/About/Introduction (visited on 09/14/2019).
[33] Joel Martin et al. noVNC: HTML5 VNC Client. 2015. URL: https://novnc.com/info.html.
[34] Matthew E Hoskins. "Sshfs: super easy file access over ssh". In: Linux Journal 2006.146 (2006), p. 4.
[35] JetBrains. PyCharm: The Python IDE for Professional Developers. URL: https://www.jetbrains.com/pycharm/.
[36] Harish Anand, Zhiang Chen, and Jnaneshwar Das. Visual Servoing for UAV's using Apriltags and YOLO. 2018. URL: https://github.com/DREAMS-lab/nsf_cps_challenge.
[37] Seth Hutchinson, Gregory D Hager, and Peter I Corke. "A tutorial on visual servo control". In: IEEE Transactions on Robotics and Automation 12.5 (1996), pp. 651–670.
[38] John Wang and Edwin Olson. "AprilTag 2: Efficient and robust fiducial detection". In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 4193–4198.
[39] Joseph Redmon et al. "You only look once: Unified, real-time object detection". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 779–788.
[40] Zhiang Chen. YOLO ROS Drone detection. URL: https://www.youtube.com/watch?v=x6ajf5Kv5to.
[41] Andrew E Johnson and James F Montgomery. "Overview of terrain relative navigation approaches for precise lunar landing". In: 2008 IEEE Aerospace Conference. IEEE, 2008, pp. 1–10.
[42] Andrew Johnson et al. "A general approach to terrain relative navigation for planetary landing". In: AIAA Infotech@Aerospace 2007 Conference and Exhibit. 2007, p. 2854.
[43] Radu B Rusu and S Cousins. "Point Cloud Library (PCL)". In: 2011 IEEE International Conference on Robotics and Automation. 2011, pp. 1–4.
[44] Greg Welch, Gary Bishop, et al. "An introduction to the Kalman filter". In: (1995).