Frugal Flight: Indoor Stabilization of a Computationally Independent Drone without GPS

Nikhil Devanathan

Acknowledgements

I wish to acknowledge my parents, Ram Devanathan and Subha Narayanan, for helping me by providing materials and equipment, reviewing my design, providing guidance, answering questions, offering pointers, instructing me on data analysis, taking photographs, and proofreading my written material.

I thank Mr. Jonathan Tedeschi at Pacific Northwest National Laboratory for patiently answering my questions about drone sensor systems and stabilization and for speaking to me about his own work on sophisticated drones. I am indebted to Dr. David Atkinson at Pacific Northwest National Laboratory for taking the time to serve as my project's mentor once more and for listening to my progress reports on this project on numerous Sundays. The advice, suggestions, and review from both Mr. Tedeschi and Dr. Atkinson were invaluable to my project and prompted me to develop many of the capabilities and systems on the drone.

All photographs associated with this project, shown in this report and in poster displays, were taken by me or by my parents.

Contents

Abstract
1. Introduction
2. Literature Review
3. Engineering Goals and Design Specification
4. Design and Development
   4.1 Hardware
      First Prototype
      Second Prototype
      Third Prototype
      Fourth Prototype
   4.2 Flight Controller Configuration
      Ground Control Configuration
      Serial Configuration
   4.3 Software
      Computer Vision
      Parallelization
      Drone-to-Drone Communication
5. Tuning and Calibration
   5.1 Accelerometer and Gyroscope
   5.2 Ultrasonic Sensor
   5.3 Camera
6. Future Direction
7. Conclusions and Outlook
8. Bibliography
9. Appendix

Abstract

Unmanned aerial vehicles (drones) represent an innovative technology with potential indoor applications in warehouse inspection, nuclear reactor and construction sites, decommissioning after industrial accidents, and law enforcement. Current approaches that stabilize drones using the global positioning system (GPS) limit their application in these practical tasks, where the GPS signal may be lost. GPS-based stabilization is robust for position holding, but has limited functionality in moving a drone precisely in a small indoor area. The goal of this engineering project was to design, prototype, program, and test an inexpensive drone to demonstrate small-scale stabilization and precise motion without the aid of GPS or external computing. This was achieved through the use of efficient computer vision. The drone utilized two interconnected onboard computers. The first computer, a Raspberry Pi, ran Python code that polled a connected camera and ran computer vision algorithms to stabilize the drone. It was connected to the flight controller, the second computer, which controlled the drone's thrust and stabilization using data from the Raspberry Pi. Different drone processes were run concurrently by taking advantage of multiple cores. The drone was built on an open carbon fiber frame for structural integrity and ease of alteration. It was powered by a 3-cell, 2200 milliamp-hour lithium polymer battery, which was able to easily power all systems on the drone. The device was able to position itself using optical flow, adapted for use on systems with low computational power. The drone can also deliver or move a small payload. This drone serves as a proof of concept that can be augmented to be part of a swarm and can benefit from artificial intelligence with additional processing power. Using this technology, drones could be utilized indoors to inspect, move components, deliver payloads, or conduct cooperative tasks that are unsafe for humans.


1. Introduction

Drone (unmanned aerial vehicle, UAV) popularity and use have skyrocketed in recent years. Drones are expected to have an annual impact between $31 billion and $46 billion on the Gross Domestic Product of the United States by 2026 (Levick, 2018). This advance can largely be attributed to improvements in battery technology, which have led to increased drone flight time. As computer technology has simultaneously evolved, resulting in smaller, more efficient, and more powerful computers, the potential of drones with on-board computers has grown.

The idea of equipping drones with on-board computers is not new, but previous iterations have had their flaws. Drones with on-board computers have previously been large, which allowed engineers to compensate for the computer's large power consumption with a large battery or fuel tank (Allain, 2018). Smaller drones that were computationally independent were very uncommon until recent years.

The more common solution for utilizing small drones, especially indoors, was external computing (Zhou, 2015). Drones were run in highly structured environments and monitored by motion capture cameras. The camera systems connected to a powerful computer which determined the position of the drone in real time and maneuvered or stabilized it. While this technique was useful in extremely structured environments, the drones had limited range and were unable to function independently.

In more recent years, companies such as DJI (https://www.dji.com) and NVIDIA (https://www.nvidia.com) have pioneered putting computers on drones (Smolyanskiy, 2017). By utilizing highly efficient mobile graphics processors, these corporations have been able to equip drones with significant computational power. This can be especially useful for stabilizing drones without the help of the global positioning system (GPS). GPS is currently the primary way to stabilize drones and is often used in conjunction with data from an accelerometer and gyroscope for added precision. GPS functionality is excellent in open spaces like fields, parks, and tarmacs, but quickly loses accuracy when the sky is obstructed. In most cases, GPS is estimated to have an accuracy of 0.715 m, though the GPS in smartphones can have an error of up to 4.9 m according to gps.gov. This level of error is not suitable for stabilizing small drones, performing precision tasks, or operating within obstructed areas such as cities or forests. Despite the disadvantages, GPS has remained the preferred method of drone stabilization due to its low computational and power cost.

An alternative to GPS for drone stabilization is Light Detection and Ranging (LIDAR) technology. LIDAR is functionally similar to radar, and involves emitting a pulse of light and measuring the frequency and intensity of the returned light in order to collect detailed information about three-dimensional distances and surfaces (Woodford, 2019). LIDAR has been used extensively for mapping the surface of the Earth and for guiding self-driving cars, and it is a highly precise technology. Although it is being used to stabilize drones, it is uncommon for multiple reasons. LIDAR systems often need GPS or are used in conjunction with GPS in order to function properly; in most cases, the redundancy is unnecessary and GPS alone is preferable. LIDAR systems are also expensive, complex, and sensitive. LIDAR is a technology largely pioneered by NASA and was designed for imaging over long distances, such as mapping the Earth's surface from space. It is meant for scenarios where accuracy matters more than the simplicity of the solution and where cost is not a primary constraint. For these reasons, it is unlikely that LIDAR will see more use in consumer and industrial drones in the near future.

The more recent alternative that has been pioneered by the aforementioned companies is visual or optical stabilization. This method relies primarily on downward facing optical sensors which analyze characteristics of the visible ground at rapid frequency in order to determine the change in position of the drone (McGuire, 2017). This technique is referred to as optical flow. Optical flow has significant advantages over GPS in the precise stabilization of drones. GPS relies on an external system, the satellite system, while optical flow relies only on the onboard computer and the onboard camera. Furthermore, when near the ground, optical flow offers a significantly higher degree of precision compared to GPS and can give centimeter levels of accuracy compared to meter levels of accuracy for GPS.

The disadvantages of optical flow lie in its requirements. Optical flow requires a camera that can stream images at typically 10+ Hz and a computer that can process the images and determine the change in position of the drone at similar speeds. Analyzing the relatively high-resolution images at that speed requires significant computational resources: CPU, GPU, and RAM (Smolyanskiy, 2017). With the high computational power comes large power consumption. Companies like DJI and NVIDIA have been able to solve these problems by using state-of-the-art mobile processors, which are both efficient and powerful. This too comes with its caveats, however, as the use of advanced processors raises the cost per drone and reduces the accessibility of the technology.

Even with the caveats, the potential of optically stabilized drones is large. Drones could be used for the management of factories and warehouses. In factories, they would be able to monitor assembly lines, retrieve parts and tools, report errors, and assist workers. In warehouses, the drones would be able to scan stock and move small packages. These tasks would be ideal for optically stabilized drones, as the environments can be easily structured to allow the drones to autonomously charge when necessary. This would allow drones to efficiently work, charge, and resume work. In addition, the drones would have the added ability to work at night, which would allow for more effective stock management, especially in factories. Drones which can navigate independently of GPS could additionally be used as versatile monitoring devices. The drones would be able to conduct quality control tasks to identify damage to or faults in industrial equipment or components. They could also be used to monitor premises from the air and respond to events as they occur. Finally, these drones would also be able to assist law enforcement by providing vision in situations unsafe for officers.

In order to address the challenges presented above, I designed, fabricated, programmed, and tested a low-cost, open-source drone that is able to stabilize optically. The drone utilizes only onboard systems and computing in order to stabilize and navigate. The primary computer, a Raspberry Pi, polls a downward-facing ultrasonic sensor for altitude data and a downward-facing camera for optical horizontal stabilization. Based on the data it receives, the Raspberry Pi controls the flight controller, which serves as the secondary computer. The drone is a 650g quadcopter built on a 250mm carbon fiber frame and is powered by a 3-cell 2200mAh lithium polymer battery. The drone demonstrates that, by adding a small amount of structure to the environment the quadcopter operates in, the computational power required to stabilize the drone optically can be reduced drastically.


2. Literature Review

Before undertaking this project, a literature review of related drone research was conducted to assess the state of the art and identify opportunities for improvement. The Defense Advanced Research Projects Agency's (DARPA) Fast Lightweight Autonomy program (DARPA, 2017) has recognized that current drones are too reliant on a human tele-operator, GPS, or external data links. Without such links, a drone is unable to determine and control its position and orientation and cannot fly straight or level for long. Instead of new sensors and high computing power, DARPA is emphasizing superior algorithms that work in real time on low-power single-board computers.

The key challenges in stabilizing drones are latency of controls (Palossi, 2019), inaccuracy of sensors, dynamically varying external influences, unreliable state determination, and consumption of resources (power and computer resources) by sensors, cameras and navigational aids (Grzonka, 2012). These challenges become especially significant in indoor environments and in situations where GPS is unavailable due to interference, shielding or jamming (Warren, 2018). Researchers at the University of Washington have developed an array of antennas on the ground that can offer a backup system for GPS to support drones in the outdoor environment (Mcquate, 2018). Visual algorithms can play a key role in drone control and navigation when GPS fails (Warren, 2019).

Murphy (2018) has highlighted new opportunities for indoor use of drones in warehouses, the real estate industry, and the movie industry. Drones can be used efficiently for repetitive tasks such as inventory management, factory management, movement of parts in a warehouse, and inspection at an assembly line (Maurer, 2018). Kang (2015) has reviewed the civil engineering applications of drones at construction sites and for inspection of structures for cracks, damage, and leaks. Drones can be used to survey contaminated sites for radiation, chemical leaks, or conditions that are hazardous to humans (Fournier, 2016). Regular inspection of large structures, such as nuclear reactors, is possible with drone technology (Vale, 2017). Most of these applications offer structured or semi-structured environments where it is feasible to use individual drones or drone swarms to perform tasks autonomously without GPS.


The science and technology of small drones, especially those suited for indoor applications, have been reviewed by Floreano and Wood (2015). The authors presented an analysis of flight time versus drone mass, which is shown in Figure 1. The drone size has to be selected judiciously.

Figure 1. Flight time vs. mass for small commercial drones from Floreano et al. (2015).


Smolyanskiy (2017) has introduced a hardware and software system that uses optical signals, in the form of high-resolution video, and an on-board high-powered processor to navigate a challenging environment without GPS (Feist, 2018). The key to such navigation is the use of visual breadcrumbs (visual elements used as navigation aids) and machine learning. McGuire (2017) has proposed the use of optical flow and stereo vision for efficient indoor navigation of small drones. Based on these previous studies, indoor stabilization of a drone without GPS, using open-source software and open hardware capable of modification by users, was identified as a promising area of research.

3. Engineering Goals and Design Specification

The goals of this project were as follows:

• Primary goals:

o Design, prototype, and program a drone with onboard computing

o Integrate open-source software and open hardware platform using a Raspberry Pi

o Stabilize and calibrate the drone motors for smooth operation

o Utilize computer vision to align the drone to red reflector tape within its view

• Secondary goals for the long-term:

o Fabricate multiple drone units for simultaneous use

o Demonstrate non-interference between drones

o Use drones in conjunction in a low-cost swarm

The first project specification was the type of drone to use and the drone form factor. A quadcopter was chosen for its simplicity, maneuverability, and versatility. Initially a tricopter was considered, but the additional tail servo action required to counteract the rotational velocity generated by unbalanced propeller rotation proved to be disadvantageous. Quadcopters, on the other hand, are inherently rotationally balanced due to the even number of propellers, one half of which spin opposite to the other half. Quadcopters are the simplest type of drone to work with, as the drone is maneuvered using only four upward thrusters without the need for balancing servos, flaps, or additional motors. This made them ideal for the project. For the form factor of my quadcopter, a standard 250 mm frame was chosen, as building a standard-size quadcopter would make construction and part replacement easier. 250 mm was also a moderate form factor: not especially large or unwieldy, while being large enough for expansion and modification.

The second specification was what sensor systems would be used onboard the drone. This changed somewhat over the course of the project. Initially, an accelerometer, gyroscope, barometer, and GPS were used on the first prototype. On the second prototype, however, the GPS was removed, a camera was added, and the barometer was replaced with an ultrasonic sensor, which had better resolution at low altitudes. On the third prototype, the only change was the addition of a radio. Initially, the goal of the project was simply designing a robust method to stabilize low cost drones indoors. As the project moved forward, GPS was deemed to be unsuitable and optical stabilization was used instead.

The third consideration was the mass of the drone. Due to the size of the drone, the largest battery that could be used was a 3-cell 2200 mAh lithium polymer battery. As such, power was a limited resource on the drone. Adding more mass to the drone would increase the power consumption of the drone due to the increased demand on the motors. Hence, a constraint of 750g was placed on the drone.

The final consideration was to make the drone low-cost and open-source. The goal was to keep the cost of the drone below $200 in order to facilitate the eventual adoption of the technology. By making the technology open-source, others would be able to develop on the platform designed in this project and add to the platform. Therefore, open-source and easily accessible parts were used to build the drone.

The Raspberry Pi was chosen to serve as the primary computer on the drone due to its efficiency and computational power relative to its mass. Python was chosen as the programming language for the project due to its high compatibility with the Raspberry Pi.


4. Design and Development

Figure 2. First prototype drone

The majority of the project concerned the design and development of the drone and the software I utilized on the drone in order to optically stabilize it. So far, there have been four major prototype iterations, with more planned. Compared to the hardware, the software has gone through more than 10 iterations and major edits.

4.1 Hardware

The hardware component of this project primarily involved the construction and assembly of the drone frame, the configuration of the onboard computer systems, the configuration of the onboard sensor systems, and the development of the robotic claw.

First Prototype

When beginning the drone construction, the first task was to determine the parts to be used to build the drone. The Raspberry Pi Zero was chosen to be the main computer on the drone, as it was the smallest and most efficient of all of the Pis. For the motors, mini quadcopter brushed motors rated for 3.7-7.4V were selected, based on publicly available data, due to their small size and relatively high thrust. Rather than using an off-the-shelf power distribution board (PDB) and flight controller, a custom printed circuit board (PCB) was developed to fulfill the purpose of both. The PCB was designed using Autodesk Eagle. It functioned both to power the drone, given the Raspberry Pi's limited power distribution capabilities, and to control the motors using transistors. The PCB fit onto the Raspberry Pi Zero pins as a Hardware Attached on Top (HAT) board. As a HAT, it was able to receive input power through a tether, power the Raspberry Pi, and power the motors. The onboard sensors chosen for the drone were a barometer, accelerometer, gyroscope, and GPS. These sensors were mounted onto the HAT, through which they connected to the Raspberry Pi. Of the included sensors, the barometer provided altitude data, the gyroscope and accelerometer provided data about the motion and angle of the drone, and the GPS provided position data. Although the GPS was eliminated after this prototype, it was initially included to help stabilize the drone.

After all of these components had been selected, a frame to mount all of the components was needed. For this, a custom frame made from 4mm polycarbonate was chosen. The polycarbonate material was used because it would be easy to cut and work with while serving as a strong frame material for the drone that would resist damage. First, screw holes corresponding to the footprint of the Raspberry Pi Zero were marked on the polycarbonate material. Next, a basic outline was drawn onto the polycarbonate with a body supporting the Pi and four arms, one for each motor. The frame was then milled by hand using a handheld rotary tool and the edges were finished with sandpaper. Holes were then drilled where previously marked in order to mount the Raspberry Pi. Finally, a hole the width of the motors was drilled in each arm so that the motors could be set into the frame.

After the frame was built, the Raspberry Pi and connected HAT were mounted using screws. The motors were set into the frame and locked in place using hot glue. Each motor was connected to the corresponding terminal on the HAT, and power was supplied to the HAT through a 9V tether. Once turned on, the drone struggled to achieve liftoff. While 9V was supplied to the HAT, it did not receive enough amperage to properly power all of the motors simultaneously. Also, due to design flaws in the HAT, the motors were not grounded properly and were difficult to control as a result. While this drone prototype did not succeed, the lessons learned from it were useful moving forward. After this prototype, rather than using tethered power, onboard batteries were used.

Second Prototype

Figure 3. Second prototype drone in flight with tether visible

After testing the first prototype, the second prototype was designed. Instead of using custom parts, this prototype used entirely off-the-shelf parts. A 250mm drone kit was purchased, containing a 250mm carbon fiber frame, a Copter Control (CC3D) flight controller, four carbon fiber propellers, four 11.1V brushless motors, a passive power distribution board (PDB), and four 11A electronic speed controllers (ESCs). A Raspberry Pi Camera was also chosen to replace the GPS and horizontally stabilize the drone. These components were used in conjunction with the Raspberry Pi Zero from the previous prototype. None of the onboard sensors from prototype one were used, because the CC3D had an onboard accelerometer and gyroscope, and the barometer was replaced with an ultrasonic sensor, which had better resolution at distances less than 4 meters. Initially the motors were screwed to the arms of the drone. The ESC wires were then shortened and soldered to the PDB, which was wrapped in electrical tape for insulation. The motor arms were screwed to the base of the frame, the PDB was set on the base of the frame, the ESCs were set on the arms, and the ESC wires were soldered to the motor wires. After this, the remainder of the frame was screwed to the base and the arms. The flight controller was set on top of the drone. The Raspberry Pi also needed to be mounted on top of the drone, but the screw hole positions were incompatible with the Raspberry Pi. To solve this, a piece of polycarbonate sheeting was cut to the dimensions of the Raspberry Pi Zero, and holes were drilled in it to match both the top of the drone and the Raspberry Pi Zero.

Along with the components of the drone, a 3-cell 2200mAh lithium polymer (3S LiPo) battery was purchased to power the drone. The ultrasonic sensor could not be wired directly to the Raspberry Pi, as its leads needed to pass through resistors. To connect the ultrasonic sensor, perfboard was used to create a small circuit which routed the leads from the ultrasonic sensor through resistors and into the Raspberry Pi. This connector board was wrapped in electrical tape and attached to the drone using adhesive. The last sensor, the Raspberry Pi Camera, was mounted to the underside of the drone and connected to the Raspberry Pi Zero with a ribbon cable.

Finally, the four carbon fiber propellers were screwed onto the motors and the controller was connected to the Raspberry Pi Zero using a male-to-male micro USB to mini USB cable.

This prototype worked significantly better than the previous one, though it had some flaws. It was able to lift off and fly with ease, although it may have been too powerful. For safety purposes, it was tested on an old bed and was tethered to a heavy metal case using paracord. The propellers were sharp and strong, giving them high cutting power, and they were nearly the same length as the arms of the drone. As a result, any component that stuck out even slightly from the drone frame was cut by the propellers. In one instance, the propellers cut through battery leads that protruded slightly from the battery bay at the back of the drone. As a safety feature, a kill switch was added to the power lead between the PDB and the battery, which regulated the flow of electricity through the drone. This allowed the drone to be manually shut down if the software froze while the motors were running.

The camera was able to detect reflector tape, but it was greatly affected by lighting conditions. In dark conditions, the camera was often unable to see the reflector tape and just saw solid black or brown. In bright lighting conditions, the drone mainly saw glare from the lights and did not recognize the color of the reflector tape well.

The major limitation was that the drone's computational power seemed to be lacking; it struggled to poll all of the sensors and the camera in real time at sub-second speeds, which led to latency of control.

Third Prototype

Figure 4. Third prototype drone

The second prototype made significant progress towards meeting the engineering goals, and minimal changes had to be made in the third prototype. In order to address the problems that did exist with the previous prototype, the third prototype was developed. On the third prototype, the Raspberry Pi Zero was replaced with the Raspberry Pi 3B+. While the Raspberry Pi Zero is a single-core ARM11 computer with 512MB of RAM, the Raspberry Pi 3B+ is approximately 10x faster, with four ARM Cortex-A53 cores and 1 GB of RAM. The 3B+ only consumed about twice as much power, which amounted to 5V 2A and fit within the power consumption limit. The flight controller was moved to the camera bay of the drone, since the drone did not mount a forward-facing camera and the Raspberry Pi, being larger than the Raspberry Pi Zero, needed the space on top. The Raspberry Pi was rewired to the flight controller: instead of a USB cable, the two were connected over the Raspberry Pi's serial pins.

A radio receiver was also added to the drone, attached to the back of the ceiling of the battery bay. This was done so that the drone could be controlled manually by a remote controller, which was useful for testing purposes.

As a safety feature, the carbon fiber propellers were replaced with ABS plastic propellers, which were triple-bladed but shorter. These propellers were also less likely to damage another object on collision, as they were made of a weaker material and would break first.

Figure 5. Diagram of the function of polarizer filters in removing image glare. One panel, with no polarizer filters, shows direct glare from the LED reaching the camera; the other, with perpendicular polarizer filters, shows the direct glare from the LED blocked from the camera, leaving only light returned by the reflector tape.

A key problem remaining in the third prototype was the poor camera performance, which required both a hardware and a software solution. Collimated LEDs were attached to the bottom of the drone through a potentiometer, adjacent to the camera, in order to illuminate the drone's field of view. The potentiometer functioned as a dimmer and allowed the brightness of the LEDs to be adjusted based on the ambient light level. Regardless of how much the LEDs were dimmed, however, reflector tape at the center of the camera's field of view was rendered invisible due to intense glare from the powerful LEDs. This proved to be a problem with the direction of the light rather than its intensity, and it persisted regardless of where the LEDs were placed. To eliminate the glare, a photography technique, polarizer filtration, was applied. Polarization is a property of light, like intensity and wavelength, but one the human eye is unable to see. A linear polarizer filter allows only light that is polarized at a certain angle to pass through; light cannot pass through two perpendicular polarizer filters, as light angled to pass through one will be blocked by the other. To apply this to the drone, parallel pieces of polarizer filter were put over the LEDs on the bottom of the drone, and a perpendicular piece of polarizer filter was put over the camera. LEDs normally emit unpolarized light. The LED filters let only light of one polarization through, and that same light was unable to enter the camera through the perpendicular filter. Due to this, direct glare off the reflector tape was unable to enter the camera, while the unpolarized light that bounced multiple times within the reflector tape was able to enter the camera.

The final major change to prototype three was the addition of the robotic claw to the bottom of the drone. The goal behind the claw was to give the drone the ability to pick up and move objects which helps meet some of the possible applications of the drone. The claw was actuated by a 5V micro servo motor, which despite its small size was able to control the arm effectively. The servo was powered by the flight controller, from which it received 5V 2A, but was controlled by the Raspberry Pi using PWM pulses. The claw itself was built from LEGOs as it needed two parallel, linked gear shafts.
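For illustration, the following is a minimal sketch of driving a hobby servo like the claw's from a Raspberry Pi GPIO pin with the RPi.GPIO library. The pin number and duty-cycle calibration values are assumptions for illustration, not the values used on the drone.

```python
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18  # hypothetical GPIO pin; the actual wiring may differ

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

# Hobby servos expect a ~50 Hz PWM signal; the pulse width sets the angle.
pwm = GPIO.PWM(SERVO_PIN, 50)
pwm.start(0)

def set_claw(duty_cycle):
    """Move the claw servo by holding a duty cycle briefly."""
    pwm.ChangeDutyCycle(duty_cycle)
    time.sleep(0.5)          # give the servo time to travel
    pwm.ChangeDutyCycle(0)   # stop sending pulses to reduce jitter

set_claw(5.0)    # ~1 ms pulse: claw open (assumed calibration)
set_claw(10.0)   # ~2 ms pulse: claw closed (assumed calibration)

pwm.stop()
GPIO.cleanup()
```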

Overall, prototype three met the design specifications. Despite the additions to the drone, flight time was not compromised. The drone had the computational power required to handle all of the necessary software processes, due to the onboard computer upgrade. The new propellers generated more thrust than the previous carbon fiber propellers and were less destructive. The camera was able to identify and lock on to pieces of reflector tape within its view. The drone remained stable in the air and was able to navigate smoothly.

Fourth Prototype

While the third prototype drone met the engineering goals and was able to stabilize over a piece of reflector tape, the frame of the drone was stouter than necessary, and the drone had a high center of gravity which occasionally made it difficult to stabilize. Furthermore, the drone did not demonstrate swarming capability, as only one drone was tested. To address these issues, two fourth prototype drones were developed, fixing the minor problems the third prototype faced and demonstrating swarming capability. The new prototype required components which would enable smoother flight passively, without additional software. As such, a thinner 250 mm carbon fiber drone frame and a superior flight controller were used. An additional radio transmitter and receiver were added to the drone in order to facilitate simple drone-to-drone communication.

A high center of gravity is a feature commonly found in racing drones: it offers faster, snappier responses and the ability to maintain momentum well. On a drone meant for indoor usage, where stabilization is the key goal, a high center of gravity proved to be a hindrance on the third prototype. To solve this, the fourth prototype was built "upside down" with a slimmer frame. On the fourth prototype, the normal bottom of the drone was the top, and the motors were mounted above the battery compartment of the drone rather than level with it. As a result, most of the mass of the drone was centered below the motor plane, which provided passive stability.

The CC3D board from the previous prototype functioned as the flight controller of the drone and was able to respond well to the Raspberry Pi. However, the CC3D was built around an aged F1 processor, which struggled to keep pace with the rapid corrections necessary to stabilize the drone without instructions from the Raspberry Pi. To mitigate this issue, the SPRacing F3, a newer flight controller built around the F3 processor, was used. If flashed with the correct firmware, the SPRacing F3 was compatible with LibrePilot. Initially, it was difficult to flash the F3 board with the correct firmware, as it used a UART connection over USB; connected computers recognized it as a COM port device rather than a serial device, and LibrePilot's built-in flashing tool was unable to recognize and flash the board. After some research into the hardware of the SPRacing F3, a flash utility was used to successfully load the board with LibrePilot firmware. Once loaded with the correct firmware, the flight controller could be configured using the LibrePilot ground control software, and it proved an easy replacement for the CC3D. In operation, the F3 board showed significantly better passive stabilization than the CC3D. With the modified frame and the controller upgrade, the drift of the drone was minimized, even when it was not receiving instructions from the Raspberry Pi.

In order to facilitate communication between drones, a radio transmitter and receiver were installed on each of the two fourth prototype drones. Initially, Bluetooth was considered, but Bluetooth required each drone to be paired and connected to every drone it communicated with, which made it infeasible for sending broadcasts to a potentially large number of drones. A simple protocol was devised for using the radio units to broadcast 4 binary digits and a Boolean between drones. The purpose of the communication was to relay information about which segments of the task at hand were currently being performed by other drones; for any task with 16 or fewer segments, the protocol is sufficient for drone-to-drone communication. The protocol was tested on a small track consisting of four pieces of reflector tape in a square pattern. The drones flew the perimeter of the track and were able to emulate collision avoidance by broadcasting the number of the piece of reflector tape they were stabilized over.

The last major change between the third and fourth prototypes was the removal of the robotic arm below the drone. The third prototype had already proven the capability of the drone to utilize a robotic arm during flight. The robotic arm was not critical to the operation of the prototype four drones; it did not align with a key goal, and it could obstruct the downward-facing drone camera. For these reasons, the fourth prototype drone was built without it, although space to mount an arm remained open on the drone, the necessary input/output and power lanes on the Raspberry Pi remained free, and the code retained the methods to actuate a robotic arm, allowing it to be quickly attached and utilized if needed.

While the fourth prototype did not have to fix any issues that impeded the operation of the drone, as none remained, it brought a series of minor changes that contributed to a tangible performance increase. Due to the improved stability and communication capability of the fourth prototype over the third, two drones could successfully be operated within the same airspace, demonstrating that the design can be scaled up to a drone swarm without major changes to the drone hardware or software. The fourth prototype was successful in meeting all of the engineering goals of this project and demonstrating the full capability of optimized software on general-purpose hardware.

4.2 Flight Controller Configuration

Although not strictly hardware or software, the configuration of the CC3D flight controller was a distinct enough process to describe separately. While all of the code written for the project ran on the Raspberry Pi, which served as the controlling computer, the flight controller was responsible for lower-level flight control tasks, such as controlling the motor balance, exact motor voltages, and passive stabilization. In order to perform optimally, the flight controller needed to be configured separately from the software on the Raspberry Pi.

Ground Control Configuration

The CC3D flight controller is programmed and configured using Librepilot, an open-source ground control station for Copter Control boards. Using Librepilot, the first task was basic configuration, in which the motors were configured to spin correctly and the neutral voltage (the lowest voltage at which a motor will spin) was set for each motor. The accelerometer and gyroscope were also calibrated at this stage. For the calibration, the drone was kept on a flat surface while the flight controller calculated the sensor bias.

After completing the initial configuration, the drone’s arming sequence needed to be set. When unarmed, the drone’s motors will not move regardless of the signals the flight controller receives. By default, the drone is always disarmed. The drone was then set to arm when it received a “yaw right” signal at minimal throttle.

The next step was to set up the physical ports on the flight controller. The USB port on the flight controller was configured for telemetry, the mode in which a connection can be used to communicate with and control the drone. While the flight controller initially communicated with the Raspberry Pi over the USB port, that port was needed for communication with the laptop used to configure the flight controller. To allow both devices to communicate with the CC3D, the connection to the Raspberry Pi was shifted to the 4-pin serial "flexi" port on the CC3D, and Librepilot was used to configure this port for telemetry and serial communication.

After prototype three was developed and the radio was added to the drone, Librepilot was used to configure the radio transmitter. The transmitter had onboard potentiometer dials which were linked to the proportional, integral, and derivative correction (PID) values on the CC3D, which were responsible for stabilization when hovering. Once the transmitter had been configured, the PID values were tuned using the dials. When tuning the flight controller, unique oscillation values (UOVs) were collected. The UOVs were the lowest proportionality constants at which the drone would overcorrect. UOVs were determined for both the roll and pitch axes of the drone. Using an online tool called Optune, the UOVs were used to derive PID values for the drone that would stabilize it when hovering.

Serial Configuration

Initially, connecting the Raspberry Pi and the CC3D proved to be a challenge. Both devices were capable of serial connection, but the CC3D was intended to be a standalone board rather than an auxiliary device to a computer. The solution lay in the open-source nature of Librepilot. To connect the two devices, Librepilot was cloned from GitHub onto the Raspberry Pi and compiled there. At this point, rather than utilizing the Librepilot application, the Librepilot Python class was used. This allowed the Raspberry Pi to initiate UAVTalk, the Librepilot serial protocol, without launching the Librepilot desktop application.

The next task was to identify the serial port to which the CC3D was connected on the Raspberry Pi. This was done by disabling system control of the serial interface and monitoring the list of active serial connections when the CC3D was connected. The CC3D was found to connect to the "/dev/ttyS0" port on the Raspberry Pi, and the code was configured to use that port. Through this, the Raspberry Pi was able to gain access to the flight controller.
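As a minimal sketch, the port can be opened with the pyserial library as shown below; the baud rate is an assumption for illustration, since the report does not specify it, and the actual project spoke UAVTalk through the compiled Librepilot Python class rather than handling raw bytes.

```python
import serial

# Open the serial port the CC3D was found on. The 57600 baud rate is an
# assumption; the real rate must match the flight controller's telemetry
# configuration.
port = serial.Serial("/dev/ttyS0", baudrate=57600, timeout=1.0)

# UAVTalk frames would be read and written through this port object,
# e.g. raw = port.read(64). The project delegated framing and parsing
# to the Librepilot Python class instead of doing it by hand.
port.close()
```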

4.3 Software

The software component of the project was mainly the code written for the Raspberry Pi to control the flight controller. All of the code for this project was written in Python for multiple reasons: Python is highly compatible with the Raspberry Pi, the Librepilot library is written for Python, and Python has a diverse and active online community which made getting answers to programming questions easier. The primary goals for the Python code were to detect reflector tape within the camera field of view and let the drone hover over the visible reflector tape. The code also needed to poll the ultrasonic sensor and adjust the altitude of the drone towards a stable point. In order to fulfill these goals, two main libraries were utilized: Librepilot and OpenCV.


Computer Vision

Computer vision was utilized in the Python code through the open-source OpenCV library. First, the library was cloned onto the Raspberry Pi and all of its dependencies were installed. Afterwards, the library was compiled. The compilation took multiple days on the Raspberry Pi Zero and hours on the Raspberry Pi 3B+, and it occasionally failed partway through, requiring a restart. Initially, the OpenCV library was compiled in a Python virtual environment for portability and good practice. This, however, caused significant problems, as Librepilot could not be compiled in a virtual environment; as a result, Librepilot and OpenCV could not be used in the same program. Due to this problem, OpenCV was instead compiled against the main Python installation on the Pi.

Within the code, the PiCamera class built into the Raspberry Pi was used to start an image stream from the downward-facing camera. The class was also used to poll an image from the stream, which was saved in the Raspberry Pi's memory. While hardware, such as the LEDs and polarizer filters, was used to enhance the color detection process, software-side image filters were also especially important. Using image filters built into the PiCamera class, the brightness and contrast of the captured images were increased. This made sharp colors stand out clearly while backgrounds blended and faded. OpenCV was then used to convert the image from the red, green, blue (RGB) colorspace to the hue, saturation, value (HSV) colorspace. RGB is a useful colorspace for displays, because pixels emit red, green, and blue components, but it is not especially useful for image analysis: all three RGB values contribute to the tone of a color, which makes color detection in that colorspace difficult. In the HSV colorspace, only the hue value contributes to the tone of an image, while saturation affects the vibrancy and value affects the brightness. This makes it easy to set bounds to detect specific colors in an image. HSV bounds were used in the Python code to detect all red pixels. An image mask was created, a matrix with the same dimensions as the image in which every red pixel was assigned a value of "1" and every other pixel a value of "0". This effectively "flattened" the image and made it easier to process, because the three channels had been reduced to one. OpenCV was then used to perform contour recognition, which groups all adjacent pixels with the same value. The centroid of each detected contour above a certain size limit was then calculated, and the centroid closest to the center of the field of view was identified. The chosen coordinate was then used in an alignment method which used the coordinates as PID input and controlled the flight controller using the output.
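The following is a minimal sketch of this pipeline using OpenCV; the HSV bounds, the minimum contour area, and the function name are placeholder assumptions, since the report does not list the exact thresholds used.

```python
import cv2
import numpy as np

def nearest_red_centroid(image_bgr, min_area=20):
    """Return the centroid of the red blob nearest the image center, or None."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # Red wraps around hue 0 in HSV, so two bounds are combined.
    # These bounds are illustrative, not the project's tuned values.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

    # OpenCV 4 signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = mask.shape
    center = np.array([w // 2, h // 2])

    best, best_dist = None, float("inf")
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] < min_area:  # skip specks below the size limit
            continue
        centroid = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        dist = np.linalg.norm(centroid - center)
        if dist < best_dist:
            best, best_dist = centroid, dist
    return best
```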

Using much of the same logic as the method which allowed the drone to stabilize itself over a piece of reflector tape, another method was used to align the drone to two pieces of reflector tape. The method started by processing an image to determine the centroid of the nearest piece of red reflector tape. It then determined the centroid of the piece of blue reflector tape closest to the previously determined red centroid. Using the two centroids, the method calculated the slope between the two points, and then used an arctangent function to determine the polar angle, in degrees, between the two pieces of reflector tape. The drone then computed the error of the angle from 90°, which it used as the basis of a proportional correction to the drone's yaw.
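A minimal sketch of this yaw-error calculation follows; atan2 is used here in place of an explicit slope and arctangent so that vertically aligned tape does not cause a division by zero, but the logic otherwise matches the description.

```python
import math

def yaw_error_degrees(red_centroid, blue_centroid):
    """Error of the red-to-blue tape angle from the 90 degree target."""
    dx = blue_centroid[0] - red_centroid[0]
    dy = blue_centroid[1] - red_centroid[1]
    angle = math.degrees(math.atan2(dy, dx))  # polar angle between centroids
    return 90.0 - angle

# The proportional yaw correction is then k_yaw * yaw_error_degrees(...),
# where k_yaw is a tuned constant not given in the report.
```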

Parallelization

Figure 6. Graphic illustrating the multiprocessing used on the drone: the main Python process initializes a shared camera resource and spawns the vertical stabilization, drone control, and horizontal stabilization processes, which exchange data through shared resources and directional queues.

At first, all of the Python code for the project ran entirely sequentially. The ultrasonic sensor polling and the computer-vision-based horizontal stabilization were part of the same loop. The vertical and horizontal stabilization code functioned entirely independently, yet putting both in one loop made them compete for computational resources and slowed both, so executing the code sequentially was highly inefficient. Even though the Raspberry Pi 3B+ was a 4-core machine, only one core was effectively utilized at any given point due to the Python Global Interpreter Lock (GIL), which prevents a single Python process from executing on multiple cores simultaneously. As a result, the stabilization loop ran at only 5Hz.

To solve this problem, the Python multiprocessing library was utilized. The multiprocessing library sidesteps the GIL by instantiating additional Python processes to run methods from a main Python program. Because they are separate processes, the processes created by the multiprocessing library can operate on separate cores. This was especially important for this project. The vertical stabilization and the horizontal stabilization were separated into two different processes. In order to manage data between the processes, shared objects and queues were used. The PiCamera class object was initialized by the main process, but later used by the horizontal stabilization process. The object was shared at a very small cost in speed to allow both to use the same object. Both stabilization methods also needed to access the Librepilot class object, as it was used to send data to the flight controller. Putting the Librepilot class object in shared memory, however, significantly slowed down the program due to the frequency of requests made to access it. To mitigate this issue, a third process was created which took data and sent it to the flight controller. The process was given sole access to the Librepilot object and it was removed from shared memory. A system of queues was used to send data to the third process, which then sent the data. After implementing parallelization and solving the small problems that arose as a result, the horizontal stabilization loop reached a consistent 10Hz loop rate and the vertical stabilization loop reached a consistent 25Hz loop rate.
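A condensed sketch of this process layout using Python's multiprocessing library is shown below; the function bodies and queue payloads are simplified placeholders, not the project's actual control logic.

```python
import multiprocessing as mp

def vertical_stabilization(cmd_queue):
    """Poll the ultrasonic sensor and push thrust corrections (placeholder)."""
    while True:
        correction = ("thrust", 0.0)  # would come from the altitude PD loop
        cmd_queue.put(correction)

def horizontal_stabilization(cmd_queue):
    """Run computer vision and push pitch/roll corrections (placeholder)."""
    while True:
        correction = ("pitch_roll", (0.0, 0.0))  # from the camera P loop
        cmd_queue.put(correction)

def drone_control(cmd_queue):
    """Sole owner of the Librepilot object; drains the queue and sends commands."""
    while True:
        kind, value = cmd_queue.get()  # blocking read from either producer
        # uavtalk.send(kind, value)    # forwarded to the flight controller

if __name__ == "__main__":
    queue = mp.Queue()  # directional queue shared by all three processes
    workers = [
        mp.Process(target=vertical_stabilization, args=(queue,), daemon=True),
        mp.Process(target=horizontal_stabilization, args=(queue,), daemon=True),
        mp.Process(target=drone_control, args=(queue,), daemon=True),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Because each worker is a separate operating-system process rather than a thread, each can run on its own core, sidestepping the GIL as described above.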

Drone-to-Drone Communication

Drone-to-drone communication was a crucial but relatively simple component of the code. Unlike other key methods, it did not run as its own process; rather, the listener for drone-to-drone communication was mounted on the drone control process, due to that process's high loop rate, to save computational power. The goal of the communications was to provide a method for one drone to quickly transmit information about its location to all nearby drones. Rather than using a communication protocol which relied on pairing and explicit connections between drones, a simple protocol for broadcasting omnidirectional messages over radio was developed.

The protocol could be used to send 4 binary digits (an integer from 0 to 15, inclusive) and a Boolean. The idea behind the protocol was that the area in which the drones functioned would be divided into "zones", each zone being the area in which a single task would occur. Based on the flow of the code, or reflector tape cues, each drone would know the number of its current zone. If each drone avoided occupied zones, collision avoidance between drones could be emulated without omnidirectional distance sensors.

Every broadcast began with a "long" 2 ms high pulse followed by a "short" 1 ms low pulse. Four short pulses followed, in which a high corresponded to a binary 1 and a low to a 0. A fifth short pulse followed, in which a high signified true and a low signified false. The purpose of the initial long pulse was to give the drone control process time to start the listener and reduce the chance of a missed broadcast. The four binary digits convey the number of the zone the message concerns, and the Boolean signifies whether the zone is occupied. Upon entering a zone, each drone communicates that the zone is occupied, and upon leaving, that the zone is unoccupied. While the protocol is primitive, broadcasts execute quickly and the listeners have a low chance of failure; a more sophisticated protocol proved unnecessary.
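An illustrative sketch of the transmit side of this protocol using RPi.GPIO follows; the transmit pin and the most-significant-bit-first ordering are assumptions, as the report does not specify them.

```python
import time
import RPi.GPIO as GPIO

TX_PIN = 23  # hypothetical GPIO pin driving the radio transmitter

GPIO.setmode(GPIO.BCM)
GPIO.setup(TX_PIN, GPIO.OUT)

def pulse(level, seconds):
    GPIO.output(TX_PIN, level)
    time.sleep(seconds)

def broadcast(zone, occupied):
    """Send a 4-bit zone number (0-15) and an occupancy flag."""
    pulse(GPIO.HIGH, 0.002)  # long 2 ms high start pulse (wakes listeners)
    pulse(GPIO.LOW, 0.001)   # short 1 ms low separator
    for bit in range(3, -1, -1):  # four data pulses, MSB first (assumed order)
        level = GPIO.HIGH if (zone >> bit) & 1 else GPIO.LOW
        pulse(level, 0.001)
    pulse(GPIO.HIGH if occupied else GPIO.LOW, 0.001)  # Boolean pulse
    GPIO.output(TX_PIN, GPIO.LOW)

broadcast(zone=5, occupied=True)  # e.g. "zone 5 is now occupied"
GPIO.cleanup()
```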

5. Tuning and Calibration

After the drone had been assembled and much of the code had been written, the remaining task was tuning the PID loops, both in the flight controller and on the Raspberry Pi, to gain marginal performance boosts. The function of the PID loops was a key component of the project. A PID loop starts with a setpoint, the goal of the system. The loop takes input in the form of error from the setpoint, typically collected from sensors. The change in error over time becomes the derivative term of the loop, and the summation of all of the errors multiplied by the loop interval becomes the integral term. The error, integral, and derivative are each multiplied by an independent constant and summed. The sum is then used to increment a value which becomes the output of the loop, which is used to adjust the system towards the setpoint. The goal of a PID loop is to reach the setpoint in minimal time and stabilize there quickly without overcorrection.

In standard form, the loop output is

u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{de(t)}{dt}

where e(t) is the error from the setpoint and K_p, K_i, and K_d are the proportional, integral, and derivative constants.

Figure 7. Graphic illustrating PID logic
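A minimal Python sketch of this logic, matching the description above; the class name and any gain values passed to it are illustrative placeholders.

```python
class PIDLoop:
    """Incremental PID controller as described above (placeholder gains)."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        """Return the correction for one loop iteration of length dt seconds."""
        error = self.setpoint - measurement
        self.integral += error * dt                  # running sum of error
        derivative = (error - self.prev_error) / dt  # change in error over time
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```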

5.1 Accelerometer and Gyroscope

Figure 8. Illustration of the three principal axes (yaw, pitch, and roll) and rotation on each axis

The accelerometer and gyroscope data used on the drone does not leave the flight controller. The attitude, velocity, and acceleration were put through PID loops within the flight controller, and the output was used to stabilize the quadcopter in the background whenever it was idling in the air. The sensors were responsible for stabilizing the three principal axes of the drone: pitch, roll, and yaw. Pitch is the forward or backward rotation of the frame; positive pitch lifts the nose of the drone and moves the quadcopter backwards. Roll is the side-to-side rotation of the drone; positive roll raises the left wing and moves the quadcopter to the right. Yaw is the planar orientation of the quadcopter; positive yaw turns the nose of the quadcopter to the right. To tune the PID loops for the principal axes, unique oscillation values (UOVs) were required. The UOVs were determined by setting the integral and derivative constants to 0 on the pitch and roll loops and setting the proportional constant to 0.003. The derivative constant for yaw was set to 0, and both the integral and proportional constants for yaw were set to 0.0035. A radio transmitter was then configured so that two dials were linked to the proportional constants for pitch and roll. Each constant was individually increased and decreased to determine the lowest constant value at which the drone overcorrects, visible as oscillation over the tested axis. Once the UOVs for pitch and roll were collected, they were entered into an online tool called Optune, which used them to derive ideal PID values for the flight controller. Using the values from Optune, the flight controller was stabilized.

5.2 Ultrasonic Sensor

The ultrasonic sensor was polled in its own process, separate from the other processes on the Raspberry Pi. The setpoint of the PID loop was the desired hover altitude. The input into the PID loop was the reading from the ultrasonic sensor subtracted from the setpoint. The derivative value was the change in error divided by the loop interval in seconds, and the error multiplied by the loop interval in seconds was added to the integral. All three values were multiplied by their constants and summed, and the output of the loop was added to the flight controller's thrust percentage. At first, a full PID loop was used to stabilize altitude. After testing, however, the integral portion was removed. This was because the thrust varied with the voltage of the battery, which was nominally 11.1V, 12.6V when full, and sometimes as low as 10V. Because of this, at lower battery levels the drone would take longer to take off. In that time, the integral value would rapidly rise, resulting in a large overcompensation, and as the drone continued to ascend, the reverse occurred. Removing the integral value solved the problem and marginally sped up the loop.
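A sketch of the resulting PD altitude loop follows; the pin numbers, gains, units, and the use of an HC-SR04-style trigger/echo interface are assumptions for illustration, not the project's actual values.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 17, 27   # hypothetical trigger/echo pins; actual wiring may differ
KP, KD = 0.05, 0.02   # placeholder PD gains (integral term removed, see text)
SETPOINT = 100.0      # desired hover altitude in centimeters (assumed units)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_altitude_cm():
    """Time an ultrasonic echo and convert it to distance (sound ~343 m/s)."""
    GPIO.output(TRIG, GPIO.HIGH)
    time.sleep(1e-5)                  # 10 microsecond trigger pulse
    GPIO.output(TRIG, GPIO.LOW)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:      # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:      # wait for the echo pulse to end
        end = time.time()
    return (end - start) * 34300 / 2  # round trip, so halve the distance

prev_error, prev_time = 0.0, time.time()
while True:
    now = time.time()
    dt = now - prev_time
    error = SETPOINT - read_altitude_cm()
    derivative = (error - prev_error) / dt
    thrust_delta = KP * error + KD * derivative  # added to the thrust percentage
    prev_error, prev_time = error, now
```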

25

The proportional and derivative constants were primarily tuned through trial and error. Through experimentation, proportional constant values at which the drone was prone to overcorrect were identified. Values at the edge of overcorrection were tried, and the highest value that did not lead to overcorrection was used. Once a proportional value was selected, the derivative value was raised until the quadcopter began to slow down in stabilizing to the setpoint. Through this method, the drone was vertically stabilized.

5.3 Camera

Similar to the ultrasonic sensor, the camera was polled in an individual process, separate from all other processes. The camera process was responsible for stabilizing the drone horizontally. The process started by capturing and processing an image from the PiCamera image stream. The image was converted from the RGB to the HSV colorspace for analysis, and the red pixels were used to create an image mask. The image mask was then run through contour recognition, which identified the centroids of all pixel clusters above a certain size, and the centroids were returned as coordinate pairs. The images output from the camera were 112 by 112 pixels; the low resolution was chosen to speed up the image processing, and because high resolution was unnecessary for reflector tape detection. As such, the coordinates the camera method received ranged from (0,0) to (111,111). Because (56,56) was the center of the view, it served as the setpoint of the camera loop. Before the error could be determined, the camera method used a hypotenuse calculation to determine which coordinate pair was nearest to the center of the field of view; that pair was used to generate the error. The "x" error was the coordinate's "x" value subtracted from 56, and the "y" error was the "y" value subtracted from 56. Initially, this loop was also a full PID loop, but due to the small field of view of the camera and the relatively low loop speed of 10Hz, only a P loop proved necessary. The proportional constants for the "x" and "y" axes were both determined through trial and error.
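A short sketch of the centroid selection and P correction on the 112 by 112 frame; the gain values are placeholders, since the tuned constants are not given in the report.

```python
import math

CENTER = (56, 56)          # setpoint: center of the 112x112 frame
KP_X, KP_Y = 0.01, 0.01    # placeholder proportional constants

def horizontal_correction(centroids):
    """Pick the centroid nearest the center and return P-loop corrections."""
    nearest = min(
        centroids,
        key=lambda c: math.hypot(c[0] - CENTER[0], c[1] - CENTER[1]),
    )
    error_x = CENTER[0] - nearest[0]
    error_y = CENTER[1] - nearest[1]
    return KP_X * error_x, KP_Y * error_y  # roll and pitch adjustments

# Example: two tape centroids detected in one frame
print(horizontal_correction([(60, 40), (100, 100)]))
```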


Figure 9. A set of images showing the lack of effect of lighting conditions on the color recognition:

• Image taken of red tile in dark conditions with cellphone camera (top left)
• Image taken of red tile in light conditions with cellphone camera (top right)
• Drone camera image of red tile in dark conditions (middle left)
• Drone camera image of red tile in light conditions (middle right)
• Computer vision red detection results in dark conditions (bottom left)
• Computer vision red detection results in light conditions (bottom right)

6. Future Direction

The drone will be upscaled in terms of size and capabilities compared to the current prototype. The next prototype is likely to be 2 to 3 times the length of the current drone and will use a microcomputer with a dedicated graphics processor for smoother and more robust onboard image analysis. The drone, as it is, is an excellent proof of concept both for the use of thoroughly optimized software on modular drones built with general-purpose hardware and for efficient optical stabilization techniques as an indoor alternative to GPS-based stabilization. Subsequent prototypes would use the technology investigated in this project as the basis for targeting specific applications such as indoor mapping, factory process monitoring, warehouse inventory monitoring, and monitoring of premises.

Other future changes may include adding artificial intelligence to the drone. With a more capable microprocessor, the drone could use a machine learning model to adapt to tasks and become more efficient at repetitive tasks over time. Machine learning may also prove useful in allowing a drone to navigate completely unstructured environments without GPS or an operator.

Future development aside, the drone was able to use ultrasonic sensor data and the image stream from a low-cost camera to align itself to pieces of red reflector tape and navigate. It relied primarily on open-source hardware and software to poll its onboard sensors at high speed. Two drones were able to fly cooperatively in the same airspace, demonstrating the design's potential for use in swarms. The design, development, prototyping, programming, and testing of a drone capable of indoor stabilization without GPS was a success.

7. Conclusions and Outlook

In this project, I designed, prototyped, programmed, and tested multiple versions of a computationally independent drone that operates without GPS. I demonstrated a technique for stabilizing a drone using optical data in a minimally structured environment with minimal computational power, and I calibrated and tuned multiple sensor PID loops to stabilize the drone along several axes. I also demonstrated drone-to-drone communication and simultaneous operation of drones to prove their potential for use in swarms. Through this project, I gained a thorough understanding of how drones work, how they are stabilized, and the challenges of putting computers on drones. The innovations resulting from this project include an open-source drone able to hold position accurately without GPS, a method for identifying color-based features in images with minimal computational power, and a technique for parallelizing computational tasks on a drone.


These inexpensive drones have high potential for future use, research, and commercialization. No current drone is optimized for operation in an environment that offers detectable structure but lacks GPS coverage; the drone developed in this project would meet that need. The methods used in this project to stabilize a drone without GPS would be invaluable for large-scale quality control, monitoring premises, and assisting law enforcement.

Several future enhancements are possible for my drone, including increasing its computational power and adding AI capability. The drone has been successfully stabilized and programmed to align to, hover over, and navigate using red reflector tape. Demonstrating these methods in real-world conditions is a significant success of this project and shows the potential of this drone as a platform for further development of positioning without GPS or external communication and control systems.



9. Appendix

[Figures 10-14: five charts titled "Drone Height vs. Time with 25 Hz polling," each plotting Height (cm) against Time (seconds) over a roughly 60-second run.]

Figures 10-14. Altitude of the drone in centimeters over 60-second runs, polled from the ultrasonic sensor at 25 Hz (approximately 1500 data points per graph). The graphs show that the drone's altitude stabilized after approximately 25 seconds and that the drone held its vertical position.
