ILikeToMoveItMoveIt: Localization 1

iLikeToMoveItMoveIt - Wirelessly Controlled Reconnaissance and Surveillance Vehicle

Accurate Localization of the vehicle within its environment

Neil Wong Hon Chan, Samuel Echefu, James Thesing, Sicheng Xu

Date: 7 April 2014


I. Overview

The objective of this project was to investigate localization, a high-risk component of the mobile navigation of a reconnaissance robot; more specifically, which sensors and algorithms to use to minimize noise in the robot's localization. The robot will navigate autonomously through its environment, and accurate localization is necessary for it to correctly follow its waypoints. In addition, localization is a high-risk component because the robot cannot run its shortest-path algorithm without it. Different sensors, ranging from IR and sonar to the Kinect, were explored to determine which is most effective, and the Kinect was found to be the best-suited sensor for this project. Various algorithms have been researched to determine which would be most effective in processing the data gathered from the Kinect. The testing strategy in Section VI will be used as a testbench to validate the performance of one sensor/algorithm combination over the others.

II. Risk Specification

Marketing requirement: 4

Engineering requirement: D. Using the generated map, the smartphone can specify coordinates to the robot. Depending on its current location, the shortest path to the destination will be computed.

Justification: The user can specify particular locations for the robot to go to without requiring computing resources from the app.

Localization is one of the most important requirements of the reconnaissance vehicle; knowing its current location allows the robot to generate its subsequent navigation path and to follow a given patrol command. In addition, accurate localization is necessary for the vehicle to backtrack to a given position if communication with the smartphone is ever lost. Accurate localization allows the robot to be autonomous and to navigate to any given location.

III. Risk Investigation

The following sensor options are considered, based on the assumption that the floor plan (the map of the environment) and current readings from the sensors are available.

Infrared Sensor
● Advantages: cheap; easy to set up
● Disadvantages: difficult to pinpoint location without specific visual cues; difficult to do the processing in real time while the vehicle is moving; issues with lighting and non-reflecting environments

Sonar Sensor
● Advantages: relatively cheap; longer range and width
● Disadvantages: requires 8 sensors; sonar beam can be absorbed or deflected

Kinect
● Advantages: HD video; color sensor; wide/expanded field of view; IR depth sensor; can easily interact with a PC
● Disadvantage: expensive

Feature                        Infrared Sensor   Sonar Sensor   Kinect

User Interface                 2                 2              5
Range                          2                 4              5
Width                          2                 4              5
View Adjustment                1                 3              5
Device is easy to install      4                 4              4
Accuracy of sensor             2                 3              5
Cost of sensor                 5                 3              1
Sensor is lightweight          5                 3              1
Sensor can work in real time   3                 3              5
Sum                            26                29             36

Table 1: Comparison between IR, Sonar and Kinect sensors
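The Sum row can be sanity-checked by re-totalling each column. A quick script (in Python for brevity, with the scores transcribed from Table 1):

```python
# Re-total the Table 1 columns to confirm the Sum row.
scores = {
    "Infrared Sensor": [2, 2, 2, 1, 4, 2, 5, 5, 3],
    "Sonar Sensor":    [2, 4, 4, 3, 4, 3, 3, 3, 3],
    "Kinect":          [5, 5, 5, 5, 4, 5, 1, 1, 5],
}
totals = {name: sum(vals) for name, vals in scores.items()}
print(totals)  # {'Infrared Sensor': 26, 'Sonar Sensor': 29, 'Kinect': 36}
```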

Figure 1. Specifications of the Kinect

From Table 1 and Figure 1, the Kinect has more features than necessary for gathering RGB and depth data, and the Kinect Sensor for PC was chosen as the default sensor for this project. Its wide variety of built-in sensors makes it valuable both for computing the position of the robot and for generating a map of the environment. The one disadvantage of the Kinect is its high initial cost, but this was negated since one was already available at the start of our project.

IV. Risk Mitigation Design

IV.I Overview

The Kinect will be used in conjunction with the Microsoft Surface tablet. Both the Kinect and the Surface will be mounted onto the vehicle using a wooden platform and frame. The Kinect will collect data using its depth camera and send it to the PC for processing. The PC will have prior information about the current map of the environment and will process the current location data to generate the final results, that is, the location and the next navigation coordinates. The processing will be done using one of the algorithms below, and the one that performs best will be used in the final product.

Hardware: The Kinect will be connected to the Microsoft Surface through its proprietary adapter, which converts the Kinect plug into a standard USB connector and a power connector. A frame will then be built to mount the Kinect and the Surface onto the vehicle.

Figure 2: Inside of a Kinect showing the different sensors

Software: The Kinect drivers will be installed on the Surface together with the Kinect for Windows SDK. This software will enable the creation of basic programs that collect the surrounding data and do the processing to generate accurate localization. The application will be programmed in either the Eclipse IDE or Visual Studio. The PC will be the controller, and the smartphone, robot and Kinect will communicate through it.

Figure 3: The Visual Studio interface

Another possibility is to create the algorithms in Matlab and compare them to the C# programs.

IV.II Localization - How it works:

The sensing and data gathering from the Kinect can be divided into three parts. The first is to sense the environment; the second is to make use of the color information provided by the Kinect to distinguish shapes more accurately. The last is to differentiate between visual cues (furniture in the environment) in order to easily detect structures such as rooms, doors and hallways in a building. The first two points are important mainly for generating a more detailed map, while the last point is the one necessary for accurate localization.

Localization can again be divided into two parts: the localization algorithm and the compartmentalization of the map into specific rooms. Before the algorithm is applied, preprocessing has to be done in order to properly calibrate the system. The depth values, the RGB-D values and the camera parameters have to be calibrated for the specific environment that the robot is in.
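One use of these calibrated camera parameters is back-projecting depth pixels into 3D points. A minimal pinhole-model sketch (in Python for brevity; the intrinsic values FX, FY, CX, CY are illustrative placeholders, not actual calibration results):

```python
# Back-project a depth-image pixel into a 3D point in the camera frame.
# FX, FY, CX, CY are hypothetical pinhole intrinsics for illustration only;
# real values come from the calibration step described above.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def depth_pixel_to_point(u, v, depth_m):
    """Map pixel (u, v) with depth in metres to a 3D point (x, y, z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# The centre pixel at 2 m depth maps straight ahead of the camera.
print(depth_pixel_to_point(319.5, 239.5, 2.0))  # (0.0, 0.0, 2.0)
```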

Different algorithms can be used to generate the mapping and localization of the vehicle. Some of the algorithms that will be explored are:
● RGB-D Localization
● Fast Sampling Plane Filtering
○ Sample the depth image to produce a set of "plane filtered" points corresponding to the planes, their plane parameters, and the 3D polygons fitted to these plane-filtered points.
● Monte Carlo Localization
○ Use the 3D map and the underlying sensor image formation model to generate simulated RGB-D camera views, which are then compared against the particle poses to accurately determine the location.
● Corrective Gradient Refinement
● Markov and Gaussian localization based on Kalman filters
○ A special case of probabilistic state estimation that can be applied to mobile robot localization.
● Extended Kalman filter or Multi-Hypothesis Tracking filter
● Particle Filter
○ The robot's state space is represented as a sample (particle) density, and localization proceeds in two phases: prediction and update. The motion model predicts the robot's current pose with a probability density function; in the update phase, a measurement (sensor) model is applied to evaluate each particle in the density.
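The prediction/update cycle of the particle filter can be sketched in one dimension. The following is a toy example (in Python rather than the planned C#/Matlab; the corridor length, noise values and step size are all made up): a robot drives down a corridor and measures its distance to the far wall.

```python
import math
import random

# Minimal 1D particle-filter (Monte Carlo localization) sketch.
WALL = 10.0          # corridor length in metres (assumed)
N_PARTICLES = 500

def predict(particles, move, motion_noise=0.05):
    """Prediction phase: shift each particle by the commanded move plus motion noise."""
    return [p + move + random.gauss(0.0, motion_noise) for p in particles]

def update(particles, measured, sensor_noise=0.1):
    """Update phase: weight each particle by how well it explains the range
    reading, then resample proportionally to the weights."""
    weights = []
    for p in particles:
        expected = WALL - p                      # range the sensor would see from pose p
        err = measured - expected
        weights.append(math.exp(-err * err / (2.0 * sensor_noise ** 2)))
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, WALL) for _ in range(N_PARTICLES)]
true_pos = 2.0
for _ in range(10):
    true_pos += 0.5                              # robot moves 0.5 m per step
    particles = predict(particles, 0.5)
    reading = (WALL - true_pos) + random.gauss(0.0, 0.1)
    particles = update(particles, reading)

estimate = sum(particles) / len(particles)       # posterior mean as the pose estimate
print(f"estimate {estimate:.2f} m, true {true_pos:.2f} m")
```

The estimate converges toward the true pose as measurements accumulate; the real implementation would use the Kinect depth image instead of a single range reading.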

Figure 4: Flow diagram of the Localization process
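In one dimension, the Kalman-filter variant listed above reduces to a few lines: predict with the motion command, then correct with the measurement, weighted by the Kalman gain. A sketch (Python for brevity; the noise variances and measurement values are made up for illustration):

```python
# 1D Kalman-filter localization sketch: state x is the robot's position along
# a corridor, u is the commanded move, z is a (hypothetical) position measurement.
# Q and R are made-up motion and measurement noise variances.
def kalman_step(x, P, u, z, Q=0.01, R=0.04):
    """One predict/update cycle; returns the new estimate and its variance."""
    x_pred = x + u             # predict: apply the motion command
    P_pred = P + Q             # motion noise grows the uncertainty
    K = P_pred / (P_pred + R)  # Kalman gain: how much to trust the measurement
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                                # start: position 0, high variance
measurements = [0.48, 1.03, 1.51, 1.98, 2.52]  # noisy readings after 0.5 m moves
for z in measurements:
    x, P = kalman_step(x, P, u=0.5, z=z)
print(f"position {x:.2f} m, variance {P:.4f}")
```

Note how the variance P shrinks with each update: this is the Gaussian analogue of the particle density concentrating around the true pose.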


Additional details will be added for each algorithm after more testing, once we know how to implement them and whether they are feasible for our project.

IV.III Mitigation of Risks

Using the Kinect gives the vehicle full access to an RGB-D sensor that can capture the color of the environment and hence facilitates locating the vehicle. Compared to the other sensors, the Kinect is more user friendly and can communicate directly with the Windows OS. Using the Kinect also opens more possibilities in how the localization algorithms can be implemented.

V. Parts List

Part name                                        Cost       QTY   Actual cost   Availability

Infrared Proximity Sensor Short Range -          $15        0     $0            2 weeks from SparkFun
Sharp GP2Y0A41SK0F
Ultrasonic Range Finder Sonar Sensor -           $25 each   4     $100          2 weeks from SparkFun
Maxbotix LV-EZ1
Kinect for PC                                    $250       1     $0            Available
Microsoft Surface                                $600       1     $0            Available
AC Power Adaptor Cable                           $10        1     $0            Available
Wooden planks                                    $5         1     $5            Available

VI. Testing Strategy

The algorithms will first be created as C# or Matlab programs and run on the Microsoft Surface. The following test plan will then be applied to test the various algorithms.

Test 1.0: Processing time
Pre-conditions: map data available, data from sensor
Outcome: The PC is able to generate the location of the vehicle in less than 1 minute.

Test 1.0.1: Processing cost
Pre-conditions: map data available, data from sensor
Outcome: The processing of the data to generate the location does not consume 100% of the CPU.

Test 1.1: Accuracy of location while stationary
Pre-conditions: map data available, data from sensor
Outcome: The location generated is accurate to within 5 cm.

Test 1.1.1: Accuracy of location while moving
Pre-conditions: map data available, data from sensor
Outcome: The location generated is accurate to within 10 cm.

Test 1.1.2: Accuracy of location in different locations
Pre-conditions: map data available, data from sensor
Outcome: The correct location is determined in any location (hallway, room, etc.).

Test 2.0: Detection of visual cues
Pre-conditions: map data available, data from sensor
Outcome: The PC can distinguish between walls, doors and objects.

Test 2.1: Avoid obstacle
Pre-conditions: map data available, data from sensor
Outcome: The sensor detects an object in the path; the PC generates a new navigation path.

Test 3.0: Basic pathfinding
Pre-conditions: map data available, data from sensor, current location on map
Outcome: Given the current location, the PC generates the shortest path to the goal.

Test 3.1: Basic waypoint
Pre-conditions: map data available, data from sensor, waypoints from smartphone
Outcome: The smartphone sends the waypoint commands to the PC, which transmits them to the vehicle; the vehicle correctly navigates to the given location through all the waypoints.
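The accuracy checks (tests 1.1 and 1.1.1) could be automated by comparing the estimated pose against a measured ground truth. A sketch (Python for brevity; the pose values are hypothetical readings for illustration only):

```python
import math

def position_error(estimated, truth):
    """Euclidean distance in metres between estimated and true (x, y) positions."""
    return math.hypot(estimated[0] - truth[0], estimated[1] - truth[1])

# Hypothetical readings for tests 1.1 and 1.1.1.
assert position_error((1.02, 2.03), (1.00, 2.00)) <= 0.05  # stationary: within 5 cm
assert position_error((3.05, 4.08), (3.00, 4.00)) <= 0.10  # moving: within 10 cm
print("accuracy checks passed")
```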

VII. Uncertainties
● Various algorithms are available, but adapting their implementations to fit our purpose might require more work.
● Combining the readings of the sonar sensor and the Kinect in real time (right now we are focusing only on the Kinect).
● Before localization can be tested, the mapping of the environment has to already be available on the PC.
● The localization and the mapping could instead be done simultaneously in real time.
● The patents surrounding the use of the algorithms and the Kinect have to be explored in more detail.

Appendices

A. Sonar Sensor Specifications
https://www.sparkfun.com/products/639

B. Microsoft Kinect Specifications
http://msdn.microsoft.com/en-us/library/jj131033.aspx

References:

C. Fast Sampling Plane Filtering for Localization and Navigation
http://www.cs.cmu.edu/~coral/projects/cobot/localization.html
http://www.cs.cmu.edu/~mmv/papers/11rssw-KinectLocalization.pdf

D. Using a Depth Camera for Indoor Robot Localization
http://mobilerobotics.cs.washington.edu/rgbd-workshop-2011/camera_ready/cunha-rgbd11-localization.pdf

E. Localization System using Microsoft Kinect for Indoor Structures
http://www.jspf.or.jp/PFR/PFR_articles/pfr2012S1/pfr2012_07-2406036.html

F. Localization and Navigation Solution Using a Kinect Sensor
https://www.ce.utwente.nl/aigaion/publications/show/2276

G. Markov Localization
http://users.isr.ist.utl.pt/~mir/cadeiras/robmovel/Markov-Localization.pdf

H. Source Code for Kinect Localization
http://www.cs.cmu.edu/~coral/projects/localization/source.html
http://www.kinecthacks.com/kinect-sdk/