A Low‐cost Solution to Motion Tracking Using an Array of Sonar Sensors and an

Inertial Measurement Unit

A thesis presented to

the faculty of

the Russ College of Engineering and Technology of Ohio University

In partial fulfillment

of the requirements for the degree

Master of Science

Jason S. Maxwell

August 2009

© 2009 Jason S. Maxwell. All Rights Reserved.

This thesis titled

A Low‐cost Solution to Motion Tracking Using an Array of Sonar Sensors and an

Inertial Measurement Unit

by

JASON S. MAXWELL

has been approved for

the School of Electrical Engineering and Computer Science

and the Russ College of Engineering and Technology by

Maarten Uijt de Haag

Associate Professor School of Electrical Engineering and Computer Science

Dennis Irwin

Dean, Russ College of Engineering and Technology

ABSTRACT

MAXWELL, JASON S., M.S., August 2009, Electrical Engineering

A Low‐cost Solution to Motion Tracking Using an Array of Sonar Sensors and an

Inertial Measurement Unit (91 pp.)

Director of Thesis: Maarten Uijt de Haag

As the desire and need for unmanned aerial vehicles (UAVs) increases, so too do the navigation and system demands for these vehicles. While there is a multitude of systems currently offering solutions, each also possesses inherent problems. The

Global Positioning System (GPS), for instance, is potentially unable to meet the

demands of vehicles that operate indoors, in heavy foliage, or in urban canyons, due

to the lack of signal strength. Laser‐based systems have proven to be rather costly,

and can potentially fail in areas in which the surface absorbs light, and in urban

environments with glass or mirrored surfaces. SONAR based systems, however, do not fall victim to any of the outlined problems above, and for this reason, is an area of navigation which requires further study and development. This thesis will cover

techniques for localizing and classifying targets in a vehicles environment using a

SONAR‐based system and an inertial measurement unit (IMU). The thesis will also

cover the extension of these localization and classification techniques for the use of

3D motion detection and tracking of objects.

Approved: ______

Maarten Uijt de Haag

Associate Professor School of Electrical Engineering and Computer Science

ACKNOWLEDGMENTS

I would like to thank my advisor Dr. Maarten Uijt de Haag for suggesting this

topic, as well as for providing assistance, guidance, and support throughout the

entirety of this project and my time at Ohio University. I would also like to thank my

committee members Dr. Frank Van Graas, Dr. Michael Braasch, and Gene Kaufman

for their assistance and support in regards to this project. Thanks to Robert

Widmer for assisting in the development of the hardware systems, as well as the

discussion of our projects as a whole. Thanks also go out to Justin Mamrak for

discussing and troubleshooting different hardware problems that arose from time

to time. I would like to thank Chereice Maxwell for her support and help while I was

working long hours testing and writing; and finally, I would like to thank my parents

and family for their support throughout this project and my entire college career.


TABLE OF CONTENTS

Page

Abstract ...... 3

Acknowledgments ...... 4

List of Tables ...... 8

List of Figures ...... 9

List of Acronyms ...... 12

Chapter 1: Introduction ...... 13

1.1. Motivation ...... 13

1.2. Problem Statement ...... 14

1.3. Proposed Solution ...... 14

1.4. Scope ...... 15

1.5. Assumptions ...... 15

1.6. Overview of Thesis ...... 16

Chapter 2: Background ...... 17

2.1. Overview of SONAR ...... 17

2.2. Overview of Inertial Measurement Unit ...... 19

2.3. Biological Inspiration ...... 20

2.4. 2D Target Classification ...... 21

2.4.1. Vector Receiver ...... 21

2.4.2. 2D Array ...... 22

2.4.3. Target Classification ...... 23

2.5. 3D Target Localization and Classification ...... 29

2.5.1. 3D Sonar Array ...... 29

2.5.2. 3D Target Localization ...... 30

2.5.3. 3D Target Classification ...... 32

Chapter 3: System Design ...... 35

3.1. Hardware ...... 35

3.1.1. SONAR Units ...... 35

3.1.2. SONAR Array ...... 37

3.1.3. Inertial Measurement Unit ...... 38

3.1.4. Computer System ...... 38

3.1.5. System Communication Bus ...... 38

3.2. Hardware Integration and Interfacing ...... 40

3.3. Adjustments for Systematic Sources of Error ...... 40

3.3.1. Accounting for System Delay Errors Due to Noise on the

Communication Bus ...... 41

3.3.2. Delay Between Transmission and Reception on Separate

SONAR Units ...... 42

3.3.3. Reducing IMU Errors...... 42

3.3.4. Addressing SONAR Unit Synchronization Errors ...... 43

Chapter 4: Motion Tracking ...... 45

4.1. Identifying Potentially Moving Objects ...... 45

4.1.1. Stationary Identification Process ...... 45

4.1.2. Moving Identification Process ...... 46

4.2. Moving Object Velocity and Position Estimations ...... 48

4.2.1. Target Velocity Estimation ...... 48

4.2.2. Target Location Tracking and Future Position Prediction ...... 49

Chapter 5: Test Results and Analysis ...... 55

5.1. Individual SONAR Unit Measurement Accuracy and Stability Tests ..... 55

5.2. Object Localization Tests ...... 60

5.3. Object Classification Tests ...... 63

5.4. Stationary Motion Detection, Tracking, and Prediction ...... 65

5.5. Moving Platform Motion Detection ...... 76

5.5.1. Application of Zero‐Velocity Update ...... 76

5.5.2. IMU and SONAR Integration for Moving Platform Motion

Detection ...... 80

Chapter 6: Conclusions and Future Work ...... 86

6.1. Conclusions ...... 86

6.2. Future Work ...... 86

References ...... 88

Appendix A: Simulation ...... 90

LIST OF TABLES

Page

Table 2.1: Positively Classified 3D Targets (Based on [6]) ...... 33

Table 3.1: USB‐I2C Command Set for Interfacing with SONAR units (based on [4]) 39

Table 5.1: Expected Versus Actual Distance Measurements ...... 56

Table 5.2: Standard Deviations ...... 57

Table 5.3: Horizontal Correct Classifications ...... 64

Table 5.4: Vertical Array Correct Classifications ...... 64

Table 5.5: Linear Extrapolation Predictions ...... 72

Table 5.6: Least Squares Estimation Prediction ...... 72

Table 5.7: Interpolation in Lagrange Form Predictions ...... 73

Table 5.8: Polynomial Predictions ...... 73

Table 5.9: Kalman Filter Predictions ...... 74

Table 5.10: Prediction Percent Errors ...... 75

LIST OF FIGURES

Page

Figure 2.1: Identical SONAR returns for unique objects ...... 18

Figure 2.2: Bat "SONAR System" Layout ...... 20

Figure 2.3: Vector Receiver (Based on [6]) ...... 21

Figure 2.4: Adjusted Vector Receiver (Based on [6]) ...... 22

Figure 2.5: 2D SONAR Array for Target Classification (Based on [6]) ...... 23

Figure 2.6: Geometry for SONAR Array Encountering a Plane (Based on [5]) ...... 24

Figure 2.7: Geometry for SONAR Array Encountering a Plane (Based on [5]) ...... 24

Figure 2.8: Geometry for SONAR Array Encountering a Corner (Based on [5]) ...... 26

Figure 2.9: Geometry for SONAR Array Encountering an Edge (Based on [5]) ...... 27

Figure 2.10: Classification Process Flowchart ...... 28

Figure 2.11: 3D SONAR Array (Based on [6]) ...... 30

Figure 2.12: 3D SONAR Array Geometry (Based on [6]) ...... 31

Figure 2.13: 3D Classification Geometry (Based on [6]) ...... 34

Figure 3.1: Devantech R287‐SRF02 Ultrasonic Ranger [4] ...... 35

Figure 3.2: Devantech R287‐SRF02 Beam Pattern [3] ...... 37

Figure 3.3: 3D SONAR Array ...... 37

Figure 3.4: Devantech R286‐USB‐I2C Module [4] ...... 39

Figure 4.1: Moving Identification Process ...... 47

Figure 4.2: Kalman filter cycle (Based on [14]) ...... 53

Figure 5.1: SONAR Units Stability Test Setup ...... 55

Figure 5.2: SONAR 1 Measurements Histogram Over 1,000 Tests ...... 57

Figure 5.3: SONAR 2 Measurement Histogram Over 1,000 Tests ...... 58

Figure 5.4: Geometry Plot with Platform Parallel to Wall ...... 59

Figure 5.5: Geometry Plot with Platform Yawed 20˚ Clockwise ...... 59

Figure 5.6: Geometry Plot with Platform Yawed 20˚ Counter‐Clockwise ...... 60

Figure 5.7: Stationary Object Localization Test ...... 61

Figure 5.8: 3D Object Localization ...... 62

Figure 5.9: Stationary Motion Detection Iteration 1 ...... 66

Figure 5.10: Stationary Motion Tracking Iteration 2 ...... 67

Figure 5.11: Stationary Motion Tracking Iteration 3 ...... 67

Figure 5.12: Stationary Motion Tracking Iteration 4 ...... 68

Figure 5.13: Stationary Motion Tracking Iteration 5 ...... 68

Figure 5.14: Stationary Motion Tracking Iteration 6 ...... 69

Figure 5.15: Stationary Motion Tracking Iteration 7 ...... 69

Figure 5.16: Stationary Motion Tracking Iteration 8 ...... 70

Figure 5.17: Stationary Motion Tracking Iteration 9 ...... 70

Figure 5.18: Least‐Squares Solution and Kalman Filter ...... 71

Figure 5.19: Zero‐Velocity Update Applied to Delta Velocity in X‐Axis ...... 77

Figure 5.20: Zero‐Velocity Update Applied to Delta Velocity in Y‐Axis ...... 78

Figure 5.21: Zero‐Velocity Update Applied to Delta Velocity in Z‐Axis ...... 78

Figure 5.22: Zero‐Velocity Update Applied to Delta Theta in X‐Axis ...... 79

Figure 5.23: Zero‐Velocity Update Applied to Delta Theta in Y‐Axis ...... 79

Figure 5.24: Zero‐Velocity Update Applied to Delta Theta in Z‐Axis ...... 80

Figure 5.25: SONAR Array Locations During Moving Platform Tests ...... 81

Figure 5.26: Euler Angles Measured by IMU ...... 83

Figure 5.27: Velocities Measured by IMU ...... 83

Figure 5.28: East North Position with SONAR Scan Locations Indicated ...... 84

Figure 5.29: Moving Platform Motion Identification ...... 84


LIST OF ACRONYMS

DOF Distance‐of‐Flight

GPS Global Positioning System

IMU Inertial Measurement Unit

SONAR Sound Navigation and Ranging

TOF Time‐of‐Flight

UAV Unmanned Aerial Vehicle

VCP Virtual COM Port

CHAPTER 1: INTRODUCTION

1.1. Motivation

Unmanned aerial vehicles (UAVs) are becoming ever more prominent for a wide array of applications, including mapping, aerial photography, surveillance, reconnaissance, search and rescue, law enforcement, military applications, and even space applications. For many of these applications it is desirable for the UAV to operate autonomously, thus requiring the development of autonomous navigation techniques. A multitude of different methods for autonomous navigation have been explored through the use of systems such as the Global Positioning System (GPS), inertial navigators, air data, cameras, and laser-based systems. While these systems have resulted in successful deployments of UAVs, there are disadvantages to their use.

By nature, a UAV may be required to operate in urban environments (including urban canyons), residential areas, and even hostile environments; in these environments there is a high risk that the GPS signal is too weak, unavailable, or

even jammed (in the case of hostile environments). In cases such as space

applications, the GPS system would be completely unavailable as well. While laser‐

based systems can be used to counter some of the disadvantages of using only GPS,

they too have their disadvantages. Lasers can be cost-prohibitive. Inherently, lasers have a very narrow beam width, which forces systems employing them to rotate the laser assembly, thus potentially introducing errors into the observations.

Dark, light-absorbing objects and glass can also cause laser-based systems to fail. For these reasons, there is a need for a low-cost alternative that can be used for autonomous navigation.

A Sound Navigation and Ranging (SONAR) based navigation system does not fall victim to any of the errors outlined above, making it a possible alternative solution.

SONAR has the advantage of being relatively low-cost in comparison to the alternatives, and because the sensors are housed on board the UAV, signal availability is not an issue. Unfortunately, a single SONAR sensor is unable to provide enough information to localize objects in its environment; in order to gather the necessary information for localization of objects, an array of interacting SONAR units must be developed. To properly achieve autonomous SONAR navigation, further research needs to be conducted to develop such a SONAR array to aid in autonomous vehicle navigation and to overcome the disadvantages outlined above, as addressed in the remainder of this thesis.

1.2. Problem Statement

A system must be developed that uses a SONAR array and inertial measurement

unit (IMU) to localize, classify and track targets in a 3D environment in addition to supporting a navigation function. The latter is not part of the discussion in this thesis.

1.3. Proposed Solution

The solution proposed in this research includes the classification and localization of targets in a 3D environment, as well as proposed solutions for motion tracking of objects in the surrounding environment. SONAR-based navigation techniques are discussed further in [1].

Four SONAR arrays, each composed of five individual SONAR sensors, are placed on the four outermost edges of the UAV. The arrays communicate over an I2C

communication bus that is accessed from the central computer via a USB to I2C converter.

An IMU, which has been integrated with the SONAR arrays, provides the information about the relative change and acceleration of the UAV in its environment. This information is extrapolated and used to ensure that targets in the UAV’s environment are not incorrectly classified as being in motion due to the movement of the UAV itself.

1.4. Scope

This research will develop a system that is able to localize and classify targets in three dimensions and track the motion of these objects in the environment using SONAR arrays and an inertial measurement unit. The application of this SONAR-based system for system localization and other navigation techniques is discussed in [1].

1.5. Assumptions

The following assumptions were made in the development and testing process of

the proposed method:

1. The system is being operated in an environment that is at room

temperature (or 21˚ C).

2. The SONAR system‐scans are made while the platform on which the

SONAR array is mounted is stationary over one or multiple scans.

Future work should make every attempt to eliminate these assumptions.

1.6. Overview of Thesis

Chapter 2 covers the background information required to present the methods

and techniques involved in this research including an overview of SONAR, inertial

navigation as it pertains to the IMU used, the biological navigation inspiration for

the project, and target classification and localization in 2D and 3D. Chapter 3

discusses the overall system design, which includes the hardware setup and

correction of various system errors. Chapter 4 discusses the process followed for

identifying potentially moving objects, and includes estimation of the target velocity

and position. Chapter 5 evaluates the system design through the discussion and results of the stationary and moving tests that were conducted. Finally, Chapter 6 presents the conclusions of the research and outlines future work that should be done to further develop the system.


CHAPTER 2: BACKGROUND

2.1. Overview of SONAR

SONAR (SOund NAvigation and Ranging) is a technique in which a transmitter‐

receiver pair determines range to the environment. A transmitter is used to

transmit sound at a specific frequency and a receiver is used to listen on that same

frequency for sound reflected from the sensor’s environment. This SONAR return is

often called an echo and is the result of the sound reflecting off one or more objects in the environment back toward the direction from which it was transmitted. The SONAR system measures the time delay between the transmission of the ping and the reception of the echo, which is known as the time-of-flight (TOF) of the SONAR ping. Once the TOF is known, the distance between the SONAR unit and the object in the environment can be calculated using the following equations:

$c_{air} = 331.3 + (0.606 \cdot \vartheta)\ \mathrm{m/s}$  (1)

$DOF = c_{air} \cdot TOF\ \mathrm{m}$  (2)

where ϑ is the temperature in degrees Celsius (assumed to be room temperature, or 21 degrees Celsius for the purpose of this thesis), and the TOF is measured in seconds. In equation (2) the DOF refers to the Distance‐of‐Flight. It is important to understand that the actual physical distance between the object and the SONAR unit is ½ the DOF, as the transmitted ping has to travel to the object and back before it is received.
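As a concrete illustration of equations (1) and (2), the following C++ sketch converts a measured TOF into the one-way distance to the reflecting object; the function names and the example TOF value are illustrative only and are not part of the system software described later.

```cpp
#include <iostream>

// Speed of sound in air (m/s) as a function of temperature in degrees Celsius,
// per equation (1): c_air = 331.3 + 0.606 * T.
double speedOfSound(double tempC) {
    return 331.3 + 0.606 * tempC;
}

// One-way distance (m) to the reflecting object, per equation (2):
// DOF = c_air * TOF; the physical distance is half the DOF because the ping
// travels to the object and back before it is received.
double rangeFromTof(double tofSeconds, double tempC = 21.0) {
    double dof = speedOfSound(tempC) * tofSeconds;
    return dof / 2.0;
}

int main() {
    double tof = 0.010;  // example: a 10 ms time-of-flight
    std::cout << "Range: " << rangeFromTof(tof) << " m\n";  // about 1.72 m at 21 C
    return 0;
}
```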

SONAR units have a much larger beam width than laser-based systems, which is one of their major sources of error if not dealt with in some fashion.

Because the SONAR measures the TOF of the transmitted ping, the resulting distance calculation corresponds to the first object that the sound wave encounters within the SONAR sensor's beam. Figure 2.1 illustrates how multiple objects in the environment could return the same distance reading for a single

SONAR unit.

Figure 2.1: Identical SONAR returns for unique objects (x-axis: horizontal, y-axis: vertical axis)


In the case illustrated by Figure 2.1, all three of the objects are within the beam width of a single SONAR unit, and all three objects would result in a calculated distance of 15 meters. It would be impossible with a single SONAR unit to determine which object the SONAR unit is measuring, resulting in a possible error of 8 meters in the y-direction. To remove this ambiguity, multiple SONAR units may be employed. The integration of multiple SONAR units in an attempt to correct this problem, as well as to classify objects, will be discussed later in this thesis.

2.2. Overview of Inertial Measurement Unit

Inertial measurement units (IMUs) are used to measure acceleration and orientation changes for a vehicle over a period of time. Because the unit is entirely self-contained and does not rely on external signals in any way, it cannot be jammed, making it an ideal sensor for many navigation systems. Unfortunately, the IMU is not without disadvantages. An IMU-based navigation system (like an Inertial Navigation System, or INS) functions by continuously integrating the accelerometer and gyro outputs, resulting in any and all errors in the process being accumulated into the final solution. Furthermore, an IMU is also susceptible to drift

errors, which can be as little as approximately 0.003 deg/hr for a navigation grade

IMU, and as high as 2000 deg/hr for a consumer grade IMU [2].

In order to account for the drift rates, another system can be integrated with the

IMU to offset, or “reset”, the errors in the IMU at regular intervals. For the purposes

of this research, a SONAR array is used to address these inherent system errors.

2.3. Biological Inspiration

Echolocation is a technique that is utilized by some nocturnal creatures, such

as bats. The inability of bats to visually track their prey requires them to use

sophisticated echolocation techniques. These techniques appear to include filtering and correlation techniques as discussed in [11] and [12], which are not directly employed for the purpose of this research. However, the sensor configuration

(shown in Figure 2.2) of bats is mimicked. While the array developed and discussed

in this thesis uses two additional SONAR units to aid in target classification, the core of the array contains the same three-transducer configuration: a transmitter (which was upgraded to a transceiver for improved classification capability for the purpose of this thesis) surrounded by two receivers.

Figure 2.2: Bat "SONAR System" Layout [16]


2.4. 2D Target Classification

Accurate 2D and 3D SONAR arrays were developed and discussed in [5] and [6],

and their details are repeated here since an understanding of their principles of operation forms the basis for our motion detection and tracking method discussed

later.

2.4.1. Vector Receiver

By using a single transmitter and two receivers, as shown in Figure 2.3, two

separate distance measurements are made to each receiver. Using these two

distance measurements, the bearing angle to the transmitter can be determined

using equation (3).

$\alpha = \frac{\pi}{2} - \cos^{-1}\left(\frac{r_1^2 - r_2^2 + d^2}{2 r_1 d}\right)$  (3)

The combination of these two receivers in this technique is referred to as a vector

receiver.

Figure 2.3: Vector Receiver (Based on [6])

Unlike the layout depicted in Figure 2.3, in the proposed setup the transmitter is

rigidly attached to the same platform as the two receivers. The transmitted pings

will then bounce off an object in the environment and reflect back to the vector receiver. In this adjusted layout, depicted in Figure 2.4, the measured distances and the bearing angle are no longer with respect to the transmitter itself, but with respect to the virtual image of the transmitter.

Figure 2.4: Adjusted Vector Receiver (Based on [6])

By using this technique, and thus obtaining the bearing angle to the object being seen by the array, we are now able to determine which object in the environment the SONAR array is encountering. This process eliminates the problem,

depicted in Figure 2.1, when a single SONAR unit encounters multiple objects within

its beam width that would each report the same distance reading.
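Equation (3) can be illustrated with a short C++ sketch; the variable names and the sample ranges are illustrative only, and the receiver separation d is assumed to be known from the array geometry.

```cpp
#include <cmath>
#include <iostream>

const double kPi = 3.14159265358979323846;

// Bearing angle to the (virtual image of the) transmitter, per equation (3):
// alpha = pi/2 - acos((r1^2 - r2^2 + d^2) / (2 * r1 * d))
double bearingAngle(double r1, double r2, double d) {
    double arg = (r1 * r1 - r2 * r2 + d * d) / (2.0 * r1 * d);
    return kPi / 2.0 - std::acos(arg);
}

int main() {
    // Illustrative ranges (m) from the two receivers and their separation (m).
    double r1 = 2.00, r2 = 2.05, d = 0.10;
    std::cout << "Bearing: " << bearingAngle(r1, r2, d) * 180.0 / kPi << " deg\n";
    return 0;
}
```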

2.4.2. 2D Array

A 2D SONAR array was developed in [5] and further discussed in [6]. This

SONAR array exploits the vector receiver principle discussed in the previous section

for target classification. The array, shown in Figure 2.5, uses a single transceiver

(TR0), a single receiver (R1), and a single transmitter (T1). The receiver of TR0 and

the receiver of R1 form the vector receiver.

Figure 2.5: 2D SONAR Array for Target Classification (Based on [6])

2.4.3. Target Classification

In 2D there are four potential classifications that the SONAR array will make: a

plane, a corner, an edge, or an unknown object. The distance measurements

recorded by the SONAR array are used to synthesize the environment that the array

encounters. These synthesis results are then compared to known geometries for a

plane, corner, or edge and a correct classification is declared if the results meet a

predefined criterion, within a certain threshold. In the event that the results are not

sufficiently placed within the threshold of any of the known scenarios, then the

target is classified as unknown. While classification is admittedly more important to

navigation and map generation than motion tracking, it can increase the array’s

ability to detect a potentially moving object and accordingly will be addressed in

detail. Figures 2.6 and 2.7 show the geometry for the interaction with a plane (planar

surface). In both figures, r0,0 is the DOF from TR0 to its virtual image TR0’. Figure

2.6 shows the propagation paths for transmission on TR0 and reception on TR0

(r0,0) and R1 (r0,1), whereas Figure 2.7 shows the propagation paths for transmission on T1 and the reception on TR0 (r1,0).

Figure 2.6: Geometry for SONAR Array Encountering a Plane, Transmission on TR0 (Based on [5])

Figure 2.7: Geometry for SONAR Array Encountering a Plane, Transmission on T1 (Based on [5])


Using equation (3) and solely the results of the transmission from TR0 (ranges r0,0 and r0,1), as depicted in Figure 2.6, the bearing angle α1 can be calculated as follows:

$\alpha_1 = \frac{\pi}{2} - \cos^{-1}\left(\frac{r_{0,0}^2 - r_{0,1}^2 + d^2}{2 r_{0,0} d}\right)$  (4)

Using bearing angle α1 and the distance measurement r0,0, estimates for $\hat{\beta}$, $\hat{\alpha}_2$, and $\hat{r}_{1,0}$ can be obtained using basic geometry:

$\hat{\beta} = \tan^{-1}\left(\frac{b\cos\alpha_1}{r_{0,0} - b\sin\alpha_1}\right)$  (5)

$\hat{\alpha}_2 = \alpha_1 + \hat{\beta}$  (6)

$\hat{r}_{1,0} = \sqrt{(r_{0,0} + b\sin\alpha_1)^2 + (b\cos\alpha_1)^2}$  (7)

In equations (5) through (7) the range and angle measurements from the transmission from transceiver TR0 are used to compute the estimates. Using the range measurements obtained from the transmissions from T1, values for β, α2, and r1,0 can be determined. If the measured values for r1,0 and α2 fall within acceptable error

bounds of the calculated estimates then the target is classified as a plane. If the

target does not fall within acceptable error bounds, the measurements are checked

to see if they meet the criteria for a corner classification. The error bounds for each

classification step are determined through testing which is discussed in Chapter 5.

The geometry of the corner situation, which is shown in Figure 2.8, produces equations very similar to the plane equations, with only a change in the manner in which the β angle is calculated.

Figure 2.8: Geometry for SONAR Array Encountering a Corner (Based on [5])

$\hat{\beta}_{corner} = -\tan^{-1}\left(\frac{b\cos\alpha_1}{r_{0,0} - b\sin\alpha_1}\right)$  (8)

$\hat{\alpha}_{2,corner} = \alpha_1 + \hat{\beta}_{corner}$  (9)

$\hat{r}_{1,1,corner} = \sqrt{(r_{0,0} + b\sin\alpha_1)^2 + (b\cos\alpha_1)^2}$  (10)

Finally, if the criterion for a corner classification is not met, the target’s geometry is checked to see if it meets the necessary requirements for a classification as an edge.

Figure 2.9 shows the geometry of the edge situation and is followed by the equation for $\hat{r}_{1,1,edge}$, which is the only check that needs to be made for an edge once the checks for a plane and a corner have failed.

Figure 2.9: Geometry for SONAR Array Encountering an Edge (Based on [5])

$\hat{r}_{1,1,edge} = \frac{r_{0,0}}{2} + \left(r_{1,1} - \frac{r_{0,0}}{2}\right)$  (11)

$= \frac{r_{0,0} + \sqrt{r_{0,0}^2 + 4b^2 - 4 r_{0,0} b\sin\alpha_1}}{2}$  (12)

If the requirements are not met for any of the above scenarios, the object is

identified as an unknown object. This classification process is outlined in Figure 2.10.

Figure 2.10: Classification Process Flowchart
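The flow of Figure 2.10 can be summarized in code. The sketch below applies the plane test of equations (5) through (7) and the corner test of equations (8) through (10), falling through to an unknown classification; the edge test of equations (11) and (12) would follow analogously. The function name, the argument layout, and the threshold values are illustrative assumptions rather than the exact implementation used in this research.

```cpp
#include <cmath>
#include <iostream>
#include <string>

// Illustrative 2D classification sketch following Figure 2.10.
//   r00, alpha1  range and bearing derived from the TR0 transmission (equation (4))
//   rMeas, aMeas measured range and bearing derived from the T1 transmission
//   b            array spacing used in equations (5)-(10)
// Thresholds are placeholders; Chapter 5 discusses how acceptable bounds were chosen.
std::string classify2D(double r00, double alpha1, double rMeas, double aMeas,
                       double b, double rangeTol, double angleTol) {
    // Plane hypothesis, equations (5)-(7).
    double betaP   = std::atan(b * std::cos(alpha1) / (r00 - b * std::sin(alpha1)));
    double alpha2P = alpha1 + betaP;
    double rP      = std::sqrt(std::pow(r00 + b * std::sin(alpha1), 2) +
                               std::pow(b * std::cos(alpha1), 2));
    if (std::fabs(rMeas - rP) < rangeTol && std::fabs(aMeas - alpha2P) < angleTol)
        return "plane";

    // Corner hypothesis, equations (8)-(10): the sign of the beta angle changes,
    // while the reconstructed range has the same algebraic form as equation (7).
    double betaC   = -betaP;
    double alpha2C = alpha1 + betaC;
    if (std::fabs(rMeas - rP) < rangeTol && std::fabs(aMeas - alpha2C) < angleTol)
        return "corner";

    // The edge test of equations (11)-(12) would be checked here; omitted for brevity.
    return "unknown";
}

int main() {
    // Illustrative values: 2 m range, small bearing, 0.1 m array spacing.
    std::cout << classify2D(2.0, 0.10, 2.02, 0.12, 0.10, 0.05, 0.05) << "\n";
    return 0;
}
```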


The array setup shown above represents the minimum required for object classification. Increasing the number of SONAR units along an axis of the array could result in decreased error bounds and potentially better object classification; however, it would also increase the total scan time and the cost due to the increased number of sensors. As discussed in Chapter 5, increasing the scan time adversely affects the ability of the array to accurately track motion.

2.5. 3D Target Localization and Classification

The 3D SONAR array, proposed and developed in [6] and shown in Figure 2.11,

joins two of the 2D SONAR arrays developed in [5] together into a single array. This

is the SONAR array configuration that will be used for the purpose of this research, so the array and the associated localization techniques are summarized here.

2.5.1. 3D Sonar Array

One of the 2D SONAR arrays is mounted vertically, while the other is mounted horizontally. The spacing between the individual SONAR units is increased slightly for more efficient localization and classification. The two arrays are joined at the transceiver location.

Figure 2.11: 3D SONAR Array (Based on [6])

With this array, the horizontal bearing angle and the vertical elevation angle can

both be calculated using equation (4), derived from equation (3), for each individual

2D array.

2.5.2. 3D Target Localization

Figure 2.12 shows the 3D SONAR array with the elevation angle (β), the bearing angle (α), and the range (r) to the target labeled. Note that the angle β differs from the β angle used in Section 2.4.3.

Figure 2.12: 3D SONAR Array Geometry (Based on [6])

Figure 2.12 clearly shows that the horizontal SONAR array is used to calculate the

bearing angle, α, to the target, whereas the vertical SONAR array is used to calculate the elevation angle, β, to the target. With the combined results of the two 2D arrays,

the target can be localized in 3D using the following (spherical) equations:

$x = (r \cdot \cos\beta) \cdot \cos\alpha$  (13)

$y = (r \cdot \cos\beta) \cdot \sin\alpha$  (14)

$z = r \cdot \sin\beta$  (15)

In this setup, the two outermost SONAR units (designated as T1 and T2 in Figure

2.11) are used purely for classification purposes, with the localization of the 3D

point coming from the three innermost SONAR units (TR0, R1, and R2 in Figure

2.11). A full system scan is represented as the transmission on T1 and the reception

on TR0, R1, and R2, the transmission on T2 and the reception on TR0, R1, and R2, and the transmission on TR0 and the reception on TR0, R1, and R2. Because the

SONAR scan will occur when the entire array is stationary, the point seen by the innermost array will be the same for both the horizontal and vertical arrays. This

3D point is then given a 3D classification, which is discussed in the next section.
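Equations (13) through (15) map the measured range r, bearing angle α, and elevation angle β into Cartesian coordinates; the following minimal C++ sketch, with illustrative names and sample values, performs that conversion.

```cpp
#include <cmath>
#include <iostream>

const double kPi = 3.14159265358979323846;

struct Point3D { double x, y, z; };

// Spherical-to-Cartesian localization per equations (13)-(15).
Point3D localize(double r, double alpha, double beta) {
    return { r * std::cos(beta) * std::cos(alpha),
             r * std::cos(beta) * std::sin(alpha),
             r * std::sin(beta) };
}

int main() {
    // Illustrative values: 2 m range, 10 degree bearing, 5 degree elevation.
    Point3D p = localize(2.0, 10.0 * kPi / 180.0, 5.0 * kPi / 180.0);
    std::cout << "x=" << p.x << " y=" << p.y << " z=" << p.z << "\n";
    return 0;
}
```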

2.5.3. 3D Target Classification

Using the 2D target classification concept again, targets are classified as a plane, edge, corner, or an unknown object in both the vertical and horizontal SONAR arrays independently. The results from each array are then combined into a combined classification; therefore, 16 different classifications can be given. Nine of these classifications result in a positive outcome of the classification process in both the horizontal and the vertical array (these are shown in Table 2.1).

Six combinations can only be classified in either the horizontal or the vertical array, and one combination cannot be classified in either the horizontal or the vertical array [6].


Horizontal Sensor Classification | Vertical Sensor Classification | 3D Target Classification
Plane  | Plane  | 3D Plane
Plane  | Corner | Horizontal Line Corner
Plane  | Edge   | Horizontal Line Edge
Corner | Plane  | Vertical Line Corner
Corner | Corner | Corner Point
Corner | Edge   | Complex Point
Edge   | Plane  | Vertical Line Edge
Edge   | Corner | Complex Point
Edge   | Edge   | Edge Point

Table 2.1: Positively Classified 3D Targets (Based on [6])

Each of the different 3D classifications listed in Table 2.1 is shown below in Figure 2.13.

Figure 2.13: 3D Classification Geometry (Based on [6])

Though it is theoretically possible that the SONAR array configuration discussed in

this section could be used as a phased array, such usage is beyond the scope of this

research since in our case we cannot control the actual SONAR output signal or

process the raw input signal, and as such is not discussed. Future work may include

advanced signal processing methodologies.

CHAPTER 3: SYSTEM DESIGN

3.1. Hardware

The hardware setup/system developed for this thesis consists of multiple

SONAR units, a single IMU, and two USB‐to‐I2C units. Development of the SONAR

array was done collaboratively with Robert Widmer, who will discuss navigation

techniques using the same array in [1] (to appear).

3.1.1. SONAR Units

The SONAR units utilized for this thesis are Devantech R287-SRF02 (SRF02)

units (pictured in Figure 3.1), which measure 24 mm wide x 20 mm deep x 17 mm

high.

Figure 3.1: Devantech R287-SRF02 Ultrasonic Ranger [4]


These low-cost ultrasonic rangers have a maximum range of 6 meters and a minimum range of approximately 15 cm. They can be interfaced through a standard

UART or I2C interface, the latter of which is utilized for the purposes of this research.

Each unit has a beam width of approximately 55 degrees (the beam pattern from the

manufacturer is shown in Figure 3.2). In standard operation, each SONAR unit

transmits an 8‐cycle burst at a rate of 40 kHz and listens for the first echo response.

The corresponding range measurement is then reported in inches, centimeters, or

microseconds depending on the command sent to the unit by the user. Each SONAR

unit is also capable of being used exclusively as a transmitter or a receiver. In

transmission‐only mode, an 8‐cycle burst at a rate of 40 kHz is sent by the unit, with

no reception cycle, whereas in reception‐only mode the unit listens until it receives

a single response (without a transmission cycle). When communication takes place

using I2C each SONAR unit’s address is set to I2C address 0xE0 by default, and can be

changed by the user to any address up to 0xFE in two‐step intervals (0xE0, 0xE2,

0xE4, etc.).

Figure 3.2: Devantech R287-SRF02 Beam Pattern [3]

3.1.2. SONAR Array

The SONAR array used for this research follows the layout discussed in Section

2.5.1 and an image of the setup is shown in Figure 3.3. The array utilizes the SONAR units discussed in Section 3.1.1.

Figure 3.3: 3D SONAR Array


3.1.3. Inertial Measurement Unit

The IMU used for this thesis is a Microstrain 3DM-G, which is a consumer grade

IMU. The decision to use a consumer grade IMU rather than a navigation grade IMU was made because of the relatively low cost for the consumer grade IMU in comparison to navigation grade units (approximately $1,000 compared to $50,000

[2]). While the consumer grade IMU will not produce as accurate results as a navigation grade IMU would, it is capable of being used for this project because integrating the IMU with the SONAR array could reduce the errors inherent to an

IMU. This particular IMU has a position drift due to an acceleration bias of approximately 0.1 m/s², and a gyro drift of approximately 2000 deg/hr [2]. Reduction and mitigation of these errors is discussed in Section 3.3.

3.1.4. Computer System

The computer used for testing and data acquisition for the purpose of this research was an Apple MacBook Pro running OS X 10.5.3. The computer has a 2.4 GHz Intel Core 2 Duo processor and 2 GB of 667 MHz DDR2 SDRAM. These specifications far exceed the computational resources required to perform the data acquisition function; however, the choice of this laptop was based on its portability and availability.

3.1.5. System Communication Bus

I2C was chosen as the communication bus for interfacing the SONAR array with the PC. In order to easily set up a connection between the built-in I2C channel on each individual SONAR unit and the PC, a Devantech R286-USB-I2C module (pictured in Figure 3.4) was utilized.

Figure 3.4: Devantech R286-USB-I2C Module [4]

The R286‐USB‐I2C module uses the FTDI FT232R USB chip to handle all USB communications, which allows for the use of FTDI’s Virtual COM Port (VCP) drivers

[4]. Once the VCP drivers are installed, communication with any I2C device connected to the USB‐I2C module can be completed through the use of built‐in commands, which must adhere to the format depicted in Table 3.1.

Field     | Primary USB-I2C command | Device Address + R/W bit | Device Internal Address | Number of data bytes     | Data bytes
Byte Type | I2C_AD1                 | Addr+R/W                 | Register                | Byte Count               | Data
Example   | 0x55                    | 0xE0                     | 0x00                    | 0x01                     | 0x53
Meaning   | USB-I2C command         | SONAR I2C address        | SONAR command register  | One command byte follows | Ranging in cm

Table 3.1: USB-I2C Command Set for Interfacing with SONAR units (based on [4])
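To illustrate the command format of Table 3.1, the sketch below writes the five-byte ranging command to the module's virtual COM port using POSIX calls. The device path, the omission of serial-port configuration, and the assumption that the VCP can be driven with plain open/write calls are simplifications for illustration and are not taken from the thesis software.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Hypothetical VCP device path exposed by the FTDI driver (platform dependent).
    const char* port = "/dev/tty.usbserial";
    int fd = open(port, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }
    // (Serial-port configuration via termios is omitted here for brevity.)

    // Five-byte command from Table 3.1: primary USB-I2C command, SONAR I2C
    // address + R/W bit, internal register, byte count, ranging command byte.
    unsigned char cmd[5] = {0x55, 0xE0, 0x00, 0x01, 0x53};
    if (write(fd, cmd, sizeof(cmd)) != 5) perror("write");

    close(fd);
    return 0;
}
```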


3.2. Hardware Integration and Interfacing

In order to interface with the hardware, a custom software package had to be

developed. The package was developed in C and C++ and uses threads to handle the

IMU data collection and data storage processes. The IMU data is retrieved

continuously and stored in circular buffers, which regularly output the data to tab-delimited text files for post-test processing. The code that retrieves and outputs

the IMU data was developed by [13]. The software had to be modified to

incorporate the SONAR data collection and storage processes. The software was

developed with the intention of being controlled by a user, and therefore waits for

the user to input Ctrl+C (known as a SIGINT) from the keyboard and then

prompts the user to choose between either a SONAR system scan or to stop the data

collection process. The requirement for the user to trigger a SONAR scan was put in

place to ensure that the platform was completely stationary while running a SONAR

system scan. This process is used for the purpose of testing solely; in the future, the

SONAR scans should be initiated internally by the system, possibly from a guidance

or navigation system. Similar to the functionality of the IMU, all of the data is output

to tab‐delineated text files that can be imported into MATLAB for post‐test

processing.

3.3. Adjustments for Systematic Sources of Error

A few potential sources of error have been identified during the development of the outlined system. In order to create a more accurate system with the provided hardware, the following attempts have been made to reduce and/or mitigate these errors as much as possible.

3.3.1. Accounting for System Delay Errors Due to Noise on the Communication Bus

Experimentation has shown that the USB-I2C module can produce inconsistent errors due to internal noise on measurements when attempting to synchronize multiple SONAR units through the USB-I2C module. This does not present itself as an issue when the SONAR units are being used in standard transmit-then-receive mode; however, when transmitting on one SONAR unit and attempting to receive on another, large measurement errors are observed. Two potential solutions have been identified to account for this error. The first is to introduce a delay equivalent to approximately one meter (at the speed of sound) between the transmission command to one SONAR unit and the reception command to one of the other units.

This allows for all noise within the module to subside and more accurate measurements to be observed. The disadvantage of this option is that it limits the system to a minimum range of one meter (instead of the 15 cm minimum specified for a single SONAR unit). The other solution, which was used in our data collection system, is to use three separate USB-I2C modules: one module for the SONAR units that only perform pulse transmissions, one module for the SONAR units that only perform reception, and, finally, one module for the SONAR units that are used for both transmission and reception. Because up to 16 SONAR units can be connected to a single USB-I2C module, this setup allows for eight separate SONAR arrays (using the outlined configuration) to be used without additional hardware in future improvements of the proposed system.

3.3.2. Delay Between Transmission and Reception on Separate SONAR Units

The SONAR array requires that a pulse be transmitted on one SONAR unit and received on another. The transmissions must be sent to the devices one after another through two separate USB-I2C modules, which have an internal transmission speed of 19.2 Kbps. The delay between the transmissions produces an error in the DOF calculation that must be accounted for. For the purpose of this research, the transmission command is sent first, followed by the reception command, so the error due to the delay, expressed in centimeters, must be added back into the DOF measurement. The resulting DOF measurement calculation in meters is shown in equation (16).

$\text{Final } DOF = DOF + \frac{5\ \text{bytes}}{19200\ \text{bytes/sec}} \cdot c_{air}$  (16)
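Equation (16) in code, as a minimal sketch with illustrative names and values:

```cpp
#include <iostream>

// Equation (16): add back the distance the sound travels during the 5-byte
// command delay on the 19,200 byte/s USB-I2C link.
double correctedDof(double dofMeters, double cAir) {
    return dofMeters + (5.0 / 19200.0) * cAir;
}

int main() {
    double cAir = 331.3 + 0.606 * 21.0;                     // speed of sound at 21 C
    std::cout << "delay correction: " << (5.0 / 19200.0) * cAir << " m\n"  // ~0.09 m
              << "corrected DOF for 2.000 m: " << correctedDof(2.000, cAir) << " m\n";
    return 0;
}
```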

3.3.3. Reducing IMU Errors

In order to reduce the IMU errors due to the gyro drift, a zero-velocity update is performed by the algorithms using the IMU data. Before any tests are initiated, the

IMU collects a large amount of data while the platform is completely stationary. For

the purpose of this research, the IMU was left running and completely stationary for

approximately 5 minutes. The effect of the gyro drift on the orientation outputs can

then be observed, and a polynomial fit can be applied to the calculated orientation data to isolate the drift. This estimate of the drift error will be good for a short period of time and can be used to remove the error from short-term future readings. This process is done at system start-up, and then again (with smaller data sets) when the platform is stationary while the SONAR array is scanning. More complicated error estimation strategies using Kalman filters, such as the ones described in [13], are outside the scope of this thesis but are recommended for future implementations.
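As an illustration of the stationary calibration step, the sketch below estimates a constant gyro bias from data collected while the platform is known to be stationary and subtracts it from later readings. This constant-bias case is a simplification of the polynomial fit described above, and all names and values are illustrative.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Average the gyro output over a window in which the platform is known to be
// stationary; the result is a bias estimate that can be removed from subsequent
// readings. (The thesis fits a polynomial to the orientation outputs; a constant
// bias is the zeroth-order case and is shown here only as a sketch.)
double estimateGyroBias(const std::vector<double>& stationaryRates) {
    return std::accumulate(stationaryRates.begin(), stationaryRates.end(), 0.0)
           / stationaryRates.size();
}

double correctedRate(double rawRate, double bias) {
    return rawRate - bias;
}

int main() {
    std::vector<double> stationary {0.031, 0.029, 0.030, 0.032};  // deg/s while still
    double bias = estimateGyroBias(stationary);
    std::cout << "bias " << bias << " deg/s, corrected later reading "
              << correctedRate(0.045, bias) << " deg/s\n";
    return 0;
}
```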

3.3.4. Addressing SONAR Unit Synchronization Errors

When transmitting on one SONAR unit and receiving on another, synchronization of the measurements must be performed to ensure accurate results. The manufacturer of the SONAR units does not provide any information about synchronization between multiple SONAR units. This makes the synchronization process difficult, resulting in an ad hoc approach.

The first step of the ad hoc procedure is to synchronize the transmission on a single SONAR unit and the reception on another. This is done by sending the commands to transmit on the first SONAR unit and then the command to receive on the second SONAR unit. It was observed that any setup having more than one

SONAR unit per USB to I2C module resulted in data loss when transmitting the

commands. In order to limit this issue, all transmitters were placed on one USB to

I2C module, all receivers were placed on another USB to I2C module, and finally the transceiver (TR0) was placed on its own USB to I2C module. After moving the

SONAR units to their own USB to I2C modules, the accuracy of readings from a transmission on one SONAR unit and reception on another was verified against the measurements from the laser range finder. The next step was to continue adding

SONAR units to the array until the final setup of two transmitters, two receivers, and one transceiver was met. In the process of adding the necessary SONAR units it was discovered that only one SONAR unit could actively be in receive mode per USB to

I2C module, and that delays needed to be added between successive commands sent

through a single USB to I2C module; the latter of which was attributed to noise on the

communication bus. Ultimately, these delays and additional steps which were

required for successful synchronization increased the total time of a full system scan

to 1.16 seconds, which limits the system to an operating rate of 0.862 Hz.

CHAPTER 4: MOTION TRACKING

4.1. Identifying Potentially Moving Objects

In order to positively identify and properly localize a moving object, the object

must be observed by all three of the SONAR units (TR0, R1, and R2) of one branch.

Failure to be observed by one of the three SONAR units will result in the inability of

the system to create either a bearing or an elevation angle to the object. Without a

bearing or elevation angle, the object’s location in the environment cannot be

determined. Identification of the moving objects follows a different process

depending on whether the SONAR array platform is in motion itself, or if the

platform is stationary.

4.1.1. Stationary Identification Process

The stationary identification process requires SONAR system scans to be made to derive a known image of the environment. Initially, several scans are made of the current position; these scans are then averaged together to improve the known image of the environment. After the surroundings have been identified, all future scans will be compared to this known image. If the results of the new scans are sufficiently different from the original environment, it is likely that a new object has entered into the field-of-view of the SONAR array. Because the SONAR array

itself is stationary, it is clear that this object can then be classified as a moving

object. Using the techniques outlined in Section 2.5.2 the elevation and bearing

angles to the object can be found; from these angles we can calculate the 3D

Cartesian coordinates of the object using equations (13), (14), and (15) shown in Section 2.5.2. This process will limit motion tracking to the beam width of the SONAR units and to stationary operation of the SONAR array. In order for an object to be detected by the system in this process, it must also be moving sufficiently slowly that it can be observed by the SONAR array during a full system scan. In order to make velocity estimates, the object must be seen by the SONAR array at least twice.
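The comparison step of the stationary identification process can be sketched as follows: a newly localized point that does not match any point of the averaged baseline environment within a threshold is flagged as potentially moving. The structure and the threshold value are illustrative; the threshold actually used is derived from the localization error analysis in Chapter 5.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

struct Point3D { double x, y, z; };

double dist(const Point3D& a, const Point3D& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Flag an observed point as potentially moving if it does not match any point
// of the averaged baseline environment within the given threshold (meters).
bool isPotentiallyMoving(const Point3D& observed,
                         const std::vector<Point3D>& baseline,
                         double threshold) {
    for (const Point3D& p : baseline)
        if (dist(observed, p) < threshold) return false;  // matches known environment
    return true;  // sufficiently different from every baseline point
}

int main() {
    std::vector<Point3D> baseline { {1.83, -0.16, 1.01}, {3.10, 0.75, 0.40} };  // illustrative
    Point3D newPoint {2.40, 0.10, 0.90};
    std::cout << (isPotentiallyMoving(newPoint, baseline, 0.08) ? "moving" : "static") << "\n";
    return 0;
}
```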

4.1.2. Moving Identification Process

In order to identify and potentially track moving objects while the platform holding the SONAR array is in motion, the system must rely on the IMU in addition to the SONAR array. In this process, the SONAR array is used to detect planar surfaces in the environment. The location of these planar surfaces with respect to the SONAR array will be uniquely determined by the range, bearing angle, and elevation angle as determined in Chapter 2. Reference [1] will discuss the use of these features for navigation of the moving platform in a manner similar to the one described in [13]. The platform then moves a short distance to another location.

After the platform moves, the IMU is used to predict the new location and orientation of the previously detected planar surfaces. The SONAR array then performs another scan at this new location. The planar surfaces derived from the new SONAR measurements are then compared to the IMU predictions to perform data association. After the planar surface feature association has been completed, the change in position and orientation is derived from the consecutive planar surfaces. The difference between the computed position and orientation change and the position and orientation change calculated from the IMU measurements will then be used to calibrate the IMU. This continuous feedback mechanism is shown in Figure 4.1.

Figure 4.1: Moving Identification Process

In this process, if any surface localized by the SONAR arrays differs from the

predicted planes of the IMU beyond an acceptable threshold, it is marked as a

potentially moving object. In order for this method to work properly, the IMU must

be producing trustworthy predicted planes, which requires each SONAR scan to be taken at short intervals, allowing for re-calibration of the IMU after each comparison. Because of the short time between successive SONAR scans and the application of the zero-velocity update to the IMU, the IMU drift should be greatly limited; for this reason, the acceptable threshold can be determined almost entirely based on the accuracy of the SONAR units.

4.2. Moving Object Velocity and Position Estimations

Once a moving object has been localized using the stationary 3D SONAR array,

velocity estimates can be made. Note that, in case the user is moving, the calibrated

user position from both the IMU and the SONAR observations of the planar surfaces

[1] must be used to compensate for the user's change in position. In order to accommodate this process, a timestamp is recorded for each measurement set of the

SONAR array.

4.2.1. Target Velocity Estimation

In order to estimate the moving object's velocity, the object must be observed by the array in at least two separate system scans. With two separate timestamps and the two 3D locations for the object, an estimate of the object's velocity can be made using equation (17).

$\mathbf{v} = \frac{1}{t_2 - t_1}\begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \\ z_2 - z_1 \end{bmatrix} = \frac{\Delta\mathbf{x}}{\Delta t}$  (17)

with magnitude (speed):

$speed = \|\mathbf{v}\| = \frac{\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}}{t_2 - t_1}$  (18)

Note that this process assumes that the SONAR array is stationary long enough for the full system scan to be made; if this is not the case, the velocity would have to be adjusted based on the platform's velocity. Scanning while moving, though, could potentially increase errors in the classification and localization of objects in the environment, which is why all scans were made when the array was stationary for the purpose of this thesis. Once the velocity of the localized object is estimated, predictions can be made about the future locations of the object.
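Equations (17) and (18) in code, assuming the array was stationary for both scans; the observation structure and the sample values are illustrative.

```cpp
#include <cmath>
#include <iostream>

struct Obs { double t, x, y, z; };  // timestamped 3D fix of the object

// Velocity estimate per equation (17) and speed per equation (18).
void estimateVelocity(const Obs& a, const Obs& b,
                      double& vx, double& vy, double& vz, double& speed) {
    double dt = b.t - a.t;
    vx = (b.x - a.x) / dt;
    vy = (b.y - a.y) / dt;
    vz = (b.z - a.z) / dt;
    speed = std::sqrt(vx * vx + vy * vy + vz * vz);
}

int main() {
    Obs first  {0.00, 1.80, -0.20, 1.00};   // illustrative scan result
    Obs second {1.16, 1.75, -0.10, 1.00};   // one full scan period (1.16 s) later
    double vx, vy, vz, s;
    estimateVelocity(first, second, vx, vy, vz, s);
    std::cout << "v = (" << vx << ", " << vy << ", " << vz << ") m/s, |v| = " << s << " m/s\n";
    return 0;
}
```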

4.2.2. Target Location Tracking and Future Position Prediction

If an object is observed by the SONAR array during (at least) two successive scans, a prediction of the future location of the object can be made in addition to the velocity estimates. In order to make predictions of future locations of the object, four different extrapolation techniques have been investigated. These techniques are discussed in detail in the following subsections and have been implemented for quality comparison.

4.2.2.1. Linear Extrapolation

The most basic form of extrapolation is a linear extrapolation. Linear extrapolation can be applied to the position in each axis by first calculating the divided difference between two data points using equation (19) and then projecting forward from the most recently found position, as demonstrated in equation (20).

$s = \frac{p_k - p_{k-1}}{t_k - t_{k-1}}$  (19)

$p_{k+1} = (t_{k+1} - t_k)\, s + p_k = (t_{k+1} - t_k)\, \frac{p_k - p_{k-1}}{t_k - t_{k-1}} + p_k$  (20)

where $p_k$ represents the most recently observed position of the object in either the x, y, or z axis, $p_{k-1}$ represents the previously observed position of the object in either the x, y, or z axis, and $t_k$ and $t_{k-1}$ represent the associated timestamps for the two

positions. Unfortunately, this method of prediction assumes that the object is always moving in a linear fashion from one point to the next. The method also fails

to take into consideration trends that occur over the entire flight path of the object

(should it be seen more than once by the SONAR array).
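A minimal sketch of the linear extrapolation of equations (19) and (20) along a single axis, with illustrative names and values:

```cpp
#include <iostream>

// Linear extrapolation along one axis: the divided difference s of the last two
// observations (equation (19)) is projected forward to t_{k+1} (equation (20)).
double linearExtrapolate(double tPrev, double pPrev,   // (t_{k-1}, p_{k-1})
                         double tLast, double pLast,   // (t_k,     p_k)
                         double tNext) {               // t_{k+1}
    double s = (pLast - pPrev) / (tLast - tPrev);      // equation (19)
    return (tNext - tLast) * s + pLast;                // equation (20)
}

int main() {
    // Illustrative x-axis positions observed one full scan (1.16 s) apart.
    double prediction = linearExtrapolate(0.0, 1.80, 1.16, 1.75, 2.32);
    std::cout << "predicted x at t = 2.32 s: " << prediction << " m\n";  // 1.70 m
    return 0;
}
```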

Least-Squares Estimation

The least-squares solution (discussed in [9] and [10]) is essentially finding a line of best fit for the given data while minimizing the residual (error) vector. The least-squares solution is then the solution for x in equation (21) that minimizes the residual e, also given in equation (21).

$y = Hx + e$  (21)

where y is the measurement vector, H is the data matrix, x is the user state, and e

is the residual (or error) vector. By representing the residual vector as equation

(22), the final least‐squares solution is ultimately arrived at, shown in equation (23).

$e = y - H\hat{x} = y - \hat{y}$  (22)

$\hat{x} = (H^T H)^{-1} H^T y$  (23)

where $\hat{x}$ represents the least-squares estimate of $x$ and $\hat{y}$ is the estimated measurement vector based on $\hat{x}$.
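As an illustration of equation (23), the sketch below fits a line p(t) = x0 + x1·t to the observed positions along one axis by forming and solving the normal equations explicitly; the two-parameter line model and the sample data are assumptions made for brevity.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Least-squares line fit p(t) = x0 + x1*t via equation (23),
// x_hat = (H^T H)^{-1} H^T y, with one row [1, t_i] of H per sample.
// The 2x2 inverse is written out explicitly as a minimal sketch.
void leastSquaresLine(const std::vector<double>& t, const std::vector<double>& p,
                      double& x0, double& x1) {
    double n = static_cast<double>(t.size());
    double st = 0, stt = 0, sp = 0, stp = 0;
    for (std::size_t i = 0; i < t.size(); ++i) {
        st += t[i]; stt += t[i] * t[i]; sp += p[i]; stp += t[i] * p[i];
    }
    // H^T H = [[n, st], [st, stt]],  H^T y = [sp, stp]
    double det = n * stt - st * st;
    x0 = ( stt * sp - st * stp) / det;
    x1 = (-st  * sp + n  * stp) / det;
}

int main() {
    std::vector<double> t {0.0, 1.16, 2.32, 3.48};        // scan timestamps (s)
    std::vector<double> p {1.80, 1.75, 1.69, 1.66};       // illustrative x-positions (m)
    double x0, x1;
    leastSquaresLine(t, p, x0, x1);
    std::cout << "fit: p(t) = " << x0 << " + " << x1 << "*t\n";
    return 0;
}
```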

4.2.2.2. Polynomial Extrapolation

It is unlikely that every moving object observed by the array is moving in a linear fashion; for this reason it is also important to examine a polynomial extrapolation technique. A polynomial extrapolation is done over the entire data set, rather than the last two data points, and can therefore potentially provide a more accurate prediction by following trends in the data set. While there are many polynomial extrapolation methods, only the interpolation polynomial in Lagrange form and the Newton polynomial were examined for the purpose of this research. For all cases the method is applied to a set of k + 1 data points consisting of the recorded timestamp and the position of the object in the x, y, or z axis, $(t_0, p_0), \ldots, (t_k, p_k)$.

Interpolation Polynomial in Lagrange Form

The interpolation polynomial in Lagrange form (from [7]) is described by equation

(24) and equation (25).

$L(t) = \sum_{j=0}^{k} p_j\, l_j(t)$  (24)

$l_j(t) = \prod_{i=0,\, i\neq j}^{k} \frac{t - t_i}{t_j - t_i} = \frac{t - t_0}{t_j - t_0} \cdots \frac{t - t_{j-1}}{t_j - t_{j-1}} \cdot \frac{t - t_{j+1}}{t_j - t_{j+1}} \cdots \frac{t - t_k}{t_j - t_k}$  (25)

The result of equation (24) is a continuous polynomial function, and as such all predictions for future locations of the object are determined by inputting the time for the prediction into the result that was calculated with the known dataset. This method is effective as it takes into account trends that exist throughout the entirety of the data set and is able to handle multiple non-linear motions; however, it is not computationally practical, as each coefficient of the polynomial must be recalculated each time a new data point is added to the set. For this reason, a more practical approach was investigated in the Newton polynomial.

Newton Polynomial

The Newton polynomial (discussed in [8]) yields results similar to the interpolation polynomial in Lagrange form, but is a more feasible option with regard to computational requirements, as new data can be added without the need to

recalculate previously calculated coefficients. This method is described by equation

(26), equation (27), and equation (28).

$N(t) = \sum_{j=0}^{k} a_j\, n_j(t)$  (26)

$n_j(t) = \prod_{i=0}^{j-1} (t - t_i)$  (27)

$a_j = [p_0, \ldots, p_j]$  (28)

In equation (28) above, $[p_0, \ldots, p_j]$ represents the divided difference, the first-order form of which was shown in equation (19).

$N(t) = [p_0] + [p_0, p_1](t - t_0) + \ldots + [p_0, \ldots, p_k](t - t_0)(t - t_1)\cdots(t - t_{k-1})$  (29)

Future position predictions are made for this method in the same fashion as they

are done for the interpolation polynomial in Lagrange form.
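A sketch of the Newton-form extrapolation of equations (26) through (29): the divided-difference coefficients are built once and the polynomial is then evaluated at the prediction time. Names and sample values are illustrative.

```cpp
#include <iostream>
#include <vector>

// Build the divided differences [p_0,...,p_j] of equation (28) in place.
std::vector<double> dividedDifferences(const std::vector<double>& t,
                                       const std::vector<double>& p) {
    std::vector<double> c = p;                  // c[j] becomes [p_0,...,p_j]
    for (std::size_t j = 1; j < c.size(); ++j)
        for (std::size_t i = c.size() - 1; i >= j; --i)
            c[i] = (c[i] - c[i - 1]) / (t[i] - t[i - j]);
    return c;
}

// Horner-like evaluation of the Newton polynomial of equation (29) at time tQuery.
double newtonEval(const std::vector<double>& t, const std::vector<double>& c,
                  double tQuery) {
    double result = c.back();
    for (int i = static_cast<int>(c.size()) - 2; i >= 0; --i)
        result = result * (tQuery - t[i]) + c[i];
    return result;
}

int main() {
    std::vector<double> t {0.0, 1.16, 2.32};            // scan timestamps (s)
    std::vector<double> x {1.80, 1.70, 1.55};            // illustrative x-positions (m)
    std::vector<double> c = dividedDifferences(t, x);
    std::cout << "predicted x at t = 3.48 s: " << newtonEval(t, c, 3.48) << " m\n";
    return 0;
}
```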

4.2.2.3. Kalman Filter

Another common method for predicting future states in navigation is the Kalman filter. The Kalman filter is a set of mathematical equations that provides an efficient recursive estimator of the state of a process, in a way that minimizes the mean of the squared error [14]. This makes the Kalman filter an ideal tool for future state prediction of moving objects. The Kalman filter consists of two cycles: the prediction cycle and the measurement update cycle. In the prediction cycle, previous measurements are used to predict the state at the current (or next) epoch. Then, the measurement update adjusts the projected estimate by including the current measurement in the state estimate [14]; this process is outlined in Figure 4.2.

Figure 4.2: Kalman filter cycle (Based on [14])

These two processes are represented by the equations below; the time update

process is represented by equations (30) and (31), whereas the measurement

update process is represented by equations (32), (33), and (34).

$\hat{x}_k^- = \Phi \hat{x}_{k-1} + B u_{k-1}$  (30)

$P_k^- = \Phi P_{k-1} \Phi^T + Q$  (31)

$K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$  (32)

$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$  (33)

$P_k = (I - K_k H) P_k^-$  (34)

In the equations above, Q represents the process noise covariance matrix, R represents the measurement noise covariance matrix, the matrix Φ relates the state at the previous time step to the state at the current time step, matrix B relates the optional control input to the state x, matrix H relates the state to the measurement $z_k$, $P_k^-$ represents the a priori estimate error covariance, $P_k$ represents the a posteriori estimate error covariance, and finally the matrix $K_k$ is the Kalman gain [14].
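The cycle of Figure 4.2 can be sketched for a single axis with a constant-velocity state model; the state layout, the simplified process noise, and the numeric values below are illustrative assumptions and not the exact filter used in this research. One such filter would be run per axis of the tracked object.

```cpp
#include <iostream>

// One-axis constant-velocity Kalman filter sketch of equations (30)-(34).
// State x = [position, velocity]; only position is measured (H = [1 0]).
struct KF {
    double x[2] = {0, 0};                        // state estimate
    double P[2][2] = {{1, 0}, {0, 1}};           // estimate error covariance
    double q = 0.01;                             // process noise (illustrative)
    double r = 0.04;                             // measurement noise (illustrative)

    void predict(double dt) {                    // equations (30)-(31)
        x[0] += dt * x[1];
        double p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q;
        double p01 = P[0][1] + dt * P[1][1];
        double p10 = P[1][0] + dt * P[1][1];
        double p11 = P[1][1] + q;
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }
    void update(double z) {                      // equations (32)-(34)
        double s  = P[0][0] + r;                 // innovation covariance
        double k0 = P[0][0] / s, k1 = P[1][0] / s;
        double innov = z - x[0];
        x[0] += k0 * innov;
        x[1] += k1 * innov;
        double p00 = (1 - k0) * P[0][0],       p01 = (1 - k0) * P[0][1];
        double p10 = P[1][0] - k1 * P[0][0],   p11 = P[1][1] - k1 * P[0][1];
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }
};

int main() {
    KF kf;
    double meas[] = {1.80, 1.75, 1.69, 1.66};    // illustrative positions per scan (m)
    for (double z : meas) { kf.predict(1.16); kf.update(z); }
    kf.predict(1.16);                            // project one scan period ahead
    std::cout << "predicted position: " << kf.x[0] << " m\n";
    return 0;
}
```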

CHAPTER 5: TEST RESULTS AND ANALYSIS

5.1. Individual SONAR Unit Measurement Accuracy and Stability Tests

In order to determine the feasibility of detecting motion with SONAR units, the

SONAR units' ability to reliably make distance measurements has been tested. The

SONAR units have also been tested to ensure that the distance readings are

reasonably accurate. In order to test the accuracy, two SONAR units were fastened

to each end of a 71.12 cm board; the board was then fastened to a rotating turntable,

and placed 60.6552 cm from an (assumedly) flat, level-surfaced wall. This test setup

is shown in Figure 5.1.

Figure 5.1: SONAR Units Stability Test Setup


Each SONAR unit was then pinged 1,000 times and the results were recorded. This test was then repeated with both a 20-degree clockwise rotation and a 20-degree counter-clockwise rotation about the z-axis. The results of these tests, which are shown in Figures 5.2 and 5.3, show that the SONAR units' output is reliable, with a variation of about 1.8288 cm. The SONAR units also produced very accurate results, with a maximum observed percent error of 3.6687%; Table 5.1 shows the expected results, the actual results (based on the mean reading), and the percentage difference between the two. Table 5.2 shows the standard deviation for each SONAR unit in the three situations. The percent error shown in Table 5.1 was calculated against the expected results that were found through simulation (the simulation is discussed more thoroughly in Appendix A).

                          | Expected Distance SONAR 1 (cm) | Actual Distance (mean) SONAR 1 (cm) | % Error SONAR 1 | Expected Distance SONAR 2 (cm) | Actual Distance (mean) SONAR 2 (cm) | % Error SONAR 2
Parallel to Wall          | 60.6552 | 59.6676 | 1.6281% | 60.6552 | 60.6247 | 0.0502%
20˚ Clockwise Yaw         | 48.5973 | 48.7009 | 0.2132% | 74.5175 | 73.9292 | 0.3803%
20˚ Counter-clockwise Yaw | 72.8868 | 73.1398 | 0.3471% | 51.0113 | 49.1398 | 3.6687%

Table 5.1: Expected Versus Actual Distance Measurements

57

                          | Standard Deviation SONAR 1 (cm) | Standard Deviation SONAR 2 (cm)
Parallel to Wall          | 0.039624 | 0.042672
20˚ Clockwise Yaw         | 0.207264 | 0.103632
20˚ Counter-clockwise Yaw | 0.073152 | 0.036576

Table 5.2: Standard Deviations

Figure 5.2: SONAR 1 Measurements Histogram Over 1,000 Tests


Figure 5.3: SONAR 2 Measurement Histogram Over 1,000 Tests

The simulated orientations of the array for each case, which were used for the expected measurements in Table 5.1, are shown in Figures 5.4, 5.5, and 5.6. In these

figures the red lines represent the beam center for each of the SONAR units, the

green lines represent the outermost vertical limits of each SONAR unit's beam width raised about the center, and the magenta lines represent the outermost horizontal limits of the beam width expanded about the center. The beam width in these samples is shown only to highlight the point discussed earlier: that the shortest distance measured on each SONAR unit is from the SONAR unit to its reflection.

Figure 5.4: Geometry Plot with Platform Parallel to Wall

Figure 5.5: Geometry Plot with Platform Yawed 20˚ Clockwise


Figure 5.6: Geometry Plot with Platform Yawed 20˚ Counter‐Clockwise

5.2. Object Localization Tests

In order to properly test the localization techniques discussed in Chapter 2, an assumedly flat cardboard cutout was placed in the field of view of the SONAR array, and its dimensions and location relative to the SONAR array were measured with a laser range finder with millimeter accuracy. The cardboard cutout was placed 1.905 m away from the SONAR array, which was placed on top of a table. The object can only be seen by 4 of the 5 SONAR units (it is not in the field of view of T2). The object has a width of 0.762 m and a height of 1.272 m. The top of the object is 1.238 m above the top of the SONAR array, and the leftmost edge of the object is 0.0825 m from the edge of TR0. This geometric relationship is illustrated in Figure 5.8, where the SONAR array is represented as the blue square. Figure 5.8 also shows the least‐squares approximation of the observed “reference” location of the object in green.

Figure 5.7 shows the object's observed x, y, and z positions for the entirety of the test in blue, along with the least‐squares approximation (computed using the MATLAB polyfit function) of those positions in green.
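The least‐squares fit shown in Figure 5.7 can be reproduced with a few lines of MATLAB, sketched below. The variable names and the placeholder data are illustrative only, and a first‐order (linear) fit is assumed here since the object was stationary.

```matlab
% x_obs, y_obs, z_obs: 1x190 vectors of observed object positions (m), one per data sample
n = 190;                            % number of data samples recorded during the test
t = 1:n;                            % sample index used as the independent variable
x_obs =  1.83 + 0.01*randn(1, n);   % placeholder data standing in for the logged positions
y_obs = -0.16 + 0.01*randn(1, n);
z_obs =  1.01 + 0.01*randn(1, n);

% First-order least-squares fit of each coordinate versus sample index.
px = polyfit(t, x_obs, 1);  x_fit = polyval(px, t);
py = polyfit(t, y_obs, 1);  y_fit = polyval(py, t);
pz = polyfit(t, z_obs, 1);  z_fit = polyval(pz, t);

% The mean of the fitted values gives the estimated "reference" location of the object.
refLocation = [mean(x_fit), mean(y_fit), mean(z_fit)];
```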

Figure 5.7: Stationary Object Localization Test


Figure 5.8: 3D Object Localization

In Figure 5.8, the black square represents the cardboard cutout and the cyan‐colored spot represents the localized position of the cardboard cutout as determined by the SONAR array for each of the 190 data samples. By taking the mean value of the least‐squares solution for the x, y, and z positions, an estimated (x, y, z) “reference” location for the object is found to be (1.831669, ‐0.1557079, 1.013222) meters. The observed location of the object in the y and z axes is within the expected range (as can be seen in Figure 5.8), whereas the observed x coordinate is 7.33 cm shorter than expected, resulting in a percent error of 3.8494%. The observed percent error for object localization matches the maximum percent error observed when testing the SONAR units individually; therefore, a percent error of 4% will be used as a threshold for object localization when attempting to detect moving objects in the environment.

5.3. Object Classification Tests

The SONAR array was set up near a plane, a corner, and an edge, and 50 data samples were recorded at each location. The results were run through a MATLAB simulation to determine the classification for each of the data sets. The classification tests were conducted by treating the sensor array as two distinct arrays: a vertical sensor array and a horizontal sensor array. The classifications in both the vertical and horizontal arrays were then compared to the expected outcome. It was found that the array is best suited for vertical planes (confirming the results reported in [6]). Table 5.4 shows the number of times the vertical array provided the correct classification out of the 50 data samples, while Table 5.3 shows the number of correct classifications for the horizontal array (also over 50 data samples). Edge classification for this array proved to be very difficult, and nearly impossible using the equations and geometry outlined in [5]. Edge classification could be performed in these instances only because the surface the array was encountering was predetermined to be an edge, and the results could be mapped accordingly. Future work should be done to identify edges by determining the intersection of two known planar surfaces.


Horizontal Array: Number of Correct Classifications Per Surface Encountered

                     Threshold
                     5 cm    8 cm    10 cm
Horizontal Plane     14      26      36
Horizontal Corner    19      34      39
Horizontal Edge      15      32      34

Table 5.3: Horizontal Correct Classifications

Vertical Array: Number of Correct Classifications Per Surface Encountered

           Threshold
           5 cm    8 cm    10 cm
Plane      30      36      40
Corner     31      38      41
Edge       17      34      38

Table 5.4: Vertical Array Correct Classifications

Tables 5.3 and 5.4 clearly show that, given a larger classification threshold, an object is more likely to receive a known classification; however, too large a threshold could increase the possibility of an object receiving an incorrect classification (e.g., a corner being classified as a plane). In order to provide the highest level of accuracy when classifying objects with the current array, a threshold of approximately 8 cm should be used. It is important to note that the threshold for classification has no bearing on the threshold set for object localization. The localization of an object is done using only the 3D vector receiver made up of SONAR units TR0, R1, and R2; the SONAR units T1 and T2 are included in the overall array purely for classification purposes. As a result, there is one threshold for classification (noted above) and a separate, independent threshold for localization.

5.4. Stationary Motion Detection, Tracking, and Prediction

In order to test motion detection, tracking, and prediction with a stationary SONAR array, the array was placed on a platform mimicking the setup used in the object localization tests discussed in Section 5.2. A Styrofoam ball was affixed to a string and hung from a PVC pipe. The pipe (and by extension the ball) was moved into, and through, the field of view of the SONAR array by hand in a sinusoidal pattern. Figures 5.9 through 5.17 show the detected x, y, and z position of the moving object, as well as the predictions for future locations of the object based on linear extrapolation, the interpolation polynomial in Lagrange form, the Newton polynomial, and the least‐squares solution. In all cases the Newton polynomial overlaps the interpolation polynomial in Lagrange form, so the line width of the interpolation polynomial in Lagrange form was increased from one to two for clarity. The legend is the same for each figure and is therefore only shown with Figure 5.9.
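As a simple example of the first of these prediction methods, the sketch below forms a linear extrapolation of the next position from the two most recent SONAR fixes; the positions are placeholder values, and the same idea is applied to each axis independently.

```matlab
% Two most recent localized positions of the object (placeholder values)
p_prev = [1.80, -0.10, 1.00];     % position at sample n-1
p_curr = [1.75, -0.05, 0.95];     % position at sample n

% Linear extrapolation assumes the per-sample displacement stays constant,
% so the prediction for sample n+1 is the current position plus the latest
% displacement (applied to each axis independently).
p_next = p_curr + (p_curr - p_prev);
```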

Figure 5.9: Stationary Motion Detection Iteration 1

Figure 5.10: Stationary Motion Tracking Iteration 2

Figure 5.11: Stationary Motion Tracking Iteration 3

Figure 5.12: Stationary Motion Tracking Iteration 4

Figure 5.13: Stationary Motion Tracking Iteration 5

Figure 5.14: Stationary Motion Tracking Iteration 6

Figure 5.15: Stationary Motion Tracking Iteration 7

Figure 5.16: Stationary Motion Tracking Iteration 8

Figure 5.17: Stationary Motion Tracking Iteration 9

In Figures 5.9 through 5.17, it is clear that the best methods of predicting future locations of the object are the Kalman filter and the least‐squares solution. In Figure 5.18, these two methods are shown again at a scale more appropriate for viewing only these two methods.

Figure 5.18: Least‐Squares Solution and Kalman Filter

Tables 5.5 through 5.9 show all position predictions made by each prediction technique, the actual observed position, and the corresponding error.


Linear Extrapolation
      Prediction                      Actual                          Error
n     x        y        z            x         y         z           x        y        z
2     27.07    8.905    9.375        27.06     -5.038    -5.125      0.01     13.943   14.5
3     27.05    -17.53   -18.12       21.67     -10.75    -12.0       5.38     6.78     6.12
4     16.83    -15.87   -18.16       25.51     1.968     -4.625      8.68     17.838   13.535
5     28.95    13.36    1.983        25.52     1.895     8.625       3.43     11.465   6.642
6     25.52    1.83     20.5         27.99     9.209     -5.625      2.47     7.379    26.125
7     30.21    15.76    -18.39       1.1e-31   1.8e-15   -30.0       30.21    15.76    11.61
8     -25.08   -8.25    -51.84       30.21     1.78      -16.0       55.29    10.03    35.84
9     57.28    3.375    -2.507       22.99     -4.568    -24.62      34.29    7.943    22.113

Table 5.5: Linear Extrapolation Predictions

Least Squares Estimation
      Prediction                      Actual                          Error
n     x        y        z            x         y         z           x        y        z
2     25.49    0.612    1.373        27.06     -5.038    -5.125      1.57     5.65     6.498
3     24.70    0.534    -2.029       21.67     -10.75    -12.0       3.03     11.284   9.971
4     23.11    0.456    -5.431       25.51     1.968     -4.625      2.4      1.512    0.806
5     22.31    0.378    -8.833       25.52     1.895     8.625       3.21     1.517    17.458
6     21.52    0.299    -12.23       27.99     9.209     -5.625      6.47     8.91     6.605
7     20.73    0.221    -15.64       1.1e-31   1.8e-15   -30.0       20.73    0.221    14.36
8     19.93    0.143    -19.04       30.21     1.78      -16.0       10.28    1.637    3.04
9     19.14    -0.065   -22.44       22.99     -4.568    -24.62      3.385    4.503    2.18

Table 5.6: Least Squares Estimation Predictions


Interpolation Polynomial in Lagrange Form
      Prediction                      Actual                          Error
n     x        y        z            x         y         z           x        y        z
2     27.07    8.905    9.375        27.06     -5.038    -5.125      0.01     13.943   14.5
3     27.05    -18.58   -19.21       21.67     -10.75    -12.0       5.38     7.83     7.21
4     11.46    -8.506   -11.44       25.51     1.968     -4.625      14.05    10.474   6.815
5     50.73    39.99    21.32        25.52     1.895     8.625       25.21    38.095   12.695
6     -20.68   -86.55   3.926        27.99     9.209     -5.625      48.67    95.759   9.551
7     106.3    189.6    -56.13       1.1e-31   1.8e-15   -30.0       106.3    189.6    26.13
8     -288.5   -368.5   160          30.21     1.78      -16.0       318.71   370.28   176
9     978.4    714.0    -163.6       22.99     -4.568    -24.62      955.41   718.56   138.98

Table 5.7: Interpolation Polynomial in Lagrange Form Predictions

Newton Polynomial
      Prediction                      Actual                          Error
n     x        y        z            x         y         z           x        y        z
2     27.07    8.905    9.375        27.06     -5.038    -5.125      0.01     13.943   14.5
3     27.05    -18.58   -19.21       21.67     -10.75    -12.0       5.38     7.83     7.21
4     11.46    -8.506   -11.44       25.51     1.968     -4.625      14.05    10.474   6.815
5     50.73    39.99    21.32        25.52     1.895     8.625       25.21    38.095   12.695
6     -20.68   -86.55   3.926        27.99     9.209     -5.625      48.67    95.759   9.551
7     106.3    189.6    -56.13       1.1e-31   1.8e-15   -30.0       106.3    189.6    26.13
8     -288.5   -368.5   160          30.21     1.78      -16.0       318.71   370.28   176
9     978.4    714.0    -163.6       22.99     -4.568    -24.62      955.41   718.56   138.98

Table 5.8: Newton Polynomial Predictions


Kalman Filter
      Prediction                      Actual                          Error
n     x        y        z            x         y         z           x        y        z
2     27.07    -5.038   -5.125       27.06     -5.038    -5.125      0.01     0        0
3     27.06    -7.894   -8.563       21.67     -10.75    -12.0       5.39     2.856    3.437
4     24.36    -4.607   -7.250       25.51     1.968     -4.625      1.15     6.575    2.625
5     24.74    -2.981   -3.282       25.52     1.895     8.625       0.78     4.876    11.907
6     24.93    -5.432   -3.750       27.99     9.209     -5.625      3.06     14.641   1.875
7     25.55    -4.527   -8.125       1.1e-31   1.8e-15   -30.0       25.55    4.527    21.875
8     22.56    -1.337   -9.179       30.21     1.78      -16.0       7.65     3.117    6.821
9     22.62    -6.880   -1.111       22.99     -4.568    -24.62      0.37     2.312    23.509

Table 5.9: Kalman Filter Predictions

From the plots above, it is clear that the Newton polynomial and the interpolation polynomial in Lagrange form provide the best representation of the most likely flight path of, or track through, the sampled positions of the object. The results in Tables 5.7 and 5.8 also show that the Newton polynomial and the interpolation polynomial in Lagrange form yield identical results; therefore, because the Newton polynomial is computationally advantageous over the interpolation polynomial in Lagrange form, the Newton polynomial is the best solution for representing the flight path of the sampled data after it has been recorded. Unfortunately, the prediction errors become very large as the number of samples increases. This means that the Newton polynomial would be a poor choice for motion tracking and prediction, and could likely only be used for post‐collection flight path illustration.
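A minimal MATLAB sketch of the Newton polynomial used in this sense is given below. It builds the divided‐difference coefficients from the recorded samples of one coordinate and evaluates the polynomial at arbitrary times, which is how a post‐collection flight path can be reconstructed. The function name, variable names, and the example call are illustrative only.

```matlab
function y = newton_poly_eval(t, x, tq)
% NEWTON_POLY_EVAL  Evaluate the Newton-form interpolating polynomial.
%   t  : vector of sample times (must be distinct)
%   x  : recorded values of one coordinate at those times
%   tq : query times at which to evaluate the polynomial
t = t(:).';  x = x(:).';
n = numel(t);
coef = x;                            % divided-difference coefficients, built in place
for j = 2:n
    coef(j:n) = (coef(j:n) - coef(j-1:n-1)) ./ (t(j:n) - t(1:n-j+1));
end
% Nested (Horner-like) evaluation in the Newton basis.
y = coef(n) * ones(size(tq));
for j = n-1:-1:1
    y = y .* (tq - t(j)) + coef(j);
end
end
```

For example, `x_pred = newton_poly_eval(1:5, x_samples, 6)` (with `x_samples` a hypothetical vector of five recorded x coordinates) would extrapolate one sample beyond the data; Tables 5.7 and 5.8 show how quickly the error of such an extrapolation grows as more samples are included.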

From the errors observed, it is clear that none of the prediction methods provide a highly accurate prediction of future locations of the object. Table 5.10 shows the average (with the maximum outlier values removed) and median percent error for each of the prediction methods.

Prediction Method                            Average Percent Error    Median Percent Error
Linear Extrapolation                         625.35                   119.46
Least Squares Estimation                     67.91                    78.44
Interpolation Polynomial in Lagrange Form    2206.82                  171.7500
Newton Polynomial                            2121.039                 171.7500
Kalman Filter                                70.59                    30.985

Table 5.10: Prediction Percent Errors

Table 5.10 illustrates that the least squares estimation and the Kalman filter provide the best prediction performance, with the least squares estimation having the lowest average percent error and the Kalman filter having the lowest median percent error; however, the observed percent errors are still rather high. Prediction errors would likely decrease if the sampling rate of the data could be increased beyond the current operational speed of 0.862 Hz, as the SONAR array would then better capture the motion of the moving object. Such an increase in operational speed should reduce both the median and average percent error. Overall, the data from these trials indicates that the Kalman filter is the best solution for future position estimation, as it produces the lowest prediction errors while also producing relatively low tracking errors when compared to the other methods.

5.5. Moving Platform Motion Detection

Because the motion tracking and prediction techniques are the same for a stationary platform and for a moving platform, and because those techniques were discussed thoroughly in Section 5.4, only the IMU setup and motion detection for a moving platform are discussed in this section.

5.5.1. Application of Zero‐Velocity Update

The zero‐velocity update (as covered in Section 3.3.3) must be applied to each measurement individually, as the drift is unique in each axis for each measurement. Multiple hallway tests were conducted using the SONAR array in combination with the IMU; the results of one such test are shown here. The entire platform was kept stationary for a period of three minutes while data was collected from the IMU. A least‐squares polynomial fit was applied to the acceleration and angular rate data in each axis (x, y, and z) over this time period. By removing this polynomial from the data over that same time period, the data is shifted so that it is centered about zero; this is demonstrated in Figures 5.19 through 5.24. Note that this polynomial is removed from the entire data set, but is only shown being removed from the stationary data set at the beginning of the test, so that the re‐centering of the data about zero is clearly seen. In Figures 5.19 through 5.24, the top half of each figure shows the recorded data from the IMU in blue, with the polynomial fit of the data shown in green; the bottom half shows the data after the polynomial fit has been removed.

Figure 5.19: Zero‐Velocity Update Applied to Delta Velocity in X‐Axis

Figure 5.20: Zero‐Velocity Update Applied to Delta Velocity in Y‐Axis

Figure 5.21: Zero‐Velocity Update Applied to Delta Velocity in Z‐Axis

Figure 5.22: Zero‐Velocity Update Applied to Delta Theta in X‐Axis

Figure 5.23: Zero‐Velocity Update Applied to Delta Theta in Y‐Axis

Figure 5.24: Zero‐Velocity Update Applied to Delta Theta in Z‐Axis
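The de‐trending step illustrated in Figures 5.19 through 5.24 can be sketched in a few lines of MATLAB, shown below. The variable names, polynomial order, data rate, and placeholder data are assumptions for illustration; the key point is that the fit is computed only over the known stationary interval and then removed from the entire data set.

```matlab
fs   = 100;                            % assumed IMU output rate (Hz)
N    = 5 * 60 * fs;                    % five minutes of data as a stand-in for a hallway run
t    = (0:N-1) / fs;                   % time stamp for every sample
dv_x = 0.002*t + 0.001*randn(1, N);    % placeholder x-axis delta-velocity data with a slow drift

tEnd = 180;                            % platform known stationary for the first three minutes
stat = t <= tEnd;                      % logical index of the stationary interval

% Fit a low-order polynomial to the stationary interval only (order is an assumption).
p = polyfit(t(stat), dv_x(stat), 2);

% Remove the fitted drift from the entire data set so the stationary portion
% re-centers about zero, as shown in the bottom halves of Figures 5.19 through 5.24.
dv_x_corrected = dv_x - polyval(p, t);
```

The same procedure is repeated for the y and z delta‐velocity channels and for the three delta‐theta channels.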

5.5.2. IMU and SONAR Integration for Moving Platform Motion Detection

Once the zero‐velocity update has been applied to the output data from the IMU, it can be used in conjunction with the SONAR data for motion detection via the process outlined in Section 4.1.2. Because only one SONAR array was developed for the purpose of this research, it was rotated around the platform in incremental steps to provide a full 180‐degree view from the left to the right side of the platform; in the future the array would not be rotated, but multiple arrays would be used. The first SONAR scan was taken on the left side of the platform at what will be denoted as 0 degrees, and each successive scan was taken in 45‐degree increments until 180 degrees was reached; each scan location during this process is denoted in Figure 5.25. The locations of the SONAR array at 0˚, 90˚, and 180˚ are labeled in blue, whereas the locations at 45˚ and 135˚ are labeled in red for clarity.

Figure 5.25: SONAR Array Locations During Moving Platform Tests

After placing the platform into a hallway, multiple SONAR system‐scans were performed at each location shown in Figure 5.25, during which time the platform remained stationary. The platform was then moved further into the hallway and the process was repeated. Because it is known that the platform is stationary throughout all scans from position one to position five, the zero‐velocity update can be applied to all IMU data after each successive stationary period in an attempt to further reduce the gyro drift. Note that in an actual operational scenario this may not be the case, since the platform will be moving constantly; however, in that case all sensor arrays would make their measurements at the same time, removing the need to compensate from one scan to the next. There is also no need to compute the position of the platform when it is stationary, because its position remains the last computed position prior to the first SONAR system scan.

To accommodate this process, the IMU data was used in individual chunks rather than as a continuous data set. The position calculations were computed between each of the successive stationary periods, and all positions were then stored in the overall position vector for the entirety of the test. The times at which the platform was at a complete stop were used to calculate a zero‐velocity update on the IMU; these stationary periods were then removed from the data sets, so the following plots represent the platform's movement through the hallway with the stops removed for clarity. Figure 5.26 shows the Euler angles of the platform over the duration of the test (in incremental steps, as they were not computed when the platform was known to be entirely stationary). Figure 5.27 shows the velocity of the platform over the duration of the test (also in incremental steps), and Figure 5.28 shows the path taken through the hallway by the platform as recorded by the IMU. In Figure 5.28, the red stars on the path indicate stopping places for the platform, which is where the SONAR system‐scans were taken. During this hallway test, a moving object (in the form of a person walking down the hall) was only introduced on one side of the platform; thus, only the object localization scans for that side of the platform are shown in Figure 5.29.


Figure 5.26: Euler Angles Measured by IMU

Figure 5.27: Velocities Measured by IMU

Figure 5.28: East North Position with SONAR Scan Locations Indicated

Figure 5.29: Moving Platform Motion Identification

Each of the green stars in Figure 5.29 represents a localized position that was not identified as a potentially moving object, whereas each of the magenta boxes represents a localized, potentially moving object. In some cases a potentially moving object was seen by only a single SONAR unit, making it impossible to localize using the techniques outlined in Section 2.5; in these instances an average position was computed using the results of the other SONAR scans made at the same platform position. This average was then marked as a potentially moving object, since the potentially moving object must have been somewhere in the general area covered by the other system‐scans. With each of these locations identified, the same motion tracking and prediction methods shown in Section 5.4 can be applied in order to track the object and make predictions about its future locations.

CHAPTER 6: CONCLUSIONS AND FUTURE WORK

6.1. Conclusions

This research has investigated methods for detecting and tracking moving targets through the use of a SONAR array and an IMU; methods for future position prediction were also presented and evaluated. The SONAR array and associated platform were developed for the purpose of this research. The discussed methods encompass both a moving platform scenario and a scenario in which the platform is entirely stationary. The research revealed that the beam width of a single SONAR unit is an inherent shortcoming that requires the use of multiple SONAR units to overcome. Through the development of this thesis, it was found that when SONAR units are coupled together into the outlined array, it is possible to both detect and track motion, with the Kalman filter representing the best overall solution for future position prediction. In the system's current implementation, however, it does not translate well to a real‐time system because of the relatively slow speed of sound in conjunction with the data acquisition time of the SONAR array. The system can, however, be successfully used for slow‐moving platforms, or platforms that are allowed to make frequent stops.

6.2. Future Work

Throughout the course of the research, areas for future development of the system became apparent. The largest limiting factor of the SONAR array for motion tracking appears to be the operational speed of 0.862 Hz. Future work should attempt to develop or find SONAR units that can be synchronized without the additional overhead that the current SONAR units require. Had this overhead not been required for the current system, the time for a single system scan could have been reduced to 0.637 seconds, nearly doubling the operational speed to 1.569 Hz. Should the operational speed of the array be increased, the prediction methods outlined here should be re‐evaluated. In addition, more robust and thorough implementations of the Kalman filter should be explored.

Additionally, an effort should be made to reduce the overall footprint of the SONAR array. Motion detection and tracking could be done with only three SONAR units; however, the classification techniques require the additional two SONAR units. Increasing the number of SONAR units while reducing the offset between them could lead to a smaller array footprint; however, special care must be taken not to reduce the accuracy of the classification system.

Future attempts should also be made to reduce the thresholds for identifying moving targets. The threshold for classification should likewise be reduced, so that more certainty exists when classifying objects.

REFERENCES

[1] Widmer, R. (2008) Autonomous Navigation Using SONAR and an Inertial Measurement Unit. Master's Thesis, to appear, Ohio University.

[2] Uijt de Haag, M. (2008). Lecture Handout, Inertial Navigation Systems I: Introduction to INS Error Analysis, Ohio University.

[3] Acroname Robotics (2008, May 5 – last update). [Online]. Available: http://www.acroname.com/robotics/parts/R287‐SRF02.html [2008, February 3]

[4] Acroname Robotics (2008, May 5 – last update). [Online]. Available: http://www.acroname.com/robotics/parts/R286‐USB‐I2C.html [2008, February 3]

[5] Kleeman, L. and Kuc, R. (1995) Mobile Robot Sonar for Target Localization and Classification. The International Journal of Robotics Research. 14 (4), 295–318.

[6] Akbarally, H. and Kleeman, L. (1995) A Sonar Sensor for Accurate 3D Target Localisation and Classification. Proc. IEEE International Conference on Robotics and Automation. 3, 3003 – 3008.

[7] Wolfram MathWorld (2008, June 18 – last update). [Online]. Available: http://mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html [2008, April 17]

[8] Wolfram MathWorld (2008, June 18 – last update). [Online]. Available: http://mathworld.wolfram.com/NewtonsDividedDifferenceInterpolationFormula.h tml [2008, April 21]

[9] Wolfram MathWorld (2008, June 18 – last update). [Online]. Available: http://mathworld.wolfram.com/LeastSquaresFitting.html [2008, May 15]


[10] Van Graas, F. (2007). Lecture Handout, Electronic Navigation Systems: Least Squares Estimator, Ohio University.

[11] Barshan, B. and Kuc, R. (1992) A Bat‐Like Sonar System for Obstacle Localization. IEEE Transactions on Systems, Man, and Cybernetics. 22 (4), 636–646.

[12] Thomas, J.A., Moss, C.F., and Vater, M. (2002) ECHOLOCATION in Bats and Dolphins. The University of Chicago Press, Chicago.

[13] Venable, D. (2008) Implementation of a 3D Imaging Sensor Aided Inertial Measurement Unit Navigation System. Master's Thesis, Ohio University.

[14] Welch, G. and Bishop, G. (2006) An Introduction to the Kalman Filter. [Online] Available: http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html [2008, September 6]

[15] MATLAB Central (2005, October 24 – last update). [Online] Available: http://www.mathworks.com/matlabcentral/fileexchange/5377 [2008, September 6]

[16] North Yorkshire Bat Group (2009, June 5 – last update). [Online]. Available: http://www.nybats.org.uk/page8.htm [2009, June 6]


APPENDIX A: SIMULATION

As discussed in Section 5.1, a simulation was developed to determine the expected SONAR readings in distinct situations. More specifically, this simulation was developed so that the expected readings of the SONAR units could be determined for the SONAR stability tests. The simulation was developed in MATLAB and assumes two SONAR units placed 71.12 cm apart on a platform. The user is prompted for the distance from this platform to a wall assumed to be flat, as well as for the roll, pitch, and yaw of the platform. The simulation first determines the orientation of the platform by using a direction cosine matrix (DCM). This process is shown below:

$$
\mathrm{DCM} = \begin{bmatrix}
c(\theta)c(\psi) & -c(\phi)s(\psi) + s(\phi)s(\theta)c(\psi) & s(\phi)s(\psi) + c(\phi)s(\theta)c(\psi) \\
c(\theta)s(\psi) & c(\phi)c(\psi) + s(\phi)s(\theta)s(\psi) & -s(\phi)c(\psi) + c(\phi)s(\theta)s(\psi) \\
-s(\theta) & s(\phi)c(\theta) & c(\phi)c(\theta)
\end{bmatrix} \tag{A.1}
$$

$$
\mathrm{DCM} \cdot [x, y, z]^{T} \tag{A.2}
$$

where c in the DCM represents cosine and s represents sine.

The simulation then determines the distance from the wall to each of the two SONAR units; this distance is measured from the middle of each SONAR unit's beam width, as well as from the two outermost edges of its beam width. These distances are determined by differencing the platform's x, y, and z positions from the wall's x, y, and z positions and then applying the distance equation as follows:

$$
\text{distance} = \sqrt{x^{2} + y^{2} + z^{2}} \tag{A.3}
$$

Because a SONAR unit returns the first echo received as the distance to the wall, the minimum value calculated among the three different points of the SONAR unit's beam width is used as the expected reading. The entire solution is also plotted, as can be seen in Section 5.1 in Figures 5.4, 5.5, and 5.6.
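A compact MATLAB sketch of this expected‐reading computation is given below. It rotates the beam‐center and the two horizontal beam‐edge directions of one SONAR unit by the DCM of equation (A.1), intersects them with a flat wall at a user‐supplied distance, and keeps the minimum range as the expected reading. The beam half‐width, the choice of only horizontal edge points, and all variable names are illustrative assumptions, not the exact implementation used for the thesis.

```matlab
% User-supplied geometry (placeholder values)
d_wall   = 0.606552;                       % distance from platform to the wall along x (m)
roll = 0; pitch = 0; yaw = 20*pi/180;      % platform attitude (rad)
halfBeam = 27.5*pi/180;                    % assumed SONAR beam half-width (rad)

% Direction cosine matrix from equation (A.1) (yaw-pitch-roll)
c = @cos; s = @sin;
DCM = [ c(pitch)*c(yaw), -c(roll)*s(yaw)+s(roll)*s(pitch)*c(yaw),  s(roll)*s(yaw)+c(roll)*s(pitch)*c(yaw);
        c(pitch)*s(yaw),  c(roll)*c(yaw)+s(roll)*s(pitch)*s(yaw), -s(roll)*c(yaw)+c(roll)*s(pitch)*s(yaw);
       -s(pitch),         s(roll)*c(pitch),                        c(roll)*c(pitch) ];

% Beam center and the two outermost horizontal beam-edge directions, in the body frame
dirs = [1 0 0; cos(halfBeam) sin(halfBeam) 0; cos(halfBeam) -sin(halfBeam) 0]';

% Rotate the beam directions into the navigation frame (equation (A.2))
dirs_nav = DCM * dirs;

% Range along each direction to a flat wall located at x = d_wall
ranges = d_wall ./ dirs_nav(1, :);
ranges(dirs_nav(1, :) <= 0) = Inf;         % directions that never reach the wall

% The first echo corresponds to the shortest of these ranges
expectedReading = min(ranges);
```

With the platform yawed 20˚, the edge of the beam turned toward the wall produces the shortest range, which is consistent with the behavior described in Section 5.1.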