
Short Range Object Detection and Avoidance

N.F. Jansen CST 2010.068

Traineeship report

Coaches: dr. E. García Canseco (TU/e), dr. ing. S. Lichiardopol (TU/e), ing. R. Niesten (Wingz BV)

Supervisor: prof.dr.ir. M. Steinbuch

Eindhoven University of Technology
Department of Mechanical Engineering
Control Systems Technology

Eindhoven, November 2010

Abstract

The scope of this internship is to investigate, model, simulate and experiment with a sensor for close-range object detection for the Tele-Service Robot (TSR). The TSR robot will be deployed in care institutions for the care of the elderly and/or disabled. The sensor system should have a supporting role in navigation and mapping of the robot's environment. Several sensors are investigated, and a sonar system proves to be the optimal solution for this application. The cost, wide field-of-view, sufficient minimum and maximum range and networking capabilities of the Devantech SRF-08 sonar sensor are decisive in ultimately choosing this sensor system. The positioning, orientation and tilting of the sonar devices are calculated, and simulations are made to obtain knowledge about the behavior and characteristics of the sensors working in a ring. The main issues surrounding the sensors are erroneous ranging results due to specular reflection, cross-talk and incorrect mounting. Cross-talk can be suppressed by operating the sensors in groups, but this decreases the refresh rate with which the robot's entire surroundings are scanned. Experiments are carried out to investigate the accuracy and the sensitivity to ranging errors and cross-talk. Because cross-talk persists when more than two sensors are fired at a time, further experiments should investigate decreasing the range and timing settings to increase the refresh rate. A ROS node is still work in progress and is expected to be finished at the end of November. At this point the conclusion can be drawn that the sonar system can be a good asset in navigation and mapping of the robot's environment.


Contents

1 Introduction

2 Sensors
  2.1 Ranging techniques
    2.1.1 Proximity
    2.1.2 Triangulation
    2.1.3 Time-Of-Flight
    2.1.4 Phase Modulation
    2.1.5 Intensity of reflection
    2.1.6 Frequency modulation
  2.2 Sensor technologies
    2.2.1 Acoustical
    2.2.2 Optical
    2.2.3 Electromagnetic
  2.3 Candidate systems overview
  2.4 Discussion

3 Sonar Systems
  3.1 Sonar selection
  3.2 Discussion
  3.3 Implementation
    3.3.1 Cross-talk
    3.3.2 Refresh rate

4 Simulations
  4.1 Simulation sensor behavior
  4.2 Simulation robot navigation with sonar
    4.2.1 Kinematics of differential steering
    4.2.2 Results navigation simulations

5 Data Acquisition
  5.1 The physical I2C bus
  5.2 PC to I2C adapter
  5.3 Protocol and communication
    5.3.1 Start signal
    5.3.2 Slave address transfer
    5.3.3 Data transfer
    5.3.4 Stop signal
  5.4 The I2C software protocol
  5.5 Example communication

6 Experiments
  6.1 Sensor characteristics
    6.1.1 Measured straight-line distance
    6.1.2 Cross-talk
    6.1.3 Angular measurements to pole
    6.1.4 Maximum angle measurements to wall
  6.2 Sensors mounted to robot platform
    6.2.1 Cross-talk
    6.2.2 Ranging a corner
    6.2.3 Ranging a doorway

7 Concluding Remarks and Future Work

Acronyms

References

A Datasheet Devantech SRF08
B Datasheet Parallax PING)))
C Additional Measurement Tables and Results
D USB-I2C Devantech SRF-08 commands
E Matlab M-Files
  E.1 Simulations
    E.1.1 Calculate position and orientation sensors
    E.1.2 Simulation random polygon in environment robot
    E.1.3 Function CalcDist2Obj
    E.1.4 Function Drawreading
    E.1.5 Simulation robot navigation with sonar
    E.1.6 Function Drawrobot
  E.2 Communication with sensors
    E.2.1 Main sensor communication
    E.2.2 Function StartRanging
    E.2.3 Function RecDistCmSens

List of Figures

1.1 Overview of TSR setup, adapted from [10]
1.2 Overview of partners
2.1 Applied technologies for optical proximity sensors, adapted from [4]
2.2 Triangulation techniques, adapted from [4]
2.3 Phase modulation, adapted from [3]
2.4 Frequency modulation, adapted from [4]
2.5 Shape and Field-Of-View (FOV) of ultrasonic sensor, adapted from [12]
2.6 Ultrasonic ranging errors, adapted from [4]
3.1 Sensor placements and orientations
3.2 Sensor placement and orientation for Senscomp Series 6000
3.3 The groups of sensors operating on the mobile platform
4.1 Example of sonar object detection
4.2 Circle of radius r = 1, center (a, b) = (1.2, -0.5)
4.3 Sensor simulation of behavior on different objects
4.4 for simulation of moving mobile robot
4.5 The Pioneer platform with differential steering
4.6 Wheels at different velocities
4.7 Simulation of mobile robot navigation throughout an office
4.8 Simulation of mobile robot navigation throughout a
5.1 Overview I2C connection
5.2 The PC to I2C Adapter
5.3 Overview I2C communication protocol
5.4 Bit transfer on I2C bus
5.5 Overview order of sensors mounted to mobile robot
5.6 Flow of communication with sensors when ranging is issued
6.1 Comparison of real distance to measured distance
6.2 Resulting graph of cross-talk measurement
6.3 Plan of angular measurement of square pole
6.4 Plan of angular measurement of a wall
6.5 Resulting graph of angular measurement
6.6 Beam Pattern according to data sheet
6.7 Overview of mobile robot platform with Devantech SRF-08 sonar sensors mounted to platform for experimental purposes
6.8 Overview of robot and mounted sensors ranging the corner with wall of glass
6.9 Results of ranging a corner with wall of glass
6.10 Results of ranging a doorway

List of Tables

2.1 Candidate systems
2.2 Sensors
3.1 Sonar systems
3.2 Sonar systems and cost estimation
5.1 Send Ranging Command for Universal Serial Bus (USB) to I2C
5.2 Receive Ranging Command for USB to I2C
6.1 Measurement results of ranging a wall directly in front of the sensor
6.2 Cross-talk measurements
6.3 Results of angular measurement of square pole
6.4 Measurement results of ranging a wall in angular direction with respect to the sensor
6.5 Sensor grouping overview
C.1 Measurement results of ranging a wall directly in front of the sensor
C.2 Measurement results of angular ranging
C.3 First measurement results of angular ranging to wall
D.1 Start Ranging (cm)
D.2 Request ranging information first echo
D.3 Request software revision
D.4 First command to change address
D.5 Second command to change address
D.6 Third command to change address
D.7 Fourth command to change address

Chapter 1

Introduction

This is the final report of the Dynamics and Control Technology (DCT) internship assignment for the Tele-Service Robot (TSR) project, which started on the 30th of August and was concluded with the final presentation on the 11th of November.

Within the TSR project the aim is to build a demonstrator tele-service robot for care and cure. The goal is to develop a care robot which can take over household and care tasks for the elderly and handicapped. Initially, the robot will be placed in care institutions. The robot can be controlled both by a remote operator (nurse) and by the elderly and/or handicapped person requiring care, see Figure 1.1. The robot can also perform certain tasks autonomously and ultimately learn tasks: e.g. avoid an object, manipulate an object using multiple arms, and move to a predefined location. The robot has to be user friendly, both in appearance and in operation. It has to be reliable and operate safely in a human environment. That is, it should not harm people, objects or itself, and it must comply with safety standards.

Figure 1.1: Overview of TSR setup, adapted from [10]


The TSR group consists of several companies and institutions, each responsible for a certain part of the project (Figure 1.2). My internship takes place at the Department of Mechanical Engineering of the Eindhoven University of Technology (TU/e) and at Wingz BV at the High Tech Campus in Eindhoven. Wingz is responsible for the electrical hardware on the platform, whereas the Mechanical Engineering Department of the TU/e is responsible for the mechanical part of the robot.

Figure 1.2: Overview of partners

The project is divided into iterative phases. For each phase, specific requirements are generated. Currently, the project is in Phase 1. The goal of Phase 1 is to build a TSR that allows shared control between the robot and the operator. For this, the cockpit of the TSR needs a user friendly interface for manipulating the robot, and the robot needs to be able to perform certain tasks autonomously (on command of the operator) and to correct the inaccurate control provided by the operator. Besides building a functional system, the focus in this phase lies on extendibility and safety [10].

For navigation and mapping purposes, the Phase 1 robot is already equipped with a Scanning Laser Range Finder (SLRF) and a 3D camera. These sensors work together to obtain a map of the robot's environment, after which the robot is able to navigate and manoeuvre to a certain (predefined) position. However, the robot is not able to detect objects appearing behind it, which is an issue when backing up. Moreover, due to the limited field of view of the camera and the narrow beam of light emitted by the SLRF, the system lacks performance, especially for close-range applications. It is possible to miss an object when the mobile platform is standing close by, for example when the object does not reach the height at which the laser beam is emitted. Therefore, the main objective of this project is:

Analyse, model, simulate and implement a sensor for close-range applications for the purpose of the TSR robot

As there are several options available on the market, a decision has to be made as to which sensor is most suitable for the project. Every type of sensor has its advantages and disadvantages, for example in range, and these have to be taken into account. The assignment for this internship is first to research the sensors on the market today; second, to approach the problem theoretically for the robot, based on simulations made with Matlab; and third, to carry out practical experiments with a prototype model of the TSR robot.

To obtain full coverage of the 'close range' surroundings of the robot (up to about 50 cm), it has to be determined what quantity of sensors is needed. This is calculated with Matlab. Every sensor has a different area of coverage, so for every candidate sensor the number of units needed to cover the robot's surroundings with just a small portion of overlap has to be calculated. From these calculations, taking price into consideration as well, it can be decided which sensor is best suited for the project. The sensor can then be ordered.

Next, simulations are made with Matlab to obtain knowledge of the behavior of the sensors when a random object (with optimal reflection) approaches them (i.e. when the robot moves towards an object). The simulations are graphical, which means that a virtual environment has to be created. In this environment the area of coverage of the sensors is shown, together with a random object that initially appears outside the range of the sensors.

When the simulations are finished and the sensors are delivered, the sensors are mounted on the platform of a prototype robot. Practical experiments are then carried out to research the actual behavior of the sensors in a real environment with non-optimally reflective objects. This shows the difference between the theoretical results and the results obtained in practice.

Chapter 2

Sensors

The robot must be able to navigate from a certain position to a desired new location and orientation. At the same time it must avoid any contact with fixed or moving objects while en route. Fundamental in this situation is a sensor that can acquire high-resolution data describing the robot's physical surroundings. The sensor should operate in a timely, yet practical, fashion in keeping with the limited onboard energy of a mobile vehicle. This raises numerous problems, of which the most essential for the collision avoidance system is the unknown nature and orientation of the target surface. The system must be able to detect a wide variety of surfaces under varying angles of incidence. The robot is already equipped with a SLRF and a stereovision 3D camera for navigation purposes (Simultaneous Localization And Mapping (SLAM)). However, due to technical limitations, this setup lacks performance for close-range object detection, and therefore the main goal of this investigation is the selection of a ranging sensor for close-range applications. This sensor is needed to provide the mobile platform with sufficient awareness of its surroundings to allow it to move about in a realistic fashion, avoiding fixed and moving objects. The following considerations are taken into account:

− FOV Must be wide enough with sufficient depth of field to suit the application.

− Range capability The minimum range of detection, as well as the maximum effective range, in this case 50 cm, must be appropriate for the intended use of the sensor.

− Angle of operation Small angles leading to blind spots can deteriorate performance.

− Accuracy, resolution Must be in keeping with the needs of the given task.

− Ability to detect all objects in the environment Objects can absorb emitted energy, target surfaces can be specular (such as straight walls) as opposed to diffuse reflectors, and ambient conditions and noise can interfere with the sensing process.

− Operate in real time The update frequency must provide data at a sufficient rate, taking the vehicle's speed into account. The sensors should allow the vehicle to stop or alter course to avoid collision even when driving at maximum velocity.


− Concise, easy to interpret data The output should be realistic from the standpoint of processing requirements. Too much data can be as meaningless as not enough data. Where multiple sensors are needed, the effort of combining the sensors and collecting data is a point of interest.

− Sensitivity to moving objects When objects are moving or when the mobile vehicle is advancing, errors can occur in ranging, especially with Time-Of-Flight (TOF) techniques (section 2.1.3).

− Power consumption The power requirements should be minimal in keeping with the limited resources onboard a mobile vehicle.

− Size The physical size and weight of the sensor system should be practical with regard to the intended vehicle.

− Price The benefit-cost ratio should be optimal.

This second chapter is intended to provide some background on the various non-contact distance measurement techniques available, with related discussion of their implementation in the acoustical, optical and electromagnetic portions of the energy spectrum. This background information is based on [4], [12] and [13]. An overview of the candidate systems is presented and a decision is made on which type of sensor is optimal based on the considerations above.

2.1 Ranging techniques

2.1.1 Proximity

A proximity sensor is able to determine the presence (as opposed to the actual range) of nearby objects without any physical contact. These sensors were developed to gain position information in the close-in region (between a fraction of a centimeter and one or two meters), extending the sensing range beyond that of direct-contact sensors. A proximity sensor often emits an electromagnetic or electrostatic field, or a beam of electromagnetic radiation (infrared for instance), and looks for changes in the field or in the return signal. Different proximity targets require different sensors.

Permanent-magnet or Hall effect sensors
Permanent-magnet sensors are good for sensing ferrous metallic objects over very short ranges. Magnetic sensors detect changes, or disturbances, in magnetic fields that have been created or modified, and output information; in the case of the Hall effect sensor, a voltage indicating the presence of an object.

Induction-type proximity switches
Induction-type proximity switches are applied to the detection of metal objects located at short range. Typical inductive sensors generate an oscillatory Radio Frequency (RF) field around a coil of wire. The effective inductance of the coil changes when a metal object enters the field.

Inductive and permanent-magnet sensors in general have limited use for purposes of object detection, except in very application-specific instances. For completeness, inductive sensors will be taken into consideration for this project.


Ultrasonic proximity sensors
Ultrasonic proximity sensors (not to be confused with ranging systems) are useful over longer distances (in meters) for detecting most objects, liquid and solid. The system consists of one or two transducers: the first variant uses a transceiver, which both transmits and receives the energy; the second uses one transducer that transmits and another that receives. When no energy is present, the control circuitry indicates no output. When a signal is received and reaches the preset threshold, the sensor output changes state, indicating detection.

Optical proximity sensors
Optical proximity sensors can be broken down into three groups [4] (Figure 2.1):

Figure 2.1: Applied technologies for optical proximity sensors: (a) break-beam, (b) reflective, (c) diffuse; adapted from [4]

(i) Break-beam
The break-beam technique is based on separate transmitting and receiving elements physically located on either side of the region of interest. The transmitter emits a beam of light, often supplied by a Light-Emitting Diode (LED), which is focused on the photosensitive receiver (Figure 2.1a). Any object that passes between the transmitter and receiver breaks the beam, which leads to a disruption of the circuit.

(ii) Reflective
Figure 2.1b shows the reflective technique. This technique evolved from break-beam through the use of a mirror, or retro-reflector, to reflect the transmitted energy back to a detector located at the position of the transmitter. The object is detected when it breaks the beam. This technique is not to be confused with measuring the intensity of reflection, as the purpose of the reflective technique is solely to detect an obstacle, not its range.

(iii) Diffuse
The diffuse technique is fairly similar to the reflective operation, except that energy is reflected from the surface of the object of interest, as opposed to a cooperative target reflector (Figure 2.1c).

Capacitive sensors
Capacitive sensors are effective for short-range detection of most objects. Such sensors measure the electrical capacitance between a probe and its surrounding environment. As an object appears, the changing geometry and/or dielectric characteristics within the sensing region cause the capacitance to change. This affects the current flow through the probe, causing the sensor to detect an object. These sensors can be used to detect the presence of a wide range of materials, but require relatively close range, and are therefore almost inapplicable for this project.


The performance specifications of proximity sensors depend on several factors. Effective range is a function of the physical characteristics of the object to be detected, its speed and direction of motion, the design of the sensor probe, and the quality and quantity of energy it radiates or receives. Distance resolution depends on object size and speed, and generally decreases with increased range. Other important factors are changes in ambient conditions, variations in reflectivity or other material characteristics of the target, and the stability of the electronic circuitry of the sensor.

2.1.2 Triangulation

Triangulation is a technique based on a simple trigonometric method for calculating the distances and angles needed to determine the location of the object of interest. An important premise of plane trigonometry is: given the length of a side and two angles of a triangle, it is possible to determine the length of the other sides and the remaining angle using the Law of Sines and/or Cosines [4]. Ranging systems using triangulation for robot navigation and collision avoidance are classified as either active or passive [4]. Passive stereoscopic ranging systems use only the ambient light of the scene to illuminate the target. The sensors, commonly TV cameras, solid-state arrays or photodetectors, are placed at the locations P1 and P2 shown in Figure 2.2a. Both are focused on the same point, P3, thus forming an imaginary triangle. Active triangulation systems position a controlled light source, such as a laser, at either point P1 or P2, directed at the observed point P3. A directional imaging sensor is placed at the remaining point and is also aimed at P3, see Figure 2.2b.

Figure 2.2: Triangulation techniques: (a) passive stereoscopic triangulation ranging, (b) active laser triangulation ranging; adapted from [4]

With both systems an array of points can be determined by adjusting the incident angles of the detectors and/or the light source in a raster sequence. The resulting range map is a three-dimensional image of the environment in front of the sensor. Limiting factors common to all triangulation sensors include angular measurement inaccuracies (and hence range inaccuracies) and errors due to highly absorptive or mirror-like surfaces.
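As an illustration, the geometry of Figure 2.2 reduces to a Law of Sines computation. The symbols below are introduced here for illustration and are not taken from the report: with a baseline of length b between P1 and P2, and measured angles θ1 at P1 and θ2 at P2 between the baseline and the lines of sight to P3, the range r from P1 to the observed point P3 follows as

r = b · sin(θ2) / sin(θ1 + θ2)

since the remaining angle at P3 equals π − θ1 − θ2 and sin(π − θ1 − θ2) = sin(θ1 + θ2).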

2.1.3 Time-Of-Flight

The range finding technique Time-Of-Flight (TOF) refers to the time it takes for a pulse of energy to travel from its transmitter to an observed object and back to the receiver [4]. The energy transmission typically originates from an ultrasonic, radio or light source. The relevant parameters are the speed of sound (roughly 340 m/s) and the speed of light (roughly 300 000 000 m/s). Time-Of-Flight systems measure the roundtrip time between an energy pulse emission and the resulting pulse echo from the reflectance of an object. Using elementary physics, the distance d(t) is determined by multiplying the velocity v(t) of the energy wave by the time required to travel the distance:

d(t) = v(t) · t / 2 (2.1)

The measured time represents twice the traveled distance (from transmitter, to object and back to the receiver) and must therefore be halved to obtain the actual range of the target.
The advantages of TOF systems arise from the direct nature of their straight-line active sensing. The returned signal follows essentially the same path back to a receiver located in close proximity to the transmitter. The absolute range to an observed point is directly available as output with no complicated analysis required. Furthermore, TOF sensors maintain accuracy as long as reliable signal detection is maintained, while triangulation schemes suffer from decreasing accuracy as range increases.
Limitations of TOF systems are primarily related to the properties of the emitted energy, which vary across the spectrum. When light, sound or radio waves strike an object, a portion of the original signal is reflected and returns to the receiver. The remaining energy reflects in scattered directions or is absorbed, depending on the object's surface characteristics and the angle of incidence (angle of approach) of the source transmission. The scattered signals can reflect from secondary objects as well and return to the detector at various times, resulting in false signals that yield questionable or otherwise noisy data. Instances where no return signal is received can occur because of straight (specular) reflection by the object, where the angle of incidence equals the angle of reflection. When the transmission source exceeds a certain critical angular level, the reflected energy is deflected outside of the sensing envelope of the receiver, so the object of interest is not detected.
Due to temperature and humidity changes, variations in propagation speed occur, inducing inaccuracies in the range computations. This variation particularly applies to acoustically based systems.
Cross-talk can occur when multiple sensors are operating in the same environment. When a pulse of energy is emitted from one transmitter, and an object with a certain shape causes the energy to be reflected in the direction of the receiver of a different system, that system can misinterpret the reflected energy.
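As a minimal illustration of equation (2.1) in Matlab (not one of the report's M-files; the echo time is a made-up example value):

    % Time-of-flight ranging: distance = propagation speed * roundtrip time / 2
    v_sound = 340;                  % speed of sound in air [m/s]
    t_echo  = 2.9e-3;               % measured roundtrip time [s] (example value)
    d = v_sound * t_echo / 2;       % one-way distance to the object [m]
    fprintf('Measured range: %.3f m\n', d);   % prints 0.493 m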

2.1.4 Phase Modulation

Phase Modulation (PM) is a method of distance measurement that requires the transmission of a continuous wave of energy, in contrast to the pulsed outputs used in direct measurement TOF systems. Phase modulation involves determining the shift in phase of the signal as it returns from a reflecting object. An unbroken beam of modulated laser energy is directed towards a target; a portion of this wave is reflected by the object and returned to the detector along a direct path. This returned energy is compared to a simultaneously generated reference beam which has been split off from the original signal, and the relative phase shift between the two is measured (Figure 2.3). This phase shift is a function of the round-trip distance that the wave has traveled. The accuracy of the phase shift method approaches the accuracy achievable by pulsed laser TOF methods. Greater accuracy can be achieved by integrating over many measurements; however, this is time consuming, which makes it difficult to achieve real-time data. The time measurement problem of the TOF system is eliminated, but it is replaced by the need for sophisticated phase measurement electronics, making the system considerably more complex to use.

Figure 2.3: Phase modulation, adapted from [3]
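For reference (the report does not state the relation explicitly), the range follows from the measured phase shift Δφ and the modulation frequency fm as

d = c · Δφ / (4π fm)

where c is the speed of light; the factor 4π rather than 2π accounts for the round trip. Note that a single measurement is ambiguous for distances beyond half the modulation wavelength c/(2 fm), which is why two or more modulation frequencies are needed, as discussed in section 2.1.6.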

2.1.5 Intensity of reflection

The concept of ranging by intensity of reflection is based on an emitter projecting energy towards a surface, from which it is reflected and subsequently sensed by the receiver. The energy is typically acoustic, electromagnetic or optical. This technique calculates the distance to the surface of the object of interest from the sensed intensity of the reflected energy. However, two distinct problems affect this method. First, if the surface acts as a mirror, it can reflect the incident energy away from the receiver, so no surface presence is detected. Second, if the surface is highly absorptive, none of the incident energy is reflected, which again leads to no surface being detected.

2.1.6 Frequency modulation

Frequency modulation involves the transmission of a continuous electromagnetic wave, modulated by a periodic triangular signal. This signal varies the carrier frequency linearly above and below the mean frequency, as shown in Figure 2.4.

Figure 2.4: Frequency modulation, adapted from [4]

18 Traineeship Report November 17, 2010 2.2. SENSOR TECHNOLOGIES CHAPTER 2. SENSORS

The transmitter emits a signal that varies in frequency as a linear function of time:

f(t) = f0 + at (2.2)

where f0 is the mean frequency and a the sweep rate, i.e. the change in frequency per unit of time. This signal is reflected from a possible target or object and arrives at the receiver at time t + T:

T = 2d / c (2.3)

where T is the roundtrip propagation time, d the distance to the target and c the speed of light. The received signal is compared with a reference signal taken directly from the transmitter. The received frequency curve is displaced along the time axis relative to the reference frequency curve by an amount equal to the time required for wave propagation to the target and back. These two frequencies, combined in the mixer, produce a beat frequency:

BF = f(t) − f(t + T) = aT (2.4)

This beat frequency can be measured and used to calculate the distance to the object:

d = c · BF / (2a) (2.5)

The linearity of the frequency shift controls the accuracy of the system. The frequency modulation system has an advantage over the phase modulation technique in that a single distance measurement is not ambiguous; phase modulation systems require two or more measurements at different modulation frequencies to be unambiguous. However, frequency modulation has several disadvantages associated with the required coherence of the laser beam (when one is used) and the linearity and repeatability of the frequency ramp.
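A minimal numeric check of equations (2.2)-(2.5) in Matlab; the sweep rate and target distance are assumed, illustrative values, not taken from the report:

    % FMCW ranging: recover distance from the measured beat frequency
    c  = 3e8;           % speed of light [m/s]
    a  = 1e12;          % frequency sweep rate [Hz/s] (assumed: 100 MHz over 100 us)
    d  = 10;            % true target distance [m] (assumed)
    T  = 2*d/c;         % roundtrip propagation time, eq. (2.3)
    BF = a*T;           % beat frequency, eq. (2.4): about 66.7 kHz here
    d_est = c*BF/(2*a); % recovered distance, eq. (2.5): 10 m
    fprintf('beat = %.1f kHz, distance = %.2f m\n', BF/1e3, d_est);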

2.2 Sensor technologies

2.2.1 Acoustical

With the development of Sound Navigation and Ranging (SONAR), high-frequency acoustic waves have been used to determine the position, velocity and orientation of objects. Ultrasonic energy (sound above the limit of human hearing) has been the most commonly applied technology. Ultrasonic transducers typically transmit at frequencies greater than 20 kHz, generated by both mechanical and electronic sources. Acoustical ranging can be implemented using triangulation, time-of-flight, phase shift measurement or a combination of these techniques. The direction and velocity of a moving object can be determined by measuring the Doppler shift¹ in frequency of the returned energy. This shift is caused by objects moving toward or away from an observer. Typically, triangulation and time-of-flight methods transmit sound energy in pulses and are effective at longer distances for navigation and positioning, and at shorter distances for object detection. The shape and FOV of a pulse emitted by an ultrasonic ranger are shown in Figure 2.5.

¹ The Doppler shift, or Doppler effect, is the change in frequency of a wave for an observer moving relative to the source of the wave [2].
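For completeness (the formula is not given in the report), the radial velocity v of a target follows from the Doppler shift Δf of a sonar echo as

v = c · Δf / (2 f0)

where f0 is the transmitted frequency and c the speed of sound; the factor 2 appears because both the outgoing and the returning wave are shifted.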


Figure 2.5: Shape and FOV of ultrasonic sensor, adapted from [12]

The performance of ultrasonic ranging systems is significantly affected by environmental phenomena and sensor design characteristics. Of primary concern is the attenuation of sound energy over distance. As an acoustic wave travels away from its source, its intensity decreases according to the inverse square law and due to absorption of the sound by air, which varies with the humidity and dust content of the air. Absorption can also occur at the reflecting surface and is a function of the characteristics of the object being detected. Another parameter affected by the ambient properties of air is the velocity of sound, the relevant factors being air temperature and wind direction and velocity. As the robot will be used indoors, these factors will not pose significant problems. Another factor to consider is the beam dispersion angle of the selected transducer. Best ranging results are obtained when the beam centerline is maintained normal to the target surface. However, as Figure 2.6a shows, if the angle of incidence deviates from the perpendicular, the range actually being measured does not always correspond to that associated with the beam centerline: the first beam reflection comes from the portion of the target that is closest to the sensor.

Figure 2.6: Ultrasonic ranging errors: (a) due to beam divergence, (b) due to specular reflection; adapted from [4]

The actual line of measurement intersects the target surface at point B as opposed to point A. The width of the beam introduces an uncertainty in the perceived distance to an object from the sensor, but an even greater uncertainty in the angular resolution of the object's position. A very narrow vertical target, such as a long wooden dowel maintained perpendicular to the floor, corresponds to a relatively large region of floor space that would essentially appear to the sensor to be obstructed. Worse yet, an opening such as a doorway may not be perceptible at all to the robot when only 6 feet away, simply because at that distance the beam is wider than the door opening. Finally, errors due to the topographical characteristics of the target surface must be taken into account, as explained in section 2.1.3. When the angle of incidence of the beam decreases below a certain critical angle, the reflected energy does not return to strike the transducer (Figure 2.6b).
Ultrasonic sensors are a powerful and practical method for range determination in selected applications. Their simple construction makes them reliable and economical. The low cost also makes design redundancy feasible, further improving system reliability and effectiveness. The issues mentioned above, however, have to be kept in mind.
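To quantify the temperature sensitivity mentioned above, the following Matlab snippet uses the common linear approximation c ≈ 331.4 + 0.6·T m/s; the approximation and the 20 cm example target are illustrative additions, not values from the report:

    % Effect of air temperature on ultrasonic range measurements
    T      = [15 20 25 30];       % air temperature [deg C]
    c      = 331.4 + 0.6*T;       % approximate speed of sound [m/s]
    t_echo = 2*0.20/c(2);         % roundtrip time for a 0.20 m target at 20 C
    d      = c*t_echo/2;          % range computed assuming each temperature
    fprintf('range error at %2d C: %+.1f mm\n', [T; (d - 0.20)*1e3]);

A 10 degree temperature error thus shifts a 20 cm reading by roughly 3.5 mm, supporting the statement that indoor operation keeps these effects small.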

2.2.2 Optical

Active optical sources employed in range finding include broadband incandescent sources (emission of light from a hot body due to its temperature), narrowband LEDs and coherent lasers. Lasers can be found in ranging equipment based on triangulation, TOF, proximity, phase modulation, interferometry and return signal intensity (the last two options are not explained here due to their limited applicability to close-range object detection for robots). Lasers produce a bright, intense output, which is important for long-distance ranging and for distinguishing the signal from the background. Secondly, by nature or through use of corrective optics, laser beams are narrow and collimated, with little or no divergence. This property allows the source to be highly directional or spatially selective, because an intense beam of energy can be concentrated on a small spot at long distances. However, this poses a problem when the entire region around a robot needs to be scanned; a scanning laser range finder is then a solution, or mechanical additions are necessary, for example a rotating platform to cover the region of interest.
Some disadvantages should also be taken into account. All laser-based systems represent a potential safety problem in that the intense and often invisible beam can be an eye hazard. A highly accurate beam delivery system is needed for high accuracy, and the wide dynamic range of the returning energy complicates the design of the detector electronics. Laser sources typically suffer from low overall power efficiency. Finally, lasing materials are often unstable and possess short lifetimes, resulting in reliability problems. Ranging is accomplished by pulsed TOF methods, Continuous Wave (CW) PM or Frequency Modulation (FM).
Infrared systems can be found in ranging equipment based on triangulation, TOF and proximity. Infrared systems emit a pulse of light at a wavelength of around 700-900 nm and are much less powerful than lasers; the effective range is therefore shorter. Similar to lasers, infrared distance sensors produce narrow beams of light that are highly accurate but suffer from reflectivity and angular inaccuracies, even more severely than the acoustical solutions. IR sensors can be used in combination with a lens to obtain wide-angle solutions, but this requires mechanical additions. Infrared light is suited for indoor activities because man-made objects tend to reflect infrared energy well. Infrared loses its usefulness outdoors due to the inherent radiation emitted by the natural terrain, roadways and objects. A consequent disadvantage of infrared is therefore its sensitivity to ambient conditions, especially lighting.

2.2.3 Electromagnetic

Radio Detecting And Ranging (RADAR) is the determination of the distance and bearing to an object, and/or its speed relative to an observer, calculated through the measurement of reflected electromagnetic waves. The properties of the received echoes are used to form a picture or to determine certain information about the objects that cause the echoes. Specific advantages of radar sensing include the ability to operate unaffected in smoke-, dust- or haze-filled environments. Radar systems can produce high accuracies in terms of target discrimination and range computation. Radars are also effective at measuring the speed of moving objects by Doppler shift methods, wherein the magnitude of the frequency shift of an energy wave reflected off a moving target is proportional to its relative velocity.
Pulsed energy systems can detect targets up to distances on the order of tens of kilometers, relying on the measurement of the roundtrip time of a propagating wave. The high propagation speed of the emitted energy makes short distance measurements difficult for this type of system, and the extremely sharp short-duration signals which must be generated and detected are expensive and complicated to realize. Continuous wave systems, on the other hand, are effective at shorter ranges because phase shift measurements do not depend on the wave velocity.
Systems employing a single antenna have the principal advantage that the antenna collects all the returned energy which falls upon it. The energy is returned in a beam whose width is inversely proportional to the diameter of the reflector. The disadvantages include the need to manipulate a large-diameter antenna system when the application requires narrow beams, and the effects of vibration and wind, which can necessitate a massive supporting structure. Phased-array antenna configurations present an alternative arrangement that eliminates the problem of using a large antenna system and the sensitivity to wind. However, the resulting smaller coverage area decreases overall effectiveness, and the requirement for electronically variable phase control increases system complexity.
The portion of the electromagnetic spectrum considered useful for practical radar lies between 3 and 10 GHz. The primary electromagnetic source used in most modern conventional radar systems is microwave energy. Microwave energy is ideally suited for long-range sensing because the resolution is sufficient, attenuation of the beams in the atmosphere is minimal and low-mode guiding structures can be constructed. An advantage of this system is its all-weather capability. However, microwave systems are susceptible to specular reflections at the target surface. Shorter wavelengths can be used to produce systems with high angular resolution and small-aperture antennas. Higher angular resolution is also possible at longer wavelengths, but the antenna size then becomes very large. For these reasons, conventional radar systems operating in the microwave portion of the spectrum have less applicability to the high-resolution collision avoidance needs of a mobile robotic platform.
As an alternative to microwave radar systems, millimeter wave radar systems can be used. These use a much shorter wavelength (500 micrometer to 1 centimeter), allowing small antenna apertures. Consequently, more information can be obtained about the nature of targets than at larger wavelengths because of reduced scattering of the reflected signal by objects. The overall physical size of the system is reduced, but the smaller apertures result in less collected energy, which limits the effective range of the system. The resulting atmospheric attenuation also prevents operation in all weather conditions.
Unfortunately, even today few affordable commercial millimeter wave radars are available; they are therefore not investigated further.

The next section presents an overview of all candidate systems to be considered. A summary of all advantages and disadvantages is added to the table and, based on the characteristics of the sensor types, conclusions are drawn and a decision is made on which sensor is to be used.

2.3 Candidate systems overview

Table 2.1 shows an overview and summary of the results of the investigation of the sensor types. The information in this table is obtained from data sheets supplied by the manufacturers and from websites of distributors. An additional short summary of the strengths and weaknesses of each sensor type is included.

Sensor | Cost estimation per piece | Info provided | Strengths | Weaknesses | Min. range (cm) | Max. range (cm) | Accuracy on moving objects | Angle of operation (°)
Laser displacement sensor | €1500,- | Distance | Material properties non-issue, accuracy | Price, range, low power-efficiency | 3 | 40 | - | 10
Photo-electric triangulation | €200,- | Distance | Material properties non-issue, accuracy | Minimum range | 20 | 50 | - | 15¹
Optical time-of-flight | €4000,- | Distance | Material properties non-issue, accuracy | Short angle, price, minimum range | 10 | 200 | - | 0.1
Microwave time-of-flight radar | €500,- | Distance | All-weather capability, high angular resolution | Minimum range, susceptible to specular reflections | 15 | 1500 | + | "wide"
Optical reflection | €170,- | Distance | Material properties non-issue, accuracy | Short angle | 0.1 | 70 | +/- | 2
Inductive sensor | €50,- | Metal/non-metal | Accuracy on metal, not influenced by movement | Only metal objects, range | 0 | 6 | + | 15
Capacitive sensor | €100,- | Object presence | Material properties non-issue, accuracy | Range | 0 | 3 | + |
Switch | €10,- | Object presence | Material properties non-issue, accuracy | Range | 0 | 0 | + | -
SLRF | €1000,- to €5000,- | Distance | Material properties non-issue, accuracy | Price, low power-efficiency | 6 | 400 | +/- | 240
Ultrasonic proximity sensors | €20,- | Object presence | Accuracy, wide beam | Dependency on sound absorption | 3 | 300 | - | 20
Ultrasonic range finder | €30,- to €40,- | Distance | Accuracy, wide beam, ease of use | Dependency on sound absorption | 3 | 600 | +/- | 25
Infra-red range finder | €10,- to €20,- | Distance | Price, simplicity | Dependency on light, short angle | 0 | 80 | +/- | 3

¹ With the use of a prismatic reflector

Table 2.1: Candidate systems

Most of the information in Table 2.1 is obtained from the data sheets of the sensors listed in Table 2.2.

Type | Product
Laser displacement | Idec MX1A
Photo-electric triangulation | Idec S1AD
Optical time-of-flight | Sick DME 2000/3000
Microwave time-of-flight radar | AM Sensors MSM10500
Inductive | Contrinex DW-AD-70-M30
Capacitive | Carlo Gavazzi EC M30 DC
Scanning Laser Range Finder | Hokuyo UBG-04LX-F01
Ultrasonic range finder | Devantech SRF-08
Infrared range finder | Sharp GP2D12-15
Optical reflection | Eaton E67

Table 2.2: Sensors


The sensors were found at Conrad Electronic Benelux BV, Farnell, Robot-Electronics or [14].

2.4 Discussion

The laser displacement sensor and the scanning laser range finder are not valuable options for this particular scope; their poor cost-benefit ratio is decisive. The SLRF currently installed on the robot is suitable for range detection, but the technical limitations explained in section 2.2.2 and chapter 1 remain unsolved when another laser (or SLRF) is implemented. Therefore, choosing a laser (range finder) is not worthwhile.
The photo-electric triangulation system has a minimum range of 15 cm and is therefore not suitable. Due to technical limitations, the sensors cannot be positioned further inwards on the robot's platform, because a large part of the surface is reserved for several devices such as the Mac Mini for the Robot Operating System (ROS) interfacing. Moreover, the angle of operation is small. This can be solved by using a prismatic reflector, but that complicates the design (a reflector would have to be placed and supported for every sensor) and is therefore not favorable.
The microwave time-of-flight radar system is an interesting option; however, it is very expensive and, again, the minimum range is an issue. Optical reflection sensors are not very expensive and still very accurate, but they have a very small angle of operation. Given the requirement of full coverage of the area surrounding the robot up to 50 cm, the number of sensors needed to obtain a satisfactory result is substantial, which still makes this a very expensive solution.
The inductive, capacitive and switch systems can be excluded as well. These sensors are useful for specific product solutions, but not for this project. A considerable option, however, is to implement a capacitive sensor or possibly a switch for extra collision prevention next to a separate close-range system. Infrared systems could be a solution considering their price and accuracy, though the small angle of operation, the sensitivity to ambient conditions and the weak performance on sensing windows reveal problems in implementing this type of sensor.
In this case, the ultrasonic (or SONAR) range finder is the best solution. These sensors are easy to implement, have a wide angle and are relatively cheap for achieving the 50 cm coverage requirement. With the exception of sound absorption, all primary disadvantages and problems of the sonar technique can be overcome. Timing issues due to the cross-talk problem can be solved by firing groups of sonar devices. The "missing" of doorways can be solved by choosing a system that sends multiple bursts, capable of determining different ranges of objects during one measurement poll; this is further explained in the following chapter. Ultrasonic proximity sensors are eliminated because proximity sensors supply only object presence: the mobile robot platform should be able to perform actions based on different distances from an object, which is not possible when an object can only be detected.

Chapter 3

Sonar Systems

3.1 Sonar selection

Now that a decision has been made on the type of sensor to be used, the selection of the specific sonar sensor intended for the ranging is further narrowed down. Table 3.1 lists four sensors that are used in most mobile robot solutions for ranging and object detection. The information in this table is collected from the data sheets of the products and the websites of the manufacturers.

Sensor | Range (cm) | Angle (°) | Refresh rate (Hz) | Frequency (kHz) | Price per piece
Parallax PING))) | 2-300 | 20 | >55 | 40 | €22,98
Devantech SRF-08 | 3-600 | 28 | >15 | 40 | €29,42
LV-MaxSonar-EZ1 | 0-645 | 18.5 | 20 | 42 | €21,15
SensComp Series 6000 | 2.5-1050 | 21 | 5 | 50 | €15,88¹

¹ Price of necessary ranging module is excluded

Table 3.1: Sonar systems

Prices are obtained from [1] and converted from pounds to euros at an exchange rate of 1:1.15375 (live rate of 05-10-'10). The angles of operation are derived from the data sheets. The manufacturers measure the angle using different objects, but the commonly used objects are cylindrical or round, which have the property of reflecting sound in all directions. This allows the sensor to sense the object from a wide angle. If, for instance, a wall, a board or a square pole is the object of interest, the angle of operation will be much smaller due to specular reflections, as shown in the data sheet of the Parallax PING))) (Appendix B). For this reason, and taking into account the inaccuracy of the orientation when the sensor is mounted to the platform, the ratio between the widest and smallest angle obtained from the data sheet is applied to every sensor, given that only the ideal situation is stated in the other data sheets. This ratio is approximately 2:1. The resulting angle of operation is pessimistic, but tolerates small angular displacements and inaccuracies due to mounting or varying ambient conditions. All four sonar sensors use the TOF technique (section 2.1.3) to determine the range of an object.
To calculate the number of sensors needed, a Matlab m-file (Appendix E.1.2) is built that contains the robot's surface (80x37 cm) and the position of the first 10 sensors. To reduce the complexity of the calculations for the positioning and orientation of the sensors, and due to uncertainties in the shape of the surface, it is assumed that the surface of the robot's platform is rectangular. The orientation (in x, y) is then calculated keeping the full coverage at 50 cm in mind, and sensors are added at variable positions to achieve the goal of this project. The results are shown in Figure 3.1.


Figure 3.1: Sensor placements and orientations (axes in cm): (a) Parallax PING))) and LV-MaxSonar-EZ1, (b) Devantech SRF-08, (c) SensComp Series 6000

The Parallax PING))) and LV-MaxSonar-EZ1 (Figure 3.1a) sensor systems both emit an acoustic wave with a worst-case operational angle of 10 degrees. This necessitates 28 sensors to obtain full coverage at 50 cm. The Devantech SRF-08 (Figure 3.1b), with a worst-case operational angle of 14 degrees, requires 20 sensors mounted to the mobile robot's platform. The SensComp Series 6000 transducer (Figure 3.1c) detects objects with a worst-case operational angle of 12 degrees; the number of sensors needed is therefore 24.
The square surface centered in the figures is the mobile robot's platform. The colored triangles surrounding the platform represent the FOV of the selected sonar sensor, and the rounded rectangle crossing the FOVs of the sensors is the 50 cm boundary line. By calculating the orientation and placement on the platform of each sensor, using regular trigonometric formulas, the number of sensors is determined. Clearly, the FOVs of all sensors cross at this boundary. An example of the calculations is shown below.

Figure 3.2: Sensor placement and orientation for Senscomp Series 6000

26 Traineeship Report November 17, 2010 3.2. DISCUSSION CHAPTER 3. SONAR SYSTEMS

β = θ1 + α (3.1)

D = R / tan(β) (3.2)

θ2 = tan⁻¹(R / (L + D)) + α (3.3)

The resulting number of sensors can then be used to estimate the cost of the batch of sensors needed for the mobile robot. Table 3.2 lists these values.
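A minimal Matlab sketch of one step of this placement recursion: given the orientation of the current sensor, equations (3.1)-(3.3) yield the orientation of the next sensor along the edge so that both FOVs meet at the 50 cm boundary. The symbol names mirror the equations, but the numeric values are illustrative, and the full calculation is in the m-files of Appendix E.1:

    % One step of the sensor placement recursion, eqs. (3.1)-(3.3)
    alpha = 7*pi/180;     % half opening angle of the FOV [rad] (14 deg worst case, SRF-08)
    R     = 50;           % required coverage distance from the edge [cm]
    L     = 8;            % spacing between adjacent sensors [cm] (assumed)
    th1   = 60*pi/180;    % orientation of the current sensor [rad] (example)

    beta = th1 + alpha;               % eq. (3.1): direction of the FOV edge
    D    = R/tan(beta);               % eq. (3.2): where that edge crosses the boundary
    th2  = atan(R/(L + D)) + alpha;   % eq. (3.3): orientation of the next sensor
    fprintf('next sensor orientation: %.1f deg\n', th2*180/pi);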

Sensor | Price per piece | Number of sensors needed | Cost estimation
Parallax PING))) | €22,98 | 28 | €640,-
Devantech SRF-08 | €29,42 | 20 | €588,-
LV-MaxSonar-EZ1 | €21,15 | 24 | €507,-
SensComp Series 6000 | €15,88¹ | 24 | €997,-²

¹ Price of necessary ranging module is excluded
² Cost estimation includes a SensComp 6500 ranging module for every sensor, €25,68 per piece

Table 3.2: Sonar systems and cost estimation

3.2 Discussion

Comparing the cost estimation combined with the angle of operation, both the LV-MaxSonar-EZ1 and the Devantech SRF-08 stand out. The SensComp transducers require an additional ranging module or smart sensor, so the entire package of transducers with ranging modules approaches €1000,-. This eliminates the SensComp transducer as a valuable option.
The Parallax PING))) sensor is significantly less expensive than the SensComp transducers, but due to its small angle of operation requires 28 sensors to obtain full coverage. The Parallax PING))) sensor module is equipped with 3 lines, Vdd (+5V DC), Vss (ground) and SIG (Input/Output (I/O) line) (Appendix B), providing the ability to easily connect the sensor and interpret the data. In this case, however, multiple sensors need to be used, and the capability of the sonar systems to operate in a network is key to keeping the complexity of this project manageable. With only a single I/O pin, networking is impossible, and the use of separate I/O pins for every individual sensor is inevitable. This increases cost, programming effort and the complexity of the system. The small angle of operation, combined with the poor support for networking, makes the PING))) a weak contender, leaving the Devantech and the LV-MaxSonar as the two remaining options.
The Devantech features an advantageous FOV, but this comes at a higher cost. Both the LV-MaxSonar and the Devantech operate at a refresh rate of 15-20 Hz. The LV-MaxSonar is slightly faster, but the Devantech offers the ability to alter the range or analogue gain (Appendix A); lowering the range or gain effectively decreases the range and increases the refresh rate. The Devantech uses an Inter-Integrated Circuit (I2C) communication protocol, ideally suited for networking: up to 16 sonar sensors can be serially connected to one I2C bus and addressed individually. The LV-MaxSonar features RX and TX lines, but no addressing. Therefore, for its communication and networking capabilities, its wide FOV, the ability to improve the refresh rate and its reasonable pricing, the Devantech SRF-08 is the sensor of choice.


3.3 Implementation

The Devantech sensor modules are equipped with an I2C communication bus, which is capable of providing the data necessary to obtain range information and of supporting 16 sensors on one bus. Given this, only two separate buses are necessary to access all sensors. The properties and a short description of the I2C bus, and how to obtain an operational network enclosing the array of sensors, are further explained in chapter 5. Mounting the hardware to the robot's surface will take place after the experiments and simulations have been completed. The values obtained from the experiments provide the accurate location and orientation of each individual sonar sensor on the mobile robot's platform. Two remaining potential problems are cross-talk and the refresh rate; these are discussed in the next sections.

3.3.1 Cross-talk

Cross-talk occurs when a pulse of energy is emitted from one transmitter and an object with a certain shape causes the energy to be reflected in the direction of the receiver of a different system. As a countermeasure, the devices are arranged in groups. In order to suppress cross-talk, but also to improve the refresh rate of the entire system, a compromise is made between safety and speed. The more sensors are placed in one group, and thus fired at the same time, the less time is required to poll the entire robot's surroundings; however, this increases the chance of ranging errors due to cross-talk. Therefore, the groups are selected such that the spacing between the individual sensors is optimal and the refresh rate is improved in comparison to firing the sonar systems one by one. The sensors are arranged as shown in Figure 3.3.

Figure 3.3: The groups of sensors operating on the mobile platform: (a) group 1, (b) group 2, (c) group 3, (d) group 4, (e) group 5

Each individual sensor that is a member of the active group receives the start ranging, or 'ping', command. When all sensors within the group have been fired, the first member's ranging information is acquired and interpreted. Subsequently, the following group is initiated by firing the first sensor of this group, followed by the second sensor, and so on. The command to return the ranging information is issued immediately after the ping command, to ensure performance with respect to the refresh rate. When a read command is received by a sensor while it is still ranging, the hexadecimal number 0xFF is returned, meaning the sensor is unable to return information yet. The ranging information is available when the latest echo has returned, or when a time-out has occurred. Therefore, a continuous stream of read commands is sent until ranging information is eventually provided by the sensor. This sequence is first simulated with Matlab, including a random polygon representing an object. The goal is to obtain information and details regarding the sensor's characteristics and the optimal range/analogue-gain setting.
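A Matlab-style sketch of this fire-then-poll sequence; fireSensor and readRange are hypothetical stand-ins for the I2C write/read helpers of Appendix E.2 (StartRanging, RecDistCmSens), and the group indices are illustrative:

    % Fire one group of sensors, then poll each member until it stops
    % answering 0xFF (= still ranging) and returns a valid distance.
    group = [1 5 9 13];                 % example sensor indices forming one group
    for s = group
        fireSensor(s);                  % send the 'ping' (start ranging) command
    end
    range_cm = zeros(size(group));
    for k = 1:numel(group)
        r = 255;                        % 0xFF: sensor not ready yet
        while r == 255
            r = readRange(group(k));    % re-issue the read command until valid
        end
        range_cm(k) = r;
    end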

3.3.2 Refresh rate

The Devantech SRF-08 sensors can be fired every 65 ms (Appendix A). Some data acquisition and interpretation is necessary before the next fire command can be sent. Summing these factors, the time necessary to fire, read and collect the data for one group of simultaneously fired sensors is ∼80 ms. Provided that the sensors operate in groups of 4, see section 3.3.1, the refresh rate of the real-time surroundings of the mobile robot is

t_t = A_g · t = 5 · 80 = 400 ms    (3.4)

where A_g is the number of groups, t the update time in ms and t_t the total time in ms. In the situation where an object is 1.5 m away and for some reason still not detected, the next moment the object can be detected by one sensor is 400 ms later. Or, when the mobile robot is driving at the maximum speed of 0.5 m/s, the object is by then at a distance of 1.3 m. Taking this into consideration, the total refresh rate of the robot's surroundings is sufficient for the detection of and manoeuvring around objects. However, this is a theoretical approach, and to properly prove the previous statement, experiments will be carried out. Should the outcome of the experiments be negative, meaning the refresh rate is not sufficient, the Devantech SRF-08 provides a feature to decrease the range and analogue gain of the device. Changing the range can be done by setting the value of register one to

R = (Reg · 43) + 43    (3.5)

where R is the range in mm and Reg the value of the Range Register. This information is provided by the data sheet of the manufacturer. However, this alteration does not affect the emitted pulse itself, but the time after which the sensor is able to provide the ranging information; in other words, it increases the refresh rate of the sensor without changing anything about the emitted pulses. This causes a problem when operating in a group of sensors: one sensor's pulse can still be producing echoes while its ranging information has already been gathered and another sensor is fired simultaneously, so cross-talk will occur. To suppress this effect, the maximum gain of the analogue stages can be set. This allows the sensor to emit a weaker pulse, causing the effective range to decrease. The table of values to set the analogue gain can be found in the data sheet, Appendix A.
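As a numerical illustration of equations (3.4) and (3.5), the following Matlab lines compute the register value for a desired maximum range and the total refresh time; the 1.5 m target range is an example value only.

% Range register for a desired maximum range (equation 3.5 inverted);
% the 1500 mm target is an example value.
R_desired = 1500;                    % desired maximum range in mm
Reg = round(R_desired/43 - 1);       % R = Reg*43 + 43  =>  Reg = R/43 - 1
R_actual = Reg*43 + 43;              % 1505 mm for Reg = 34

% Total refresh time of the robot's surroundings (equation 3.4):
Ag = 5;                              % number of groups
t  = 80;                             % fire-and-read time per group (ms)
tt = Ag*t;                           % 400 ms for the full ring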

Chapter 4

Simulations

This chapter describes the procedure used to simulate the behavior of the sensors and the results obtained from these simulations.

4.1 Simulation sensor behavior

The behavior of the sensors is simulated by fixing the robot's surface and letting random polygons, representing random objects, pass by in the surroundings of the robot. First, the polygons are created, then randomly chosen and placed somewhere on the map together with the robot. By moving the polygon in a random direction, the polygon, or object, will eventually move into the FOV of one or more sensors. The detection of the object is based on computing the minimal distance between the actual position of the sensor and the object. Figure 4.1 illustrates the behavior of the sensor. As the sensor only provides the distance reading of the object, it is uncertain where the actual object is located in the robot's surroundings. Therefore, illustrative circular arcs are shown providing the possible location of the object within the FOV of the sensor ([15]).

Figure 4.1: Example of sonar object detection

The location, orientation and curvature of the arc are calculated from the distance reading of the sensor, which gives the radius of the arc, and the field of view of the sensor. The outer points of the arc are calculated by the following equations

P_{x,1} = x_s + R cos(θ + α − φ)    (4.1)
P_{y,1} = y_s + R sin(θ + α − φ)    (4.2)
P_{x,2} = x_s + R cos(θ − α − φ)    (4.3)
P_{y,2} = y_s + R sin(θ − α − φ)    (4.4)

where x_s, y_s is the position of the sensor, P_{x,i}, P_{y,i} the outer points on the arc, R the range information, or distance to the object, θ the orientation of the sensor, φ the orientation of the robot with respect to the x, y axes (rad) and α the angle of operation. In the Cartesian coordinate system, the circle with center (a, b) and radius r is the set of points (x, y) such that

(x − a)^2 + (y − b)^2 = r^2    (4.5)

This equation of the circle follows from the Pythagorean theorem applied to any point on the circle, as shown in Figure 4.2.

Figure 4.2: Circle of radius r = 1, center (a, b) = (1.2, −0.5)

Using equation (4.5), the points of the circular arc are calculated by the following set of equations. As

(y − y_s)^2 = R^2 − (x − x_s)^2    (4.6)

we find

y − y_s = +\sqrt{R^2 − (x − x_s)^2}  or  y − y_s = −\sqrt{R^2 − (x − x_s)^2}    (4.7)

Using equation (4.5), for −π < φ, θ, α ≤ π, the points of the arc in the sensor's FOV are calculated by equations (4.8) if and only if (−θ + π/2) + φ + α < π/2 and (−θ + π/2) + φ − α > π/2:

for x_s − R < x < P_{x,1}:  y = y_s − \sqrt{R^2 − (x − x_s)^2}
for x_s − R < x < P_{x,2}:  y = y_s + \sqrt{R^2 − (x − x_s)^2}    (4.8)

with x, y being the points on the circle representing the object's apparent location.

Else, if (−θ + π/2) + φ + α > π/2 and (−θ + π/2) + φ − α < π/2, then:

for P_{x,1} < x < x_s + R:  y = y_s + \sqrt{R^2 − (x − x_s)^2}
for P_{x,2} < x < x_s + R:  y = y_s − \sqrt{R^2 − (x − x_s)^2}    (4.9)

else, if (−θ + π/2) + φ + α > π/2 and (−θ + π/2) + φ − α < −π/2, then:

for P_{x,2} < x < P_{x,1}:  y = y_s − \sqrt{R^2 − (x − x_s)^2}    (4.10)

else:

for P_{x,1} < x < P_{x,2}:  y = y_s + \sqrt{R^2 − (x − x_s)^2}    (4.11)
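The construction above translates directly into Matlab. The sketch below, with example values for the sensor pose and reading, computes the outer points of the arc with equations (4.1)–(4.4); it sweeps the beam angle to generate the candidate points, which avoids the per-quadrant case distinction of equations (4.8)–(4.11) but yields the same arc.

% Arc of possible object locations for one sensor reading (example values).
xs = 0; ys = 0;                 % sensor position
theta = pi/3;  phi = 0;         % sensor and robot orientation
alpha = 14*pi/180;              % half angle of operation
R = 1.2;                        % range reading (m)

% Outer points of the arc, equations (4.1)-(4.4):
P1 = [xs + R*cos(theta + alpha - phi), ys + R*sin(theta + alpha - phi)];
P2 = [xs + R*cos(theta - alpha - phi), ys + R*sin(theta - alpha - phi)];

% Candidate object locations: sweep the beam angle over the FOV.
a = linspace(theta - alpha - phi, theta + alpha - phi, 50);
plot(xs + R*cos(a), ys + R*sin(a), 'b-', [P1(1) P2(1)], [P1(2) P2(2)], 'ro');
axis equal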


The conditions above for the use of equation (4.8), (4.9), (4.10) or (4.11) to calculate the points on the circular arc are a consequence of the orientation of the robot's surface and the sensors. When an object is detected by a sensor located at the rear of the robot (and the robot is moving straight ahead in the positive y direction), the points representing the possible location of the object are only the negative part of the square root in equation (4.7). A similar approach is used for the situation where the right outer edge of the angle of operation lies at a higher angle than 90 degrees (where straight ahead is 0 degrees) and the left outer edge lies at a lower angle than 90 degrees, or vice versa. The resulting plots of a couple of polygons moving through the FOV of the sensors at random speed, position and orientation are shown in Figure 4.3.


(a) Circular object (b) Cornered object (c) Polygon object

Figure 4.3: Sensor simulation of behavior on different objects

This shows the problem originating from the sensor's ability to provide only distance information. In Figure 4.3a, where two sensors return a distance reading, the circular object is situated entirely inside the FOV of the left sensor. However, due to the overlap, the right sensor returns distance information as well. This means that accurate localization of the object is not possible using this information alone. A solution to this problem can be to turn the robot and use the timing of when a distance reading is obtained to calculate the most likely location of the object. Another solution is to use the range readings of 2 sensors to obtain the angle of elevation by:

α_1 = cos^{-1}((d^2 + R_1^2 − R_2^2) / (2 d R_1))    (4.12)

α_2 = π − cos^{-1}((d^2 + R_2^2 − R_1^2) / (2 d R_2))    (4.13)

where α_x is the angle of elevation of sensor x, R_x the range reading of sensor x and d the distance between both sensors. Equations (4.12) and (4.13) are only accurate when a sharply cornered or small object is used, such that the range readings of both sensors relate to the exact same point of the object. Solving this problem, however, is not the goal of this project. The navigation map will be created from the images of the 3-D camera and the readings of the SLRF; the sonar system will play a supporting role during mapping. The problem will only be taken into account when unsatisfactory results are obtained during the experiments, shown in chapter 6. Further information on target localization and classification can be found in [9].
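A small Matlab sketch of equations (4.12) and (4.13); the baseline and range readings below are example values.

% Angles of elevation from two overlapping range readings,
% equations (4.12) and (4.13); the values below are examples only.
d  = 0.10;                               % distance between the sensors (m)
R1 = 0.80;  R2 = 0.84;                   % range readings of both sensors (m)
a1 = acos((d^2 + R1^2 - R2^2)/(2*d*R1));       % equation (4.12)
a2 = pi - acos((d^2 + R2^2 - R1^2)/(2*d*R2));  % equation (4.13)
% Apparent target position, with sensor 1 at the origin and sensor 2 at (d, 0):
target = [R1*cos(a1), R1*sin(a1)];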


4.2 Simulation robot navigation with sonar

As the simulations with a fixed platform were successful, the ability to map the mobile robot's environment is simulated by placing the robot inside a randomly created 2-dimensional map and letting the platform move about while avoiding collisions. The maps are created with Microsoft Paint and are shown in Figure 4.4.

(a) Office (b) Random environment

Figure 4.4: Maps for simulation of moving mobile robot

4.2.1 Kinematics of differential steering

The platform used in phase 1 of the TSR project is the Pioneer 3-DX. The Pioneer platform (Figure 4.5) is equipped with a differentially steered drive system. This means that two wheels are mounted on a single axis and are independently powered and controlled, thus providing both drive and steering functions, and that the mobile robot moves about in an intuitive manner. If both wheels turn in tandem, the robot moves in a straight line. If one wheel rotates faster than the other, the robot follows a curved path inward towards the slower wheel. If the wheels turn in opposite directions at equal speed, the robot pivots.

Figure 4.5: The Pioneer platform with differential steering

To move the platform about in the environment, the robot should be able to navigate depending on the input of the sensors. To do so, the equations derived in the following section will be used. To derive a model of the system, the speeds of the wheels are assumed to be constant. If both wheels move at the same velocity, the robot travels in a straight line and the equation for its trajectory is trivial. The velocities of the left and right wheel of the robot determine the direction and velocity of the robot's platform. The velocity of the mobile robot equals the combined velocity of the wheels divided by two ([16]):

v_t = (v_L + v_R) / 2    (4.14)

where v_L is the velocity of the left wheel (m/s), v_R the velocity of the right wheel, and v_t the total velocity of the mobile robot. The following is considered when the wheels travel at different velocities (Figure 4.6). As the robot changes position, all points on the robot may be in motion. To develop a forward kinematic equation for the motion of a differential steering system, a frame of reference is chosen in which an arbitrarily chosen point is treated as stationary. All other points in the system are treated as moving relative to the reference point. The robot is considered a rigid body.

Figure 4.6: Wheels at different velocities

The point that we select as our reference is the center point of the left wheel. This is the point where an idealized wheel makes contact with the floor. Again, all motion in this frame of reference is treated relative to the left-wheel point. Because the right wheel is mounted perpendicular to the axle, its motion within the frame of reference follows a circular arc with a radius corresponding to the length of the axle (from hub center to hub center). Now, the reference point itself may be in motion, so the actual path of the right wheel will not necessarily correspond to that particular circular arc. But the change in orientation φ is not restricted to the robot's frame of reference: because we treat the robot as a rigid body, all points in the system undergo the same change in orientation. If we pivot the robot 10 degrees about the left wheel, all points undergo a 10 degree change in orientation. And any change in orientation in the special frame of reference is, in fact, equivalent to that of the more general case. Based on these observations, we can derive a differential equation describing the change in orientation with respect to time. The definition of an angle given in radians is the length of a circular arc divided by the radius of that circle. The relative velocity of the right wheel gives us that length of arc per unit time, and the length from the wheel to the reference point gives us the radius [8]. Combining these facts, we have:

dφ/dt = (v_L − v_R) / d    (4.15)

where d is the distance between the mobile robot's wheels (m), t the time in seconds and φ the orientation in rad. Integrating (4.15) and taking the initial orientation of the robot as φ(0) = φ_0, we find a function for calculating the robot's orientation as a function of wheel velocity and time:

φ(t) = φ_0 + ((v_L − v_R) / d) t    (4.16)

Combining equation (4.14) with the knowledge of the orientation as a function of time, the following differential equations are obtained:

dx/dt = v_t cos φ(t)    (4.17)
dy/dt = v_t sin φ(t)    (4.18)

Integrating over 0 to t, using equation (4.16) and applying the initial position of the robot x(0) = x_0, we find:

x(t) = \int_0^t v_t cos φ(t) dt    (4.19)
     = \int_0^t v_t cos(((v_L − v_R)/d) t + φ_0) dt    (4.20)
x(t) = x_0 + (v_t d / (v_L − v_R)) [sin(((v_L − v_R)/d) t + φ_0) − sin φ_0]    (4.21)

Equally, we find for y(t), by applying the initial position y(0) = y_0:

y(t) = y_0 − (v_t d / (v_L − v_R)) [cos(((v_L − v_R)/d) t + φ_0) − cos φ_0]    (4.22)

The equations above can be simplified for the situation where the robot is moving with both wheels rotating at equal speed:

x_new = v_R · t · cos φ + x_old    (4.23)
y_new = v_L · t · sin φ + y_old    (4.24)
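A Matlab sketch of this pose update, following equations (4.14), (4.16) and (4.21)–(4.24); the wheel base, wheel speeds and time step are example values.

% Differential-drive pose update, equations (4.14), (4.16), (4.21)-(4.24).
d  = 0.33;                   % distance between the wheels (m, example)
vL = 0.20;  vR = 0.25;       % wheel velocities (m/s, example values)
t  = 0.1;                    % time step (s)
x0 = 0; y0 = 0; phi0 = 0;    % initial pose

vt = (vL + vR)/2;                                    % equation (4.14)
if vL == vR                                          % straight line, (4.23)-(4.24)
    x = x0 + vt*t*cos(phi0);                         % here vt = vL = vR
    y = y0 + vt*t*sin(phi0);
    phi = phi0;
else
    phi = phi0 + (vL - vR)/d*t;                      % equation (4.16)
    x = x0 + vt*d/(vL - vR)*(sin(phi) - sin(phi0));  % equation (4.21)
    y = y0 - vt*d/(vL - vR)*(cos(phi) - cos(phi0));  % equation (4.22)
end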

4.2.2 Results navigation simulations

The mobile robot is now able to manoeuvre in its environment. To avoid hitting obstacles, all sonar sensors should be able to detect, for example, a 'wall', 'desk' or 'chair', represented in the environment by black lines forming the circumference of the object. When the simulation is started, an x, y coordinate system is obtained, giving logic '0' at an x, y coordinate with a white pixel, and '1' at a black pixel. Then, the surface of the FOV of the ranging sensors is determined to identify objects being sensed inside the FOV of the active sensor. If an object is inside the FOV of that sensor, the circular arc representing the detected object is shown. To obtain a 2-D map of the environment as scanned by the sonar sensors, a separate empty map is included that is filled with circular arcs while the mobile robot manoeuvres through the map. To obtain a simulation that is as close to reality as possible, the timing and grouping functionality is implemented: all groups are scanned separately by issuing a ranging command, waiting 80 ms, after which a read command is issued for every sensor within the group. The results are shown in Figures 4.7 and 4.8.
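For illustration, the conversion of a painted map to such a logic grid could be done along the following lines in Matlab; the file name and the threshold are assumptions.

% Convert the painted map to a logic grid: '0' for white (free) pixels,
% '1' for black (occupied) pixels. File name and threshold are examples.
img = imread('office_map.bmp');
if ndims(img) == 3
    img = rgb2gray(img);     % requires the Image Processing Toolbox
end
grid = img < 128;            % logical 1 at (near-)black pixels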


(Plot title: "Robot successfully reached its goal!")

Figure 4.7: Simulation of mobile robot navigation throughout an office

This result shows that the array of sensors is able to map a 2-dimensional environment relatively accurately. The walls, table and circular object are all clearly distinguishable. However, small gaps and alleys are troublesome due to the aforementioned wide FOV and lack of angle information. When small openings appear in the neighborhood of the mobile robot, the sensors detect the objects left and right of the opening (or gap), causing the opening to appear 'closed' on the navigation map. This problem can be solved by:
− using the SLRF and/or 3-D camera, as both are able to detect openings;
− letting the robot approach the observed object, possibly revealing a clear zone when the robot is close enough;
− making use of multiple bursts. The other bursts could detect an object further away, from which the software could infer the presence of a door opening (or gap).



Figure 4.8: Simulation of mobile robot navigation throughout a map

Figure 4.8 shows that wider gaps and alleys are better interpreted by the sensors. Mapping appears to be accurate. Walls have a 'danger zone', as they appear more inward than they are in practice, but this issue can be used to advantage with respect to safety. All simulations are built in Matlab. The M-files, the programs and functions that contain the code to run the simulations, are shown in Appendix E.1.

Chapter 5

Data Acquisition

To acquire high resolution data describing the robot's physical surroundings from the Devantech sensors, the embedded I2C communication bus is used. I2C is a two-wire, bi-directional serial bus that provides a simple and efficient method of data exchange between devices. It is most suitable for applications requiring occasional communication over a short distance between many devices. The I2C standard is a true multi-master bus including collision detection and arbitration that prevents data corruption if two or more masters attempt to control the bus simultaneously [5].

5.1 The physical I2C bus

The I2C bus consists of two lines, the Serial Clock Line (SCL) and the Serial Data Line (SDA). SCL is used to synchronize all data transfers over the I2C bus. All devices connected to these two signals must have open drain or open collector outputs. This means a chip can drive its output low, but not high. Both lines must be pulled up to VCC (positive voltage) by external resistors. Data is transferred between a Master and a Slave synchronously to SCL on the SDA line on a byte-by-byte basis. See Figure 5.1 for an illustration of the I2C networking principle.

Figure 5.1: Overview I2C connection

The pull-up resistors are used for noise immunity as well; a value between 1.5 kΩ and 10 kΩ is sufficient. The master in the I2C environment is always the device that drives the SCL clock line. The slaves are the devices that respond to the master. A slave is not able to initiate a transfer over the bus. Usually one master is permitted per bus, while up to 16 slaves can be connected to the bus.

5.2 PC to I2C adapter

The following information is based on the manufacturer's data sheet [7]. The PC to I2C adapter (abbreviated to I2C2PC, Figure 5.2), from [6], is used for the communication between the I2C bus and a computer. It features both a USB and an RS232 serial interface and uses a BL233B I2C-Serial Integrated Circuit (IC). The adapter provides 3 separate I2C buses, so all sonar devices can be driven by the adapter. It can be USB or externally powered and contains Vdd and Ground (GND) lines to power the I2C devices. The I2C2PC is not an I2C slave or an I2C monitor; it is a master on the I2C/SPI bus only.

Figure 5.2: The PC to I2C Adapter

The I2C2PC appears as a serial device when either USB or RS232 is used. The default parameters of the I2C2PC device are:
• baudrate: 57600
• parity: none
• databits: 8
• stopbits: 1
• handshake: Request To Send/Clear To Send (RTS/CTS)
The I2C2PC adapter uses a Future Technology Devices International (FTDI) chip to handle all USB communications, which allows the use of FTDI's Virtual Component Object Model (COM) Port drivers. Once the drivers are installed, communication with any I2C device connected to the I2C2PC adapter can be completed through the use of built-in commands. These commands are set forth in section 5.3. The adapter contains 1.5 kΩ pull-up resistors on all three buses.
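With these parameters, opening the adapter from Matlab could look as follows; the COM port name is an assumption for a particular machine, and Matlab's serial interface (as available in 2010) is used.

% Open the I2C2PC adapter with its default settings; 'COM3' is an example.
s = serial('COM3', 'BaudRate', 57600, 'Parity', 'none', ...
           'DataBits', 8, 'StopBits', 1, 'FlowControl', 'hardware');
fopen(s);
% ... communicate with the sensors (section 5.5) ...
fclose(s); delete(s);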

5.3 Protocol and communication

To communicate with the I2C devices, the standard communication consists of four parts:
1. START signal generation
2. Slave address transfer
3. Data transfer
4. STOP signal generation

Figure 5.3: Overview I2C communication protocol


The data on the SDA line must be stable during the HIGH period of the clock. The HIGH or LOW state of the data line can only change when the clock signal on the SCL line is LOW (see Figure 5.4) [11].

Figure 5.4: Bit transfer on I2C bus

5.3.1 Start signal

When the bus is free/idle, meaning no master device is engaging the bus (both SCL and SDA lines are high), a master can initiate a transfer by sending a START signal. A START signal, usually referred to as the S-bit, is defined as a high-to-low transition of SDA while SCL is high. The START signal denotes the beginning of a new data transfer. A Repeated START is a START signal without first generating a STOP signal. The master uses this method to communicate with another slave or the same slave in a different transfer direction (e.g. from writing to a device to reading from a device) without releasing the bus.

5.3.2 Slave address transfer

The first byte of data transferred by the master immediately after the START signal is the slave address. This is a seven-bit calling address followed by a Read/Write (R/W) bit. The R/W bit signals the slave the data transfer direction. No two slaves in the system can have the same address. Only the slave with an address that matches the one transmitted by the master will respond by returning an acknowledge bit, pulling the SDA low at the 9th SCL clock cycle.

5.3.3 Data transfer

Once successful slave addressing has been achieved, the data transfer can proceed on a byte-by-byte basis in the direction specified by the R/W bit sent by the master. Each transferred byte is followed by an acknowledge bit on the 9th SCL clock cycle. If the slave signals a No Acknowledge, the master can generate a STOP signal to abort the data transfer or generate a Repeated START signal and start a new transfer cycle. If the master, as the receiving device, does not acknowledge the slave, the slave releases the SDA line for the master to generate a STOP or Repeated START signal.

5.3.4 Stop signal

The master can terminate the communication by generating a STOP signal. A STOP signal, usually referred to as the P-bit, is defined as a low-to-high transition of SDA while SCL is at logical '1'.


5.4 The I2C software protocol

The first message, the start sequence indicating the start of a transaction, will alert all slave devices on the bus. Next, the master sends out the device address. The slave that matches this address will continue with the transaction; any others will ignore the rest of this transaction. Having addressed the slave device, the master must address the internal location or register number inside the slave that it wishes to write to or read from. The SRF08 contains 36 accessible internal registers. Having sent the I2C and internal register address, data transfer can proceed. The first data byte sent is placed at the requested register number; subsequently sent data bytes are automatically placed at incremented register numbers. When the master has finished writing all data to the slave, it sends a stop sequence, which completes the transaction. In short, the steps to initiate a data transaction are:

1. Send a start sequence

2. Send the I2C address of the slave with the R/W bit low (even address)

3. Send the internal register number to write to

4. Send the number of data bytes to be sent

5. (Optionally) Send any data bytes

6. Send the stop sequence

As mentioned above, when another "S" is sent during the command frame, the I2C slave will interpret this as a repeated start (re-start) sequence, after which it is possible for the master device to request a given amount of data from the given address:

1. Send a start sequence

2. Send the I2C address of the slave with the R/W bit low (even address)

3. Send the internal register number to write to

4. (Optionally) Send the number of data bytes to be sent

5. (Optionally) Send any data bytes

6. Send the start sequence

7. Send the I2C address of the slave with the R/W bit high (address + 1)

8. Send the number of data bytes to be received

9. Send the stop sequence

The re-start sequence forces the I2C to reset its internal bus logic. Items 6 and 7 can be replaced by the "R" command (read sequence) when only data needs to be received from the I2C slave.


5.5 Example communication

This section shows an example and overview of the communication between the PC and the Devantech sonar sensors. In this example the ranging command is issued (Table 5.1), after which the ranging information of the first echo is requested (Table 5.2).

Byte type: I2C-Start | Device Address + R/W bit | Device Internal Address | Number of Data Bytes | Data Bytes | I2C-Stop
Example:   "S"       | 0xE0                     | 0x00                    | 0x01                 | 0x51       | "P"
To send:   0x53      | 0x45 0x30                | 0x30 0x30               | 0x30 0x31            | 0x35 0x31  | 0x50
Meaning:   Start datastream | Sonar I2C address | Sonar command register  | One command byte follows | Ranging, cm | Stop datastream

Table 5.1: Send Ranging Command for USB to I2C

What has to be taken into account is that a hexadecimal address, for example '0xE0', is seen as a set of 2 characters. These (ASCII) characters need to be sent in hexadecimal format. Therefore, in this case, the character 'E' equals '0x45' and '0' equals '0x30'. [9]

Byte type: I2C-Start | Device Address + R/W bit | Device Internal Address | I2C-Start | Device Address + R/W bit | Number of Data Bytes | I2C-Stop
Example:   "S"       | 0xE0                     | 0x02                    | "S"       | 0xE1                     | 0x01                 | "P"
To send:   0x53      | 0x45 0x30                | 0x30 0x32               | 0x53      | 0x45 0x31                | 0x30 0x31            | 0x50
Meaning:   USB-I2C command | Sonar I2C address  | Distance of first echo  | Re-start  | Sonar I2C address + read bit (+1) | Read one byte | Stop datastream

Table 5.2: Receive Ranging Command for USB to I2C

As 20 sensors are needed to obtain full coverage of the surroundings of the robot, and only 16 devices are supported on one I2C bus, two buses need to be used. The active bus can be selected by sending the command ’G#’ where # is the bus number (# ∈ {1, 2, 3}), or in hexadecimal format ’0x47NN’, with NN ∈ {30, 31, 32}. The sensors are mounted to the robot in the order shown in Figure 5.5, where the sensors in the lower right corner (with addresses ’E0’, ’E2’, ’E4’, and ’E6’) are connected to bus 2, and the others to bus 1.
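Combining Tables 5.1 and 5.2 with the bus selection command, one ranging cycle for the sensor at address E0 on bus 2 could be issued as sketched below; s is the serial object of section 5.2, the 80 ms wait follows section 3.3, and the reply handling is schematic (the exact reply framing is given in the adapter's data sheet [7]).

% One ranging cycle for the sonar at address 0xE0 on bus 2. fprintf sends
% the command characters as ASCII, as required (section 5.5).
fprintf(s, 'G2');           % select I2C bus 2
fprintf(s, 'SE0000151P');   % Table 5.1: start ranging, result in cm
pause(0.08);                % allow ~80 ms for the ranging to complete
fprintf(s, 'SE002SE102P');  % as Table 5.2, but reading 2 bytes: registers 2 and 3
reply = fscanf(s);          % ASCII-hex reply; framing per data sheet [7]
range_cm = hex2dec(reply(1:4));  % assuming a 4-character hex reply (hi, lo byte)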


Figure 5.5: Overview order of sensors mounted to mobile robot

The software used to communicate with the sonar systems is Matlab, which supports serial (RS232) communication. The M-files built to handle the communication and data acquisition are attached in Appendix E.2. Figure 5.6 shows the flow diagram of the communication with the Devantech sonar sensors in the situation where 4 sensors are placed in one group.

Figure 5.6: Flow diagram of communication with sensors when ranging is issued

The flow diagram illustrates that first, ranging is initiated for all sensors within one group, after which the first sensor that received the ranging command receives the first read command. When this sensor returns 0xFF, the software repeats the previous step. These steps continue until an actual ranging distance is received. This is repeated until all range readings are received, after which the following group of sensors is activated. All I2C commands used in this setup are stated in Appendix D.

Chapter 6

Experiments

To determine the practical behavior of the sensors and to investigate the difference between the theoretical approach and the actual behavior, experiments are carried out. From these experiments conclusions can be drawn concerning the assumptions made in the theoretical approach, the calculations, the positioning and orientation of the sensors, the timing and the sensitivity to cross-talk. The first experiments are carried out using one or two sensors to determine the behavior and characteristics of the sensors in ranging. Then, the sensors are mounted to the robot according to the calculated positions and orientations, to determine the behavior of the sensors in a sensor array.

6.1 Sensor characteristics

6.1.1 Measured straight-line distance

The first experiment determines the error in the distance measurement of the sensors in a straight line. The experiment consists of one sensor pointed perpendicular to a wall. The sensor is mounted to a 44 cm tall table, and the table is moved closer to and further from the wall to determine the error and/or offset of the sensor at different distances. The results are shown in Table 6.1.


Measurement | Real Distance (cm) | Measured Distance (cm) | Error
1 | 12 | 12 | 0.00%
2 | 22 | 21 | 4.55%
3 | 33 | 34 | -3.03%
4 | 38 | 37 | 2.63%
5 | 45 | 46 | -2.22%
6 | 55 | 56 | -1.82%
7 | 58 | 59 | -1.72%
8 | 69 | 68 | 1.45%
9 | 71 | 71 | 0.00%
10 | 80 | 80 | 0.00%
11 | 89 | 88 | 1.12%
12 | 90 | 90 | 0.00%
13 | 96 | 97 | -1.04%
14 | 104 | 105 | -0.96%
15 | 113 | 113 | 0.00%
16 | 121 | 121 | 0.00%
17 | 132 | 108 | 18.18%
18 | 133 | 120 | 9.77%
19 | 163 | 163 | 0.00%
20 | 169 | 107 | 36.69%

Table 6.1: Measurement results of ranging a wall directly in front of the sensor

The mean error derived from these experiments is 3.18%. The measurements that did not produce unambiguous results have been omitted from Table 6.1; see Appendix C for the unprocessed measurement table. It can safely be said that the sonar system is sufficiently accurate for straight-line measurements within the scope of this project. Higher accuracy, for example down to millimetres, is not necessary, as the performance of the robot would not improve significantly: the robot has to perform actions on the feedback of the sensors, with centimetre-level sensitivity. The following figure shows a chart comparing the real distance to the measured distance.

Figure 6.1: Comparison chart of real distance to measured distance

Clearly, at larger distances the errors become somewhat larger; however, this can be a consequence of the height of the sensor above the floor. As the sensor emits acoustic waves in a conic shape, it is plausible that the floor reflects the waves before the wall does; in other words, the distance to the floor is measured. To support this theory, equation (6.1) is used to calculate the angle of elevation from the sensor to the floor:

α = (180/π) sin^{-1}(44/107) ≈ 24    (6.1)

According to the data sheet, an object at an angle of 24 degrees can still be detected.
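The same check in Matlab, using the measured values from the experiment:

% Angle of elevation from the sensor (44 cm above the floor) to the point
% ranged at 107 cm, equation (6.1):
alpha = (180/pi)*asin(44/107)   % approximately 24 degrees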


6.1.2 Cross-talk

To investigate the sensors' sensitivity to cross-talk, two sensors are used, pointed in parallel, perpendicular to a wall. The sensors are again mounted to a 44 cm tall table and are placed 5 cm apart. First, a ranging command is sent to the first sensor, then to the second, and finally the ranging registers of both sensors are read. The second experiment is handled vice versa, that is, the second sensor is fired first, then the first. The results of both experiments are shown in Table 6.2.

(a) First sensor fired first:
Measurement | Sensor 1 Distance (cm) | Sensor 2 Distance (cm)
1 | 57 | 54
2 | 57 | 2
3 | 57 | 3
4 | 57 | 53
5 | 57 | 51
6 | 57 | 51
7 | 57 | 3
8 | 57 | 32
9 | 57 | 38
10 | 57 | 51
11 | 57 | 37
12 | 57 | 53
13 | 57 | 48
14 | 57 | 37
15 | 57 | 2
16 | 57 | 6
17 | 57 | 36
18 | 57 | 46
19 | 57 | 36
20 | 57 | 33

(b) Second sensor fired first:
Measurement | Sensor 1 Distance (cm) | Sensor 2 Distance (cm)
1 | 57 | 57
2 | 53 | 57
3 | 49 | 57
4 | 54 | 57
5 | 53 | 57
6 | 54 | 57
7 | 36 | 57
8 | 51 | 57
9 | 39 | 57
10 | 43 | 57
11 | 36 | 57
12 | 51 | 57
13 | 52 | 57
14 | 54 | 57
15 | 53 | 57
16 | 49 | 57
17 | 42 | 57
18 | 52 | 57
19 | 50 | 57
20 | 40 | 57

Table 6.2: Cross-talk measurements

Figure 6.2: Resulting graph of cross-talk measurement

Both experiments show that the sensor fired second returns no unambiguous results. That sensor clearly receives echoes of the sensor fired first and is therefore unable to accurately measure the object of interest. To prevent this issue, as stated in section 3.3.1, the sensors are fired in groups. The theoretical approach of firing the sensors in groups of 4 has to be put to the test, to investigate whether the cross-talk is suppressed with this arrangement or whether fewer sensors need to be fired simultaneously (section 6.2.1).

6.1.3 Angular measurements to pole

To investigate the angular performance, or accuracy, of the sensors, an 8x8 cm pole is placed at an angle to a sensor mounted to the 44 cm tall table. Figure 6.3 shows the set-up of the experiment. The table is moved in the direction of the arrows to investigate the maximum angle at which the sensors can still obtain ranging results of the object.

Figure 6.3: Plan of angular measurement of square pole

Results are shown in Table 6.3.

Measu- Distance Deviation Range Corre- rement D (cm) L (cm) reading sponding R (cm) Angle α 1 55 51 74 34.6 2 58 53 76 34.9 3 73 68 87 38.0 4 75 70 88 38.5

Table 6.3: Results of angular measurement of square pole

Only the maximum values for angular ranging are shown; the results of all measurements are shown in Appendix C. Corresponding angles are calculated by

α = (180/π) tan^{-1}(L/R)    (6.2)

Clearly, using a pole as an object returns optimal values for the maximum angle. The achieved results are even more optimistic than the FOV stated by the manufacturer. Because of its size, shape and rounded corners, the pole is able to reflect the acoustic wave in multiple directions, inducing an optimal angle of operation for the sensors. However, less optimal objects, like straight walls (when measured at an angle), need to be used in ranging experiments to test the angle of operation of 14 degrees assumed during the simulations. The following section shows the results of those experiments.

6.1.4 Maximum angle measurements to wall

To acquire the worst-case angle of operation in practice, the sensor, still mounted to the table, is placed at a certain angle to a wall (Figure 6.4). First, the angle of the table/sensor with respect to the wall is measured by

φ = (180/π) cos^{-1}(D/L)    (6.3)

Then, the distance perpendicular to the wall is measured, and the range information returned by the sensor is used to determine the angle of operation. By moving the table away from the wall, while still maintaining the angle of the table to the wall, a new range is found and the angle of operation (FOV) can then be calculated by

α = φ − (180/π) cos^{-1}(D/R)    (6.4)

The experimental results are shown in Table 6.4.
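In Matlab, equations (6.3) and (6.4) read as follows; the distances below are example values, not taken from Table 6.4.

% Angle of operation from an angled wall measurement, equations (6.3)-(6.4).
% D, L and R are example values.
D = 72;  L = 95;  R = 74;            % perpendicular, table and ranged distance (cm)
phi   = (180/pi)*acos(D/L);          % table/sensor angle to the wall, (6.3)
alpha = phi - (180/pi)*acos(D/R);    % angle of operation, (6.4)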

Figure 6.4: Plan of angular measurement of a wall


Measu- Distance Range Corre- Angular Measurements rement from R(cm) sponding 90 wall angle α 80 D(cm) 1 19 23 3,4 70 2 25 29 7,2 3 30 34 9,6 60 4 36 40 11,8 5 44 47 17,1 50 6 48 51 17,9 40

7 55 58 19,1 Distance to Object (cm) 8 62 65 20,2 30 9 72 74 24,3 10 81 84 22,3 20 0 5 10 15 20 25 Angle (degrees) Table 6.4: Measurement results of rang- ing a wall in angular direction with re- Figure 6.5: Resulting graph of angular spect to the sensor measurement The resulting graph, Figure 6.5, has a strong resemblance to a conic shape, if we take in mind that only one sensitive side of the sensor is measured. Comparing these results to the beam pattern shown in the Figure 6.6, the measured angles are smaller. This proves that the manufacturer is supplying optimistic values, or at least the object used in those tests are easier to be detected by the sensors. The assumptions made in section 3.1, are not far from practice. To be certain, the arrangement and amount of sensors obtained by the calculations are maintained for the experiments. This way, the sensitivity to errors for difficult objects (hard to range), errors in placement and mounting, and errors due to varying ambient conditions is reduced. The following section states the experiments carried out with all the sensors mounted to the platform.

Figure 6.6: Beam Pattern according to data sheet

6.2 Sensors mounted to robot platform

To investigate the behavior of the sensors operating in a ring in real time, and to approach the actual implementation, the sensors are temporarily mounted with tape to the robot's platform at their calculated positions and orientations (Figure 6.7). The goal is to investigate the ability of the sensors to range (small) objects while moving, to range corners (even when a wall is made of glass) and to detect doorways. To do so, first the grouping principle of 4 sensors per group is implemented and the detection of a chair is carried out.


Figure 6.7: Overview of mobile robot platform with Devantech SRF-08 sonar sensors mounted to platform for experimental purposes

This first experiment will also shed light on the cross-talk problem, and whether the grouping principle is sufficient or has to be changed.

6.2.1 Cross-talk

The robot is placed in front of a chair at approximately 2 meters distance. The experiments take place inside an office, so multiple objects are situated throughout the robot's environment. By actuating the robot to a velocity of about 0.2 m/s, the sensors are able to range objects standing 10 to about 180 cm away relatively accurately. However, when objects are further away, or when no object is present (within the theoretical maximum range of 6 meters), the sensors appear to range an object at about 2 m. A trivial explanation for this is that the sensors are not mounted accurately and definitely not tilted to their theoretical optimal orientation (of about 20 degrees). Unfortunately, the limitation of time prevents the correct mounting of the sensors by means of a neat mechanical solution. Therefore, the given problem is neglected and results are obtained by ranging up to about 180 cm, which is already above specifications. With the use of 5 groups (Table 6.5a) the ranging is executed and results are obtained. The results will be shown at my presentation by means of a short movie and attached to the digital version of this report. The results show that the sensors measure different distances (mostly around 20 cm in difference) with every new burst of acoustic waves. Therefore, the grouping of 2 (Table 6.5b) was implemented and put to the test. The distance measurements obtained with the grouping of 2 are significantly more accurate, and it can therefore be concluded that cross-talk is still an issue, even with the use of grouping (of 4). The problem with the use of groups of 2, however, is that the time to range the entire environment of the robot is doubled. Therefore, changing the analogue gain and range is inevitable.


(a) Five groups of four sensors:
Group | Bus | Number | Address
1 | 1 | 1 | E0
1 | 1 | 11 | F4
1 | 2 | 20 | E6
1 | 1 | 10 | F2
2 | 1 | 3 | E4
2 | 1 | 13 | F8
2 | 2 | 18 | E2
2 | 1 | 8 | EE
3 | 1 | 5 | E8
3 | 1 | 15 | FC
3 | 1 | 16 | FE
3 | 1 | 6 | EA
4 | 1 | 7 | EC
4 | 2 | 17 | E0
4 | 1 | 14 | FA
4 | 1 | 4 | E6
5 | 1 | 9 | F0
5 | 2 | 19 | E4
5 | 1 | 12 | F6
5 | 1 | 2 | E2

(b) Ten groups of two sensors:
Group | Bus | Number | Address
1 | 1 | 1 | E0
1 | 2 | 20 | E6
2 | 1 | 3 | E4
2 | 1 | 18 | E2
3 | 1 | 5 | E8
3 | 1 | 16 | FE
4 | 1 | 7 | EC
4 | 1 | 14 | FA
5 | 1 | 9 | F0
5 | 1 | 12 | F6
6 | 1 | 11 | F4
6 | 1 | 10 | F2
7 | 1 | 13 | F8
7 | 1 | 8 | EE
8 | 1 | 15 | FC
8 | 1 | 6 | EA
9 | 2 | 17 | E0
9 | 1 | 4 | E6
10 | 2 | 19 | E4
10 | 1 | 2 | E2

Table 6.5: Sensor grouping overview

Experiments are needed to acquire the optimal gain and range settings, as a compromise between refresh rate and gain. This requires more time than I unfortunately have available, and it is therefore set forth in chapter 7, future work and recommendations.

6.2.2 Ranging a corner

The ability of the array of sensors to range a corner is investigated by placing and moving the robot towards a corner with a concrete wall on one side and a wall of glass on the other (Figure 6.8). This also sheds light on the behavior of the sensors when sensing see-through objects.

Figure 6.8: Overview of robot and mounted sensors ranging the corner with wall of glass

The results of the experiments are shown in Figure 6.9. The screenshots of the ranging data of the sensors (similar to the plots in chapter 4) are taken after placing the robot at respectively 60, 40, 30 and 20 cm from one side of the corner, while being 35 cm away from the other wall.



(a) Corner at 60 cm distance (b) Corner at 40 cm distance


(c) Corner at 30 cm distance (d) Corner at 20 cm distance

Figure 6.9: Results of ranging a corner with wall of glass

The results clearly show that sensing a corner is possible, provided that the sensors are mounted at their exact calculated position and orientation. The wall of glass presents no issue for the sonar devices.

6.2.3 Ranging a doorway

An accuracy problem for sonar sensors is the inability to determine the exact location of a ranged object. Because of the FOV, the representation of the apparent location of the sensed object will be a circular arc at the ranged distance, as explained in section 4.1. This problem presents difficulties in ranging gaps like small corridors and doorways. To illustrate the problem, experiments are carried out by placing the robot with the mounted sensors straight in front of a doorway. The results are shown in Figure 6.10.

(a) Doorway at 60 cm distance (b) Doorway at 40 cm distance

Figure 6.10: Results of ranging a doorway

Figure 6.10a presents the inability of the sensors to locate the doorway. The sensors mounted at the front end of the mobile platform all return similar ranging distances, indicating that the sensors received an acoustic wave reflected by the wall surrounding the doorway. After translation of the robot to about 40 cm away from the doorway, the sensor in the front center of the robot suddenly indicates no object at 40 cm, indicating a clear path at that distance: the doorway has been detected. Therefore, the conclusion can be drawn that to detect a doorway (with these dimensions), the robot should approach a sensed object to about 40 cm. A clear pathway should be detectable from a larger distance, as the width of a pathway is usually greater than that of a doorway. A simpler approach, as the sensors' initial goal is to support the SLRF and 3-D camera, is to include the ranging results from those sensors to accurately map the environment. Another option is to use multiple bursts, which could possibly help to identify doorways. However, using multiple bursts has the potential to capture echoes from objects that are not of any interest, inducing ranging errors.

Chapter 7

Concluding Remarks and Future Work

The goal of this internship was to analyse, model, simulate and implement a sensor for close-range applications for the purpose of the TSR robot. The chosen sonar sensor appears to have the appropriate characteristics for close-range applications on the mobile platform of the TSR robot. The angle of operation, the accuracy, the networking abilities and the minimum/maximum range are all sufficient.

The straight-line measurements are successful in proving the accuracy of the sensors. The angle of operation assumed during the simulations approximates the actual angle of operation in reality, as obtained with the angular measurements. The behavior of the sensors in a ring, mounted to the robot's platform, appears to be sufficient for mapping and navigation, as shown by the simulations and experiments. However, the experiments also show that mounting the sensors at their optimal position, orientation and tilt is key to achieving accurate ranging and successful navigation. Due to limited time, this correct mounting has not been achieved. Mechanical supports are necessary to mount the sensors successfully to the platform. To do so, the mechanical engineering part of the TSR group should design and implement such a structure on the current phase 1 robot. Experiments done afterwards can then return unambiguous results regarding the actual role and advantages of the sensors in navigation and mapping.

Other than that, the implementation with the ROS system is still work in progress. Currently, I have arranged with Debjyoti Bera and Bram van de Klundert (both from the Computer Science Department of the TU/e) to develop a ROS node for the communication with the sonar sensors. I have supplied them with the necessary information to program the nodes. So far, however, they have not been able to successfully implement the ROS node.

To improve the navigational performance, it is recommended to improve the timing and to experiment with multiple bursts: the first by reducing gain and range to increase the refresh rate, the latter by reading more echoes than currently being read. Experiments need to be carried out to obtain optimal range and gain settings and to investigate whether the use of multiple bursts is an asset in ranging doorways and small pathways. Currently, the timing is sufficient for the robot to take actions, even at maximum velocity, but it is far from optimal and can be improved.

The number of sensors can possibly be reduced by replacing two or three sensors with a single sensor on a moving platform. The moving platform can then rotate the sensor at the speed at which the sensors in the original setup are ranging. This could, however, increase the complexity of the mechanical and software system. A cost calculation for the initial system compared to the one with a moving platform could settle this proposal.

Another proposal to investigate is a different place to mount the sensors on the robot. In this report, and during the internship, it is assumed that the sensors will be mounted on top of the robot's platform, which is the obvious choice. However, other possibilities cannot be neglected. The problem with mounting the sensors on top of the robot's platform is that low objects standing near the robot cannot be detected: the FOV of the sensors is not wide enough to detect small objects at very short distances from the robot. To solve this problem, the sensors could be mounted at the lower end of the robot. To prevent detection of the floor, the sensors should then be tilted more upwards (as opposed to the situation where the sensors are mounted on top of the robot's surface); in theory, this should provide a better solution for the detection of small objects nearby. However, this proposal makes the tilting very critical. If the sensors are not tilted upwards enough, they could still detect the floor, possibly even at very short distances. If the sensors are tilted too far, objects at larger distances cannot be detected at all. Therefore, experiments in this area are inevitable as well.

After implementing the sensors in ROS, achieving optimal timing settings and obtaining all necessary information for mounting and communication of the sensors, I am confident that the sonar system is a good asset for the TSR project for successful navigation and mapping.

Acronyms

COM Component Object Model ...... 39
CW Continuous Wave ...... 21
DCT Dynamics and Control Technology ...... 10
FM Frequency Modulation ...... 21
FOV Field-Of-View ...... 7
FTDI Future Technology Devices International ...... 39
GND Ground ...... 39
IC Integrated Circuit ...... 39
I2C Inter-Integrated Circuit ...... 27
I/O Input/Output ...... 27
LED Light-Emitting Diode ...... 15
PM Phase Modulation ...... 17
RADAR Radio Detecting And Ranging ...... 21
RF Radio Frequency ...... 14
ROS Robot Operating System ...... 24
RTS/CTS Request To Send/Clear To Send ...... 39
R/W Read/Write ...... 40
SCL Serial Clock Line ...... 38
SDA Serial Data Line ...... 38
SLAM Simultaneous Localization And Mapping ...... 13
SLRF Scanning Laser Range Finder ...... 11
SONAR Sound Navigation and Ranging ...... 19
TOF Time-Of-Flight ...... 14
TSR Tele-Service Robot ...... 3
USB Universal Serial Bus ...... 9

References

[1] Active Robots. http://www.active-robots.htm.
[2] E.N. da C. Andrade. Doppler and the Doppler effect. Endeavour, XVIII(69), January 1959.
[3] J. Dixon and O. Henlich. Mobile robot navigation. http://www.fermentas.com/techinfo/nucleicacids/maplambda.htm, June 1997. Imperial College.
[4] H.R. Everett, D.E. DeMuth, and E.H. Stiz. Survey of collision avoidance and ranging sensors for mobile robots. Technical Report 1194, Naval Command Control and Ocean Surveillance Center, RDT&E Division, San Diego, California, December 1992.
[5] R. Herveille. I2C-Master core specification. Technical report, OpenCores.org, July 2003.
[6] I2CChip.com. http://www.i2cchip.com.
[7] I2CChip.com. PC to I2C adapter data sheet, 2002.
[8] G.W. Lucas. A tutorial and elementary trajectory model for the differential steering system of robot wheel actuators. http://rossum.sourceforge.net/papers/DiffSteer/DiffSteer.html, 2000-2001. The Rossum Project.
[9] J.S. Maxwell. A low cost solution to motion tracking using an array of sonar sensors and an inertial measurement unit. Master's thesis, Russ College of Engineering and Technology of Ohio University, Athens, Ohio, August 2009.
[10] M.P.W.J. van Osch. User requirements and system requirements TSR, March 2010.
[11] Philips Semiconductors. The I2C-bus specification. Version 2.1, January 2000.
[12] W.H. Stricland and R.H. King. Characteristics of ultrasonic ranging sensors in an underground environment. Report of Investigations 9452, United States Department of the Interior, Bureau of Mines, 1993.
[13] R. Volpe and R. Ivlev. The prototype safety system for robots near flight hardware. Technical report, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, May 1994.
[14] R. Volpe and R. Ivlev. A survey and experimental evaluation of proximity sensors for space robotics. Technical Report 2, Jet Propulsion Laboratory, California Institute of Technology, April 2009.
[15] S.A. Walter. The sonar ring: Obstacle detection for a mobile robot. Technical report, Computer Science Department, General Motors Research Laboratories, Warren, Michigan, May 2001.
[16] K. Wu. Realtime control of a mobile robot using Matlab. Master's thesis, Electrical Engineering and Computer Science, The University of Applied Science Hamburg, Hamburg, October 2004.

Appendix A

Datasheet Devantech SRF08

Devantech SRF08 UltraSonic Ranger

This Devantech high performance ultrasonic range finder is compact and measures an amazingly wide range from 3cm to 6m. The SRF08 interfaces to your controller via the industry standard IIC bus. This ranger is perfect for your robot, or any other projects requiring accurate ranging information. There is even a built-in light sensor on the front of the module. You can also get a nifty Lynxmotion SRF08 Housing for one ranger or for two rangers.

Specifications
Beam Pattern: see graph
Voltage: 5v
Current: 15mA Typ., 3mA Standby
Frequency: 40KHz
Maximum Range: 6 m
Minimum Range: 3 cm
Max Analogue Gain: Variable to 1025 in 32 steps
Connection: Standard IIC Bus
Light Sensor: Front facing light sensor
Timing: Fully timed echo, freeing host computer of task
Echo: Multiple echo - keeps looking after first echo
Units: Range reported in uS, mm or inches
Weight: 0.4 oz.
Size: 43mm w x 20mm d x 17mm h
Specifications subject to change without notice


Beam Pattern

Dimensions


Purchasing the SRF08
Acroname (www.acroname.com)
Price: $52.00 each
Part Number: R145-SRF08

Technical Specifications for the SRF08
(Below is from http://www.robot-electronics.co.uk/htm/srf08tech.shtml), Gerry [email protected]

Communication with the SRF08 ultrasonic rangefinder is via the I2C bus. This is available on popular controllers such as the OOPic and Stamp BS2p, as well as a wide variety of micro-controllers. To the programmer the SRF08 behaves in the same way as the ubiquitous 24xx series EEPROMs, except that the I2C address is different. The default shipped address of the SRF08 is 0xE0. It can be changed by the user to any of 16 addresses E0, E2, E4, E6, E8, EA, EC, EE, F0, F2, F4, F6, F8, FA, FC or FE; therefore up to 16 sonars can be used. In addition to the above addresses, all sonars on the I2C bus will respond to address 0 - the General Broadcast address. This means that writing a ranging command to I2C address 0 (0x00) will start all sonars ranging at the same time. This should be useful in ANN Mode (see below). The results must be read individually from each sonar's real address. We have examples of using the SRF08 module with a wide range of popular controllers.

Connections
The "Do Not Connect" pin should be left unconnected. It is actually the CPU MCLR line and is used once only in our workshop to program the PIC16F872 on-board after assembly, and has an internal pull-up resistor. The SCL and SDA lines should each have a pull-up resistor to +5v somewhere on the I2C bus. You only need one pair of resistors, not a pair for every module. They are normally located with the bus master rather than the slaves. The SRF08 is always a slave - never a bus master. If you need them, I recommend 1.8k resistors. Some modules such as the OOPic already have pull-up resistors and you do not need to add any more.

Registers
The SRF08 appears as a set of 36 registers.

Location | Read | Write
0 | Software Revision | Command Register
1 | Light Sensor | Max Gain Register (default 31)
2 | 1st Echo High Byte | Range Register (default 255)
3 | 1st Echo Low Byte | N/A
... | ... | ...
34 | 17th Echo High Byte | N/A
35 | 17th Echo Low Byte | N/A

Only locations 0, 1 and 2 can be written to. Location 0 is the command register and is used to start a ranging session. It cannot be read; reading from location 0 returns the SRF08 software revision. By default, the ranging lasts for 65mS, but can be changed by writing to the range register at location 2. If you do so, then you will likely need to change the analogue gain by writing to location 1. See the Changing Range and Analogue Gain sections below. Location 1 is the onboard light sensor. This data is updated every time a new ranging command has completed and can be read when range data is read. The next two locations, 2 and 3, are the 16bit unsigned result from the latest ranging - high byte first. The meaning of this value depends on the command used, and is either the range in inches, or the range in cm, or the flight time in uS. A value of zero indicates that no objects were detected. There are up to a further 16 results indicating echoes from more distant objects.

Commands
There are three commands to initiate a ranging (80 to 82), returning the result in inches, centimeters or microseconds. There is also an ANN (Artificial Neural Network) mode, which is described later, and a set of commands to change the I2C address.

Decimal | Hex | Action
80 | 0x50 | Ranging Mode - Result in inches
81 | 0x51 | Ranging Mode - Result in centimeters
82 | 0x52 | Ranging Mode - Result in micro-seconds
83 | 0x53 | ANN Mode - Result in inches
84 | 0x54 | ANN Mode - Result in centimeters
85 | 0x55 | ANN Mode - Result in micro-seconds
160 | 0xA0 | 1st in sequence to change I2C address
165 | 0xA5 | 3rd in sequence to change I2C address
170 | 0xAA | 2nd in sequence to change I2C address

Ranging Mode
To initiate a ranging, write one of the above commands to the command register, wait the required amount of time for completion and read as many results as you wish. The echo buffer is cleared at the start of each ranging. The first echo range is placed in locations 2,3, the second in 4,5, etc. If a location (high and low bytes) is 0, then there will be no further readings in the rest of the registers. The default and recommended time for completion of ranging is 65mS, however you can shorten this by writing to the range register before issuing a ranging command. Light sensor data at location 1 will also have been updated after a ranging command.

ANN Mode
ANN (Artificial Neural Network) mode is designed to provide the multi-echo data in a way that is easier to input to a neural network - at least I hope it is; I've not actually done it yet. ANN mode provides a 32-byte buffer (locations 4 to 35 inclusive) where each byte represents the 65536uS maximum flight time divided into 32 chunks of 2048uS each - equivalent to about 352mm of range. If an echo is received within a byte's time slot then it will be set to non-zero, otherwise it will be zero. So if an echo is received from within the first 352mm, location 4 will be non-zero. If an object is detected 3m away, location 12 will be non-zero (3000/352 = 8; 8+4 = 12). Arranging the data like this should be better for a neural net than the other formats. The input to your network should be 0 if the byte is zero and 1 if it's non-zero. I have a SOFM (Self Organizing Feature Map) in mind for the neural net, but it will hopefully be useful for any type.

Location 4    Location 5     Location 6      Location 7       Locations 8 - 35
0 - 352mm     353 - 705mm    706 - 1057mm    1058 - 1410mm    and so on

Locations 2,3 contain the range of the nearest object converted to inches, cm or uS, and are the same as for Ranging Mode.
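The bin arithmetic is easily verified; a short MATLAB sketch (the object distance is an arbitrary example):

% Each ANN-mode byte (locations 4..35) covers 2048uS of flight time,
% i.e. roughly 352mm of range.
distance_mm = 3000;                  % example: object 3m away
loc = 4 + floor(distance_mm/352);    % floor(3000/352) = 8, so location 12
fprintf('Object at %d mm sets location %d\n', distance_mm, loc);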

Checking for Completion of Ranging
You do not have to use a timer on your own controller to wait for ranging to finish. You can take advantage of the fact that the SRF08 will not respond to any I2C activity whilst ranging. Therefore, if you try to read from the SRF08 (we use the software revision number at location 0), you will get 255 (0xFF) whilst ranging. This is because the I2C data line (SDA) is pulled high if nothing is driving it. As soon as the ranging is complete, the SRF08 will again respond to the I2C bus, so just keep reading the register until it's no longer 255 (0xFF). You can then read the sonar data. Your controller can take advantage of this to perform other tasks while the SRF08 is ranging.
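A minimal MATLAB polling sketch over the USB-I2C adapter, using the software-revision frame of Table D.3 (the serial object s is assumed to be open as in the ranging sketch above):

rev = 255;
while rev == 255
    fwrite(s, uint8('SE000R01P'));  % Table D.3: read 1 byte from location 0
    rev = fread(s, 1);              % stays 0xFF while the sonar is ranging
end
% ranging complete - the range registers can now be read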

Changing the Range
The maximum range of the SRF08 is set by an internal timer. By default, this is 65mS, or the equivalent of 11 metres of range. This is much further than the 6 metres the SRF08 is actually capable of. It is possible to reduce the time the SRF08 listens for an echo, and hence the range, by writing to the range register at location 2. The range can be set in steps of about 43mm (0.043m or 1.68 inches) up to 11 metres. The range is ((Range Register x 43mm) + 43mm), so setting the Range Register to 0 (0x00) gives a maximum range of 43mm. Setting the Range Register to 1 (0x01) gives a maximum range of 86mm. More usefully, 24 (0x18) gives a range of 1 metre and 140 (0x8C) is 6 metres. Setting 255 (0xFF) gives the original 11 metres (255 x 43 + 43 is 11008mm). There are two reasons you may wish to reduce the range:

1. To get at the range information quicker.
2. To be able to fire the SRF08 at a faster rate.

If you only wish to get at the range information a bit sooner and will continue to fire the SRF08 at 65mS or slower, then all will be well. However, if you wish to fire the SRF08 at a faster rate than 65mS, you will definitely need to reduce the gain - see the next section. The range is set to maximum every time the SRF08 is powered up. If you need a different range, change it once as part of your system initialization code.
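The register arithmetic is easy to check; a small MATLAB sketch of the formula and its inversion (the desired range limit is an arbitrary example):

reg = 24;                              % the "about 1 metre" value from the text
maxRange_mm = reg*43 + 43;             % = 1075mm
desired_mm  = 6000;                    % inverting the formula for a desired limit
regNeeded   = ceil(desired_mm/43) - 1; % = 139, close to the 140 (0x8C) quoted above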

Analogue Gain
The analogue gain register sets the maximum gain of the analogue stages. To set the maximum gain, just write one of these values to the gain register at location 1. During a ranging, the analogue gain starts off at its minimum value of 94. This is increased at approx. 70uS intervals up to the maximum gain setting set by register 1. Maximum possible gain is reached after about 390mm of range. The purpose of providing a limit to the maximum gain is to allow you to fire the sonar more rapidly than 65mS. Since the ranging can be very short, a new ranging can be initiated as soon as the previous range data has been read. A potential hazard with this is that the second ranging may pick up a distant echo returning from the previous "ping", giving a false result of a close-by object when there is none. To reduce this possibility, the maximum gain can be reduced to limit the module's sensitivity to the weaker distant echo, whilst still being able to detect close-by objects. The maximum gain setting is stored only in the CPU's RAM and is initialized to maximum on power-up, so if you only want to do a ranging every 65mS or longer, you can ignore the Range and Gain Registers. Note - effective in Ranging Mode only; in ANN Mode, gain is controlled automatically.

Gain Register
Decimal   Hex    Maximum Analogue Gain
0         0x00   94
1         0x01   97
2         0x02   100
3         0x03   103
4         0x04   107
5         0x05   110
6         0x06   114
7         0x07   118
8         0x08   123
9         0x09   128
10        0x0A   133
11        0x0B   139
12        0x0C   145
13        0x0D   152
14        0x0E   159
15        0x0F   168
16        0x10   177
17        0x11   187
18        0x12   199
19        0x13   212
20        0x14   227
21        0x15   245
22        0x16   265
23        0x17   288
24        0x18   317
25        0x19   352
26        0x1A   395
27        0x1B   450
28        0x1C   524
29        0x1D   626
30        0x1E   777
31        0x1F   1025

Note that the relationship between the Gain Register setting and the actual gain is not a linear one. Also, there is no magic formula to say "use this gain setting with that range setting". It depends on the size, shape and material of the object and what else is around in the room. Try playing with different settings until you get the result you want. If you appear to get false readings, it may be echoes from previous "pings"; try going back to firing the SRF08 every 65mS or longer (slower). If you are in any doubt about the Range and Gain Registers, remember that they are automatically set by the SRF08 to their default values when it is powered up. You can ignore and forget about them, and the SRF08 will work fine, detecting objects up to 6 metres away every 65mS or slower.

Light Sensor
The SRF08 has a light sensor on-board. A reading of the light intensity is made by the SRF08 each time a ranging takes place in either Ranging or ANN Modes (the A/D conversion is actually done just before the "ping", whilst the +/- 10v generator is stabilizing). The reading increases as the brightness increases, so you will get a maximum value in bright light and a minimum value in darkness. It should get close to 2-3 in complete darkness and up to about 248 (0xF8) in bright light. The light intensity can be read from the Light Sensor Register at location 1 at the same time that you are reading the range data.

LED
The red LED is used to flash out a code for the I2C address on power-up (see below). It also gives a brief flash during the "ping" whilst ranging.

Changing the I2C Bus Address
To change the I2C address of the SRF08, you must have only one sonar on the bus. Write the 3 sequence commands in the correct order, followed by the address. Example: to change the address of a sonar currently at 0xE0 (the default shipped address) to 0xF2, write the following to address 0xE0: (0xA0, 0xAA, 0xA5, 0xF2). These commands must be sent in the correct sequence to change the I2C address; additionally, no other command may be issued in the middle of the sequence. The sequence must be sent to the command register at location 0, which means 4 separate write transactions on the I2C bus. When done, you should label the sonar with its address; however, if you do forget, just power it up without sending any commands. The SRF08 will flash its address out on the LED: one long flash followed by a number of shorter flashes indicating its address. The flashing is terminated immediately on sending a command to the SRF08.

Address
Decimal   Hex   Long Flash   Short Flashes
224       E0    1            0
226       E2    1            1
228       E4    1            2
230       E6    1            3
232       E8    1            4
234       EA    1            5
236       EC    1            6
238       EE    1            7
240       F0    1            8
242       F2    1            9
244       F4    1            10
246       F6    1            11
248       F8    1            12
250       FA    1            13
252       FC    1            14
254       FE    1            15

Take care not to set more than one sonar to the same address, or there will be a bus collision and very unpredictable results.

Current Consumption
Average current consumption measured on our prototype is around 12mA during ranging, and 3mA standby. The module will automatically go to standby mode after a ranging, whilst waiting for a new command on the I2C bus. The actual measured current profile is as follows:

Operation                                               Current   Duration
Ranging command received - power on +/- 10v generator   275mA     3uS
Stabilization                                           25mA      600uS
8 cycles of 40kHz "ping"                                40mA      200uS
Ranging                                                 11mA      65mS max
Standby                                                 3mA       indefinite

The above values are for guidance only; they are not tested on production units.

Code From http://www.robot-electronics.co.uk/htm/srf08bx24.shtml

Connecting Multiple SRF08 Sonar Modules to the BX-24


Introduction
The SRF08 modules use the I2C bus for communication. This example shows how to connect two SRF08s to the BX-24; however, it is expandable to up to 16 SRF08s on the I2C bus. The SDA (data) and SCL (clock) lines are connected to pins 13 and 14 on the BX-24. The BX-24 does not have I2C communication, so the example provided here uses a combination of bit bashing and the SHIFTIN and SHIFTOUT commands instead. The BX-24 internal 5v regulator is not suitable for powering much external circuitry. I therefore recommend you use a separate 5v regulator as shown below.

Circuit for connecting two SRF08 Sonar Modules to the BX-24

The schematic above shows 1k8 pull-up resistors on the SCL and SDA lines to Vdd. This is for good noise immunity, however any value up to 4k7 should be OK.

Changing the SRF08 I2C Address
Before you can use the SRF08s, you will need to re-program their I2C addresses from the default address of 0xE0 they are supplied with. The program below will do this. Make sure you have only one SRF08 connected when you do this. You only have to change the SRF08_NEW_ADDRESS constant in the program below to the address you want. For example, if you want the SRF08 to be at hex address 0xF2, then change SRF08_NEW_ADDRESS to read:

Const SRF08_NEW_ADDRESS As Byte = &Hf2 ' Place new address for SRF08 here

Now download the program to the BX-24; you will see rapid brief flashes on the red LED on the SRF08, indicating that the change of address was successful. If you watch the Monitor port on the PC, you will see the LDR and first range displayed on screen. It is wise to make a note of the new address on the SRF08 itself; it is easy to forget which is which otherwise. To use the example code described later on this page, set one SRF08 to address 0xE0 and the other to 0xE2. The following program can be downloaded here.

'***********************************************************
'**                                                       **
'** I2C Routines for the Basic BX-24                      **
'** to change the I2C address of the SRF08                **
'**                                                       **
'** Copyright 2002 - Devantech Ltd                        **
'** Commercial use of this software is prohibited         **
'** Private and educational use only is permitted         **
'**                                                       **
'** Written by Gerald Coe - February 2002                 **
'**                                                       **
'***********************************************************

Const SRF08_NEW_ADDRESS As Byte = &He0 ' Place new address for SRF08 here
' available addresses are: e0, e2, e4, e6, e8, ea, ec, ee
'                          f0, f2, f4, f6, f8, fa, fc, fe

Const SCL As Byte = 14      ' I2C clock - choose any pins you wish for SCL and SDA
Const SDA As Byte = 13      ' I2C data
Const GB As Byte = 0        ' I2C General Broadcast address
Const CmdReg As Byte = 0    ' SRF08 command register
Const LdrReg As Byte = 1    ' Address of Light Sensor Register in SRF08
Const RangeReg As Byte = 2  ' Address of Range Register in SRF08
Const RangeCmd As Byte = 81 ' Ranging command - 80 for inches, 81 for cm, 82 for uS

Dim I2cAck As Boolean       ' Acknowledge flag

Sub Main()
  Dim Ldr As Byte
  Dim Range As New UnsignedInteger

  Call PutPin(SCL, bxOutputHigh)
  Call PutPin(SDA, bxOutputHigh)
  Call Delay(1.0)                      ' Delay just to be sure SRF08 is out of reset

  Call I2cByteWrite(GB, CmdReg, &Ha0)  ' 1st command in address change sequence
  Call I2cByteWrite(GB, CmdReg, &Haa)  ' 2nd command in address change sequence
  Call I2cByteWrite(GB, CmdReg, &Ha5)  ' 3rd command in address change sequence
  Call I2cByteWrite(GB, CmdReg, SRF08_NEW_ADDRESS) ' The new I2C address

  ' That's the address changed, now perform SRF08 Ranging
  ' in an endless loop at the new address
  Do
    Call I2cByteWrite(SRF08_NEW_ADDRESS, CmdReg, RangeCmd) ' Start Ranging in Cm
    Call Delay(0.07)                                       ' 70mS wait for ranging to complete
    Ldr = I2cByteRead(SRF08_NEW_ADDRESS, LdrReg)           ' Read light sensor
    Range = I2cWordRead(SRF08_NEW_ADDRESS, RangeReg)       ' Read Range Register
    debug.Print "LDR = "; CStr(Ldr); ", Range = "; CStr(Range)
  Loop
End Sub

'-----------------------------------------------------------
' I2C subroutines follow
'-----------------------------------------------------------

' writes I2cData to I2cReg at I2cAddr
Sub I2cByteWrite(ByVal I2cAddr As Byte, ByVal I2cReg As Byte, ByVal I2cData As Byte)
  Call I2cStart()
  Call I2cOutByte(I2cAddr)   ' send device address
  Call I2cOutByte(I2cReg)    ' send register address
  Call I2cOutByte(I2cData)   ' send the data
  Call I2cStop()
End Sub

Function I2CByteRead(ByVal I2cAddr As Byte, ByVal I2cReg As Byte) As Byte
  Call I2cStart()
  Call I2cOutByte(I2cAddr)   ' send device address
  Call I2cOutByte(I2cReg)    ' send register address
  Call I2cStart()            ' repeated start
  I2cAddr = I2cAddr+1
  Call I2cOutByte(I2cAddr)   ' send device address with read set
  I2cAck = False             ' setup to send Nak
  I2cByteRead = I2cInByte()  ' get data byte with Nak
  Call I2cStop()
End Function

Function I2CWordRead(ByVal I2cAddr As Byte, ByVal I2cReg As Byte) As UnsignedInteger
  Set I2CWordRead = New UnsignedInteger
  Call I2cStart()
  Call I2cOutByte(I2cAddr)   ' send device address
  Call I2cOutByte(I2cReg)    ' send register address
  Call I2cStart()            ' repeated start
  I2cAddr = I2cAddr+1
  Call I2cOutByte(I2cAddr)   ' send device address with read set
  I2cAck = True              ' setup to send Ack
  I2cWordRead = CuInt(I2cInByte()*256)
  I2cAck = False             ' setup to send Nak
  I2cWordRead = I2cWordRead + CuInt(I2cInByte())
  Call I2cStop()
End Function

Sub I2cOutByte(I2cData As Byte)
  Call ShiftOut(SDA, SCL, 8, I2cData)  ' shift data out
  Call PutPin(SDA, bxInputTristate)    ' turn SDA around
  Call PutPin(SCL, bxOutputHigh)       ' and clock in the ack' bit
  Call PutPin(SCL, bxOutputLow)
End Sub

Function I2cInByte() As Byte
  I2cInByte = ShiftIn(SDA, SCL, 8)
  If I2cAck = True Then
    Call PutPin(SDA, bxOutputLow)
  Else
    Call PutPin(SDA, bxOutputHigh)
  End If
  Call PutPin(SCL, bxOutputHigh)       ' clock out the ack' bit
  Call PutPin(SCL, bxOutputLow)
End Function

Sub I2cStart()  ' I2C start bit sequence
  Call PutPin(SDA, bxOutputHigh)
  Call PutPin(SCL, bxOutputHigh)
  Call PutPin(SDA, bxOutputLow)
  Call PutPin(SCL, bxOutputLow)
End Sub

Sub I2cStop()   ' I2C stop bit sequence
  Call PutPin(SDA, bxOutputLow)
  Call PutPin(SCL, bxOutputHigh)
  Call PutPin(SDA, bxOutputHigh)
End Sub

Displaying Light Sensor and Range readings in a PC Debug window
Now that you have your SRF08s re-programmed to their new I2C addresses (0xE0 and 0xE2), the following sample code will display the light sensor reading and the 1st range reading, for each SRF08, in the Monitor port window on the PC. The sample code below can be downloaded here.

'***********************************************************
'**                                                       **
'** I2C Routines for the BX-24                            **
'** to demonstrate the use of multiple SRF08's            **
'**                                                       **
'** Copyright 2002 - Devantech Ltd                        **
'** Commercial use of this software is prohibited         **
'** Private and educational use only is permitted         **
'**                                                       **
'** Written by Gerald Coe - February 2002                 **
'**                                                       **
'***********************************************************

Const SCL As Byte = 14      ' I2C clock - choose any pins you wish for SCL and SDA
Const SDA As Byte = 13      ' I2C data
Const CmdReg As Byte = 0    ' SRF08 command register
Const LdrReg As Byte = 1    ' Address of Light Sensor Register in SRF08
Const RangeReg As Byte = 2  ' Address of Range Register in SRF08
Const RangeCmd As Byte = 81 ' Ranging command - 80 for inches, 81 for cm, 82 for uS

' Note that SRF08's must have been previously set to these addresses
Const Sonar1 As Byte = &He0 ' 1st SRF08 at I2C address 0xE0
Const Sonar2 As Byte = &He2 ' 2nd SRF08 at I2C address 0xE2

Dim I2cAck As Boolean       ' Acknowledge flag

Sub Main()
  Dim Ldr1 As Byte
  Dim Range1 As New UnsignedInteger
  Dim Ldr2 As Byte
  Dim Range2 As New UnsignedInteger

  Call PutPin(SCL, bxOutputHigh)
  Call PutPin(SDA, bxOutputHigh)

  Do
    ' 1st SRF08 Ranger
    Call I2cByteWrite(Sonar1, CmdReg, RangeCmd)  ' Start Ranging in Cm
    Call Delay(0.07)                             ' 70mS wait for ranging to complete
    Ldr1 = I2cByteRead(Sonar1, LdrReg)           ' Read light sensor
    Range1 = I2cWordRead(Sonar1, RangeReg)       ' Read Range Register

    ' 2nd SRF08 Ranger
    Call I2cByteWrite(Sonar2, CmdReg, RangeCmd)  ' Start Ranging in Cm
    Call Delay(0.07)                             ' 70mS wait for ranging to complete
    Ldr2 = I2cByteRead(Sonar2, LdrReg)           ' Read light sensor
    Range2 = I2cWordRead(Sonar2, RangeReg)       ' Read Range Register

    debug.Print "LDR1 = "; CStr(Ldr1); ", Range1 = "; CStr(Range1); _
                " LDR2 = "; CStr(Ldr2); ", Range2 = "; CStr(Range2)
  Loop
End Sub

'-----------------------------------------------------------
' I2C subroutines follow
'-----------------------------------------------------------

' writes I2cData to I2cReg at I2cAddr
Sub I2cByteWrite(ByVal I2cAddr As Byte, ByVal I2cReg As Byte, ByVal I2cData As Byte)
  Call I2cStart()
  Call I2cOutByte(I2cAddr)   ' send device address
  Call I2cOutByte(I2cReg)    ' send register address
  Call I2cOutByte(I2cData)   ' send the data
  Call I2cStop()
End Sub

Function I2CByteRead(ByVal I2cAddr As Byte, ByVal I2cReg As Byte) As Byte
  Call I2cStart()
  Call I2cOutByte(I2cAddr)   ' send device address
  Call I2cOutByte(I2cReg)    ' send register address
  Call I2cStart()            ' repeated start
  I2cAddr = I2cAddr+1
  Call I2cOutByte(I2cAddr)   ' send device address with read set
  I2cAck = False             ' setup to send Nak
  I2cByteRead = I2cInByte()  ' get data byte with Nak
  Call I2cStop()
End Function

Function I2CWordRead(ByVal I2cAddr As Byte, ByVal I2cReg As Byte) As UnsignedInteger
  Set I2CWordRead = New UnsignedInteger
  Call I2cStart()
  Call I2cOutByte(I2cAddr)   ' send device address
  Call I2cOutByte(I2cReg)    ' send register address
  Call I2cStart()            ' repeated start
  I2cAddr = I2cAddr+1
  Call I2cOutByte(I2cAddr)   ' send device address with read set
  I2cAck = True              ' setup to send Ack
  I2cWordRead = CuInt(I2cInByte()*256)
  I2cAck = False             ' setup to send Nak
  I2cWordRead = I2cWordRead + CuInt(I2cInByte())
  Call I2cStop()
End Function

Sub I2cOutByte(I2cData As Byte)
  Call ShiftOut(SDA, SCL, 8, I2cData)  ' shift data out
  Call PutPin(SDA, bxInputTristate)    ' turn SDA around
  Call PutPin(SCL, bxOutputHigh)       ' and clock in the ack' bit
  Call PutPin(SCL, bxOutputLow)
End Sub

Function I2cInByte() As Byte
  I2cInByte = ShiftIn(SDA, SCL, 8)
  If I2cAck = True Then
    Call PutPin(SDA, bxOutputLow)
  Else
    Call PutPin(SDA, bxOutputHigh)
  End If
  Call PutPin(SCL, bxOutputHigh)       ' clock out the ack' bit
  Call PutPin(SCL, bxOutputLow)
End Function

Sub I2cStart()  ' I2C start bit sequence
  Call PutPin(SDA, bxOutputHigh)
  Call PutPin(SCL, bxOutputHigh)
  Call PutPin(SDA, bxOutputLow)
  Call PutPin(SCL, bxOutputLow)
End Sub

Sub I2cStop()   ' I2C stop bit sequence
  Call PutPin(SDA, bxOutputLow)
  Call PutPin(SCL, bxOutputHigh)
  Call PutPin(SDA, bxOutputHigh)
End Sub

You can find more information on the SRF08 here

SRF08 Ultrasonic Range Finder - a little history

The SRF08 is an evolutionary step from the SRF04, developed to improve on the following features of the SRF04. The key points are:

1. The maximum range of 3m can be limiting in some situations.
2. The 36mS timeout + 10mS recharge is rather long - equivalent to almost 8m on a 3m product.
3. The SRF04 requires 2 I/O pins per sonar - 32 I/O lines for a 16-sonar system.
4. The user's host processor is required to time the returning echo.
5. The 50mA maximum current is too high - 800mA for 16 sonars.
6. Only a single returning echo is possible.
7. The SRF04 can't see the light (read on).

The 3m limit of the SRF04 is imposed by the need not to have a gain so high that the cross-coupling between transmit and receive transducers causes the op-amps to saturate at close range. If they did, then the system could not tell the difference between the cross-coupling and a legitimate returning echo. The SRF08 uses a digital pot to vary the gain as the range increases. This allows a higher overall gain to be set, and consequently better range. The typical range we are seeing on the prototype is 6m, and we have had it up to 11m for a large object. This is too sensitive, because it detects small close-by anomalies in the floor that the robot really ought to ignore. The gain was therefore deliberately reduced to around 6m. The 36mS timeout of the SRF04 was imposed because the PIC12C508 processor used has only a single timer, and this is used for tone detection of the returning echo. The watchdog timer is used to time out the ranging. This could only be set in increments of 18mS. Whilst 18mS is just enough -

about 3m range - it is a "typical" value only and not guaranteed, so the real range could be less depending on ambient temperature and chip tolerances. A further 10mS is specified in order to recharge the +/- 10v supplies for the op-amp and comparator. The max232 IC is switched off during echo timing to reduce noise in the op-amps. With the SRF08, the analog circuit has been changed to a single 5v supply, so the max232 (actually an ST232) does not need to charge up a 22uF capacitor, only the 100n's. Recharge time now drops to just 600uS and is taken care of by the processor automatically when a new reading is requested. A change of processor from the PIC12C508 to the PIC16F872 means more timers are available, and the SRF08 is not stuck with the 36mS watchdog timer. However, one of the problems with terminating the ranging early is that the in-flight "ping" does not know this. It quite happily bounces off a far wall and returns. Now if it happens to return just after you have started a new ranging, the sonar will pick up this earlier "ping" and think there is an object much closer than there really is. The SRF08 allows the maximum gain to be limited to reduce this possibility. The number of I/O lines required by multiple sonars has been an issue with some users. There is also a problem which has been identified with the Stamp, which does not treat all I/O lines equally when timing. When using 16 sonars, 32 I/O lines are required. This can be reduced to 17 by gating the 16 echo pulse outputs together with 16 diodes. A further reduction to 6 I/O lines can be achieved by using a 4 to 16 line decoder such as the CD4514B. This involves the user building additional circuitry. The SRF08 uses the I2C interface, so all 16 sonars can be controlled using just 2 I/O lines. The I2C bus interface is available on popular controllers such as the OOPic, and of course cheap processors such as many of the PIC family. On the SRF04, the user's host processor is required to time the returning echo. This has been an issue when using the Stamp, as it does not treat all I/O lines equally. This is an internal problem with the Stamp and was discovered by Jim Fry of Lynxmotion. The SRF08 does its own internal timing and sends you the result. The 50mA max. current required by the SRF04, whilst already far better than the 150mA (2.5A peak) of the Polaroid units, has been further reduced to 15mA nominal and around 3mA in standby. The SRF08 automatically goes into standby when it has completed each ranging, and powers up again when it receives the next command. Because of the way the SRF04 works, only a single echo can be received. After this, the module powers up its +/- 10v generators again, ready for the next trigger pulse. With the SRF08, multiple echoes can be received. A buffer stores the first 16 echoes received. The idea is to be able to see through open doorways where a standard sonar would just see the door frame. Finally, to make the SRF08 even more useful, I included a light sensor. This is readable over the I2C bus just as the sonar data is.

Appendix B

Datasheet Parallax PING)))



PING)))™ Ultrasonic Distance Sensor (#28015)
The Parallax PING))) ultrasonic distance sensor provides precise, non-contact distance measurements from about 2 cm (0.8 inches) to 3 meters (3.3 yards). It is very easy to connect to microcontrollers such as the BASIC Stamp®, SX or Propeller chip, requiring only one I/O pin.

The PING))) sensor works by transmitting an ultrasonic (well above human hearing range) burst and providing an output pulse that corresponds to the time required for the burst echo to return to the sensor. By measuring the echo pulse width, the distance to target can easily be calculated.
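Converting the echo pulse width to a distance is a one-line calculation: the pulse spans the round trip, so distance = pulse width x speed of sound / 2. A MATLAB sketch at room temperature (the measured pulse width is an arbitrary example):

c_air = 343;                           % speed of sound in m/s, approx. at 20 degrees C
tEcho_us = 11600;                      % example echo pulse width in microseconds
distance_m = (tEcho_us*1e-6)*c_air/2;  % about 1.99 m to the target
fprintf('Target at %.2f m\n', distance_m);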

Features
- Range: 2 cm to 3 m (0.8 in to 3.3 yd)
- Burst indicator LED shows sensor activity
- Bidirectional TTL pulse interface on a single I/O pin can communicate with 5 V TTL or 3.3 V CMOS microcontrollers
- Input trigger: positive TTL pulse, 2 µs min, 5 µs typ.
- Echo pulse: positive TTL pulse, 115 µs minimum to 18.5 ms maximum
- RoHS Compliant

Key Specifications
- Supply voltage: +5 VDC
- Supply current: 30 mA typ; 35 mA max
- Communication: positive TTL pulse
- Package: 3-pin SIP, 0.1" spacing (ground, power, signal)
- Operating temperature: 0 - 70 °C
- Size: 22 mm H x 46 mm W x 16 mm D (0.84 in x 1.8 in x 0.6 in)
- Weight: 9 g (0.32 oz)

Pin Definitions
GND   Ground (Vss)
5 V   5 VDC (Vdd)
SIG   Signal (I/O pin)

The PING))) sensor has a male 3-pin header used to supply ground, power (+5 VDC) and signal. The header may be plugged directly into a solderless breadboard, or into a standard 3-wire extension cable (Parallax part #805-00012).

Dimensions

Communication Protocol The PING))) sensor detects objects by emitting a short ultrasonic burst and then "listening" for the echo. Under control of a host microcontroller (trigger pulse), the sensor emits a short 40 kHz (ultrasonic) burst. This burst travels through the air, hits an object and then bounces back to the sensor. The PING))) sensor provides an output pulse to the host that will terminate when the echo is detected, hence the width of this pulse corresponds to the distance to the target.

Signal            Description                     Name       Value
Host Device       Input Trigger Pulse             tOUT       2 µs (min), 5 µs typical
PING))) Sensor    Echo Holdoff                    tHOLDOFF   750 µs
PING))) Sensor    Burst Frequency                 tBURST     200 µs @ 40 kHz
PING))) Sensor    Echo Return Pulse Minimum       tIN-MIN    115 µs
PING))) Sensor    Echo Return Pulse Maximum       tIN-MAX    18.5 ms
PING))) Sensor    Delay before next measurement              200 µs

Practical Considerations for Use

Object Positioning
The PING))) sensor cannot accurately measure the distance to an object that: a) is more than 3 meters away, b) has its reflective surface at a shallow angle so that sound will not be reflected back towards the sensor, or c) is too small to reflect enough sound back to the sensor. In addition, if your PING))) sensor is mounted low on your device, you may detect sound reflecting off of the floor.

Target Object Material
In addition, objects that absorb sound or have a soft or irregular surface, such as a stuffed animal, may not reflect enough sound to be detected accurately. The PING))) sensor will detect the surface of water; however, it is not rated for outdoor use or continual use in a wet environment. Condensation on its transducers may affect performance and lifespan of the device.

Air Temperature
Temperature has an effect on the speed of sound in air that is measurable by the PING))) sensor. If the temperature (°C) is known, the formula is:

Cair = 331.5 + (0.6 x TC)  m/s

The percent error over the sensor's operating range of 0 to 70 °C is significant, on the order of 11 to 12 percent. The use of conversion constants to account for air temperature may be incorporated into your program (as is the case in the example BS2 program given in the Example Programs section below). Percent error and conversion constant calculations are introduced in Chapter 2 of Smart Sensors and Applications, a Stamps in Class text available for download from the 28029 product page at www.parallax.com.
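A MATLAB sketch of a temperature-compensated conversion using the formula above (the temperature and echo time are arbitrary examples):

Tc = 25;                               % air temperature in degrees C
c_air = 331.5 + 0.6*Tc;                % = 346.5 m/s at 25 degrees C
tEcho_us = 10000;                      % round-trip echo time in microseconds
distance_m = (tEcho_us*1e-6)*c_air/2;  % one-way distance to the target
fprintf('Compensated distance: %.3f m\n', distance_m);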

Test Data
The test data on the following pages is based on the PING))) sensor, tested in the Parallax lab, while connected to a BASIC Stamp microcontroller module. The test surface was a linoleum floor, so the sensor was elevated to minimize floor reflections in the data. All tests were conducted at room temperature, indoors, in a protected environment. The target was always centered at the same elevation as the PING))) sensor.

Test 1
Sensor Elevation: 40 in. (101.6 cm)
Target: 3.5 in. (8.9 cm) diameter cylinder, 4 ft. (121.9 cm) tall - vertical orientation

Test 2
Sensor Elevation: 40 in. (101.6 cm)
Target: 12 in. x 12 in. (30.5 cm x 30.5 cm) cardboard, mounted on 1 in. (2.5 cm) pole
Target positioned parallel to backplane of sensor

Example Programs and Applications

BASIC Stamp 2
This circuit allows you to quickly connect your PING))) sensor to a BASIC Stamp® 2 via the Board of Education® breadboard area. The PING))) module's GND pin connects to Vss, the 5 V pin connects to Vdd, and the SIG pin connects to I/O pin P15. This circuit will work with the example BASIC Stamp program listed below.

Extension Cable and Port Cautions for the Board of Education

If you are connecting your PING))) sensor to a Board of Education platform using an extension cable, follow these steps:

1. When plugging the cable onto the PING))) sensor, connect Black to GND, Red to 5 V, and White to SIG.
2. Check to see if your Board of Education servo ports have a jumper, as shown at right.
3. If your Board of Education servo ports have a jumper, set it to Vdd as shown. Then plug the cable into the port, matching the wire color to the labels next to the port.
4. If your Board of Education servo ports do not have a jumper, do not use them with the PING))) sensor. These ports only provide Vin, not Vdd, and this may damage your PING))) sensor. Go to the next step.
5. Connect the cable directly to the breadboard with a 3-pin header as shown above. Then, use jumper wires to connect Black to Vss, Red to Vdd, and White to I/O pin P15.

Board of Education Servo Port Jumper, Set to Vdd

Example Program: PingMeasureCmAndIn.bs2

This example BS2 program is an excerpt from Chapter 2 of the Stamps in Class text Smart Sensors and Applications. Additional PBASIC programs, one for the BS1 and another that runs on any model of BASIC Stamp 2 (BS2, BS2e, BS2sx, BS2p, BS2pe, BS2px), can be downloaded from the 28015 product page.

' Smart Sensors and Applications - PingMeasureCmAndIn.bs2
' Measure distance with Ping))) sensor and display in both in & cm

' {$STAMP BS2}
' {$PBASIC 2.5}

' Conversion constants for room temperature measurements.
CmConstant     CON     2260
InConstant     CON     890

cmDistance     VAR     Word
inDistance     VAR     Word
time           VAR     Word

DO

  PULSOUT 15, 5
  PULSIN 15, 1, time

  cmDistance = CmConstant ** time
  inDistance = InConstant ** time

  DEBUG HOME, DEC3 cmDistance, " cm"
  DEBUG CR, DEC3 inDistance, " in"

  PAUSE 100

LOOP

Propeller Microcontroller

{{
***************************************
*  Ping))) Object V1.1                *
*  (C) 2006 Parallax, Inc.            *
*  Author: Chris Savage & Jeff Martin *
*  Started: 05-08-2006                *
***************************************

Interface to Ping))) sensor and measure its ultrasonic travel time. Measurements can be in units of time or distance. Each method requires one parameter, Pin, that is the I/O pin that is connected to the Ping)))'s signal line.

┌───────────────────┐ │┌───┐ ┌───┐│ Connection To Propeller ││ ‣ │ PING))) │ ‣ ││ Remember PING))) Requires │└───┘ └───┘│ +5V Power Supply │ GND +5V SIG │ └─────┬───┬───┬─────┘ │ │  1K  └┘ └ Pin

------ REVISION HISTORY ------
v1.1 - Updated 03/20/2007 to change SIG resistor from 10K to 1K
}}

CON
  TO_IN = 73_746   ' Inches
  TO_CM = 29_034   ' Centimeters

PUB Ticks(Pin) : Microseconds | cnt1, cnt2 ''Return Ping)))'s one-way ultrasonic travel time in microseconds

  outa[Pin]~                     ' Clear I/O Pin
  dira[Pin]~~                    ' Make Pin Output
  outa[Pin]~~                    ' Set I/O Pin
  outa[Pin]~                     ' Clear I/O Pin (> 2 μs pulse)
  dira[Pin]~                     ' Make I/O Pin Input
  waitpne(0, |< Pin, 0)          ' Wait For Pin To Go HIGH
  cnt1 := cnt                    ' Store Current Counter Value
  waitpeq(0, |< Pin, 0)          ' Wait For Pin To Go LOW
  cnt2 := cnt                    ' Store New Counter Value
  Microseconds := (||(cnt1 - cnt2) / (clkfreq / 1_000_000)) >> 1  ' Return Time in μs

PUB Inches(Pin) : Distance ''Measure object distance in inches

Distance := Ticks(Pin) * 1_000 / TO_IN ' Distance In Inches

PUB Centimeters(Pin) : Distance ''Measure object distance in centimeters

Distance := Millimeters(Pin) / 10 ' Distance In Centimeters

PUB Millimeters(Pin) : Distance ''Measure object distance in millimeters

Distance := Ticks(Pin) * 10_000 / TO_CM ' Distance In Millimeters

The ping.spin object is used in an example project with the Parallax 4 x 20 Serial LCD (#27979) to display distance measurements. The complete Project Archive can be downloaded from the Propeller Object Exchange at http://obex.parallax.com.

─────────────────────────────────────── Parallax Propeller Chip Project Archive ───────────────────────────────────────

Project : "ping_demo"

Archived : Tuesday, December 18, 2007 at 3:29:46 PM

Tool : Propeller Tool version 1.05.8

ping_demo.spin
 ├── Debug_Lcd.spin
 │    ├── Serial_Lcd.spin
 │    │    └── Simple_Serial.spin
 │    └── Simple_Numbers.spin
 └── ping.spin

Javelin Stamp Microcontroller
This class file implements several methods for using the PING))) sensor with the Javelin Stamp module.

package stamp.peripheral.sensor;

import stamp.core.*;

/**
 * This class provides an interface to the Parallax PING))) ultrasonic
 * range finder module.
 *
 * Usage:
 *   Ping range = new Ping(CPU.pin0);   // trigger and echo on P0
 *
 * Detailed documentation for the PING))) Sensor can be found at:
 * http://www.parallax.com/detail.asp?product_id=28015
 *
 * @version 1.0 03 FEB 2005
 */
public final class Ping {

  private int ioPin;

  /**
   * Creates PING))) range finder object
   *
   * @param ioPin PING))) trigger and echo return pin
   */
  public Ping (int ioPin) {
    this.ioPin = ioPin;
  }

  /**
   * Returns raw distance value from the PING))) sensor.
   *
   * @return Raw distance value from PING)))
   */
  public int getRaw() {

    int echoRaw = 0;

    CPU.writePin(ioPin, false);                 // setup for high-going pulse
    CPU.pulseOut(1, ioPin);                     // send trigger pulse
    echoRaw = CPU.pulseIn(2171, ioPin, true);   // measure echo return

    // return echo pulse if in range; zero if out-of-range
    return (echoRaw < 2131) ? echoRaw : 0;
  }

  /*
   * The PING))) returns a pulse width of 73.746 uS per inch. Since the
   * Javelin pulseIn() round-trip echo time is in 8.68 uS units, this is the
   * same as a one-way trip in 4.34 uS units. Dividing 73.746 by 4.34 we
   * get a time-per-inch conversion factor of 16.9922 (x 0.058851).
   *
   * Values to derive conversion factors are selected to prevent roll-over
   * past the 15-bit positive values of Javelin Stamp integers.
   */

  /**
   * @return PING))) distance value in inches
   */
  public int getIn() {
    return (getRaw() * 3 / 51);   // raw * 0.058824
  }

  /**
   * @return PING))) distance value in tenths of inches
   */
  public int getIn10() {
    return (getRaw() * 3 / 5);    // raw / 1.6667
  }

  /*
   * The PING))) returns a pulse width of 29.033 uS per centimeter. As the
   * Javelin pulseIn() round-trip echo time is in 8.68 uS units, this is the
   * same as a one-way trip in 4.34 uS units. Dividing 29.033 by 4.34 we
   * get a time-per-centimeter conversion factor of 6.6896.
   *
   * Values to derive conversion factors are selected to prevent roll-over
   * past the 15-bit positive values of Javelin Stamp integers.
   */

  /**
   * @return PING))) distance value in centimeters
   */
  public int getCm() {
    return (getRaw() * 3 / 20);   // raw / 6.6667
  }

  /**
   * @return PING))) distance value in millimeters
   */
  public int getMm() {
    return (getRaw() * 3 / 2);    // raw / 0.6667
  }
}

This simple demo illustrates the use of the PING))) ultrasonic range finder class with the Javelin Stamp:

import stamp.core.*;
import stamp.peripheral.sensor.Ping;

public class testPing {

  public static final char HOME = 0x01;

  public static void main() {

    Ping range = new Ping(CPU.pin0);
    StringBuffer msg = new StringBuffer();
    int distance;

    while (true) {
      // measure distance to target in inches
      distance = range.getIn();

      // create and display measurement message
      msg.clear();
      msg.append(HOME);
      msg.append(distance);
      msg.append(" \" \n");
      System.out.print(msg.toString());

      // wait 0.5 seconds between readings
      CPU.delay(5000);
    }
  }
}

Resources and Downloads
You can find additional resources for the PING))) sensor by searching the following product pages at www.parallax.com:
- Smart Sensors and Applications (a Stamps in Class text), #28029
- PING))) Mounting Bracket Kit - a servo-driven mount designed to attach to a Boe-Bot robot, #570-28015
- Extension cable with 3-pin header, #805-00011 (10-in.) or #805-00012 (14-in.)

A video of a Boe-Bot robot using the PING))) sensor to scan its surroundings then drive to the closest object can be found under Resources > Video Library > Boe-Bot Robot Video Gallery.

Appendix C

Additional Measurement Tables and Results

Measurement   Real Distance (cm)   Measured Distance (cm)   Error
1             12                   12                       0.00%
2             22                   21                       4.55%
3             33                   34                       -3.03%
4             38                   37                       2.63%
5             45                   46                       -2.22%
6             55                   56                       -1.82%
7             58                   59                       -1.72%
8             69                   68                       1.45%
9             71                   71                       0.00%
10            80                   80                       0.00%
11            89                   88                       1.12%
12            90                   90                       0.00%
13            96                   97                       -1.04%
14            104                  105                      -0.96%
15            113                  113                      0.00%
16            121                  121                      0.00%
17            129                  129/102
18            132                  108                      18.18%
19            133                  120                      9.77%
20            144                  130/145
21            151                  151/130
22            163                  163                      0.00%
23            169                  107                      36.69%
24            200                  138, 204, 145

Table C.1: Measurement results of ranging a wall directly in front of the sensor


Measurement   Distance D (cm)   Deviation L (cm)   Range reading R (cm)
1             55                9                  54
2             55                17                 56
3             55                23                 57
4             55                31                 61
5             55                38                 66
6             55                44                 70
7             55                50                 80
8             55                51                 74
9             55                52                 171
10            55                53                 170
11            56                53                 173/250
12            57                53                 259
13            58                53                 76
14            59                53                 83
15            61                53                 79
16            73                8                  71
17            73                16                 73
18            73                23                 75
19            73                35                 80
20            73                41                 82
21            73                44                 84
22            73                57                 86
23            73                66                 85
24            73                67                 86
25            73                68                 87
26            73                69                 87/163
27            73                70                 133/162
28            73                74                 159
29            73                76                 158
30            74                70                 163
31            75                70                 88

Table C.2: Measurement results of angular ranging

Measurement   Distance sensor    Distance burst    Angle (deg)   Range (cm)   Angle resulting from    Operational
              to wall (cm)       to wall (cm)                                 measurement (deg)       angle (deg)
1             32                 21                49.0          25           32.9                    16.1
2             38                 23                52.8          32           44.0                    8.7
3             48                 23                61.4          38           52.8                    8.6
4             54                 33                52.3          45           42.8                    9.5
5             48                 37                39.6          39           18.4                    21.1
6             105                36                69.9          51           45.1                    24.8
7             120                36                72.5          52           46.2                    26.4
8             29                 25                30.5          26           15.9                    14.5

Table C.3: First measurement results of angular ranging to wall

Appendix D

USB-I2C Devantech SRF-08 commands

          Start   Device      Device Internal   Number of   Data bytes   Stop
                  Addr+R/W    Address           Databytes
Byte      "S"     0xE0        0x00              0x01        0x51         "P"
To Send   0x53    0x45 0x30   0x30 0x30         0x30 0x31   0x35 0x31    0x50

Table D.1: Start Ranging (cm)

          Start   Device      Device Internal   Re-start   Device      Number of   Stop
                  Addr+R/W    Address                      Addr+R/W    Databytes
Byte      "S"     0xE0        0x02              "S"        0xE1        0x01        "P"
To Send   0x53    0x45 0x30   0x30 0x32         0x53       0x45 0x31   0x30 0x31   0x50

Table D.2: Request ranging information first echo

          Start   Device      Device Internal   Read   Read one   Stop
                  Addr+R/W    Address                  byte
Byte      "S"     0xE0        0x00              "R"    0x01       "P"
To Send   0x53    0x45 0x30   0x30 0x30         0x52   0x30 0x31  0x50

Table D.3: Request software revision

To change the address of the sensor device to, for example, E6, only the device that needs its address changed should be connected to the I2C bus. Then, the commands {A0 AA A5 E6} should be sent sequentially as follows:

          Start   Device      Device Internal   Data byte   Stop
                  Addr+R/W    Address
Byte      "S"     0x00        0x00              0xA0        "P"
To Send   0x53    0x30 0x30   0x30 0x30         0x41 0x30   0x50

Table D.4: First command to change address


          Start   Device      Device Internal   Data byte   Stop
                  Addr+R/W    Address
Byte      "S"     0x00        0x00              0xAA        "P"
To Send   0x53    0x30 0x30   0x30 0x30         0x41 0x41   0x50

Table D.5: Second command to change address

          Start   Device      Device Internal   Data byte   Stop
                  Addr+R/W    Address
Byte      "S"     0x00        0x00              0xA5        "P"
To Send   0x53    0x30 0x30   0x30 0x30         0x41 0x35   0x50

Table D.6: Third command to change address

          Start   Device      Device Internal   Data byte   Stop
                  Addr+R/W    Address
Byte      "S"     0x00        0x00              0xE6        "P"
To Send   0x53    0x30 0x30   0x30 0x30         0x45 0x36   0x50

Table D.7: Fourth command to change address
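For reference, the four frames of Tables D.4-D.7 can be sent from MATLAB through the USB-I2C adapter as plain ASCII strings; the COM port and baud rate below are assumptions for illustration:

s = serial('COM1', 'BaudRate', 19200);
fopen(s);
fwrite(s, uint8('S0000A0P'));  % Table D.4: 0xA0, first command in the sequence
fwrite(s, uint8('S0000AAP'));  % Table D.5: 0xAA, second command in the sequence
fwrite(s, uint8('S0000A5P'));  % Table D.6: 0xA5, third command in the sequence
fwrite(s, uint8('S0000E6P'));  % Table D.7: 0xE6, the new device address
fclose(s); delete(s);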

Appendix E

Matlab M-Files

E.1 Simulations

E.1.1 Calculate position and orientation of the sensors

First section: calculation (for angles of 10 degrees and smaller):

1 %                                                                         %
2 % Created by: N.F. Jansen                                                 %
3 % On: 03-10-2010                                                          %
4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
5
6 clear all
7 close all
8 clc
9
10 Lsurface = 80;
11 Wsurface = 37;
12
13 x0 = Wsurface/2;
14 y0 = Lsurface/2;
15
16 thetasurface = 0*2*pi/360;
17 ntheta = 14*2*pi/360;
18 Ls = 140*cos(ntheta);
19 range = 50;
20
21 % To accurately calculate the locations and orientations, the angle of
22 % operation is split up in this if-elseif-else loop
23 if ntheta <= 10*2*pi/360
24 Nsensors = 19; % number of sensors
25 % First x,y positions
26 S_{1} = {0,40};
27 S_{2} = {-10,40};
28 S_{3} = {10,40};
29 S_{4} = {-18,40};
30 S_{5} = {18,40};
31 S_{6} = {-18,25};
32 S_{7} = {18,25};
33 S_{8} = {-18, 15};
34 S_{9} = {18, 15};
35 S_{10} = {-18, 5};


36 S_{11} = {18, 5};
37 S_{12} = {-18, -5};
38 S_{13} = {18, -5};
39 S_{14} = {-18, -15};
40 S_{15} = {18, -15};
41 S_{16} = {-18, -25};
42 S_{17} = {18, -25};
43 S_{18} = {-18, -33};
44 S_{19} = {18, -33};
45 S_{20} = {-18, -40};
46 S_{21} = {18, -40};
47
48 % Calculate the orientation of sensors 2,3
49 S_{1}{3} = pi/2;
50 P1P0 = range/tan(thetasurface + (pi/2-ntheta));
51 PsP1 = sqrt((S_{1}{2}-S_{2}{2})^2+(S_{1}{1}-S_{2}{1})^2)-P1P0;
52 PsP2 = sqrt(PsP1^2+range^2);
53 S_{2}{3} = acos((PsP1^2+PsP2^2-range^2)/(2*PsP1*PsP2))+ntheta+thetasurface;
54 S_{3}{3} = pi-thetasurface-acos((PsP1^2+PsP2^2-range^2)/(2*PsP1*PsP2))-ntheta;
55
56 % Calculate the orientation of sensors 4,5
57 Ls4s2 = abs(S_{4}{1})-abs(S_{2}{1});
58 Ls2r = range/(cos(pi/2-(pi-(S_{2}{3}+ntheta))));
59 Ls3 = sqrt(Ls4s2^2+Ls2r^2-2*Ls4s2*Ls2r*cos(pi-(S_{2}{3}+ntheta)));
60 s4alpha = acos((Ls4s2^2+Ls3^2-Ls2r^2)/(2*Ls4s2*Ls3));
61 S_{4}{3} = s4alpha + ntheta;
62 S_{5}{3} = pi-S_{4}{3};
63
64 % Calculate the orientation of sensors 6,7
65 Ls6s4 = S_{4}{2}-S_{6}{2};
66 Ls4r = (range-0.15*range)/sin(pi/2-(pi-(S_{4}{3}+ntheta)));
67 Ls6 = sqrt(Ls6s4^2+Ls4r^2-2*Ls6s4*Ls4r*cos(3*pi/2-(S_{4}{3}+ntheta)));
68 s6alpha = acos((Ls6s4^2+Ls6^2-Ls4r^2)/(2*Ls6s4*Ls6));
69 S_{6}{3} = pi/2+s6alpha+ntheta;
70 S_{7}{3} = pi-S_{6}{3};
71
72 % Calculate the orientation of sensors 8,9
73 s8alpha = S_{6}{3}+ntheta-pi/2;
74 Ls8 = range/tan(s8alpha);
75 S_{8}{3} = pi/2+atan(range/abs(abs((abs(S_{6}{2})-abs(S_{8}{2}))+Ls8)))+ntheta;
76 S_{9}{3} = pi-S_{8}{3};
77
78 % Calculate the orientation of sensors 10,11
79 s10alpha = S_{8}{3}+ntheta-pi/2;
80 Ls10 = range/tan(s10alpha);
81 S_{10}{3} = pi/2+atan(range/abs(abs((abs(S_{8}{2})-abs(S_{10}{2}))+Ls10)))+ntheta;
82 S_{11}{3} = pi-S_{10}{3};
83
84 % Calculate the orientation of sensors 12,13
85 s12alpha = S_{10}{3}+ntheta-pi/2;
86 Ls12 = range/tan(s12alpha);
87 if abs(abs(S_{10}{2})-abs(S_{12}{2}))+Ls12<0
88 S_{12}{3} = pi/2+(pi-atan(range/abs((abs(abs(S_{10}{2})-abs(S_{12}{2}))+Ls12))))+ntheta;
89 else
90 S_{12}{3} = pi/2+atan(range/(abs(abs(S_{10}{2})-abs(S_{12}{2}))+Ls12))+ntheta-0.2;
91 end
92 S_{13}{3} = pi-S_{12}{3};
93
94 % Calculate the orientation of sensors 14,15


95 s14alpha = S_{12}{3}+ntheta-pi/2;
96 Ls14 = range/tan(s14alpha);
97 if abs(abs(S_{12}{2})-abs(S_{14}{2}))+Ls14<0
98 S_{14}{3} = pi/2+(pi-atan(range/abs((abs(abs(S_{12}{2})-abs(S_{14}{2})))+Ls14)))+ntheta;
99 else
100 S_{14}{3} = pi/2+atan(range/abs((abs(abs(S_{12}{2})-abs(S_{14}{2})))+Ls14))+ntheta;
101 end
102 S_{15}{3} = pi-S_{14}{3};
103
104 % Calculate the orientation of sensors 16,17
105 s16alpha = S_{14}{3}+ntheta-pi/2;
106 Ls16 = (range+0.03*range)/tan(s16alpha);
107 if abs(abs(S_{14}{2})-abs(S_{16}{2}))+Ls16<0
108 S_{16}{3} = pi/2+(pi-atan(range/abs((abs(abs(S_{14}{2})-abs(S_{16}{2})))+Ls16)))+ntheta;
109 else
110 S_{16}{3} = pi/2+atan(range/abs((abs(abs(S_{14}{2})-abs(S_{16}{2})))+Ls16))+ntheta;
111 end
112 S_{17}{3} = pi-S_{16}{3};
113
114 % Calculate the orientation of sensors 18,19
115 s18alpha = S_{16}{3}+ntheta-pi/2;
116 Ls18 = (range+0.03*range)/tan(s18alpha);
117 if abs(abs(S_{16}{2})-abs(S_{18}{2}))+Ls18<0
118 S_{18}{3} = pi/2+(pi-atan(range/abs((abs(abs(S_{16}{2})-abs(S_{18}{2})))+Ls18)))+ntheta;
119 else
120 S_{16}{3} = pi/2+atan(range/abs((abs(abs(S_{16}{2})-abs(S_{18}{2})))+Ls18))+ntheta;
121 end
122 S_{19}{3} = pi-S_{18}{3};
123
124 % Calculate the position of sensors 19,20
125 Xs18 = range/tan(S_{18}{3}+ntheta-pi);
126 if ((S_{18}{3}+ntheta)*360/(2*pi))<270
127 X18 = S_{18}{1}-Xs18;
128 else
129 X18 = S_{18}{1}+Xs18;
130 end
131
132 % Calculate the orientation of sensors 20,21
133 s20alpha = S_{18}{3}+ntheta-pi;
134 Ls20 = (range)/tan(s18alpha);
135 S_{20}{3} = pi+atan(range/abs((abs(abs(S_{18}{1})-abs(S_{20}{1})))+Ls20))+ntheta-1.2;
136 S_{21}{3} = pi-S_{20}{3};
137
138 % Calculate the position and orientation of other sensors
139 Xs20 = range/tan(S_{20}{3}+ntheta-pi);
140 if ((S_{20}{3}+ntheta)*360/(2*pi))<270
141 X20 = S_{20}{1}-Xs20;
142 else
143 X20 = S_{18}{1}+Xs20;
144 end
145
146 Ns = 22;
147 Posx=0;
148 while X18<-5
149 Dist1819 = 2*abs(X20);
150 L = 2*range*tan(ntheta);
151 if Dist1819


154 X18=0;
155 else
156 s20alpha = S_{Ns-2}{3}+ntheta-pi;
157 Ls20 = range/tan(s20alpha);
158 S_{Ns} = {-8+Posx, -40};
159 S_{Ns}{3} = pi+atan(range/abs((abs(abs(S_{Ns-2}{1})-abs(S_{Ns}{1})))+Ls20))+ntheta;
160 S_{Ns+1} = {8-Posx, -40};
161 Posx = Posx+2;

Other angles (smaller than 14 degrees and larger than 14 degrees) are handled in a different loop. The second section shows how the figure containing the robot and the FOV of the sensors is plotted:

1 Ns=Ns+2;
2 Posx = Posx+5;
3 Nsensors = Nsensors+2;
4 Xs14 = range/tan(S_{Ns-2}{3}+ntheta-pi);
5 X14 = S_{Ns-2}{1}-Xs14;
6 end
7 end
8 end
9
10 nx1_{Nsensors}=[];
11 nx2_{Nsensors}=[];
12 ny1_{Nsensors}=[];
13 ny2_{Nsensors}=[];
14
15 % Calculate FOV of sensors
16 for N=1:Nsensors
17 nx1_{N}=Ls*cos(S_{N}{3}-ntheta)+S_{N}{1};
18 ny1_{N}=Ls*sin(S_{N}{3}-ntheta)+S_{N}{2};
19 nx2_{N}=Ls*cos(S_{N}{3}+ntheta)+S_{N}{1};
20 ny2_{N}=Ls*sin(S_{N}{3}+ntheta)+S_{N}{2};
21 end
22
23 x_{Nsensors} = [];
24 y_{Nsensors} = [];
25 col_ = ['r';'b';'g';'c';'m';'y';'k'];
26
27 % Show figure with robot and FOV of sensors
28 h = rectangle('Position',[-x0,-y0,Wsurface,Lsurface],...
29 'Curvature',[1, 0.4]);
30 hold on
31 xs = [-19, 19, 19, -19];
32 ys = [-40, -40, 40, 40];
33 fill(xs, ys,'w')
34 c=1;
35 for x=1:Nsensors
36 x_{x} = [S_{x}{1}, nx1_{x}, nx2_{x}];
37 y_{x} = [S_{x}{2}, ny1_{x}, ny2_{x}];
38 fill(x_{x}, y_{x}, col_(c))
39 if c==7
40 c=1;
41 else
42 c = c+1;
43 end
44 end
45 hold off
46 h1 = rectangle('Position',[-x0-range,-y0-range,Wsurface+2*range,Lsurface+2*range],...


47 'Curvature',[1, 0.2]);
48 daspect([1,1,1]);
49 axis([-160 160 -170 170])
50 grid on

E.1.2 Simulation random polygon in environment robot

1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2 % Name: SensorSimulation                                                  %
3 % Function: Simulation with polygons moving through robot's surroundings  %
4 %                                                                         %
5 % Created by: N.F. Jansen                                                 %
6 % On: 03-10-2010                                                          %
7 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
8
9 clear all
10 close all
11 clc
12
13 syms x
14 load SensorLocations.mat
15 % with angle of 14 degrees! (20 sensors)
16 % tilt sensors about 17 degrees to not detect the floor within 2 meters!
17 % (at 30 cm)
18 ntheta = 14*2*pi/360;
19 Lsurface = 80;
20 Wsurface = 37;
21
22 x0 = Wsurface/2;
23 y0 = Lsurface/2;
24
25 % p = get(0,'monitorpositions');
26 % scrsz = get(0,'ScreenSize');
27 % figure('OuterPosition',[p(1,1) p(2,1)+283 abs(p(1,1)) 768])
28
29 xs = [-x0, x0, x0, -x0];
30 ys = [-y0, -y0, y0, y0];
31 fill(xs, ys,'w')
32 hold on
33
34 % draw robot platform
35 Ls = 140*cos(ntheta);
36 for N = 1:size(S_,2);
37 nx1_{N}=Ls*cos(S_{N}{3}-ntheta)+S_{N}{1};
38 ny1_{N}=Ls*sin(S_{N}{3}-ntheta)+S_{N}{2};
39 nx2_{N}=Ls*cos(S_{N}{3}+ntheta)+S_{N}{1};
40 ny2_{N}=Ls*sin(S_{N}{3}+ntheta)+S_{N}{2};
41 end
42 for x = 1:size(S_,2)
43 x_{x} = [S_{x}{1}, nx1_{x}, nx2_{x}];
44 y_{x} = [S_{x}{2}, ny1_{x}, ny2_{x}];
45 fill(x_{x}, y_{x},'w')
46 end
47
48
49 r=1;


50
51 % Randomly choose a location in the environment for the polygon
52 xp = -200+300*rand(1,1);
53 yp = 50+110*rand(1,1);
54 sgn = rand(1,1);
55 if sgn>0.5
56 yp = -yp-40;
57 end
58
59
60 % Randomly choose a polygon
61 A = uint8(1+6*rand(1,1));
62
63 b = 1;
64 h_{20} = [];
65 while(b<20)
66 % Draw the polygon
67 [px, py] = drawpolygon(A, xp, yp);
68 % Calculate the distance of the polygon to the robot.
69 [S_{r}{4}, qx, qy] = calcDist2Obj(x_{r}, y_{r}, px, py);
70 h = plot(qx, qy,'.b');
71 yp = yp - 5;
72 xp = xp - 2;
73 a = 0;
74
75 % Calculate and show distance on FOV sensors.
76 while a < 5
77 r = 1+2*a;
78 S_{r}{4} = calcDist2Obj(x_{r}, y_{r}, px, py);
79 if not(isempty(h_{r}))
80 % Delete last reading if necessary
81 delete(h_{r});
82 h_{r} = [];
83 end
84 if S_{r}{4} > 0
85 h_{r} = drawreading(S_, r, ntheta);
86 end
87
88 r = 11+2*a;
89 S_{r}{4} = calcDist2Obj(x_{r}, y_{r}, px, py);
90 if not(isempty(h_{r}))
91 delete(h_{r});
92 h_{r} = [];
93 end
94 if S_{r}{4} > 0
95 h_{r} = drawreading(S_, r, ntheta);
96 end
97
98 r = 20-2*a;
99 S_{r}{4} = calcDist2Obj(x_{r}, y_{r}, px, py);
100 if not(isempty(h_{r}))
101 delete(h_{r});
102 h_{r} = [];
103 end
104 if S_{r}{4} > 0
105 h_{r} = drawreading(S_, r, ntheta);
106 end
107
108 r = 10-2*a;


109 S_{r}{4} = calcDist2Obj(x_{r}, y_{r}, px, py);
110 if not(isempty(h_{r}))
111 delete(h_{r});
112 h_{r} = [];
113 end
114 if S_{r}{4} > 0
115 h_{r} = drawreading(S_, r, ntheta);
116 end
117 pause(0.08);
118 a = a + 1;
119 end
120 delete(h)
121
122 daspect([1,1,1]);
123 b = b + 1;
124 end

E.1.3 Function CalcDist2Obj

1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2 % Name: calcDist2Obj.m                                                    %
3 % Function: Calculates points inside polygon and returns minimum distance %
4 %                                                                         %
5 % Created by: N.F. Jansen                                                 %
6 % On: 03-10-2010                                                          %
7 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
8
9
10 function [minDist, qx, qy] = calcDist2Obj(xv, yv, px, py)
11
12 % t is the cumulative arclength along the edges of the polygon.
13 t = cumsum(sqrt([0,diff(px(:)').^2] + [0,diff(py(:)').^2]));
14
15 % The total distance around the polygon is t(end)
16 tmax = t(end);
17
18 % create a piecewise linear spline for each of px and py,
19 % as a function of the cumulative chordwise arclength.
20 splx = mkpp(t,[diff(px(:))./diff(t'),px(1:(end-1))']);
21 sply = mkpp(t,[diff(py(:))./diff(t'),py(1:(end-1))']);
22
23 % now interpolate the polygon splines, splx and sply.
24 % Nt is the number of points to generate around the
25 % polygon. The first and last points should be replicates
26 % (at least to within floating point trash.)
27 Nt = 500;
28 tint = linspace(0,tmax,Nt);
29
30 qx = ppval(splx,tint);
31 qy = ppval(sply,tint);
32
33 % p = get(0,'monitorpositions');
34 % scrsz = get(0,'ScreenSize');
35 % figure('OuterPosition',[p(1,1) -433 abs(p(1,1)) 768])
36 % plot the polygon itself, as well as the generated points.
37 % plot(px,py,'k-v',qx,qy,'ro')


38 in = inpolygon(qx,qy,xv,yv);
39 % plot(qx(in),qy(in),'.g',qx(~in),qy(~in),'.b')
40 % h = plot(qx, qy,'.b');
41 minDist = min(sqrt((qy(in)-yv(1)).^2+(qx(in)-xv(1)).^2));
42 grid on
43 daspect([1 1 1])

E.1.4 Function Drawreading

1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
2 % Name: drawreading.m                                                     %
3 % Function: draws circular arc indicating possible location of object     %
4 %                                                                         %
5 % Created by: N.F. Jansen                                                 %
6 % On: 03-10-2010                                                          %
7 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
8
9 function h = drawreading(S_, r, ntheta)
10
11 P_{1}{1} = S_{r}{1}+S_{r}{4}*cos(S_{r}{3}+ntheta);
12 P_{1}{2} = S_{r}{2}+S_{r}{4}*sin(S_{r}{3}+ntheta);
13 P_{2}{1} = S_{r}{1}+S_{r}{4}*cos(S_{r}{3}-ntheta);
14 P_{2}{2} = S_{r}{2}+S_{r}{4}*sin(S_{r}{3}-ntheta);
15
16 if P_{1}{2} < 0 && P_{2}{2} > 0
17 x = P_{1}{1}:-0.1:S_{r}{1}-S_{r}{4};
18 y = S_{r}{2}-sqrt(S_{r}{4}^2-(x-S_{r}{1}).^2);
19 x2 = S_{r}{1}-S_{r}{4}+0.1:0.1:P_{2}{1};
20 y2 = S_{r}{2}+sqrt(S_{r}{4}^2-(x2-S_{r}{1}).^2);
21 x(size(x,2):size(x,2)+size(x2,2)-1) = x2;
22 y(size(y,2):size(y,2)+size(y2,2)-1) = y2;
23 h = plot(x,y,'r','LineWidth',4);
24 elseif P_{2}{2} < 0 && P_{1}{2} > 0
25 x = P_{1}{1}:0.1:S_{r}{1}+S_{r}{4};
26 y = S_{r}{2}+sqrt(S_{r}{4}^2-(x-S_{r}{1}).^2);
27 x2 = S_{r}{1}+S_{r}{4}:-0.1:P_{2}{1};
28 y2 = S_{r}{2}-sqrt(S_{r}{4}^2-(x2-S_{r}{1}).^2);
29 x(size(x,2):size(x,2)+size(x2,2)-1) = x2;
30 y(size(y,2):size(y,2)+size(y2,2)-1) = y2;
31 h = plot(x,real(y),'r','LineWidth',4);
32 elseif P_{1}{2} < 0 && P_{2}{2} < 0
33 x = P_{2}{1}:0.1:P_{1}{1};
34 y = -(-S_{r}{2}+sqrt(S_{r}{4}^2-(x-S_{r}{1}).^2));
35 h = plot(x,y,'r','LineWidth',4);
36 elseif (S_{r}{3}*360/(2*pi)) < -0
37 x = P_{1}{1}:0.1:S_{r}{1}+S_{r}{4};
38 y = -S_{r}{2}+sqrt(S_{r}{4}^2-(x-S_{r}{1}).^2);
39 x2 = S_{r}{1}+S_{r}{4}:-0.1:P_{2}{1};
40 y2 = -(-S_{r}{2}+sqrt(S_{r}{4}^2-(x2-S_{r}{1}).^2));
41 x(size(x,2):size(x,2)+size(x2,2)-1) = x2;
42 y(size(y,2):size(y,2)+size(y2,2)-1) = y2;
43 h = plot(x,y,'r','LineWidth',4);
44 elseif (S_{r}{3}*360/(2*pi)) > 180
45 x = P_{1}{1}:-0.1:S_{r}{1}-S_{r}{4};
46 y = -(-S_{r}{2}+sqrt(S_{r}{4}^2-(x-S_{r}{1}).^2));
47 x2 = S_{r}{1}-S_{r}{4}+0.1:0.1:P_{2}{1};


48 y2 = -S_{r}{2}+sqrt(S_{r}{4}^2-(x2-S_{r}{1}).^2);
49 x(size(x,2):size(x,2)+size(x2,2)-1) = x2;
50 y(size(y,2):size(y,2)+size(y2,2)-1) = y2;
51 h = plot(x,y,'r','LineWidth',4);
52 else
53 x = P_{1}{1}:0.1:P_{2}{1};
54 y = S_{r}{2}+sqrt(S_{r}{4}^2-(x-S_{r}{1}).^2);
55 h = plot(x,y,'r','LineWidth',4);
56 end
57 % plot the line

E.1.5 Simulation robot navigation with sonar

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Name:       RobotPathPlanning.m
% Function:   Main function of the simulation of path planning with sonar
%
% Created by: N.F. Jansen
% On:         03-10-2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all;
close all;
clc

syms vl vr theta         % symbolic variables for the robot model at the end of this file
load SensorLocations.mat
ntheta = 14*2*pi/360;    % half opening angle of the sonar beam (14 degrees)

% create map
course = imread('office.png');     % image of 'office'
map = imread('Emptyoffice.png');
% get 'obstacles' (the black pixels in the course image)
[Yobst, Xobst] = find(course(:,:,1)==0);

subplot(2,1,1)
imagesc(course), axis image on; colormap gray;
set(gca,'YDir','normal')

% robot initial position
title('Click to specify robot''s starting position');
% collect input point for robot starting position
[x0 y0] = ginput(1);
posn = [x0, y0, 0];      % x, y, theta
title('Click to specify robot''s initial heading');
% collect input point for robot heading; the heading is measured from
% the y-axis, hence the swapped coordinates below
[y x] = ginput(1);
y = y - posn(1);
x = x - posn(2);
posn(3) = cart2pol(x,y);
set(gca,'YDir','normal')
[vx_, vy_] = drawrobot(posn, course);
title('Click to specify robot''s goal');
% collect input point for robot goal
[xg yg] = ginput(1);
posgoal = [xg, yg, 0];   % x, y, theta

subplot(2,1,2)
imagesc(map), axis image on; colormap gray;
set(gca,'YDir','normal')
hold on

wdia = 40;   % distance between the robot's wheels (pixels)
t = 1;       % time step
vR = 5;      % right wheel speed
vL = 5;      % left wheel speed

Robj_{size(S_,2)} = [];
for i = 1:1000
    a = 0;
    Robj_{size(S_,2)} = [];
    while a < 5
        % stop when the robot is within 5 pixels of the goal
        if (abs(posn(1)) <= abs(xg)+5) && (abs(posn(1)) >= abs(xg)-5)
            if (abs(posn(2)) <= abs(yg)+5) && (abs(posn(2)) >= abs(yg)-5)
                title('Robot successfully reached its goal!');
                vR = 0;
                vL = 0;
                return
            end
        end
        % get speed difference & sum
        vdiff = vR - vL;
        vsum = vR + vL;

        % calculate the new heading angle
        posn(3) = posn(3) + vdiff*t/wdia;

        if (vdiff == 0)
            % calculate new [y x] if the wheels move together
            posn(1) = vL*t*sin(posn(3)) + posn(1);
            posn(2) = vR*t*cos(posn(3)) + posn(2);
        else
            % calculate new [y x] if the wheels move at unequal speeds
            posn(1) = posn(1) - wdia*vsum/(2*vdiff)*(cos(vdiff*t/wdia+posn(3))-cos(posn(3)));
            posn(2) = posn(2) + wdia*vsum/(2*vdiff)*(sin(vdiff*t/wdia+posn(3))-sin(posn(3)));
        end
        % draw the robot and return the polygons of the sonar FOVs
        subplot(2,1,1)
        [vx_, vy_] = drawrobot(posn,course);
        % drawnow;

        % keep the heading angle in the range -2*pi < theta < 2*pi
        if posn(3) > (2*pi)
            posn(3) = posn(3) - (2*pi);
        elseif posn(3) < -(2*pi)
            posn(3) = posn(3) + (2*pi);
        end

        % calculate and draw the distance from robot to object in each sensor FOV
        % (the four sensors fired in this step; see MainSensorComm.m)
        Q = 1 + 2*a;
        in = inpolygon(Xobst, Yobst, vx_{Q}, vy_{Q});
        Robj_{Q} = min(sqrt((Yobst(in)-vy_{Q}(1)).^2+(Xobst(in)-vx_{Q}(1)).^2));
        if Robj_{Q} > 0
            subplot(2,1,1)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
            subplot(2,1,2)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
        end
        Ss(1) = Q;

        Q = 11 + 2*a;
        in = inpolygon(Xobst, Yobst, vx_{Q}, vy_{Q});
        Robj_{Q} = min(sqrt((Yobst(in)-vy_{Q}(1)).^2+(Xobst(in)-vx_{Q}(1)).^2));
        if Robj_{Q} > 0
            subplot(2,1,1)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
            subplot(2,1,2)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
        end
        Ss(2) = Q;

        Q = 20 - 2*a;
        in = inpolygon(Xobst, Yobst, vx_{Q}, vy_{Q});
        Robj_{Q} = min(sqrt((Yobst(in)-vy_{Q}(1)).^2+(Xobst(in)-vx_{Q}(1)).^2));
        if Robj_{Q} > 0
            subplot(2,1,1)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
            subplot(2,1,2)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
        end
        Ss(3) = Q;

        Q = 10 - 2*a;
        in = inpolygon(Xobst, Yobst, vx_{Q}, vy_{Q});
        Robj_{Q} = min(sqrt((Yobst(in)-vy_{Q}(1)).^2+(Xobst(in)-vx_{Q}(1)).^2));
        if Robj_{Q} > 0
            subplot(2,1,1)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
            subplot(2,1,2)
            drawreadpath(S_, vx_{Q},vy_{Q}, posn, Q, ntheta, Robj_{Q});
        end
        Ss(4) = Q;

        % simple reactive controller: adjust the wheel speeds based on
        % the measured distances of the sensors fired in this step
        for l = 1:4
            if (vdiff == 0)
                if Ss(l) < 4
                    if Robj_{Ss(l)} < 10
                        vR = -5;
                        vL = -5;
                    end
                    if Ss(l) == 3
                        if Robj_{Ss(l)} < 50
                            vR = -3;
                            vL = 3;
                        elseif Robj_{Ss(l)} < 80
                            vR = 0;
                            vL = 5;
                        elseif Robj_{Ss(l)} < 900
                            vR = 3;
                            vL = 5;
                        else
                            vR = 5;
                            vL = 5;
                        end
                    elseif Ss(l) == 2
                        if Robj_{Ss(l)} < 50
                            vR = 3;
                            vL = -3;
                        elseif Robj_{Ss(l)} < 80
                            vR = 5;
                            vL = 0;
                        elseif Robj_{Ss(l)} < 900
                            vR = 5;
                            vL = 3;
                        else
                            vR = 5;
                            vL = 5;
                        end
                    end
                end
            else
                if Ss(l) == 3
                    if Robj_{Ss(l)} < 10
                        vR = -5;
                        vL = -5;
                    elseif Robj_{Ss(l)} < 50
                        vR = -5;
                        vL = 5;
                    elseif Robj_{Ss(l)} < 90
                        vR = 0;
                        vL = 5;
                    else
                        vR = 5;
                        vL = 5;
                    end
                elseif Ss(l) == 2
                    if Robj_{Ss(l)} < 10
                        vR = -5;
                        vL = -5;
                    elseif Robj_{Ss(l)} < 50
                        vR = 5;
                        vL = -5;
                    elseif Robj_{Ss(l)} < 90
                        vR = 5;
                        vL = 0;
                    else
                        vR = 5;
                        vL = 5;
                    end
                end
            end
        end

        pause(0.08);
        a = a + 1;
    end
end

% symbolic robot model (only reached when the goal is not found within
% 1000 iterations; not used by the simulation loop above)
r = 40;
vt = (vl+vr)/2;
omega = (vl-vr)/r;
xdot = vt*cos(theta);
ydot = vt*sin(theta);
thetadot = omega;

% goal position and orientation
xf = 200;
yf = 300;
thetaf = 30*2*pi/360;

hold off
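For reference, the pose update inside the while loop is the closed-form integration of the differential-steering kinematics of Section 4.2.1 over one time step $t$, with wheel speeds $v_L$, $v_R$, wheel base $w$ (wdia in the code) and the heading $\theta$ measured from the y-axis:
\[
\theta_{k+1} = \theta_k + \frac{(v_R - v_L)\,t}{w}, \qquad
\begin{aligned}
x_{k+1} &= x_k - \frac{w\,(v_R + v_L)}{2\,(v_R - v_L)}\bigl(\cos\theta_{k+1} - \cos\theta_k\bigr),\\
y_{k+1} &= y_k + \frac{w\,(v_R + v_L)}{2\,(v_R - v_L)}\bigl(\sin\theta_{k+1} - \sin\theta_k\bigr),
\end{aligned}
\]
which reduces to the straight-line update $x_{k+1} = x_k + v\,t\sin\theta_k$, $y_{k+1} = y_k + v\,t\cos\theta_k$ when $v_L = v_R = v$.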

E.1.6 Function Drawrobot

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Name:       drawrobot.m
% Function:   Plots the robot's surface and the FOV of the sensors in the course map
%
% Created by: N.F. Jansen
% On:         03-10-2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [vx_, vy_] = drawrobot(posn, course)

hold off;
% show the course
imagesc(course), axis image off; colormap gray;
load SensorLocations.mat
hold on;

Lsurface = 80;    % robot length (pixels)
Wsurface = 37;    % robot width (pixels)

alpha = posn(3);  % robot heading (relative angle)

% corners of the robot rectangle, rotated over the heading angle
CentRect = [posn(1); posn(2)];
L = sqrt(Wsurface^2+Lsurface^2);
beta = atan((Wsurface/2)/(Lsurface/2));
xa = CentRect(1)+(L/2)*sin(alpha+beta);
ya = CentRect(2)+(L/2)*cos(alpha+beta);
xb = CentRect(1)+(L/2)*sin(alpha-beta+pi);
yb = CentRect(2)+(L/2)*cos(alpha-beta+pi);
xc = CentRect(1)+(L/2)*sin(alpha+beta+pi);
yc = CentRect(2)+(L/2)*cos(alpha+beta+pi);
xd = CentRect(1)+(L/2)*sin(alpha-beta);
yd = CentRect(2)+(L/2)*cos(alpha-beta);
rect2.vertices = [xa ya; xb yb; xc yc; xd yd];
rect2.faces = [1 2 3 4];

patch(rect2,'Vertices',rect2.vertices,'FaceColor',[1 0 0]);
set(gca,'YDir','normal'); axis equal

% FOV of each sonar: a triangle with half opening angle ntheta and a
% range of 140 pixels, rotated with the robot
ntheta = 14*2*pi/360;
Ls = 140*cos(ntheta);
Rangle = posn(3);
for N = 1:size(S_,2)
    Sch = sqrt(S_{N}{1}^2+S_{N}{2}^2);
    beta = atan2(S_{N}{1}, S_{N}{2});
    x_{N} = posn(1) + Sch*sin(beta+Rangle);
    y_{N} = posn(2) + Sch*cos(beta+Rangle);

    nx1_{N} = Ls*cos(-Rangle+S_{N}{3}-ntheta)+x_{N};
    ny1_{N} = Ls*sin(-Rangle+S_{N}{3}-ntheta)+y_{N};
    nx2_{N} = Ls*cos(-Rangle+S_{N}{3}+ntheta)+x_{N};
    ny2_{N} = Ls*sin(-Rangle+S_{N}{3}+ntheta)+y_{N};

    vx_{N} = [x_{N}, nx1_{N}, nx2_{N}, x_{N}];
    vy_{N} = [y_{N}, ny1_{N}, ny2_{N}, y_{N}];

    plot(vx_{N},vy_{N},'b','LineWidth', 2);
end
end
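The corner computation above rotates the rectangle's half-diagonal over the heading $\alpha$: with diagonal length $L = \sqrt{W^2 + L_s^2}$ and $\beta = \arctan(W/L_s)$ the angle of the diagonal to the long axis, each corner follows from
\[
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} x_c \\ y_c \end{pmatrix}
+ \frac{L}{2}\begin{pmatrix} \sin(\alpha \pm \beta + k\pi) \\ \cos(\alpha \pm \beta + k\pi) \end{pmatrix},
\qquad k \in \{0,1\},
\]
where the sine appears in the x-component because the heading is measured from the y-axis.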

E.2 Communication with sensors

E.2.1 Main sensor communication

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Name:       MainSensorComm.m
% Function:   Main function of communication with the Devantech SRF08 sonar
%
% Created by: N.F. Jansen
% On:         03-10-2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all
close all
clc

load SensorLocations.mat

% make a serial connection to the PC-to-I2C device
SerComm = serial('COM7');
set(SerComm,'BaudRate',57600);
fopen(SerComm);
pause(0.5);

% half opening angle of 14 degrees (20 sensors); tilt the sensors about
% 17 degrees so the floor is not detected within 2 meters
% (at a mounting height of 30 cm)
ntheta = 14*2*pi/360;
Lsurface = 80;
Wsurface = 37;

x0 = Wsurface/2;
y0 = Lsurface/2;

% draw robot platform
xs = [-x0, x0, x0, -x0];
ys = [-y0, -y0, y0, y0];
fill(xs, ys,'w')
hold on

% I2C addresses of the SRF08 sensors (eight per bus)
Addresses = ['E0';'E2';'E4';'E6';'E8';'EA';'EC';'EE';'F0';'F2';'F4';'F6';'F8';'FA';'FC';'FE'];

% draw the FOV of every sensor
Ls = 140*cos(ntheta);
for N = 1:size(S_,2)
    nx1_{N} = Ls*cos(S_{N}{3}-ntheta)+S_{N}{1};
    ny1_{N} = Ls*sin(S_{N}{3}-ntheta)+S_{N}{2};
    nx2_{N} = Ls*cos(S_{N}{3}+ntheta)+S_{N}{1};
    ny2_{N} = Ls*sin(S_{N}{3}+ntheta)+S_{N}{2};
end
for x = 1:size(S_,2)
    x_{x} = [S_{x}{1}, nx1_{x}, nx2_{x}];
    y_{x} = [S_{x}{2}, ny1_{x}, ny2_{x}];
    fill(x_{x}, y_{x},'w')
end

ReadySensors = 1;
a = 0;

h_{20} = [];
for i = 1:70

    Echonr = 1;
    a = 0;
    while a < 5
        % fire four sensors at a time; sensors 17-20 sit on the second bus
        r(1,1) = 1 + 2*a;
        if r(1,1) > 16
            r(1,1) = r(1,1) - 16;
            Bus = 2;
        else
            Bus = 1;
        end
        StartRanging(SerComm, Addresses(r(1,1),:), Bus);
        r(1,2) = Bus;

        r(2,1) = 11 + 2*a;
        if r(2,1) > 16
            r(2,1) = r(2,1) - 16;
            Bus = 2;
        else
            Bus = 1;
        end
        StartRanging(SerComm, Addresses(r(2,1),:), Bus);
        r(2,2) = Bus;

        r(3,1) = 20 - 2*a;
        if r(3,1) > 16
            r(3,1) = r(3,1) - 16;
            Bus = 2;
        else
            Bus = 1;
        end
        StartRanging(SerComm, Addresses(r(3,1),:), Bus);
        r(3,2) = Bus;

        r(4,1) = 10 - 2*a;
        if r(4,1) > 16
            r(4,1) = r(4,1) - 16;
            Bus = 2;
        else
            Bus = 1;
        end
        StartRanging(SerComm, Addresses(r(4,1),:), Bus);
        r(4,2) = Bus;

        % read back the measured ranges and redraw the arcs
        for x = 1:4
            if r(x,2) > 1
                S_{r(x,1)+16}{4} = RecDistCmSens(Addresses(r(x,1),:), SerComm, Echonr, r(x,2));
                r(x,1) = r(x,1) + 16;
            else
                S_{r(x,1)}{4} = RecDistCmSens(Addresses(r(x,1),:), SerComm, Echonr, r(x,2));
            end
            Range(i,a+1,x) = S_{r(x,1)}{4};
            if not(isempty(h_{r(x,1)}))
                delete(h_{r(x,1)});
                h_{r(x,1)} = [];
            end
            if S_{r(x,1)}{4} > 0
                h_{r(x,1)} = drawreading(S_, r(x,1), ntheta);
                drawnow;
            end
        end

        a = a + 1;
    end
end

fclose(SerComm);
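The indices 1+2a, 11+2a, 20-2a and 10-2a select, for each step a, four sensors spread around the ring, so that simultaneously fired sensors point away from each other and cross-talk is suppressed; over the five steps every one of the 20 sensors is fired exactly once. A quick sketch to inspect the resulting groups:

% print the five firing groups of four sensors each (a = 0..4)
for a = 0:4
    fprintf('group %d: sensors %2d %2d %2d %2d\n', ...
            a+1, 1+2*a, 11+2*a, 20-2*a, 10-2*a);
end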

E.2.2 Function StartRanging

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Name:       StartRanging.m
% Function:   Sends a ranging (in cm) command to a Devantech sonar sensor
%
% Created by: N.F. Jansen
% On:         03-10-2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function StartRanging(SerialComm, Addr, Bus)

% switch the adapter to the second I2C bus ('G2') if needed
if Bus > 1
    SendBytes = hex2dec({'47','32'});   % ASCII 'G','2'
    fwrite(SerialComm,SendBytes);
end

% 'S' <address> '00' '51' 'P': write command 0x51 (range in cm) to the
% command register 0x00 of the sensor, framed by start and stop
SendBytes = hex2dec({'53','0','0','30','30','35','31','50'});
SendBytes(2:3) = double(Addr);
fwrite(SerialComm,SendBytes);

% switch back to the first bus ('G1')
if Bus > 1
    SendBytes = hex2dec({'47','31'});   % ASCII 'G','1'
    fwrite(SerialComm,SendBytes);
end
end
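As a concrete example, the byte sequence assembled above for sensor address 'E0' is the ASCII string 'SE00051P': start plus write ('S'), slave address E0, command register 00, the SRF08 ranging-in-centimetres command 0x51, and stop ('P'), following the software protocol of Chapter 5:

% check of the assembled command for address 'E0'
Addr = 'E0';
SendBytes = hex2dec({'53','0','0','30','30','35','31','50'});
SendBytes(2:3) = double(Addr);
disp(char(SendBytes'))   % prints SE00051P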

E.2.3 Function RecDistCmSens

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Name:       RecDistCmSens.m
% Function:   Requests and receives ranging information from a Devantech sensor
%
% Created by: N.F. Jansen
% On:         03-10-2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function DistCm = RecDistCmSens(Addr, SerialComm, Echonr, Bus)

start = clock;
TimeOut = 10;   % seconds

% switch the adapter to the second I2C bus if needed
if Bus > 1
    SendBytes = hex2dec({'47','32'});
    fwrite(SerialComm,SendBytes);
end

% 'S' <address> <register> 'S' <address+1> '01' 'P': point at the high
% byte of the requested echo, then restart as a read of one byte
SendBytes = hex2dec({'53','0','0','0','0','53','0','0','30','31','50'});
SendBytes(2:3) = double(Addr);
SendBytes(4:5) = double(num2str(dec2hex(Echonr*2,2)));   % echo high-byte register
SendBytes(7) = double(Addr(1));
SendBytes(8) = double(Addr(2))+1;                        % read address = write address + 1

fwrite(SerialComm,SendBytes);
byteH = fread(SerialComm,3);

% the adapter answers 'F' (fail) while the sensor is still ranging;
% retry until it responds or the timeout expires
while byteH(1) == 70
    fwrite(SerialComm,SendBytes);
    byteH = fread(SerialComm,3);
    if (etime(clock,start) > TimeOut)
        error('Timeout occurred! The sonar device is not responding.')
    end
end

% read the low byte of the echo (next register)
SendBytes(4:5) = double(num2str(dec2hex(Echonr*2+1,2)));

fwrite(SerialComm,SendBytes);
byteL = fread(SerialComm,3);

% discard implausible high bytes (48 is ASCII '0')
if hex2dec(char(byteH(2,:))) > 2
    byteH(2,:) = 48;
end

% assemble the four ASCII hex characters into the distance in cm
Sens(1,1) = char(byteH(1,:));
Sens(1,2) = char(byteH(2,:));
Sens(1,3) = char(byteL(1,:));
Sens(1,4) = char(byteL(2,:));

DistCm = hex2dec(Sens(1,:));

% switch back to the first bus
if Bus > 1
    SendBytes = hex2dec({'47','31'});
    fwrite(SerialComm,SendBytes);
end
end
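The two reads return the distance as four ASCII hex characters, high byte first; with made-up response characters, the assembly at the end of the function works out as:

% hypothetical response characters: high byte '01', low byte 'A2'
Sens = ['0' '1' 'A' '2'];
DistCm = hex2dec(Sens)   % 0x01A2 = 418 cm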
