Short Range Object Detection and Avoidance
N.F. Jansen CST 2010.068
Traineeship report
Coaches:
dr. E. García Canseco, TU/e
dr. ing. S. Lichiardopol, TU/e
ing. R. Niesten, Wingz BV
Supervisor: prof.dr.ir. M. Steinbuch
Eindhoven University of Technology Department of Mechanical Engineering Control Systems Technology
Eindhoven, November, 2010
Abstract
The scope of this internship is to investigate, model, simulate and experiment with a sensor for close-range object detection for the Tele-Service Robot (TSR). The TSR robot will be deployed in care institutions for the care of elderly and/or disabled people. The sensor system should play a supporting role in navigation and in mapping the robot's environment. Several sensors are investigated, of which a sonar system proves the optimal solution for this application. The low cost, wide field of view, suitable minimum and maximum range and networking capabilities of the Devantech SRF-08 sonar sensor were decisive in ultimately choosing this system. The positioning, orientation and tilting of the sonar devices are calculated, and simulations are made to gain insight into the behavior and characteristics of the sensors operating in a ring. The main issues surrounding the sensors are erroneous ranging results due to specular reflection, cross-talk and incorrect mounting. Cross-talk can be suppressed by firing the sensors in groups, but this decreases the refresh rate of the view of the robot's entire surroundings. Experiments are carried out to investigate the accuracy and the sensitivity to ranging errors and cross-talk. Because of the persistent cross-talk, which limits firing to only two sensors at a time, further experiments should explore reducing the range and timing settings to increase the refresh rate. A ROS node is still work in progress and is expected to be finished at the end of November. At this point the conclusion can be drawn that the sonar system can be a good asset in navigating and mapping the environment of the robot.
Traineeship Report, November 17, 2010
Contents
1 Introduction 10
2 Sensors 13
  2.1 Ranging techniques 14
    2.1.1 Proximity 14
    2.1.2 Triangulation 16
    2.1.3 Time-Of-Flight 17
    2.1.4 Phase Modulation 17
    2.1.5 Intensity of reflection 18
    2.1.6 Frequency modulation 18
  2.2 Sensor technologies 19
    2.2.1 Acoustical 19
    2.2.2 Optical 21
    2.2.3 Electromagnetic 21
  2.3 Candidate systems overview 22
  2.4 Discussion 24
3 Sonar Systems 25
  3.1 Sonar selection 25
  3.2 Discussion 27
  3.3 Implementation 28
    3.3.1 Cross-talk 28
    3.3.2 Refresh rate 29
4 Simulations 30
  4.1 Simulation sensor behavior 30
  4.2 Simulation robot navigation with sonar 33
    4.2.1 Kinematics of differential steering 33
    4.2.2 Results navigation simulations 35
5 Data Acquisition 38
  5.1 The physical I2C bus 38
  5.2 PC to I2C adapter 38
  5.3 Protocol and communication 39
    5.3.1 Start signal 40
    5.3.2 Slave address transfer 40
    5.3.3 Data transfer 40
    5.3.4 Stop signal 40
  5.4 The I2C software protocol 41
  5.5 Example communication 42
6 Experiments 45
  6.1 Sensor characteristics 45
    6.1.1 Measured straight-line distance 45
    6.1.2 Cross-talk 47
    6.1.3 Angular measurements to pole 48
    6.1.4 Maximum angle measurements to wall 49
  6.2 Sensors mounted to robot platform 50
    6.2.1 Cross-talk 51
    6.2.2 Ranging a corner 52
    6.2.3 Ranging a doorway 53
7 Concluding Remarks and Future Work 55
Acronyms 57
References 58
A Datasheet Devantech SRF08 59
B Datasheet Parallax PING))) 76
C Additional Measurement Tables and Results 89
D USB-I2C Devantech SRF-08 commands 91
E Matlab M-Files 93
  E.1 Simulations 93
    E.1.1 Calculate position and orientation sensors 93
    E.1.2 Simulation random polygon in environment robot 97
    E.1.3 Function CalcDist2Obj 99
    E.1.4 Function Drawreading 100
    E.1.5 Simulation robot navigation with sonar 101
    E.1.6 Function Drawrobot 105
  E.2 Communication with sensors 106
    E.2.1 Main sensor communication 106
    E.2.2 Function StartRanging 108
    E.2.3 Function RecDistCmSens 109
List of Figures
1.1 Overview of TSR setup, adapted from [10] 10
1.2 Overview of partners 11
2.1 Applied technologies for optical proximity sensors, adapted from [4] 15
2.2 Triangulation techniques, adapted from [4] 16
2.3 Phase modulation, adapted from [3] 18
2.4 Frequency modulation, adapted from [4] 18
2.5 Shape and Field-Of-View (FOV) of ultrasonic sensor, adapted from [12] 20
2.6 Ultrasonic ranging errors, adapted from [4] 20
3.1 Sensor placements and orientations 26
3.2 Sensor placement and orientation for Senscomp Series 6000 26
3.3 The groups of sensors operating on the mobile platform 28
4.1 Example of sonar object detection 30
4.2 Circle of radius r = 1, center (a, b) = (1.2, -0.5) 31
4.3 Sensor simulation of behavior on different objects 32
4.4 Maps for simulation of moving mobile robot 33
4.5 The Pioneer platform with differential steering 33
4.6 Wheels at different velocities 34
4.7 Simulation of mobile robot navigation throughout an office 36
4.8 Simulation of mobile robot navigation throughout a map 36
5.1 Overview I2C connection 38
5.2 The PC to I2C adapter 39
5.3 Overview I2C communication protocol 39
5.4 Bit transfer on I2C bus 40
5.5 Overview order of sensors mounted to mobile robot 43
5.6 Flow diagram of communication with sensors when ranging is issued 43
6.1 Comparison chart of real distance to measured distance 46
6.2 Resulting graph of cross-talk measurement 47
6.3 Plan of angular measurement of square pole 48
6.4 Plan of angular measurement of a wall 49
6.5 Resulting graph of angular measurement 50
6.6 Beam pattern according to data sheet 50
6.7 Overview of mobile robot platform with Devantech SRF-08 sonar sensors mounted for experimental purposes 51
6.8 Overview of robot and mounted sensors ranging the corner with wall of glass 52
6.9 Results of ranging a corner with wall of glass 53
6.10 Results of ranging a doorway 53
List of Tables
2.1 Candidate systems 23
2.2 Sensors 23
3.1 Sonar systems 25
3.2 Sonar systems and cost estimation 27
5.1 Send Ranging Command for Universal Serial Bus (USB) to I2C 42
5.2 Receive Ranging Command for USB to I2C 42
6.1 Measurement results of ranging a wall directly in front of the sensor 46
6.2 Cross-talk measurements 47
6.3 Results of angular measurement of square pole 48
6.4 Measurement results of ranging a wall in angular direction with respect to the sensor 50
6.5 Sensor grouping overview 52
C.1 Measurement results of ranging a wall directly in front of the sensor 89
C.2 Measurement results of angular ranging 90
C.3 First measurement results of angular ranging to wall 90
D.1 Start Ranging (cm) 91
D.2 Request ranging information first echo 91
D.3 Request software revision 91
D.4 First command to change address 91
D.5 Second command to change address 92
D.6 Third command to change address 92
D.7 Fourth command to change address 92
Chapter 1
Introduction
This is the final report for the Dynamics and Control Technology (DCT) internship assignment for the Tele-Service Robot (TSR) project, which started on the 30th of August and concluded with the final presentation on the 11th of November.
Within the TSR project the aim is to build a demonstrator tele-service robot for care and cure. The goal is to develop a care robot which can take over household and care tasks for the elderly and handicapped. Initially, the robot will be placed in care institutions. The robot can be controlled both by a remote operator (nurse) and by the elderly and/or handicapped person requiring care; see Figure 1.1. The robot can also perform certain tasks autonomously and ultimately learn tasks: e.g. avoid an object, manipulate an object using multiple arms, and move to a predefined location. The robot has to be user friendly, both in appearance and in operation. It has to be reliable and operate safely in a human environment. That is, it should not harm people, objects or itself, and it must comply with safety standards.
Figure 1.1: Overview of TSR setup, adapted from [10]
The TSR group consists of several companies and institutions, each responsible for a certain part of the project (Figure 1.2). My internship takes place at the Department of Mechanical Engineering of Eindhoven University of Technology (TU/e) and at Wingz BV at the High Tech Campus in Eindhoven. Wingz is responsible for the electrical hardware on the platform, whereas the Mechanical Engineering department of the TU/e is responsible for the mechanical part of the robot.
Figure 1.2: Overview of partners
The project is divided into iterative phases. For each phase, specific requirements are generated. Currently, the project is in Phase 1. The goal of Phase 1 is to build a TSR that allows shared control between the robot and the operator. For this, the cockpit of the TSR needs a user-friendly interface for manipulating the robot, and the robot needs to be able to perform certain tasks autonomously (on command of the operator) and to correct the inaccurate control provided by the operator. Besides building a functional system, the focus in this phase lies on extendibility and safety [10]. For navigation and mapping purposes, the Phase 1 robot is already equipped with a Scanning Laser Range Finder (SLRF) and a 3D camera. These sensors work together to obtain a map of the robot's environment, after which the robot is able to navigate and manoeuvre to a certain (predefined) position. However, the robot is not able to detect objects appearing behind it, which is an issue when backing up. Moreover, due to the camera's limited field of view and the narrow beam of light emitted by the SLRF, the system lacks performance, especially at close range. It is possible to miss an object when the mobile platform is standing close by, for example when the object is lower than the height at which the laser beam is emitted. Therefore, the main objective of this project is:
Analyse, model, simulate and implement a sensor for close-range applications for the purpose of the TSR robot
As there are several options available on the market, a decision has to be made for which sensor is
most suitable for the project. Every type of sensor has its advantages and disadvantages, for example in range, and these have to be taken into account. The assignment for this internship is first to survey the sensors on the market today; second, to approach the problem for the robot theoretically, based on simulations made with Matlab; and third, to carry out practical experiments with a prototype model of the TSR robot. To obtain full coverage of the close-range surroundings of the robot (up to about 50 cm), it has to be determined how many sensors are needed. This is calculated with Matlab. Every sensor has a different area of coverage, so for every candidate sensor the number of units needed to cover the robot's surroundings with just a small portion of overlap has to be computed. From these calculations, taking price into consideration as well, it can be decided which sensor is most suitable for the project, after which the sensor can be ordered. Next, simulations are made with Matlab to study the behavior of the sensors when a random object (with optimal reflection) approaches them (i.e. when the robot moves toward an object). The simulations are graphical, which means that a virtual environment has to be created. In this environment the area of coverage of the sensors is shown, and a random object is created which initially appears outside the range of the sensors. When the simulations are finished and the sensors are delivered, the sensors are mounted to the platform of a prototype robot. Finally, practical experiments are carried out to examine the actual behavior of the sensors in a real environment with non-optimally reflective objects. This shows the difference between the theoretical results and the results obtained in practice.
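The sensor-count calculation described above can be illustrated with a small sketch (written in Python rather than the Matlab used in the project; the beam angle and overlap below are assumed example values, not the project's measured figures):

```python
import math

def sensors_for_full_coverage(beam_angle_deg: float, overlap_deg: float = 0.0) -> int:
    """Number of identical sensors needed in a ring so that their fields
    of view cover the full 360 degrees around the robot, allowing a small
    angular overlap between neighbouring beams."""
    effective = beam_angle_deg - overlap_deg  # net angular contribution per sensor
    return math.ceil(360.0 / effective)

# Example: a 40-degree beam with 4 degrees of overlap per neighbour
print(sensors_for_full_coverage(40.0, 4.0))  # -> 10
```

In practice the required count also depends on the mounting radius and tilt of each sensor, which is why the report computes placements numerically rather than with this idealized ring model.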
Chapter 2
Sensors
The robot must be able to navigate from a certain position to a desired new location and orientation. At the same time it must be able to avoid any contact with fixed or moving objects while en route. Fundamental in this situation is a sensor that can acquire high-resolution data describing the robot's physical surroundings. The sensor should operate in a timely, yet practical, fashion in keeping with the limited onboard energy of a mobile vehicle. This raises numerous problems, of which the most essential for the collision avoidance system is the unknown nature and orientation of the target surface: the system must be able to detect a wide variety of surfaces under varying angles of incidence. The robot is already equipped with a SLRF and a stereovision 3D camera for navigation purposes (Simultaneous Localization And Mapping (SLAM)). However, due to technical limitations, this system lacks performance for close-range object detection, and therefore the main goal of this investigation is the selection of a ranging sensor for close-range applications. This sensor is needed to provide the mobile platform with sufficient awareness of its surroundings to allow it to move about in a realistic fashion, avoiding fixed and moving objects. The following considerations are taken into account:
− FOV Must be wide enough with sufficient depth of field to suit the application.
− Range capability The minimum range of detection, as well as the maximum effective range, in this case 50 cm, must be appropriate for the intended use of the sensor.
− Angle of operation Small angles leading to blind spots can deteriorate performance.
− Accuracy, resolution Must be in keeping with the needs of the given task.
− Ability to detect all objects in the environment
Objects can absorb emitted energy, target surfaces can be specular (such as straight walls) as opposed to diffuse reflectors, and ambient conditions and noise can interfere with the sensing process.
− Operate in real time
The update frequency must provide data at a sufficient rate, taking the vehicle's speed into account. The sensors should allow the vehicle to stop or alter course to avoid collision even when driving at maximum velocity.
− Concise, easy to interpret data
The output should be realistic from the standpoint of processing requirements. Too much data can be as meaningless as not enough data. Where multiple sensors are needed, the effort of combining the sensors and collecting data is a point of interest.
− Sensitivity to moving objects
When objects are moving or when the mobile vehicle is advancing, errors can occur in ranging, especially with Time-Of-Flight (TOF) techniques (section 2.1.3).
− Power consumption
The power requirements should be minimal, in keeping with the limited resources onboard a mobile vehicle.
− Size
The physical size and weight of the sensor system should be practical with regard to the intended vehicle.
− Price
The benefit-cost ratio should be optimal.
This second chapter is intended to provide some basic background on the various non-contact distance measurement techniques available, with related discussion of their implementation in the acoustical, optical and electromagnetic portions of the energy spectrum. This background information is based on [4], [12] and [13]. An overview of the candidate systems is presented and a decision is made on which type of sensor is optimal based on the considerations above.
2.1 Ranging techniques
2.1.1 Proximity
A proximity sensor determines the presence (as opposed to the actual range) of nearby objects without any physical contact. These sensors were developed to gain position information in the close-in region (between a fraction of a centimeter and one or two meters), extending the sensing range beyond that of direct-contact sensors. A proximity sensor often emits an electromagnetic or electrostatic field, or a beam of electromagnetic radiation (infrared, for instance), and looks for changes in the field or in the return signal. Different proximity targets require different sensors.

Permanent-magnet or Hall effect sensors
Permanent-magnet sensors are good for sensing ferrous metallic objects over very short ranges. Magnetic sensors detect changes, or disturbances, in magnetic fields that have been created or modified and output information; in the case of the Hall effect sensor, a voltage indicating the presence of an object.

Induction-type proximity switches
Induction-type proximity switches are applied to the detection of metal objects located at short range. Typical inductive sensors generate an oscillatory Radio Frequency (RF) field around a coil of wire. The effective inductance of the coil changes when a metal object enters the field.

Inductive and permanent-magnet sensors in general have limited use for purposes of object detection, except in very application-specific instances. For completeness, inductive sensors will nevertheless be taken into consideration for this project.
Ultrasonic proximity sensors
Ultrasonic proximity sensors (not to be confused with ranging systems) are useful over longer distances (in meters) for detecting most objects, liquid and solid. Such a system consists of one or two transducers: the first variant uses a transceiver, which both transmits and receives the energy; the second uses one transducer to transmit and another to receive. When no energy is present, the control circuitry indicates no output. When a signal is received and reaches the preset threshold, the sensor output changes state, indicating detection.
Optical proximity sensors Optical proximity sensors can be broken down in three groups [4](Figure 2.1):
(a) Break-Beam (b) Reflective
(c) Diffuse
Figure 2.1: Applied technologies for optical proximity sensors, adapted from [4]
(i). Break-beam
The break-beam technique is based on separate transmitting and receiving elements physically located on either side of the region of interest. The transmitter emits a beam of light, often supplied by a Light-Emitting Diode (LED), which is focused on the photosensitive receiver (Figure 2.1a). Any object that passes between the transmitter and receiver breaks the beam, which leads to a disruption of the circuit.
(ii). Reflective
Figure 2.1b shows the reflective technique. This technique evolved from break-beam through the use of a mirror, or retro-reflector, to reflect the transmitted energy back to a detector located at the position of the transmitter. The object is detected when it breaks the beam. This technique is not to be confused with measuring the intensity of reflection, as the purpose of the reflective technique is solely to detect an obstacle, not to measure its range.
(iii). Diffuse
The diffuse technique is fairly similar to the reflective operation, except that energy is reflected from the surface of the object of interest, as opposed to a cooperative target reflector (Figure 2.1c).
Capacitive sensors
Capacitive sensors are effective for short-range detection of most objects. Such sensors measure the electrical capacitance between a probe and its surrounding environment. As an object appears, the changing geometry and/or dielectric characteristics within the sensing region cause the capacitance to change. This affects the current flow through the probe, causing the sensor to detect the object. Capacitive sensors can detect the presence of a wide range of materials, but require relatively close range, and are therefore hardly applicable to this project.
The performance specifications of proximity sensors depend on several factors. Effective range is a function of the physical characteristics of the object to be detected, its speed and direction of motion, the design of the sensor probe, and the quality and quantity of energy it radiates or receives. Distance resolution is dependent upon object size, speed, and generally reduces with increased range. Other important factors are the changes in ambient conditions, variations in reflectivity or other material characteristics of the target and the stability of the electronic circuitry of the sensor.
2.1.2 Triangulation
Triangulation is a technique based on a simple trigonometric method for calculating the distances and angles needed to determine the location of the object of interest. An important premise of plane trigonometry is: given the length of one side and two angles of a triangle, it is possible to determine the length of the other sides and the remaining angle using the Law of Sines and/or Cosines [4]. Ranging systems using triangulation for robot navigation and collision avoidance are classified as either active or passive [4]. Passive stereoscopic ranging systems use only the ambient light of the scene to illuminate the target. The sensors, commonly TV cameras, solid-state imaging arrays or photodetectors, are placed at the locations P1 and P2 shown in Figure 2.2a. Both are focused on the same point, P3, thus forming an imaginary triangle. Active triangulation systems position a controlled light source, such as a laser, at either point P1 or P2, directed at the observed point P3. A directional imaging sensor is placed at the remaining point and is also aimed at P3; see Figure 2.2b.
(a) Passive stereoscopic triangular ranging (b) Active laser triangulation ranging
Figure 2.2: Triangulation techniques, adapted from [4]
With both systems an array of points can be determined by adjusting the incident angles of the detectors and/or the light source in a raster sequence. The resulting range map is a three-dimensional image of the environment in front of the sensor. Limiting factors common to all triangulation sensors include angular measurement inaccuracies (which grow with range) and inaccuracies due to highly absorptive or mirror-like surfaces.
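The Law of Sines premise behind triangulation can be sketched as follows (a Python illustration of the geometry of Figure 2.2, not code from the report; the baseline and angles are assumed example values):

```python
import math

def triangulation_range(baseline: float, alpha_deg: float, beta_deg: float) -> float:
    """Perpendicular distance from the baseline P1-P2 to the target P3,
    given the two base angles of the triangle (Law of Sines).
    By the Law of Sines, side P2-P3 = baseline * sin(alpha) / sin(pi - alpha - beta);
    projecting that side gives the perpendicular range below."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return baseline * math.sin(a) * math.sin(b) / math.sin(a + b)

# Example: a 10 cm baseline with both base angles at 60 degrees
print(round(triangulation_range(0.10, 60.0, 60.0), 4))  # -> 0.0866
```

Note how the range diverges as the two angles approach a degenerate (nearly parallel) configuration, which is the geometric reason triangulation accuracy degrades with distance, as mentioned above.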
2.1.3 Time-Of-Flight
The range-finding technique Time-Of-Flight (TOF) refers to the time it takes for a pulse of energy to travel from its transmitter to an observed object and back to the receiver [4]. The energy transmission typically originates from an ultrasonic, radio or light source. The relevant parameters are the speed of sound (roughly 340 m/s) and the speed of light (roughly 300 000 000 m/s). Time-Of-Flight systems measure the round-trip time between an energy pulse emission and the resulting pulse echo from the reflectance of an object. Using elementary physics, the distance d is determined by multiplying the velocity v of the energy wave by the time t required to travel the distance:

d = v t / 2    (2.1)

The measured time represents twice the distance traveled (from transmitter to object and back to the receiver) and must therefore be halved to obtain the actual range of the target. The advantages of TOF systems arise from the direct nature of their straight-line active sensing. The returned signal follows essentially the same path back to a receiver located in close proximity to the transmitter. The absolute range to an observed point is directly available as output, with no complicated analysis required. Furthermore, TOF sensors maintain accuracy as long as reliable signal detection is maintained, while triangulation schemes suffer from decreasing accuracy as range increases. Limitations of TOF systems are primarily related to the properties of the emitted energy, which vary across the spectrum. When light, sound or radio waves strike an object, a portion of the original signal is reflected and returns to the receiver. The remaining energy reflects in scattered directions or is absorbed, depending on the object's surface characteristics and the angle of incidence (angle of approach) of the source transmission.
The scattered signals can reflect from secondary objects as well and return to the detector at various times. This results in false signals yielding questionable or otherwise noisy data. Instances where no return signal is received can occur because of specular reflection by the object (where the angle of incidence equals the angle of reflection). When the transmission source exceeds a certain critical angle, the reflected energy is deflected outside the sensing envelope of the receiver, so the object of interest is not detected. Due to temperature and humidity changes, a variation in propagation speed is inevitable, inducing inaccuracies in the range computations. This variation is particularly relevant to acoustically based systems. Cross-talk can occur when multiple sensors operate in the same environment: when a pulse of energy emitted by one transmitter strikes an object whose shape reflects the energy toward the receiver of a different system, that system can misinterpret the reflected energy as its own echo.
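Equation 2.1 is simple enough to sketch directly (a Python illustration; the 343 m/s speed of sound and the example echo time are assumed values for room-temperature air, not measurements from the report):

```python
def tof_distance(echo_time_s: float, wave_speed_m_s: float = 343.0) -> float:
    """Target range from a round-trip echo time (Equation 2.1):
    the pulse travels the distance twice, so the product is halved."""
    return wave_speed_m_s * echo_time_s / 2.0

# A 2.9 ms ultrasonic echo at ~343 m/s corresponds to roughly half a metre
print(round(tof_distance(0.0029), 3))  # -> 0.497
```

The same function applies to optical TOF by passing the speed of light, which makes clear why optical systems need nanosecond-scale timing where ultrasonic systems only need milliseconds.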
2.1.4 Phase Modulation
Phase Modulation (PM) is a method of distance measurement that requires the transmission of a continuous wave of energy, in contrast to the pulsed outputs used in direct-measurement TOF systems. Phase modulation involves determining the shift in phase of the signal as it returns from a reflecting object. An unbroken beam of modulated laser energy is directed towards a target; a portion of this wave is reflected by the object and returned to the detector along a direct path. This returned energy is compared to a simultaneously generated reference beam which has been split off from the original signal, and the relative phase shift between the two is measured (Figure 2.3). This phase shift is a function of the round-trip distance that the wave has traveled. The accuracy of the phase-shift approach approaches that achievable by pulsed-laser TOF methods. Greater accuracy can be achieved by integrating over
many measurements. However, this is time consuming, which makes it difficult to achieve real-time data. The time measurement problem of the TOF system is eliminated, but it is replaced by the need for sophisticated phase measurement electronics, making the method considerably more complex to use.
Figure 2.3: Phase modulation, adapted from [3]
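The phase-to-range relation described above can be sketched as follows (a Python illustration under the standard assumption that one full 2π shift of the modulation envelope corresponds to half a modulation wavelength of one-way range, since the shift accumulates over the round trip; the 30 m modulation wavelength, i.e. roughly 10 MHz modulation, is an assumed example value):

```python
import math

def pm_distance(phase_shift_rad: float, modulation_wavelength_m: float) -> float:
    """Range implied by the measured phase shift of the modulation envelope.
    The shift covers the round trip, so a full 2*pi cycle corresponds to
    half a modulation wavelength; the result is ambiguous modulo that."""
    return (phase_shift_rad / (2.0 * math.pi)) * modulation_wavelength_m / 2.0

# Example: ~10 MHz modulation (wavelength ~30 m at the speed of light);
# a measured shift of pi radians implies a range of ~7.5 m
print(pm_distance(math.pi, 30.0))  # -> 7.5
```

The modulo ambiguity in the comment is exactly why, as noted later in section 2.1.6, phase modulation systems need two or more modulation frequencies for an unambiguous measurement.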
2.1.5 Intensity of reflection
The concept of ranging by intensity of reflection is based on an emitter projecting energy towards a surface, from which it is reflected and subsequently sensed by the receiver. The energy is typically acoustic, electromagnetic or optical. This sensing technique calculates the distance to the surface of the object of interest from the sensed intensity of the reflected energy. However, two distinct problems are fundamental to this method. First, if the surface acts as a mirror, it can reflect the incident energy away from the receiver; then no surface presence will be detected. Second, if the surface is highly absorptive, none of the incident energy will be reflected, which again leads to no surface being detected.
2.1.6 Frequency modulation
Frequency modulation involves the transmission of a continuous electromagnetic wave, modulated by a periodic triangular signal. This signal varies the carrier frequency linearly above and below the mean frequency, as shown in Figure 2.4.
Figure 2.4: Frequency modulation, adapted from [4]
The transmitter emits a signal that varies in frequency as a linear function of time:
f(t) = f0 + a t    (2.2)

where f0 is the mean frequency and a the rate at which the frequency changes over time. This signal is reflected from a possible target or object and arrives at the receiver at time t + T:

T = 2d / c    (2.3)

where T is the round-trip propagation time, d the distance to the target and c the speed of light. The received signal is compared with a reference signal taken directly from the transmitter. The received frequency curve is displaced along the time axis relative to the reference frequency curve by an amount equal to the time required for wave propagation to the target and back. These two frequencies, combined in the mixer, produce a beat frequency:
BF = f(t) − f(t + T ) = aT (2.4)
This beat frequency can be measured and used to calculate the distance to the object:

d = c · BF / (2a)    (2.5)

The linearity of the frequency shift controls the accuracy of the system. The frequency modulation system has an advantage over the phase modulation technique in that a single distance measurement is not ambiguous; phase modulation systems require two or more measurements at different modulation frequencies to be unambiguous. However, frequency modulation has several disadvantages associated with the required coherence of the laser beam (when one is used) and the linearity and repeatability of the frequency ramp.
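Equations 2.2 through 2.5 chain together as follows (a Python sketch; the 100 MHz-per-millisecond sweep rate and the 5 m target are assumed example values):

```python
C = 299_792_458.0  # speed of light in m/s

def fm_distance(beat_freq_hz: float, sweep_rate_hz_per_s: float) -> float:
    """Target range from the measured beat frequency (Equations 2.3-2.5):
    BF = a*T with T = 2d/c, hence d = c*BF / (2a)."""
    return C * beat_freq_hz / (2.0 * sweep_rate_hz_per_s)

# Example: a sweep of 100 MHz per millisecond (a = 1e11 Hz/s);
# a target at 5 m produces a beat frequency of 2*a*d/c, about 3.3 kHz
a = 1e11
bf = 2.0 * a * 5.0 / C
print(round(fm_distance(bf, a), 6))  # -> 5.0
```

The dependence on the sweep rate a in the denominator shows directly why the linearity and repeatability of the frequency ramp, mentioned above, dominate the accuracy of an FM system.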
2.2 Sensor technologies
2.2.1 Acoustical
With the development of Sound Navigation and Ranging (SONAR), high-frequency acoustic waves have been used to determine the position, velocity and orientation of objects. Ultrasonic energy (sound above the limit of human hearing) has been the most commonly applied technology. Ultrasonic transducers typically transmit at frequencies greater than 20 kHz, generated by both mechanical and electronic sources. Acoustical ranging can be implemented using triangulation, time-of-flight, phase-shift measurement or a combination of these techniques. The direction and velocity of a moving object can be determined by measuring the Doppler shift¹ in frequency of the returned energy. This shift is caused by objects moving toward or away from an observer. Typically, triangulation and time-of-flight methods transmit sound energy in pulses and are effective at longer distances for navigation and positioning, and at shorter distances for object detection. The shape and FOV of a pulse emitted by an ultrasonic ranger are shown in Figure 2.5.
¹The Doppler shift, or Doppler effect, is the change in frequency of a wave for an observer moving relative to the source of the wave [2]
Figure 2.5: Shape and FOV of ultrasonic sensor, adapted from [12]

The performance of ultrasonic ranging systems is significantly affected by environmental phenomena and sensor design characteristics. Of primary concern is the attenuation of sound energy over distance. As an acoustic wave travels away from its source, its intensity decreases according to the inverse square law and due to absorption of the sound by air, which varies with the humidity and dust content of the air. Absorption can also occur at the reflecting surface and is a function of the characteristics of the object being detected. Another parameter affected by the ambient properties of air is the velocity of sound; the factors involved are air temperature and wind direction and velocity. As the robot will be used indoors, these factors will not pose significant problems. Another factor to consider is the beam dispersion angle of the selected transducer. The best ranging results are obtained when the beam centerline is maintained normal to the target surface. However, as Figure 2.6a shows, if the angle of incidence deviates from the perpendicular, the range actually being measured does not always correspond to that associated with the beam centerline: the first beam reflection comes from the portion of the target that is closest to the sensor.
(a) Due to beam divergence (b) Due to specular reflection
Figure 2.6: Ultrasonic ranging errors, adapted from [4]
The actual line of measurement intersects the target surface at point B as opposed to point A. The width of the beam introduces an uncertainty in the perceived distance to an object from the sensor, but an even greater uncertainty in the angular resolution of the object’s position. A very narrow vertical
target, such as a long wooden dowel, maintained perpendicular to the floor corresponds to a relatively large region of floor space that would essentially appear to the sensor to be obstructed. Worse yet, an opening such as a doorway may not be perceptible at all to the robot when only 6 feet away, simply because at that distance the beam is wider than the door opening. Finally, errors due to the topographical characteristics of the target surface must be taken into account, as explained in section 2.1.3. When the angle of incidence of the beam decreases below a certain critical angle, the reflected energy does not return to strike the transducer (Figure 2.6b). Ultrasonic sensors are a powerful and practical method for range determination for selected applications. Their simple construction makes them reliable and economical. The low cost also makes design redundancy feasible, further improving system reliability and effectiveness. The issues mentioned above, however, have to be kept in mind.
2.2.2 Optical

Active optical sources employed in range finding include broadband incandescent sources (the emission of light from a hot body due to its temperature), narrowband LEDs and coherent lasers. Lasers can be found in ranging equipment based on triangulation, TOF, proximity, phase modulation, interferometry and return signal intensity (the last two options are not explained due to their limited applications in robot close range object detection). Lasers produce a bright, intense output, which is important for long-distance ranging and for distinguishing the signal from the background. Secondly, by nature or through the use of corrective optics, laser beams are narrow and collimated, with little or no divergence. This property allows the source to be highly directional or spatially selective, because an intense beam of energy can be concentrated on a small spot at long distances. However, this poses a problem when the entire region around a robot needs to be scanned. A scanning laser range finder is a solution, or mechanical additions are necessary, for example in the form of a rotating platform to cover the region of interest. However, some disadvantages should also be taken into account. All laser-based systems represent a potential safety problem in that the intense and often invisible beam can be an eye hazard. A highly accurate beam delivery system is needed for high accuracy, and the wide dynamic range of the returning energy complicates the design of the detector electronics. Laser sources typically suffer from low overall power efficiency. Finally, lasing materials are often unstable and possess short lifetimes, resulting in reliability problems. Ranging is accomplished by pulsed TOF methods, Continuous Wave (CW) PM or Frequency Modulation (FM). Infrared systems can be found in ranging equipment based on triangulation, TOF and proximity. Infrared systems emit a pulse of light at a wavelength of around 700-900 nm and are much less powerful than lasers.
Therefore, the effective range is shorter than that of the laser alternative. Similar to the laser, infrared distance sensors produce narrow beams of light that are highly accurate but suffer from reflectivity and angular inaccuracies, even more severe than the acoustical solutions. IR sensors can be used in combination with a lens to obtain wide-angle solutions, but this requires mechanical additions. Infrared light is suited for indoor activities because man-made objects tend to reflect infrared energy well. Infrared loses its usefulness outdoors due to the inherent radiation emitted by the natural terrain, roadways and objects. A consequent disadvantage of infrared is therefore its sensitivity to ambient conditions, especially lighting.
2.2.3 Electromagnetic

Radio Detecting And Ranging (RADAR) is the determination of the distance and bearing to an object, and/or its speed relative to an observer, calculated through the measurement of reflected electromagnetic waves. The properties of the received echoes are used to form a picture or determine certain information about the objects that cause the echoes. Specific advantages of radar sensing include
the ability to operate unaffected by smoke-, dust- or haze-filled environments. Radar systems can produce high accuracies in terms of target discrimination and range computation. Radars are also effective at measuring the speed of moving objects by Doppler shift methods, wherein the magnitude of the frequency shift of an energy wave reflected off a mobile target is proportional to its relative velocity. Pulsed energy systems can detect targets up to distances on the order of tens of kilometers, relying on the measurement of the round-trip time of a propagating wave. The high speed of propagation of the emitted energy makes short distance measurements difficult for this type of system. The extremely sharp, short-duration signals which must be generated and detected are expensive and complicated to realize. Continuous wave systems, on the other hand, are effective at shorter ranges because phase shift measurements do not depend on the wave velocity. Systems employing a single antenna have the principal advantage that the antenna will collect all the returned energy which falls upon it. The energy is returned from a beam whose width is inversely proportional to the diameter of the reflector. The disadvantages include the need to manipulate a large-diameter antenna system when the application requires narrow beams, and the effects of vibration and wind, which can necessitate a massive supporting structure. Phased-array antenna configurations present an alternative arrangement that eliminates the problem of using a large antenna system and the sensitivity to wind. However, the resulting smaller coverage area decreases overall effectiveness, and the requirement for electronically variable phase control increases the system complexity. The portion of the electromagnetic spectrum considered useful for practical radar is between 3 and 10 GHz.
The primary electromagnetic source used in most modern conventional radar systems is microwave energy. Microwave energy is ideally suited for long-range sensing because the resolution is sufficient, attenuation of the beams in the atmosphere is minimal and low-mode guiding structures can be constructed. An advantage of this system is the all-weather capability. However, these systems are susceptible to specular reflections at the target surface. Shorter wavelengths can be used to produce systems with high angular resolution and small-aperture antennas. Higher angular resolution is possible at longer wavelengths, but the antenna size becomes very large. For these reasons, conventional radar systems operating in the microwave portion of the spectrum have less applicability to the high-resolution collision avoidance needs of a mobile robotic platform. As an alternative to microwave radar systems, millimeter wave radar systems can be used. These involve a much shorter wavelength (500 micrometers to 1 centimeter), allowing small antenna apertures. Consequently, more information can be obtained about the nature of targets than at larger wavelengths, because of the reduced scattering of the reflected signal by objects. The overall physical size of the system is reduced, but the smaller apertures result in less collected energy. This limits the effective range of the system. The stronger atmospheric attenuation also prevents operation in all weather conditions. Unfortunately, even today few affordable commercial millimeter wave radars are available, and they are therefore not investigated.
The next section presents an overview of all candidate systems to be considered. A summary of all advantages and disadvantages is added to the table, and based on the characteristics of the sensor types, conclusions can be drawn and a decision can be made on which sensor is to be used.
2.3 Candidate systems overview
Table 2.1 shows an overview and summary of the results on the investigation of the sensor types. The information in this table is obtained from data sheets supplied by the manufacturers and websites from distributors. An additional small summary of the strengths and weaknesses of the particular sensor types is included.
Sensor | Cost per piece | Info provided | Strengths | Weaknesses | Min. range (cm) | Max. range (cm) | Accuracy on moving objects | Angle of operation (°)
Laser Displacement sensor | €1500,- | Distance | Material properties non-issue, accuracy | Price, range, low power efficiency | 3 | 40 | - | 10
Photo-Electric Triangulation | €200,- | Distance | Material properties non-issue, accuracy | Minimum range | 20 | 50 | - | 15¹
Optical Time of Flight | €4000,- | Distance | Material properties non-issue, accuracy | Short angle, price, minimum range | 10 | 200 | - | 0.1
Microwave Time of Flight Radar | €500,- | Distance | All-weather capability, high angular resolution | Minimum range, susceptible to specular reflections | 15 | 1500 | + | "wide"
Optical Reflection | €170,- | - | Material properties non-issue, accuracy | Short angle | 0.1 | 70 | +/- | 2
Inductive Sensor | €50,- | Metal/non-metal | Accuracy on metal, not influenced by movement | Only metal objects, range | 0 | 6 | + | 15
Capacitive Sensor | €100,- | Object presence | Material properties non-issue, accuracy | Range | 0 | 3 | + | -
Switch | €10,- | Object presence | Material properties non-issue, accuracy | Range | 0 | 0 | + | -
SLRF | €1000,- to €5000,- | Distance | Material properties non-issue, accuracy | Price, low power efficiency | 6 | 400 | +/- | 240
Ultrasonic Proximity Sensors | €20,- | Object presence | Accuracy, wide beam | Dependency on sound absorption | 3 | 300 | - | 20
Ultrasonic Range Finder | €30,- to €40,- | Distance | Accuracy, wide beam, ease of use | Dependency on sound absorption | 3 | 600 | +/- | 25
Infra-Red Range Finder | €10,- to €20,- | Distance | Price, simplicity | Dependency on light, short angle | 0 | 80 | +/- | 3

¹ With the use of a prismatic reflector
Table 2.1: Candidate systems
Most of the information in Table 2.1 is obtained from the data sheets of the sensors stated in Table 2.2.
Type | Product
Laser displacement | Idec MX1A
Photo electric triangulation | Idec S1AD
Optical time-of-flight | Sick DME 2000/3000
Microwave time-of-flight radar | AM Sensors MSM10500
Inductive | Contrinex DW-AD-70-M30
Capacitive | Carlo Gavazzi EC M30 DC
Scanning Laser Range Finder | Hokuyo UBG-04LX-F01
Ultrasonic range finder | Devantech SRF-08
Infrared range finder | Sharp GP2D12-15
Optical reflection | Eaton E67
Table 2.2: Sensors
The sensors are available from Conrad Electronic Benelux BV, Farnell, Robot-Electronics or [14].
2.4 Discussion
The laser displacement sensor and scanning laser range finder are not viable options for this particular scope; the poor cost-benefit ratio of these devices is decisive. The SLRF currently installed on the robot is suitable for range detection, but the technical limitations explained in sections 2.2.2 and 1 remain unsolved when another laser (or SLRF) is implemented. Therefore, choosing a laser (range finder) is not worthwhile. The photo-electric triangulation system has a minimum range of 15 cm and is therefore not suitable. Due to technical limitations, the sensors cannot be positioned further inwards on the robot's platform, because a large part of the surface is destined to support several devices such as the Mac Mini for the Robot Operating System (ROS) interfacing. Furthermore, the angle of operation is short. This could be solved by using a prismatic reflector, but that complicates the design (a reflector, with its own support, would have to be placed for every sensor) and is therefore not favorable. The microwave time-of-flight radar system is an interesting option; however, it is very expensive, and again the minimum range is an issue. Optical reflection is not very expensive and is still very accurate, but these sensors have a very short angle of operation. Given the requirement of full coverage of the area surrounding the robot from 50 cm, the number of sensors needed to obtain a satisfactory result is substantial, making this still a very expensive solution. The inductive, capacitive and switch systems can be excluded as well. These sensors are useful for specific product solutions, but not for this project. However, a considerable option is to implement a capacitive sensor, or possibly a switch, for extra collision prevention next to a separate close range system. Infrared systems could be a solution considering their price and accuracy.
However, the short angle of operation, the sensitivity to ambient conditions and the weak performance when sensing windows point to problems in implementing these types of sensors. In this case, the ultrasonic (or SONAR) range finder is the best solution. These sensors are easy to implement, have a wide angle and are relatively cheap for achieving the 50 cm coverage requirement. With the exception of sound absorption, all primary disadvantages and problems regarding the sonar technique can be overcome. Timing issues due to the cross-talk problem can be solved by using groups of sonar devices. The "missing" of doorways can be solved by choosing a system that sends multiple bursts, capable of determining different ranges of objects during one measurement poll. This is further explained in the following chapter. Ultrasonic proximity sensors are eliminated due to the fact that they only supply object presence. The mobile robot platform should be able to perform actions based on different distances from an object, which is not possible when an object can only be detected.
Chapter 3
Sonar Systems
3.1 Sonar selection
Since a decision has been made on the type of sensor to be used, the selection of the specific sonar sensor intended for ranging is now further specified. Table 3.1 states four sensors that are used in most mobile robot solutions for ranging and object detection. The information provided in this table is collected from the data sheets of the products and the websites of the manufacturers.
Sensor | Range (cm) | Angle (°) | Refresh Rate (Hz) | Frequency (kHz) | Price per piece
Parallax PING))) | 2-300 | 20 | >55 | 40 | €22,98
Devantech SRF-08 | 3-600 | 28 | >15 | 40 | €29,42
LV-MaxSonar-EZ1 | 0-645 | 18.5 | 20 | 42 | €21,15
SensComp Series 6000 | 2.5-1050 | 21 | 5 | 50 | €15,88¹

¹ Price of necessary ranging module is excluded
Table 3.1: Sonar systems
Prices are obtained from [1] and converted from pounds to euros at an exchange rate of 1:1.15375 (live rate of 05-10-'10). The angles of operation are derived from the data sheets. The manufacturers measure the angle using different objects, but the commonly used objects are either cylindrical or round, which have the property of reflecting sound in all directions. This allows the sensor to sense the object from a wide angle. If, for instance, a wall, a board or a square pole is the object of interest, the angle of operation will be much smaller due to specular reflections, as shown in the data sheet of the Parallax PING))) (Appendix B). For this reason, and taking into account the inaccuracy of the orientation when the sensor is mounted to the platform, the ratio between the widest and smallest angle obtained from the data sheet is used for every sensor, provided that only the ideal situation is given in the other data sheets. This ratio is approximately 2:1. The resulting angle of operation is pessimistic, but accommodates small angular displacements and inaccuracies due to mounting or varying ambient conditions. All four sonar sensors use the TOF technique (section 2.1.3) to determine the range of an object. To calculate the number of sensors needed, a Matlab m-file (Appendix E.1.2) is built that contains the robot's surface (80x37 cm) and the position of the first 10 sensors. To suppress the complexity of the calculations for the positioning and orientation of the sensors, and due to uncertainties in the shape of the surface, it is assumed that the surface of the robot's platform is rectangular. Then, the orientation (in x, y) is calculated keeping the full coverage at 50 cm in mind, and sensors are added at variable positions to achieve the goal of this project. The results are shown in Figure 3.1.
(a) Parallax PING and LV-MaxSonar-EZ1 (b) Devantech SRF08 (c) Senscomp Series 6000
Figure 3.1: Sensor placements and orientations
The Parallax PING))) and LV-MaxSonar-EZ1 (Figure 3.1a) sensor systems both emit an acoustic wave with a worst-case operational angle of 10 degrees. This induces the necessity of 28 sensors to obtain full coverage at 50 cm. The Devantech SRF-08 (Figure 3.1b), with a worst-case operational angle of 14 degrees, necessitates 20 sensors mounted to the mobile robot's platform. The SensComp Series 6000 Transducer (Figure 3.1c) detects objects with a worst-case operational angle of 12 degrees; therefore, the number of sensors needed is 24. The square surface centered in the figures is the mobile robot's platform. The colored triangles surrounding the platform represent the FOV of the selected sonar sensor. The rounded rectangle crossing the FOV of the sensors is the 50 cm boundary line. By calculating the orientation and placement on the platform for each sensor, using regular trigonometric formulas, the number of sensors is determined. Clearly, the FOVs of all sensors cross at this boundary. An example of the calculations is shown below.
Figure 3.2: Sensor placement and orientation for Senscomp Series 6000
β = θ1 + α    (3.1)

D = R / tan β    (3.2)

θ2 = tan⁻¹(R / (L + D)) + α    (3.3)

The resulting number of sensors can then be used to estimate the cost of the batch of sensors needed for the mobile robot. Table 3.2 expresses these values.
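Equations (3.1)-(3.3) can be chained in a short script: given the orientation of one sensor, they yield the orientation the next sensor needs so that the two FOVs cross on the coverage boundary. The sketch below follows the report's symbols; the numeric inputs are illustrative assumptions, not the report's actual platform values.

```python
import math

# Sketch of the placement recursion in equations (3.1)-(3.3). Given the
# orientation theta1 of one sensor, the FOV half-angle alpha, the coverage
# radius R and the spacing L to the next sensor, compute the orientation
# theta2 of the next sensor. Symbol names follow the report.

def next_orientation(theta1: float, alpha: float, R: float, L: float) -> float:
    beta = theta1 + alpha                    # (3.1)
    D = R / math.tan(beta)                   # (3.2): distance to FOV crossing
    theta2 = math.atan(R / (L + D)) + alpha  # (3.3)
    return theta2

# Illustrative values: R = 50 cm boundary, alpha = 7 deg (half of an assumed
# 14 deg worst-case FOV), first sensor oriented at 45 deg, 10 cm spacing.
theta2 = next_orientation(math.radians(45.0), math.radians(7.0), R=50.0, L=10.0)
print(math.degrees(theta2))
```

Iterating this relation along an edge of the platform, and counting how many sensors fit before the edge ends, gives the sensor counts reported in Figure 3.1.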
Sensor | Price per piece | Amount of sensors needed | Cost estimation
Parallax PING))) | €22,98 | 28 | €640,-
Devantech SRF-08 | €29,42 | 20 | €588,-
LV-MaxSonar-EZ1 | €21,15 | 24 | €507,-
SensComp Series 6000 | €15,88¹ | 24 | €997,-²

¹ Price of necessary ranging module is excluded
² Cost estimation includes a Senscomp 6500 ranging module for every sensor, €25,68 per piece
Table 3.2: Sonar systems and cost estimation
3.2 Discussion
Comparing the cost estimation combined with the angle of operation, both the LV-MaxSonar-EZ1 and the Devantech SRF-08 come forward. The Senscomp transducers require an additional ranging module or smart sensor, approaching €1000,- for the entire package of transducers with ranging modules or the use of a smart sensor. This eliminates the Senscomp transducer as a viable option. The Parallax PING))) sensor is significantly less expensive than the SensComp transducers, but due to its small angle of operation requires 28 sensors to obtain full coverage. The Parallax PING))) sensor module is equipped with 3 lines, Vdd (+5 V DC), Vss (ground) and SIG (Input/Output (I/O) line) (Appendix B), providing the ability to easily connect and interpret the data. However, in this case multiple sensors need to be used. The capability of the sonar systems to operate in a networking environment is key to suppressing the complexity of this project. Due to this single I/O pin, networking is impossible, and the use of separate I/O pins for every individual sensor is inevitable. This increases cost, programming development and the complexity of the system. The small angle of operation, combined with the poor support for networking the sensors, makes the PING))) a weak contender, leaving the Devantech and LV-MaxSonar as the two remaining options. The Devantech features an advantageous FOV, but this comes at a higher cost. Both the LV-MaxSonar and the Devantech operate at a refresh rate of 15-20 Hz. The LV-MaxSonar is slightly faster, but the Devantech offers the ability to alter the range or analogue gain (Appendix A). Lowering the range or gain would effectively decrease the range and increase the refresh rate. The Devantech uses an Inter-Integrated Circuit (I2C) communication protocol, ideally suited for networking. Up to 16 sonar sensors can be serially connected to one I2C bus and addressed individually. The LV-MaxSonar features RX and TX lines, but no addressing. Therefore, for its communication and networking capabilities, its wide FOV, the ability to improve the refresh rate and its reasonable pricing, the Devantech SRF-08 is the sensor of choice.
3.3 Implementation
The Devantech sensor modules are equipped with an I2C communication bus, which is capable of providing the data necessary to obtain range information and of supporting 16 sensors on one bus. Given this, only two separate buses are necessary to access all sensors. The properties, a short description of the I2C bus and how to obtain an operational network that encloses the array of sensors are further explained in chapter 5. Mounting the hardware to the robot's surface will take place after the experiments and simulations have been completed. The values obtained from the experiments provide the accurate location and orientation of each individual sonar sensor relative to the mobile robot's platform. Two remaining possible problems are cross-talk and the refresh rate. These will be discussed in the next sections.
3.3.1 Cross-talk

Cross-talk occurs when a pulse of energy is emitted from one transmitter and an object with a certain shape causes the energy to be reflected in the direction of a receiver of a different system. As a countermeasure, the devices are arranged in groups. In order to suppress cross-talk, but also to improve the refresh rate of the entire system, a compromise is made between safety and speed. The more sensors are placed in one group, and thus fired at the same time, the less time is required to poll the entire robot's surroundings. However, this increases the chance of ranging errors due to cross-talk. Therefore, the groups of sensors are selected in such a way that the spacing between the individual sensors is optimal and the refresh rate is improved compared to firing the sonar systems one by one. The sensors are arranged as shown in Figure 3.3.
(a) Group 1 (b) Group 2 (c) Group 3
(d) Group 4 (e) Group 5
Figure 3.3: The groups of sensors operating on the mobile platform
Each individual sensor that is a member of an active group receives the start-ranging, or 'ping', command. Then, when all sensors within the group have been fired, the first member's ranging information is acquired and interpreted. Subsequently, the following group is initiated by ranging the first sensor of that group, followed by the second sensor, and so on. The command to return the ranging information is sequenced immediately after the poll command, to ensure performance with respect to the refresh rate. When a read command is received by a sensor while it is still ranging, the hexadecimal number 0xFF is returned, meaning the sensor is unable to return information yet. The ranging information is available when the latest echo has returned, or when a time-out has occurred. Therefore, a continuous stream of read commands is sent until the sensor eventually provides ranging information. Initially, this sequence is simulated with Matlab, including a random polygon representing an object. The goal is to obtain information and details about the sensor's characteristics and the optimal range/analogue gain setting.
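The fire-then-poll sequence described above can be sketched as follows. A stub bus object stands in for the real I2C interface, so the sketch is runnable without hardware; the 0xFF "still ranging" convention is taken from the text, while the stub's timing behaviour and all class/method names are invented for illustration.

```python
# Sketch of the group-firing sequence: fire every sensor in a group, then
# read each member back, retrying while it still reports 0xFF.

STILL_RANGING = 0xFF

class StubBus:
    """Stand-in for the I2C bus: each sensor needs a few reads before
    its data is ready (purely illustrative timing)."""
    def __init__(self, ranges_cm):
        self.ranges_cm = ranges_cm   # address -> final range reading (cm)
        self.pending = {}            # address -> polls remaining until ready
    def ping(self, address):
        self.pending[address] = 3    # ranging "takes" three polls here
    def read(self, address):
        if self.pending.get(address, 0) > 0:
            self.pending[address] -= 1
            return STILL_RANGING     # sensor is still ranging
        return self.ranges_cm[address]

def poll_group(bus, addresses):
    """Fire the whole group first, then collect readings in order."""
    for addr in addresses:
        bus.ping(addr)               # start all sensors in the group
    readings = {}
    for addr in addresses:
        value = bus.read(addr)
        while value == STILL_RANGING:  # keep reading until data is ready
            value = bus.read(addr)
        readings[addr] = value
    return readings
```

For example, `poll_group(StubBus({0x70: 120, 0x71: 85}), [0x70, 0x71])` returns the two readings after the stub's simulated ranging delay. Note that firing first and reading afterwards is what lets the group members range in parallel, which is the source of the refresh-rate gain discussed in the next section.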
3.3.2 Refresh rate

The Devantech SRF-08 sensors can be fired every 65 ms (Appendix A). Some data acquisition and interpretation is necessary before the next fire command can be sent. Summing these factors, the time necessary to fire, read and collect the data for one sensor is ∼80 ms. Provided that the sensors operate in groups of 4 (see section 3.3.1), the refresh rate of the real-time surroundings of the mobile robot is

tt = Ag · t = 5 · 80 = 400 ms    (3.4)

where Ag is the amount of groups, t the update time in ms and tt the total time in ms. In the situation where an object is 1.5 m away and for some reason still not detected, the next moment the object can be detected by one sensor is 400 ms later. Equivalently, when the mobile robot is driving at its maximum speed of 0.5 m/s, the object is then at a distance of 1.3 m. Taking this into consideration, the total refresh rate of the robot's surroundings is sufficient for the detection of and manoeuvring around objects. However, this is a theoretical approach, and therefore, to properly prove the previous statement, experiments will be carried out. If the outcome of the experiments is negative, meaning the refresh rate is not sufficient, the Devantech SRF-08 provides a feature to decrease the range and analogue gain of the device. Changing the range is done by setting the value of register one to

R = (Reg · 43) + 43    (3.5)

where R is the range in mm and Reg the value of the Range Register. This information is provided by the data sheet of the manufacturer. However, this alteration does not effectively change the range itself, but the time at which the sensor is able to provide the ranging information. In other words, it increases the refresh rate of the sensor without changing anything about the emitted pulses. This causes a problem when operating in a group of sensors: echoes from one sensor can still be propagating while its ranging information has already been gathered and another sensor is fired simultaneously.
The effect of cross-talk will then occur. To suppress this effect, the analogue gain of the device can be lowered. This allows the sensor to emit a weaker pulse, causing the effective range to decrease. The table of values to set the analogue gain can be found in the data sheet (Appendix A).
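The arithmetic of equations (3.4) and (3.5) is simple enough to check directly; the two helper functions below are purely illustrative wrappers around the formulas as given in the text.

```python
# Quick check of the refresh-rate arithmetic in equation (3.4) and the
# range-register mapping in equation (3.5), as stated in the text.

def total_refresh_time_ms(groups: int, per_group_ms: float = 80.0) -> float:
    """(3.4): tt = Ag * t, the time to poll the full surroundings."""
    return groups * per_group_ms

def range_mm(reg: int) -> int:
    """(3.5): R = (Reg * 43) + 43, range in mm for a Range Register value."""
    return reg * 43 + 43

# Five groups at ~80 ms each reproduce the 400 ms quoted in the text, and
# the maximum register value of 255 maps to about 11 m via the formula.
assert total_refresh_time_ms(5) == 400.0
assert range_mm(255) == 11008
```

This makes the trade-off explicit: halving the number of groups (by firing more sensors simultaneously) halves the total refresh time, but at the price of the cross-talk risk discussed above.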
Chapter 4
Simulations
This chapter describes the procedure used to simulate the behavior of the sensors and the results of these simulations.
4.1 Simulation sensor behavior
The behavior of the sensors is simulated by fixing the robot's surface and letting random polygons, representing random objects, pass through the surroundings of the robot. To do so, the polygons are first created, then randomly chosen and placed somewhere on the map together with the robot. By moving the polygon in a random direction, the polygon, or object, will eventually move into the FOV of one or more sensors. The detection of the object is based on supplying the minimal distance between the actual position of the sensor and the object. Figure 4.1 illustrates the behavior of the sensor. As the sensor only provides the distance reading of the object, it is uncertain where the object is actually located in the robot's surroundings. Therefore, illustrative circular arcs are shown, indicating the possible locations of the object within the FOV of the sensor ([15]).
Figure 4.1: Example of sonar object detection

The location, orientation and curvature of the arc are calculated from the distance reading of the sensor, which yields the radius of the arc, and the field of view of the sensor. The outer points of the arc are calculated by the following equations
Px,1 = xs + R cos(θ + α − φ)    (4.1)
Py,1 = ys + R sin(θ + α − φ)    (4.2)
Px,2 = xs + R cos(θ − α − φ)    (4.3)
Py,2 = ys + R sin(θ − α − φ)    (4.4)

where (xs, ys) is the position of the sensor, (Px,i, Py,i) the outer points on the arc, R the range information (distance to the object), θ the orientation of the sensor, φ the orientation of the robot with respect to the x, y axes and α the angle of operation. In the Cartesian coordinate system, the circle with center (a, b) and radius r is the set of points (x, y) such that

(x − a)² + (y − b)² = r²    (4.5)

This equation of the circle follows from the Pythagorean theorem applied to any point on the circle, as shown in Figure 4.2.
Figure 4.2: Circle of radius r = 1, center (a, b) = (1.2, −0.5)

Using equation (4.5), the points of the circular arc are calculated by the following set of equations. As

(y − ys)² = R² − (x − xs)²    (4.6)

we find

y − ys = +√(R² − (x − xs)²)  or  y − ys = −√(R² − (x − xs)²)    (4.7)
Using equation (4.5), for −π < φ, θ, α ≤ π the points of the arc in the sensor's FOV are calculated by equations (4.8) if and only if (−θ + π/2) + φ + α < π/2 and (−θ + π/2) + φ − α > π/2.

for xs − R < x < Px,1:  y = ys − √(R² − (x − xs)²)
for xs − R < x < Px,2:  y = ys + √(R² − (x − xs)²)    (4.8)

with x, y being the points on the circle representing the object's apparent location.
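Equations (4.1)-(4.4) translate directly into a small helper that returns the two outer points of the arc. The symbol names follow the text; the example angles in the comments are illustrative assumptions.

```python
import math

# Sketch of equations (4.1)-(4.4): the two outer points of the circular arc
# on which the detected object may lie, for a sensor at (xs, ys) with
# orientation theta, FOV half-angle alpha, robot orientation phi and
# measured range R.

def arc_endpoints(xs, ys, R, theta, alpha, phi):
    """Return ((Px1, Py1), (Px2, Py2)) per equations (4.1)-(4.4)."""
    p1 = (xs + R * math.cos(theta + alpha - phi),   # (4.1)
          ys + R * math.sin(theta + alpha - phi))   # (4.2)
    p2 = (xs + R * math.cos(theta - alpha - phi),   # (4.3)
          ys + R * math.sin(theta - alpha - phi))   # (4.4)
    return p1, p2

# Both endpoints lie at distance R from the sensor, and the arc between
# them subtends an angle of 2*alpha at the sensor position, which is
# exactly the uncertainty region sketched in Figure 4.1.
```

A useful sanity check on the formulas is that the two endpoints are always at distance R from the sensor and separated by the full opening angle 2α, independent of θ and φ.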