Improving Robustness and Precision in Mobile Robot Localization by Using Laser Range Finding and Monocular Vision

Kai O. Arras, Nicola Tomatis

Autonomous System Lab, Swiss Federal Institute of Technology Lausanne (EPFL), CH-1015 Lausanne, kai-oliver.arras@epfl.ch, nicola.tomatis@epfl.ch

Abstract

This paper discusses mobile robot localization by means of geometric features from a laser range finder and a CCD camera. The features are line segments from the laser scanner and vertical edges from the camera. Emphasis is put on sensor models with a strong physical basis. For both sensors, uncertainties in the calibration and measurement process are adequately modeled and propagated through the feature extractors. This yields observations with their first-order covariance estimates which are passed to an extended Kalman filter for fusion and position estimation. Experiments on a real platform show that, opposed to the use of the laser range finder only, the multisensor setup allows the uncertainty to stay bounded in difficult localization situations like long corridors and contributes to an important reduction of uncertainty, particularly in the orientation. The experiments further demonstrate the applicability of such a multisensor localization system in real time on a fully autonomous robot.

1. Introduction

Localization in a known, unmodified environment belongs to the basic skills of a mobile robot. In many potential service applications of mobile systems, the vehicle operates in a structured or semi-structured surrounding. This property can be exploited by using these structures as frequently and reliably recognizable landmarks for navigation. Topological, metric or hybrid navigation schemes make use of different types of environment features on different levels of perceptual abstraction.

Raw data have the advantage of being as general as possible. But, with most sensors, they become credible only when processed in great amounts and are of low informative value when looking for concise scene descriptions. Navigation based on geometric features allows for compact and precise environment models. Maps of this type are furthermore directly extensible with feature information from different sensors and are thus a good choice for multisensor navigation. This approach relies, however, on the existence of features, which represents a limitation of environment types. This is viewed as a loss of robustness which can be diminished by simultaneously employing geometric features from different sensors with complementary properties. In this work we consider navigation by means of line segments extracted from 1D range data of a 360° laser scanner and vertical edges extracted from images of an embarked CCD camera.

Precise localization is important in service tasks where load stations might demand accurate docking maneuvers. Mail delivery is such an example [2]. When the task includes operation in crowded environments where a moving vehicle is supposed to suggest reliability and predictability, precise and thus repeatable navigation helps evoke this subjective impression.

The use of the Kalman filter for localization by means of line segments from range data is not new [9][11][2][7]. Vertical edges have been equally employed [8], and propositions for specific matching strategies are available in this context [12]. In [10], the same features were applied for approaching the relocation problem. The multisensor setup was used to validate observations of both sensors before accepting them for localization. In [13], a similar setup was used with a 3D laser sensor simultaneously delivering range and intensity images of the scene in front of the robot. Line segments and vertical edges were also employed in a recent work [14], where the localization precision of laser, monocular and trinocular vision has been separately examined and compared to ground truth measurements.

2. Sensor Modeling

It is attempted to derive uncertainty models of the sensors employed with a strong physical basis. Strictly speaking, it is necessary to trace each source of uncertainty in the measurement process and, with knowledge of the exact measurement principle, propagate it through the sensor electronics up to the raw measurement the operator will see. This allows for a consistent statistical treatment with noise models of high fidelity, which is of great importance for all subsequent stages like feature extraction and matching.

2.1 Laser Range Finder

In all our experiments we used the commercially available Acuity AccuRange4000LR. The Acuity sensor is a compromise between building a laser range finder on one's own and devices like the scanners of SICK (e.g. PLS100, LMS200). The latter two deliver both range and angle information and come with standard interfaces. Besides the protocol driver which is to be written, they can be used practically plug-and-play. The disadvantage is that this black-box character inhibits the abovementioned analysis of noise sources. The AccuRange 4000 provides range, amplitude and sensor temperature information, where amplitude is the signal strength of the reflected laser beam. They are available as analogue signals.

Relationships for range and angle variance are sought. The accuracy limits of the encoders are usually low with respect to the beam spot size. Angular variability is therefore neglected. For range accuracy there are several factors which influence the extent of noise:

• The amplitude of the returned signal, which is available as a measurement.
• Drift and fluctuations in the sensor circuitry. At the configured sampling frequency (1 kHz) this is predominant over thermal noise of the detection photodiode and resolution artifacts of the internal timers.
• Noise injected by the AD conversion electronics.

In [1] a phase-shift measurement principle has been examined, yielding a range variance to amplitude relationship of the form σ_ρ² = a/V_r + b, where σ_ρ² is the range variance and V_r the measured amplitude. After identification, an inverse, slightly nonlinear function was found. For identification in our case, an experiment was performed with a stationary target at about 1 meter distance. The returned signal strength was varied systematically with a Kodak gray scale control patch, where 10'000 readings were taken at each of the twenty gray levels. Opposed to the model in [1], we observe an abrupt rise of noise below a certain amplitude (fig. 1). This reduces our model for range variance to a constant value, independent of target distance and amplitude.

Figure 1: Range standard deviation (y-axis, in meters) against measured amplitude (x-axis, in Volts). 10'000 readings; measurements were done with a Kodak gray scale control patch.

Although this analysis led to such a simple result, it permits rejection of false or very uncertain readings by means of the amplitude measurements. This is very important since in many practical cases the sensor exhibits a strong dependency upon surface properties like color and roughness. Moreover, the Acuity sensor is often and reproducibly subject to outliers. When the laser beam hits no target at all, and at normally occurring range discontinuities, it returns an arbitrary range value, typically accompanied by a low signal strength.
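To make this sensor model concrete, a minimal sketch of how such readings could be filtered is given below. The function name, the constant standard deviation and the amplitude threshold are placeholders chosen for illustration; the paper reports a constant range variance and an amplitude cutoff but not these numbers.

```python
import numpy as np

# Assumed placeholder values; the identified constants are not published here.
SIGMA_RHO = 0.02           # assumed constant range standard deviation [m]
AMPLITUDE_THRESHOLD = 0.5  # assumed minimum trusted amplitude [V]

def filter_scan(ranges, amplitudes):
    """Discard readings with weak returns and attach the constant range
    variance of the sensor model to the remaining ones."""
    ranges = np.asarray(ranges, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    ok = amplitudes >= AMPLITUDE_THRESHOLD             # reject outlier-prone readings
    variances = np.full(int(ok.sum()), SIGMA_RHO ** 2) # sigma_rho^2, independent of range
    return ranges[ok], variances
```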
2.2 CCD Camera

The vision system consists of a Pulnix TM-9701 full-frame, gray-scale, EIA (640 x 480) camera with an effective opening angle of 54°, which sends a standard RS-170 signal to a Bt848-based frame grabber. No dedicated DSPs are used in our setup; all image processing is done directly by the CPU of the VME card.

A vision task which is intended to extract accurate geometric information from a scene requires a calibrated vision system. For this, a variety of camera parameters including its position and orientation (extrinsic parameters), image center, scale factor, lens focal length (intrinsic parameters) and distortion parameters are to be determined. To calculate a coherent set of intrinsic, extrinsic and distortion parameters, the calibration technique [15] has been combined with a priori knowledge of features from a test field.

The procedure is to extract and fit vertical and horizontal lines from the test field and determine the distortion in the x- and y-direction. By knowing the 3D position of these lines in space, the intrinsic, extrinsic and distortion parameters can be determined simultaneously.

Due to the particularity of our application, some simplifications of the final calibration model can be made. The camera is mounted horizontally on the robot and the center of projection of the imaging device is at (0,0) in robot coordinates with orientation 0°. Since only vertical edges are extracted, calibration is needed only in the horizontal direction (x-axis). With that, only a few parameters remain to be modeled: the focal length C, the image center (c_x, c_y), and the distortion parameters k_1, k_2. This yields equation (1) for parameter definition:

C · x_w / z_w = x' + x_c (k_1 r² + k_2 r⁴),   (1)

where x' refers to the distorted location, x_c = x' − c_x, y_c = y' − c_y, r² = x_c² + y_c², and x_w, z_w are measures of the test field in camera coordinates. The angle ϕ of a feature relative to the robot is finally calculated using equations (2) and (3):

x = x' + x_c (k_1 r² + k_2 r⁴)   (2)

ϕ = atan(x / C)   (3)

The uncertainties from the test field geometry and those caused by noise in the camera electronics and frame grabber AD conversion are propagated through the camera calibration procedure onto the level of camera parameters, yielding a 4x4 parameter covariance matrix.
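As an illustration of equations (2) and (3), the following sketch evaluates the bearing of an edge pixel from its distorted image position. The function and its parameter names are illustrative assumptions; the calibrated parameter values themselves are not given in the paper.

```python
import math

def edge_bearing(x_prime, y_prime, C, c_x, c_y, k1, k2):
    """Bearing of an edge pixel given its distorted image position (x', y').
    C, c_x, c_y, k1, k2 are the calibrated parameters of section 2.2."""
    x_c = x_prime - c_x
    y_c = y_prime - c_y
    r2 = x_c ** 2 + y_c ** 2                        # r^2 as defined below eq. (1)
    x = x_prime + x_c * (k1 * r2 + k2 * r2 ** 2)    # undistorted position, eq. (2)
    return math.atan(x / C)                         # bearing angle phi, eq. (3)
```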

3. Feature Extraction

Geometric environment features can describe structured environments at least partially in a compact and exact way. Horizontal or vertical line segments are of high interest due to the frequent occurrence of line-like structures in man-made environments and the simplicity of their extraction. More specifically, the problem of fitting a model to raw data in the least squares sense has closed-form solutions if the model is a line. This is even the case when geometrically meaningful errors are minimized, e.g. the perpendicular distances from the points to the line. Already slightly more complex models like circles do not have this property anymore.

Figure 2: CCD image of the corridor where the experiments have been carried out (step 5 of the trajectory). The image is compensated for radial distortion.

3.1 Laser Range Finder

The extraction method for line segments from 1D range data has been described in [3]. A short outline of the algorithm shall be given here.

The method delivers lines and segments with their first-order covariance estimates using polar coordinates. The line model is

ρ cos(ϕ − α) − r = 0   (4)

where (ρ, ϕ) is the raw measurement and (α, r) are the model parameters.

Segmentation is done by a relative model fidelity measure of adjacent groups of points. If these groups consist of model inliers, they constitute a compact cluster in model space, since the parameter vectors of their previously fitted models are similar. Segments are found by comparing this criterion against a threshold. A hierarchical agglomerative clustering algorithm with a Mahalanobis distance matrix is then applied. It merges adjacent segments until their distance in model space is greater than a threshold.

The constraint of neighborhood is removed as soon as the threshold has been exceeded, so as to fuse non-adjacent segments as well. The clustering terminates when the same threshold is reached again. This allows for particularly precise re-estimations of spatially extended structures, like for example a long wall where objects, doors or other discontinuities lead to multiple small segments belonging to this wall.
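The merging stage can be sketched roughly as below. This is a strongly simplified stand-in, not the method itself: it ignores the adjacency constraint and the model-fidelity segmentation, does not handle angle wrap-around of α, and fuses parameter estimates by information-weighted averaging instead of re-fitting the raw points; all names and the threshold are illustrative.

```python
import numpy as np

def mahalanobis2(p1, C1, p2, C2):
    """Squared Mahalanobis distance between two (alpha, r) estimates."""
    d = np.asarray(p1, float) - np.asarray(p2, float)
    return float(d @ np.linalg.inv(np.asarray(C1) + np.asarray(C2)) @ d)

def merge_segments(params, covs, threshold=5.99):
    """Greedy agglomerative merging of segments in (alpha, r) model space.
    params: list of 2-vectors, covs: list of 2x2 covariance matrices."""
    params = [np.asarray(p, float) for p in params]
    covs = [np.asarray(c, float) for c in covs]
    while len(params) > 1:
        # find the closest pair in model space
        d2, i, j = min(((mahalanobis2(params[i], covs[i], params[j], covs[j]), i, j)
                        for i in range(len(params))
                        for j in range(i + 1, len(params))),
                       key=lambda t: t[0])
        if d2 >= threshold:      # clustering terminates at the threshold
            break
        # information-weighted fusion (stand-in for re-fitting the raw points)
        I1, I2 = np.linalg.inv(covs[i]), np.linalg.inv(covs[j])
        C = np.linalg.inv(I1 + I2)
        params[i] = C @ (I1 @ params[i] + I2 @ params[j])
        covs[i] = C
        del params[j], covs[j]
    return params, covs
```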
For the line fitting, the nonlinear regression equations have been explicitly derived for polar coordinates, minimizing the weighted perpendicular error from the points to the line. Opposed to a Kalman filter, more general error scenarios of possibly correlated measurements can be taken into account.

An advantage of this method is its generality. This follows from the fact that all decisions on segmentation and merging are taken in the model space, independent of the spatial appearance of the model. Only the fitting step is inherently model-specific and has to provide the parameters' first and second moments.
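The core of such a fit can be sketched with the standard closed-form total-least-squares solution for the (α, r) line model. This sketch assumes independent, isotropic point errors and therefore does not reproduce the full derivation with correlated polar errors mentioned above; the function name and weighting scheme are illustrative.

```python
import numpy as np

def fit_line_polar(rho, phi, weights=None):
    """Weighted perpendicular (total least squares) fit of the line model
    rho*cos(phi - alpha) - r = 0 to polar points (rho_i, phi_i)."""
    rho = np.asarray(rho, float)
    phi = np.asarray(phi, float)
    w = np.ones_like(rho) if weights is None else np.asarray(weights, float)
    x, y = rho * np.cos(phi), rho * np.sin(phi)       # Cartesian point coordinates
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    Sxx = np.sum(w * (x - xm) ** 2)
    Syy = np.sum(w * (y - ym) ** 2)
    Sxy = np.sum(w * (x - xm) * (y - ym))
    alpha = 0.5 * np.arctan2(-2.0 * Sxy, Syy - Sxx)   # normal direction of the line
    r = xm * np.cos(alpha) + ym * np.sin(alpha)       # signed distance to the origin
    if r < 0.0:                                       # keep r non-negative
        r, alpha = -r, alpha + np.pi
    return alpha, r
```

For instance, points sampled along a wall parallel to the y-axis at x = 2 m return (α, r) close to (0, 2).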

In [7] the segmentation method first detects homogeneous regions, which are basically marked by unexpected range discontinuities. Identical to the detection of edges in [1], this is done with a Kalman filter. Afterwards, segments are found by a widely used recursive algorithm which applies a distance criterion from the points to their current homogeneous region. If the maximal distance is above a threshold, the segment is split into two separate regions and the process is restarted for them. The line parameter estimate is finally done with an information filter. As in [3], consecutive segments are merged and re-estimated if their moments exhibit sufficient similarity. In [16], range discontinuities are equally detected with a Kalman filter. They serve directly as segment end points.

3.2 CCD Camera

If the camera has a horizontal position on the vehicle, vertical structures have the advantage of being view invariant, opposed to horizontal ones. Furthermore, they require simple image processing, which is important for a real-time implementation under conditions of moderate computing power.

For several reasons, only the lower half of the image is taken: better illumination conditions (less direct light from windows and lamps), the availability of feature information corresponding to the one from the laser range finder, and the more frequent occurrence of reliable vertical edges.

The extraction steps can be summarized as follows (a rough code sketch is given at the end of this section):

• Vertical edge enhancement: Edge extraction using a specialized Sobel filter which approximates the image gradient in the x-direction.
• Non-maxima suppression with dynamic thresholding: The most relevant edge pixels (maximal gradient) are extracted and thinned using a standard method.
• Edge image calibration: The camera model is applied to the edge image using the previously determined parameters. The image is calibrated only horizontally.
• Line fitting: Columns with a predefined number of edge pixels are labelled as vertical edges. Line fitting reduces to a 1D problem. Sub-pixel precision is achieved by calculating the weighted mean of the non-discrete edge pixel positions after image calibration.

Figure 3: Extraction of vertical edges. (a): uncalibrated half image, (b): edge pixels after edge enhancement, non-maxima suppression and calibration, (c): resulting vertical edges.

This line extraction method is essentially a special case of the Hough transformation, where the model is one-dimensional since the direction is kept constant.

In each abovementioned step, the uncertainty of the calibration parameters and a constant uncertainty in the x-position of the input image are propagated. The result is a set of vertical edges described by their first two moments (ϕ, σ_ϕ²).
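The coarse sketch announced above follows here. It replaces the specialized Sobel filter and the dynamic-threshold non-maxima suppression with a plain horizontal Sobel response and a fixed threshold, and it assumes the caller supplies the calibrated (undistorted) x-coordinate of every pixel, e.g. obtained with equation (2); function and parameter names are illustrative, and no uncertainty propagation is shown.

```python
import numpy as np

def extract_vertical_edges(half_image, calibrated_x, min_pixels=40, grad_threshold=30.0):
    """Coarse sketch of the vertical edge extractor. `calibrated_x` maps every
    pixel to its undistorted x-coordinate; a fixed gradient threshold stands in
    for the dynamic-threshold non-maxima suppression of the paper."""
    img = np.asarray(half_image, dtype=float)
    gx = np.zeros_like(img)
    # horizontal Sobel response (image gradient in x-direction), interior pixels only
    gx[1:-1, 1:-1] = (img[:-2, 2:] + 2.0 * img[1:-1, 2:] + img[2:, 2:]
                      - img[:-2, :-2] - 2.0 * img[1:-1, :-2] - img[2:, :-2])
    edge_mask = np.abs(gx) > grad_threshold
    edges = []
    # columns with enough edge pixels are labelled as vertical edges
    for col in np.flatnonzero(edge_mask.sum(axis=0) >= min_pixels):
        rows = np.flatnonzero(edge_mask[:, col])
        w = np.abs(gx[rows, col])
        # sub-pixel position: weighted mean of the calibrated pixel positions
        edges.append(float(np.average(np.asarray(calibrated_x)[rows, col], weights=w)))
    return edges
```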

4. EKF Localization

Under the assumption of independent errors, the estimation framework of a Kalman filter can be extended with information from additional sensors in a straightforward way. Since this paper does not depart from the usual use of extended Kalman filtering and error propagation based on first-order Taylor series expansion, most mathematical details are omitted. Only aspects which are particular are presented. Please refer to [2][3][5] or [11] for more profound treatments.
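For reference, the omitted boilerplate of one localization cycle has the usual textbook structure sketched below. The motion model, its Jacobians and the feature-specific measurement predictions are left as user-supplied functions; none of the names are the ones used on Pygmalion, and the map-feature and sensor-frame terms of equation (5) below are not yet included here.

```python
import numpy as np

def ekf_localization_step(x, P, u, Q, f, F_x, F_u, matches, h):
    """One generic EKF localization cycle: odometry prediction followed by
    sequential updates with already matched observations.
    f(x, u): motion model; F_x, F_u: its Jacobians; h(x, feature) returns the
    predicted measurement and its Jacobian H; matches is a list of
    (z, R, feature) triples that survived data association."""
    Fx, Fu = F_x(x, u), F_u(x, u)          # Jacobians at the prior estimate
    x = f(x, u)                            # state prediction from odometry
    P = Fx @ P @ Fx.T + Fu @ Q @ Fu.T      # predicted pose covariance P(k+1|k)
    for z, R, feature in matches:          # sequential measurement updates
        z_hat, H = h(x, feature)
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (z - z_hat)
        P = (np.eye(len(x)) - K @ H) @ P   # posterior covariance P(k+1|k+1)
    return x, P
```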

The prediction step uses an a priori map of the environment which has been measured by hand. Each feature p_i in the map (i.e. lines and vertical edges) receives constant values for its positional uncertainty, held by the covariance matrix C_p^{W[i]}, where superscript W denotes the world frame. Together with the predicted robot pose uncertainty P(k+1|k), it is propagated into the according sensor frame for prediction. Additionally, the covariance matrix C_S^R accounts for uncertain robot-to-sensor frame transformations. The innovation covariance matrix of prediction ẑ_i and observation z_j at time index k+1 is then

S_ij(k+1) = ∇h_x^[i] · P(k+1|k) · (∇h_x^[i])^T + ∇h_p^[i] · C_p^{W[i]} · (∇h_p^[i])^T + ∇h_s^[i] · C_S^R · (∇h_s^[i])^T + R_j(k+1),   (5)

where R_j(k+1) is the first-order covariance estimate of observation z_j, and ∇h_{x,p,s}^[i] are the Jacobians of the nonlinear measurement model h(·), relating the system state x to the measurements, with respect to the uncertain vectors x, p, s, for prediction ẑ_i.

For matching, an iterative strategy has been implemented where each step includes (i) matching of the current best pairing, (ii) estimation, and (iii) re-prediction of the features not associated so far. As in [13], line segments are integrated first since they are mutually more distinct, leading less often to ambiguous matching situations.

Always the current best match is searched among pairs of predictions ẑ_i and observations z_j which satisfy the validation test

(z_j − ẑ_i)^T · S_ij^{-1} · (z_j − ẑ_i) ≤ χ²_{α,n}   (6)

where χ²_{α,n} is a threshold reflecting a probability level α chosen from a chi-square distribution with n degrees of freedom. Matching observed segments to predicted lines from the map is done in the (α, r) model space, therefore n = 2. For vertical edges n = 1, since the ϕ model space is one-dimensional.

The criterion of pairing quality is different for segments and vertical edges. 'Best' for line segments means smallest observational uncertainty among the candidates satisfying (6), not smallest Mahalanobis distance. This renders the matching more robust against small spurious line segments which have been extracted from groups of outliers occasionally occurring with our sensor (chapter 2).

For several reasons, vertical edges are much more difficult to associate [13][14]. For example, multiple, closely extracted vertical edges (see fig. 3) are often confronted with large validation regions around the predictions. Accordingly, 'best' for vertical edges means a unique match candidate with smallest Mahalanobis distance. When there is no unique pairing anymore, candidates with multiple observations in the validation region are accepted and chosen according to the smallest Mahalanobis distance of their closest observation.
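A sketch of how equations (5) and (6) translate into code is given below. The 95% probability level and the hard-coded chi-square thresholds are assumptions, and the Jacobians are expected to be evaluated by the caller; the function names are illustrative.

```python
import numpy as np

# chi-square thresholds at an assumed probability level alpha = 0.95
CHI2_GATE = {1: 3.841, 2: 5.991}   # n = 1 (vertical edges), n = 2 (line segments)

def innovation_covariance(H_x, P, H_p, C_p, H_s, C_s, R):
    """Equation (5): pose, map-feature, sensor-frame and observation
    uncertainties summed into the innovation covariance S_ij."""
    return H_x @ P @ H_x.T + H_p @ C_p @ H_p.T + H_s @ C_s @ H_s.T + R

def passes_gate(z, z_hat, S):
    """Equation (6): Mahalanobis validation test with n = len(z) degrees of freedom."""
    v = np.atleast_1d(np.asarray(z, float) - np.asarray(z_hat, float))
    d2 = float(v @ np.linalg.inv(np.atleast_2d(S)) @ v)
    return d2 <= CHI2_GATE[len(v)], d2
```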
5. Implementation and Experiments

5.1 The Robot Pygmalion

Our experimental platform is the robot Pygmalion, which has recently been built in our lab (fig. 4). Its design principles are oriented towards an application as a service or personal robot. Long-term autonomy, safety, extensibility, and friendly appearance were the main objectives of the design. With its dimensions of about 45x45x70 cm and its weight of 55 kg, it is of moderate size and danger compared to many robots in its performance class. The system runs the deadline-driven, hard real-time operating system XOberon [6].

Figure 4: Pygmalion, the robot which was used in the experiments. It is a VME-based system with a six-axis robot controller. The processor board currently carries a PowerPC at 100 MHz. Besides wheel encoders and bumpers, the sensory system includes the laser range finder and the CCD camera discussed in the second chapter.

5.2 Experimental Results

Long corridors notoriously recur in mobile robotics. They are, however, an excellent benchmark for the localization system of a service robot, since they occur often in realistic application scenarios and contain the difficulty of staying localized in the direction of travel. In order to demonstrate the performance of our multisensor localization system, we chose the corridor environment shown in fig. 2, 7 and 8 as benchmark. Two experiments have been conducted where the robot drove an overall distance of more than 1 km. In both experiments the vehicle followed the trajectory depicted in fig. 7 and 8, with a length of about 78 m.

The robot moved autonomously in a stop-and-go mode, driven by a position controller for non-holonomic vehicles [4]. A graph with nodes at characteristic locations was built which allows for global planning in the map. No local navigation strategy accounting for unmodeled objects was active. Since the angle of the laser range finder has to be calibrated at each system startup by measuring the four vertical profiles (see fig. 4), there is an angular uncertainty of the robot-to-sensor frame transformation. This is modeled by setting C_Sls^R = σ_θ², with σ_θ reflecting 1.5°, in equation (5). For the vision system, C_Sv^R is set to 0, since this uncertain transformation is already captured by the camera calibration and its uncertain parameters (chapter 2.2).

Figure 5: Predictions and observations in the robot frame before estimation at step 14 of the test trajectory. All line segments and three vertical edges have been matched (to predictions 145, 175, 180).

Figure 6: Update resulting from the matches in fig. 5. The Kalman filter corrects the robot about 5 cm backwards, establishing good correspondence of observation and prediction of the three matched vertical edges in fig. 5. Thus they allow for an update also in the corridor direction; the state covariance matrix P(k+1|k+1) stays bounded.

From the figures it can be seen that the corridor is actually well-conditioned in the sense that many structures exist which can be used for updates in the direction of travel. In the first experiment, all these structures have been removed from the map. Only in the two labs and at the upper corridor end was the map left unchanged. In fig. 7 and 8, modeled structures are in black, unmodeled ones in gray. The intent is to demonstrate the need for additional environment information if, due to a difficult working area, information of one type is not sufficient for accomplishing the navigation task.

Figure 7 shows one of three runs which were performed with the laser range finder only. As was to be expected during navigation in the corridor, uncertainty grew boundlessly in the direction of travel. Inaccurate and uncertain position estimates regularly led to false matches and incorrect pose updates in front of the lower lab.

In the second run of the first experiment (fig. 8, one of three runs shown), the vision system was able to improve precision and reduce uncertainty to an uncritical extent. This is illustrated in fig. 5 and fig. 6, where for one of the corridor situations (step 14), predictions and extraction results are displayed in the robot frame before estimation. The three matched vertical edges allow for a full update. The task could always be finished. The precision at the endpoint was typically in the sub-centimeter range, showing a slight temperature dependency of the Acuity sensor.
In the second experiment, the contribution of monocular vision to the reduction of estimation uncertainty has been examined. The same test path was travelled five times with the laser range finder only and five times with both sensors. The map was complete, i.e. no structures had been removed. The resulting averaged error bounds are shown in fig. 9. In fig. 10, the mean numbers of predictions, observations and matchings at the 47 steps of the test path are presented, and table 1 briefly quantifies the results.

The resulting error bounds in figure 9 and their overall means in table 1 show that, compared to the laser-only case, the multisensor system particularly contributes to the reduction of uncertainty in the vehicle orientation, even though the number of matched vertical edges is relatively low (on average 1.91 per step, cf. table 1).

The average execution times express overall durations and reflect the full CPU load including sensor acquisition, low-level controllers and communication. No special code optimization has been done.

Figure 7: Run with laser range finder only. All lines which would allow for an update in the corridor directions have been removed from the map. Modeled environment features are in black. At each of the 47 trajectory steps, the robot is depicted with an ellipse reflecting the 99.9% probability region of the posterior state covariance matrix P(k+1|k+1) for x and y. The axis dimension is meters. The result is typical. False matches due to great position errors and extensive uncertainty at the lower corridor end yield incorrect estimates. The robot is lost.

Figure 8: Run with laser and vision. Vertical edges have been added to the map in fig. 7. The pose uncertainty stays bounded during navigation in the corridor. Only moderate corrections at the upper and lower corridor end are necessary; the robot accomplishes its task and returns safely to the start point. Filled objects in light gray (doors and cupboards on the left side) are barely visible for the laser range finder. They have a shiny, green surface and turned out to be almost perfect specular reflectors for the infrared laser.


                    laser        laser and vision
2σ frontal          5.191 cm     4.258 cm
2σ lateral          1.341 cm     1.245 cm
2σ angular          1.625°       0.687°
n_l / n_v           3.53 / –     3.51 / 1.91
t_exe               377 ms       1596 ms

Table 1: Overall mean values of the error bounds, the number of matched line segments n_l and matched vertical edges n_v, and the average localization cycle time t_exe.

Figure 9: Averaged 2σ error bounds of frontal (a), lateral (b) and angular (c) a posteriori uncertainty during the 47 steps of the test trajectory. Five runs have been made for each mode. Solid lines: laser range finder only, dashed lines: laser and vision.

Figure 10: Averaged number of predictions, observations and matches for line segments (a) and vertical edges (b). See also table 1.

6. Discussion

Experience with earlier implementations [2] showed that on-the-fly localization during motion is not merely an option for good-looking demonstrations. It augments the robustness against many factors because it brings the update rate into a better ratio to the dynamics of the Kalman filter variables. Stop-and-go navigation with long steps (like at the lower corridor end in fig. 7) causes very uncertain predictions, making the matching problem difficult for any feature. On the other hand, on-the-fly localization with more updates per distance travelled produces smaller validation regions and thus diminishes the effect of a poorly calibrated odometry and an inadequate model for nonsystematic odometry errors.

Matching vertical edges is especially error-prone due to their frequent appearance in groups (fig. 3). With large validation regions caused by very uncertain pose predictions, false matches have occasionally been produced. But the effect of these incorrect assignments remains weak since these groups are typically very compact. A possible remedy is to exploit the prior knowledge from the map and to try to 'correlate' patterns of edges to the observation. Instead of independently matching single observations to single predictions, the observation pattern would be forced to match a similar constellation of predicted edges [10][12].

Furthermore, for the vision system, the large validation regions are contrasted by observations whose uncertainties are very small in comparison. From the experiments we conclude that it would be more relevant to model an uncertain robot-to-sensor frame transformation to account for vibrations and uneven floors, i.e., alike C_Sls^R, setting C_Sv^R ≠ 0 in equation (5).

7. Conclusions and Outlook

It could be demonstrated that the presented multisensor setup allows for more robust navigation in critical localization situations like long corridors. The permanent occurrence of features of a specific type becomes less important. The experiments showed furthermore that, compared to the laser-only case, already a moderate number of matched vertical edges contributes to an important reduction of estimation uncertainty, especially in the vehicle orientation.

By reducing the problem of camera calibration and edge extraction to an appropriate degree of specialization, the computational complexity could be kept low so as to have an applicable implementation on a fully autonomous system.

Further work will focus on more efficient implementations for on-the-fly localization during motion. Although ambiguous matching situations are expected to appear less often, more sophisticated strategies could be envisaged [5][12]. Finally, more complex fusion scenarios of range and vision features could be addressed as well. Detection of high-level features for concise scene descriptions [3] can be a starting point for topology-based navigation schemes.

References

[1] Adams M.D., Optical Range Analysis of Stable Target Pursuit in Mobile Robotics, PhD Thesis, University of Oxford, UK, Dec. 1992.
[2] Arras K.O., Vestli S.J., "Hybrid, High-Precision Localization for the Mail Distributing Mobile Robot System MoPS", Proc. of the 1998 IEEE Int. Conference on Robotics and Automation, p. 3134-3129, Leuven, Belgium, 1998.
[3] Arras K.O., Siegwart R.Y., "Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building", Proc. of SPIE, Mobile Robotics XII, Vol. 3210, p. 42-53, 1997.
[4] Astolfi A., "Exponential Stabilization of a Mobile Robot", Proc. of the Third European Control Conference, p. 3092-7, 1995.
[5] Bar-Shalom Y., Fortmann T.E., Tracking and Data Association, Mathematics in Science and Engineering, Vol. 179, Academic Press Inc., 1988.
[6] Brega R., "A Real-Time Operating System Designed for Predictability and Run-Time Safety", The Fourth Int. Conference on Motion and Vibration Control (MOVIC'98), Zurich, Switzerland, 1998.
[7] Castellanos J.A., Tardós J.D., "Laser-Based Segmentation and Localization for a Mobile Robot", Proc. of the Sixth Int. Symposium on Robotics and Manufacturing (ISRAM'96), Montpellier, France, 1996.
[8] Chenavier F., Crowley J.L., "Position Estimation for a Mobile Robot Using Vision and Odometry", Proc. of the 1992 IEEE Int. Conference on Robotics and Automation, p. 2588-93, Nice, France, 1992.
[9] Crowley J.L., "World Modeling and Position Estimation for a Mobile Robot Using Ultrasonic Ranging", Proc. of the 1989 IEEE Int. Conference on Robotics and Automation, p. 674-80, Scottsdale, AZ, 1989.
[10] Delahoche L., Mouaddib E.M., Pégard C., Vasseur P., "A Mobile Robot Localization Based on a Multisensor Cooperation Approach", 22nd IEEE Int. Conference on Industrial Electronics, Control, and Instrumentation (IECON'96), New York, USA, 1996.
[11] Leonard J.J., Durrant-Whyte H.F., Directed Sonar Sensing for Mobile Robot Navigation, Kluwer Academic Publishers, 1992.
[12] Muñoz A.J., Gonzales J., "Two-Dimensional Landmark-based Position Estimation from a Single Image", Proc. of the 1998 IEEE Int. Conference on Robotics and Automation, Leuven, Belgium, 1998, p. 3709-14.
[13] Neira J., Tardós J.D., Horn J., Schmidt G., "Fusing Range and Intensity Images for Mobile Robot Localization", IEEE Transactions on Robotics and Automation, 15(1):76-84, 1999.
[14] Pérez J.A., Castellanos J.A., Montiel J.M.M., Neira J., Tardós J.D., "Continuous Mobile Robot Localization: Vision vs. Laser", Proc. of the 1999 IEEE Int. Conference on Robotics and Automation, Detroit, USA, 1999.
[15] Prescott B., McLean G.F., "Line-Based Correction of Radial Lens Distortion", Graphical Models and Image Processing, 59(1), 1997, p. 39-47.
[16] Taylor R.M., Probert P.J., "Range Finding and Feature Extraction by Segmentation of Images for Mobile Robot Navigation", Proc. of the 1996 IEEE Int. Conference on Robotics and Automation, Minneapolis, USA, 1996.
