Mapping

Why Mapping?

 Learning maps is one of the fundamental problems in mobile robotics

 Maps allow robots to efficiently carry out their tasks and allow localization …

 Successful systems rely on maps for localization, path planning, activity planning, etc.

The General Problem of Mapping

What does the environment look like?

 The problem of robotic mapping is that of acquiring a spatial model of a robot’s environment

 Formally, mapping involves, given the sensor data,

d  {u1, z1,u2 , z2 ,,un , zn} to calculate the most likely map m*  arg max P(m | d) m Mapping Challenges

 A key challenge arises from the measurement errors, which are statistically dependent

 Errors accumulate over time

 They affect future measurements

Other Challenges of Mapping

 The high dimensionality of the entities that are being mapped

 Mapping can be high dimensional

 Correspondence problem

 Determine if sensor measurements taken at different points in time correspond to the same physical object in the world

 Environment changes over time

 Slower changes: trees in different seasons

 Faster changes: people walking by

 Robots must choose their way during mapping

 Robotic exploration problem

Factors that Influence Mapping

 Size:

 the larger the environment, the more difficult

 Perceptual ambiguity

 the more frequently different places look alike, the more difficult

 Cycles

 cycles make robots return via different paths, and the accumulated odometric error can be huge

 The following discussion assumes mapping with known poses

Mapping vs. Localization

 Learning maps is a “chicken-and-egg” problem

 First, there is a localization problem.

 Errors accumulate easily in odometry, making the robot less certain about where it is

 Methods exist to correct the error given a perfect map

 Second, there is a mapping problem.

 Constructing a map when the robot’s poses are known is relatively easy

 When robots need to do both:

 Simultaneous localization and mapping (SLAM)

 Our discussion here describes how to calculate a map given that we know the pose of the vehicle

Occupancy Grid Mapping

 Occupancy grid mapping addresses the problem of generating consistent maps from noisy and uncertain measurement data, assuming robot pose is known

 Basic idea: represent map as a field of random variables, arranged in an evenly spaced grid

 Each variable is binary or a probability, corresponding to the degree of occupancy of the location it covers

 Occupancy grid mapping algorithms implement approximate posterior estimation for those random variables

Basic Idea
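Concretely, the field of random variables that these steps update can be represented as a 2-D array of occupancy probabilities. A minimal sketch; the grid dimensions and cell size below are assumptions:

```python
import numpy as np

ROWS, COLS = 50, 50      # grid dimensions (assumption)
CELL_SIZE = 0.1          # meters per cell (assumption)

# Every cell starts at the 0.5 prior: equally likely occupied or empty.
grid = np.full((ROWS, COLS), 0.5)

def world_to_cell(x, y):
    """Map a world coordinate (meters) to a grid index."""
    return int(y / CELL_SIZE), int(x / CELL_SIZE)

i, j = world_to_cell(1.25, 3.0)   # -> cell (30, 12)
```

Sensor updates then read and rewrite individual cells of this array; nothing else about the map is stored.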

 Sense and create a local map

 Move a little

 Record change in position and orientation

 Sense and create a local map

 Fuse/tile together: integrate the local map into the global map

[Figure: local map integrated into the global map after move D]

Observations

 The “Move D” and “Integrate local map” steps are the hard part

 Integration requires accurate measurement of D (on the order of inches and <=5 degrees)
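The integration step amounts to transforming local observations into the global frame using the measured move D = (x, y, θ). A hedged sketch, not the lecture's implementation:

```python
import math

def to_global(local_points, pose):
    """Transform local-frame points into the global frame, given the robot's
    measured move D = (x, y, theta).  A sketch: real integration must also
    cope with the odometric error in D."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(x + lx * c - ly * s, y + lx * s + ly * c) for lx, ly in local_points]

# A point 1 m ahead of a robot at (1, 2) facing +90 degrees:
to_global([(1.0, 0.0)], (1.0, 2.0, math.pi / 2))   # -> approximately [(1.0, 3.0)]
```

Note that any error in θ rotates every transformed point, which is why the angular tolerance quoted above is so tight.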

[Figure: black is ground truth; purple is measured using shaft encoders]

Solutions?

 GPS?

 Ignore localization errors?

 Topological maps?

 Artificial landmarks?

 Match the raw sensor data to an a priori map?

 In the end, the approach remains the same:

 iconic (more popular; uses occupancy grids, sensor models, theory of evidence)

 feature-based (better suited for topological map building)

Key Questions to Address

 Sensor Error Models: How does the robot accurately interpret noisy range and encoder data?

 Localization: How does the robot take noisy sensor data and a partial map, and determine its own most likely position?

 Map Making: How does the robot build up a map from incremental sensor data?

 Exploration: How does the robot travel to ensure that all of the environment is explored and incorporated into the map?

Sensor Models

 Need sensor model to deal with uncertainty

 Methods for generating sensor models:

 Empirical (i.e., through testing)

 Analytical (i.e., through understanding of physical properties)

 Subjective (i.e., through experience)

Modeling a Common Sonar Sensor

The field of view is projected onto a regular grid, called the occupancy grid. R is the maximum range the sonar can detect; β is the field of view.

Sonar Tolerance

 Sonar range readings have resolution error

 Thus, a specific reading might actually indicate a range of possible values

 E.g., a reading of 0.87 meters actually means within (0.82, 0.92) meters

 Therefore, tolerance = 0.05 meters

 Tolerance gives the width of Region I

Tolerance in Sonar Model

How to Convert to Numerical Values?
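Before turning to the methods, here is a hedged code sketch of one such conversion, following a Moravec/Elfes-style region model. R, β, and all constants are assumptions, and β is treated here as the half-angle of the sonar cone:

```python
import math

R = 10.0                   # maximum detectable range in meters (assumption)
BETA = math.radians(15)    # half-angle of the sonar cone (assumption)
MAX_OCCUPIED = 0.98        # never claim full certainty that a cell is occupied
TOLERANCE = 0.05           # half-width of Region I, from the resolution error above

def region_probability(r, alpha, s):
    """Occupancy evidence for a cell at range r and angle alpha off the
    acoustic axis, given sonar reading s.  Returns (P(s|Occupied), P(s|Empty)),
    or None if the cell lies outside the regions covered by this reading."""
    if abs(alpha) > BETA or r > min(R, s + TOLERANCE):
        return None                              # outside the field of view
    # Confidence falls off with range and with angle from the axis.
    confidence = ((R - r) / R + (BETA - abs(alpha)) / BETA) / 2
    if abs(r - s) <= TOLERANCE:                  # Region I: probably occupied
        p_occ = confidence * MAX_OCCUPIED
        return p_occ, 1.0 - p_occ
    if r < s - TOLERANCE:                        # Region II: probably empty
        p_emp = confidence
        return 1.0 - p_emp, p_emp
    return None

region_probability(0.87, 0.0, 0.87)   # Region I: strong "occupied" evidence
region_probability(0.40, 0.0, 0.87)   # Region II: strong "empty" evidence
```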

 Need to translate model to specific numerical values for each occupancy grid cell

 Three methods:

 Bayesian

 Dempster-Shafer Theory

 HIMM (Histogrammic in Motion Mapping)

 We will still use:

 Bayesian

Probabilities for Occupancy Grids

 For each grid[i][j] covered by sensor scan:

 Compute P(Occupied|s) and P(Empty|s)

 For each grid element, grid[i][j], store a tuple of the two probabilities

Converting Sonar Reading to Probability: Region I

Converting Sonar Reading to Probability: Region II

Example: What is the value of a grid cell?

Conditional Probabilities

 Note that previous calculations gave: P(s|H), not P(H|s)

 Thus, use Bayes Rule:

P(Occupied | s) = P(s | Occupied) P(Occupied) / [P(s | Occupied) P(Occupied) + P(s | Empty) P(Empty)]

 P(s|Occupied) and P(s|Empty) are known from sensor model

 P(Occupied) and P(Empty) are unconditional, prior probabilities (which may or may not be known)

 If not known, it is okay to assume P(Occupied) = P(Empty) = 0.5

Returning to Example

 Let’s assume we’re on Mars, and we know that P(Occupied) = 0.75

 P(Empty|s=6) =

 P(Occupied|s=6) =

Updating with Bayes Rule

 How to fuse multiple readings?

 First time:

 Each element of grid initialized with a priori probability of being occupied or empty

 Subsequently:

 Use Bayes’ rule iteratively

 The probability at time tn-1 becomes the prior and is combined with the current observation at tn:

P(Occupied | sn) = P(sn | Occupied) P(Occupied | sn-1) / [P(sn | Occupied) P(Occupied | sn-1) + P(sn | Empty) P(Empty | sn-1)]

Example
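Since the worked example is left for the board, here is a hedged numerical sketch of the iterative update; the likelihood values P(s|Occupied) and P(s|Empty) below are assumptions, not values from the lecture:

```python
def bayes_update(prior_occ, p_s_occ, p_s_emp):
    """One iterative Bayes update of a cell's occupancy probability:
    the posterior from time t(n-1) serves as the prior at time t(n)."""
    num = p_s_occ * prior_occ
    return num / (num + p_s_emp * (1.0 - prior_occ))

p = 0.75                            # Mars-style prior, P(Occupied) = 0.75
p = bayes_update(p, 0.6, 0.3)       # first reading  -> p = 6/7  ~ 0.857
p = bayes_update(p, 0.6, 0.3)       # second reading -> p = 12/13 ~ 0.923
```

Each consistent reading pushes the cell further toward "occupied"; a contradictory reading (likelihoods swapped) would pull it back.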

 See work on board

Exploration

The act of moving through an unknown environment while building a map that can be used for subsequent navigation.

 Key question: where hasn’t the robot been?

 Central concern: given what you know about the world, where should you move to gain as much new information as possible?

 Possible approaches:

 Random walk

 Use proprioception to avoid areas that have been recently visited

 Exploit evidential information in the occupancy grid

 Two basic styles of exploration:

 Frontier-based

 Generalized Voronoi graph

Frontier-Based Exploration

 The central idea:

 To gain the most new information about the world, move to the boundary between open space and uncharted territory

 Frontiers are regions on the boundary between open and unexplored space

 By moving to successive frontiers, the robot can constantly increase its knowledge of the world

Brian Yamauchi’s Method

 Use evidence grids as the spatial representation, similar to the sensor modeling above

 After an evidence grid has been constructed, each cell is classified by comparing its occupancy probability to the prior:

 open: occupancy prob. < prior prob.

 unknown: occupancy prob. = prior prob.

 occupied: occupancy prob. > prior prob.
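This classification, and the boundary finding described next, can be sketched as follows; the tolerance around the prior and the 4-connectivity are assumptions:

```python
import numpy as np

PRIOR, EPS = 0.5, 1e-3   # prior probability; tolerance for "equals prior" (assumption)

def classify(grid):
    """Label each cell 'open', 'unknown', or 'occupied' against the prior."""
    labels = np.full(grid.shape, "unknown", dtype=object)
    labels[grid < PRIOR - EPS] = "open"
    labels[grid > PRIOR + EPS] = "occupied"
    return labels

def frontier_cells(labels):
    """Open cells adjacent (4-connected) to at least one unknown cell."""
    rows, cols = labels.shape
    frontier = []
    for i in range(rows):
        for j in range(cols):
            if labels[i, j] != "open":
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and labels[ni, nj] == "unknown":
                    frontier.append((i, j))
                    break
    return frontier

labels = classify(np.array([[0.1, 0.5],
                            [0.1, 0.9]]))
frontier_cells(labels)   # -> [(0, 0)]: the only open cell touching unknown space
```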

 A process analogous to edge detection and region extraction in computer vision is used to find the boundaries between open and unknown spaces.

Example

Navigating to Frontiers

 Once frontiers are detected, the robot attempts to navigate to the nearest accessible, unvisited frontier.

 The path planner uses a depth-first search on the grid

Example

Example, Continued

Another Possible Navigation

 Once frontiers are found, we can control the robot to head toward the centroid of the frontier

Calculating Centroid

 The centroid is the average (x, y) location of the frontier cells

Motion Control Based on Frontier Exploration
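The centroid step is a simple average over the frontier's cells; a minimal sketch:

```python
def centroid(cells):
    """Average (x, y) of a frontier region's cells."""
    xs, ys = zip(*cells)
    return sum(xs) / len(cells), sum(ys) / len(cells)

centroid([(2, 5), (3, 5), (4, 5)])   # -> (3.0, 5.0)
```

Note the centroid of a concave frontier may itself lie in unexplored or occupied space, so the navigation behaviors below still have to handle obstacles on the way.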

 Robot calculates centroid

 Robot navigates using:

 Move-to-goal and avoid-obstacle behaviors

 Or, plan path and reactively execute the path

 Or, continuously replan and execute the path

 Once the robot reaches a frontier, the map is updated and new frontiers are (perhaps) discovered

 Continue until no new frontiers remain

Movies of Frontier-Based Exploration

 from Dieter Fox

https://www.cs.washington.edu/ai/Mobile_Robotics/projects/mapping/allen-animations/allen-explore.avi

GVG Methods for Exploration

 The robot builds a generalized Voronoi graph (GVG) as it moves through the world

 As the robot moves, it attempts to maintain a path that is equidistant from all objects it senses (called a “GVG edge”)

 When the robot comes to a gateway, it randomly chooses a branch to follow, remembering the other edge(s)

 When one path is completely explored, it backtracks to a previously detected branch point

 Exploration and backtracking continue until the entire area is covered

Example of GVG Exploration

Another Example

Keep Moving, Ignoring Difficult Places

Reaches Deadend at 9, Backtracks

Go Back, Catching Missed Areas

Summary

 Map making: converts local sensor observations into a global map, independent of robot position

 Occupancy grid: a 2-D array, the most common data structure for mapping

 Use probabilistic sensor models and Bayesian methods to update occupancy grid

 Raw sensory data (especially odometry) is imperfect

 Two types of localization: iconic, feature-based

 Two types of exploration strategies: frontier-based and GVG