12.2 Localization

Localization is a fundamental problem in robotics. Using its sensors, a mobile robot must determine its location within some map of the environment. There are both passive and active versions of the localization problem:

Passive localization: The robot applies actions, and its position is inferred by computing the nondeterministic or probabilistic I-state. For example, if the Kalman filter is used, then probabilistic I-states are captured by mean and covariance. The mean serves as an estimate of the robot position, and the covariance indicates the amount of uncertainty.

Active localization: A plan must be designed that attempts to reduce the localization uncertainty as much as possible. How should the robot move so that it can figure out its location?

Both versions of localization will be considered in this section.

In many applications, localization is an incremental problem. The initial configuration may be known, and the task is to maintain good estimates as motions occur. A more extreme version is the kidnapped-robot problem, in which a robot initially has no knowledge of its configuration. Either case can be modeled by the appropriate initial conditions. The kidnapped-robot problem is more difficult and is assumed by default in this section.

12.2.1 Discrete Localization

Many interesting lessons about realistic localization problems can be learned by first studying a discrete version of localization. Problems that may or may not be solvable can be embedded in more complicated problems, which may even involve continuous state spaces. The discrete case is often easier to understand, which motivates its presentation here. To simplify the presentation, only the nondeterministic I-space Indet will be considered; see Section 12.2.3 for the probabilistic case.

Suppose that a robot moves on a 2D grid, which was introduced in Example 2.1. It has a map of the grid but does not know its initial location or orientation within the grid. An example is shown in Figure 12.2a.

To formulate the problem, it is helpful to include in the state both the position of the robot and its orientation. Suppose that the robot may be oriented in one of four directions, which are labeled N, E, W, and S, for “north,” “east,” “west,” and “south,” respectively.

Figure 12.2: (a) This map is given to the robot for localization purposes. (b) The four possible actions each take one step, if possible, and reorient the robot as shown.

Figure 12.3: (a) If a direction is blocked because of an obstacle, then the orientation changes, but the position remains fixed. In this example, the R action is applied. (b) Another map is given to the robot for localization purposes. In this case, the robot cannot localize itself exactly.

Although the robot is treated as a point, its orientation is important because it does not have a compass. If it chooses to move in a particular direction, such as straight ahead, it does not necessarily know which direction it will be heading with respect to the four directions.

Thus, a state, x ∈ X, is written as x = (p, d), in which p is a position and d is one of the four directions. A set of states at the same position will be denoted with special superscripts that point in the possible directions. For example, 3↑→ indicates the set of states for which p = 3 and the direction may be north (N) or east (E), because the superscript points in the north and east directions.

The robot is given four actions,

U = {F, B, R, L}, (12.11)

which represent “forward,” “backward,” “right motion,” and “left motion,” respectively. These motions occur with respect to the current orientation of the robot, which may be unknown. See Figure 12.2b. For the F action, the robot moves forward one grid element and maintains its orientation. For the B action, the robot changes its orientation by 180 degrees and then moves forward one grid element. For the R action, the robot turns right by 90 degrees and then moves forward one grid element. The L action behaves similarly. If it is not possible to move because of an obstacle, it is assumed that the robot changes its orientation (in the case of B, R, or L) but does not change its position. This is depicted in Figure 12.3a.

The robot has one simple sensor that can only detect whether it was able to move in the direction that was attempted. The sensor space is Y = {0, 1}, and the sensor mapping is h : X × X → Y. This yields y = h(xk−1, xk) = 1 if xk−1 and xk place the robot at different positions, and h(xk−1, xk) = 0 otherwise. Thus, the sensor indicates whether the robot has moved after the application of an action.
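To make these definitions concrete, here is a minimal Python sketch of the transition and sensor mappings. The coordinate representation, the function name step, and the clockwise turn convention are illustrative assumptions, not part of the original model:

```python
# A minimal sketch of the grid model above. A state is (p, d): p is a cell
# coordinate and d is one of 'NESW'. Assumed convention: R is a quarter turn
# clockwise, L counterclockwise, B a half turn, as in Figure 12.2b.

DIRS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}
TURN = {'F': 0, 'R': 1, 'B': 2, 'L': 3}  # quarter turns clockwise in 'NESW'
ORDER = 'NESW'

def step(cells, state, u):
    """Apply action u; return (next_state, y), where y = h(x_{k-1}, x_k)
    is 1 exactly when the position changed."""
    p, d = state
    d2 = ORDER[(ORDER.index(d) + TURN[u]) % 4]  # reorient first
    q = (p[0] + DIRS[d2][0], p[1] + DIRS[d2][1])
    if q in cells:
        return (q, d2), 1   # moved one grid element
    return (p, d2), 0       # blocked: orientation changes, position does not

# Assumed layout for the map of Figure 12.2a (cells 1..5):
cells = {(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)}
assert step(cells, ((0, 0), 'E'), 'F') == (((1, 0), 'E'), 1)
```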

Nondeterministic uncertainty will be used, and the initial I-state η0 is always assumed to be X (this can easily be extended to allow starting with any nonempty subset of X). A history I-state at stage k in its general form appears as

ηk = (X, u˜k−1, y2, . . . , yk). (12.12)

One special adjustment was made in comparison to (11.14). There is no observation y1 because the sensor mapping requires a previous state to report a value. Thus, the observation history starts with y2. An example history I-state for stage k = 5 is

η5 = (X, R, R, F, L, 1, 0, 1, 1), (12.13)

in which η0 = X, u˜4 = (R, R, F, L), and (y2, y3, y4, y5) = (1, 0, 1, 1).

The passive localization problem starts with a given map, such as the one shown in Figure 12.2a, and a history I-state, ηk, and computes the nondeterministic I-state Xk(ηk) ⊆ X. The active localization problem is to compute some k and sequence of actions, (u1, . . . , uk−1), such that the nondeterministic I-state is as small as possible. In the best case, Xk(ηk) might become a singleton set, which means that the robot knows its position and orientation on the map. However, due to symmetries, which will be presented shortly in an example, this might not be possible.

Solving the passive localization problem The passive problem requires only that the nondeterministic I-states are computed correctly as the robot moves. A couple of examples of this are given.

Example 12.1 (An Easy Localization Problem) Consider the example given in Figure 12.2a. Suppose that the robot is initially placed in position 1 facing east.

The initial condition is η0 = X, which can be represented as

η0 = 1↑→↓← ∪ 2↑→↓← ∪ 3↑→↓← ∪ 4↑→↓← ∪ 5↑→↓←, (12.14)

the collection of all 20 states in X. Suppose that the action sequence (F, L, F, L) is applied. In each case, a motion occurs, which results in the observation history (y2, y3, y4, y5) = (1, 1, 1, 1).

After the first action, u1 = F, the history I-state is η2 = (X, F, 1). The nondeterministic I-state is

X2(η2) = 1← ∪ 2→↓ ∪ 3↑↓ ∪ 4↑→ ∪ 5←, (12.15)

which means that any position is still possible, but the successful forward motion removed some orientations from consideration. For example, 1↑ is not possible because the previous state would have to be directly south of 1, which is an obstacle.

After the second action, u2 = L,

X3(η3) = 3↑ ∪ 5←, (12.16)

which yields only two possible current states. This can be easily seen in Figure 12.2a by observing that there are only two states from which a forward motion can be followed by a left motion. The initial state must have been either 1→ or 3↑.

After u3 = F is applied, the only possibility remaining is that x3 must have been 3↑. This yields

X4(η4) = 4↑, (12.17)

which exactly localizes the robot: It is at position 4 facing north. After the final action, u4 = L, is applied, it is clear that

X5(η5) = 5←, (12.18)

which means that in the final state, x5, the robot is at position 5 facing west. Once the exact robot state is known, no new uncertainty will accumulate because the effects of all actions are predictable. Although it was not shown, it is also possible to prune the possible states by the execution of actions that do not produce motions. ■
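The I-state computations in this example can be reproduced mechanically. The sketch below filters the nondeterministic I-state forward through an action-observation history; the cell layout for Figure 12.2a, the turn convention, and the name ndet_filter are illustrative assumptions:

```python
# Nondeterministic I-state filtering for the grid localization example.
# Self-contained; the map layout assumed for Figure 12.2a is cells 1..5 below.

DIRS = {'N': (0, 1), 'E': (1, 0), 'S': (0, -1), 'W': (-1, 0)}
TURN = {'F': 0, 'R': 1, 'B': 2, 'L': 3}   # quarter turns clockwise in 'NESW'
ORDER = 'NESW'
CELLS = {1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (1, 2), 5: (0, 2)}
GRID = set(CELLS.values())

def step(state, u):
    """Reorient according to u, then move one cell if unblocked; y = moved."""
    p, d = state
    d2 = ORDER[(ORDER.index(d) + TURN[u]) % 4]
    q = (p[0] + DIRS[d2][0], p[1] + DIRS[d2][1])
    return ((q, d2), 1) if q in GRID else ((p, d2), 0)

def ndet_filter(actions, observations):
    """Compute X_k(eta_k): keep every successor of a state in the previous
    I-state whose simulated sensor value matches the observation."""
    X = {(p, d) for p in GRID for d in ORDER}   # eta_0 = X, all 20 states
    history = [X]
    for u, y in zip(actions, observations):
        X = {s2 for s in X for (s2, yy) in [step(s, u)] if yy == y}
        history.append(X)
    return history

# Reproduces Example 12.1: the I-state sizes shrink 20 -> 8 -> 2 -> 1 -> 1,
# ending at position 5 facing west.
hist = ndet_filter(['F', 'L', 'F', 'L'], [1, 1, 1, 1])
print([len(X) for X in hist])            # [20, 8, 2, 1, 1]
assert hist[-1] == {(CELLS[5], 'W')}
```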

Example 12.2 (A Problem that Involves Symmetries) Now extend the map from Figure 12.2a so that it forms a loop as shown in Figure 12.3b. In this case, it is impossible to determine the precise location of the robot. For simplicity, consider only actions that produce motion (convince yourself that allowing the other actions cannot fix the problem).

Suppose that the robot is initially in position 1 facing east. If the action sequence (F, L, F, L, . . .) is executed, the robot will travel around in cycles. The problem is that it is also possible to apply the same action sequence from position 3 facing north. Every action successfully moves the robot, which means that, to the robot, the information appears identical. The other two cases in which this sequence can be applied to travel in cycles are 1) from 5 heading west, and 2) from 7 heading south. A similar situation occurs from 2 facing east, if the sequence (L, F, L, F, . . .) is applied. Can you find the other three starting states from which this sequence moves the robot at every stage? Similar symmetries exist when traveling in clockwise circles and making right turns instead of left turns.

The state space for this problem contains 32 states, obtained from four directions at each position. After executing some motions, the nondeterministic I-state can be reduced to a symmetry class of no more than four possible states. How can this be proved? One way is to use the algorithm that is described next.

Solving the active localization problem From the previous two examples, it should be clear how to compute nondeterministic I-states and therefore solve the passive localization problem on a grid. Now consider constructing a plan that solves the active localization problem. Imagine using a computer to help in this task. There are two general approaches:

Precomputed Plan: In this approach, a planning algorithm running on a computer accepts a map of the environment and computes an information-feedback plan that immediately indicates which action to take based on all possible I-states that could result (a derived I-space could be used). During execution, the actions are immediately determined from the stored, precomputed plan.

Lazy Plan: In this case the map is still given, but the appropriate action is computed just as it is needed during each stage of execution. The computer runs on board the robot and must compute which action to take based on the current I-state.

The issues are similar to those of the sampling-based roadmap in Section 5.6. If faster execution is desired, then the precomputed plan may be preferable. If it would consume too much time or space, then a lazy plan may be preferable.

Using either approach, it will be helpful to recall the formulation of Section 12.1.1, which considers Indet as a new state space, X⃗, in which state feedback can be used. Even though there are no nature sensing actions, the observations are not predictable because the state is generally unknown. This means that θ⃗ is unknown, and future new states, x⃗k+1, are unpredictable once x⃗k and u⃗k are given. A plan must therefore use feedback, which means that it needs information learned during execution to solve the problem. The state transition function f⃗ on the new state space was illustrated for the localization problem in Examples 12.1 and 12.2.

The initial state x⃗I is the set of all original states. If there are no symmetries, the goal set X⃗G is the set of all singleton subsets of X; otherwise, it is the set of all smallest possible I-states that are reachable (this does not need to be constructed in advance). If desired, cost terms can be defined to produce an optimal planning problem. For example, l⃗(x⃗, u⃗) = 2 if a motion occurs, or l⃗(x⃗, u⃗) = 1 otherwise.

Consider the approach of precomputing a plan. The methods of Section 12.1.2 can generally be applied to compute a plan, π : X⃗ → U, that solves the localization problem from any initial nondeterministic I-state. The approach may be space-intensive because an action must be stored for every state in X⃗. If there are n grid tiles, then |X| = 4n and |X⃗| = 2^{4n} − 1. If the initial I-state is always X, then it may be possible to restrict π to a much smaller portion of X⃗. From any x⃗ ∈ X⃗G, a search based on backprojections can be conducted. If the initial I-state is added to S, then the partial plan will reliably localize the robot. Parts of X⃗ for which π is not specified will never be reached and can therefore be ignored.

Now consider the lazy approach. An algorithm running on the robot can perform a kind of search by executing actions and seeing which I-states result. This leads to a directed graph over X⃗ that is incrementally revealed through the robot’s motions. The graph is directed because the information regarding the state generally improves. For example, once the robot knows its state (or symmetry class of states), it cannot return to an I-state that represents greater uncertainty. In many cases, the robot may get lucky during execution and localize itself using much less memory than would be required for a precomputed plan.

The robot needs to recognize that the same positions have been reached in different ways, to ensure a systematic search. Even though the robot does not necessarily know its position on the map, it can usually deduce whether it has been to some location previously. One way to achieve this is to assign (i, j) coordinates to the positions already visited. It starts with (0, 0) assigned to the initial position. If F is applied, then suppose that position (1, 0) is reached, assuming the robot moves to a new grid cell. If R is applied, then (0, 1) is reached if the robot is not blocked. The point (2, 1) may be reachable by (F, F, R) or (R, F, F). One way to interpret this is that a local coordinate frame in R2 is attached to the robot’s initial position. Let this be referred to as the odometric coordinates. The orientation between this coordinate frame and the map is not known in the beginning, but a transformation between the two can be computed if the robot is able to localize itself exactly.
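A small bookkeeping class along these lines might look as follows. The class name Odometry and the conventions are illustrative assumptions; the frame is oriented so that F from the start reaches (1, 0) and R reaches (0, 1), as in the text:

```python
class Odometry:
    """Tracks position in the robot's odometric frame and records visited
    positions so that revisits can be detected during the lazy search."""
    HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # initial heading is +x
    TURN = {'F': 0, 'R': 1, 'B': 2, 'L': 3}        # assumed turn convention

    def __init__(self):
        self.xy = (0, 0)
        self.h = 0                    # index into HEADINGS
        self.visited = {(0, 0)}

    def update(self, u, moved):
        """Fold in one executed action and the motion-sensor reading;
        return True if the resulting position was visited before."""
        self.h = (self.h + self.TURN[u]) % 4
        if moved:
            dx, dy = self.HEADINGS[self.h]
            self.xy = (self.xy[0] + dx, self.xy[1] + dy)
        revisit = self.xy in self.visited
        self.visited.add(self.xy)
        return revisit

# (F, F, R) with all motions succeeding ends at (2, 1), matching the text.
odo = Odometry()
for u in ['F', 'F', 'R']:
    odo.update(u, moved=True)
assert odo.xy == (2, 1)
```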

A variety of search algorithms can now be defined by starting in the initial state x⃗I and trying actions until a goal condition is satisfied (e.g., no smaller nondeterministic I-states are reachable). There is, however, a key difference between this search and the search conducted by the algorithms in Section 2.2.1. Previously, the search could continue from any state that has been explored previously without any additional cost. In the current setting, there are two issues:

Reroute paths: Most search algorithms enable new states to be expanded from any previously considered states at any time. For the lazy approach, the robot must move to a state and apply an action to determine whether a new state can be reached. The robot is capable of returning to any previously considered state by using its odometric coordinates. This induces a cost that does not exist in the previous search problem. Rather than being able to jump from place to place in a search tree, the search is instead a long, continuous path that is traversed by the robot. Let the jump be referred to as a reroute path. This will become important in Section 12.3.2.

Information improvement: The robot may not even be able to return to a previous nondeterministic I-state. For example, if the robot follows (F, F, R) and then tries to return to the same state using (B, L, F), it will indeed know that it returned to the same state, but the state remains unknown. It might be the case, however, that after executing (F, F, R), it was able to narrow down the possibilities for its current state. Upon returning using (B, L, F), the nondeterministic I-state will be different.

The implication of these issues is that the search algorithm should take into account the cost of moving the robot and that the search graph is directed. The second issue is really not a problem because even though the I-state may be different when returning to the same position, it will always be at least as good as the previous one. This means that if η1 and η2 are the original and later history I-states from the same position, it will always be true that X(η2) ⊆ X(η1). Information always improves in this version of the localization problem. Thus, while trying to return to a previous I-state, the robot will find an improved I-state.

Other information models The model given so far in this section is only one of many interesting alternatives. Suppose, for example, that the robot carries a compass that always indicates its direction. In this case, there is no need to keep track of the direction as part of the state. The robot can use the compass to specify actions directly with respect to global directions. Suppose that U = {N, E, W, S}, which denote the directions, “north,” “east,” “west,” and “south,” respectively. Examples 12.1 and 12.2 now become trivial. The first one is solved by applying the action sequence (E, N). The symmetry problems vanish for Example 12.2, which can also be solved by the sequence (E, N) because (1, 2, 3) is the only sequence of positions that is consistent with the actions and compass readings.

Other interesting models can be made by giving the robot less information. In the models so far, the robot can easily infer its current position relative to its starting position. Even though it is not necessarily known where this starting position lies on the map, it can always be expressed in relative coordinates. This is because the robot relies on different forms of odometry. For example, if the direction is E and the robot executes the sequence (L, L, L), it is known that the direction is S because three lefts make a right. Suppose that instead of a grid, the robot must explore a graph. It moves discretely from vertex to vertex by applying an action that traverses an edge. Let this be a planar graph that is embedded in R2 and is drawn with straight line segments. The number of available actions can vary at each vertex. We can generally define U = S1, with the behavior that the robot only rotates without translating whenever a particular direction is blocked (this is a generalization of the grid case). A sensor can be defined that indicates which actions will lead to translations from the current vertex. In this case, the model nicely generalizes the original model for the grid. If the robot knows the angles between the edges that arrive at a vertex, then it can use angular odometry to make a local coordinate system in R2 that keeps track of its relative positions.

The situation can be made very confusing for the robot. Suppose that instead of U = S1, the action set at each vertex indicates which edges can be traversed. The robot can traverse an edge by applying an action, but it does not know anything about the direction relative to other edges. In this case, angular odometry can no longer be used. It could not, for example, tell the difference between traversing a rhombus, a trapezoid, or a rectangle. If angular odometry is possible, then some symmetries can be avoided by noting the angles between the edges at each vertex. However, the new model does not allow this. All vertices that have the same degree would appear identical.

12.2.2 Combinatorial Methods for Continuous Localization

Now consider localization for the case in which X is a continuous region in R2. Assume that X is bounded by a simple polygon (a closed polygonal chain; there are no interior holes). A map of X in R2 is given to the robot. The robot velocity x˙ is directly commanded by the action u, yielding a motion model x˙ = u, for which U is a unit ball centered at the origin. This enables a plan to be specified as a continuous path in X, as was done throughout Part II. Therefore, instead of specifying velocities using u, a path is directly specified, which is simpler. For models of the form x˙ = u and the more general form x˙ = f(x, u), see Section 8.4 and Chapter 13, respectively.

The robot uses two different sensors:

        1. Compass: A perfect compass solves all orientation problems that arose in Section 12.2.1.
        2. Visibility: The visibility sensor, which was shown in Figure 11.15, provides perfect distance measurements in all directions.

There are no nature sensing actions for either sensor.

As in Section 12.2.1, localization involves computing nondeterministic I-states. In the current setting there is no need to represent the orientation as part of the state space because of the perfect compass and known orientation of the polygon in R2. Therefore, the nondeterministic I-states are just subsets of X. Imagine computing the nondeterministic I-state for the example shown in Figure 11.15, but without any history. This is H(y) ⊆ X, which was defined in (11.6). Only the current sensor reading is given. This requires computing states from which the distance measurements shown in Figure 11.15b could be obtained. This means that a translation must be found that perfectly overlays the edges shown in Figure 11.15b on top of the polygon edges that are shown in Figure 11.15a. Let ∂X denote the boundary of X.

Figure 12.4: An example of the visibility cell decomposition. Inside of each cell, the visibility polygon is composed of the same edges of ∂X.

Figure 12.5: Rays are extended outward, whenever possible, from each pair of mutually visible vertices. The case on the right is a bitangent, as shown in Figure 6.10; however, here the edges extend outward instead of inward as required for the visibility graph.

The distance measurements from the visibility sensor must correspond exactly to a subset of ∂X. For the example, these could only be obtained from one state, which is shown in Figure 11.15a. Therefore, the robot does not even have to move to localize itself for this example.

As in Section 8.4.3, let the visibility polygon V (x) refer to the set of all points visible from x, which is shown in Figure 11.15a. To perform the required computations efficiently, the polygon must be processed to determine the different ways in which the visibility polygon could appear from various points in X. This involves carefully determining which edges of ∂X could appear on ∂V (x). The state space X can be decomposed into a finite number of cells, and over each region the invariant is that the same set of edges is used to describe V (x) [136, 415]. An example is shown in Figure 12.4. Two different kinds of rays must be extended to make the decomposition. Figure 12.5 shows the case in which a pair of vertices is mutually visible and an outward ray extension is possible. The other case is shown in Figure 12.6, in which rays are extended outward at every reflex vertex (a vertex whose interior angle is greater than π, as considered in Section 6.2.4).

Figure 12.6: A reflex vertex: If the interior angle at a vertex is greater than π, then two outward rays are extended from the incident edges.

Figure 12.7: Consider this example, in which the initial state is not known [298].

The resulting decomposition generates O(n^2 r) cells in the worst case, in which n is the number of edges that form ∂X and r is the number of reflex vertices (note that r < n). Once the measurements are obtained from the sensor, the cell or cells in which the edges or distance measurements match perfectly need to be computed to determine H(y) (the set of points in X from which the current distance measurements could be obtained). An algorithm based on the idea of a visibility skeleton is given in [415], which performs these computations in time O(m + lg n + s) and uses O(n^5) space, in which n is the number of vertices in ∂X, m is the number of vertices in V (x), and s = |H(y)|, the size of the nondeterministic I-state. This method assumes that the environment is preprocessed to perform rapid queries during execution; without preprocessing, H(y) can be computed in time O(mn).

What happens if there are multiple states that match the distance data from the visibility sensor? Since the method in [415] only computes H(y) ⊆ X, some robot motions must be planned to further reduce the uncertainty. This provides yet another interesting illustration of the power of I-spaces. Even though the state space is continuous, an I-state in this case is used to disambiguate the state from a finite collection of possibilities.
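For intuition, H(y) can also be approximated by brute force: simulate the ideal visibility scan at sampled positions and keep those that match the measurement. This is only a discretized stand-in for the exact cell-decomposition method of [415]; the polygon representation (a list of vertex tuples), the sampling grid, the tolerance, and the names ray_hit, scan, and match_candidates are all assumptions:

```python
import math

def ray_hit(p, q, a, b):
    """Distance t >= 0 along the ray p + t*q to segment ab, or None."""
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = q[0] * ey - q[1] * ex
    if abs(denom) < 1e-12:
        return None                       # ray parallel to the edge
    wx, wy = a[0] - p[0], a[1] - p[1]
    t = (wx * ey - wy * ex) / denom       # position along the ray
    s = (wx * q[1] - wy * q[0]) / denom   # position along the segment
    return t if t >= 0 and 0 <= s <= 1 else None

def scan(poly, p, n):
    """Ideal depth measurements at n compass-aligned bearings from p."""
    edges = list(zip(poly, poly[1:] + poly[:1]))
    depths = []
    for i in range(n):
        th = 2 * math.pi * i / n
        q = (math.cos(th), math.sin(th))
        hits = [t for a, b in edges if (t := ray_hit(p, q, a, b)) is not None]
        depths.append(min(hits))          # nearest boundary point
    return depths

def match_candidates(poly, measured, grid=0.25, tol=1e-9):
    """Approximate H(y): sampled positions whose ideal scan equals measured."""
    xs = [v[0] for v in poly]; ys = [v[1] for v in poly]
    found = []
    x = min(xs) + grid
    while x < max(xs):
        y = min(ys) + grid
        while y < max(ys):
            try:
                sim = scan(poly, (x, y), len(measured))
                if max(abs(u - v) for u, v in zip(sim, measured)) < tol:
                    found.append((x, y))
            except ValueError:            # a ray escaped: point not inside
                pass
            y += grid
        x += grid
    return found
```

With the measurement generated at one of the sampled positions, the returned list contains that position plus any others from which the environment looks identical, mirroring the four candidates of Figure 12.8.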

Figure 12.8: The four possible initial positions for the robot in Figure 12.7 based on the visibility sensor.

Figure 12.9: These motions completely disambiguate the state.

The following example is taken from [298].

Example 12.3 (Visibility-Based Localization) Consider the environment shown in Figure 12.7, with the initial state as shown. Based on the visibility sensor observation, the initial state could be any one of the four possibilities shown in Figure 12.8. Thus, H(y1) contains four states, in which y1 is the initial sensor observation. Suppose that the motion sequence shown in Figure 12.9 is executed. After the first step, the position of the robot is narrowed down to two possibilities, as shown in Figure 12.10. This occurs because the corridor is longer for the remaining two possibilities. After the second motion, the state is completely determined because the short side corridor is detected. ■

The localization problem can be solved in general by using the visibility cell decomposition, as shown in Figure 12.4. Initially, X1(η1) = H(y1) is computed from the initial visibility polygon, which can be efficiently performed using the visibility skeleton [415]. Suppose that X1(η1) contains k states. In this case, k translated copies of the map are overlaid so that all of the possible states in X1(η1) coincide.

Figure 12.10: There are now only two possible states.

A motion is then executed that reduces the amount of uncertainty. This could be performed, for example, by crossing a cell boundary in the overlay that corresponds to one or more, but not all, of the k copies. This enables some possible states to be eliminated from the next I-state, X2(η2). The overlay is used once again to obtain another disambiguating motion, which results in X3(η3). This process continues until the state is known. In [298], a motion plan is given that enables the robot to localize itself by traveling no more than k times as far as the optimal distance that would need to be traveled to verify the given state. This particular localization problem might not seem too difficult after seeing Example 12.3, but it turns out that the problem of localizing using optimal motions is NP-hard if any simple polygon is allowed. This was proved in [298] by showing that every abstract decision tree can be realized as a localization problem, and the abstract decision tree problem is already known to be NP-hard.

Many interesting variations of the localization problem in continuous spaces can be constructed by changing the sensing model. For example, suppose that the robot can only measure distances up to a limit; all points beyond the limit cannot be seen. This corresponds to many realistic sensing systems, such as infrared sensors, sonars, and range scanners on mobile robots. This may substantially enlarge H(y). Suppose that the robot can take distance measurements only in a limited number of directions, as shown in Figure 11.14b. Another interesting variant can be made by removing the compass. This introduces the orientation confusion effects observed in Section 12.2.1. One can even consider interesting localization problems that have little or no sensing [751, 752], which yields I-spaces that are similar to that for the tray tilting example in Figure 11.28.

12.2.3 Probabilistic Methods for Localization

The localization problems considered so far have involved only nondeterministic uncertainty. Furthermore, it was assumed that nature does not interfere with the state transition equation or the sensor mapping. If nature is involved in the sensor mapping, then future I-states are not predictable. For the active localization problem, this implies that a localization plan must use information feedback. In other words, the actions must be conditioned on I-states so that the appropriate decisions are taken after new observations are made. The passive localization problem involves computing probabilistic I-states from the sensing and action histories. The formulation and solution of localization problems that involve nature and nondeterministic uncertainty will be left to the reader. Only the probabilistic case will be covered here.

Discrete problems First consider adding probabilities to the discrete grid prob- lem of Section 12.2.1. A state is once again expressed as x = (p, d). The initial condition is a probability distribution, P(x1), over X. One reasonable choice is to make P(x1) a uniform probability distribution, which makes each direction and position equally likely. The robot is once again given four actions, but now assume that nature interferes with state transitions. For example, if uk = F, then perhaps with high probability the robot moves forward, but with low probability it may move right, left, or possibly not move at all, even if it is not blocked.

The sensor mapping from Section 12.2.1 indicated whether the robot moved. In the current setting, nature can interfere with this measurement. With low probability, it may incorrectly indicate that the robot moved, when in fact it remained stationary. Conversely, it may also indicate that the robot remained still, when in fact it moved. Since the sensor depends on the previous two states, the mapping is expressed as

yk = h(xk, xk−1, ψk). (12.19)

With a given probability model, P(ψk), this can be expressed as P(yk|xk, xk−1).
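For instance, a simple noisy version of the grid motion sensor could be parameterized by a single error rate; the function name sens and the value 0.1 below are arbitrary illustrations, not from the original text:

```python
def sens(x_prev, x, y, eps=0.1):
    """P(y | x_{k-1}, x_k): the motion sensor reports correctly with
    probability 1 - eps (eps is an assumed, illustrative error rate)."""
    moved = 1 if x_prev[0] != x[0] else 0   # states stored as (p, d) pairs
    return 1.0 - eps if y == moved else eps
```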

To solve the passive localization problem, the expressions from Section 11.2.3 for computing the derived I-states are applied. If the sensor mapping used only the current state, then (11.36), (11.38), and (11.39) would apply without modification. However, since h depends on both xk and xk−1, some modifications are needed. Recall that the observations start with y2 for this sensor. Therefore, P(x1|η1) = P(x1), instead of applying (11.36).

After each stage, P(xk+1|ηk+1) is computed from P(xk|ηk) by first applying (11.38) to take into account the action uk. Equation (11.39) takes into account the sensor observation, yk+1, but P(yk+1|xk+1, ηk, uk) is not given because the sensor mapping also depends on xk−1. It reduces using marginalization as

P(yk|ηk−1, uk−1, xk) = Σ_{xk−1 ∈ X} P(yk|ηk−1, uk−1, xk−1, xk) P(xk−1|ηk−1, uk−1, xk). (12.20)

The first factor in the sum can be reduced to the sensor model,

P(yk|ηk−1, uk−1, xk−1, xk) = P(yk|xk−1, xk), (12.21)

because the observations depend only on xk−1, xk, and the nature sensing action, ψk. The second term in (12.20) can be computed using Bayes’ rule as

P(xk−1|ηk−1, uk−1, xk) = P(xk|ηk−1, uk−1, xk−1) P(xk−1|ηk−1, uk−1) / Σ_{x′k−1 ∈ X} P(xk|ηk−1, uk−1, x′k−1) P(x′k−1|ηk−1, uk−1), (12.22)

in which P(xk|ηk−1, uk−1, xk−1) simplifies to P(xk|uk−1, xk−1). This is directly obtained from the state transition probability, which is expressed as P(xk+1|xk, uk) by shifting the stage index forward. The term P(xk−1|ηk−1, uk−1) is given by (11.38). This completes the computation of the probabilistic I-states, which solves the passive localization problem.
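In code, the whole stage update collapses into a single sum over the previous state, combining the prediction (11.38), the marginalization (12.20) and (12.21), and the Bayes normalization (11.39). The sketch below is a generic discrete implementation; the dictionary representation and the names localize_update, trans, and sens are assumptions:

```python
def localize_update(prior, u, y, trans, sens, states):
    """One stage of probabilistic localization with a pairwise sensor model.

    prior  : dict mapping x_{k-1} to P(x_{k-1} | eta_{k-1})
    trans  : trans(x_prev, u, x) = P(x | u, x_prev)
    sens   : sens(x_prev, x, y)  = P(y | x_{k-1}, x_k), as in (12.19)
    Returns P(x_k | eta_k), proportional to
    sum over x_{k-1} of P(y | x_{k-1}, x) P(x | u, x_{k-1}) P(x_{k-1} | eta_{k-1}).
    """
    post = {x: sum(sens(xp, x, y) * trans(xp, u, x) * prior[xp] for xp in states)
            for x in states}
    z = sum(post.values())                      # normalizing constant
    return {x: p / z for x, p in post.items()} if z > 0 else post
```

Starting from the uniform P(x1) and folding in (u1, y2), (u2, y3), and so on yields the sequence of probabilistic I-states P(xk|ηk) described above.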

Solving the active localization problem is substantially harder because a search occurs on Iprob. The same choices exist as for the discrete localization problem. Computing an information-feedback plan over the whole I-space Iprob is theoretically possible but impractical for most environments. The search-based idea that was applied to incrementally grow a directed graph in Section 12.2.1 could also be applied here. The success of the method depends on clever search heuristics developed for this particular problem.

Continuous problems Localization in a continuous space using probabilistic models has received substantial attention in recent years [258, 447, 622, 825, 887, 962]. It is often difficult to localize mobile robots because of noisy sensor data, modeling errors, and high demands for robust operation over long time periods. Probabilistic modeling and the computation of probabilistic I-states have been quite successful in many experimental systems, both for indoor and outdoor mobile robots. Figure 12.11 shows localization successfully being solved using sonars only. The vast majority of work in this context involves passive localization because the robot is often completing some other task, such as reaching a particular part of the environment. Therefore, the focus is mainly on computing the probabilistic I-states, rather than performing a difficult search on Iprob.

Probabilistic localization in continuous spaces most often involves the definition of the probability densities p(xk+1|xk, uk) and p(yk|xk) (in the case of a state sensor mapping). If the stages represent equally spaced times, then these densities usually remain fixed for every stage. The state space is usually X = SE(2) to account for translation and rotation, but it may be X = R2 for translation only. The density p(xk+1|xk, uk) accounts for the unpredictability that arises when controlling a mobile robot over some fixed time interval. A method for estimating this distribution for nonholonomic robots by solving stochastic differential equations appears in [1004].

The density p(yk|xk) indicates the relative likelihood of various measurements when given the state. Most often this models distance measurements that are obtained from a laser range scanner, an array of sonars, or even infrared sensors. Suppose that a robot moves around in a 2D environment and takes depth measurements at various orientations.


Figure 12.11: Four frames from an animation that performs probabilistic localization of an indoor mobile robot using sonars [350].

In the robot body frame, there are n angles at which a depth measurement is taken. Ideally, the measurements should look like those in Figure 11.15b; however, in practice, the data contain substantial noise. The observation y ∈ Y is an n-dimensional vector of noisy depth measurements. One common way to define p(y|x) is to assume that the error in each distance measurement follows a Gaussian density. The mean value of the measurement can easily be calculated as the true distance value once x is given, and the variance should be determined from experimental evaluation of the sensor. If it is assumed that the vector of measurements is modeled as a set of independent, identically distributed random variables, a simple product of Gaussian densities is obtained for p(y|x).
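As a sketch, the product of Gaussians can be evaluated directly; in practice it is computed with log-likelihoods to avoid numerical underflow when n is large. The function name scan_log_likelihood and its parameters are illustrative assumptions:

```python
import math

def scan_log_likelihood(y, predicted, sigma):
    """log p(y|x) for independent Gaussian depth errors: y holds the n noisy
    measurements and predicted holds the true map distances given x."""
    c = -0.5 * math.log(2 * math.pi * sigma ** 2)   # per-beam constant
    return sum(c - 0.5 * ((yi - mi) / sigma) ** 2
               for yi, mi in zip(y, predicted))
```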

Once the models have been formulated, the computation of probabilistic I-states directly follows from Sections 11.2.3 and 11.4.1. The initial condition is a probability density function, p(x1), over X. The marginalization and Bayesian update rules are then applied to construct a sequence of density functions of the form p(xk|ηk) for every stage, k.

In some limited applications, the models used to express p(xk+1|xk, uk) and p(yk|xk) may be linear and Gaussian. In this case, the Kalman filter of Section 11.6.1 can be easily applied. In most cases, however, the densities will not have this form. Moment-based approximations, as discussed in Section 11.4.3, can be used to approximate the densities. If second-order moments are used, then the so-called extended Kalman filter is obtained, in which the Kalman filter update rules can be applied to a linear-Gaussian approximation to the original problem. In recent years, one of the most widely accepted approaches in experimental mobile robotics is to use sampling-based techniques to directly compute and estimate the probabilistic I-states. The particle-filtering approach, described in general in Section 11.6.2, appears to provide good experimental performance when applied to localization. The application of particle filtering in this context is often referred to as Monte Carlo localization; see the references at the end of this chapter.
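A minimal Monte Carlo localization step might look as follows. Here sample_motion draws from p(xk+1|xk, uk) and likelihood evaluates p(y|x) (for example, by exponentiating a scan log-likelihood like the one above); both are user-supplied models, the name mcl_step is an assumption, and the resampling scheme shown is the simplest possible one:

```python
import random

def mcl_step(particles, u, y, sample_motion, likelihood):
    """One particle-filter update: predict, weight, resample."""
    predicted = [sample_motion(x, u) for x in particles]   # motion model
    weights = [likelihood(y, x) for x in predicted]        # sensor model
    total = sum(weights)
    if total == 0:
        return predicted          # degenerate case: no particle explains y
    weights = [w / total for w in weights]
    return random.choices(predicted, weights=weights, k=len(particles))
```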
