Sensor-Based Motion Planning

``Sensor-based planning'' incorporates sensor information, reflecting the current state of the environment, into a robot's planning process, in contrast to classical planning, where complete knowledge of the world's geometry is assumed prior to planning. Sensor-based planning is important because: (1) the robot often has no a priori knowledge of the world; (2) the robot may have only coarse knowledge of the world because of limited memory; (3) the world model is bound to contain inaccuracies, which sensor-based planning strategies can overcome; and (4) the world is subject to unexpected occurrences and rapidly changing situations.

A large number of classical path-planning methods already exist, but many of these techniques are not amenable to a sensor-based interpretation. It is not possible to simply add a step that acquires sensory information and then construct a plan from the acquired model using a classical technique, since the robot needs a path-planning strategy in the first place in order to acquire the world model.

The first principal problem in sensor-based motion planning is the find-goal problem: the robot seeks to use its on-board sensors to find a collision-free path from its current configuration to a goal configuration. In the first variation of the find-goal problem, which we term the absolute find-goal problem, the absolute coordinates of the goal configuration are assumed to be known. A second variation on this problem is described below.
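The sense/plan/move loop at the heart of the absolute find-goal problem can be sketched in a toy grid world. This is only an illustrative stand-in (all names, the map, and the one-cell sensing radius are assumptions, not from our work): the robot knows the goal's coordinates, optimistically plans over its partial map, takes one step, and re-senses.

```python
from collections import deque

# Toy absolute find-goal: the goal's coordinates are known, but obstacles
# are discovered only through a simulated short-range sensor.

WORLD = [            # '#' = obstacle, '.' = free; unknown to the robot a priori
    "........",
    "..####..",
    "..#..#..",
    "..#..#..",
    "........",
]
ROWS, COLS = len(WORLD), len(WORLD[0])

def sense(pos, known):
    """Reveal the 8 cells surrounding pos (a one-cell sensing radius)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < ROWS and 0 <= c < COLS:
                known[r][c] = WORLD[r][c]

def plan(start, goal, known):
    """Breadth-first search on the partial map; unknown cells are
    optimistically assumed free."""
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS
                    and nxt not in prev and known[nxt[0]][nxt[1]] != '#'):
                prev[nxt] = cur
                queue.append(nxt)
    return None

def find_goal(start, goal):
    """Sense, replan on the partial map, take one step; repeat."""
    known = [['?'] * COLS for _ in range(ROWS)]   # robot's partial world model
    pos, trace = start, [start]
    while pos != goal:
        sense(pos, known)
        path = plan(pos, goal, known)
        if path is None:
            return None          # goal unreachable given what has been seen
        pos = path[1]            # cells adjacent to pos are always sensed first
        trace.append(pos)
    return trace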

The second principal problem in sensor-based motion planning is sensor-based exploration, in which a robot is not directed to seek a particular goal in an unknown environment, but is instead directed to explore the a priori unknown environment in such a way as to see all potentially important features. The exploration problem can be motivated by the following application. Imagine that a robot is to explore the interior of a building that has collapsed in an earthquake, in order to search for human survivors. It is clearly impossible to have knowledge of the building's interior geometry prior to the exploration. The robot must therefore be able to see, with its on-board sensors, all points in the building's interior while following its exploration path; in this way, no potential survivors will be missed by the exploring robot. Algorithms that solve the find-goal problem are not useful for exploration because the location of the ``goal'' (a human survivor in our example) is not known.

A second variation on the find-goal problem, which is motivated by this scenario and is an intermediary between the find-goal and exploration problems, is the recognizable find-goal problem. In this case, the absolute coordinates of the goal are not known, but it is assumed that the robot can recognize the goal if it comes within line of sight. The aim of the recognizable find-goal problem is to explore an unknown environment so as to find a recognizable goal. If the goal is reached before the entire environment is searched, the search procedure is terminated.
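The early-termination behavior of the recognizable find-goal problem can be sketched as follows. The exploration strategy used here (head for the nearest unexplored cell) is a simple hypothetical stand-in, not the GVG-based method described below; the map, the goal marker 'G', and all function names are assumptions for illustration.

```python
from collections import deque

# Toy recognizable find-goal: the goal's coordinates are unknown, but the
# robot recognizes the goal cell ('G') the moment its sensor sees it.

GRID = [             # '#' = obstacle, '.' = free, 'G' = recognizable goal
    "........",
    ".#####..",
    ".#...#..",
    ".#.G.#..",
    ".#...#..",
    "........",
]
NROWS, NCOLS = len(GRID), len(GRID[0])

def reveal(pos, known):
    """Reveal the 8 cells surrounding pos."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < NROWS and 0 <= c < NCOLS:
                known[r][c] = GRID[r][c]

def path_to_nearest_unknown(pos, known):
    """BFS to the closest unexplored cell through cells not known to be
    obstacles."""
    prev, queue = {pos: None}, deque([pos])
    while queue:
        cur = queue.popleft()
        if known[cur[0]][cur[1]] == '?':
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < NROWS and 0 <= nxt[1] < NCOLS
                    and nxt not in prev and known[nxt[0]][nxt[1]] != '#'):
                prev[nxt] = cur
                queue.append(nxt)
    return None          # everything reachable has been explored

def recognizable_find_goal(start):
    known = [['?'] * NCOLS for _ in range(NROWS)]
    pos, trace = start, [start]
    while True:
        reveal(pos, known)
        goal = next(((r, c) for r in range(NROWS) for c in range(NCOLS)
                     if known[r][c] == 'G'), None)
        if goal is not None:
            return goal, trace       # goal recognized: search terminates early
        path = path_to_nearest_unknown(pos, known)
        if path is None:
            return None, trace       # environment fully explored, no goal
        pos = path[1]                # adjacent cells are sensed, hence safe
        trace.append(pos)
```

Note that the same loop, run until `path_to_nearest_unknown` returns `None`, is a complete exploration; the goal test is the only difference between the two problems in this sketch.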

In prior work we developed a scheme to solve one type of exploration problem; as a byproduct, the algorithm also solves both variations of the find-goal problem. The algorithm is based on the Generalized Voronoi Graph (GVG), which is a roadmap. We have developed an incremental approach to constructing the GVG of an unknown environment strictly from sensor data, assuming only that the robot has a dead-reckoning system and on-board sensors that measure distance and direction to nearby obstacles.
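The defining condition behind the GVG is equidistance: it is the set of points whose two closest obstacles are the same distance away. The brute-force scan below only illustrates that condition on a toy example of two point obstacles (whose GVG is their perpendicular bisector); the incremental construction described above instead traces this set edge by edge from range readings, and all names and values here are illustrative assumptions.

```python
import math

# Numerical sketch of the GVG's defining condition: points equidistant
# from the two closest obstacles.

OBSTACLES = [(2.0, 2.0), (2.0, 8.0)]          # two point obstacles (toy example)

def two_closest(p):
    """Distances from p to its two nearest obstacles."""
    d = sorted(math.dist(p, o) for o in OBSTACLES)
    return d[0], d[1]

def gvg_points(xmax=10.0, ymax=10.0, step=0.5, tol=1e-9):
    """Scan a grid and keep the points satisfying the condition d1 == d2."""
    pts = []
    x = 0.0
    while x <= xmax:
        y = 0.0
        while y <= ymax:
            d1, d2 = two_closest((x, y))
            if abs(d1 - d2) < tol:
                pts.append((x, y))
            y += step
        x += step
    return pts
```

For the two obstacles above, every retained point lies on the horizontal line y = 5, the perpendicular bisector, as expected.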

In collaboration with JPL, we have been developing algorithms for the autonomous navigation of future Mars Rover vehicles. These algorithms (the "WedgeBug" and "RoverBug" algorithms) are sensor-based analogs of the classical tangent-graph algorithm, but assume no a priori knowledge of the robot's environment and also take the limited field of view of the rover's cameras into account.

Our current research activities center on how to incorporate uncertainty into sensor-based planning.


Selected Papers