OpenRAVE Documentation

visibilityplanning Module

Planning with a wrist camera to look at the target object before grasping.

../../_images/visibilityplanning.jpg

Running the Example:

openrave.py --example visibilityplanning

Description

This example shows a vision-centric manipulation framework that can be used to perform more reliable reach-and-grasp tasks. The biggest problem with many autonomous manipulation frameworks is that they perform the full grasp planning step as soon as the object is first detected by a camera. Due to uncertainty in sensors and perception algorithms, the error in the object's estimated pose is usually large when the camera views it from far away. This is why OpenRAVE provides a module, implementing [1], to plan with cameras attached to the gripper.

By combining grasp planning and visual feedback algorithms, and by constantly considering sensor visibility, the framework can recover from sensor calibration errors and unexpected changes in the environment. The planning phase generates a plan that moves the robot manipulator as close to the target object as safely possible while keeping the target easily detectable by the on-board sensors. The execution phase is responsible for continuously choosing and validating a grasp for the target while updating the environment with more accurate information. It is vital to perform grasp selection during visual-feedback execution because more precise information about the target's location and its surroundings is available at that point. Here is a small chart outlining the differences between common manipulation frameworks:

../../_images/visibilityplanning_framework.png
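The overall control flow can be summarized in a short sketch. This is only a schematic, assuming hypothetical helpers detect_target, sample_visibility_goal, plan_to, and select_grasp that stand in for the detector and planners described below; none of them are part of the openravepy API:

def reach_and_grasp(robot, camera):
    # planning phase: move as close as safely possible while keeping the
    # target visible to the wrist camera
    coarse_pose = detect_target(camera)                # noisy estimate from far away
    goal = sample_visibility_goal(robot, coarse_pose)  # configuration that sees the target
    plan_to(robot, goal)
    # execution phase: re-detect from up close, then choose and validate a
    # grasp using the refined estimate
    refined_pose = detect_target(camera)
    grasp = select_grasp(robot, refined_pose)
    plan_to(robot, grasp)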

First Stage: Sampling Camera Locations

Handling Occlusions

Occlusions are handled by shooting rays from the camera to sample points on the target and computing what they hit first. If any ray hits another body before reaching the target, the target is considered occluded.

../../_images/visibilityplanning_raycollision.png
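A minimal version of this check can be written with openravepy's ray collision interface. Here env is an OpenRAVE environment, campos the camera position, and targetpoints a set of sample points on the target's surface (numpy arrays); how the sample points are chosen is left open. Note that OpenRAVE encodes a ray's length in the magnitude of its direction vector:

from openravepy import Ray, CollisionReport

def is_occluded(env, target, campos, targetpoints):
    report = CollisionReport()
    for p in targetpoints:
        ray = Ray(campos, p - campos)  # ray from the camera to the sample point
        if env.CheckCollision(ray, report):
            # if the first hit is not on the target, another body blocks the view
            if report.plink is not None and report.plink.GetParent() != target:
                return True
    return False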

Object Visibility Extents

The camera poses from which the object detector can reliably detect the target are sampled offline and recorded.

[figure: visibility detection extents]

The sampling procedure has three steps, sketched below:

1. Gather data
2. Create a probability distribution
3. Resample
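Here is a sketch of these steps, assuming detections is an N x 6 array of relative camera poses (translation plus axis-angle rotation) in which the detector succeeded during offline testing. Fitting a Gaussian kernel density estimate glosses over the rotation parameterization, but it shows the gather/fit/resample structure:

from scipy import stats

def resample_camera_poses(detections, num=100):
    kde = stats.gaussian_kde(detections.T)  # fit a distribution to the gathered data
    return kde.resample(num).T              # draw new candidate camera poses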

Adding Robot Kinematics

The final sampling algorithm is:

../../_images/visibilityplanning_samplingalgorithm.jpg
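A goal sampler combining the pieces above might look as follows. This reuses the is_occluded and resample_camera_poses sketches from earlier and assumes hypothetical helpers pose_to_matrix (sampled 6-vector to a 4x4 transform) and surface_points; for brevity it ignores the fixed camera-to-gripper offset and assumes an IK solver has been loaded for the manipulator (e.g. via openravepy.databases.inversekinematics):

import numpy
from openravepy import IkFilterOptions

def sample_visibility_goal(robot, target, detections, Ttarget):
    manip = robot.GetActiveManipulator()
    for sample in resample_camera_poses(detections, num=200):
        # world camera pose; pose_to_matrix is a hypothetical helper
        Tcamera = numpy.dot(Ttarget, pose_to_matrix(sample))
        if is_occluded(robot.GetEnv(), target, Tcamera[0:3, 3], surface_points(target)):
            continue
        # a collision-free IK solution becomes a joint-space goal for the RRT
        solution = manip.FindIKSolution(Tcamera, IkFilterOptions.CheckEnvCollisions)
        if solution is not None:
            return solution
    return None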

The final planner is simply an RRT that uses this goal sampler. The next figure shows the two-stage planning proposed in the paper.

../../_images/visibilityplanning_twostage.jpg

For comparison, the one-stage planning is also shown. Interestingly, the visibility configuration acts like a key-hole configuration that lets the two-stage planner finish both paths very quickly; the total planning times are comparable to those of the first stage alone.
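The two stages can be driven from Python through the BaseManipulation interface, which wraps the RRT planners. This sketch reuses sample_visibility_goal from above; Tgrasp stands for the grasp transform selected after re-detecting the target:

from openravepy import interfaces

def two_stage_plan(robot, target, detections, Ttarget, Tgrasp):
    basemanip = interfaces.BaseManipulation(robot)
    # stage 1: RRT to a configuration from which the camera sees the target
    goal = sample_visibility_goal(robot, target, detections, Ttarget)
    basemanip.MoveManipulator(goal=goal)
    robot.WaitForController(0)
    # ...re-detect the target here and refine Tgrasp...
    # stage 2: RRT from the visibility "key hole" to the grasp approach
    basemanip.MoveToHandPosition(matrices=[Tgrasp])
    robot.WaitForController(0)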

[1] Rosen Diankov, Takeo Kanade, James Kuffner. Integrating Grasp Planning and Visual Feedback for Reliable Manipulation. IEEE-RAS International Conference on Humanoid Robots, December 2009.

Command-line

Usage: openrave.py [options]

Visibility Planning Module.

Options:
  -h, --help            show this help message and exit
  --scene=SCENE         openrave scene to load
  --nocameraview        If set, will not open any camera views

  OpenRAVE Environment Options:
  --loadplugin=_LOADPLUGINS
                        Load the specified plugin; can be specified multiple
                        times.
    --collision=_COLLISION
                        Default collision checker to use
    --physics=_PHYSICS  physics engine to use (default=none)
    --viewer=_VIEWER    viewer to use (default=qtcoin)
    --server=_SERVER    server to use (default=None).
    --serverport=_SERVERPORT
                        port to load server on (default=4765).
    --module=_MODULES   module to load, can specify multiple modules. Two
                        arguments are required: "name" "args".
    -l _LEVEL, --level=_LEVEL, --log_level=_LEVEL
                        Debug level, one of
                        (fatal,error,warn,info,debug,verbose,verifyplans)
    --testmode          if set, will run the program in a finite amount of
                        time and spend computation time validating results.
                        Used for testing

Main Python Code

def main(env,options):
    "Main example code."
    scene = PA10GraspExample(env)
    scene.loadscene(scenefilename=options.scene,sensorname='wristcam',usecameraview=options.usecameraview)
    scene.start()

Class Definitions

class openravepy.examples.visibilityplanning.PA10GraspExample(env)[source]

Bases: openravepy.examples.visibilityplanning.VisibilityGrasping

Specific class to set up a PA10 scene for visibility grasping

loadscene(randomize=True, **kwargs)[source]
class openravepy.examples.visibilityplanning.VisibilityGrasping(env)[source]

Calls on the OpenRAVE grasp planners to get a robot to pick up objects while guaranteeing visibility with its cameras (see the usage sketch at the end of this section)

computevisibilitymodel(target)[source]
gettarget(orenv)[source]
loadscene(scenefilename, sensorname, robotname=None, showsensors=True, usecameraview=True)[source]
movegripper(grippervalues, robot=None)[source]
robotgohome(homevalues=None)[source]
start(dopause=False, usevision=False)[source]
starttrajectory(trajdata)[source]
syncrobot(robot)[source]
waitrobot(robot=None)[source]
openravepy.examples.visibilityplanning.main(env, options)[source]

Main example code.

openravepy.examples.visibilityplanning.run(*args, **kwargs)[source]

Command-line execution of the example.

Parameters: args – arguments for the script to parse; if not specified, sys.argv is used
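For reference, the classes above can also be driven directly, mirroring main(). This is a hedged sketch; the scene file name below is an assumption, not necessarily the example's default:

from openravepy import Environment
from openravepy.examples import visibilityplanning

env = Environment()
env.SetViewer('qtcoin')
planning = visibilityplanning.PA10GraspExample(env)
planning.loadscene(scenefilename='data/pa10grasp2.env.xml', sensorname='wristcam')
planning.start()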
