OpenRAVE Documentation

visibilitymodel Module

Samples visible locations of a target object and a sensor.



Running the Generator:

openrave.py --database visibilitymodel --robot=robots/pa10schunk.robot.xml

Showing Visible Locations:

openrave.py --database visibilitymodel --robot=robots/pa10schunk.robot.xml --show


Dynamically generate/load the visibility sampler for a manipulator/sensor/target combination:

vmodel = openravepy.databases.visibilitymodel.VisibilityModel(robot,target=target,sensorname=sensorname)
if not vmodel.load():
    vmodel.autogenerate()


As long as a sensor is attached to a robot arm, this can be applied to any robot to get immediate visibility configuration sampling:
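For instance, a minimal sketch of generating and querying the model (the scene file and target body name below are illustrative placeholders, not values prescribed by this module):

```python
from openravepy import Environment, databases

env = Environment()
env.Load('data/pa10grasp.env.xml')      # placeholder scene with a camera-equipped arm
robot = env.GetRobots()[0]
target = env.GetKinBody('mug1')         # placeholder name of the target kinbody

# no sensorname given: the first possible camera sensor is used
vmodel = databases.visibilitymodel.VisibilityModel(robot, target=target)
if not vmodel.load():
    vmodel.autogenerate()               # build the database on first use

# sample a configuration from which the camera sees the target
vmodel.computeValidTransform(returnall=False)
env.Destroy()
```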


The visibility database generator uses the VisualFeedback problem instance from the rmanipulation plugin for the underlying visibility computation. The higher-level functions it provides are sampling configurations, computing all valid configurations with the manipulator, and displaying the results.
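In code, those three entry points correspond roughly to the following calls (a sketch; `vmodel` is a loaded VisibilityModel as in the snippet above, and return values are omitted because their exact shapes are not specified here):

```python
# sampling: draw a single collision-free configuration that sees the target
vmodel.computeValidTransform(returnall=False, checkcollision=True,
                             computevisibility=True, randomize=True)

# exhaustive computation: keep every valid configuration of the manipulator
vmodel.computeValidTransform(returnall=True)

# display: render the computed visibility transforms in the viewer
vmodel.showtransforms()
```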


Usage: openrave.py --database visibilitymodel [options]

Computes and manages the visibility transforms for a manipulator/target.

  -h, --help            show this help message and exit
  --target=TARGET       OpenRAVE kinbody target filename
  --sensorname=SENSORNAME
                        Name of the sensor to build visibility model for (has
                        to be camera). If none, takes first possible sensor.
  --preshape=PRESHAPES  Add a preshape for the manipulator gripper joints
  --sphere=SPHERE       Force detectability extents to be distributed around a
                        sphere. Parameter is a string with the first value
                        being density (3 is default) and the rest being
                        distances.
  --conedirangle=CONEDIRANGLES
                        The direction of the cone multiplied with the half-
                        angle (radian) that the detectability extents are
                        constrained to. Multiple cones can be provided.
  --rayoffset=RAYOFFSET
                        The offset to move the ray origin (prevents
                        meaningless collisions), default is 0.03
  --showimage           If set, will show the camera image when showing the
                        model
  OpenRAVE Environment Options:
    --loadplugin=_LOADPLUGINS
                        List all plugins and the interfaces they provide.
    --collision=_COLLISION
                        Default collision checker to use
    --physics=_PHYSICS  physics engine to use (default=none)
    --viewer=_VIEWER    viewer to use (default=qtcoin)
    --server=_SERVER    server to use (default=None).
    --serverport=_SERVERPORT
                        port to load server on (default=4765).
    --module=_MODULES   module to load, can specify multiple modules. Two
                        arguments are required: "name" "args".
    -l _LEVEL, --level=_LEVEL, --log_level=_LEVEL
                        Debug level, one of
                        (fatal, error, warn, info, debug, verbose)
    --testmode          if set, will run the program in a finite amount of
                        time and spend computation time validating results.
                        Used for testing

  OpenRAVE Database Generator General Options:
    --show              Graphically shows the built model
    --getfilename       If set, will return the final database filename where
                        all data is stored
    --gethas            If set, will exit with 0 if datafile is generated and
                        up to date, otherwise will return a 1. This will
                        require loading the model and checking versions, so
                        might be a little slow.
    --robot=ROBOT       OpenRAVE robot to load
    --numthreads=NUMTHREADS
                        number of threads to compute the database with
    --manipname=MANIPNAME
                        The name of the manipulator on the robot to use
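Since --conedirangle expects the cone direction multiplied by its half-angle in radians, the three numbers can be produced with plain vector math; this helper and the example direction are illustrative, not part of the module:

```python
import math

def conedirangle(direction, halfangle):
    """Scale a unit direction vector by the cone half-angle (radians),
    yielding the three numbers the --conedirangle option expects."""
    norm = math.sqrt(sum(c * c for c in direction))
    return tuple(halfangle * c / norm for c in direction)

# a cone opening 30 degrees around the -z axis (looking down at the target)
vals = conedirangle((0.0, 0.0, -1.0), math.pi / 6)
print(' '.join('%.4f' % v for v in vals))  # -> 0.0000 0.0000 -0.5236
```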

Class Definitions

class openravepy.databases.visibilitymodel.VisibilityModel(robot, target, sensorrobot=None, sensorname=None, maxvelmult=None)[source]

Bases: openravepy.databases.DatabaseGenerator

Starts a visibility model using a robot, a sensor, and a target.

The minimum that needs to be specified is the robot and a sensorname. Sensors that do not belong to the current robot are supported, for the case where the robot is holding the target with its manipulator. Providing the target allows visibility information to be computed.
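For the external-sensor case, the constructor call looks roughly like this (a sketch; the scene file, robot names, and body name are hypothetical placeholders):

```python
from openravepy import Environment, databases

env = Environment()
env.Load('myscene.env.xml')              # hypothetical scene with two robots
robot = env.GetRobot('arm')              # robot holding the target
camerarig = env.GetRobot('camerarig')    # separate robot carrying the camera
target = env.GetKinBody('mug')

# the sensor lives on camerarig, not on the manipulating robot
vmodel = databases.visibilitymodel.VisibilityModel(
    robot, target=target, sensorrobot=camerarig, sensorname='camera')
```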

static CreateOptionParser()[source]
class GripperVisibility(manip)[source]

Used to hide links not belonging to the gripper.

When ‘entered’ will hide all the non-gripper links in order to facilitate visibility of the gripper.
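Since it hides links on enter and restores them on exit, GripperVisibility can be used as a context manager; a sketch (assuming a loaded robot with an active manipulator):

```python
from openravepy import Environment
from openravepy.databases.visibilitymodel import VisibilityModel

env = Environment()
env.Load('robots/pa10schunk.robot.xml')
robot = env.GetRobots()[0]
manip = robot.GetActiveManipulator()

# hide all non-gripper links while inspecting only the gripper
with VisibilityModel.GripperVisibility(manip):
    pass  # e.g. render or ray-test just the gripper links here
# link visibility is restored on exit
```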

static VisibilityModel.RunFromParser(Model=None, parser=None, args=None, **kwargs)[source]

VisibilityModel.SetCameraTransforms(transforms)[source]

Sets the camera transforms to the visual feedback problem

VisibilityModel.autogenerate(options=None, gmodel=None)[source]
VisibilityModel.computeValidTransform(returnall=False, checkcollision=True, computevisibility=True, randomize=False)[source]
VisibilityModel.generate(preshapes, sphere=None, conedirangles=None)[source]

VisibilityModel.moveToPreshape(preshape)[source]

uses a planner to safely move the hand to the preshape and returns the trajectory

VisibilityModel.pruneTransformations(thresh=0.04, numminneighs=10, maxdist=None, translationonly=True)[source]
VisibilityModel.showtransforms(options=None)[source]

openravepy.databases.visibilitymodel.run(*args, **kwargs)[source]

Command-line execution of the example. args specifies a list of the arguments to the script.

