OpenRAVE Documentation

tutorial_inversereachability Module

Generate and use the inverse-reachability database in OpenRAVE for the PR2 robot.

../../_images/tutorial_inversereachability.jpg

Running the Example:

openrave.py --example tutorial_inversereachability

Prerequisite

IK solvers

Generate the IK solver for leftarm_torso. It takes several minutes:

openrave.py --database inversekinematics --robot=robots/pr2-beta-static.zae --manipname=leftarm_torso
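
This prerequisite can also be handled from Python. A minimal sketch, assuming the robot file and manipulator name above (it mirrors what the openrave.py command does):

from openravepy import Environment, IkParameterization
from openravepy.databases import inversekinematics

env = Environment()
robot = env.ReadRobotURI('robots/pr2-beta-static.zae')
env.Add(robot)
robot.SetActiveManipulator('leftarm_torso')
ikmodel = inversekinematics.InverseKinematicsModel(robot, iktype=IkParameterization.Type.Transform6D)
if not ikmodel.load():
    ikmodel.autogenerate()  # compiles the ikfast solver; this is the step that takes several minutes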

[optional] Download inverse-reachability database

Get the inverse-reachability and reachability databases for OpenRAVE r1974 from:

http://people.csail.mit.edu/liuhuan/pr2/openrave/openrave_database/robot.1fd7b38c8ca370ea2f8d4ab79bbb074b.tgz

and decompress the file under:

~/.openrave/

Alternatively, you can generate the databases yourself (see Generate database below).

Generate database

Generate the databases with a single core

openrave.py --database inversereachability --robot=robots/pr2-beta-static.zae --manipname=leftarm_torso --ignorefreespace 

This process will generate both the reachability and inverse-reachability databases. It will take more than 10 hours (Intel Core2 Duo E8400 @ 3GHz with 4 GB of memory).

Generate the databases with multiple cores

If you have the openrave_planning ROS stack installed, you can try a parallelized version of the reachability database generation:

First:

rosmake openrave_data

Then:

roscore

Finally, in a separate terminal:

rosrun openrave_database kinematicreachability_ros.py --robot=robots/pr2-beta-static.zae --manipname=leftarm_torso --ignorefreespace --launchservice='8*localhost'

To add another computer with 8 cores, append:

--launchservice='8*newhost'

Once the reachability database is generated, generate the inverse-reachability database (which takes much less time):

openrave.py --database inversereachability --robot=robots/pr2-beta-static.zae --manipname=leftarm_torso --ignorefreespace 

There are a few other parameters that can be set:

heightthresh = .05 # Height discretization threshold (default=0.05)
quatthresh = .15 # Quaternion distance threshold (default=0.15)
show_maxnumber = 10 # Number of robots to show simultaneously (default=20)
id = None # Special id differentiating inversereachability models
jointvalues = None # String of joint values that connect the robot base link to the manipulator base link
show_transparency = .8 # Transparency of the robots to show (default=.8)
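
If you drive the generation from Python rather than through openrave.py, the same model can be generated or loaded with the databases API. A hedged sketch (it assumes the reachability database from the previous step already exists, and that generate() accepts the two thresholds as keyword arguments):

from openravepy import Environment
from openravepy.databases import inversereachability

env = Environment()
robot = env.ReadRobotURI('robots/pr2-beta-static.zae')
env.Add(robot)
robot.SetActiveManipulator('leftarm_torso')
irmodel = inversereachability.InverseReachabilityModel(robot=robot)
if not irmodel.load():
    # the thresholds listed above are the defaults; passing them explicitly is an assumption
    irmodel.generate(heightthresh=0.05, quatthresh=0.15)
    irmodel.save()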

Alternatively, you can download the pre-computed databases (see [optional] Download inverse-reachability database above).

Visualize database

Visualize inverse-reachability database

This tutorial visualizes the database for one grasp; a later tutorial will show how to visualize the base distribution for all possible grasps of an object.

../../_images/tutorial_inversereachability_ir_grasps.png

The blue cloud indicates possible base positions for grasping the object.

Visualize reachability database

Command:

openrave.py --database kinematicreachability --robot=robots/pr2-beta-static.zae --show

../../_images/tutorial_inversereachability_front.png

Front view; the color indicates reachability.

../../_images/tutorial_inversereachability_back.png

Back view; the color indicates reachability.
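
The same reachability visualization can also be launched from Python. A short sketch, assuming the kinematicreachability database has already been generated for leftarm_torso:

from openravepy import Environment
from openravepy.databases import kinematicreachability

env = Environment()
env.SetViewer('qtcoin')
robot = env.ReadRobotURI('robots/pr2-beta-static.zae')
env.Add(robot)
robot.SetActiveManipulator('leftarm_torso')
rmodel = kinematicreachability.ReachabilityModel(robot=robot)
if rmodel.load():
    rmodel.show()  # renders the colored reachability cloud in the viewer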

Use database: Move the robot where it can perform a specific grasp

../../_images/tutorial_inversereachability_goal.png

Robot in a place where it can grasp the cup.

Set up environment

# set up planning environment
from openravepy import *  # Environment, CollisionReport, IkFilterOptions, ...
from openravepy.databases import inversereachability
from numpy import array, inf
env = Environment()
env.SetViewer('qtcoin')
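
The fragment above only creates the environment and viewer; the robot still has to be loaded and its manipulator selected before the later fragments can run. A short sketch of that step (the tutorial script does this as part of its setup; the scene it loads around the robot is not shown here):

robot = env.ReadRobotURI('robots/pr2-beta-static.zae')
env.Add(robot)
robot.SetActiveManipulator('leftarm_torso')
manip = robot.GetActiveManipulator()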

Set up goal

# set up goal grasp transform
# goal grasp transform specified in the global frame; this equals manip.GetTransform() in the goal state
O_T_grasp = array([[-9.88017917e-01, -1.54339954e-01, 0.00000000e+00, 1.06494129e+00],
                   [ 1.54339954e-01, -9.88017917e-01, 0.00000000e+00, 5.51449812e-05],
                   [ 0.00000000e+00,  0.00000000e+00, 1.00000000e+00, 9.55221763e-01],
                   [ 0.00000000e+00,  0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])

../../_images/tutorial_inversereachability_goal_grasp.png

Goal grasp
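
Such a grasp transform is typically the composition of the target object's pose with a grasp expressed relative to the object. A purely illustrative sketch with made-up numbers (not the values used in this tutorial):

from numpy import eye, dot

O_T_target = eye(4)                          # pose of the target object in the global frame
O_T_target[0:3,3] = [1.0, 0.0, 0.9]          # illustrative position only
target_T_grasp = eye(4)                      # grasp pose relative to the object
target_T_grasp[0:3,3] = [-0.05, 0.0, 0.05]   # illustrative offset only
O_T_grasp = dot(O_T_target, target_T_grasp)  # grasp pose in the global frame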

Set up robot

# initialize robot pose
v = self.robot.GetActiveDOFValues()
v[self.robot.GetJoint('l_shoulder_pan_joint').GetDOFIndex()]= 3.14/2
v[self.robot.GetJoint('r_shoulder_pan_joint').GetDOFIndex()] = -3.14/2
v[self.robot.GetJoint('l_gripper_l_finger_joint').GetDOFIndex()] = .54
self.robot.SetActiveDOFValues(v)

../../_images/tutorial_inversereachability_before.png

Robot starting state

Load database

# load inverse-reachability database
self.irmodel = inversereachability.InverseReachabilityModel(robot=self.robot)
starttime = time.time()
print 'loading irmodel'
if not self.irmodel.load():
    print 'do you want to generate irmodel for your robot? it might take several hours'
    print 'or you can go to http://people.csail.mit.edu/liuhuan/pr2/openrave/openrave_database/ to get the database for PR2'
    input = raw_input('[Y/n]')
    if input == 'y' or input == 'Y' or input == '\n' or input == '':
        self.irmodel.autogenerate()
        self.irmodel.load()
    else:
        raise ValueError('inverse-reachability model not available')

print 'time to load inverse-reachability model: %fs'%(time.time()-starttime)

Get robot base distribution

densityfn,samplerfn,bounds = self.irmodel.computeBaseDistribution(Tgrasp)

  • Input of computeBaseDistribution():

    Tgrasp: 4x4 numpy.array, row-major matrix, the grasp transform in the global frame;
            equals manip.GetTransform() in the goal state

  • Output of computeBaseDistribution():

    densityfn: Gaussian kernel density function; takes poses in OpenRAVE quaternion-translation form and returns probabilities
    samplerfn: Gaussian kernel sampler function; takes the number of samples and a weight, and returns robot base poses and joint states
    bounds: 2x3 array with the bounds of the samples, [[min rotation, min x, min y],[max rotation, max x, max y]]
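
A brief usage sketch of the returned functions (the call pattern follows the code fragment in the next section; the density query is an assumption about the expected pose array format):

# draw a handful of candidate base poses and evaluate how likely each one is
poses,jointstate = samplerfn(10)
probabilities = densityfn(poses)
print 'sampled %d poses, bounds (rotation, x, y):'%len(poses), bounds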

Find valid poses from the distribution

Code fragment from sandbox/mobilemanipulation.py

# initialize sampling parameters
goals = []
numfailures = 0
starttime = time.time()
timeout = inf  # no time limit; N, the desired number of goal samples, is assumed to be set elsewhere
with self.robot:
    while len(goals) < N:
        if time.time()-starttime > timeout:
            break
        poses,jointstate = samplerfn(N-len(goals))
        for pose in poses:
            self.robot.SetTransform(pose)
            self.robot.SetDOFValues(*jointstate)
            # validate that base is not in collision
            if not self.manip.CheckIndependentCollision(CollisionReport()):
                q = self.manip.FindIKSolution(Tgrasp,filteroptions=IkFilterOptions.CheckEnvCollisions)
                if q is not None:
                    values = self.robot.GetDOFValues()
                    values[self.manip.GetArmIndices()] = q
                    goals.append((Tgrasp,pose,values))
                elif self.manip.FindIKSolution(Tgrasp,0) is None:
                    # no IK solution even with collision checking disabled: the grasp is unreachable from this base pose
                    numfailures += 1

Move robot to valid poses

print 'showing %d results'%N
for ind,goal in enumerate(goals):
    raw_input('press ENTER to show goal %d'%ind)
    Tgrasp,pose,values = goal
    self.robot.SetTransform(pose)
    self.robot.SetDOFValues(values)

../../_images/tutorial_inversereachability_after.png

One sample from the goal distribution.

[optional] Overlay results

Code fragment from databases.inversereachability

transparency = .8
with self.env:
    self.env.Remove(self.robot)
    newrobots = []
    for goal in goals:
        Tgrasp,T,values = goal
        newrobot = RaveCreateRobot(self.env,self.robot.GetXMLId())
        newrobot.Clone(self.robot,0)
        newrobot.SetName(self.robot.GetName())
        for link in newrobot.GetLinks():
            for geom in link.GetGeometries():
                geom.SetTransparency(transparency)
        self.env.Add(newrobot,True)
        with self.env:
            newrobot.SetTransform(T)
            newrobot.SetDOFValues(values)
        newrobots.append(newrobot)

../../_images/tutorial_inversereachability1.jpg

Top view of ten samples.

../../_images/tutorial_inversereachability_10_solutions_side.png

Side view of ten samples.
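
Once you are done inspecting the overlay, the cloned robots can be removed and the original robot put back; a possible cleanup step (not part of the fragment above):

with self.env:
    for newrobot in newrobots:
        self.env.Remove(newrobot)
    self.env.Add(self.robot)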
