
These instructions describe how to make Baxter grasp a named object from the table.

This works as follows:

  1. The user enters the name of the object to be picked up through a simple command line interface.
  2. The object detector finds the approximate object location in the 2D RGB image (d).
  3. The output of the detector, a heatmap, is used to focus the sampling of grasp candidates on the object of interest (a).
  4. Our grasp pose detection method searches for grasps on the object of interest (b).
  5. The grasps are scored according to the estimated object detection probability given by the heatmap (c,d).
  6. The object detection scores are combined with a utility function that evaluates the reachability, height, and verticality of each grasp (see the sketch after this list).
  7. The highest scoring grasp is selected and the robot attempts to execute it.
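
Steps 5–7 amount to a scoring pipeline over candidate grasps. Below is a minimal Python sketch of that pipeline; all names (Grasp, detection_probability, grasp_utility, select_best_grasp), fields, and weights are illustrative assumptions rather than the actual object_grasping or agile_grasp2 code, and multiplying the two scores is just one plausible way of combining them.

    # Hypothetical sketch of the grasp scoring described above (steps 5-7).
    # Names, fields, and weights are placeholders, not the real implementation.
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Grasp:
        position: np.ndarray   # 3D grasp position in the robot frame
        approach: np.ndarray   # unit approach vector of the gripper
        pixel: tuple           # (row, col) of the grasp projected into the RGB image
        reachable: bool        # result of a reachability / IK check

    def detection_probability(heatmap, grasp):
        """Step 5: score a grasp by the heatmap value at its image location."""
        row, col = grasp.pixel
        return float(heatmap[row, col])  # heatmap assumed normalized to [0, 1]

    def grasp_utility(grasp):
        """Step 6: utility from reachability, height, and verticality.
        The 0.5/0.5 weights are arbitrary placeholders."""
        if not grasp.reachable:
            return 0.0
        height = grasp.position[2]        # prefer grasps higher above the table
        verticality = -grasp.approach[2]  # prefer top-down approach directions
        return 0.5 * height + 0.5 * max(verticality, 0.0)

    def select_best_grasp(grasps, heatmap):
        """Step 7: combine both scores and return the highest-scoring grasp."""
        scores = [detection_probability(heatmap, g) * grasp_utility(g) for g in grasps]
        return grasps[int(np.argmax(scores))]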

Step-by-step guide

  1. Launch rviz, start the Asus Xtion drivers, and suppress Baxter's built-in collision checker:

    roslaunch active_sensing object_recognition.launch
  2. Launch the point cloud registration:

    roslaunch object_grasping register_clouds.launch
  3. Launch the grasp pose detection:

    roslaunch agile_grasp2 objects_passive.launch
  4. Run the object detection:

    rosrun object_grasping recognize_objects4.py
  5. Enable the robot and run the grasp execution node:

    rosrun baxter_tools enable_robot.py -e
    python src/active_sensing/scripts/object_recognition_passive.py 1 0
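
Note that each roslaunch and rosrun call blocks its terminal, so run steps 1–5 in separate terminals (each with the catkin workspace sourced), in the order listed.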