These instructions will make Baxter grasp named objects from the table.
This works as follows:
- The user enters the name of the object to be picked up through a simple command line interface.
- The object detector finds the approximate object location in the 2D RGB image (d).
- The output of the detector, a heatmap, is used to focus the sampling of grasp candidates on the object of interest (a).
- Our grasp pose detection method tries to find grasps on the object of interest (b).
- The grasps are scored according to the estimated object detection probability given by the heatmap (c,d).
- The object detection scores are combined with a utility function that evaluates the reachability, height, and verticality of each grasp.
- The highest scoring grasp is selected and the robot attempts to execute it.
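The scoring step above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name, weights, and candidate fields are all hypothetical, and the real system's utility function may combine the terms differently.

```python
import numpy as np

def score_grasp(detection_prob, reachable, height, approach_dir,
                w_reach=1.0, w_height=0.5, w_vert=0.5):
    """Combine the object-detection probability (from the heatmap) with a
    utility that rewards reachable, high, and vertical (top-down) grasps.
    Weights are illustrative, not the values used in the actual system."""
    up = np.array([0.0, 0.0, 1.0])
    # Verticality: how closely the approach direction points straight down.
    verticalness = max(0.0, float(np.dot(approach_dir, -up)))
    utility = w_reach * float(reachable) + w_height * height + w_vert * verticalness
    return detection_prob * utility

# Two hypothetical grasp candidates: (detection_prob, reachable, height, approach_dir).
candidates = [
    (0.90, True,  0.2, np.array([0.0, 0.0, -1.0])),  # reachable, top-down grasp
    (0.95, False, 0.3, np.array([1.0, 0.0, 0.0])),   # unreachable side grasp
]
scores = [score_grasp(*c) for c in candidates]
best = int(np.argmax(scores))  # index of the grasp the robot would attempt
```

Note that a slightly higher detection probability does not rescue the second candidate: reachability and verticality dominate, so the top-down grasp wins.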
Launch rviz, start the Asus Xtion drivers, and suppress Baxter's built-in collision checker:
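For example (run each command in a separate terminal; the Xtion driver and the collision-suppression topic are standard, but check that they match your setup):

```shell
# Start the Asus Xtion drivers (standard OpenNI2 driver for the Xtion).
roslaunch openni2_launch openni2.launch

# Start rviz for visualization.
rosrun rviz rviz

# Suppress Baxter's built-in collision checker for the left arm by
# publishing empty messages at more than 5 Hz (use .../right/... for
# the right arm).
rostopic pub -r 10 /robot/limb/left/suppress_collision_avoidance std_msgs/Empty
```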
Launch the point cloud registration:
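The package and launch-file names below are placeholders; substitute the ones from your workspace:

```shell
# Hypothetical launch file -- replace with your registration launch file.
roslaunch grasp_selection registration.launch
```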
Launch the grasp pose detection:
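Again, the launch-file name is a placeholder for the grasp pose detection node in your workspace:

```shell
# Hypothetical launch file -- replace with your grasp pose detection launch file.
roslaunch grasp_selection grasp_detection.launch
```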
Run the object detection:
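The node name here is a placeholder for the object detector described above:

```shell
# Hypothetical node name -- replace with your object detection node.
rosrun grasp_selection object_detection
```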
Turn on the robot and run the grasp execution node:
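The `enable_robot.py` command is the standard Baxter SDK tool; the grasp execution node name is a placeholder:

```shell
# Enable Baxter (standard Baxter SDK tool).
rosrun baxter_tools enable_robot.py -e

# Hypothetical node name -- replace with your grasp execution node.
rosrun grasp_selection grasp_execution
```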