I spent about a month of this summer setting up the Baxter robot for data collection as part of my research project. Overall, I’m impressed by the hardware relative to the cost – I can see the Baxter being the automation platform of the future once we can write smart software for it.

Here, I’ll record some roadblocks and pitfalls I got stuck on that might save someone a good chunk of time. I found that although there are very nice tools that mostly play together well, the documentation can fall short when you get stuck, and I lost big chunks of time searching user forums and hacking solutions together. The setup revolved around a few simple requirements:

  • move the robot predictably and precisely
  • work out forward and inverse kinematics for the end-effector
  • work with Kinect point clouds in the robot’s Cartesian space

Although there are already convenient projects for all of these, it can be hard to work out exactly what you should use and which paths you should avoid.

Motion Planning

First, unless you don’t care about forward/inverse kinematics (you probably do care), use MoveIt! for all robot motion. The Baxter SDK ships with some basic functionality: moving to joint positions and an inverse kinematics solver service. MoveIt has much more: it lets you specify shapes to avoid collisions with and plan smooth multi-step movements, and it is generally much more usable once set up.
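
For a sense of the baseline, this is roughly what the SDK route looks like (the joint angles here are arbitrary placeholders, not values from my setup); the MoveIt equivalents show up in the Tips section below.

    import rospy
    import baxter_interface

    rospy.init_node('sdk_joint_move')
    limb = baxter_interface.Limb('left')

    # Blocks until the left arm reaches the target joint angles.
    limb.move_to_joint_positions({
        'left_s0': 0.0, 'left_s1': -0.55,
        'left_e0': 0.0, 'left_e1': 0.75,
        'left_w0': 0.0, 'left_w1': 1.26, 'left_w2': 0.0,
    })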

Setup

The official guide walks you through most of the setup, but one annoying roadblock was an error with MoveIt’s Python binding, moveit_commander. Although I could control the robot from rviz (use rviz! it’s fantastic!), I couldn’t get moveit_commander to read the robot’s state, and I couldn’t find a complete solution online, although many sources correctly identified the cause of the problem. The underlying issue is that moveit_commander looks at the /joint_states topic for the robot state while the Baxter publishes it on /robot/joint_states. Simple enough, but there are some arg-parsing bugs and concurrency/blocking interactions that made the problem hard to actually solve. In the end, I ran a simple ROS subscriber/publisher relay in a different node from the program using MoveIt, and it finally worked. When I ran the relay in the same script that initialized MoveIt, it failed, which makes me think that some blocking during the MoveIt initialization code is to blame. To automate this, I added a line to my MoveIt launch file that runs the relay node and never thought about it again.
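
The relay node itself is only a few lines; a minimal sketch (not the exact file I run, and with the topic names as in my setup) looks like this:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import JointState

    rospy.init_node('joint_state_relay')
    pub = rospy.Publisher('/joint_states', JointState, queue_size=10)

    def relay(msg):
        # Republish Baxter's joint states where moveit_commander expects them.
        pub.publish(msg)

    rospy.Subscriber('/robot/joint_states', JointState, relay)
    rospy.spin()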

Tips

I was pushed in the right direction by this post. The main way I move the robot now is with the group.compute_cartesian_path() function, which takes waypoints you want the end effector to pass through and plans a motion through all of them, linearly interpolating between the points. This makes motions very smooth, and one nice consequence is that it avoids putting the arms in configurations from which the next movement is hard, which I found happens often with the SDK inverse kinematics. The other nice property is predictability: it avoids extra motion beyond reaching the goal, and the plan maintains the orientation of the end effector. I’ve included some relevant snippets from my code.
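
Roughly, the pattern looks like this (a trimmed reconstruction rather than the verbatim snippet; the group name and the offsets are placeholders):

    import sys
    import copy
    import rospy
    import moveit_commander

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('cartesian_path_example')
    group = moveit_commander.MoveGroupCommander('left_arm')

    # Build waypoints relative to the current end-effector pose:
    # 10 cm forward, then 10 cm down, keeping the current orientation.
    waypoints = []
    pose = copy.deepcopy(group.get_current_pose().pose)
    pose.position.x += 0.10
    waypoints.append(copy.deepcopy(pose))
    pose.position.z -= 0.10
    waypoints.append(copy.deepcopy(pose))

    # eef_step is the interpolation resolution in meters; a jump_threshold of
    # 0.0 disables checking for large jumps in joint space.
    plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
    if fraction > 0.9:  # only execute if most of the requested path was planned
        group.execute(plan)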

You can also add collision checking with external objects through the PlanningSceneInterface, which is a very nice way to guarantee that the planner won’t, for example, send an arm crashing into the Kinect in the middle of the robot.
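
For example, something along these lines adds a box for the planner to avoid (the frame, pose, and dimensions here are made-up values you would measure for your own mount):

    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import PoseStamped

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('add_kinect_collision')
    scene = moveit_commander.PlanningSceneInterface()
    rospy.sleep(1.0)  # give the planning scene publisher time to connect

    kinect_pose = PoseStamped()
    kinect_pose.header.frame_id = 'base'  # Baxter's base frame
    kinect_pose.pose.position.x = 0.15
    kinect_pose.pose.position.z = 0.60
    kinect_pose.pose.orientation.w = 1.0

    # Approximate the Kinect with a box; planned motions will keep the arms clear of it.
    scene.add_box('kinect', kinect_pose, size=(0.3, 0.1, 0.1))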

Using the Kinect

There is a solid backbone of software supporting the Kinect for robotics. Start by installing freenect for ROS and grab these launch files. Once you have the Kinect up and running (try visualizing it in rviz), you’ll want to align the coordinates of the Kinect and the robot. You can do this by publishing a static transform to tf and converting the Kinect point clouds into the robot’s frame. Unfortunately, pcl, the C++ library with a lot of this functionality, does not have complete Python bindings. To do the transformation without rolling your own code, you can use my repository directly or use it as a reference to write your own.
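
If you’d rather see the idea than dig through the repository, here is a rough sketch of the transformation step using tf2_sensor_msgs rather than pcl (the repository may do it differently; the input topic and frame names are assumptions that depend on your freenect launch files and calibration):

    import rospy
    import tf2_ros
    from sensor_msgs.msg import PointCloud2
    from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

    rospy.init_node('cloud_transformer')

    tf_buffer = tf2_ros.Buffer()
    tf_listener = tf2_ros.TransformListener(tf_buffer)
    pub = rospy.Publisher('/transformed_points', PointCloud2, queue_size=1)

    def callback(cloud):
        # Look up the camera -> robot base transform (published as a static
        # transform) and re-express the cloud in the robot's frame.
        try:
            transform = tf_buffer.lookup_transform('base', cloud.header.frame_id,
                                                   rospy.Time(0), rospy.Duration(1.0))
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            return
        pub.publish(do_transform_cloud(cloud, transform))

    rospy.Subscriber('/camera/depth_registered/points', PointCloud2, callback)
    rospy.spin()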

I used this code with the kinect calibration package from https://github.com/h2r/baxter_h2r_packages to make the process of using the Kinect sensor with the robot very easy. Steps to calibrate:

  1. Clone this repository, and follow the instructions at baxter_h2r_packages to set up that repository. I found I needed to also clone https://github.com/sniekum/ar_track_alvar_msgs to get their calibration package working.
  2. Run the Baxter kinect calibration program and get the transform (make sure you follow their small setup instructions exactly)
  3. Copy the new transform (you can directly copy the output of tf_look.py) into the relevant args of the kinect_startup_calibrated.launch file
  4. Run the launch file

You should now have a calibrated point cloud aligned with the robot on the /transformed_points topic.
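
A quick sanity check I find handy (a little script of my own, not part of the calibration packages): grab one message and confirm the cloud is reported in the robot’s frame.

    import rospy
    from sensor_msgs.msg import PointCloud2

    rospy.init_node('check_transformed_points')
    msg = rospy.wait_for_message('/transformed_points', PointCloud2, timeout=10.0)
    print('frame: %s, %d x %d points' % (msg.header.frame_id, msg.height, msg.width))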