I spent about a month of this summer setting up the Baxter robot for data collection as part of my research project. Overall, I’m impressed by the hardware relative to the cost – I can see the Baxter being the automation platform of the future once we can write smart software for it.

Here, I’ll record some roadblocks and pitfalls I got stuck on, in the hope of saving someone else a good chunk of time. I found that although there are very nice tools that mostly play together well, the documentation can fall short when you get stuck, and I lost big chunks of time searching user forums and hacking solutions together. The setup revolved around a few simple requirements:

  • move the robot predictably and precisely
  • work out forward and inverse kinematics for the end-effector
  • work with Kinect point clouds in the robot’s Cartesian space

Although there are already convenient projects for all of these, it can be hard to work out exactly what you should use and which paths you should avoid.

Motion Planning

First, unless you mostly don’t care about forward/inverse kinematics (you probably do care), use MoveIt! for all robot motion. The Baxter SDK ships with some basic functionality: moving to joint positions and an inverse kinematics solver service. MoveIt offers much more – it lets you specify shapes to avoid collisions with, plan smooth multistep movements, and is generally much more usable once set up.
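
For comparison, the SDK-level interface looks roughly like this. This is a minimal sketch using baxter_interface with a made-up joint nudge; it assumes the robot is already enabled (e.g. via baxter_interface.RobotEnable):

#!/usr/bin/env python
# Sketch of the Baxter SDK's joint-position interface (no planning, no collision checking).
# Assumes the robot is already enabled; the 0.1 rad nudge is just for illustration.
import rospy
import baxter_interface

rospy.init_node("sdk_move_example")
limb = baxter_interface.Limb("right")

angles = limb.joint_angles()          # dict of joint name -> angle
angles["right_s0"] += 0.1             # nudge the shoulder joint
limb.move_to_joint_positions(angles)  # blocks until the move completes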

Setup

The official guide walks you through most of the setup, but one annoying roadblock was an error with MoveIt’s Python binding, moveit_commander. Although I could control the robot from rviz (use rviz! it’s fantastic!), I couldn’t get moveit_commander to read the robot’s state, and I couldn’t find a complete solution online, although many sources correctly identified the cause of the problem. The underlying issue is that MoveIt looks at the /joint_states topic for the robot state while the Baxter publishes it on /robot/joint_states. Simple enough, but there are some arg-parsing bugs and concurrency/blocking interactions which made the problem hard to actually solve. In the end, I ran a simple ROS subscriber/publisher in a separate node from the program using MoveIt and it finally worked. When I ran this in the same script that initialized MoveIt, it failed, which makes me think that some blocking in MoveIt’s initialization code is to blame. To automate this, I added a line to my MoveIt launch file that runs this node and never thought about it again.

#!/usr/bin/env python
# Republish Baxter's joint states on the topic MoveIt expects.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("remapper")
joint_states = rospy.Publisher("/joint_states", JointState, queue_size=10)

def remap(data):
    # Forward each message from /robot/joint_states to /joint_states
    if not rospy.is_shutdown():
        joint_states.publish(data)

rospy.Subscriber("/robot/joint_states", JointState, remap)
rospy.spin()
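
The launch file addition is just a node entry along these lines (the package and script names are placeholders for wherever you keep remap.py):

<!-- Placeholder package/script names: run the joint-state remapper alongside MoveIt -->
<node pkg="my_baxter_tools" type="remap.py" name="joint_state_remapper" output="screen" />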

Tips

I was pushed in the right direction by this post. The main way I move the robot now is with the group.compute_cartesian_path() function, which takes waypoints you want the end effector to pass through and plans a motion through all of them, linearly interpolating between the points. This makes motions very smooth, and one cool consequence is that it avoids putting the arms in configurations where the next movement is quite hard, which I found happened often with the SDK inverse kinematics. The other nice property is predictability: it avoids extra motion beyond getting to the goal, and the plan maintains the orientation of the end effector. I’ve included some relevant snippets from my code below.

from moveit_commander import RobotCommander, PlanningSceneInterface, MoveGroupCommander

class BaxterController(object):
    # (get_arm, get_adjusted_goal, and util.get_pose_stamped are helpers defined elsewhere)
    def __init__(self):
        # ...
        self.commander = RobotCommander()
        self.scene = PlanningSceneInterface()
        self.right_arm_moveit = MoveGroupCommander("right_arm")
        self.right_arm_moveit.set_goal_position_tolerance(0.01)
        # ...

    def get_plan(self, points, orientation, limb):
        """Get a smooth plan through the given list of points.
        orientation is a geometry_msgs.msg.Quaternion
        """
        group = self.get_arm(limb)  # returns the appropriate MoveGroupCommander
        now = group.get_current_pose().pose
        waypoints = [now]
        for x, y, z in points:
            x, y, z = self.get_adjusted_goal(x, y, z, limb)  # small constant offsets
            p = util.get_pose_stamped(x, y, z, orientation)
            waypoints.append(p)
        plan, f = group.compute_cartesian_path(waypoints, 0.01, 0.0)
        # f is the fraction of the path successfully planned
        return plan, f

    def slow_plan(self, plan):
        """Explicitly set plan velocities; use if you see velocity errors"""
        n = 7  # joints per Baxter arm
        vels = [0.1] * n
        for p in plan.joint_trajectory.points:
            p.velocities = vels

    def do_plan(self, plan, limb=None):
        group = self.get_arm(limb)
        return group.execute(plan)

You can also add collision checking against external objects through the PlanningSceneInterface, which is a very nice way to guarantee that the planner won’t, for example, crash into the Kinect mounted in the middle of the robot.
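
As a rough sketch of what that looks like, this adds a box around a chest-mounted Kinect; the frame, position, and dimensions are made up and you’d measure your own:

#!/usr/bin/env python
# Sketch: register a collision box for the Kinect with MoveIt's planning scene.
# The frame, position, and box size are placeholders for your setup.
import rospy
from geometry_msgs.msg import PoseStamped
from moveit_commander import PlanningSceneInterface

rospy.init_node("add_kinect_collision")
scene = PlanningSceneInterface()
rospy.sleep(1.0)  # give the scene interface a moment to connect

box = PoseStamped()
box.header.frame_id = "base"    # Baxter's root frame
box.pose.position.x = 0.2       # roughly where the Kinect sits
box.pose.position.z = 0.6
box.pose.orientation.w = 1.0
scene.add_box("kinect", box, size=(0.3, 0.1, 0.1))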

Using the Kinect

There is a solid backbone of software supporting the Kinect for robotics. Start by installing freenect for ROS and grab these launch files. Once you have the Kinect up and running (try visualizing it in rviz), you’ll want to align the coordinates of the Kinect and the robot. You can do this by publishing a static transform to tf and transforming the Kinect point clouds into the robot’s world frame. Unfortunately, pcl, the C++ library with a lot of this functionality, does not have complete Python bindings. To do the transformation without rolling your own code, you can use my repository directly or use it as a reference to write your own.
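
If you want to publish the static transform yourself, a minimal sketch with tf looks like the following; the translation, rotation, and frame names are placeholders that should come from your calibration (see the steps below):

#!/usr/bin/env python
# Sketch: broadcast the robot-to-Kinect transform at a fixed rate.
# The numbers and frame names below are placeholders; use your calibration values.
import rospy
import tf

rospy.init_node("kinect_tf_broadcaster")
br = tf.TransformBroadcaster()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    br.sendTransform((0.2, 0.0, 0.5),        # translation (x, y, z) in meters
                     (0.0, 0.0, 0.0, 1.0),   # rotation quaternion (x, y, z, w)
                     rospy.Time.now(),
                     "camera_link",          # child frame (the Kinect)
                     "base")                 # parent frame (the robot)
    rate.sleep()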

I used this code with the Kinect calibration package from https://github.com/h2r/baxter_h2r_packages to make the process of using the Kinect sensor with the robot very easy. Steps to calibrate:

  1. Clone this repository, and follow the instructions at baxter_h2r_packages to set up that repository. I found I needed to also clone https://github.com/sniekum/ar_track_alvar_msgs to get their calibration package working.
  2. Run the Baxter Kinect calibration program and get the transform (make sure you follow their short setup instructions exactly); if you prefer to query tf yourself, see the sketch after this list
  3. Copy the new transform (you can directly copy the output of tf_look.py) into the relevant args of the kinect_startup_calibrated.launch file
  4. Run the launch file
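
If you want to inspect the calibration result yourself rather than relying on tf_look.py, a query straight to tf looks roughly like this (the frame names are guesses for a typical Baxter + Kinect setup; check yours with rosrun tf view_frames):

#!/usr/bin/env python
# Sketch: look up the robot-to-Kinect transform from tf.
# Frame names ("base", "camera_link") are assumptions about your setup.
import rospy
import tf

rospy.init_node("tf_query")
listener = tf.TransformListener()
listener.waitForTransform("base", "camera_link", rospy.Time(0), rospy.Duration(5.0))
trans, rot = listener.lookupTransform("base", "camera_link", rospy.Time(0))
print("translation: %s rotation: %s" % (trans, rot))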

You should now have a calibrated point cloud aligned with the robot on the /transformed_points topic.
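
To sanity-check the calibration, it’s worth pulling a few points out of the transformed cloud and eyeballing whether they land where you expect in the robot’s frame. Here is a quick sketch using sensor_msgs.point_cloud2 (the topic name is the one above):

#!/usr/bin/env python
# Quick check: print a handful of points from the calibrated cloud,
# which should now be expressed in the robot's coordinate frame.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def callback(cloud):
    for i, (x, y, z) in enumerate(point_cloud2.read_points(cloud, field_names=("x", "y", "z"), skip_nans=True)):
        print("%.3f %.3f %.3f" % (x, y, z))
        if i >= 9:  # just the first ten points
            break

rospy.init_node("cloud_check")
rospy.Subscriber("/transformed_points", PointCloud2, callback)
rospy.spin()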