
Kinect camera connectivity is supported through the SDK as of v1.1. See below for details...

Visualize Kinect signal in RViz

First, install the OpenNI camera driver and launch packages:

    $ sudo apt-get install ros-indigo-openni-camera ros-indigo-openni-launch

You may need to reboot your machine for the camera drivers to load correctly. To start publishing the Kinect camera topics, run the openni.launch file:

    $ roslaunch openni_launch openni.launch
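
If the driver comes up correctly, the camera topics should start publishing. A quick way to confirm this (using the default openni_launch topic names) is:

    $ rostopic hz /camera/rgb/image_color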

You can now publish a static tf transform that attaches the Kinect's camera frame to a frame on the robot, like so:

    $ rosrun tf static_transform_publisher <x> <y> <z> <qx> <qy> <qz> <qw> <parent frame> /camera_link 50

If you launch RViz, add a Camera display subscribed to /camera/rgb/image_color, and add a TF display (deselecting Show Names and Show Axes), you should be able to see the output from the Kinect, with an icon indicating its position relative to the robot. For example, using the transform:

    $ rosrun tf static_transform_publisher 1 0 0 0 0 0 1 /torso /camera_link 50

Kinect TF example
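
You can also verify that the transform is actually being published from the command line:

    $ rosrun tf tf_echo /torso /camera_link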

Using a Kinect sensor in MoveIt!

It's possible to integrate sensor data from the Kinect with MoveIt! so that you can plan paths for Baxter in dynamic environments. This functionality may require an updated version of MoveIt!, so you should first update to the latest MoveIt! configuration package. At this time, there are no drivers available to integrate either Linux or ROS with the Xbox One Kinect or the K4W2, so these instructions are for the original Xbox Kinect or the K4W. To get the Kinect working, we first need to install the OpenNI ROS packages (substitute your ROS distribution for groovy if you are on a different release):

  $ sudo apt-get install ros-groovy-openni-camera ros-groovy-openni-launch

In an RSDK shell, launch the OpenNI drivers to check that they're working correctly:

  $ roslaunch openni_launch openni.launch

In a separate RSDK shell, you can use image_view to look at the depth data:

  $ rosrun image_view disparity_view image:=/camera/depth/disparity

You can also look at the RGB image:

  $ rosrun image_view image_view image:=/camera/rgb/image_color

RGB and depth images from the Kinect

If this all works, then you should be good to go!
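
If you would rather check the streams from your own code, a minimal rospy sketch along these lines should work (the topic name is the openni_launch default used above; the node name kinect_check is just an example):

  #!/usr/bin/env python
  # Print the size and encoding of each incoming Kinect RGB frame.
  import rospy
  from sensor_msgs.msg import Image

  def callback(msg):
      rospy.loginfo("Got %dx%d image (encoding: %s)", msg.width, msg.height, msg.encoding)

  rospy.init_node('kinect_check')
  rospy.Subscriber('/camera/rgb/image_color', Image, callback)
  rospy.spin()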

Integrating with MoveIt!

You'll probably first want to specify where you've placed your Kinect relative to Baxter. We set up a static transform publisher in our MoveIt! launch files to do this for us. By default, the camera is set to the following position:

 #  x y z yaw pitch roll parent_frame
 #  1 0 0  0    0     0     /torso

If you're fine with this location for now, you can leave it as is. If not, you can edit the file:

  $ rosed baxter_moveit_config baxter_moveit_sensor_manager.launch

where you can update the location of your Kinect by changing the arguments of the static_transform_publisher node:

  <node pkg="tf" type="static_transform_publisher" name="camera_link_broadcaster"
  args="1.0 0.0 0.0 0.0 0.0 0.0 /torso /camera_link 100" />
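
For example, if your Kinect were mounted one meter in front of Baxter, half a meter up, and pitched down slightly (a purely hypothetical placement), the arguments might look like:

  <node pkg="tf" type="static_transform_publisher" name="camera_link_broadcaster"
  args="1.0 0.0 0.5 0.0 0.3 0.0 /torso /camera_link 100" />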

More information on this transform publisher can be found on the ROS wiki.

After doing this, you can run MoveIt! using input from the Kinect. You should no longer be running your own OpenNI server, but you should have the joint trajectory action server running in an RSDK shell. In another RSDK shell, run the Kinect demo:

  $ roslaunch baxter_moveit_config demo_kinect.launch

Alternatively, you could run the regular MoveIt! demo and pass in the kinect argument:

  $ roslaunch baxter_moveit_config demo_baxter.launch kinect:=true

After launching RViz, you should be able to see the input data from your Kinect sensor in the environment. Self-filtering should be performed for you, ensuring that the Kinect doesn't consider Baxter to be part of the planning environment. You can now do motion planning for Baxter using the Kinect data.
A motion plan for Baxter with input from the Kinect
Another motion plan for Baxter with the Kinect
The floating set of axes in the images shows the camera location; it is not in the default location.
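
If you would rather drive planning from code instead of RViz, a minimal moveit_commander sketch along these lines should work while the demo is running (the "left_arm" group name comes from baxter_moveit_config; the small z offset is just an example target):

  #!/usr/bin/env python
  # Plan and execute a small motion for Baxter's left arm; MoveIt! will
  # avoid any obstacles present in the Kinect octomap.
  import sys
  import rospy
  import moveit_commander

  moveit_commander.roscpp_initialize(sys.argv)
  rospy.init_node('kinect_motion_plan_example')

  group = moveit_commander.MoveGroupCommander("left_arm")

  # Start from the current end-effector pose and raise it slightly
  target = group.get_current_pose().pose
  target.position.z += 0.05
  group.set_pose_target(target)

  # Plan and execute; nothing moves if no collision-free plan is found
  group.go(wait=True)

  moveit_commander.roscpp_shutdown()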


If you want to use the Kinect data with our MoveIt! simulator, that's also possible. Be careful not to run the simulator in an RSDK shell, or with the ROS_MASTER_URI set to the robot:

  $ roslaunch baxter_moveit_config demo_sim.launch kinect:=true
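
A quick way to make sure your shell is not pointed at the robot is to check the master URI before launching; it should refer to your local machine:

  $ echo $ROS_MASTER_URI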


There are a couple of settings that you may want to adjust for your use case, which can be done through the Kinect sensor YAML file:

  $ rosed baxter_moveit_config kinect_sensor.yaml

where you'll see the following settings:

 - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
   point_cloud_topic: /move_group/camera/depth/points
   max_range: 5.0
   padding_offset: 0
   padding_scale: 3.0
   frame_subsample: 1
   point_subsample: 1
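
For example, you might shorten the sensor range or subsample the cloud to reduce CPU load; the values below are illustrative only, with the remaining fields left as above:

   max_range: 2.5       # only integrate points within 2.5 m of the camera
   point_subsample: 4   # use every 4th point from the cloud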

Information about these settings can be found on the MoveIt! wiki.