Kinect camera connectivity is supported through the SDK as of v1.1.  See below for details...
 
== Depth Sensor Setup ==

==== Installing Depth Sensor ROS drivers ====
<tabs>
<tab title="ROS Indigo Kinect">
 
<syntaxhighlight lang="bash" enclose="div">
    $ sudo apt-get install ros-indigo-libfreenect ros-indigo-freenect-camera ros-indigo-freenect-launch
</syntaxhighlight>
You may need to reboot the robot to correctly install the camera drivers.
To start publishing the Kinect camera data, run the launch file from freenect_launch:
<syntaxhighlight lang="bash" enclose="div">
    $ roslaunch freenect_launch freenect.launch rgb_frame_id:=camera_rgb_optical_frame depth_frame_id:=camera_depth_optical_frame
</syntaxhighlight>
</tab>
<tab title="ROS Indigo Xtion">
<syntaxhighlight lang="bash" enclose="div">
    $ sudo apt-get install libopenni0 libopenni-sensor-primesense0 ros-indigo-openni-camera ros-indigo-openni-launch
</syntaxhighlight>
 
You may need to reboot the robot to correctly install the camera drivers.
To start publishing the Xtion camera data, run the launch file from openni_launch:
<syntaxhighlight lang="bash" enclose="div">
    $ roslaunch openni_launch openni.launch rgb_frame_id:=camera_rgb_optical_frame depth_frame_id:=camera_depth_optical_frame
</syntaxhighlight>
</tab>
</tabs>
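Either way, you can quickly confirm that the driver is up before moving on. A minimal check, assuming the default /camera namespace used by these launch files:
<syntaxhighlight lang="bash" enclose="div">
    # list the topics the camera driver is advertising
    $ rostopic list | grep /camera
    # confirm depth data is actually streaming (should report a steady rate)
    $ rostopic hz /camera/depth/points
</syntaxhighlight>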
  
==== Visualizing the Depth Sensor ====
You can now publish a static tf transform connecting the depth sensor to a frame on the robot like so:
<syntaxhighlight lang="bash" enclose="div">
    $ rosrun tf static_transform_publisher <x> <y> <z> <qx> <qy> <qz> <qw> <parent frame> /camera_link 50
</syntaxhighlight>
If you launch rviz, add a Camera display linked to /camera/rgb/image_color, and add a TF display (deselecting Show Names and Show Axes), you should be able to see the output from the sensor, with an icon indicating its position relative to the robot. For example, using the transform:
<syntaxhighlight lang="bash" enclose="div">
    $ rosrun tf static_transform_publisher 1 0 0 .1 0 0 0 /torso /camera_link 50
</syntaxhighlight>
[[File:kinect_tf.png|600px|Kinect TF example]]
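To confirm the transform is being broadcast without opening rviz, tf_echo will print the current transform between the two frames (assuming the frame names used above):
<syntaxhighlight lang="bash" enclose="div">
    $ rosrun tf tf_echo /torso /camera_link
</syntaxhighlight>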
  
In a separate RSDK shell, you can use image_view to look at the image disparity data:
<source lang="bash">
   $ rosrun image_view disparity_view image:=/camera/depth/disparity
</source>
You can also just look at the rgb image:
<source lang="bash">
   $ rosrun image_view image_view image:=/camera/rgb/image_color
</source>
 
<br /> <br />
If this all works, then you should be good to go!

== Using a Depth Sensor in MoveIt! ==
It's possible to integrate depth sensor data with MoveIt! so that it can plan paths for Baxter in dynamic environments. This functionality requires an up-to-date version of MoveIt!, so you should first update to the [[#Installation.2FPrerequisites|latest]] MoveIt configuration package. At this time, there are no drivers available to integrate either Linux or ROS with the Xbox One Kinect or the K4W2, so these instructions are for the original Xbox Kinect, the K4W, and the Primesense Xtion Pro.

'''Make sure you have installed all the MoveIt! ROS packages described in the [http://sdk.rethinkrobotics.com/wiki/MoveIt_Tutorial MoveIt! Tutorial]'''
  
 
==== Integrating with MoveIt! ====
===== Depth Sensor to Base Transform =====
You'll probably first want to specify where you've placed your depth sensor relative to Baxter. We set up a static transform publisher in our MoveIt! launch files to do this for us. By default, the camera pose is set to:
 
<code>
   #    x     y     z   yaw  pitch   roll  parent_frame
   #  0.15  0.075  0.5  0.0  0.7854  0.0   /torso
</code>
If you're fine with this location for now, you can leave it as is. If not, you can supply the camera_link_pose argument with whatever transform to /torso you desire:
<source lang="bash">
  # To change the transform between /camera_link and /torso, users can override the transform between camera and robot
  $ roslaunch baxter_moveit_config demo_kinect.launch camera_link_pose:="1.0 0.0 0.0 0.0 0.0 0.0"
</source>
More information on this transform publisher can be found on ROS's [http://wiki.ros.org/tf#static_transform_publisher wiki]
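For reference, the default pose above is equivalent to broadcasting the following static transform by hand (argument order is x y z yaw pitch roll, matching the table above, followed by the parent and child frames and a publish period in milliseconds):
<source lang="bash">
  $ rosrun tf static_transform_publisher 0.15 0.075 0.5 0.0 0.7854 0.0 /torso /camera_link 100
</source>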
  
===== Launching JTAS and MoveIt! together =====
After doing this, you can run MoveIt! using input from your depth sensor. You should not be running your own Freenect/OpenNI server anymore, but you should have the joint trajectory action server running in an RSDK shell:
<source lang="bash">
  $ rosrun baxter_interface joint_trajectory_action_server.py
</source>
And in a separate <code>baxter.sh</code> Terminal, run the demo for your sensor:
<tabs>
<tab title="Kinect">
<source lang="bash">
   $ roslaunch baxter_moveit_config demo_kinect.launch
</source>
Alternatively, you could run the regular MoveIt! demo and pass in a kinect argument:
<source lang="bash">
   $ roslaunch baxter_moveit_config demo_baxter.launch kinect:=true
</source>
</tab>
<tab title="Xtion">
<source lang="bash">
  $ roslaunch baxter_moveit_config demo_xtion.launch
</source>
Alternatively, you could run the regular MoveIt! demo and pass in an xtion argument:
<source lang="bash">
  $ roslaunch baxter_moveit_config demo_baxter.launch xtion:=true
</source>
</tab>
</tabs>
  
After launching rviz, you should be able to see the input data from your depth sensor in the environment. Self-filtering should be performed for you to ensure that the sensor doesn't consider Baxter to be part of the planning environment. You can now do motion planning for Baxter using the depth data.
<br />
[[File:Kinect_moveit_plan_1.png|600px|A motion plan for Baxter with input from the kinect]]
The floating set of axes in the image above shows the camera location - it is not in the default location.
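If the point cloud or octomap does not show up in rviz, a quick sanity check is to confirm that move_group is subscribed to the cloud topic configured in the sensor yaml (the topic below is the default from the Parameters section; adjust it if you have changed the yaml):
<source lang="bash">
  $ rostopic info /camera/depth_registered/points
  # move_group should appear in the list of subscribers
</source>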
  
 
==== Simulator ====
If you want to use the kinect data with our MoveIt! simulator, that's also possible with the same command. Just be careful to run the simulator in an RSDK shell set to 'sim': <code>./baxter.sh sim</code>
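A minimal sketch of that workflow, assuming "the same command" refers to the demo launch shown above:
<source lang="bash">
  # start a shell configured for the simulated robot
  $ ./baxter.sh sim
  # then launch the MoveIt! demo with the depth sensor enabled
  $ roslaunch baxter_moveit_config demo_baxter.launch kinect:=true
</source>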
  
 
==== Parameters ====
<tabs>
<tab title="Kinect">
There are a couple of settings that you may want to adjust for your use case, which can be done through the kinect yaml file:
<source lang="bash">
  $ rosed baxter_moveit_config kinect_sensor.yaml
</source>
Where you'll see the following information:
<code>
   sensors:
   - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
     point_cloud_topic: /camera/depth_registered/points
     max_range: 5.0
     padding_offset: 0
     padding_scale: 3.0
     frame_subsample: 1
     point_subsample: 1
</code>
Information about these settings can be found on the [http://moveit.ros.org/wiki/3D_Sensors MoveIt! wiki]
</tab>
<tab title="Xtion">
There are a couple of settings that you may want to adjust for your use case, which can be done through the xtion yaml file:
<source lang="bash">
  $ rosed baxter_moveit_config xtion_sensor.yaml
</source>
Where you'll see the following information:
<code>
  sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth_registered/points
    max_range: 4.0
    padding_offset: 0
    padding_scale: 3.0
    frame_subsample: 1
    point_subsample: 10
</code>
Information about these settings can be found on the [http://moveit.ros.org/wiki/3D_Sensors MoveIt! wiki]
</tab>
</tabs>
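Whichever sensor you use, you can check that move_group picked up these settings once the demo is running (this assumes the yaml is loaded into the move_group namespace, as in the standard MoveIt! sensor manager launch files):
<source lang="bash">
  $ rosparam get /move_group/sensors
</source>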
