Clearing local costmap in ROS when using PointCloud2

Published on June 16, 2018 under Blog

Recently, I finished developing a simple framework for autonomous exploration of 3D environments using drones in ROS. It was my final project for Caltech's ME 134 Autonomy class. I was using the move_base navigation stack and ran into a weird issue where my local costmap would correctly display obstacles as they appeared, but would not get cleared once the obstacles moved out of the drone's field of view. In this article I'll talk about how I fixed this issue - hopefully it will help someone out there facing the same problem.

Problem statement

This problem is a bit hard to explain, so I'm going to include some animations to illustrate it. It's important that you understand exactly what my issue was - there are many different reasons why parts of the move_base stack can fail, and I don't want you to waste time reading about my solution if it isn't relevant to you.

The drone and local costmap

The type of sensors you use for your navigation stack matters a lot - in my project, I used a simulated drone with a Kinect attached to it (from the hector_quadrotor package). The Kinect published depth information as a point cloud, i.e. messages of type sensor_msgs/PointCloud2. Below you can see how the drone and the Kinect depth data looked in rviz. Even if you're using LaserScan instead of a point cloud, this article might still be relevant to you.

Desired result

The animation below shows the result we want to get. The white square around the drone represents the local costmap, which is populated using the data from the depth cloud. Note that the costmap gets cleared when the obstacle is no longer in the way.

Incorrect costmap behaviour

Now the actual issue I encountered: note how in the animation below the costmap does not get cleared once the drone flies away from the wall. This is a big problem because it gets in the way of the local planner, which can fail because it is working with an incorrect costmap.

The solution

There are several reasons why your costmap might not get updated. Below are the steps I took to get it working.

Step 1: Getting the obstacle height right

When running your navigation stack (i.e. move_base) you might see the following warning in the console output:

[WARN] [1529117350.880168708, 781.744000000]: The origin for the
sensor at (0.05, -0.02, 2.42) is out of map bounds. So, the costmap
cannot raytrace for it.

One possible (trivial) reason could be that you're using an incorrect transform for your sensor - I will assume your transform is correct. The actual issue could be that there is some displacement along the Z-axis between your sensor and the costmap. Since I had a drone, I made it hover at an altitude of 1 metre above the ground, which meant the costmap had to be generated at approximately the same level. I had to make the following changes to my costmap_common_params.yaml (GitHub link if you want to see the full config):

max_obstacle_height: 1.2
# ...
map_type: voxel
obstacle_layer:
  enabled: true
  # ...
  origin_z: 0.8
  z_resolution: 0.4
  z_voxels: 1
  publish_voxel_map: false
  observation_sources: depth
  depth:
    data_type: PointCloud2
    topic: /camera/depth/points
    marking: true
    clearing: true
    min_obstacle_height: 0.8
    max_obstacle_height: 1.2
# ...

Note that the Z-origin of the obstacle layer is now at 0.8 metres, it is 1 voxel cell tall, and each cell is 0.4 metres high. Putting everything together, I essentially have a 2D costmap that considers obstacles at heights between 0.8 and 1.2 metres. The Kinect sensor on my drone was always at around 1 m altitude, so the costmap was always able to raytrace for it.
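To sanity-check this kind of configuration, it helps to compute the vertical band the voxel grid actually covers from origin_z, z_resolution and z_voxels. A quick sketch (plain Python, not move_base code) using the values from the config above:

```python
# Vertical extent covered by the voxel obstacle layer.
# Values mirror costmap_common_params.yaml above.
origin_z = 0.8      # bottom of the voxel grid, metres
z_resolution = 0.4  # height of one voxel cell, metres
z_voxels = 1        # number of cells stacked vertically

z_min = origin_z
z_max = origin_z + z_voxels * z_resolution
print(f"Obstacle layer covers {z_min:.1f} m to {z_max:.1f} m")
# A sensor hovering at ~1 m sits inside this band, so the
# costmap can raytrace for it without the "out of map bounds" warning.
```

If your sensor's altitude falls outside the `[z_min, z_max]` band, you'll keep seeing the warning above regardless of the other settings.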

Aside: Make sure your obstacle layer is a part of the local costmap. For example, in my local_costmap_params.yaml (GitHub link) I have the following specified:

local_costmap:
  # ...
  static_map: false
  rolling_window: true
  # ...
  plugins:
    - {name: obstacle_layer,      type: "costmap_2d::VoxelLayer"}
    - {name: inflation_layer,     type: "costmap_2d::InflationLayer"}

Step 2: Clearing the costmap using LaserScan

Following the instructions above might have made the warning disappear, but chances are your costmap still doesn't get cleared correctly.

In the previous step we specified marking: true and clearing: true for our depth point cloud data source. Marking means using the point cloud to add obstacles to the costmap; clearing means using it to remove obstacles from the costmap. Evidently, the clearing functionality isn't working.
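The two flags can be illustrated with a toy 1-D costmap: each sensor reading marks the cell where the ray hit something, and clears every cell the ray passed through on the way there. This is a heavily simplified sketch of the idea behind costmap_2d's raytracing, not its actual implementation:

```python
# Toy 1-D illustration of marking vs clearing along a sensor ray.
FREE, UNKNOWN, OCCUPIED = 0, -1, 100

def apply_reading(costmap, hit_index):
    """Mark the cell the ray hit; clear every cell it traversed first."""
    for i in range(hit_index):    # clearing: the ray passed through -> free
        costmap[i] = FREE
    if hit_index < len(costmap):  # marking: the ray ended here -> obstacle
        costmap[hit_index] = OCCUPIED
    return costmap

costmap = [UNKNOWN] * 6
apply_reading(costmap, 3)  # obstacle seen at cell 3
print(costmap)             # [0, 0, 0, 100, -1, -1]
apply_reading(costmap, 5)  # obstacle moved further away
print(costmap)             # [0, 0, 0, 0, 0, 100] - old mark at 3 cleared
```

The broken behaviour from the animation corresponds to the clearing loop never running: marks accumulate and nothing ever resets them to free.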

A possible reason could be that costmap_2d doesn't know that it should interpret "no points" as "free space". There is a way to enable this logic, but it only works for LaserScan data, so we'll need to convert our PointCloud2 into a LaserScan. We can use the pointcloud_to_laserscan package to carry out this conversion: install it using sudo apt install ros-kinetic-pointcloud-to-laserscan (or your ROS version equivalent) and run the conversion node as follows:

<launch>
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
    <!-- See all params at http://wiki.ros.org/pointcloud_to_laserscan -->
      
    <!-- Min and max height to sample from depth data - these values worked for my drone -->
    <param name="min_height" value="0.3"/>
    <param name="max_height" value="1.7"/>
      
    <!-- Min and max range of the generated laser scan - set these to match your depth sensor -->
    <param name="range_min" value="0.2"/>
    <param name="range_max" value="4.0"/>
    
    <!-- Frame of your depth sensor -->
    <param name="target_frame" value="camera_link"/>

    <!-- Topic from which to read PointCloud2 -->
    <remap from="cloud_in" to="/camera/depth/points"/>
    
    <!-- Topic to which LaserScan will be published -->
    <remap from="scan" to="/scan"/>
  </node>
</launch>

Once you've specified the correct settings, you can visualise your PointCloud2 and LaserScan in rviz. You should get something that looks like the animation below - note the red LaserScan points generated from the depth data.

Finally, you need to tweak your obstacle layer's observation_sources to use the new laser scan data; an example is shown below. The most important part is inf_is_valid: true. This tells costmap_2d to treat "no reading" from the laser scan as free space, which is exactly what we want. This works because the pointcloud_to_laserscan_node we launched earlier publishes +inf values by default when it doesn't see any depth cloud data at a particular angle.

# ...
obstacle_layer:
  # ...
  observation_sources: scan
  scan:
    data_type: LaserScan
    topic: scan # Your LaserScan topic
    marking: true
    clearing: true
    inf_is_valid: true # This parameter does the trick!
    min_obstacle_height: 0.8
    max_obstacle_height: 1.2
# ...
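To see why the flag matters, here is a sketch of the two behaviours: with inf_is_valid off, an inf reading is discarded and the costmap learns nothing along that ray; with it on, inf is treated like a max-range reading that still clears. Again, this is a simplification of the actual costmap_2d logic, not its real code:

```python
import math

def usable_readings(ranges, range_max, inf_is_valid):
    """Return (range, clears_only) pairs the costmap can use.

    clears_only=True means the ray clears cells but marks no obstacle,
    which is how an inf reading behaves when inf_is_valid is true.
    """
    out = []
    for r in ranges:
        if math.isinf(r):
            if inf_is_valid:
                out.append((range_max, True))  # clear along the full ray
            # else: the reading is dropped, nothing gets cleared
        else:
            out.append((r, False))
    return out

scan = [1.5, math.inf, 2.0]
print(usable_readings(scan, range_max=4.0, inf_is_valid=False))
# [(1.5, False), (2.0, False)] - the inf ray is thrown away
print(usable_readings(scan, range_max=4.0, inf_is_valid=True))
# [(1.5, False), (4.0, True), (2.0, False)] - inf clears up to range_max
```

With the flag off, any direction where the obstacle has moved out of view produces an inf that is silently dropped - which is exactly the "stale obstacle" behaviour from the animation.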

Step 3: Recovery behaviours

If you've done steps 1 and 2 correctly, your problems should be fixed by now. Nevertheless, it's worth specifying some recovery behaviours for your navigation planners. This is also useful if (for some reason) you can't do step 2 and must rely on PointCloud2 data only.

Recovery behaviours are triggered as a last resort when your local planner fails to generate a plan. Since we're concerned with clearing the local costmap, we'll add that logic to our recovery behaviours. In my case, I added the following to my move_base_params.yaml (GitHub link):

# ...
recovery_behaviors:
  - name: 'conservative_reset'
    type: 'clear_costmap_recovery/ClearCostmapRecovery'
  - name: 'aggressive_reset'
    type: 'clear_costmap_recovery/ClearCostmapRecovery'
conservative_reset:
  reset_distance: 0.1
  layer_names: ['obstacle_layer']
aggressive_reset:
  reset_distance: 0.0
  layer_names: ['obstacle_layer']

It is very important to specify obstacle_layer as one of the layer_names, or the costmap won't get reset correctly. You might want to increase reset_distance for conservative_reset if you don't want to clear the full costmap.
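As a mental model, reset_distance is the side length of a square centred on the robot: obstacles outside that square are cleared back to free space, while everything inside it is preserved. The sketch below illustrates this on a small grid (my own simplification, not the actual clear_costmap_recovery plugin code, and the grid/cell parameters are made up for the example):

```python
# Sketch of ClearCostmapRecovery's reset_distance: obstacles outside
# a square of side reset_distance centred on the robot are cleared.
FREE, OCCUPIED = 0, 100

def clear_outside_square(costmap, robot_xy, reset_distance, cell_size=1.0):
    half = reset_distance / 2.0
    rx, ry = robot_xy
    for y, row in enumerate(costmap):
        for x, _ in enumerate(row):
            dx = abs(x * cell_size - rx)
            dy = abs(y * cell_size - ry)
            if dx > half or dy > half:  # outside the preserved square
                costmap[y][x] = FREE
    return costmap

grid = [[OCCUPIED] * 5 for _ in range(5)]
# A 2 m square around the robot survives; the outer ring is wiped.
clear_outside_square(grid, robot_xy=(2.0, 2.0), reset_distance=2.0)
print(sum(cell == OCCUPIED for row in grid for cell in row))  # 9
```

This is why a larger reset_distance makes conservative_reset "conservative": more of the costmap near the robot is kept, and only distant (likely stale) obstacles are discarded.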


Timur Kuzhagaliyev (Author)

I'm a computer science graduate from UCL & Caltech, working as a systems engineer at Jump Trading. Before Jump, I was doing computer vision for the Video Quality Analysis team at Amazon.