In practice, we found that dense reconstructions are most complete and of the highest quality on our mobile robotics platform when fusing data from multiple Velodyne lidars, SICK LMS-151 lidars, and stereo cameras, using our own datasets. We are releasing one such dataset to provide data from a realistic mobile-robotics platform carrying a variety of sensors, several of which are prime candidates for ground truth when testing the accuracy of reconstructions from other sensors. For example, the push-broom laser sensor, which excels at 3D urban reconstruction, can serve as a reference for comparing the reconstruction quality of monocular cameras, stereo cameras, the Velodynes, or a combination thereof; a sketch of one such comparison follows.
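As a concrete illustration, below is a minimal sketch of one way such a comparison could be scored: trimmed nearest-neighbour point-to-point error of a candidate reconstruction against a laser-derived reference cloud. The choice of metric, the trim fraction, and the array names are illustrative assumptions on our part, not part of the dataset:

    import numpy as np
    from scipy.spatial import cKDTree

    def reconstruction_error(reference, candidate, trim=0.95):
        """Score a candidate reconstruction against a ground-truth cloud.

        reference, candidate: (N, 3) and (M, 3) arrays of XYZ points
        expressed in a common frame. `trim` discards the largest
        residuals to limit the influence of regions the reference
        never observed.
        """
        tree = cKDTree(reference)
        dists, _ = tree.query(candidate)  # nearest reference point per candidate point
        dists = np.sort(dists)[: int(trim * len(dists))]
        return {"rmse": float(np.sqrt(np.mean(dists ** 2))),
                "median": float(np.median(dists))}

Holding the push-broom laser cloud fixed as the reference, the same score can then be computed for monocular, stereo, Velodyne, and fused reconstructions of the same scene.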

This dataset is ideal for benchmarking and evaluating large-scale dense reconstruction frameworks. It was collected in Oxford, UK at midday, and thus provides a representative urban environment with numerous pedestrians, bicycles, and vehicles visible to all sensors throughout the 1.6 km trajectory.

It includes data from the following sensors, which collectively provide a continuous 360° view around the vehicle:

  1. 1x Point Grey Bumblebee XB3 Stereo Camera (Color)
  2. 1x Point Grey Bumblebee2 Stereo Camera (Grayscale)
  3. 4x Point Grey Grasshopper2 Monocular Cameras (Color, Fisheye Lens)
  4. 2x Velodyne HDL-32E 3D lidars
  5. 3x SICK LMS-151 2D lidars

In addition, the following is provided to aid in processing the raw sensor data:

  1. Optimized SE(3) vehicle trajectory
  2. Undistorted, rectified stereo image pairs
  3. Undistorted mono images
  4. Camera intrinsics
  5. Extrinsic SE(3) transforms for all sensors (a sketch of how these compose with the trajectory follows this list)
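To illustrate how these pieces fit together, the sketch below composes an optimized vehicle pose with a sensor extrinsic to map points from a sensor frame into the world frame. It assumes 4x4 homogeneous SE(3) matrices and (N, 3) point arrays; the frame conventions and variable names are our own placeholders, not those of the development toolkit:

    import numpy as np

    def transform_points(T, points):
        """Apply a 4x4 homogeneous SE(3) transform to an (N, 3) point array."""
        homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
        return (T @ homog.T).T[:, :3]

    def sensor_to_world(T_world_vehicle, T_vehicle_sensor, points_sensor):
        """Chain the optimized vehicle pose at the sensor's timestamp with
        that sensor's extrinsic calibration, then transform the points."""
        T_world_sensor = T_world_vehicle @ T_vehicle_sensor
        return transform_points(T_world_sensor, points_sensor)

Note that scan and image times generally fall between trajectory samples, so interpolating the optimized trajectory to each sensor's timestamps before composing the transforms is left to the user.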

Finally, we provide example depth maps for the Bumblebee XB3, generated using the techniques described in this paper, to enable users to rapidly utilize the dataset with their existing dense reconstruction pipelines; a sketch of one way to consume these depth maps follows. All data is stored in a format similar to that of KITTI, along with a MATLAB development toolkit.
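As a starting point for consuming the example depth maps, the sketch below back-projects a metric depth image into a camera-frame point cloud under a rectified pinhole model. The intrinsic parameters fx, fy, cx, cy stand in for the published Bumblebee XB3 intrinsics and are placeholders here:

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project an (H, W) metric depth map into an (H*W, 3)
        point cloud in the camera frame."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        z = depth.ravel()
        x = (u.ravel() - cx) * z / fx
        y = (v.ravel() - cy) * z / fy
        valid = z > 0  # drop pixels with no depth estimate
        return np.stack([x, y, z], axis=1)[valid]

The resulting cloud can then be placed on the optimized trajectory using the extrinsic and pose composition sketched above.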