Work Description

Title: UWslam Dataset

Methodology
  • 1) Stereo Survey Dataset: A stereo SLAM evaluation dataset was collected with a diver-operated camera rig on a shallow coral reef at Lizard Island, Australia. The dataset was collected using a spiral survey technique that fully covered a circular area approximately 14 m in diameter, with natural sunlight providing the only illumination. The rectified stereo images are 1355x1002 pixels and were collected at 5 Hz. We refer to this dataset as LizardIsland. To obtain a ground truth comparison for evaluating our stereo SLAM method, we processed the dataset through COLMAP to generate a sparse 3D reconstruction with optimized camera poses. COLMAP does not fix the scale during optimization, so the reconstruction was rescaled in post-processing to match the mean baseline of the left and right stereo pairs to the calibrated value.

  • 2) Hybrid Vehicle-Manipulator Dataset: During a cruise in 2019, a hybrid dataset of synchronized vehicle-mounted stereo and wrist-mounted fisheye imagery was collected in natural deep ocean environments of the Costa Rican continental shelf margin with the ROV SuBastian, operated by the Schmidt Ocean Institute. We previously published the fisheye imagery portion of this data as the UWHandles dataset. For this work, we extended the dataset by processing four environmentally distinct stereo and fisheye image sequences for evaluation of our hybrid SLAM method. We refer to these sequences as Mounds1, Mounds2, Seeps1, and Seeps2. For these sequences, TagSLAM was used to obtain ground truth pose estimates for the stereo and fisheye cameras, based on the detection of AprilTag fiducials distributed in the scenes.

  • Meta repository for the dataset: https://github.com/gidobot/UWslamdataset
Description
  • UWslam is a dataset for underwater stereo and hybrid monocular fisheye + stereo SLAM in natural seafloor environments. The dataset includes a spiral survey of a shallow reef captured with a diver-operated stereo rig and four hybrid image sequences captured with an ROV in different deep ocean environments. Ground truth pose estimates for the spiral stereo trajectory were obtained by processing the image sequence through COLMAP. Ground truth pose estimates for the hybrid sequences were obtained by distributing fiducials on the seafloor before capturing each image sequence and processing the sequences with the ROS-based TagSLAM package.
Creator
  • Billings, Gideon H.
  • Johnson-Roberson, Matthew
Depositor
  • gidobot@umich.edu
Contact information
  • Gideon Billings, gidobot@umich.edu
Discipline
Funding agency
  • National Science Foundation (NSF)
  • National Aeronautics and Space Administration (NASA)
Keyword
Date coverage
  • 2019-01-01 to 2020-01-01
Citations to related material
  • G. Billings, R. Camilli and M. Johnson-Roberson, "Hybrid Visual SLAM for Underwater Vehicle Manipulator Systems," in IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6798-6805, July 2022, doi: 10.1109/LRA.2022.3176448.
Resource type
  • Dataset
Last modified
  • 03/27/2023
Published
  • 03/27/2023
Language
  • English
DOI
  • https://doi.org/10.7302/xb7v-x666
License
  • Creative Commons Public Domain Dedication (CC0 1.0)
To Cite this Work:
Billings, G. H., Johnson-Roberson, M. (2023). UWslam Dataset [Data set], University of Michigan - Deep Blue Data. https://doi.org/10.7302/xb7v-x666

Files (Count: 2; Size: 61.8 GB)

Date: 20 March 2023

Dataset Title: Underwater SLAM dataset (UWslam_dataset)

Dataset Creators: G. Billings, M. Johnson-Roberson

Dataset Contact: Gideon Billings gidobot@umich.edu

Funding: NNX16AL08G (NASA), IIS-1830660 (NSF), IIS-1830500 (NSF)

Key Points:
- This is a dataset for developing and testing vision-based Simultaneous Localization and Mapping (SLAM) methods in natural underwater environments.
- The dataset includes a spiral survey of a shallow reef captured with a diver-operated stereo camera.
- The dataset includes four synchronized hybrid image sequences from an ROV-mounted stereo camera paired with a manipulator-mounted fisheye camera, collected in deep ocean environments.

Research Overview:
Underwater vehicles must be capable of mapping and reconstructing their working environment to perform automated intervention tasks, such as sample collection. This image dataset was collected in natural underwater environments to aid the development and evaluation of underwater SLAM methods. Each image sequence includes globally referenced ground truth camera poses for every image frame. The hybrid image sequences were collected to develop SLAM methods that can fuse images from both vehicle-mounted cameras and independently positioned cameras (e.g., manipulator-mounted) into the same 3D scene reconstruction. This capability enables active viewpoint acquisition with the independent camera to fill incomplete areas of the scene reconstruction.

Methodology:
Ground truth camera poses for the spiral stereo survey were obtained by processing the image sequence through COLMAP. Because COLMAP does not fix scale during optimization, the reconstruction was rescaled in post-processing so that the mean baseline of the stereo pairs matches the calibrated value. Ground truth camera pose estimates for the hybrid sequences were obtained by distributing AprilTag fiducials on the seafloor before capturing an image sequence and then processing the image sequences through the ROS-based TagSLAM package.
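
As a concrete illustration of the scale normalization step, the following is a minimal Python sketch (numpy assumed; the function and variable names are illustrative and not part of the dataset tooling):

    import numpy as np

    def baseline_scale_factor(left_centers, right_centers, calibrated_baseline):
        # left_centers, right_centers: (N, 3) arrays of optimized COLMAP camera
        # centers for the left/right image of each stereo pair (illustrative inputs).
        # calibrated_baseline: metric baseline from the stereo calibration.
        mean_baseline = np.linalg.norm(left_centers - right_centers, axis=1).mean()
        return calibrated_baseline / mean_baseline

    # Multiplying all camera centers and sparse 3D points by this factor rescales
    # the reconstruction so the mean stereo baseline matches the calibrated value.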

Files Contained Here:
For each image in a synchronized frame, the naming convention is idx_<cam>.png, where idx increments from 0 for each image sequence. For a given image sequence, the timestamp in seconds of image frame idx is given by row idx of the times.txt file, indexed from 0 as the first row. The calibration yaml files for the left and right cameras of both the hybrid and stereo spiral sequences are in the format output by the ROS camera_calibration tool using a pinhole projection model (see the following wiki for a description of these parameters: http://wiki.ros.org/camera_calibration_parsers). The fisheye camera was calibrated with the Kalibr toolbox using the equidistant distortion model, and each calibration parameter is explicitly named in the fisheye_calib.yaml file.
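
As a minimal sketch of how these conventions might be read in practice, the snippet below pairs frame indices with timestamps and loads intrinsics from a ROS camera_calibration YAML file (Python with PyYAML and numpy assumed; the file paths are placeholders):

    import numpy as np
    import yaml

    def load_times(times_path):
        # Row i (0-indexed) holds the timestamp in seconds of frame idx == i.
        with open(times_path) as f:
            return [float(line) for line in f if line.strip()]

    def load_ros_calibration(yaml_path):
        # ROS camera_calibration YAML stores each matrix as {rows, cols, data}.
        with open(yaml_path) as f:
            calib = yaml.safe_load(f)
        K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)
        D = np.array(calib["distortion_coefficients"]["data"])
        P = np.array(calib["projection_matrix"]["data"]).reshape(3, 4)
        return K, D, P

    times = load_times("UWslam_dataset/stereo/times.txt")
    K, D, P = load_ros_calibration("UWslam_dataset/stereo/calibration/left_camera.yaml")
    print(f"{len(times)} frames, fx = {K[0, 0]:.1f}")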

For the hybrid sequences, <cam> is "left", "right", and "fish" for the left stereo, right stereo, and fisheye images, respectively. The camera_poses.txt files give the ground truth camera poses for each camera in a sequence as a globally referenced translation in meters with a rotation quaternion for both the fisheye and stereo cameras (the line format is given below).
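
A minimal parser for this line format might look like the following sketch (Python, numpy assumed; the helper name is illustrative):

    import numpy as np

    def load_hybrid_poses(path):
        # Each row: idx, fisheye translation + quaternion (w, x, y, z),
        # then stereo translation + quaternion, for 15 fields in total.
        poses = {}
        with open(path) as f:
            for line in f:
                v = line.split()
                if len(v) != 15:
                    continue  # skip blank or malformed rows
                vals = np.array(v[1:], dtype=float)
                poses[int(v[0])] = {
                    "fish_t": vals[0:3], "fish_q": vals[3:7],
                    "stereo_t": vals[7:10], "stereo_q": vals[10:14],
                }
        return poses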

For the stereo spiral sequence, <cam> is "0" and "1" for the left and right stereo images, respectively. The gt_poses.txt file gives the ground truth globally referenced translation for each stereo camera frame (note that the camera rotation is not included in the pose file).
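
Correspondingly, a sketch for reading the stereo survey trajectory file (same assumptions as above):

    import numpy as np

    def load_stereo_trajectory(path):
        # Each row: timestamp tx ty tz (no rotation is stored in this file).
        data = np.loadtxt(path, ndmin=2)
        return data[:, 0], data[:, 1:4]  # timestamps (s), positions (m)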

The SIFTvoc.txt vocabulary is used with a modified version of the DBoW2 library for indexing and converting images into a bag-of-words representation. The following link points to a modified version of DBoW2 that supports SIFT features:
https://github.com/gidobot/DBoW2
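
Converting images into bag-of-words vectors against SIFTvoc.txt is done with the (C++) DBoW2 fork linked above; as a small illustration of the feature extraction step that precedes it, SIFT descriptors can be computed with OpenCV (Python sketch; the image path is a placeholder):

    import cv2

    # Detect SIFT features of the kind indexed by the SIFTvoc.txt vocabulary.
    img = cv2.imread("UWslam_dataset/stereo/images/0_0.png", cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(f"{len(keypoints)} SIFT features, descriptor shape {descriptors.shape}")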

The dataset is organized in the following file structure:

UWslam_dataset
├── hybrid -> folder for hybrid image sequences
│   ├── calibration
│   │   ├── fisheye_calib.yaml -> calibration file for fisheye camera in OpenCV format
│   │   ├── left_camera.yaml -> calibration file for left stereo camera in ROS camera_calibration format
│   │   └── right_camera.yaml -> calibration file for right stereo camera in ROS camera_calibration format
│   └── <sequence> -> one folder per hybrid sequence (Mounds1, Mounds2, Seeps1, Seeps2)
│       ├── camera_poses.txt -> ground truth hybrid camera poses in line format: idx fish_tx fish_ty fish_tz fish_qw fish_qx fish_qy fish_qz stereo_tx stereo_ty stereo_tz stereo_qw stereo_qx stereo_qy stereo_qz
│       ├── times.txt -> image timestamps, where the row indexed from 0 corresponds to frame idx
│       └── images/raw -> folder containing rectified stereo and raw fisheye images
├── stereo -> folder for stereo survey sequence
│   ├── gt_poses.txt -> ground truth trajectory in line format: timestamp stereo_tx stereo_ty stereo_tz
│   ├── times.txt -> image timestamps
│   ├── calibration
│   │   ├── left_camera.yaml -> calibration file for left stereo camera
│   │   └── right_camera.yaml -> calibration file for right stereo camera
│   └── images -> folder containing rectified stereo images
└── sift_vocabulary
    └── SIFTvoc.txt -> SIFT DBoW2 vocabulary trained on underwater images
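
Tying the layout together, a loader for one hybrid sequence might be sketched as follows (Python; the dataset root and sequence name are placeholders following the tree above):

    from pathlib import Path

    def hybrid_sequence_paths(root, sequence):
        # Collect the files belonging to one hybrid sequence (e.g. Mounds1),
        # following the directory layout shown above.
        seq = Path(root) / "hybrid" / sequence
        return {
            "poses": seq / "camera_poses.txt",
            "times": seq / "times.txt",
            "images": sorted((seq / "images" / "raw").glob("*.png")),
            "fisheye_calib": Path(root) / "hybrid" / "calibration" / "fisheye_calib.yaml",
        }

    paths = hybrid_sequence_paths("UWslam_dataset", "Mounds1")
    print(len(paths["images"]), "synchronized images")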

Use and Access:
This dataset is made available under a Creative Commons Public Domain Dedication license (CC0 1.0).

To Cite Data:
Billings, G., Camilli, R., & Johnson-Roberson, M. (2022). Hybrid Visual SLAM for Underwater Vehicle Manipulator Systems. IEEE Robotics and Automation Letters, 7(3), 6798-6805. https://doi.org/10.1109/LRA.2022.3176448

