Search Results
- Creator:
- Sheppard, Anja, Sethuraman, Advaith V, Bagoren, Onur, Pinnow, Christopher, Anderson, Jamey, Havens, Timothy C, and Skinner, Katherine A
- Description:
- The AI4Shipwrecks dataset contains sidescan sonar images of shipwrecks and corresponding binary labels collected during 2022 and 2023 at the NOAA Thunder Bay National Marine Sanctuary in Alpena, MI. The data collection platform was an Iver3 Autonomous Underwater Vehicle (AUV) equipped with an EdgeTech 2205 dual-frequency ultra-high-resolution sidescan sonar and 3D bathymetric system. The labels were compiled from reference labels created by experts in marine archaeology. The intended use of this dataset is to encourage development of semantic segmentation, object detection, or anomaly detection algorithms in the computer vision field. Comparisons of state-of-the-art segmentation networks on our dataset are shown in the paper. The file structure is organized as described in the README.txt file: images in 'images' directories are the waterfall product of sidescan sonar surveys, and images in 'labels' directories are binary representations of expert labels. Images across the 'images' and 'labels' directories are correlated by identical filenames. In the label images, a pixel value of '0' represents the non-shipwreck/other class and '1' represents the shipwreck class for the correspondingly named image (<wreck_name>_<##>.png) in the images directory. The project webpage can be found at: https://umfieldrobotics.github.io/ai4shipwrecks/
- Keyword:
- machine learning, computer vision, field robotics, marine robotics, underwater robotics, sidescan sonar, semantic segmentation, and object detection
- Discipline:
- Engineering
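The description above specifies the pairing convention (identical filenames across the 'images' and 'labels' directories) and the label encoding (pixel value 0 = non-shipwreck/other, 1 = shipwreck). A minimal Python sketch of how one might pair the files and summarize a label mask, assuming a local download of the dataset at a hypothetical `root` path (the helper names `paired_files` and `shipwreck_fraction` are illustrative, not part of the dataset):

```python
from pathlib import Path

def paired_files(root):
    """Pair each sonar image with its label mask by identical filename,
    per the AI4Shipwrecks directory layout ('images' and 'labels')."""
    root = Path(root)
    pairs = []
    for img in sorted((root / "images").glob("*.png")):
        lbl = root / "labels" / img.name  # same filename in 'labels'
        if lbl.exists():
            pairs.append((img, lbl))
    return pairs

def shipwreck_fraction(label_pixels):
    """Fraction of pixels labeled shipwreck (1) vs. non-shipwreck/other (0)."""
    flat = [p for row in label_pixels for p in row]
    return sum(flat) / len(flat)

# Tiny synthetic 2x4 label mask standing in for a decoded label PNG:
mask = [[0, 0, 1, 1],
        [0, 1, 1, 1]]
print(shipwreck_fraction(mask))  # 5 of 8 pixels -> 0.625
```

Decoding the actual PNGs would require an image library (e.g., Pillow or OpenCV); the synthetic mask here only illustrates the 0/1 convention stated in the record.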
- Creator:
- Carlevaris-Bianco, Nicholas, Ushani, Arash, and Eustice, Ryan
- Description:
- This is a large-scale, long-term autonomy dataset for robotics research collected on the University of Michigan's North Campus. The dataset consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry collected using a Segway robot. The dataset was collected to facilitate research on long-term autonomous operation in changing environments. It comprises 27 sessions spaced approximately biweekly over the course of 15 months. The sessions repeatedly explore the campus, both indoors and outdoors, on varying trajectories and at different times of day across all four seasons. This allows the dataset to capture many challenging elements, including moving obstacles (e.g., pedestrians, bicyclists, and cars), changing lighting, varying viewpoints, seasonal and weather changes (e.g., falling leaves and snow), and long-term structural changes caused by construction projects. To further facilitate research, we also provide ground-truth pose for all sessions in a single frame of reference. A detailed description of the dataset and the methods used to generate it is in the document nclt.pdf. If you use this dataset in your research, please cite: Carlevaris-Bianco, N., Ushani, A., Eustice, R. (2021). The University of Michigan North Campus Long-Term Vision and LIDAR Dataset [Data set]. University of Michigan - Deep Blue. https://doi.org/10.7302/7rnm-6a03
- Keyword:
- Long-term SLAM, place recognition, lidar, computer vision, and field and service robotics
- Citation to related publication:
- Carlevaris-Bianco, Nicholas, et al. “University of Michigan North Campus Long-Term Vision and Lidar Dataset.” The International Journal of Robotics Research, vol. 35, no. 9, Aug. 2016, pp. 1023–1035, doi:10.1177/0278364915614638.
- Discipline:
- Engineering