
The OmniScape Dataset

Ahmed Rida Sekkat, Yohan Dupuis, Pascal Vasseur, Paul Honeine

Abstract

Despite the utility and benefits of omnidirectional images in robotics and automotive applications, no datasets of omnidirectional images are available with semantic segmentation, depth maps, and dynamic properties. This is due to the time and human effort required to annotate ground-truth images. This paper presents a framework for generating omnidirectional images from images acquired in a virtual environment. We demonstrate the relevance of the proposed framework on two well-known simulators: the CARLA Simulator, an open-source simulator for autonomous driving research, and Grand Theft Auto V (GTA V), a video game with very high visual quality. We describe in detail the generated OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, together with semantic segmentation, depth maps, the intrinsic parameters of the cameras, and the dynamic parameters of the motorcycle. It is worth noting that two-wheeled vehicles are more challenging than cars due to their specific dynamics.
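The dataset ships the cameras' intrinsic parameters, which are what relate a fisheye pixel to a 3D viewing ray. As a rough illustration only (not the paper's actual projection code, and using the standard equidistant model r = f·θ as an assumption), the back-projection can be sketched as:

```python
import numpy as np

def fisheye_pixel_to_ray(u, v, cx, cy, f):
    """Map a fisheye pixel (u, v) to a unit 3D viewing ray,
    assuming the equidistant projection model r = f * theta.
    (cx, cy) is the principal point, f the focal length in pixels.
    (Illustrative sketch; the paper's cameras may use a different model.)"""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)           # radial distance from the principal point
    theta = r / f                  # angle from the optical axis (equidistant model)
    phi = np.arctan2(dv, du)       # azimuth around the optical axis
    # Ray in camera coordinates, z along the optical axis (unit norm by construction).
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# The principal point looks straight down the optical axis:
ray = fisheye_pixel_to_ray(512, 512, 512.0, 512.0, 320.0)  # → [0, 0, 1]
```

Together with the provided depth maps, such rays turn each fisheye pixel into a 3D point, which is what makes the combination of intrinsics, depth, and segmentation in the dataset usable for geometric tasks.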
Main file: 20.icra.pdf (1.21 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03088300, version 1 (26-12-2020)

Identifiers

HAL Id: hal-03088300
DOI: 10.1109/ICRA40945.2020.9197144

Cite

Ahmed Rida Sekkat, Yohan Dupuis, Pascal Vasseur, Paul Honeine. The OmniScape Dataset. 2020 IEEE International Conference on Robotics and Automation (ICRA), May 2020, Paris, France. pp.1603-1608, ⟨10.1109/ICRA40945.2020.9197144⟩. ⟨hal-03088300⟩