The OmniScape Dataset
Abstract
Despite the utility and benefits of omnidirectional images in robotics and automotive applications, no publicly available dataset of omnidirectional images provides semantic segmentation, depth maps, and dynamic properties together. This is due to the time cost and human effort required to annotate ground-truth images. This paper presents a framework for generating omnidirectional images from images acquired in a virtual environment. For this purpose, we demonstrate the relevance of the proposed framework on two well-known simulators: the CARLA simulator, an open-source simulator for autonomous driving research, and Grand Theft Auto V (GTA V), a video game with highly realistic graphics. We describe in detail the resulting OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, together with semantic segmentation, depth maps, the intrinsic parameters of the cameras, and the dynamic parameters of the motorcycle. It is worth noting that the case of two-wheeled vehicles is more challenging than that of cars, owing to the specific dynamics of these vehicles.
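The abstract does not spell out the projection step, but the core idea, re-projecting conventional views rendered in the simulator into an omnidirectional camera model, can be illustrated with a short sketch. The snippet below maps six 90° cubemap faces (as could be captured in CARLA or GTA V) onto an equidistant fisheye image. The function name, the face axis convention, and the equidistant model are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fisheye_from_cubemap(faces, out_size=512, fov_deg=180.0):
    """Render an equidistant fisheye image from six cubemap faces.

    faces: dict mapping 'front', 'back', 'left', 'right', 'up', 'down'
    to square HxWx3 uint8 arrays. Camera frame: x right, y down, z forward.
    (Assumed convention; the OmniScape pipeline may differ.)
    """
    half_fov = np.deg2rad(fov_deg) / 2.0
    v, u = np.mgrid[0:out_size, 0:out_size].astype(np.float64)
    x = (u + 0.5) / out_size * 2.0 - 1.0   # normalized image coords in [-1, 1]
    y = (v + 0.5) / out_size * 2.0 - 1.0
    r = np.hypot(x, y)
    theta = r * half_fov                   # equidistant model: angle grows linearly with r
    phi = np.arctan2(y, x)
    d = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=-1)  # unit ray direction per output pixel

    # Each face is a 90-degree pinhole view, defined by (forward, right, down) axes.
    axes = {
        'front': ((0, 0, 1), (1, 0, 0), (0, 1, 0)),
        'back':  ((0, 0, -1), (-1, 0, 0), (0, 1, 0)),
        'right': ((1, 0, 0), (0, 0, -1), (0, 1, 0)),
        'left':  ((-1, 0, 0), (0, 0, 1), (0, 1, 0)),
        'up':    ((0, -1, 0), (1, 0, 0), (0, 0, 1)),
        'down':  ((0, 1, 0), (1, 0, 0), (0, 0, -1)),
    }
    out = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    for name, (fwd, right, down) in axes.items():
        img = faces[name]
        n = img.shape[0]
        df = d @ np.asarray(fwd, float)     # ray depth along the face's optical axis
        mask = (r <= 1.0) & (df > 1e-6)     # inside the image circle, in front of the face
        safe = np.where(df > 1e-6, df, 1.0)
        s = (d @ np.asarray(right, float)) / safe   # face-plane coords in [-1, 1]
        t = (d @ np.asarray(down, float)) / safe
        mask &= (np.abs(s) <= 1.0) & (np.abs(t) <= 1.0)
        px = np.clip(((s + 1.0) * 0.5 * n).astype(int), 0, n - 1)
        py = np.clip(((t + 1.0) * 0.5 * n).astype(int), 0, n - 1)
        out[mask] = img[py[mask], px[mask]]  # nearest-neighbor lookup on that face
    return out
```

Because only the per-pixel lookup changes, the same ray-casting loop extends to a catadioptric mirror model, and it applies unchanged to depth or semantic-segmentation cubemaps rendered alongside the RGB views.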
Domains
Statistics [stat] / Machine Learning [stat.ML]
Engineering Sciences [physics] / Signal and Image Processing
Mathematics [math] / Statistics [math.ST]
Computer Science [cs] / Signal and Image Processing
Computer Science [cs] / Neural and Evolutionary Computing [cs.NE]
Computer Science [cs] / Machine Learning [cs.LG]
Computer Science [cs] / Computers and Society [cs.CY]
Computer Science [cs] / Computer Vision and Pattern Recognition [cs.CV]
Computer Science [cs] / Artificial Intelligence [cs.AI]
Origin: Files produced by the author(s)