A comparative study of semantic segmentation using omnidirectional images - Archive ouverte HAL
Conference Papers Year : 2020

A comparative study of semantic segmentation using omnidirectional images

Ahmed Rida Sekkat (1, 2), Yohan Dupuis (3), Paul Honeine (1), Pascal Vasseur (2)

Abstract

The semantic segmentation of omnidirectional urban driving images is a research topic that has attracted increasing attention. This paper presents a thorough comparative study of different neural network models trained on four different representations: perspective, equirectangular, spherical and fisheye. In this study, we use real perspective images and synthetic perspective, fisheye and equirectangular images, as well as a test set of real fisheye images. We evaluate the performance of convolution on spherical images and on perspective images. Analyzing the results of this study yields multiple conclusions that help in understanding how different networks learn to deal with omnidirectional distortions. Our main finding is that models trained on omnidirectional images are robust against modality changes and are able to learn a universal representation, giving good results on both perspective and omnidirectional images. The relevance of all results is examined with an analysis of quantitative measures.
Main file

RFIAP_2020_paper_47.pdf (4.74 MB)
Origin : Files produced by the author(s)

Dates and versions

hal-03088368, version 1 (26-12-2020)

Identifiers

  • HAL Id : hal-03088368, version 1

Cite

Ahmed Rida Sekkat, Yohan Dupuis, Paul Honeine, Pascal Vasseur. A comparative study of semantic segmentation using omnidirectional images. Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), Jun 2020, Vannes, France. ⟨hal-03088368⟩
118 Views · 352 Downloads
