A comparative study of semantic segmentation using omnidirectional images

Conference Paper, Year: 2020


Abstract

The semantic segmentation of omnidirectional urban driving images is a research topic that has attracted increasing attention. This paper presents a thorough comparative study of different neural network models trained on four different representations: perspective, equirectangular, spherical and fisheye. In this study we use real perspective images; synthetic perspective, fisheye and equirectangular images; and a test set of real fisheye images. We evaluate the performance of convolution on spherical images and on perspective images. Analyzing the results of this study yields multiple conclusions that help in understanding how different networks learn to deal with omnidirectional distortions. Our main finding is that models trained on omnidirectional images are robust to modality changes and are able to learn a universal representation, giving good results on both perspective and omnidirectional images. The relevance of all results is examined through an analysis of quantitative measures.
Main file: RFIAP_2020_paper_47.pdf (4.74 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03088368, version 1 (26-12-2020)

Identifiers

  • HAL Id: hal-03088368, version 1

Cite

Ahmed Rida Sekkat, Yohan Dupuis, Paul Honeine, Pascal Vasseur. A comparative study of semantic segmentation using omnidirectional images. Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), Jun 2020, Vannes, France. ⟨hal-03088368⟩
153 views
467 downloads
