Generating Visual Representations for Zero-Shot Classification
Conference paper, 2017

Abstract

This paper addresses the task of learning an image classifier when some categories are defined only by semantic descriptions (e.g. visual attributes) while the others are also defined by exemplar images. This task is often referred to as Zero-Shot Classification (ZSC). Most previous methods rely on learning a common embedding space in which visual features of unknown categories can be compared with semantic descriptions. This paper argues that these approaches are limited because i) efficient discriminative classifiers can't be used, and ii) classification tasks mixing seen and unseen categories (Generalized Zero-Shot Classification, or GZSC) can't be addressed efficiently. In contrast, this paper proposes to address ZSC and GZSC by i) learning a conditional generator from the seen classes, and ii) generating artificial training examples for the categories without exemplars. ZSC is then turned into a standard supervised learning problem. Experiments with 4 generative models and 5 datasets validate the approach, giving state-of-the-art results on both ZSC and GZSC.
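The pipeline the abstract describes can be illustrated with a minimal, self-contained sketch. Everything below is hypothetical toy data: a least-squares linear map plus Gaussian noise stands in for the paper's learned conditional generative models, and a nearest-class-mean rule stands in for the discriminative classifier. It only shows the two-step idea: fit a generator attribute → feature on seen classes, then synthesize training examples for unseen classes so that classification becomes ordinary supervised learning over all classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's data): 6 classes, each
# described by a 4-dim semantic attribute vector; classes 4 and 5
# have no training images (the "unseen", zero-shot classes).
attrs = rng.normal(size=(6, 4))
seen, unseen = [0, 1, 2, 3], [4, 5]

# Unknown attribute-to-feature map, used only to simulate "images".
W_true = rng.normal(size=(4, 6))

def sample_features(c, n):
    """Simulated visual features for n images of class c."""
    return attrs[c] @ W_true + 0.05 * rng.normal(size=(n, 6))

# Step 1: fit a conditional generator on SEEN classes only.
X = np.vstack([sample_features(c, 50) for c in seen])
A = np.repeat(attrs[seen], 50, axis=0)
W_hat, *_ = np.linalg.lstsq(A, X, rcond=None)

def generate(c, n):
    """Artificial training features for class c, seen or unseen."""
    return attrs[c] @ W_hat + 0.05 * rng.normal(size=(n, 6))

# Step 2: generate examples for every class and train a standard
# supervised classifier (nearest class mean, for brevity).
means = {c: generate(c, 200).mean(axis=0) for c in seen + unseen}

def classify(x):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Zero-shot classification is now ordinary supervised inference:
# a real image of an unseen class is matched against ALL classes,
# which is exactly the generalized (GZSC) setting.
pred = classify(sample_features(4, 1)[0])
```

Because artificial examples exist for seen and unseen classes alike, the same classifier handles the generalized setting with no special treatment of unseen categories.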

Dates and versions

hal-01576222 , version 1 (22-08-2017)
hal-01576222 , version 2 (28-08-2017)
hal-01576222 , version 3 (11-12-2017)

Identifiers

Cite

Maxime Bucher, Stéphane Herbin, Frédéric Jurie. Generating Visual Representations for Zero-Shot Classification. International Conference on Computer Vision (ICCV), Oct 2017, Venice, Italy. ⟨hal-01576222v3⟩