Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation

Abstract: Generative models have recently received renewed attention as a result of adversarial learning. Generative adversarial networks consist of a sample-generation model and a discrimination model able to distinguish between genuine and synthetic samples. In combination with convolutional layers (for the discriminator) and de-convolutional layers (for the generator), they are particularly suitable for image generation, especially of natural scenes. However, the presence of fully connected layers adds global dependencies in the generated images. This may lead to large, global variations in the generated sample for small local variations in the input noise. In this work we propose to use architectures based on fully convolutional networks (including, among others, dilated layers), specifically designed to generate globally ergodic images, that is, images without global dependencies. Conducted experiments reveal that these architectures are well suited for generating natural textures such as geologic structures.
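The abstract argues that removing fully connected layers and using only (dilated) convolutions keeps every output pixel dependent on a local neighbourhood of the input noise, which is what makes the generated texture spatially ergodic and lets it be grown to arbitrary size. The following minimal PyTorch sketch illustrates that idea; the class name, layer widths, kernel sizes, and dilation rates are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

class FullyConvGenerator(nn.Module):
    # Hypothetical fully convolutional generator with dilated convolutions.
    # No fully connected layer is used, so each output pixel only depends
    # on a bounded local patch of the spatial noise map.
    def __init__(self, noise_channels=8, hidden=64, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_channels, hidden, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_channels, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, noise_channels, H, W) spatial noise map; the output has
        # the same spatial size, so larger textures are obtained simply by
        # feeding a larger noise map.
        return self.net(z)

# Usage sketch: sample a spatial noise map and generate a texture patch.
z = torch.randn(1, 8, 64, 64)
x = FullyConvGenerator()(z)   # -> tensor of shape (1, 1, 64, 64)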
Document type:
Conference paper


https://hal-normandie-univ.archives-ouvertes.fr/hal-02128358
Contributor: Cyprien Ruffino
Submitted on: Tuesday, May 14, 2019 - 11:20:55
Last modified on: Saturday, November 2, 2019 - 11:34:02

Identifiers

  • HAL Id: hal-02128358, version 1
  • arXiv: 1905.08613

Citation

Cyprien Ruffino, Romain Hérault, Eric Laloy, Gilles Gasso. Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation. Conférence sur l'Apprentissage Automatique, Jun 2018, Rouen, France. ⟨hal-02128358⟩


Metrics

  • Record views: 55
  • File downloads: 34