Archive ouverte HAL. Conference Papers, Year: 2018

Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation


Abstract

Generative models have recently received renewed attention as a result of adversarial learning. Generative adversarial networks consist of a sample-generation model and a discrimination model able to distinguish between genuine and synthetic samples. In combination with convolutional (for the discriminator) and de-convolutional (for the generator) layers, they are particularly suitable for image generation, especially of natural scenes. However, the presence of fully connected layers adds global dependencies in the generated images. This may lead to large, global variations in the generated sample for small local variations in the input noise. In this work we propose to use architectures based on fully convolutional networks (including, among others, dilated layers), architectures specifically designed to generate globally ergodic images, that is, images without global dependencies. Conducted experiments reveal that these architectures are well suited for generating natural textures such as geologic structures.
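To illustrate the idea of a fully convolutional generator with dilated layers, the sketch below (a hypothetical PyTorch illustration, not the authors' code; all layer sizes are assumptions) builds a generator with no fully connected layers. Because every layer is convolutional, the output spatial size simply follows the size of the input noise map, and no global dependencies are introduced; stacking increasing dilation rates grows the receptive field without dense connections.

```python
import torch
import torch.nn as nn

class DilatedFCGenerator(nn.Module):
    """Fully convolutional generator sketch: dilated convolutions
    enlarge the receptive field locally, and the absence of dense
    layers keeps the model free of global dependencies."""

    def __init__(self, noise_ch: int = 8, hidden: int = 32, out_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            # padding = dilation keeps the spatial size unchanged for 3x3 kernels
            nn.Conv2d(noise_ch, hidden, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_ch, kernel_size=3, padding=1),
            nn.Tanh(),  # images scaled to [-1, 1], as is common for GANs
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

g = DilatedFCGenerator()
# The same network handles noise maps of any spatial size:
z_small = torch.randn(1, 8, 16, 16)
z_large = torch.randn(1, 8, 64, 64)
print(g(z_small).shape)  # torch.Size([1, 1, 16, 16])
print(g(z_large).shape)  # torch.Size([1, 1, 64, 64])
```

Since the output size scales with the noise map, such a generator can be trained on small patches and later asked for arbitrarily large textures, which is what makes the fully convolutional design attractive for ergodic image generation.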

Dates and versions

hal-02128358, version 1 (14-05-2019)

Cite

Cyprien Ruffino, Romain Hérault, Eric Laloy, Gilles Gasso. Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation. Conférence sur l'Apprentissage (CAp2018), Jun 2018, Rouen, France. ⟨hal-02128358⟩