Neural Networks Regularization Through Class-wise Invariant Representation Learning

Abstract: Training deep neural networks is known to require a large number of training samples. However, in many applications only a few training samples are available. In this work, we tackle the issue of training neural networks for classification tasks when only a few training samples are available. We attempt to solve this issue by proposing a new regularization term that constrains the hidden layers of a network to learn class-wise invariant representations. In our regularization framework, learning invariant representations is generalized to class membership, so that samples of the same class should have the same representation. Numerical experiments on MNIST and its variants show that our proposal helps improve the generalization of neural networks, particularly when trained with few samples. We provide the source code of our framework at https://github.com/sbelharbi/learning-class-invariant-features
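To make the idea concrete, below is a minimal sketch in PyTorch of one plausible form of such a class-wise invariance regularizer: within each mini-batch, the hidden representations of samples sharing a label are pulled toward their class centroid, and this penalty is added to the usual supervised loss. The exact penalty form, the `lambda_inv` weight, and the `model` interface are illustrative assumptions, not the authors' method; see the repository linked above for the actual implementation.

```python
# A minimal sketch, assuming a simple within-class variance penalty.
# Not the authors' implementation; names below are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


def classwise_invariance_penalty(features: torch.Tensor,
                                 labels: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of each hidden representation to the centroid
    of its class within the mini-batch; zero for singleton classes."""
    penalty = features.new_zeros(())
    for c in labels.unique():
        fc = features[labels == c]        # representations of one class
        if fc.size(0) < 2:                # nothing to pull together
            continue
        centroid = fc.mean(dim=0, keepdim=True)
        penalty = penalty + ((fc - centroid) ** 2).sum(dim=1).mean()
    return penalty


# Illustrative training step: regularize a hidden layer with the penalty
# above (lambda_inv is a hypothetical trade-off hyperparameter).
def training_step(model: nn.Module, x, y, lambda_inv: float = 0.1):
    hidden, logits = model(x)             # assumes model exposes a hidden layer
    loss = F.cross_entropy(logits, y)
    loss = loss + lambda_inv * classwise_invariance_penalty(hidden, y)
    return loss
```

The centroid form is only one way to encode "same class, same representation"; pairwise distances between same-class samples would express the same constraint at a higher cost per batch.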
Document type:
Preprint, working paper

https://hal-normandie-univ.archives-ouvertes.fr/hal-02129472
Contributor: Romain Hérault
Submitted on: Tuesday, May 14, 2019 - 22:16:08
Last modified on: Wednesday, May 15, 2019 - 06:38:23


Identifiers

  • HAL Id: hal-02129472, version 1
  • arXiv: 1709.01867

Citation

Soufiane Belharbi, Clement Chatelain, Romain Hérault, Sébastien Adam. Neural Networks Regularization Through Class-wise Invariant Representation Learning. 2019. ⟨hal-02129472⟩
