Deep Multi-Task Learning with evolving weights

Abstract: Pre-training of deep neural networks has been abandoned over the last few years. The main reason is the difficulty of controlling overfitting and of tuning the consequently increased number of hyper-parameters. In this paper we use a multi-task learning framework that combines weighted supervised and unsupervised tasks. We propose to evolve the weights along the learning epochs in order to avoid the break in the sequential transfer learning used in the pre-training scheme. This framework allows the use of unlabeled data. Extensive experiments on MNIST show interesting results.
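A minimal sketch of the idea described in the abstract: a shared encoder feeds a supervised head (classifier) and an unsupervised head (reconstruction decoder), and the two losses are combined with weights re-scheduled at every epoch instead of being optimized in two sequential phases. The network sizes, the linear weight schedule, and the synthetic MNIST-shaped batches below are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of a multi-task objective with evolving weights
    # (assumed architecture and schedule; not the paper's exact setup).
    import torch
    import torch.nn as nn

    # Shared encoder, supervised head (classifier), unsupervised head (decoder).
    encoder    = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
    classifier = nn.Linear(256, 10)
    decoder    = nn.Linear(256, 784)

    params = (list(encoder.parameters())
              + list(classifier.parameters())
              + list(decoder.parameters()))
    optimizer = torch.optim.SGD(params, lr=0.01)
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

    def loss_weights(epoch, n_epochs):
        # One possible (linear) schedule: the unsupervised weight decays
        # from 1 to 0 while the supervised weight grows from 0 to 1,
        # replacing the hard pre-training / fine-tuning break.
        w_unsup = max(0.0, 1.0 - epoch / n_epochs)
        return 1.0 - w_unsup, w_unsup

    # Placeholder mini-batches standing in for labeled and unlabeled MNIST.
    x_lab, y_lab = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
    x_unlab      = torch.randn(64, 1, 28, 28)

    n_epochs = 50
    for epoch in range(n_epochs):
        w_sup, w_unsup = loss_weights(epoch, n_epochs)

        # Labeled batch: both tasks contribute, each scaled by its weight.
        code = encoder(x_lab)
        loss = (w_sup * ce(classifier(code), y_lab)
                + w_unsup * mse(decoder(code), x_lab.flatten(1)))

        # Unlabeled batch: only the reconstruction task is available,
        # which is how the framework exploits unlabeled data.
        code_u = encoder(x_unlab)
        loss = loss + w_unsup * mse(decoder(code_u), x_unlab.flatten(1))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Because both losses are always present (only their weights change), the shared encoder never stops receiving gradient from either task, which is the motivation for evolving the weights rather than pre-training and then fine-tuning.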
Document type:
Conference paper

https://hal-normandie-univ.archives-ouvertes.fr/hal-02345855
Contributor: Romain Hérault
Submitted on: Monday, 4 November 2019 - 16:32:26
Last modified on: Tuesday, 5 November 2019 - 01:26:55

Identifiers

  • HAL Id : hal-02345855, version 1

Citation

Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam. Deep Multi-Task Learning with evolving weights. European Symposium on Artificial Neural Networks (ESANN), Apr 2016, Brugge, Belgium. ⟨hal-02345855⟩
