Conference papers

Deep Multi-Task Learning with evolving weights

Abstract: Pre-training of deep neural networks has largely been abandoned in recent years. The main reasons are the difficulty of controlling overfitting and of tuning the increased number of hyper-parameters it introduces. In this paper we use a multi-task learning framework that combines weighted supervised and unsupervised tasks. We propose to evolve the weights along the learning epochs in order to avoid the break in sequential transfer that occurs in the pre-training scheme. This framework allows the use of unlabeled data. Extensive experiments on MNIST show promising results.
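The core idea in the abstract — a single loss that mixes a supervised and an unsupervised task, with mixing weights that evolve across epochs rather than switching abruptly as in pre-training — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the linear decay schedule and the function names (`evolving_weight`, `combined_loss`) are assumptions made here for clarity.

```python
def evolving_weight(epoch, total_epochs):
    """Weight of the unsupervised task at a given epoch.

    Hypothetical linear schedule: starts at 1.0 (fully unsupervised,
    as in pre-training) and decays smoothly to 0.0 (fully supervised),
    avoiding the abrupt break of sequential transfer learning.
    """
    return max(0.0, 1.0 - epoch / total_epochs)


def combined_loss(sup_loss, unsup_loss, epoch, total_epochs):
    """Multi-task loss: a convex combination of the two task losses,
    whose balance shifts from unsupervised to supervised over training."""
    lam = evolving_weight(epoch, total_epochs)
    return (1.0 - lam) * sup_loss + lam * unsup_loss
```

At epoch 0 the loss equals the unsupervised loss alone; by the final epoch it equals the supervised loss alone, with a continuous trade-off in between — which is what lets unlabeled data contribute throughout training.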

https://hal-normandie-univ.archives-ouvertes.fr/hal-02345855
Contributor: Romain Hérault
Submitted on: Monday, November 4, 2019 - 4:32:26 PM
Last modification on: Tuesday, November 5, 2019 - 1:26:55 AM

Identifiers

  • HAL Id: hal-02345855, version 1

Citation

Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam. Deep Multi-Task Learning with evolving weights. European Symposium on Artificial Neural Networks (ESANN), Apr 2016, Brugge, Belgium. ⟨hal-02345855⟩
