
Deep Multi-Task Learning with evolving weights


Abstract

Pre-training of deep neural networks has largely been abandoned in the last few years, mainly because of the difficulty of controlling overfitting and of tuning the consequently increased number of hyper-parameters. In this paper, we use a multi-task learning framework that combines weighted supervised and unsupervised tasks. We propose to evolve the task weights over the learning epochs, avoiding the abrupt break between tasks that occurs in the sequential transfer of the pre-training scheme. This framework also allows the use of unlabeled data. Extensive experiments on MNIST showed interesting results.
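The idea described in the abstract can be sketched as a combined loss whose task weights change with the epoch. The schedule below is a minimal illustrative assumption (a linear interpolation from mostly-unsupervised to mostly-supervised), not the paper's actual schedule; the function names `evolving_weights` and `total_loss` are hypothetical.

```python
def evolving_weights(epoch, num_epochs):
    """Assumed linear schedule: start with full weight on the unsupervised
    (e.g. reconstruction) task and shift smoothly toward the supervised
    task as training progresses. The paper's exact schedule may differ."""
    lam = epoch / max(1, num_epochs - 1)  # goes from 0.0 to 1.0
    w_sup = lam
    w_unsup = 1.0 - lam
    return w_sup, w_unsup

def total_loss(loss_sup, loss_unsup, epoch, num_epochs):
    """Weighted sum of the supervised and unsupervised losses,
    with weights that evolve along the epochs."""
    w_sup, w_unsup = evolving_weights(epoch, num_epochs)
    return w_sup * loss_sup + w_unsup * loss_unsup
```

At epoch 0 the objective is purely unsupervised, and at the final epoch it is purely supervised, so there is no hard switch between a pre-training phase and a fine-tuning phase.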

Dates and versions

hal-02345855 , version 1 (04-11-2019)

Identifiers

  • HAL Id : hal-02345855 , version 1

Cite

Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam. Deep Multi-Task Learning with evolving weights. European Symposium on Artificial Neural Networks (ESANN), Apr 2016, Brugge, Belgium. ⟨hal-02345855⟩