Conference paper, Year: 2019

MFAS: Multimodal Fusion Architecture Search

Abstract

We tackle the problem of finding good architectures for multimodal classification problems. We propose a novel and generic search space that spans a large number of possible fusion architectures. To find an optimal architecture for a given dataset in the proposed search space, we leverage an efficient sequential model-based exploration approach tailored to the problem. We demonstrate the value of posing multimodal fusion as a neural architecture search problem through extensive experimentation on a toy dataset and two real multimodal datasets. We discover fusion architectures that exhibit state-of-the-art performance for problems of different domains and dataset sizes, including the NTU RGB+D dataset, the largest multimodal action recognition dataset available.
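
The abstract does not spell out how a fusion architecture is represented. As an illustrative sketch only (not the authors' code), one way to encode a sampled architecture is as a list of tuples that select a hidden layer from each of two pretrained unimodal networks plus a nonlinearity, one tuple per fusion layer. The FusionNet class, its dimensions, and all other names below are hypothetical assumptions made for the example.

```python
# Minimal sketch of instantiating a multimodal fusion architecture from a
# sampled search-space configuration. Hypothetical, for illustration only.
import torch
import torch.nn as nn


class FusionNet(nn.Module):
    """Builds fusion layers from a sampled configuration.

    config: list of (idx_a, idx_b, act) tuples, where idx_a / idx_b select
    which hidden layer of each unimodal network feeds the fusion layer,
    and act selects the nonlinearity.
    """

    ACTS = {"relu": nn.ReLU, "sigmoid": nn.Sigmoid}

    def __init__(self, config, dims_a, dims_b, fusion_dim, num_classes):
        super().__init__()
        self.config = config
        layers = []
        in_dim = 0  # the first fusion layer has no previous fusion state
        for idx_a, idx_b, act in config:
            layers.append(
                nn.Sequential(
                    nn.Linear(dims_a[idx_a] + dims_b[idx_b] + in_dim, fusion_dim),
                    self.ACTS[act](),
                )
            )
            in_dim = fusion_dim
        self.fusion_layers = nn.ModuleList(layers)
        self.classifier = nn.Linear(fusion_dim, num_classes)

    def forward(self, feats_a, feats_b):
        # feats_a / feats_b: lists of hidden activations from the two
        # unimodal networks, indexed by the sampled configuration.
        h = None
        for (idx_a, idx_b, _), layer in zip(self.config, self.fusion_layers):
            x = torch.cat([feats_a[idx_a], feats_b[idx_b]], dim=1)
            if h is not None:
                x = torch.cat([x, h], dim=1)
            h = layer(x)
        return self.classifier(h)


if __name__ == "__main__":
    # Hypothetical hidden sizes for two unimodal backbones.
    dims_a, dims_b = [64, 128, 256], [32, 64, 128]
    config = [(0, 1, "relu"), (2, 2, "sigmoid")]  # one sampled architecture
    net = FusionNet(config, dims_a, dims_b, fusion_dim=128, num_classes=10)
    feats_a = [torch.randn(4, d) for d in dims_a]
    feats_b = [torch.randn(4, d) for d in dims_b]
    print(net(feats_a, feats_b).shape)  # torch.Size([4, 10])
```

Under this encoding, a search procedure only needs to propose tuple lists and train the resulting networks; the sequential model-based exploration mentioned in the abstract would then rank candidate configurations by predicted accuracy before training the most promising ones.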
Main file

mfas_arxiv.pdf (410.16 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02068293 , version 1 (14-03-2019)

Identifiers

Cite

Juan-Manuel Pérez-Rúa, Valentin Vielzeuf, Stéphane Pateux, Moez Baccouche, Frédéric Jurie. MFAS: Multimodal Fusion Architecture Search. CVPR 2019, Jun 2019, Long Beach, United States. ⟨hal-02068293⟩
435 views
837 downloads
