Toward Multimodal
Model-Agnostic Meta-Learning

Risto Vuorio1

Shao-Hua Sun2

Hexiang Hu2

Joseph J. Lim2

1SK T-Brain
2University of Southern California

  • Paper: Download our paper
  • arXiv: Go to the arXiv page
  • Code: Coming Soon
  • Bibtex: Cite our paper

Abstract

Gradient-based meta-learners such as MAML are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
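For reference, the "few gradient updates" adaptation described above is the standard MAML inner/outer update (written in our own notation, not taken verbatim from the paper): each task adapts the shared initialization with a few gradient steps, and the initialization itself is trained on the post-adaptation losses.

```latex
% Inner loop: adapt the shared prior \theta to task \mathcal{T}_i
\theta_i' = \theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_\theta)

% Outer loop: update the prior using post-adaptation performance
\theta \leftarrow \theta - \beta \nabla_\theta \sum_i \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})
```

The limitation the abstract points out is that a single \(\theta\) must serve every mode of the task distribution; the proposed model addresses this by modulating \(\theta\) per task before the inner loop runs.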


Model Overview and Learning Algorithm

  • Model-based Meta-learner: identifies the mode of a sampled task from a few samples and then modulates the meta-learned prior parameters of the gradient-based meta-learner.
  • Gradient-based Meta-learner: quickly adapts to target tasks with few gradient steps by starting from a good parameter initialization.
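The two components above can be sketched on a toy regression problem. This is our own minimal illustration, not the paper's implementation: `encode_task` stands in for the learned model-based meta-learner (here just a heuristic that summarizes a few samples into per-parameter scales), and `adapt` is the gradient-based meta-learner that starts from the modulated prior and takes a few gradient steps.

```python
import numpy as np

def encode_task(xs, ys):
    """Toy task encoder: summarizes a few (x, y) samples into a
    modulation vector tau. In the actual model this is a learned
    network; here a crude slope estimate serves as the mode signal."""
    slope = np.polyfit(xs, ys, 1)[0]
    return np.array([np.tanh(slope), 1.0])  # per-parameter scales for (w, b)

def adapt(theta, tau, xs, ys, lr=0.1, steps=50):
    """Gradient-based meta-learner: modulate the meta-learned prior
    (theta * tau), then take a few gradient steps on the MSE loss of
    the linear model y = w * x + b."""
    w, b = theta * tau                    # modulate the prior per task
    for _ in range(steps):
        err = w * xs + b - ys
        w -= lr * 2 * np.mean(err * xs)   # d(MSE)/dw
        b -= lr * 2 * np.mean(err)        # d(MSE)/db
    return w, b

theta = np.array([1.0, 0.0])              # meta-learned prior for (w, b)
xs = np.linspace(-1.0, 1.0, 10)
ys = 2.0 * xs + 0.5                       # one sampled task: y = 2x + 0.5
tau = encode_task(xs, ys)
w, b = adapt(theta, tau, xs, ys)          # adapted parameters approach (2, 0.5)
```

In the full model the modulation vectors and the prior are trained jointly across tasks, so the encoder learns to place the modulated initialization near the right mode before any gradient step is taken.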



Bibtex

@inproceedings{vuorio2018toward,
  title = {Toward Multimodal Model-Agnostic Meta-Learning},
  author = {Vuorio, Risto and Sun, Shao-Hua and Hu, Hexiang and Lim, Joseph J},
  booktitle = {2nd Workshop on Meta-Learning at the Thirty-second Annual Conference on Neural Information Processing Systems},
  year = {2018},
}