La-MAML: Look-Ahead Meta-Learning for Continual Learning

A pictorial depiction of the La-MAML algorithm

Abstract

The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. Our proposed modulation of per-parameter learning rates in the meta-learning update allows us to draw connections to prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting than conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
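For intuition, the sketch below shows a single meta-step of this kind of update in PyTorch, on a hypothetical toy linear model. It is a simplified first-principles sketch of the look-ahead idea, not the authors' implementation: the inner loop takes gradient steps on incoming data using learnable per-parameter learning rates (called `alpha` here), and the meta-update then adjusts both the weights and `alpha` against a loss on new plus episodic-memory data. The batch sizes, the asynchronous ordering of the two updates, and the clipping of `alpha` at zero are assumptions based on the paper's high-level description.

```python
import torch

# Toy setup: a linear model on 10-d inputs with 2 classes. All names here
# (alpha, inner_steps, the memory batch) are illustrative, not taken from
# the authors' released code.
torch.manual_seed(0)
w = torch.randn(10, 2, requires_grad=True)            # model weights
alpha = torch.full_like(w, 0.05, requires_grad=True)  # learnable per-parameter LRs
lr_outer, inner_steps = 0.1, 5

def loss_fn(weights, x, y):
    return torch.nn.functional.cross_entropy(x @ weights, y)

x_new, y_new = torch.randn(8, 10), torch.randint(0, 2, (8,))  # incoming task batch
x_mem, y_mem = torch.randn(8, 10), torch.randint(0, 2, (8,))  # episodic-memory batch

# Inner loop: "look-ahead" SGD steps on the new data, keeping the graph so
# that gradients can flow back to both w and alpha.
fast_w = w
for _ in range(inner_steps):
    g = torch.autograd.grad(loss_fn(fast_w, x_new, y_new), fast_w,
                            create_graph=True)[0]
    fast_w = fast_w - alpha * g  # per-parameter LRs modulate each step

# Meta-loss: evaluated at the looked-ahead weights on new + memory data,
# so an update is penalised if it would interfere with older tasks.
x_meta, y_meta = torch.cat([x_new, x_mem]), torch.cat([y_new, y_mem])
meta_loss = loss_fn(fast_w, x_meta, y_meta)
grad_w, grad_alpha = torch.autograd.grad(meta_loss, (w, alpha))

with torch.no_grad():
    alpha -= lr_outer * grad_alpha     # update the learning rates first...
    w -= alpha.clamp(min=0) * grad_w   # ...then the weights, with LRs clipped at zero
```

Learning `alpha` by gradient descent on the meta-objective is what links the method to hypergradient and meta-descent approaches: parameters whose updates would interfere with the memory samples receive smaller (or zero) step sizes, while parameters that can safely adapt to the new task keep larger ones.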

Publication
In Advances in Neural Information Processing Systems (NeurIPS) 2020 (Oral)
Gunshi Gupta
Deep Learning Researcher

My research interests include meta-learning, Bayesian and continual deep learning, and robotics.
