Recent years have seen rapid progress in meta-learning methods, which learn and optimize learning methods based on data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has followed over the last decade: from learning classifiers, to learning representations, to learning algorithms that themselves acquire representations and classifiers. The ability to improve one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.

Meta-learning methods are also of substantial practical interest: they have been shown to yield, for example, new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.

Some of the fundamental questions that this workshop aims to address are:

The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.


Additional Panelists


Important dates


08:30 Introduction and opening remarks
08:40 Jitendra Malik – Learning to optimize with reinforcement learning
09:10 Christophe Giraud-Carrier – Informing the Use of Hyperparameter Optimization Through Metalearning
09:40 Poster spotlights
10:00 Poster session 1 (+ Coffee Break)
11:00 Jane Wang – Multiple scales of task and reward-based learning
11:30 Chelsea Finn – Model-Agnostic Meta-Learning: Universality, Inductive Bias, and Weak Supervision
12:00 Lunch Break
13:30 Josh Tenenbaum – Learn to learn high-dimensional models from few examples
14:00 Contributed talk 1: Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start
14:15 Contributed talk 2: Learning to Model the Tail
14:30 Poster session 2 (+ Coffee Break)
15:30 Oriol Vinyals – Meta Unsupervised Learning
16:00 Panel discussion
17:00 End

Accepted Papers

Program Committee

We thank the program committee for shaping the excellent technical program (in alphabetical order):

Parminder Bhatia, Andrew Brock, Bistra Dilkina, Rocky Duan, David Duvenaud, Thomas Elsken, Dumitru Erhan, Matthias Feurer, Chelsea Finn, Roman Garnett, Christophe Giraud-Carrier, Erin Grant, Klaus Greff, Roger Grosse, Abhishek Gupta, Matt Hoffman, Aaron Klein, Marius Lindauer, Jan-Hendrik Metzen, Igor Mordatch, Randy Olson, Sachin Ravi, Horst Samulowitz, Jürgen Schmidhuber, Matthias Seeger, Jake Snell, Jasper Snoek, Alexander Toshev, Eleni Triantafillou, Jan van Rijn, Joaquin Vanschoren.


For any questions, you can contact us at