The list of questions from the panel discussion is here.

Recent years have seen rapid progress in meta-learning methods, which learn to improve the performance of existing learning methods from data, generate new learning methods from scratch, or learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade: from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.

Meta-learning methods are also of substantial practical interest, since they have been shown, for example, to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.

Some of the fundamental questions that this workshop aims to address are:

The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.

Invited Speakers

Invited Panelists


Important dates


09:00 Introduction and opening remarks
09:10 Invited talk 1: Lise Getoor, “Exploiting structure for meta-learning”
09:40 Poster spotlights 1
10:00 Poster session 1
10:30 Coffee break
11:00 Invited talk 2: Sergey Levine, “What’s wrong with meta-learning (and how we can fix it)”
11:30 Poster session 2
12:00 Lunch break
13:30 Invited talk 3: Hugo Larochelle, “Thoughts on progress made and challenges ahead in few-shot learning”
14:00 Invited talk 4: Michèle Sebag, “Monte Carlo tree search for algorithm configuration: MOSAIC”
14:30 Poster spotlights 2
14:50 Poster session 3
15:00 Coffee break
15:30 Poster session 4
16:00 Invited talk 5: Nando de Freitas, “Tools that learn”
16:30 Contributed talk 1: JD Co-Reyes, “Guiding policies with language via meta-learning”
16:45 Contributed talk 2: Arthur Pesah & Antoine Wehenkel, “Recurrent machines for likelihood-free inference”
17:00 Panel discussion
18:00 End

Invited Talks

Lise Getoor (UC Santa Cruz), “Exploiting structure for meta-learning”

Many machine learning problems exhibit rich structural dependencies. We need meta-learning algorithms that can represent, discover, and exploit these dependencies, and we can use structured models to express the dependencies inherent in meta-learning. In this talk, I’ll introduce some common structural dependencies, show their power and how they can be represented, and discuss how we can make use of them for meta-learning.

Sergey Levine (UC Berkeley), “What’s wrong with meta-learning (and how we can fix it)”

Meta-learning, or learning to learn, offers an appealing framework for training deep neural networks to adapt quickly and efficiently to new tasks. Indeed, the framework of meta-learning holds the promise of resolving the long-standing challenge of sample complexity in deep learning: by learning to learn efficiently, deep models can be meta-trained to adapt quickly to classify new image classes from a couple of examples, or learn new skills with reinforcement learning from just a few trials.

However, although the framework of meta-learning and few-shot learning is exceedingly appealing, it carries with it a number of major challenges. First, designing neural network models for meta-learning is quite difficult, since meta-learning models must be able to ingest entire datasets to adapt effectively. I will discuss how this challenge can be addressed by describing a model-agnostic meta-learning algorithm: a meta-learning algorithm that can use any model architecture, training that architecture to adapt efficiently via simple finetuning.
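The gradient-based recipe described here can be illustrated with a toy sketch: first-order MAML on scalar linear-regression tasks. Everything below (the task distribution, learning rates, and helper names) is illustrative and not taken from the talk; the inner loop finetunes a shared initialization on each task's support set, and the outer loop updates that initialization using the query-set gradient at the adapted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    """Mean-squared error and its gradient for a scalar linear model yhat = w * x."""
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def adapt(w, x, y, inner_lr=0.1, steps=1):
    """Inner loop: finetune the initialization w on a task's support set."""
    for _ in range(steps):
        _, g = loss_grad(w, x, y)
        w = w - inner_lr * g
    return w

def meta_train(meta_iters=500, meta_lr=0.05):
    """Outer loop (first-order MAML): move the initialization using the gradient
    of the query-set loss evaluated at the task-adapted parameters."""
    w = 0.0
    for _ in range(meta_iters):
        slope = rng.uniform(1.0, 3.0)           # sample a task: y = slope * x
        x_support = rng.normal(size=10)
        x_query = rng.normal(size=10)
        w_task = adapt(w, x_support, slope * x_support)   # adapt on support set
        _, g = loss_grad(w_task, x_query, slope * x_query)
        w = w - meta_lr * g                     # meta-update the shared initialization
    return w

w0 = meta_train()
```

After meta-training, the initialization sits near the center of the task distribution, so a single finetuning step already makes good progress on a new task. Full MAML additionally differentiates through the inner update; the first-order approximation above simply drops those second-order terms.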

The second challenge is that meta-learning trades off the challenge of algorithm design (by learning the algorithm) for the challenge of task design: the performance of meta-learning algorithms depends critically on the ability of the user to manually design large sets of diverse meta-training tasks. In practice, this often ends up being an enormous barrier to widespread adoption of meta-learning methods. I will describe our recent work on unsupervised meta-learning, where tasks are proposed automatically from unlabeled data, and discuss how unsupervised meta-learning can exceed the performance of standard unsupervised learning methods while removing the manual task design requirement inherent in standard meta-learning methods.

Hugo Larochelle (Google Brain), “Thoughts on progress made and challenges ahead in few-shot learning”

Much of the recent progress on many AI tasks was enabled in part by the availability of large quantities of labeled data. Yet, humans are able to learn concepts from as little as a handful of examples. Meta-learning has been a very promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In this talk, I’ll present an overview of the recent research that has made exciting progress on this topic. I will also share my thoughts on the challenges and research opportunities that remain in few-shot learning, including a proposal for a new benchmark.

Michèle Sebag (Paris-Saclay), “Monte Carlo tree search for algorithm configuration: MOSAIC”

The sensitivity of algorithms (in machine learning, combinatorial optimization, and constraint satisfaction) with respect to their hyperparameters, and the difficulty of finding the algorithm and hyperparameter setting best suited to the problem instance at hand, have led to the rapidly developing field of algorithm selection and calibration and, within machine learning, to AutoML.

Several international AutoML challenges have been organized since 2013, motivating the development of the Bayesian optimization-based approach Auto-sklearn, the randomized search approach Hyperband, and others. This talk will present a new approach, called Monte Carlo Tree Search for Algorithm Configuration (MOSAIC), fully exploiting the tree structure of the algorithm portfolio-hyperparameter search space.

It is shown that MOSAIC outperforms the current AutoML winner Auto-sklearn on both the AutoML challenge 2015 and the MNIST dataset.
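The idea of treating the joint (algorithm, hyperparameter) space as a tree and searching it with Monte Carlo tree search can be conveyed with a toy sketch. Everything below is illustrative rather than taken from MOSAIC itself: the two-level search space is hypothetical, and the `score` function is a stand-in surrogate for an actual cross-validation run.

```python
import math
import random

# Hypothetical two-level space: choose an algorithm, then one hyperparameter value.
SPACE = {
    "knn":  {"n_neighbors": [1, 5, 15]},
    "tree": {"max_depth": [2, 8, 32]},
}

def score(algo, hp_value):
    """Toy surrogate standing in for the cross-validated accuracy of a pipeline."""
    base = {"knn": 0.9, "tree": 0.4}[algo]
    return base / (1.0 + 0.01 * hp_value)

def ucb(parent, child, c=1.4):
    """Upper confidence bound used to pick which branch to descend."""
    if child["visits"] == 0:
        return float("inf")
    exploit = child["value"] / child["visits"]
    explore = c * math.sqrt(math.log(parent["visits"]) / child["visits"])
    return exploit + explore

def mcts(iters=200, seed=0):
    random.seed(seed)
    root = {"visits": 0, "value": 0.0,
            "children": {a: {"visits": 0, "value": 0.0, "hps": hps}
                         for a, hps in SPACE.items()}}
    for _ in range(iters):
        # Selection: descend the algorithm level with UCB.
        algo = max(root["children"], key=lambda a: ucb(root, root["children"][a]))
        node = root["children"][algo]
        # Rollout: complete the configuration by sampling a hyperparameter value.
        values = next(iter(node["hps"].values()))
        reward = score(algo, random.choice(values))
        # Backpropagation: update statistics along the selected path.
        for n in (root, node):
            n["visits"] += 1
            n["value"] += reward
    # Recommend the most-visited algorithm branch.
    return max(root["children"], key=lambda a: root["children"][a]["visits"])

best_algo = mcts()
```

In the real setting the tree is deeper (preprocessing choices, algorithm choice, and each hyperparameter form successive levels) and the reward comes from actually training and evaluating the pipeline, but the select/rollout/backpropagate cycle is the same.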

Joint work with Heri Rakotoarison.

Nando de Freitas (DeepMind), “Tools that learn”

Spotlights 1 (and Poster Sessions 1 & 2)

  1. Meta-Learner with Linear Nulling
  2. OBOE: Collaborative Filtering for AutoML Initialization
  3. Backpropamine: Meta-Training Self-Modifying Neural Networks with Gradient Descent
  4. Hyperparameter Learning via Distributional Transfer
  5. Toward Multimodal Model-Agnostic Meta-Learning
  6. Fast Neural Architecture Construction Using EnvelopeNets
  7. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
  8. Macro Neural Architecture Search Revisited
  9. AutoDL Challenge Design and Beta Tests
  10. Modular Meta-Learning in Abstract Graph Networks for Combinatorial Generalization
  11. Cross-Modulation Networks for Few-Shot Learning
  12. Large Margin Meta-Learning for Few-Shot Classification
  13. Amortized Bayesian Meta-Learning
  14. The Effects of Negative Adaptation in Model-Agnostic Meta-Learning
  15. Mitigating Architectural Mismatch During the Evolutionary Synthesis of Deep Neural Networks
  16. Evolvability ES: Scalable Evolutionary Meta-Learning
  17. Consolidating the Meta-Learning Zoo: A Unifying Perspective as Posterior Predictive Inference
  18. Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL

Spotlights 2 (and Poster Sessions 3 & 4)

  1. Incremental Few-Shot Learning with Attention Attractor Networks
  2. Auto-Meta: Automated Gradient-Based Meta-Learner Search
  3. Transferring Knowledge Across Learning Processes
  4. Few-Shot Learning for Free by Modeling Global Class Structure
  5. TAEML: Task-Adaptive Ensemble of Meta-Learners
  6. A Simple Transfer-Learning Extension of Hyperband
  7. Learned Optimizers That Outperform SGD on Wall-Clock and Validation Loss
  8. Learning to Learn with Conditional Class Dependencies
  9. Unsupervised Learning via Meta-Learning
  10. Control Adaptation via Meta-Learning Dynamics
  11. Learning to Adapt in Dynamic, Real-World Environments via Meta-Reinforcement Learning
  12. Learning to Design RNA
  13. Graph Hypernetworks for Neural Architecture Search
  14. Meta-Learning with Latent Embedding Optimization
  15. ProMP: Proximal Meta-Policy Search
  16. Attentive Task-Agnostic Meta-Learning for Few-Shot Text Classification
  17. Variadic Learning by Bayesian Nonparametric Deep Embedding
  18. From Nodes to Networks: Evolving Recurrent Neural Networks
  19. Meta Learning for Defaults: Symbolic Defaults

Submission Instructions

The submission window for this workshop is now closed. Decision notifications were sent out November 9, 2018. Thank you to all who submitted!

We have provided a modified .sty file here that appropriately lists the name of the workshop when \neuripsfinal is enabled. Please use this style file in conjunction with the corresponding LaTeX .tex template from the NeurIPS website to submit a final camera-ready copy.

Accepted papers and supplementary material are available on the workshop website. However, these do not constitute archival publications and no formal workshop proceedings will be made available, meaning contributors are free to publish their work in archival journals or conferences.


  1. Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?

    Yes, you may include additional supplementary material, but we ask that it be limited to a reasonable amount (max 10 pages in addition to the main submission) and that it follow the same NeurIPS format as the paper.

  2. Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?

    We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.

  3. If a submission is accepted, will all authors of the accepted paper have a chance to register?

    We cannot confirm this yet, but it is most likely that we will have at most one registration to offer per accepted paper.

  4. Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?

    We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).

Accepted Abstracts

Program Committee

We thank the program committee for shaping the excellent technical program (in alphabetical order):

Aaron Klein, Abhishek Gupta, Alexandre Lacoste, Andre Carvalho, Andrew Brock, Anusha Nagabandi, Aravind Srinivas, Balazs Kegl, Benjamin Letham, Brandon Schoenfeld, Chelsea Finn, Daniel Hernandez-Lobato, Dumitru Erhan, Eleni Triantafillou, Eytan Bakshy, Ghassen Jerfel, Hugo Larochelle, Hugo Jair Escalante, Ignasi Clavera, Igor Mordatch, Jake Snell, Jan van Rijn, Jasper Snoek, Jürgen Schmidhuber, Ke Li, Lars Kotthoff, Marius Lindauer, Matt Hoffman, Mengye Ren, Michael Chang, Misha Denil, Parminder Bhatia, Pavel Brazdil, Pieter Gijsbers, Rafael Mantovani, Razvan Pascanu, Ricardo Prudencio, Roberto Calandra, Rodolphe Jenatton, Roger Grosse, Roman Garnett, Sayna Ebrahimi, Sergio Escalera, Stephen Roberts, Thanard Kurutach, Thomas Elsken, Tin Ho, Udayan Khurana

Past workshops

Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017


For any further questions, you can contact us at


We are very thankful to our corporate sponsors!