Recent years have seen rapid progress in meta-learning methods: methods that learn and optimize learning algorithms based on data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has followed over the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.
Meta-learning methods are also of substantial practical interest: they have been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.
Some of the fundamental questions that this workshop aims to address are:
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which ML approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?
- What are the fundamental differences in the learning "task" for meta-learners compared to traditional "non-meta" learners?
- Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
- How can we design more sample-efficient meta-learning methods?
The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.
In terms of prospective participants, our main targets are machine learning researchers interested in the processes related to understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. Our invited speakers also include researchers who study human learning, to provide a broad perspective to the attendees.
- Pieter Abbeel (UC Berkeley, Covariant.ai)
- David Abel (Brown University)
- Jeff Clune (University of Wyoming, Uber AI)
- Erin Grant (UC Berkeley)
- Raia Hadsell (DeepMind)
- Brenden Lake (NYU, Facebook AI Research)
- Roberto Calandra (Facebook AI Research)
- Ignasi Clavera (UC Berkeley)
- Frank Hutter (University of Freiburg)
- Joaquin Vanschoren (Eindhoven University of Technology)
- Jane Wang (DeepMind)
| Time | Event |
|---|---|
| 09:00 | Introduction and opening remarks |
| 09:10 | Invited talk 1 |
| 09:40 | Poster spotlights 1 |
| 10:00 | Poster session 1 |
| 11:00 | Invited talk 2 |
| 11:30 | Poster session 2 |
| 13:30 | Invited talk 3 |
| 14:00 | Invited talk 4 |
| 14:30 | Poster spotlights 2 |
| 14:50 | Poster session 3 |
| 15:30 | Poster session 4 |
| 16:00 | Invited talk 5 |
| 16:30 | Contributed talk 1 |
| 16:45 | Contributed talk 2 |
Pieter Abbeel (UC Berkeley, Covariant)
David Abel (Brown University)
Jeff Clune (University of Wyoming, Uber AI)
Erin Grant (UC Berkeley)
Raia Hadsell (DeepMind)
Brenden Lake (NYU, Facebook AI Research)
Spotlights 1 (and Poster Sessions 1 & 2)
Spotlights 2 (and Poster Sessions 3 & 4)
Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?
Yes, you may include additional supplementary material, but we ask that it be limited to a reasonable amount (at most 10 pages beyond the main submission) and that it follow the same NeurIPS format as the paper.
Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
If a submission is accepted, will all authors of the accepted paper have a chance to register?
We cannot confirm this yet, but most likely we will have at most one registration to offer per accepted paper.
Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).
Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017
Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018
For any further questions, you can contact us at firstname.lastname@example.org.
We are very thankful to our corporate sponsors!