Recent years have seen rapid progress in meta-learning methods, which use data to learn and optimize the performance of learning methods, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc machine learning has followed over the last decade: from learning classifiers, to learning representations, to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience is also a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.
Meta-learning methods are also of substantial practical interest, since they have, e.g., been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.
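One concrete instantiation of this "learning to learn" idea is gradient-based meta-learning, in which an outer loop tunes components of an inner learning algorithm (for instance, its initialization) across a distribution of tasks. The toy sketch below is purely illustrative and not any specific method discussed at the workshop: it meta-learns, in first-order MAML style, the initialization of a one-parameter regressor over a hypothetical family of 1-D linear-regression tasks with varying slope.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, a, xs):
    # Gradient w.r.t. w of the mean squared error of y_hat = w*x
    # against the task's ground truth y = a*x:
    # d/dw mean((w*x - a*x)^2) = mean(2*(w - a)*x^2)
    return np.mean(2.0 * (w - a) * xs ** 2)

def meta_train(n_iters=500, inner_lr=0.1, outer_lr=0.05):
    w0 = 0.0  # the meta-learned initialization
    for _ in range(n_iters):
        a = rng.uniform(0.5, 1.5)      # sample a task (its slope)
        xs = rng.normal(size=10)       # task support data
        # Inner loop: one gradient step of the base learner from w0
        w_adapted = w0 - inner_lr * loss_grad(w0, a, xs)
        # Outer loop (first-order approximation): update w0 using the
        # query-set gradient evaluated at the adapted parameters
        xq = rng.normal(size=10)       # task query data
        w0 = w0 - outer_lr * loss_grad(w_adapted, a, xq)
    return w0
```

After meta-training, `w0` settles near the center of the task distribution, so that a single inner gradient step adapts well to any sampled task; this is the sense in which the outer loop has "learned" a good starting point for the inner learner.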
Some of the fundamental questions that this workshop aims to address are:
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which ML approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?
- How does the learning “task” of a meta-learner fundamentally differ from that of traditional “non-meta” learners?
- Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
- How can we design more sample-efficient meta-learning methods?
The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.
In terms of prospective participants, our main targets are machine learning researchers interested in the processes related to understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. Our invited speakers also include researchers who study human learning, to provide a broad perspective to the attendees.
Invited speakers:
- Pieter Abbeel (UC Berkeley, Covariant.ai)
- David Abel (Brown University)
- Jeff Clune (University of Wyoming, Uber AI)
- Erin Grant (UC Berkeley)
- Raia Hadsell (DeepMind)
- Brenden Lake (NYU, Facebook AI Research)
Organizers:
- Roberto Calandra (Facebook AI Research)
- Ignasi Clavera (UC Berkeley)
- Frank Hutter (University of Freiburg)
- Joaquin Vanschoren (Eindhoven University of Technology)
- Jane Wang (DeepMind)
- Submission deadline: 10 September 2019 (11:59 PM anywhere on Earth)
- Notification: 1 October 2019
- Camera ready: 2 December 2019
- Workshop: 13 December 2019
|Time|Session|
|---|---|
|09:00|Introduction and opening remarks|
|09:10|Invited talk 1|
|09:40|Invited talk 2|
|10:00|Poster spotlights 1|
|10:30|Coffee & posters|
|11:30|Invited talk 3|
|14:00|Invited talk 4|
|14:30|Invited talk 5|
|15:00|Poster spotlights 2|
|15:20|Coffee & posters|
|16:30|Contributed talk 1|
|16:45|Contributed talk 2|
|17:00|Invited talk 6|
Papers must be in the latest NeurIPS format, but with a maximum of 4 pages (excluding references).
Papers should be anonymized upon submission.
Accepted papers and any supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be produced; contributors therefore remain free to publish their work in archival journals or conferences.
The two best papers submitted will be presented as 15-minute contributed talks.
Submissions can be made at https://cmt3.research.microsoft.com/METALEARN2019/Submission/Index during the submission period.
Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?
Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).
Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
If a submission is accepted, is it possible for all authors of the accepted paper to receive a chance to register?
We cannot confirm this yet, but it is most likely that we will have at most one registration to offer per accepted paper.
Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).
Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?
MetaLearn submissions are 4 pages, i.e., much shorter than standard conference submissions. From our side it is perfectly fine to submit a condensed version of a parallel conference submission, provided this is also fine for the conference in question. Our workshop does not have archival proceedings, and therefore parallel submissions of extended versions to other conferences are acceptable.
- Deep Subspace Networks For Few-Shot Learning. Christian Simon, Piotr Koniusz, Richard Nock, Mehrtash Harandi
- On the conditions of MAML convergence. Shiro Takagi
- Constrained Bayesian Optimization with Max-Value Entropy Search. Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger
- Is Fast Adaptation All You Need? Khurram Javed, Hengshuai Yao, Martha White
- Learning to tune XGBoost with XGBoost. Johanna Sommer, Dimitrios Sarigiannis, Thomas Parnell
- Texture Bias Of CNNs Limits Few-Shot Classification Performance. David Macleod, Sam Ringer, William JW Williams
- Meta-Learning Contextual Bandit Exploration. Amr Sharaf, Hal Daume
- Gradient-Aware Model-based Policy Search. Pierluca D’Oro, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, Marcello Restelli
- Transferable Neural Processes for Hyperparameter Optimization. Ying Wei, Peilin Zhao, Huaxiu Yao, Junzhou Huang
- SEGA: Searching efficiently for new generator architectures. Sivan Doveh, Raja Giryes
- Niseko: a Large-Scale Meta-Learning Dataset. Zeyuan Shang, Emanuel Zgraggen, Philipp Eichmann, Tim Kraska
- AutoML using Metadata Language Embeddings. Iddo Drori, Lu Liu, Yi Nian, Sharath Koorathota, Jie Li, Antonio K Moretti, Juliana Freire, Madeleine Udell
- Neural Architecture Search via Bayesian Optimization with a Neural Network Prior. Colin White, Willie Neiswanger, Yash Savani
- Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization. Santiago Gonzalez, Risto Miikkulainen
- Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators. Yongseok Choi, Junyoung Park, Subin Yi, Dong-Yeon Cho
- Meta-analysis of Continual Learning. Cuong V Nguyen
- Meta-Learning of Structured Representation by Proximal Mapping. Mao Li, Yingyi Ma, Hongwei Jin, Zhan Shi, Xinhua Zhang
- Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning. Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Sergey Levine, Chelsea Finn
- An empirical study of pretrained representations for few-shot classification. Tiago Ramalho, Thierry Sousbie, Stefano Peluchetti
- Warm Starting Method for CMA-ES. Masahiro Nomura, Shuhei Watanabe, Yoshihiko Ozaki, Masaki Onishi
- Bayesian Optimisation over Multiple Continuous and Categorical Inputs. Binxin Ru, Ahsan Alvi, Vu Nguyen, Michael A. Osborne, Stephen Roberts
- Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings. Jean-Francois Ton, Leung Chan, Yee Whye Teh, Dino Sejdinovic
- MetaPoison: Learning to Craft Adversarial Poisoning Examples via Meta-Learning. W. Ronny Huang, Jonas Geiping, Liam Fowl, Tom Goldstein
- MetaPix: Few-shot video retargeting. Jessica Lee, Rohit Girdhar, Deva Ramanan
- Meta-Learning with Warped Gradient Descent. Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Hujun Yin, Raia Hadsell
- Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning. Michael Zhang
- Modular Meta-Learning with Shrinkage. Yutian Chen, Abram Friesen, Feryal Behbahani, David Budden, Matt Hoffman, Arnaud Doucet, Nando de Freitas
- Meta-analysis of Bayesian analyses. Paul Blomstedt, Diego Mesquita, Samuel Kaski
- Ranking architectures using meta-learning. Alina Dubatovka, Effrosyni Kokiopoulou, Luciano Sbaiz, Andrea Gesmundo, Gabor Bartok, Jesse Berent
- Meta-Learning Deep Energy-Based Memory Models. Sergey Bartunov, Jack Rae, Simon Osindero, Timothy Lillicrap
- ES-MAML: Learning to Adapt with Evolution Strategies. Xingyou Song, Krzysztof Choromanski, Wenbo Gao, Yuxiang Yang, Yunhao Tang, Aldo Pacchiano
- Charting the Right Manifold: Manifold Mixup for Few-shot Learning. Puneet Mangla, Mayank Singh, Nupur Kumari, Abhishek Sinha, Balaji Krishnamurthy, Vineeth N Balasubramanian
- A quantile-based approach to hyperparameter transfer learning. David Salinas, Huibin Shen, Valerio Perrone
- Learning an Adaptive Learning Rate Schedule. Zhen Xu, Andrew M Dai, Jonas Kemp, Luke Metz
- VIABLE: Fast Adaptation via Backpropagating Learned Loss. Leo Feng, Luisa Zintgraf, Bei Peng, Shimon Whiteson
- Empirical Bayes Meta-Learning with Synthetic Gradients. Xu Hu, Pablo Moreno, Xi Shen, Yang Xiao, Neil Lawrence, Guillaume Obozinski, Andreas Damianou
- Zero-Shot Text Classification With Generative Language Models. Raul Puri, Bryan Catanzaro
- On Transfer Learning via Linearized Neural Networks. Wesley J Maddox, Shuai Tang, Pablo Moreno, Andrew Gordon Wilson, Andreas Damianou
- Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. Aniruddh Raghu, Maithra Raghu, Oriol Vinyals, Samy Bengio
- Learning Compositional Rules via Neural Program Synthesis. Maxwell Nye, Armando Solar-Lezama, Joshua Tenenbaum, Brenden Lake
- Assay modelling with adaptive deep kernel learning. Prudencio Tossou, Basile Dura, Alexandre Lacoste
- Meta-Learning without Memorization. Mingzhang Michael Yin, Chelsea Finn, George Tucker, Sergey Levine
- Meta-reinforcement learning of causal strategies. Ishita Dasgupta, Zeb Kurth-Nelson, Silvia Chiappa, Jovana Mitrovic, Edward Hughes, Pedro Ortega, Matthew Botvinick, Jane Wang
- Improving Model Robustness via Automatically Incorporating Self-supervision Tasks. Donghwa Kim, Kangwook Lee, Changho Suh
- Meta-learning curiosity algorithms. Ferran Alet, Martin Schneider, Tomas Lozano-Perez, Leslie Kaelbling
- Differentially Private Meta-Learning. Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar
- Neural Architecture Evolution in Deep Reinforcement Learning for Continuous Control. Jörg K.H. Franke, Gregor Koehler, Noor Awad, Frank Hutter
- Continuous Meta-Learning without Task Supervision. James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone
- Online Meta-Learning on Non-convex Setting. Zhenxun Zhuang, Kezi Yu, Songtao Lu, Lucas Glass, Yunlong Wang
- PAC-Bayes Objectives for Meta-Learning using Deep Probabilistic Programs. Jonathan Warrell
- A Baseline for Few-Shot Image Classification. Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto
- NASIB: Neural Architecture Search withIn Budget. Abhishek Singh, Anubhav Garg, Debo Dutta
- Understanding and Robustifying Differentiable Architecture Search. Arber Zela, Thomas Elsken, Yassine Marrakchi, Tonmoy Saikia, Thomas Brox, Frank Hutter
- Learning to Estimate Point-Prediction Uncertainty and Correct Output in Neural Networks. Xin Qiu, Elliot Meyerson, Risto Miikkulainen
- Towards Benchmarking and Dissecting One-shot Neural Architecture Search. Arber Zela, Julien Siems, Frank Hutter
- Decoupled Meta Learning with Structured Latents. Russell Mendonca, Sergey Levine, Chelsea Finn
- Automated Model Search Using Bayesian Optimization and Genetic Programming. Louis B Schlessinger, Gustavo Malkomes, Roman Garnett
- Meta-Learning for Algorithm and Hyperparameter Optimization with Surrogate Model Ensembles. Georgiana Manolache, Joaquin Vanschoren
We thank the program committee for shaping the excellent technical program (in alphabetical order):
Aaron Klein, Abhishek Gupta, Alexander Toshev, Alexandre Galashov, Andre Carvalho, Andrei A. Rusu, Ang Li, Ashvin V. Nair, Avi Singh, Aviral Kumar, Ben Eysenbach, Benjamin Letham, Bradly C. Stadie, Brandon Schoenfeld, Brian Cheung, Carlos Soares, Daniel Hernandez, Deirdre Quillen, Devendra Singh, Dumitru Erhan, Dushyant Rao, Eleni Triantafillou, Erin Grant, Esteban Real, Eytan Bakshy, Frank Hutter, Haoran Tang, Hugo Jair Escalante, Igor Mordatch, Jakub Sygnowski, Jan Humplik, Jan N. van Rijn, Jan Hendrik Metzen, Jiajun Wu, Jonas Rothfuss, Jonathan Schwarz, Jürgen Schmidhuber, Kate Rakelly, Katharina Eggensperger, Kevin Swersky, Kyle Hsu, Lars Kotthoff, Leonard Hasenclever, Lerrel Pinto, Luisa Zintgraf, Marc Pickett, Marta Garnelo, Marvin Zhang, Matthias Seeger, Maximilian Igl, Misha Denil, Parminder Bhatia, Parsa Mahmoudieh, Pavel Brazdil, Pieter Gijsbers, Piotr Mirowski, Rachit Dubey, Rafael Gomes, Razvan Pascanu, Ricardo B. Prudencio, Roger B. Grosse, Rowan McAllister, Sayna Ebrahimi, Sebastien Racaniere, Sergio Escalera, Siddharth Reddy, Stephen Roberts, Sungryull Sohn, Surya Bhupatiraju, Thomas Elsken, Tin K. Ho, Udayan Khurana, Vincent Dumoulin, Vitchyr H. Pong, Zeyu Zheng
Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017
Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018
For any further questions, you can contact us at firstname.lastname@example.org.