Metalearning Symposium
NeurIPS 2017
Speakers
Pieter Abbeel
Embodied Intelligence and UC Berkeley
https://people.eecs.berkeley.edu/~pabbeel/
Chrisantha Fernando
DeepMind
http://www.sussex.ac.uk/profiles/127298
Roman Garnett
Washington Univ. St. Louis
http://www.cse.wustl.edu/~garnett/
Frank Hutter
Freiburg Univ.
http://www2.informatik.uni-freiburg.de/~hutter/
Max Jaderberg
DeepMind
Quoc Le
Google Brain
https://research.google.com/pubs/QuocLe.html
Risto Miikkulainen
Sentient and UT Austin
Juergen Schmidhuber
Nnaisense and IDSIA
http://people.idsia.ch/~juergen/
Satinder Singh
Cogitai and Univ. of Michigan
http://web.eecs.umich.edu/~baveja/
Ilya Sutskever
OpenAI
http://www.cs.toronto.edu/~ilya/
Kenneth Stanley
Uber and UCF
https://www.cs.ucf.edu/~kstanley/
Oriol Vinyals
DeepMind
https://research.google.com/pubs/OriolVinyals.html
Jane Wang
DeepMind
Organizers
Risto Miikkulainen
Sentient and UT Austin
http://www.cs.utexas.edu/~risto/
Quoc Le
Google Brain
https://research.google.com/pubs/QuocLe.html
Kenneth Stanley
Uber and UCF
https://www.cs.ucf.edu/~kstanley/
Chrisantha Fernando
DeepMind
Modern learning systems, such as recent deep learning, reinforcement learning, and probabilistic inference architectures, have grown so complex that they often exceed human ability to comprehend them. This complexity matters: the more complex these systems are, the more powerful they often are. A new research problem has therefore emerged: how can this complexity, i.e., the design, components, and hyperparameters, be configured automatically so that these systems perform as well as possible? This is the problem of metalearning. Several approaches have emerged, including those based on Bayesian optimization, gradient descent, reinforcement learning, and evolutionary computation. The symposium presents an overview of these approaches, given by the researchers who developed them. A panel discussion compares the strengths of the different approaches and their potential for future developments and applications. The audience will thus gain a practical understanding of how to use metalearning to improve their own learning systems, as well as of opportunities for future research on metalearning.
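To make the problem concrete, the simplest form of automatic hyperparameter configuration is random search over a configuration space. The sketch below is purely illustrative and not drawn from any of the symposium talks; the objective function `train_and_score` is a hypothetical stand-in for an actual training run.

```python
import random

def train_and_score(lr, width):
    # Stand-in for a real training run: returns a validation score.
    # This toy objective peaks near lr=0.1, width=64 (purely illustrative).
    return -((lr - 0.1) ** 2) - ((width - 64) / 64) ** 2

def random_search(n_trials, seed=0):
    # Sample hyperparameter configurations at random and keep the best.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {
            "lr": 10 ** rng.uniform(-4, 0),          # log-uniform learning rate
            "width": rng.choice([16, 32, 64, 128]),  # discrete layer width
        }
        score = train_and_score(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

The approaches covered in the symposium (Bayesian optimization, gradient-based metalearning, reinforcement learning, evolutionary computation) can all be seen as smarter replacements for the random sampling loop above: they use the scores of past configurations to propose better ones.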
Thursday, December 7
2:00 – 9:30 PM @ The Grand Ballroom
2:00 – 2:10 Opening remarks
Quoc Le (slides, video)
Topic I: Evolutionary Optimization
Session Chair: Quoc Le
2:10 – 2:30 Evolving Multitask Neural Network Structure
Risto Miikkulainen (slides, video)
2:30 – 2:50 Evolving to Learn through Synaptic Plasticity
2:50 – 3:10 PathNet and Beyond
Chrisantha Fernando (slides, video)
Topic II: Bayesian Optimization
3:10 – 3:30 Bayesian Optimization for Automated Model Selection
3:30 – 3:50 Automatic Machine Learning (AutoML) and How To Speed It Up
4:00 – 4:30 COFFEE BREAK
Topic III: Gradient Descent
Session Chair: Chrisantha Fernando
4:30 – 4:50 Contrasting Model- and Optimization-based Metalearning
4:50 – 5:10 Population-based Training for Neural Network Meta-Optimization
5:10 – 5:30 Learning to Learn for Robotic Control
5:30 – 5:50 On Learning How to Learn Learning Strategies
Juergen Schmidhuber (slides, video)
Topic IV: Reinforcement Learning
5:50 – 6:10 Intrinsically Motivated Reinforcement Learning
Satinder Singh (video)
6:10 – 6:30 Self-Play
Ilya Sutskever (slides, video)
6:30 – 7:30 DINNER BREAK
Session Chair: Ken Stanley
7:30 – 7:50 Neural Architecture Search
7:50 – 8:10 Multiple scales of reward and task learning
Jane Wang (slides, video)
8:10 – 9:30 Panel discussion
Moderator: Risto Miikkulainen
Panelists: Frank Hutter, Juergen Schmidhuber, Ken Stanley, Ilya Sutskever (video)