
# AutoML 2015 workshop @ ICML 2015

Saturday, 11 July 2015 (Europe/Paris)
Lille Grand Palais
1 Boulevard des Cités Unies, 59777 Lille-Euralille
Description

The web site of the event: https://sites.google.com/site/automlwsicml15/
Please submit the questions you would like to raise in the panel discussion at this site.
• Saturday, 11 July 2015
• 08:30 - 10:00 Session 1
• 08:30 Invited Talk: Open Research Problems in AutoML 40'  Speaker: Rich Caruana (Microsoft Research)
• 09:10 Invited Talk: Bandits and Bayesian optimization for AutoML 40'
Complex optimization and decision making tasks are beginning to play an
increasingly crucial role across a wide variety of scientific fields. This is
becoming more and more evident as entire research programs are being automated.

In this talk I'll describe a set of methods, known as Bayesian optimization,
which provide a very sample-efficient approach to this problem. Much of the
gain of these methods comes from building a posterior model of a function
during optimization in order to explore its surface efficiently. I will
further describe a number of advanced search mechanisms and models and show how these can be used to automate machine learning problems. Finally, I will briefly point to the related bandit literature.
 Speaker: Matthew Hoffmann (University of Cambridge)
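The posterior-model-plus-acquisition loop this abstract describes can be sketched in a few dozen lines. The choices below (a squared-exponential kernel, a lower-confidence-bound acquisition rule, a grid search for the acquisition optimum, and the toy objective) are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def rbf(a, b, length=0.5):
    # Squared-exponential covariance between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    # Standard GP regression equations: posterior mean and variance.
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_query)
    Kss = rbf(x_query, x_query)
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)
    return mean, np.maximum(var, 0.0)   # clip tiny negative variances

def bayes_opt(f, bounds=(-2.0, 2.0), n_init=3, n_iter=15, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x_obs = rng.uniform(*bounds, size=n_init)
    y_obs = f(x_obs)
    grid = np.linspace(*bounds, 201)
    for _ in range(n_iter):
        mean, var = gp_posterior(x_obs, y_obs, grid)
        # Lower-confidence bound: favour low predicted value or high uncertainty.
        acq = mean - beta * np.sqrt(var)
        x_next = grid[np.argmin(acq)]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(np.array([x_next]))[0])
    best = np.argmin(y_obs)
    return x_obs[best], y_obs[best]

f = lambda x: np.sin(3 * x) + x ** 2   # toy objective to minimise
x_best, y_best = bayes_opt(f)
print("best x:", x_best, "best f(x):", y_best)
```

Each iteration spends one evaluation of `f`, which is the point of the sample-efficiency claim: the surrogate model, not the objective, absorbs the cost of exploring the surface.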
• 09:50 Poster Spotlights 1 10'
5 spotlights of 2 minutes each
• Using Internal Validity Measures to Compare Clustering Algorithms 2'  Speakers: Toon Van Craenendonck, Hendrik Blockeel
• Redundant Feature Selection using Permutation Methods 2'  Speakers: Phillip Taylor, Nathan Griffiths, Abhir Bhalerao
• A Linear-Time Particle Gibbs Sampler for Infinite Hidden Markov Models 2'  Speakers: Nilesh Tripuraneni, Shane Gu, Hong Ge, Zoubin Ghahramani
• Autograd: Effortless Gradients in Pure Numpy 2'  Speakers: Dougal Maclaurin, David Duvenaud, Ryan P. Adams
• Autonomous learning of parameters in differential equations 2'  Speakers: Adel Mezine, Artémis Llamosi, Veronique Letort, Michele Sebag, Florence d'Alché-Buc
• 10:00 - 10:30 Coffee break
• 10:30 - 12:00 Session 2
• 10:30 Invited Talk: Algorithm Recommendation as Collaborative Filtering 40'  Speaker: Michele Sebag (CNRS)
• 11:10 Poster Spotlights 2 18'
9 spotlights of 2 minutes each
• Improving reproducibility of data science experiments 2'  Speakers: Tatiana Likhomanenko, Alexey Rogozhnikov, Alexander Baranov, Egor Khairullin, Andrey Ustyuzhanin
• Introducing Sacred: A Tool to Facilitate Reproducible Research 2'  Speakers: Klaus Greff, Jürgen Schmidhuber
• DIGITS: the Deep learning GPU Training System 2'  Speakers: Luke Yeager, Julie Bernauer, Allison Gray, Michael Houston
• Design of the 2015 ChaLearn AutoML Challenge 2'  Speakers: Isabelle Guyon, Kristin Bennett, Gavin Cawley, Hugo Jair Escalante, Sergio Escalera, Tin Kam Ho
• Autokit: automatic machine learning via representation and model search 2'  Speaker: Tadej Štajner
• AutoCompete: A Framework for Machine Learning Competitions 2'  Speakers: Abbishek Thakur, Artus Krohn-Grimberghe
• Methods for Improving Bayesian Optimization for AutoML 2'  Speakers: Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Tobias Springenberg, Manuel Blum, Frank Hutter
• Fast Cross-Validation for Incremental Learning 2'  Speakers: Pooria Joulani, András György, Csaba Szepesvári
• Active Structure Discovery for Gaussian Processes 2'  Speakers: Gustavo Malkomes, Roman Garnett
• 11:30 1st Poster Session 30'
• 12:00 - 14:00 Lunch Break
• 14:00 - 16:00 Session 3
• 14:00 Invited Talk: Recursive Self-Improvement 40'
Most machine learning researchers focus on domain-specific learning algorithms. Can we also construct meta-learning algorithms that can learn better learning algorithms, and better ways of learning better learning algorithms, and so on, restricted only by the fundamental limitations of computability? In 1965, J. Good already made informal remarks on an intelligence explosion through such recursive self-improvement (RSI).

I will discuss various concrete algorithms (not just vague ideas) for RSI:
1. My diploma thesis (1987) proposed an evolutionary system that learns to inspect and improve its own learning algorithm, where Genetic Programming (GP) is recursively applied to itself to invent better learning methods, meta-learning methods, meta-meta-learning methods, and so on.
2. RSI based on the self-referential Success-Story Algorithm for self-modifying probabilistic programs (1997) was already able to solve complex tasks.
3. My self-referential deep recurrent neural networks (since 1993) run, inspect, and change their own weight-change algorithms. Back in 2001, my former student Hochreiter (now a professor) already had a practical implementation of such an RNN that meta-learns an excellent learning algorithm, at least for a limited domain.
4. The Goedel machine (2006) is the first RSI that is mathematically optimal in a particular sense.
Will RSI finally take off in the near future?
 Speaker: Juergen Schmidhuber (IDSIA)
• 14:40 Invited Talk: Automatically constructing models, and automatically explaining them, too. 40'
How could an artificial intelligence do statistics? It would need an open-ended language of models, and a way to search through and compare those models. Even better would be a system that could explain the different types of structure found, even if that type of structure had never been seen before. This talk presents a prototype of such a system, which builds structured Gaussian process regression models by combining covariance kernels into a custom model for each dataset. The resulting models can be broken down into relatively simple components, and, surprisingly, it is not hard to write code that automatically describes each component, even for novel combinations of kernels. The result is a procedure that takes in a dataset and outputs a report with plots and English descriptions of the different types of structure found in that dataset.
 Speaker: David Duvenaud (Harvard University)
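The kernel-composition search this abstract describes can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the system from the talk: three fixed-hyperparameter base kernels, a candidate grammar limited to pairwise sums, model comparison by GP log marginal likelihood, and a synthetic dataset with a linear trend plus a periodic component.

```python
import numpy as np

# Base kernels over 1-D inputs; hyperparameters are fixed for simplicity.
def k_rbf(a, b):   # smooth local variation
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

def k_per(a, b):   # repeating structure with period 1
    return np.exp(-2.0 * np.sin(np.pi * np.abs(a[:, None] - b[None, :])) ** 2)

def k_lin(a, b):   # linearly growing trend
    return a[:, None] * b[None, :]

BASE = {"SmoothRBF": k_rbf, "Periodic": k_per, "Linear": k_lin}

DESCRIBE = {  # per-component English descriptions, as in the talk's premise
    "SmoothRBF": "a smoothly varying component",
    "Periodic": "a component repeating with period 1",
    "Linear": "a linearly growing trend",
}

def log_marginal_likelihood(k, x, y, noise=0.1):
    # Standard GP evidence: data fit + complexity penalty + constant.
    K = k(x, x) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(x) * np.log(2 * np.pi))

def search_models(x, y):
    # Candidates: each base kernel plus every pairwise sum of two bases.
    names = list(BASE)
    candidates = dict(BASE)
    for i, n1 in enumerate(names):
        for n2 in names[i + 1:]:
            candidates[f"{n1} + {n2}"] = (
                lambda a, b, k1=BASE[n1], k2=BASE[n2]: k1(a, b) + k2(a, b))
    scored = {n: log_marginal_likelihood(k, x, y) for n, k in candidates.items()}
    best = max(scored, key=scored.get)
    return best, scored

x = np.linspace(0, 4, 60)
y = 0.5 * x + np.sin(2 * np.pi * x)   # linear trend + period-1 oscillation
best, scored = search_models(x, y)
report = "; ".join(DESCRIBE[part] for part in best.split(" + "))
print("best model:", best)
print("description:", report)
```

Because the candidate names are compositional, the English report falls out of the same structure that defines the model, which is the core idea behind the automatic descriptions mentioned in the abstract.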
• 15:20 2nd Poster Session 40'
• 16:00 - 16:30 Coffee break
• 16:30 - 18:00 Session 4
• 16:30 Invited Talk: OpenML: A Foundation for Networked & Automatic Machine Learning 40'
OpenML is an online machine learning platform where scientists can automatically log and share data sets, code, and experiments, organize them online, and collaborate with researchers all over the world. It helps to automate many tedious aspects of research, is readily integrated into several machine learning tools, and offers easy-to-use APIs. It also enables large-scale and real-time collaboration, allowing researchers to build directly on each other's latest results, and track the wider impact of their work. Ultimately, this provides a wealth of information for building systems that learn from previous experiments, to either assist people while analyzing data, or automate the process altogether.
 Speaker: Joaquin Vanschoren (Eindhoven University of Technology)
• 17:10 AutoML Challenge 20'  Speaker: Marc Boulle (Orange)
• 17:30 Panel Discussion: Next steps for AutoML 30'
Panelists: Marc Boulle, Rich Caruana, David Duvenaud, Matthew Hoffmann, Juergen Schmidhuber, Michèle Sebag, Joaquin Vanschoren.