Modern neural networks are powerful function approximators that transform complex input data to extract information useful for one or more tasks. Extracting information is a process of progressively transforming the data into simpler, more usable representations. For example, binary classification might start from high-dimensional images and progressively reduce them to a single number between 0 and 1: the probability that the original image contained a dog. Many transformations of the input image should lead to the same output; changing the background colour, rotating the image, and so on, should not affect the answer given by the network. When the transformations applied to the input are invertible, we are dealing with symmetries of the inputs, and a natural question is whether knowledge of such symmetries can help researchers devise better neural networks. In this section, we will visit the subject of symmetries, their benefits, and examples of their usage.
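As a minimal illustration of the invariance idea above (not taken from the tutorial itself), the sketch below averages a toy classifier's output over the four 90-degree rotations of an image, which makes the result exactly invariant to that symmetry group. The functions `score` and `invariant_score` are hypothetical stand-ins for a real network head.

```python
# Minimal sketch: enforcing invariance to a symmetry group by averaging
# over its elements. The group here is C4, the four 90-degree rotations
# of an image; `score` is a toy stand-in for a classifier returning a logit.
import numpy as np


def score(image: np.ndarray) -> float:
    """Toy stand-in for a network: a fixed random linear readout."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=image.shape)
    return float(np.sum(w * image))


def invariant_score(image: np.ndarray) -> float:
    """Average the score over all rotations in C4.

    Rotating the input only permutes the terms of the average, so the
    symmetrized score is exactly invariant under 90-degree rotations.
    """
    return float(np.mean([score(np.rot90(image, k)) for k in range(4)]))


image = np.random.default_rng(1).normal(size=(8, 8))
rotated = np.rot90(image)

print(score(image), score(rotated))                      # generally differ
print(invariant_score(image), invariant_score(rotated))  # equal up to float error
```

Averaging over group elements is only one way to build symmetry into a model; the tutorial discusses when and how such symmetry priors help more broadly.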
Author Information
Sébastien Racanière (DeepMind)
Sébastien Racanière is a Staff Research Engineer at DeepMind. His current interests in ML revolve around the interaction between Physics and Machine Learning, with an emphasis on the use of symmetries. He got his PhD in pure mathematics from the Université Louis Pasteur, Strasbourg, in 2002, with co-supervisors Michèle Audin (Strasbourg) and Frances Kirwan (Oxford). This was followed by a two-year Marie Curie Individual Fellowship at Imperial College London, and another postdoc in Cambridge (UK). His first job in industry was at the Samsung European Research Institute, investigating the use of learning algorithms in mobile phones, followed by UGS, a Cambridge-based company, where he worked on a 3D search engine. He afterwards worked for Maxeler in London, programming FPGAs. He then moved to Google, and finally DeepMind.
More from the Same Authors
- 2021 : Implicit Riemannian Concave Potential Maps »
  Danilo Jimenez Rezende · Sébastien Racanière
- 2021 Tutorial: Pay Attention to What You Need: Do Structural Priors Still Matter in the Age of Billion Parameter Models? »
  Irina Higgins · Antonia Creswell · Sébastien Racanière
- 2020 Poster: Disentangling by Subspace Diffusion »
  David Pfau · Irina Higgins · Alex Botev · Sébastien Racanière
- 2017 : Imagination-Augmented Agents for Deep Reinforcement Learning »
  Sébastien Racanière
- 2017 Poster: Imagination-Augmented Agents for Deep Reinforcement Learning »
  Sébastien Racanière · Theophane Weber · David Reichert · Lars Buesing · Arthur Guez · Danilo Jimenez Rezende · Adrià Puigdomènech Badia · Oriol Vinyals · Nicolas Heess · Yujia Li · Razvan Pascanu · Peter Battaglia · Demis Hassabis · David Silver · Daan Wierstra
- 2017 Oral: Imagination-Augmented Agents for Deep Reinforcement Learning »
  Sébastien Racanière · Theophane Weber · David Reichert · Lars Buesing · Arthur Guez · Danilo Jimenez Rezende · Adrià Puigdomènech Badia · Oriol Vinyals · Nicolas Heess · Yujia Li · Razvan Pascanu · Peter Battaglia · Demis Hassabis · David Silver · Daan Wierstra