Tutorial
Deep Belief Nets
Geoffrey E Hinton

Mon Dec 03 03:30 PM -- 05:30 PM (PST)

Complex probabilistic models of unlabeled data can be created by combining simpler models. Mixture models are obtained by averaging the densities of simpler models, and "products of experts" are obtained by multiplying the densities together and renormalizing. A far more powerful type of combination is to form a "composition of experts" by treating the values of the latent variables of one model as the data for learning the next model. The first half of the tutorial will show how deep belief nets -- directed generative models with many layers of hidden variables -- can be learned one layer at a time by composing simple, undirected, product-of-experts models that only have one hidden layer. It will also explain why composing directed models does not work.
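The contrast between averaging densities and multiplying-and-renormalizing them can be seen with two toy one-dimensional Gaussian "experts" (illustrative densities, not taken from the tutorial): the mixture spreads probability mass over everything either expert likes, while the product concentrates mass where both experts agree, so it is sharper than either expert alone.

```python
import numpy as np

# Evaluate two toy Gaussian "experts" on a grid (illustrative choices).
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p1 = gaussian(x, -1.0, 2.0)
p2 = gaussian(x, +1.0, 2.0)

# Mixture model: average the (already normalized) densities.
mixture = 0.5 * p1 + 0.5 * p2

# Product of experts: multiply the densities, then renormalize.
product = p1 * p2
product /= product.sum() * dx

def variance(p):
    # Variance under density p, approximated by a Riemann sum.
    mean = np.sum(p * x) * dx
    return np.sum(p * x ** 2) * dx - mean ** 2

# Each expert can "veto" regions the other assigns mass to, so the
# product is much sharper than the mixture.
print(variance(mixture), variance(product))
```

For these two experts the product of two Gaussians is itself a Gaussian with a smaller variance than either factor, whereas the mixture is broader than either component.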

Deep belief nets are trained as generative models on large, unlabeled datasets, but once multiple layers of features have been created by unsupervised learning, they can be fine-tuned to give excellent discrimination on small, labeled datasets. The second half of the tutorial will describe applications of deep belief nets to several tasks including object recognition, non-linear dimensionality reduction, document retrieval, and the interpretation of medical images. It will also show how the learning procedure for deep belief nets can be extended to high-dimensional time series and hierarchies of Conditional Random Fields.
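The layer-by-layer composition described above can be sketched in code. Below is a minimal, illustrative implementation assuming binary units and one-step contrastive divergence (CD-1); the class and function names, layer sizes, learning rate, and epoch count are all hypothetical choices for the sketch, not the tutorial's actual code. Each trained layer's hidden activations become the "data" for the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """A restricted Boltzmann machine (one hidden layer of binary
    units) trained with one step of contrastive divergence (CD-1).
    A toy sketch; hyperparameters are illustrative."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        # Positive phase: hidden probabilities driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one step of Gibbs sampling.
        v1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(v1)
        # Approximate log-likelihood gradient (CD-1).
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

def train_dbn(data, layer_sizes, epochs=5):
    """Greedy layer-wise training: each RBM's hidden activations
    serve as the data for the next RBM."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_update(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # features for the next layer
    return rbms

# Toy binary data: 200 examples of 16 visible units.
data = (rng.random((200, 16)) < 0.3).astype(float)
dbn = train_dbn(data, layer_sizes=[8, 4])
```

After this unsupervised stage, the stacked weights would initialize a feed-forward net that is fine-tuned with backpropagation on a labeled set, which is the pretraining-plus-fine-tuning recipe the abstract describes.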

Author Information

Geoffrey E Hinton (Google & University of Toronto)

Geoffrey Hinton received his PhD in Artificial Intelligence from Edinburgh in 1978 and spent five years as a faculty member at Carnegie-Mellon where he pioneered back-propagation, Boltzmann machines and distributed representations of words. In 1987 he became a fellow of the Canadian Institute for Advanced Research and moved to the University of Toronto. In 1998 he founded the Gatsby Computational Neuroscience Unit at University College London, returning to the University of Toronto in 2001. His group at the University of Toronto then used deep learning to change the way speech recognition and object recognition are done. He currently splits his time between the University of Toronto and Google. In 2010 he received the NSERC Herzberg Gold Medal, Canada's top award in Science and Engineering.
