Workshop
Adversarial Training
David Lopez-Paz · Leon Bottou · Alec Radford

Fri Dec 9th 08:00 AM -- 06:30 PM @ Area 3
Event URL: https://sites.google.com/site/nips2016adversarial/

In adversarial training, a set of machines learn together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs, Goodfellow et al., 2014) a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning (Radford et al., 2016). From a conceptual perspective, adversarial training is fascinating because it bypasses the need for loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines. In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a hands-on demo, a panel discussion, and contributed spotlights and posters.

Among the research topics to be addressed by the workshop are:

* Novel theoretical insights on adversarial training
* New methods and stability improvements for adversarial optimization
* Adversarial training as a proxy to unsupervised learning of representations
* Regularization and attack schemes based on adversarial perturbations
* Adversarial model evaluation
* Adversarial inference models
* Novel applications of adversarial training
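To give a concrete flavor of the adversarial-perturbation topic above, here is a minimal sketch of a fast-gradient-sign-style attack against a fixed logistic classifier. The weights, input, and step size below are made up for illustration; they are not from any workshop material.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed classifier and a clean input with true label y = 1.
w = np.array([2.0, -3.0, 1.0])   # classifier weights (assumed)
x = np.array([0.5, 0.1, 0.2])    # clean input
y = 1.0

# Gradient of the logistic loss -log p(y|x) with respect to the input x.
p = sigmoid(w @ x)
grad_x = (p - y) * w

# Perturb the input in the sign direction that increases the loss.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv)

print(p, p_adv)  # confidence in the true class drops after the perturbation
```

The same perturbations that attack a classifier can also be folded back into training as a regularizer, which is the connection the workshop topic draws.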

Want to learn more? Get started by generating your own MNIST digits using a GAN in 100 lines of Torch: https://goo.gl/Z2leZF
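For a taste of the training loop itself, here is a minimal sketch (in Python with NumPy, rather than the linked Torch code) of a one-parameter GAN: the generator shifts Gaussian noise by a learned bias, the discriminator is a logistic classifier, and the two are updated in alternation. All hyperparameters and the toy data distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = z + b.
w, c = 0.0, 0.0
b = 0.0
lr, batch = 0.05, 64

def D(x):
    return 1.0 / (1.0 + np.exp(-(w * x + c)))

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)      # data distribution: N(4, 1)
    fake = rng.normal(0.0, 1.0, batch) + b  # generator samples G(z)

    # Discriminator gradient-ascent step on log D(real) + log(1 - D(fake)).
    dr, df = D(real), D(fake)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator gradient-ascent step on log D(fake) (non-saturating loss).
    fake = rng.normal(0.0, 1.0, batch) + b
    b += lr * np.mean((1 - D(fake)) * w)

print(b)  # the generator's shift should drift toward the data mean, 4
```

The alternation between the two updates is the "competing goals" dynamic described above: the discriminator sharpens whenever the two distributions differ, and its gradient tells the generator which way to move.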

09:00 AM Set up posters (Setup)
09:15 AM Welcome (Talk) David Lopez-Paz, Alec Radford, Leon Bottou
09:30 AM Introduction to Generative Adversarial Networks (Talk) Ian Goodfellow
10:00 AM How to train a GAN? (Talk) Soumith Chintala
11:00 AM Learning features to compare distributions (Talk) Arthur Gretton
11:30 AM Training Generative Neural Samplers using Variational Divergence (Talk) Sebastian Nowozin
12:00 PM Lunch break (Break)
02:00 PM Adversarially Learned Inference (ALI) and BiGANs (Talk) Aaron Courville
02:30 PM Energy-Based Adversarial Training and Video Prediction (Talk) Yann LeCun
03:00 PM Discussion panel (Panel) Ian Goodfellow, Soumith Chintala, Arthur Gretton, Sebastian Nowozin, Aaron Courville, Yann LeCun, Emily Denton
04:00 PM Coffee break (Break)
04:30 PM Spotlight presentations (Talk)
06:00 PM Poster session and open discussions (Poster session)

Author Information

David Lopez-Paz (Facebook AI Research)
Leon Bottou (Facebook AI Research)

Léon Bottou received a Diplôme from l'Ecole Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from Ecole Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He worked at AT&T Bell Labs from 1991 to 1992 and at AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning. His contributions to this field address theory, algorithms, and large-scale applications. Léon's secondary research interest is data compression and coding. His best known contribution in this field is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of KXEN Inc.

Alec Radford (OpenAI)
