Workshop
Adversarial Training
David Lopez-Paz · Leon Bottou · Alec Radford

Thu Dec 08 11:00 PM -- 09:30 AM (PST) @ Area 3
Event URL: https://sites.google.com/site/nips2016adversarial/

In adversarial training, a set of machines learns together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs; Goodfellow et al., 2014) a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning (Radford et al., 2016). From a conceptual perspective, adversarial training is fascinating because it bypasses the need for loss functions in learning and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines. In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a hands-on demo, a panel discussion, and contributed spotlights and posters.
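
For reference, this competition can be written as the two-player minimax objective of Goodfellow et al. (2014), in which the discriminator D is trained to maximize, and the generator G to minimize,

\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]

where p_data is the data distribution and p_z is the prior over the generator's latent noise z.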

Among the research topics to be addressed by the workshop are:

* Novel theoretical insights on adversarial training
* New methods and stability improvements for adversarial optimization
* Adversarial training as a proxy to unsupervised learning of representations
* Regularization and attack schemes based on adversarial perturbations
* Adversarial model evaluation
* Adversarial inference models
* Novel applications of adversarial training

Want to learn more? Get started by generating your own MNIST digits using a GAN in 100 lines of Torch: https://goo.gl/Z2leZF
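
For those who prefer Python, the sketch below is a rough, hypothetical PyTorch analogue of that Torch tutorial (it is not the linked code): a small fully connected generator and discriminator trained on MNIST with the alternating updates described above. Layer sizes, learning rates, and the number of epochs are illustrative choices.

```python
# Minimal GAN on MNIST -- a hypothetical PyTorch sketch, not the linked Torch code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

latent_dim = 64

# Generator: latent noise -> flattened 28x28 image in [-1, 1].
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Discriminator: flattened image -> probability that the image is real.
D = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

mnist = datasets.MNIST(
    "data", train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),  # scale pixels to [-1, 1] to match Tanh
    ]),
)
loader = DataLoader(mnist, batch_size=128, shuffle=True)

for epoch in range(5):
    for real, _ in loader:
        real = real.view(real.size(0), -1)
        b = real.size(0)
        ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

        # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
        fake = G(torch.randn(b, latent_dim))
        loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator update: push D(fake) toward 1, i.e. try to fool the discriminator.
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

# Sample a few digits from the trained generator.
with torch.no_grad():
    samples = G(torch.randn(16, latent_dim)).view(16, 28, 28)
```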

Author Information

David Lopez-Paz (Facebook AI Research)
Leon Bottou (Facebook AI Research)

Léon Bottou received a Diplôme from l'Ecole Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from Ecole Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He was with AT&T Bell Labs from 1991 to 1992 and with AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning; his contributions to this field address theory, algorithms, and large-scale applications. His secondary research interest is data compression and coding, where his best-known contribution is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of Kxen Inc.

Alec Radford (OpenAI)
