Workshop
Adversarial Training
David Lopez-Paz · Leon Bottou · Alec Radford

Thu Dec 08 11:00 PM -- 09:30 AM (PST) @ Area 3
Event URL: https://sites.google.com/site/nips2016adversarial/

In adversarial training, a set of machines learn together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs; Goodfellow et al., 2014) a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning (Radford et al., 2016). From a conceptual perspective, adversarial training is fascinating because it bypasses the need for loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines. In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a hands-on demo, a panel discussion, and contributed spotlights and posters.
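
In symbols, the game between the two functions is usually written as the minimax objective of Goodfellow et al. (2014), reproduced here for convenience:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

where G is the generator, D is the discriminator, and p_z is a fixed noise distribution from which the generator draws its inputs.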

Among the research topics to be addressed by the workshop are:

* Novel theoretical insights on adversarial training
* New methods and stability improvements for adversarial optimization
* Adversarial training as a proxy to unsupervised learning of representations
* Regularization and attack schemes based on adversarial perturbations
* Adversarial model evaluation
* Adversarial inference models
* Novel applications of adversarial training

Want to learn more? Get started by generating your own MNIST digits using a GAN in 100 lines of Torch: https://goo.gl/Z2leZF
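
The linked example is written in Torch. As a rough illustration of the same recipe in Python (a minimal sketch assuming PyTorch and torchvision; the network sizes and hyperparameters below are illustrative choices, not the contents of the linked code):

# A minimal GAN on MNIST (illustrative sketch only; not the Torch code linked above).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_dim = 64

# Generator maps noise to a flattened 28x28 image; discriminator scores "real vs. fake".
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh()).to(device)
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid()).to(device)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])
loader = DataLoader(datasets.MNIST("mnist", train=True, download=True, transform=transform),
                    batch_size=128, shuffle=True)

for epoch in range(25):
    for real, _ in loader:
        real = real.view(real.size(0), -1).to(device)
        n = real.size(0)
        ones = torch.ones(n, 1, device=device)
        zeros = torch.zeros(n, 1, device=device)

        # Discriminator step: push real images toward 1 and generated images toward 0.
        fake = G(torch.randn(n, latent_dim, device=device))
        loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator step: try to make the discriminator label the fakes as real.
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

After a few epochs, sampling G(torch.randn(64, latent_dim, device=device)) should produce digit-like images.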

Fri 12:00 a.m. - 12:15 a.m.
Set up posters (Setup)
Fri 12:15 a.m. - 12:30 a.m.

Just a quick introduction to the first NIPS workshop on Adversarial Training.

David Lopez-Paz, Alec Radford, Leon Bottou
Fri 12:30 a.m. - 1:00 a.m.

Generative adversarial networks are deep models that learn to generate samples drawn from the same distribution as the training data. As with many deep generative models, the log-likelihood for a GAN is intractable. Unlike most other models, GANs do not require Monte Carlo or variational methods to overcome this intractability. Instead, GANs are trained by seeking a Nash equilibrium in a game played between a discriminator network that attempts to distinguish real data from model samples and a generator network that attempts to fool the discriminator. Stable algorithms for finding Nash equilibria remain an important research direction. Like many other models, GANs can also be applied to semi-supervised learning.
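
A useful reference point for the equilibrium mentioned above (this is the standard analysis from Goodfellow et al., 2014, restated here for convenience): for a fixed generator with model density p_g, the optimal discriminator is

D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},

and substituting D^* back into the value function shows that the generator is then minimizing, up to additive and multiplicative constants, the Jensen-Shannon divergence between p_data and p_g. The Nash equilibrium is reached when p_g = p_data and D^* is identically 1/2.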

Ian Goodfellow
Fri 1:00 a.m. - 1:30 a.m.
How to train a GAN? (Talk)
Soumith Chintala
Fri 2:00 a.m. - 2:30 a.m.
Learning features to distinguish distributions (Talk)
Arthur Gretton

An important component of GANs is the discriminator, which tells apart samples from the generator and samples from a reference set. Discriminators implement empirical approximations to various divergence measures between probability densities (originally Jensen-Shannon, and more recently other f-divergences and integral probability metrics). If we think about this problem in the setting of hypothesis testing, a good discriminator can tell generator samples from reference samples with high probability: in other words, it maximizes the test power. A reasonable goal then becomes to learn a discriminator that directly maximizes test power (we will briefly look at relations between test power and classifier performance).

I will demonstrate ways of training a discriminator with maximum test power using two divergence measures: the maximum mean discrepancy (MMD), and differences of learned smooth features (the ME test, NIPS 2016). In both cases, the key point is that variance matters: it is not enough to have a large empirical divergence; we also need to have high confidence in the value of our divergence. Using an optimized MMD discriminator, we can detect subtle differences in the distribution of GAN outputs and real hand-written digits which humans are unable to find (for instance, small imbalances in the proportions of certain digits, or minor distortions that are implausible in normal handwriting).
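
For readers unfamiliar with the first quantity, here is a rough sketch of a plain (biased) squared-MMD estimate under a Gaussian kernel; it only shows the basic statistic, not the optimized, power-maximizing tests discussed in the talk, and the Python/PyTorch framing and bandwidth choice are assumptions made for illustration:

import torch

def gaussian_kernel(a, b, bandwidth=1.0):
    # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of MMD^2 between samples x ~ P and y ~ Q:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

# Toy usage: two samples from the same Gaussian should give a smaller MMD^2
# than two samples from slightly shifted Gaussians.
x = torch.randn(500, 10)
z = torch.randn(500, 10)
y = torch.randn(500, 10) + 0.2
print(mmd2(x, z).item(), mmd2(x, y).item())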

Fri 2:30 a.m. - 3:00 a.m.

Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method makes it possible to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach, and that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.
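
Concretely, the variational divergence estimation view rests on a lower bound of the following general form, with T_ω a "variational function" played by the discriminator and f^* the convex conjugate of f (restated here for convenience):

D_f(P \,\|\, Q_\theta) \;\ge\; \sup_\omega \; \mathbb{E}_{x \sim P}\big[T_\omega(x)\big] - \mathbb{E}_{x \sim Q_\theta}\big[f^*(T_\omega(x))\big]

Training alternates between tightening the bound over ω and minimizing it over the generator parameters θ; a particular choice of f recovers, up to constants, the original GAN objective.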

Sebastian Nowozin
Fri 3:00 a.m. - 5:00 a.m.
Lunch break (Break)
Fri 5:00 a.m. - 5:30 a.m.

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space, while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network that is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through inspection of model samples and reconstructions, and confirm the usefulness of the learned representations by obtaining performance competitive with other recent approaches on the semi-supervised SVHN task.
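
In symbols, the game described above can be written as the ALI value function (restated here for convenience; both the inference network E and the generation network G may be stochastic):

\min_{G, E} \max_D \; \mathbb{E}_{x \sim q(x)}\big[\log D(x, E(x))\big] + \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z), z)\big)\big]

where E maps data to latent codes, G maps latent codes to data, and D discriminates between the two kinds of joint (data, latent) pairs.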

Aaron Courville
Fri 5:30 a.m. - 6:00 a.m.
Energy-Based Adversarial Training and Video Prediction (Talk)
Yann LeCun
Fri 6:00 a.m. - 7:00 a.m.

Submit your questions for the panel discussion to

https://www.reddit.com/r/MachineLearning/comments/5fm66i/dnips2016askaworkshopanything_adversarial/

Ian Goodfellow, Soumith Chintala, Arthur Gretton, Sebastian Nowozin, Aaron Courville, Yann LeCun, Emily Denton
Fri 7:00 a.m. - 7:30 a.m.
Coffee break (Break)
Fri 7:30 a.m. - 9:00 a.m.

David Pfau and Oriol Vinyals. Connecting Generative Adversarial Networks and Actor-Critic Methods

Shakir Mohamed and Balaji Lakshminarayanan. Learning in Implicit Generative Models

Guim Perarnau, Joost Van De Weijer, Bogdan Raducanu and Jose M. Álvarez. Invertible Conditional GANs for image editing

Augustus Odena, Christopher Olah and Jonathon Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs

Luke Metz, Ben Poole, David Pfau and Jascha Sohl-Dickstein. Unrolled Generative Adversarial Networks

Chelsea Finn, Paul Christiano, Pieter Abbeel and Sergey Levine. A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models

Pauline Luc, Camille Couprie, Soumith Chintala and Jakob Verbeek. Semantic Segmentation using Adversarial Networks

Tarik Arici and Asli Celikyilmaz. Associative Adversarial Networks

Nina Narodytska and Shiva Kasiviswanathan. Simple Black-Box Adversarial Perturbations for Deep Networks

Pedro Tabacof, Julia Tavares and Eduardo Valle. Adversarial Images for Variational Autoencoders

Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov and Roger Grosse. On the Quantitative Analysis of Decoder-Based Generative Models

Takeru Miyato, Andrew Dai and Ian Goodfellow. Adversarial Training Methods for Semi-Supervised Text Classification

Fri 9:00 a.m. - 1:00 p.m.

The posters will be up from the beginning of the day and accessible during all breaks. From this point on, however, the room is dedicated to their presentation and discussion.

Browse the list of papers at https://sites.google.com/site/nips2016adversarial/home/accepted-papers

Author Information

David Lopez-Paz (Facebook AI Research)
Leon Bottou (Facebook AI Research)

Léon Bottou received a Diplôme from l'Ecole Polytechnique, Paris in 1987, a Magistère en Mathématiques Fondamentales et Appliquées et Informatiques from Ecole Normale Supérieure, Paris in 1988, and a PhD in Computer Science from Université de Paris-Sud in 1991. He was with AT&T Bell Labs from 1991 to 1992 and with AT&T Labs from 1995 to 2002. Between 1992 and 1995 he was chairman of Neuristique in Paris, a small company pioneering machine learning for data mining applications. He has been with NEC Labs America in Princeton since 2002. Léon's primary research interest is machine learning. His contributions to this field address theory, algorithms, and large-scale applications. Léon's secondary research interest is data compression and coding. His best-known contribution in this field is the DjVu document compression technology (http://www.djvu.org). Léon has published over 70 papers and serves on the boards of JMLR and IEEE TPAMI. He also serves on the scientific advisory board of Kxen Inc.

Alec Radford (OpenAI)
