Workshop
Adversarial Training
David Lopez-Paz · Leon Bottou · Alec Radford
Thu 8 Dec, 11 p.m. PST
In adversarial training, a set of machines learns together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs, Goodfellow et al., 2014) a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning (Radford et al., 2016). From a conceptual perspective, adversarial training is fascinating because it bypasses the need for hand-crafted loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines. In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a hands-on demo, a panel discussion, and contributed spotlights and posters.
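The competing goals above can be made concrete with the GAN value function V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. For a fixed generator, Goodfellow et al. (2014) show the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), and that V equals -log 4 exactly when the generator matches the data distribution. A minimal numerical sketch of this fact, using illustrative 1-D Gaussians (the function names here are ours, not from any particular GAN library):

```python
import numpy as np

def value_at_optimal_d(p_data, p_g, dx):
    """Numerically integrate the GAN value function V(D*, G),
    evaluated at the optimal discriminator
    D*(x) = p_data(x) / (p_data(x) + p_g(x))."""
    d_star = p_data / (p_data + p_g)
    term_real = np.sum(p_data * np.log(d_star)) * dx       # E_x[log D*(x)]
    term_fake = np.sum(p_g * np.log(1.0 - d_star)) * dx    # E_g[log(1 - D*(x))]
    return term_real + term_fake

# Toy densities: unit-variance Gaussians on a fine grid.
xs = np.linspace(-6.0, 6.0, 20001)
dx = xs[1] - xs[0]
gauss = lambda mu: np.exp(-0.5 * (xs - mu) ** 2) / np.sqrt(2.0 * np.pi)

# Generator matches the data: D* = 1/2 everywhere, so V = -log 4.
v_matched = value_at_optimal_d(gauss(0.0), gauss(0.0), dx)

# Mismatched generator: V exceeds -log 4 by twice the
# Jensen-Shannon divergence between the two distributions.
v_mismatched = value_at_optimal_d(gauss(0.0), gauss(2.0), dx)
```

The generator's training signal is exactly this gap: minimizing V over generators drives the Jensen-Shannon divergence term to zero, i.e. pulls the model distribution toward the data distribution, without any explicitly specified loss on samples.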
Among the research topics to be addressed by the workshop are:
* Novel theoretical insights on adversarial training
* New methods and stability improvements for adversarial optimization
* Adversarial training as a proxy to unsupervised learning of representations
* Regularization and attack schemes based on adversarial perturbations
* Adversarial model evaluation
* Adversarial inference models
* Novel applications of adversarial training
Want to learn more? Get started by generating your own MNIST digits using a GAN in 100 lines of Torch: https://goo.gl/Z2leZF
Schedule
Fri 12:00 a.m. - 12:15 a.m. | Set up posters (Setup)
Fri 12:15 a.m. - 12:30 a.m. | Welcome (Talk) | David Lopez-Paz · Alec Radford · Leon Bottou
Fri 12:30 a.m. - 1:00 a.m. | Introduction to Generative Adversarial Networks (Talk) | Ian Goodfellow
Fri 1:00 a.m. - 1:30 a.m. | How to train a GAN? (Talk) | Soumith Chintala
Fri 2:00 a.m. - 2:30 a.m. | Learning features to compare distributions (Talk) | Arthur Gretton
Fri 2:30 a.m. - 3:00 a.m. | Training Generative Neural Samplers using Variational Divergence (Talk) | Sebastian Nowozin
Fri 3:00 a.m. - 5:00 a.m. | Lunch break
Fri 5:00 a.m. - 5:30 a.m. | Adversarially Learned Inference (ALI) and BiGANs (Talk) | Aaron Courville
Fri 5:30 a.m. - 6:00 a.m. | Energy-Based Adversarial Training and Video Prediction (Talk) | Yann LeCun
Fri 6:00 a.m. - 7:00 a.m. | Discussion panel | Ian Goodfellow · Soumith Chintala · Arthur Gretton · Sebastian Nowozin · Aaron Courville · Yann LeCun · Emily Denton
Fri 7:00 a.m. - 7:30 a.m. | Coffee break
Fri 7:30 a.m. - 9:00 a.m. | Spotlight presentations (Talk)
Fri 9:00 a.m. - 1:00 p.m. | Poster session
Fri 9:00 a.m. - 1:00 p.m. | Additional posters and open discussions (Poster session)