Seeking informative projecting directions has been an important task in applications of the sliced Wasserstein distance. However, finding these directions usually requires an iterative optimization procedure over the space of projecting directions, which is computationally expensive. The computational issue is even more severe in deep learning applications, where the distance between two mini-batch probability measures must be computed repeatedly. This nested loop has been one of the main obstacles to using sliced Wasserstein distances based on good projections in practice. To address this challenge, we propose to utilize the \textit{learning-to-optimize} technique, also known as \textit{amortized optimization}, to predict the informative direction for any given pair of mini-batch probability measures. To the best of our knowledge, this is the first work that bridges amortized optimization and sliced Wasserstein generative models. In particular, we derive linear amortized models, generalized linear amortized models, and non-linear amortized models, which correspond to three types of novel mini-batch losses, named \emph{amortized sliced Wasserstein}. We demonstrate the favorable performance of the proposed sliced losses in deep generative modeling on standard benchmark datasets.
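To make the idea concrete, here is a minimal NumPy sketch of the amortized approach described in the abstract. It assumes a simple linear amortized model (parameters `W1`, `W2`, `b` are hypothetical and would in practice be trained jointly with the generator) that maps two flattened mini-batches to a unit projection direction, along which the one-dimensional Wasserstein-1 distance is computed in closed form by sorting; this is an illustrative sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_amortized_direction(X, Y, W1, W2, b):
    """Hypothetical linear amortized model: map the two flattened
    mini-batches to a direction, then normalize onto the unit sphere."""
    theta = W1 @ X.ravel() + W2 @ Y.ravel() + b
    return theta / np.linalg.norm(theta)

def w1_along(X, Y, theta):
    """1-D Wasserstein-1 distance between the projections of two
    equal-size mini-batches onto direction theta (closed form via sorting)."""
    px, py = np.sort(X @ theta), np.sort(Y @ theta)
    return np.mean(np.abs(px - py))

n, d = 8, 3                                  # mini-batch size, data dimension
X = rng.normal(size=(n, d))                  # mini-batch from one measure
Y = rng.normal(loc=2.0, size=(n, d))         # mini-batch from the other

# Amortized model parameters (illustrative; trained in the real setting).
W1 = 0.1 * rng.normal(size=(d, n * d))
W2 = 0.1 * rng.normal(size=(d, n * d))
b = rng.normal(size=d)

theta = linear_amortized_direction(X, Y, W1, W2, b)   # one forward pass,
loss = w1_along(X, Y, theta)                          # no inner optimization loop
```

The key point the sketch conveys is that the inner optimization over directions is replaced by a single forward pass of the amortized model, so the per-mini-batch cost drops from an iterative search to one prediction plus a sort.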
Author Information
Khai Nguyen (University of Texas at Austin)
Nhat Ho (University of Texas at Austin)