We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT. This approach is useful when repeatedly solving similar OT problems between different measures because it leverages knowledge from past problems to rapidly predict and solve new ones, whereas standard methods ignore past solutions and suboptimally re-solve each problem from scratch. We demonstrate that Meta OT models surpass the standard convergence rates of log-Sinkhorn solvers in the discrete setting and of convex potentials in the continuous setting. We evaluate on transport settings between images and spherical data, and show significant improvements in the computational time of standard OT solvers.
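The discrete-setting claim can be made concrete with a minimal sketch: a log-domain Sinkhorn solver whose dual potential can be warm-started from a predicted initialization, as amortized prediction would provide. This is an illustrative assumption, not the paper's implementation; the function name log_sinkhorn and the f_init hook are hypothetical.

    import numpy as np
    from scipy.special import logsumexp

    def log_sinkhorn(C, a, b, eps=0.05, n_iters=500, f_init=None):
        # Log-domain Sinkhorn for entropic OT (a sketch under assumptions above).
        # C: (n, m) cost matrix; a: (n,), b: (m,) marginal histograms.
        # f_init: optional warm-start dual potential, e.g. one predicted by a
        # Meta OT model; None reproduces the standard cold start.
        f = np.zeros(len(a)) if f_init is None else np.asarray(f_init, dtype=float)
        log_a, log_b = np.log(a), np.log(b)
        for _ in range(n_iters):
            # Alternating dual updates via log-sum-exp for numerical stability.
            g = eps * (log_b - logsumexp((f[:, None] - C) / eps, axis=0))
            f = eps * (log_a - logsumexp((g[None, :] - C) / eps, axis=1))
        # Recover the transport plan from the dual potentials.
        P = np.exp((f[:, None] + g[None, :] - C) / eps)
        return f, g, P

A well-predicted f_init starts the iterations near the optimum, so far fewer updates are needed to reach a given marginal error than with a cold start, which is the source of the reported computational savings.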
Author Information
Brandon Amos (Facebook AI Research)
Samuel Cohen (University College London)
Giulia Luise (University College London)
Ievgen Redko (Aalto University)
More from the Same Authors
- 2021: Cross-Domain Imitation Learning via Optimal Transport
  Arnaud Fickinger · Samuel Cohen · Stuart Russell · Brandon Amos
- 2021: Imitation Learning from Pixel Observations for Continuous Control
  Samuel Cohen · Brandon Amos · Marc Deisenroth · Mikael Henaff · Eugene Vinitsky · Denis Yarats
- 2021: On Combining Expert Demonstrations in Imitation Learning via Optimal Transport
  Ilana Sebag · Samuel Cohen · Marc Deisenroth
- 2021: Sliced Multi-Marginal Optimal Transport
  Samuel Cohen · Alexander Terenin · Yannik Pitcan · Brandon Amos · Marc Deisenroth · Senanayak Sesh Kumar Karri
- 2022: Simultaneous alignment of cells and features of unpaired single-cell multi-omics datasets with co-optimal transport
  Pinar Demetci · Quang Huy Tran · Ievgen Redko · Ritambhara Singh
- 2022: Optimal Transport for Offline Imitation Learning
  Yicheng Luo · Zhengyao Jiang · Samuel Cohen · Edward Grefenstette · Marc Deisenroth
- 2022: Fair Synthetic Data Does not Necessarily Lead to Fair Models
  Yam Eitan · Nathan Cavaglione · Michael Arbel · Samuel Cohen
- 2021: The NeurIPS 2021 BEETL Competition: Benchmarks for EEG Transfer Learning + Q&A
  Xiaoxi Wei · Vinay Jayaram · Sylvain Chevallier · Giulia Luise · Camille Jeunet · Moritz Grosse-Wentrup · Alexandre Gramfort · Aldo A Faisal
- 2020 Poster: A Non-Asymptotic Analysis for Stein Variational Gradient Descent
  Anna Korba · Adil Salim · Michael Arbel · Giulia Luise · Arthur Gretton
- 2020 Poster: The Wasserstein Proximal Gradient Algorithm
  Adil Salim · Anna Korba · Giulia Luise
- 2020 Poster: Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning
  Luca Oneto · Michele Donini · Giulia Luise · Carlo Ciliberto · Andreas Maurer · Massimiliano Pontil
- 2019 Poster: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2019 Spotlight: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2018 Poster: Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance
  Giulia Luise · Alessandro Rudi · Massimiliano Pontil · Carlo Ciliberto