

Poster

Tree-Sliced Variants of Wasserstein Distances

Tam Le · Makoto Yamada · Kenji Fukumizu · Marco Cuturi

East Exhibition Hall B + C #80

Keywords: [ Algorithms -> Classification; Algorithms -> Kernel Methods; Algorithms -> Metric Learning; Optimization ] [ Convex Optimization ] [ Similarity and Distance Learning ] [ Algorithms ]


Abstract:

Optimal transport (OT) theory defines a powerful set of tools to compare probability distributions. OT suffers, however, from a few computational and statistical drawbacks, which have encouraged the proposal of several regularized variants of OT in the recent literature, one of the most notable being the sliced formulation, which exploits the closed-form formula available for univariate distributions by projecting high-dimensional measures onto random lines. We consider in this work a more general family of ground metrics, namely tree metrics, which also yield fast closed-form computations and negative definite distances, and of which the sliced-Wasserstein distance is a particular case (the tree is a chain). We propose the tree-sliced Wasserstein distance, computed by averaging the Wasserstein distance between measures using random tree metrics, built adaptively in either low or high-dimensional spaces. Exploiting the negative definiteness of that distance, we also propose a positive definite kernel, and test it against other baselines on a few benchmark tasks.
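As a rough illustration of the closed-form computation the abstract refers to (not the authors' code): the Wasserstein-1 distance between two measures supported on the nodes of a tree reduces to a weighted sum, over edges, of the absolute difference of the two subtree masses. The parent-array tree representation and the function name below are assumptions made for this sketch.

```python
import numpy as np

def tree_wasserstein(parent, weight, mu, nu):
    """Closed-form Wasserstein-1 distance between two measures on a tree.

    parent[i] : index of node i's parent (root has parent -1)
    weight[i] : length of the edge (i, parent[i]); ignored at the root
    mu, nu    : nonnegative masses on the nodes, each summing to 1

    TW(mu, nu) = sum over edges e of weight(e) * |mu(subtree_e) - nu(subtree_e)|

    Assumes children have larger indices than their parents (nodes added
    root-first), so a single reverse pass accumulates subtree masses.
    """
    n = len(parent)
    diff = np.asarray(mu, dtype=float) - np.asarray(nu, dtype=float)
    # Leaves-to-root pass: after node i is processed, diff[i] holds
    # mu(subtree rooted at i) - nu(subtree rooted at i).
    for i in range(n - 1, -1, -1):
        if parent[i] >= 0:
            diff[parent[i]] += diff[i]
    # Each non-root edge contributes weight * |subtree mass difference|.
    return sum(weight[i] * abs(diff[i]) for i in range(n) if parent[i] >= 0)
```

When the tree is a chain of evenly spaced nodes, this recovers the one-dimensional Wasserstein distance used in the sliced formulation; averaging the value over several random trees gives the tree-sliced variant described above.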
