

Poster

Learning Elastic Costs to Shape Monge Displacements

Michal Klein · Aram-Alexandre Pooladian · Pierre Ablin · Eugene Ndiaye · Jonathan Niles-Weed · Marco Cuturi

West Ballroom A-D #7109
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Given a source and a target probability measure, the Monge problem studies efficient ways to map the former onto the latter. This efficiency is quantified by defining a *cost* function between source and target data. Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell^2_2(x, y) := \tfrac{1}{2}\|x - y\|_2^2$. The benefits of using *elastic* costs, defined using a regularizer $\tau$ as $c(x, y) := \ell^2_2(x, y) + \tau(x - y)$, were recently highlighted in (Cuturi et al. 2023). Such costs shape the *displacements* of Monge maps $T$, namely the difference between a source point and its image, $T(x) - x$, by giving them a structure that matches that of the proximal operator of $\tau$. In this work, we make two important contributions to the study of elastic costs: *(i)* for any elastic cost, we propose a numerical method to compute Monge maps that are provably optimal. This provides a much-needed routine to create synthetic problems where the ground-truth OT map is known, by analogy to the Brenier theorem, which states that the gradient of any convex potential is always a valid Monge map for the $\ell^2_2$ cost; *(ii)* we propose a loss to *learn* the parameter $\theta$ of a parameterized regularizer $\tau_\theta$, and apply it in the case where $\tau_A(z) := \|A^\perp z\|^2_2$. This regularizer promotes displacements that lie on a low-dimensional subspace of $\mathbb{R}^d$, spanned by the $p$ rows of $A \in \mathbb{R}^{p \times d}$. We illustrate the soundness of our procedure on synthetic data, generated using our first contribution, in which we show near-perfect recovery of $A$'s subspace using only samples. We demonstrate the applicability of this method by showing predictive improvements on single-cell data tasks.
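To make the role of the proximal operator concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how an elastic cost with a subspace regularizer such as $\tau_A$ shapes displacements. It assumes an illustrative quadratic potential $f(x) = \tfrac{1}{2}\|x\|^2$, a regularization weight $\gamma$, and a random $A$ with orthonormal rows, and it relies only on the standard fact that, for a translation-invariant cost $\tfrac{1}{2}\|x-y\|^2 + \tau(x-y)$, an optimal map can be written as $T(x) = x - \mathrm{prox}_\tau(\nabla f(x))$ for a suitable potential $f$.

```python
# Minimal, illustrative sketch (not the authors' implementation) of how an
# elastic cost c(x, y) = 0.5 * ||x - y||^2 + tau(x - y) shapes Monge
# displacements through the proximal operator of tau.
# Assumptions (hypothetical, for illustration only): a quadratic potential
# f(x) = 0.5 * ||x||^2, a weight gamma on the regularizer, and a random A
# with orthonormal rows.  It uses the standard identity that, for such a
# translation-invariant cost, an optimal map can be written as
# T(x) = x - prox_tau(grad f(x)) for a suitable potential f.
import numpy as np

rng = np.random.default_rng(0)
d, p, gamma = 10, 2, 5.0

# Hypothetical subspace matrix A in R^{p x d} with orthonormal rows.
Q, _ = np.linalg.qr(rng.normal(size=(d, p)))
A = Q.T
P_par = A.T @ A                # projection onto the row space of A
P_perp = np.eye(d) - P_par     # projection onto its orthogonal complement


def prox_subspace(w: np.ndarray) -> np.ndarray:
    """Closed-form prox of tau_A(z) = gamma * ||A_perp z||_2^2.

    The component of w parallel to span(A) is kept; the orthogonal component
    is shrunk by 1 / (1 + 2 * gamma), so large gamma pushes displacements
    onto the subspace spanned by the rows of A.
    """
    return P_par @ w + (P_perp @ w) / (1.0 + 2.0 * gamma)


def grad_f(x: np.ndarray) -> np.ndarray:
    """Gradient of the hypothetical potential f(x) = 0.5 * ||x||^2."""
    return x


def monge_map(x: np.ndarray) -> np.ndarray:
    # The displacement T(x) - x is (minus) a proximal step on grad f(x).
    return x - prox_subspace(grad_f(x))


x = rng.normal(size=d)
disp = monge_map(x) - x
print("orthogonal part:", np.linalg.norm(P_perp @ disp))  # shrunk by 1/(1+2*gamma)
print("parallel part:  ", np.linalg.norm(P_par @ disp))
```

As the hypothetical weight $\gamma$ grows, the orthogonal component of the displacement is damped by $1/(1 + 2\gamma)$, so displacements concentrate on the subspace spanned by the rows of $A$, which is the structure the abstract's regularizer is meant to promote.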
