Poster
Learning Elastic Costs to Shape Monge Displacements
Michal Klein · Aram-Alexandre Pooladian · Pierre Ablin · Eugene Ndiaye · Jonathan Niles-Weed · Marco Cuturi
West Ballroom A-D #7109
Abstract:
Given a source and a target probability measure, the Monge problem studies efficient ways to map the former onto the latter. This efficiency is quantified by defining a *cost* function between source and target data. Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell_2^2(x, y) := \tfrac{1}{2}\|x - y\|_2^2$. The benefits of using *elastic* costs, defined using a regularizer $\tau$ as $c(x, y) := \ell_2^2(x, y) + \tau(x - y)$, were recently highlighted in (Cuturi et al. 2023). Such costs shape the *displacements* of Monge maps $T$, namely the difference $T(x) - x$ between a source point $x$ and its image $T(x)$, by giving them a structure that matches that of the proximal operator of $\tau$. In this work, we make two important contributions to the study of elastic costs: *(i)* For any elastic cost, we propose a numerical method to compute Monge maps that are provably optimal. This provides a much-needed routine to create synthetic problems where the ground-truth OT map is known, by analogy to the Brenier theorem, which states that the gradient of any convex potential is always a valid Monge map for the $\ell_2^2$ cost; *(ii)* We propose a loss to *learn* the parameter $\theta$ of a parameterized regularizer $\tau_\theta$, and apply it in the case where $\tau_A(z) := \|A^{\perp} z\|_2^2$. This regularizer promotes displacements that lie on a low-dimensional subspace of $\mathbb{R}^d$, spanned by the rows of $A$. We illustrate the soundness of our procedure on synthetic data, generated using our first contribution, in which we show near-perfect recovery of $A$'s subspace using only samples. We demonstrate the applicability of this method by showing predictive improvements on single-cell data tasks.
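Below is a minimal JAX sketch of the elastic cost and the subspace regularizer $\tau_A$ defined above, assuming $A$ has orthonormal rows so that $\|A^{\perp} z\|_2^2$ can be computed by projecting out the row span of $A$. The function names are ours, and this is an illustration of the definitions rather than the authors' implementation.

```python
import jax
import jax.numpy as jnp

def tau_A(z, A):
    # Subspace regularizer tau_A(z) = ||A_perp z||_2^2: penalizes the part of
    # the displacement z that falls outside the row span of A (rows assumed
    # orthonormal, so A.T @ (A @ z) projects z onto that span).
    z_parallel = A.T @ (A @ z)
    return jnp.sum((z - z_parallel) ** 2)

def elastic_cost(x, y, A):
    # Elastic cost c(x, y) = 0.5 * ||x - y||_2^2 + tau_A(x - y).
    z = x - y
    return 0.5 * jnp.sum(z ** 2) + tau_A(z, A)

# Toy usage: a random p-dimensional subspace of R^d, with orthonormal rows.
d, p = 10, 2
Q, _ = jnp.linalg.qr(jax.random.normal(jax.random.PRNGKey(0), (d, p)))
A = Q.T  # p x d, orthonormal rows
x = jax.random.normal(jax.random.PRNGKey(1), (d,))
y = jax.random.normal(jax.random.PRNGKey(2), (d,))
print(elastic_cost(x, y, A))  # squared-Euclidean term plus off-subspace penalty
```

This matches the abstract's definition $c(x, y) := \ell_2^2(x, y) + \tau(x - y)$ with $\tau = \tau_A$; in practice a regularization strength would typically scale the penalty, which we omit here for simplicity.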