
Workshop: AI for Science: from Theory to Practice

Representation Learning for Spatial Multimodal Data Integration with Optimal Transport

Xinhao Liu · Benjamin Raphael


Spatial sequencing technologies have advanced rapidly in the past few years, and multiple molecular modalities of cells -- including mRNA expression and chromatin state -- can now be measured together with their spatial locations in tissue slices. To facilitate scientific discoveries from spatial multi-omics sequencing experiments, methods for integrating multimodal spatial data are critically needed. Here we define the problem of spatial multimodal integration as integrating multiple modalities from related tissue slices into a Common Coordinate Framework (CCF) and learning biologically meaningful representations for each spatial location in the CCF. We introduce a novel machine learning framework combining optimal transport and variational autoencoders to solve the spatial multimodal integration problem. Our method outperforms existing single-cell multi-omics integration methods that ignore spatial information, and allows researchers to analyze tissues comprehensively by integrating knowledge from spatial slices of multiple modalities.
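The abstract does not spell out the authors' algorithm, but its core ingredient, optimal transport between spots of related slices, can be illustrated with a minimal NumPy sketch. Below, entropic-regularized OT (Sinkhorn iterations) produces a soft matching between spots of two hypothetical slices based on an expression-dissimilarity cost; the variable names, toy data, and cost choice are assumptions for illustration, not the paper's actual formulation (which would also account for spatial structure and feed the matching into a variational autoencoder).

```python
import numpy as np

def sinkhorn(a, b, C, reg, n_iter=500):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b : marginal weights over spots of slice 1 and slice 2
    C    : cost matrix (dissimilarity between spots)
    reg  : entropic regularization strength
    Returns the transport plan T with row sums a and column sums b.
    """
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: soft-match 5 spots in slice 1 to 6 spots in slice 2
# using squared expression distance as the cost (illustrative only).
rng = np.random.default_rng(0)
F1 = rng.normal(size=(5, 10))     # hypothetical expression profiles, slice 1
F2 = rng.normal(size=(6, 10))     # hypothetical expression profiles, slice 2
C = ((F1[:, None, :] - F2[None, :, :]) ** 2).sum(-1)
C = C / C.max()                   # normalize cost for numerical stability
a = np.full(5, 1 / 5)             # uniform weight on slice-1 spots
b = np.full(6, 1 / 6)             # uniform weight on slice-2 spots
T = sinkhorn(a, b, C, reg=0.05)   # soft correspondence between slices
```

Each entry `T[i, j]` is the mass transported from spot `i` of slice 1 to spot `j` of slice 2; in an integration pipeline such a plan could be used to project spots from related slices into a shared coordinate frame before representation learning.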
