
Workshop: UniReps: Unifying Representations in Neural Models

On the Direct Alignment of Latent Spaces

Zorah Lähner · Michael Moeller

Fri 15 Dec 6:15 a.m. PST — 3:15 p.m. PST


With the wide adoption of deep learning and pre-trained models rises the question of how to effectively reuse existing latent spaces for new applications. One important question is how the geometry of the latent space changes between different training runs of the same architecture, and between different architectures trained for the same task. Previous works proposed that the latent spaces for similar tasks are approximately isometric. However, in this work we show that methods restricted to this assumption perform worse than simply using a linear transformation to align the latent spaces. We propose directly computing a transformation between the latent codes of different architectures, which is more efficient than previous approaches and flexible with respect to the type of transformation used. Our experiments show that aligning the latent spaces with a linear transformation performs best while requiring no additional prior knowledge.
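The contrast drawn in the abstract can be illustrated with a minimal sketch: given paired latent codes of the same inputs from two models, an unconstrained linear alignment is a least-squares problem, while the isometry assumption corresponds to an orthogonal (Procrustes) map. All names and data below are illustrative, not the authors' implementation; the synthetic "target" space is constructed with a general (non-orthogonal) linear map, so the unconstrained fit is expected to do better.

```python
import numpy as np

# Hypothetical paired latent codes: rows are samples encoded by two
# different models A and B (names and shapes are illustrative).
rng = np.random.default_rng(0)
Z_source = rng.normal(size=(200, 16))   # latent codes from model A
W_true = rng.normal(size=(16, 16))      # unknown general linear map
Z_target = Z_source @ W_true            # latent codes from model B

# Unconstrained linear alignment: solve min_W ||Z_source W - Z_target||_F
# directly by least squares; no isometry assumption is needed.
W_lin, *_ = np.linalg.lstsq(Z_source, Z_target, rcond=None)

# Isometry-restricted alternative (orthogonal Procrustes):
# the best orthogonal map is U V^T from the SVD of Z_source^T Z_target.
U, _, Vt = np.linalg.svd(Z_source.T @ Z_target)
W_orth = U @ Vt

err_linear = np.linalg.norm(Z_source @ W_lin - Z_target)
err_orth = np.linalg.norm(Z_source @ W_orth - Z_target)
```

On such data the unconstrained linear map recovers the ground-truth transformation essentially exactly, whereas the orthogonal map cannot, mirroring the abstract's claim that isometry-restricted methods underperform a plain linear alignment.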
