
Spotlight Poster

DreamSim: Learning New Dimensions of Human Visual Similarity using Synthetic Data

Stephanie Fu · Netanel Tamir · Shobhita Sundaram · Lucy Chai · Richard Zhang · Tali Dekel · Phillip Isola

Great Hall & Hall B1+B2 (level 1) #127
[ Project Page ] [ Paper ] [ Poster ] [ OpenReview ]
Wed 13 Dec 8:45 a.m. PST — 10:45 a.m. PST


Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object pose, and semantic content. In this paper, we develop a perceptual metric that assesses images holistically. Our first step is to collect a new dataset of human similarity judgments over image pairs that are alike in diverse ways. Critical to this dataset is that judgments are nearly automatic and shared by all observers. To achieve this we use recent text-to-image models to create synthetic pairs that are perturbed along various dimensions. We observe that popular perceptual metrics fall short of explaining our new data, and we introduce a new metric, DreamSim, tuned to better align with human perception. We analyze how our metric is affected by different visual attributes, and find that it focuses heavily on foreground objects and semantic content while also being sensitive to color and layout. Notably, despite being trained on synthetic data, our metric generalizes to real images, giving strong results on retrieval and reconstruction tasks. Furthermore, our metric outperforms both prior learned metrics and recent large vision models on these tasks. Our project page:
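To make the metric concrete, below is a minimal usage sketch in Python based on the publicly released dreamsim package (pip install dreamsim). It assumes the package exposes a dreamsim(pretrained=True) entry point returning a model together with a matching preprocessing function, as described in the project's release; treat the exact names and signatures as assumptions if the API has changed.

import torch
from PIL import Image
from dreamsim import dreamsim  # assumed entry point from the released package

# Load the pretrained metric and its matching image preprocessing.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = dreamsim(pretrained=True, device=device)

# Preprocess two images; preprocess() is assumed to return a batched tensor.
img_a = preprocess(Image.open("image_a.png")).to(device)
img_b = preprocess(Image.open("image_b.png")).to(device)

# A lower distance means the pair is more perceptually similar.
with torch.no_grad():
    distance = model(img_a, img_b)
print(f"DreamSim distance: {distance.item():.4f}")

For retrieval-style use, per-image embeddings (e.g., via a model.embed(...) call, if the release provides one) can be precomputed for a gallery and ranked by distance to a query embedding, avoiding a full pairwise pass over the dataset.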
