

Spotlight

MACK: Multimodal Aligned Conceptual Knowledge for Unpaired Image-text Matching

Yan Huang · Yuming Wang · Yunan Zeng · Liang Wang


Abstract:

Recently, the accuracy of image-text matching has been greatly improved by multimodal pretrained models, all of which are trained on millions or billions of paired images and texts. In contrast, this paper studies a new scenario, unpaired image-text matching, in which paired images and texts are assumed to be unavailable during model training. To deal with this, we propose a simple yet effective method named Multimodal Aligned Conceptual Knowledge (MACK), inspired by how knowledge is used in the human brain. It can be used directly as general knowledge to correlate images and texts even without model training, or further fine-tuned on unpaired images and texts to generalize better to specific datasets. In addition, we extend it into a re-ranking method that can easily be combined with existing image-text matching models to substantially improve their performance.
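The sketch below illustrates one way the re-ranking idea described above could be wired up. It is a minimal sketch under assumptions, not the paper's exact formulation: it assumes MACK yields concept-level activation vectors for image regions and words in a shared concept space, and that re-ranking combines the MACK score with a base model's score via a simple weighted sum. The function names, aggregation rule, and mixing weight `alpha` are illustrative placeholders.

```python
import numpy as np

def mack_similarity(image_concepts, text_concepts):
    """Score an image-text pair from concept-level activations.

    image_concepts: (num_regions, num_concepts) activation matrix
    text_concepts:  (num_words,   num_concepts) activation matrix
    Both inputs are hypothetical outputs of the aligned conceptual knowledge.
    """
    # Cosine similarity between every region and every word in concept space.
    img = image_concepts / (np.linalg.norm(image_concepts, axis=1, keepdims=True) + 1e-8)
    txt = text_concepts / (np.linalg.norm(text_concepts, axis=1, keepdims=True) + 1e-8)
    region_word_sim = img @ txt.T                  # (num_regions, num_words)
    # Aggregate: best-matching region for each word, averaged over words.
    return region_word_sim.max(axis=0).mean()

def rerank(base_scores, mack_scores, alpha=0.5):
    """Combine an existing model's scores with MACK scores for re-ranking.

    alpha is an assumed mixing weight; the paper's combination rule may differ.
    base_scores, mack_scores: (num_images, num_texts) similarity matrices.
    """
    return alpha * base_scores + (1.0 - alpha) * mack_scores
```

In this sketch, the pretrained matching model produces `base_scores`, MACK produces `mack_scores` without any paired supervision, and the re-ranked similarities are used to reorder each query's candidate list.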
