Poster
Novel Object Synthesis via Adaptive Text-Image Harmony
Zeren Xiong · Zedong Zhang · Zikun Chen · Shuo Chen · Xiang Li · Gan Sun · Jian Yang · Jun Li
East Exhibit Hall A-C #1608
In this paper, we study an object synthesis task that combines an object text with an object image to create a new object image. However, most diffusion models struggle with this task due to the imbalance between the text and image inputs. To address this issue, we propose a simple yet effective method called Adaptive Text-Image Harmony (ATIH) for generating novel and surprising objects. First, we introduce a scale factor to balance text and image features in cross-attention, and an injection step to preserve image information in self-attention, during the text-image inversion diffusion process. Second, to adaptively adjust these two parameters, we present a novel similarity score function that not only maximizes the similarities between the generated object image and the input text/image but also balances these similarities to harmonize text and image integration. Third, to better integrate object text and image, we design a balanced loss function with a noise parameter, ensuring both optimal editability and fidelity of the object image. Extensive experiments demonstrate the effectiveness of our approach, showcasing remarkable object creations such as a dog-lobster.
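The abstract does not give the exact form of the similarity score, so the snippet below is only a minimal sketch of the balancing idea it describes: reward similarity of the generated image to both the input text and the input image, while penalizing the gap between the two so that neither modality dominates. The function name, the `gap_weight` parameter, and the specific sum-minus-gap form are illustrative assumptions, not the paper's formula; the embeddings could come from any shared or paired encoder such as CLIP.

```python
# Hypothetical sketch of a balanced text/image similarity score in the
# spirit of ATIH. Not the paper's exact function; all names and the
# sum-minus-gap form are assumptions for illustration.

import torch
import torch.nn.functional as F


def balanced_similarity(gen_img_emb: torch.Tensor,
                        text_emb: torch.Tensor,
                        src_img_emb: torch.Tensor,
                        gap_weight: float = 1.0) -> torch.Tensor:
    """Score a generated image against the input object text and object image.

    Inputs are embedding vectors (optionally batched) from any encoder that
    allows text-image and image-image comparison, e.g. CLIP. They are
    L2-normalized here so dot products are cosine similarities.
    """
    g = F.normalize(gen_img_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    s = F.normalize(src_img_emb, dim=-1)

    sim_text = (g * t).sum(-1)   # generated image vs. object text
    sim_img = (g * s).sum(-1)    # generated image vs. object image

    # Reward both similarities, penalize their imbalance so the result
    # is neither text-dominated nor a near-copy of the input image.
    return sim_text + sim_img - gap_weight * (sim_text - sim_img).abs()
```

One plausible use, again an assumption rather than the paper's stated algorithm, is to evaluate this score over candidate values of the scale factor and injection step and keep the setting with the highest score, giving the adaptive adjustment the abstract refers to.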