

Poster in Workshop: Synthetic Data Generation with Generative AI

Learning to Place Objects into Scenes by Hallucinating Scenes around Objects

Lu Yuan · James Hong · Vishnu Sarukkai · Kayvon Fatahalian

Keywords: [ object placement ] [ image diffusion ] [ synthetic data ]


Abstract:

The ability to modify images by adding new objects to a scene stands to be a powerful image editing control, but it is not robustly supported by existing diffusion-based image editing methods. We design a two-step method for inserting objects of a given class into images: we first predict where the object is likely to go in the image, and then realistically inpaint the object at that location. The central challenge of our approach is predicting where an object should go in a scene, given only an image of the scene. We learn a prediction model entirely from synthetic data by using diffusion-based image outpainting to hallucinate novel scenes surrounding a given object. We demonstrate that this weakly supervised approach, which requires no human labels at all, generates more realistic object addition edits than prior text-controlled diffusion-based approaches. We also demonstrate that, for a limited set of object categories, our placement prediction model, despite being trained entirely on generated data, makes more accurate object placements than prior state-of-the-art placement models trained on a large, manually annotated dataset.
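Below is a minimal sketch of the two-step insertion pipeline described in the abstract, assuming a placement predictor followed by diffusion inpainting. The `predict_placement` function and its heuristic are hypothetical stand-ins for the learned placement model (which the paper trains on outpainted synthetic scenes), and the checkpoint name `stabilityai/stable-diffusion-2-inpainting` is an illustrative choice of an off-the-shelf inpainting model, not the authors' setup.

```python
# Sketch of the two-step object-insertion pipeline: (1) predict where an
# object of a given class should go, (2) inpaint it at that location.
# The placement step here is a placeholder heuristic, NOT the learned model
# from the paper; only the inpainting step uses a real diffusers pipeline.
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline


def predict_placement(scene: Image.Image, category: str) -> tuple[int, int, int, int]:
    """Step 1 (hypothetical): return a box (x0, y0, x1, y1) for the object.

    In the paper this prediction is made by a model trained entirely on
    synthetic scenes hallucinated around real objects via outpainting."""
    w, h = scene.size
    # Placeholder: a box in the lower-center of the image.
    return (w // 3, h // 2, 2 * w // 3, int(0.9 * h))


def insert_object(scene: Image.Image, category: str) -> Image.Image:
    """Step 2: inpaint an object of `category` at the predicted location."""
    x0, y0, x1, y1 = predict_placement(scene, category)

    # Inpainting mask covering only the predicted placement region.
    mask = Image.new("L", scene.size, 0)
    ImageDraw.Draw(mask).rectangle([x0, y0, x1, y1], fill=255)

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting",  # illustrative checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # The pipeline fills the masked region conditioned on the class prompt.
    result = pipe(
        prompt=f"a {category}",
        image=scene.resize((512, 512)),
        mask_image=mask.resize((512, 512)),
    ).images[0]
    return result


if __name__ == "__main__":
    scene = Image.open("scene.jpg").convert("RGB")
    edited = insert_object(scene, "dog")
    edited.save("scene_with_dog.jpg")
```

In the paper, the interesting part is how `predict_placement` is learned without human labels: outpainting is used to generate whole scenes around isolated objects, yielding (scene, placement) training pairs for free.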
