Poster

Point Cloud Completion with Pretrained Text-to-Image Diffusion Models

Yoni Kasten · Ohad Rahamim · Gal Chechik

Great Hall & Hall B1+B2 (level 1) #110
[ Project Page ]
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Point cloud data collected in real-world applications are often incomplete, because they are observed from partial viewpoints that capture only a specific perspective or angle, or because of occlusion and low sensor resolution. Existing completion approaches rely on datasets of specific predefined objects to guide the completion of incomplete, and possibly noisy, point clouds. However, these approaches perform poorly on Out-Of-Distribution (OOD) objects, which are either absent from the dataset or poorly represented in it. In recent years, the field of text-guided image generation has made significant progress, leading to major breakthroughs in text-guided shape generation. We describe an approach called SDS-Complete that uses a pre-trained text-to-image diffusion model and leverages the text semantics of a given incomplete point cloud of an object to obtain a complete surface representation. SDS-Complete can complete a variety of objects through test-time optimization, without requiring an expensive collection of 3D information. We evaluate SDS-Complete on incomplete scanned objects captured by real-world depth sensors and LiDAR scanners, and demonstrate that it is effective in handling objects that are typically absent from common datasets.
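The abstract summarizes the pipeline but does not spell out the optimization loop. For readers unfamiliar with Score Distillation Sampling (the "SDS" in SDS-Complete), below is a minimal, illustrative sketch of a single SDS term in PyTorch using Hugging Face's diffusers library. This is not the authors' implementation: the model id, prompt, timestep range, and the assumption of a differentiable renderer producing `rendered_image` are all ours, and classifier-free guidance is omitted for brevity.

```python
import torch
from diffusers import StableDiffusionPipeline

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
# The diffusion model stays frozen; only the surface representation is optimized.
pipe.unet.requires_grad_(False)
pipe.vae.requires_grad_(False)
pipe.text_encoder.requires_grad_(False)

# Encode the object's text description once (hypothetical prompt).
prompt = "a photo of a chair"
tokens = pipe.tokenizer(
    prompt,
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
).input_ids.to(device)
text_emb = pipe.text_encoder(tokens)[0]

def sds_loss(rendered_image):
    """One SDS term for a differentiably rendered view of the surface.

    rendered_image: (1, 3, 512, 512), values in [0, 1], carrying gradients
    from a differentiable renderer of the optimized surface representation.
    """
    # Map to Stable Diffusion's latent space; 0.18215 is SD's latent scale.
    latents = pipe.vae.encode(rendered_image * 2 - 1).latent_dist.sample()
    latents = latents * 0.18215

    # Sample a random diffusion timestep and add matching noise.
    t = torch.randint(20, 980, (1,), device=device)
    noise = torch.randn_like(latents)
    noisy = pipe.scheduler.add_noise(latents, noise, t)

    # The frozen diffusion model predicts the noise; no grad through the U-Net.
    with torch.no_grad():
        pred = pipe.unet(noisy, t, encoder_hidden_states=text_emb).sample

    # Standard SDS trick: (pred - noise) acts as the gradient on the latents,
    # while the U-Net itself is treated as a constant.
    grad = pred - noise
    return (grad.detach() * latents).sum()
```

In the full method, a term of this kind would be combined with a data-fidelity loss that anchors the optimized surface to the observed input points, so the completed shape stays consistent with the partial scan.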
