

Poster

LESS: Label-Efficient and Single-Stage Referring 3D Instance Segmentation

Xuexun Liu · Xiaoxu Xu · Jinlong Li · Qiudan Zhang · Xu Wang · Lin Ma · Nicu Sebe

East Exhibit Hall A-C #1710
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Referring 3D instance segmentation is a vision-language task that segments all points of the object specified by a natural-language query from a 3D point cloud. Previous works follow a two-stage paradigm: they first conduct language-agnostic instance segmentation and then match the resulting instances with the given text query. However, the semantic concepts from the text query and the visual cues interact only separately during training, and both instance and semantic labels are required for every object, which is time-consuming and labor-intensive. To mitigate these issues, we propose a novel referring 3D instance segmentation pipeline that is Label-Efficient and Single-Stage, dubbed LESS, supervised only by binary masks. Specifically, we design a Point-Word Cross-Modal Alignment module to align fine-grained point features with textual embeddings. A Query Mask Predictor module and a Sentence Query Alignment module are introduced for coarse-grained alignment between masks and the query. Furthermore, we propose an area regularization loss that coarsely suppresses irrelevant background predictions at a large scale. In addition, a point-to-point contrastive loss is proposed to distinguish points with subtly similar features. Through extensive experiments, we achieve state-of-the-art performance on the ScanRefer dataset, surpassing previous methods by about 3.7% mIoU while using only binary labels.
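
The abstract does not give the exact formulations of the area regularization loss or the point-to-point contrastive loss. The sketch below is a minimal PyTorch illustration under assumed forms: the area term is taken as the mean predicted foreground probability (discouraging large background activations), and the contrastive term as an InfoNCE-style loss over per-point features with the binary mask defining positive pairs. Function names, shapes, and the temperature parameter are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def area_regularization_loss(mask_logits: torch.Tensor) -> torch.Tensor:
    """Assumed form: mean sigmoid probability over all points.

    Penalizing the total predicted foreground area coarsely suppresses
    irrelevant background predictions. mask_logits: (N,) per-point logits.
    """
    return torch.sigmoid(mask_logits).mean()


def point_contrastive_loss(features: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Assumed InfoNCE-style point-to-point contrastive loss.

    features: (N, D) per-point embeddings; labels: (N,) binary mask
    (1 = referred object). Points sharing a label are pulled together,
    points with different labels are pushed apart.
    """
    feats = F.normalize(features, dim=-1)                 # (N, D), unit-norm
    sim = feats @ feats.t() / temperature                 # (N, N) similarities
    # Positive pairs share a label; exclude self-pairs on the diagonal.
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)
    # Numerically stable log-softmax over each row, excluding the diagonal.
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_sim = torch.exp(logits)
    exp_sim = exp_sim * (1 - torch.eye(len(labels), device=labels.device))
    denom = exp_sim.sum(dim=1, keepdim=True).clamp_min(1e-8)
    log_prob = logits - torch.log(denom)
    pos_count = pos_mask.sum(dim=1).clamp_min(1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()


if __name__ == "__main__":
    # Toy example: 6 points with 16-dim features, 3 belonging to the referred object.
    feats = torch.randn(6, 16)
    labels = torch.tensor([1, 1, 1, 0, 0, 0])
    mask_logits = torch.randn(6)
    print(area_regularization_loss(mask_logits).item())
    print(point_contrastive_loss(feats, labels).item())
```

In practice these two terms would be weighted and added to the binary-mask supervision loss; the weights and the exact pairing strategy are design choices not specified in the abstract.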
