Poster

Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis

Xu Yan · Heshen Zhan · Chaoda Zheng · Jiantao Gao · Ruimao Zhang · Shuguang Cui · Zhen Li

[ Abstract ]
[ Slides ] [ Poster ] [ OpenReview ]
 
Spotlight presentation: Lightning Talks 3A-3
Wed 7 Dec 10 a.m. PST — 10:15 a.m. PST

Abstract:

Although recent point cloud analysis has achieved impressive progress, the paradigm of representation learning from a single modality is gradually reaching its bottleneck. In this work, we take a step towards more discriminative 3D point cloud representations using 2D images, which inherently contain richer appearance information, e.g., texture, color, and shade. Specifically, this paper introduces a simple but effective point cloud cross-modal training (PointCMT) strategy, which utilizes view-images, i.e., rendered or projected 2D images of the 3D object, to boost point cloud classification. In practice, to effectively acquire auxiliary knowledge from view-images, we develop a teacher-student framework and formulate the cross-modal learning as a knowledge distillation problem. Through novel feature and classifier enhancement criteria, PointCMT eliminates the distribution discrepancy between different modalities and effectively avoids potential negative transfer. Notably, PointCMT improves the point-only representation without any architecture modification. Extensive experiments verify significant gains on various datasets with several backbones: equipped with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on two benchmarks, i.e., 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN, respectively.
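The teacher-student formulation above can be sketched as a standard distillation objective: a KL term transferring the image teacher's softened class distribution to the point student (classifier-level transfer), plus a feature-alignment term (feature-level transfer). This is a minimal illustrative sketch, not the paper's exact enhancement criteria; all function names and hyperparameters (`T`, `alpha`) are assumptions for illustration.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of class logits.
    z = [v / T for v in logits]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_modal_distill_loss(student_logits, teacher_logits,
                             student_feat, teacher_feat,
                             T=4.0, alpha=0.5):
    """Toy cross-modal distillation loss (illustrative only):
    KL(teacher || student) on temperature-softened class distributions,
    plus mean squared error aligning point features to image features."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log((pt + 1e-12) / (ps + 1e-12))
             for pt, ps in zip(p_t, p_s))
    mse = sum((a - b) ** 2
              for a, b in zip(student_feat, teacher_feat)) / len(student_feat)
    # T**2 rescales KL gradients, as is conventional in distillation.
    return alpha * (T ** 2) * kl + (1 - alpha) * mse
```

At training time only the point-cloud student is kept, so (as the abstract notes) the deployed architecture is unchanged.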
