Although recent point cloud analysis has achieved impressive progress, the paradigm of representation learning from a single modality is gradually meeting its bottleneck. In this work, we take a step towards more discriminative 3D point cloud representations using 2D images, which inherently contain richer appearance information, e.g., texture, color, and shade. Specifically, this paper introduces a simple but effective point cloud cross-modality training (PointCMT) strategy, which utilizes view-images, i.e., rendered or projected 2D images of the 3D object, to boost point cloud classification. In practice, to effectively acquire auxiliary knowledge from view-images, we develop a teacher-student framework and formulate cross-modal learning as a knowledge distillation problem. Through novel feature and classifier enhancement criteria, PointCMT eliminates the distribution discrepancy between different modalities and effectively avoids potential negative transfer. Notably, PointCMT improves the point-only representation without any architecture modification. Extensive experiments verify significant gains on various datasets with several backbones, e.g., equipped with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on two benchmarks, with 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN, respectively.
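The abstract formulates cross-modal learning as knowledge distillation, with the image branch acting as teacher and the point-cloud branch as student. As a hypothetical illustration only (not the paper's exact feature/classifier enhancement criteria), a minimal temperature-scaled distillation loss in the standard Hinton-style form can be sketched in pure Python; the function names and the temperature value are assumptions for this sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) between temperature-softened class distributions.

    The teacher (image branch) supplies soft targets; the student
    (point-cloud branch) is trained to match them. Multiplying by T^2
    keeps gradient magnitudes comparable to the hard-label loss.
    """
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

In practice such a term would be added to the student's cross-entropy loss on ground-truth labels; the higher the temperature, the more the teacher's inter-class similarity structure is exposed to the student.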
Author Information
Xu Yan (The Chinese University of Hong Kong, Shenzhen)
Heshen Zhan (The Chinese University of Hong Kong, Shenzhen)
Chaoda Zheng (The Chinese University of Hong Kong, Shenzhen)
Jiantao Gao (Shanghai University)
Ruimao Zhang (The Chinese University of Hong Kong, Shenzhen)
Shuguang Cui (The Chinese University of Hong Kong, Shenzhen)
Zhen Li (The Chinese University of Hong Kong, Shenzhen)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Poster: Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis »
More from the Same Authors
-
2022 Spotlight: Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning »
Ziyi Zhang · Weikai Chen · Hui Cheng · Zhen Li · Siyuan Li · Liang Lin · Guanbin Li
-
2022 Spotlight: Lightning Talks 3A-3 »
Xu Yan · Zheng Dong · Qiancheng Fu · Jing Tan · Hezhen Hu · Fukun Yin · Weilun Wang · Ke Xu · Heshen Zhan · Wen Liu · Qingshan Xu · Xiaotong Zhao · Chaoda Zheng · Ziheng Duan · Zilong Huang · Xintian Shi · Wengang Zhou · Yew Soon Ong · Pei Cheng · Hujun Bao · Houqiang Li · Wenbing Tao · Jiantao Gao · Bin Kang · Weiwei Xu · Limin Wang · Ruimao Zhang · Tao Chen · Gang Yu · Rynson Lau · Shuguang Cui · Zhen Li
-
2022 Poster: Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning »
Ziyi Zhang · Weikai Chen · Hui Cheng · Zhen Li · Siyuan Li · Liang Lin · Guanbin Li
-
2022 Poster: AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation »
Yuanfeng Ji · Haotian Bai · Chongjian GE · Jie Yang · Ye Zhu · Ruimao Zhang · Zhen Li · Lingyan Zhanng · Wanling Ma · Xiang Wan · Ping Luo
-
2020 Poster: Skeleton-bridged Point Completion: From Global Inference to Local Adjustment »
Yinyu Nie · Yiqun Lin · Xiaoguang Han · Shihui Guo · Jian Chang · Shuguang Cui · Jian J. Zhang
-
2018 Poster: Fast Similarity Search via Optimal Sparse Lifting »
Wenye Li · Jingwei Mao · Yin Zhang · Shuguang Cui