

Geodesic Self-Attention for 3D Point Clouds

Zhengyu Li · Xuan Tang · Zihao Xu · Xihao Wang · Hui Yu · Mingsong Chen · Xian Wei

Keywords: [ Computer Vision ] [ Point Cloud ] [ Transformer ] [ Geodesic ] [ Attention ]


Due to its outstanding ability to capture long-range relationships, the self-attention mechanism has achieved remarkable progress on point cloud tasks. Nevertheless, point cloud objects often have complex non-Euclidean spatial structures, whose geometry varies in ways a flat metric cannot track. Most current self-attention modules rely heavily on dot products in Euclidean space, which cannot capture the internal non-Euclidean structure of point cloud objects, especially long-range relationships along the curves of the implicit manifold surface that a point cloud represents. To address this problem, we introduce a novel metric on the Riemannian manifold to capture the long-range geometric dependencies of point cloud objects, replacing the traditional self-attention module with a Geodesic Self-Attention (GSA) module. Our approach achieves state-of-the-art performance compared to point cloud Transformers on object classification, few-shot classification, and part segmentation benchmarks.
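The abstract does not spell out the GSA formulation, but the core idea — weighting attention by distance along the manifold rather than by Euclidean dot products — can be illustrated with a minimal sketch. Here the geodesic distance is approximated by shortest paths on a k-nearest-neighbour graph of the points (a standard approximation, not necessarily the paper's construction), and the function name, `k`, and temperature `tau` are all hypothetical:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def geodesic_attention(points, values, k=4, tau=1.0):
    """Illustrative sketch: attention weights derived from approximate
    geodesic distances on a k-NN graph of the point cloud.
    points: (n, d) coordinates; values: (n, c) per-point features."""
    n = points.shape[0]
    d = cdist(points, points)  # pairwise Euclidean distances

    # Build the k-NN graph: keep only the k nearest neighbours per point,
    # marking all other pairs as non-edges (np.inf).
    graph = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 1]  # skip self at position 0
        graph[i, idx] = d[i, idx]

    # Shortest paths along the graph approximate geodesic distances
    # along the implicit manifold surface.
    geo = shortest_path(graph, directed=False)

    # Closer along the manifold -> larger attention logit.
    logits = -geo / tau
    logits[np.isinf(geo)] = -1e9  # disconnected pairs get ~zero weight

    # Row-wise softmax, then aggregate the value features.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ values
```

Under this approximation, two points that are near in Euclidean space but far along the surface (e.g. opposite sides of a thin slab) receive a small mutual weight, which is exactly the behaviour dot-product attention in Euclidean space cannot express.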
