Poster

Canonical Capsules: Self-Supervised Capsules in Canonical Pose

Weiwei Sun · Andrea Tagliasacchi · Boyang Deng · Sara Sabour · Soroosh Yazdani · Geoffrey Hinton · Kwang Moo Yi

Keywords: [ Deep Learning ] [ Machine Learning ] [ Representation Learning ]


Abstract:

We propose a self-supervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. To train our neural network we require neither classification labels nor manually-aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
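Below is a minimal sketch (not the authors' implementation) of the two ideas the abstract describes: attention masks over a point cloud are aggregated into per-capsule keypoints, and a pair of randomly rotated copies of the same object supervises the decomposition so that keypoints are rotation-equivariant. The function and variable names (attn_fn, capsule_keypoints, K, etc.) are illustrative assumptions, and attn_fn stands in for the paper's permutation-equivariant attention network.

```python
import torch

def capsule_keypoints(points, attention):
    """points: (N, 3) point cloud; attention: (K, N) per-capsule weights.
    Returns (K, 3) keypoints as attention-weighted centroids, which makes the
    aggregation permutation-invariant to the ordering of the input points."""
    weights = attention / attention.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return weights @ points  # (K, 3)

def random_rotation():
    """Sample a random 3D rotation via QR decomposition of a Gaussian matrix."""
    q, r = torch.linalg.qr(torch.randn(3, 3))
    q = q * torch.sign(torch.diagonal(r))  # make the decomposition unique
    if torch.det(q) < 0:                   # ensure a proper rotation (det = +1)
        q[:, 0] = -q[:, 0]
    return q

def equivariance_loss(points, attn_fn):
    """Self-supervision with a pair of randomly rotated copies of one object:
    keypoints predicted on the rotated cloud should match the rotated keypoints
    of the original cloud, encouraging a rotation-equivariant decomposition."""
    rot = random_rotation()
    rotated = points @ rot.T                       # rotate every point
    kp = capsule_keypoints(points, attn_fn(points))
    kp_rot = capsule_keypoints(rotated, attn_fn(rotated))
    return ((kp @ rot.T - kp_rot) ** 2).mean()
```

Training on such rotated pairs is what removes the need for manually-aligned datasets: consistency between the two copies, rather than an external canonical frame, supervises both the decomposition and the learned canonicalization.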
