Poster
SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
Fabian Fuchs · Daniel E Worrall · Volker Fischer · Max Welling

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1442

We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the input data. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying numbers of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
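To make the equivariance claim concrete: a model f is SE(3)-equivariant if applying a rotation R and translation t to the input point cloud yields a correspondingly transformed output. For type-1 (vector) features built from relative positions, as in the SE(3)-Transformer, this means f(Rx + t) = R f(x): the output rotates with the input and is invariant to translation. The minimal numpy sketch below is illustrative only; the check_se3_equivariance helper and the model argument are hypothetical stand-ins, not the authors' code.

    import numpy as np

    def random_rotation():
        # Draw a random proper 3D rotation via QR decomposition.
        q, r = np.linalg.qr(np.random.randn(3, 3))
        q = q * np.sign(np.diag(r))   # fix column signs for a consistent convention
        if np.linalg.det(q) < 0:      # ensure det = +1 (rotation, not reflection)
            q[:, 0] = -q[:, 0]
        return q

    def check_se3_equivariance(model, points, atol=1e-5):
        # model: maps an (N, 3) point cloud to (N, 3) vector (type-1) features.
        # Tests f(Rx + t) == R f(x): rotation-equivariant, translation-invariant.
        R = random_rotation()
        t = np.random.randn(3)
        lhs = model(points @ R.T + t)   # output on the transformed input
        rhs = model(points) @ R.T       # transformed output on the original input
        return np.allclose(lhs, rhs, atol=atol)

    # Example: mean-centering is a trivially SE(3)-equivariant map of this type.
    points = np.random.randn(16, 3)
    center = lambda x: x - x.mean(axis=0)
    assert check_se3_equivariance(center, points)

A non-equivariant baseline (e.g. an MLP applied to raw coordinates) would generally fail this check, which is exactly the robustness gap the paper's N-body rotation experiments measure.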

Author Information

Fabian Fuchs (University of Oxford)

I am a Research Scientist at DeepMind and part of their Science team. After an undergraduate degree in physics, I did my PhD at the Applied AI Lab (A2I), supervised by Professor Ingmar Posner. In 2020, I did a research sabbatical at the Bosch Center for Artificial Intelligence (BCAI), collaborating with Max Welling's lab at the University of Amsterdam.

Daniel E Worrall (Qualcomm)
Volker Fischer (Bosch Center for Artificial Intelligence)
Max Welling (University of Amsterdam / Qualcomm AI Research)
