

Poster

GIFT: Learning Transformation-Invariant Dense Visual Descriptors via Group CNNs

Yuan Liu · Zehong Shen · Zhixuan Lin · Sida Peng · Hujun Bao · Xiaowei Zhou

East Exhibition Hall B + C #90

Keywords: [ Robotics ] [ Algorithms -> Representation Learning ] [ Applications -> Computer Vision ] [ Applications ]


Abstract:

Finding local correspondences between images with different viewpoints requires local descriptors that are robust against geometric transformations. One approach to achieving transformation invariance is to integrate out the transformations by pooling the features extracted from transformed versions of an image. However, such feature pooling may sacrifice the distinctiveness of the resulting descriptors. In this paper, we introduce a novel visual descriptor named Group Invariant Feature Transform (GIFT), which is both discriminative and robust to geometric transformations. The key idea is that the features extracted from the transformed versions of an image can be viewed as a function defined on the group of the transformations. Instead of feature pooling, we use group convolutions to exploit the underlying structure of the extracted features on the group, resulting in descriptors that are both discriminative and provably invariant to the group of transformations. Extensive experiments show that GIFT outperforms state-of-the-art methods on several benchmark datasets and practically improves the performance of relative pose estimation.
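To make the key idea concrete, the sketch below shows how features extracted from transformed copies of an image can be treated as a function on a cyclic transformation group and processed with group (circular) convolutions rather than pooled away. This is a minimal illustration, not the authors' implementation: the backbone, layer sizes, the name GroupConvDescriptor, and the final averaging step are all assumptions made for the example; the actual GIFT architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupConvDescriptor(nn.Module):
    """Sketch: per-keypoint descriptors from features defined on a cyclic group
    of transformations (e.g. 8 in-plane rotations), processed with group
    convolutions along the group axis instead of being averaged immediately."""

    def __init__(self, feat_dim=32, group_size=8, out_dim=64):
        super().__init__()
        self.group_size = group_size
        # 1-D convolutions along the group axis; circular padding below makes
        # them convolutions on the cyclic group, so the features stay equivariant.
        self.gconv1 = nn.Conv1d(feat_dim, out_dim, kernel_size=3, padding=0)
        self.gconv2 = nn.Conv1d(out_dim, out_dim, kernel_size=3, padding=0)

    def forward(self, group_feats):
        # group_feats: (N, G, C) -- for each of N keypoints, one C-dim feature
        # per group element, e.g. sampled from a CNN run on each rotated image.
        n, g, c = group_feats.shape
        assert g == self.group_size
        x = group_feats.permute(0, 2, 1)          # (N, C, G)
        x = F.pad(x, (1, 1), mode="circular")     # respect the cyclic structure
        x = F.relu(self.gconv1(x))
        x = F.pad(x, (1, 1), mode="circular")
        x = F.relu(self.gconv2(x))                # equivariant map on the group
        # Only after the group structure has been exploited do we reduce over
        # the group axis, yielding a transformation-invariant descriptor.
        desc = x.mean(dim=2)                      # (N, out_dim)
        return F.normalize(desc, dim=1)

# Hypothetical usage: `group_feats` would be built by transforming the image
# G times, running a feature CNN on each copy, and sampling at the keypoints.
desc_head = GroupConvDescriptor(feat_dim=32, group_size=8, out_dim=64)
dummy_group_feats = torch.randn(100, 8, 32)       # 100 keypoints, 8 rotations
descriptors = desc_head(dummy_group_feats)        # (100, 64), L2-normalized
```

Because a transformation of the input image acts on `group_feats` as a (cyclic) shift along the group axis, the circular convolutions commute with that shift and the final reduction over the group axis removes it, which is the sense in which the resulting descriptor is invariant to the group of transformations.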
