Hyperbolic Feature Augmentation via Distribution Estimation and Infinite Sampling on Manifolds

Zhi Gao · Yuwei Wu · Yunde Jia · Mehrtash Harandi

Hall J (level 1) #418

Keywords: [ Infinite Augmentation ] [ Feature Augmentation ] [ Distribution Estimation ] [ Neural ODE ] [ Hyperbolic Space ]


Learning in hyperbolic spaces has recently attracted growing attention owing to their capacity to capture hierarchical structures in data. However, existing learning algorithms in hyperbolic space tend to overfit when given limited data. In this paper, we propose a hyperbolic feature augmentation method that generates diverse and discriminative features in hyperbolic space to combat overfitting. We model augmented features with a wrapped hyperbolic normal distribution, and estimate the distribution with a neural ordinary differential equation module trained via meta-learning, which reduces the estimation bias caused by data scarcity. We also derive an upper bound on the augmentation loss, which enables training a hyperbolic model with an infinite number of augmentations. Experiments on few-shot learning and continual learning tasks show that our method significantly improves the performance of hyperbolic algorithms in scarce-data regimes.
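The wrapped normal distribution used to model augmented features can be sampled with the standard tangent-space construction: draw a Euclidean Gaussian in the tangent space at the manifold origin, parallel-transport it to the mean point, then push it onto the manifold via the exponential map. Below is a minimal NumPy sketch of this construction on the Lorentz (hyperboloid) model; the function names and parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def minkowski_dot(x, y):
    # Lorentzian inner product <x, y>_L = -x_0 y_0 + sum_i x_i y_i
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def exp_map(mu, u):
    # Exponential map at mu on the hyperboloid {x : <x, x>_L = -1, x_0 > 0},
    # for a tangent vector u at mu.
    norm_u = np.asarray(np.sqrt(np.clip(minkowski_dot(u, u), 1e-12, None)))[..., None]
    return np.cosh(norm_u) * mu + np.sinh(norm_u) * u / norm_u

def parallel_transport(mu0, mu, v):
    # Transport tangent vector v from T_{mu0} to T_{mu} along the geodesic.
    alpha = -minkowski_dot(mu0, mu)
    coef = np.asarray(minkowski_dot(mu - alpha * mu0, v) / (alpha + 1.0))[..., None]
    return v + coef * (mu0 + mu)

def sample_wrapped_normal(mu, sigma, n, rng):
    # Wrapped normal on the hyperboloid: isotropic Gaussian in the tangent
    # space at the origin, parallel-transported to mu, pushed through exp_map.
    dim = mu.shape[-1] - 1
    v = rng.normal(0.0, sigma, size=(n, dim))
    v_tilde = np.concatenate([np.zeros((n, 1)), v], axis=-1)  # tangent at origin
    origin = np.zeros_like(mu)
    origin[0] = 1.0
    return exp_map(mu, parallel_transport(origin, mu, v_tilde))

# Demo: all samples lie exactly on the manifold (<x, x>_L = -1).
rng = np.random.default_rng(0)
origin = np.array([1.0, 0.0, 0.0])
mu = exp_map(origin, np.array([0.0, 0.3, -0.2]))  # a mean away from the origin
samples = sample_wrapped_normal(mu, 0.1, 100, rng)
```

Because parallel transport preserves tangency and the exponential map keeps points on the hyperboloid, every sample satisfies the manifold constraint by construction, which is what makes this distribution convenient for generating augmented features directly in hyperbolic space.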
