This paper introduces hyperspherical prototype networks, which unify classification and regression with prototypes on hyperspherical output spaces. For classification, a common approach is to define prototypes as the mean output vector over training examples per class. Here, we propose to use hyperspheres as output spaces, with class prototypes defined a priori with large margin separation. We position prototypes through data-independent optimization, with an extension to incorporate priors from class semantics. By doing so, we do not require any prototype updating, we can handle any training size, and the output dimensionality is no longer constrained to the number of classes. Furthermore, we generalize to regression, by optimizing outputs as an interpolation between two prototypes on the hypersphere. Since both tasks are now defined by the same loss function, they can be jointly trained for multi-task problems. Experimentally, we show the benefit of hyperspherical prototype networks for classification, regression, and their combination over other prototype methods, softmax cross-entropy, and mean squared error approaches.
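The data-independent prototype placement described above can be sketched as follows. This is a minimal illustration, not the paper's exact optimizer: it spreads class prototypes over the unit hypersphere by repeatedly pushing each prototype away from its most similar neighbour (reducing the maximum pairwise cosine similarity) and re-projecting onto the sphere. The function name and hyperparameters (`steps`, `lr`) are illustrative choices.

```python
import numpy as np

def position_prototypes(num_classes, dims, steps=1000, lr=0.1, seed=0):
    """Place class prototypes on the unit hypersphere with large-margin
    separation via a data-independent optimization: repeatedly step each
    prototype away from its nearest (most similar) neighbour, then
    re-normalize. A simplified sketch of the idea, not the paper's code."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((num_classes, dims))
    P /= np.linalg.norm(P, axis=1, keepdims=True)   # project onto sphere
    for _ in range(steps):
        sims = P @ P.T                   # pairwise cosine similarities
        np.fill_diagonal(sims, -2.0)     # mask self-similarity
        nearest = sims.argmax(axis=1)    # most similar other prototype
        # For unit vectors, the gradient of cos(p_i, p_j) w.r.t. p_i is
        # (up to projection) p_j, so stepping along -p_j pushes them apart.
        P -= lr * P[nearest]
        P /= np.linalg.norm(P, axis=1, keepdims=True)
    return P

# Example: 10 class prototypes in a 5-dimensional output space. Note the
# output dimensionality is decoupled from the number of classes.
prototypes = position_prototypes(num_classes=10, dims=5)
```

Because the prototypes are fixed before training, no prototype updates are needed during learning; the network only has to map inputs close (in cosine similarity) to their class prototype, and regression targets can be expressed as interpolations between two such fixed prototypes.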
Author Information
Pascal Mettes (University of Amsterdam)
Elise van der Pol (University of Amsterdam)
Cees Snoek (University of Amsterdam)
More from the Same Authors
- 2021: Equidistant Hyperspherical Prototypes Improve Uncertainty Quantification
  Gertjan Burghouts · Pascal Mettes
- 2022: Self-Contained Entity Discovery from Captioned Videos
  Melika Ayoughi · Paul Groth · Pascal Mettes
- 2022: Hyperbolic Image Segmentation
  Mina Ghadimi Atigh · Julian Schoep · Erman Acar · Nanne van Noord · Pascal Mettes
- 2022: Maximum Class Separation as Inductive Bias in One Matrix
  Tejaswi Kasarla · Gertjan Burghouts · Max van Spengler · Elise van der Pol · Rita Cucchiara · Pascal Mettes
- 2022: Fake It Until You Make It: Towards Accurate Near-Distribution Novelty Detection
  Hossein Mirzaei · Mohammadreza Salehi · Sajjad Shahabi · Efstratios Gavves · Cees Snoek · Mohammad Sabokrou · Mohammad Hossein Rohban
- 2022 Poster: Maximum Class Separation as Inductive Bias in One Matrix
  Tejaswi Kasarla · Gertjan Burghouts · Max van Spengler · Elise van der Pol · Rita Cucchiara · Pascal Mettes
- 2021 Workshop: Ecological Theory of Reinforcement Learning: How Does Task Design Influence Agent Learning?
  Manfred Díaz · Hiroki Furuta · Elise van der Pol · Lisa Lee · Shixiang (Shane) Gu · Pablo Samuel Castro · Simon Du · Marc Bellemare · Sergey Levine
- 2021 Poster: Hyperbolic Busemann Learning with Ideal Prototypes
  Mina Ghadimi Atigh · Martin Keller-Ressel · Pascal Mettes
- 2020 Poster: Learning to Learn Variational Semantic Memory
  Xiantong Zhen · Yingjun Du · Huan Xiong · Qiang Qiu · Cees Snoek · Ling Shao
- 2020 Poster: MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning
  Elise van der Pol · Daniel E Worrall · Herke van Hoof · Frans Oliehoek · Max Welling