Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias, and many alternative solutions have therefore been proposed through differentiable optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly. This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable does not improve performance. The closed-form implementation and code to reproduce the experiments are available on GitHub.
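The recursion itself is not spelled out on this page, so the sketch below illustrates one standard way to build such a maximum-separation matrix: the columns of P form a regular simplex on the unit hypersphere, so every pair of class vectors is equally (and maximally) far apart. This is a minimal NumPy sketch consistent with the abstract, not the authors' released implementation; the function name simplex_matrix, the shapes, and the batch in the usage example are illustrative assumptions.

```python
import numpy as np

def simplex_matrix(num_classes: int) -> np.ndarray:
    """Return a (num_classes - 1) x num_classes matrix whose columns are unit
    vectors with pairwise cosine similarity -1/(num_classes - 1), i.e. the
    vertices of a regular simplex (maximum separation on the hypersphere)."""
    k = num_classes - 1  # embedding dimension
    if k == 1:
        # Two classes: opposite points on the line.
        return np.array([[1.0, -1.0]])
    # Recursive step: place one vertex at e_1 and embed the smaller simplex,
    # scaled and shifted, into the remaining coordinates.
    prev = simplex_matrix(num_classes - 1)                       # (k-1) x k
    top = np.concatenate(([1.0], np.full(k, -1.0 / k)))          # first row
    bottom = np.concatenate((np.zeros((k - 1, 1)),
                             np.sqrt(1.0 - 1.0 / k**2) * prev), axis=1)
    return np.vstack((top, bottom))                              # k x (k+1)

# Usage sketch: a network emits (num_classes - 1)-dimensional embeddings z;
# one fixed matrix multiplication turns them into num_classes logits for softmax.
P = simplex_matrix(10)            # computed in closed form, fixed during training
z = np.random.randn(4, 9)         # hypothetical batch of network outputs
logits = z @ P                    # shape (4, 10)

# Sanity check: unit-norm columns, all pairwise cosines equal -1/(num_classes - 1).
gram = P.T @ P
assert np.allclose(np.diag(gram), 1.0)
off_diag = gram[~np.eye(10, dtype=bool)]
assert np.allclose(off_diag, -1.0 / 9.0)
```

Under this reading, the matrix is computed once before training and kept fixed; only the mapping from inputs to the lower-dimensional embedding is learned, which is consistent with the abstract's claim of negligible engineering effort and computational overhead.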
Author Information
Tejaswi Kasarla (University of Amsterdam)
Gertjan Burghouts (TNO - Intelligent Imaging)
Max van Spengler (University of Amsterdam)
Elise van der Pol (Microsoft Research)
Rita Cucchiara (University of Modena and Reggio Emilia)
Pascal Mettes (University of Amsterdam)
More from the Same Authors
- 2021 : Equidistant Hyperspherical Prototypes Improve Uncertainty Quantification
  Gertjan Burghouts · Pascal Mettes
- 2022 : Self-Contained Entity Discovery from Captioned Videos
  Melika Ayoughi · Paul Groth · Pascal Mettes
- 2022 : Hyperbolic Image Segmentation
  Mina Ghadimi Atigh · Julian Schoep · Erman Acar · Nanne van Noord · Pascal Mettes
- 2022 : Maximum Class Separation as Inductive Bias in One Matrix
  Tejaswi Kasarla · Gertjan Burghouts · Max van Spengler · Elise van der Pol · Rita Cucchiara · Pascal Mettes
- 2022 : Unlocking Slot Attention by Changing Optimal Transport Costs
  Yan Zhang · David Zhang · Simon Lacoste-Julien · Gertjan Burghouts · Cees Snoek
- 2022 : Self-Guided Diffusion Model
  Tao Hu · David Zhang · Yuki Asano · Gertjan Burghouts · Cees Snoek
- 2022 Panel: Panel 2C-3: A permutation-free kernel… & Maximum Class Separation…
  Tejaswi Kasarla · Shubhanshu Shekhar
- 2022 Poster: Equivariant Networks for Zero-Shot Coordination
  Darius Muglich · Christian Schroeder de Witt · Elise van der Pol · Shimon Whiteson · Jakob Foerster
- 2022 : Contributed talk (Tejaswi Kasarla) - "Maximum Class Separation as Inductive Bias in One Matrix"
  Tejaswi Kasarla
- 2021 Workshop: Ecological Theory of Reinforcement Learning: How Does Task Design Influence Agent Learning?
  Manfred Díaz · Hiroki Furuta · Elise van der Pol · Lisa Lee · Shixiang (Shane) Gu · Pablo Samuel Castro · Simon Du · Marc Bellemare · Sergey Levine
- 2021 Poster: Independent Prototype Propagation for Zero-Shot Compositionality
  Frank Ruis · Gertjan Burghouts · Doina Bucur
- 2021 Poster: Hyperbolic Busemann Learning with Ideal Prototypes
  Mina Ghadimi Atigh · Martin Keller-Ressel · Pascal Mettes
- 2020 Poster: MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning
  Elise van der Pol · Daniel E Worrall · Herke van Hoof · Frans Oliehoek · Max Welling
- 2019 Poster: Hyperspherical Prototype Networks
  Pascal Mettes · Elise van der Pol · Cees Snoek
- 2007 Poster: The Distribution Family of Similarity Distances
  Gertjan Burghouts · Arnold Smeulders · Jan-Mark Geusebroek