

Poster in Workshop: Mathematics of Modern Machine Learning (M3L)

Unraveling the Complexities of Simplicity Bias: Mitigating and Amplifying Factors

Xuchen Gong · Tianwen Fu


Abstract:

The success of neural networks depends on their generalization ability, yet Shah et al. conclude that an inherent bias toward simplistic features, known as Simplicity Bias, hurts generalization by preferring simple but noisy features over complex yet predictive ones. We aim to understand the scenarios in which simplicity bias occurs more severely and the factors that help mitigate its effects. We show that many traditional insights, such as increasing the training set size or the number of informative feature dimensions, are not as effective as balancing the modes of the data distribution, distorting the simplistic features, or even searching for a good initialization. Our empirical results reveal intriguing factors behind simplicity bias, and we call for future investigations toward a more thorough understanding of simplicity bias and its interplay with related fields.
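The core phenomenon can be illustrated with a minimal toy sketch (our own construction, not the authors' experimental setup, which follows Shah et al.'s synthetic-data methodology): a dataset contains a simple feature that predicts the label with 10% noise, plus a pair of features whose XOR reproduces the label exactly. A low-capacity learner, here a hand-rolled logistic regression, latches onto the noisy simple feature and assigns near-zero weight to the perfectly predictive complex pair, since neither coordinate of the pair carries any linear signal on its own.

```python
import math
import random

rng = random.Random(0)
n = 2000
data = []
for _ in range(n):
    y = rng.randint(0, 1)
    # Simple feature: the label with 10% of its values flipped.
    s = y if rng.random() >= 0.10 else 1 - y
    # Complex feature pair: a XOR b reproduces the label exactly,
    # but each coordinate alone carries no linear signal.
    a = rng.randint(0, 1)
    b = a ^ y
    x = [2 * s - 1, 2 * a - 1, 2 * b - 1]  # map {0,1} -> {-1,+1}
    data.append((x, y))

# Logistic regression trained by full-batch gradient descent.
w = [0.0, 0.0, 0.0]
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1 / (1 + math.exp(-z))  # predicted P(y = 1 | x)
        for i in range(3):
            grad[i] += (p - y) * x[i] / n
    for i in range(3):
        w[i] -= 1.0 * grad[i]

# The weight on the noisy simple feature dominates; the weights on
# the perfectly predictive (but nonlinear) pair stay near zero.
print([round(wi, 3) for wi in w])
```

In this sketch the linear model's preference is forced by its capacity; the paper's point is that overparameterized networks, which could represent the complex feature, still exhibit an analogous preference.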
