

Poster

Curriculum Learning by Dynamic Instance Hardness

Tianyi Zhou · Shengjie Wang · Jeffrey A Bilmes

Poster Session 5 #1333

Abstract:

A good teacher can adjust a curriculum based on students' learning history. By analogy, in this paper, we study the dynamics of a deep neural network's (DNN) performance on individual samples during its learning process. The observed properties allow us to develop an adaptive curriculum that leads to faster learning of more accurate models. We introduce dynamic instance hardness (DIH), the exponential moving average of a sample's instantaneous hardness (e.g., a loss, or a change in output) over the training history. A low DIH indicates that a model retains knowledge about a sample over time. For DNNs, we find that a sample's DIH early in training predicts its DIH in later stages. Hence, we can train a model mostly using samples with higher DIH and safely deprioritize those with lower DIH. This motivates a DIH-guided curriculum learning (DIHCL) procedure. Compared to existing CL methods: (1) DIH is more stable over time than instantaneous hardness alone, which is noisy due to stochastic training and the DNN's non-smoothness; (2) DIHCL is computationally inexpensive since it uses only a byproduct of back-propagation and thus does not require extra inference. On 11 datasets, DIHCL significantly outperforms random mini-batch SGD and recent CL methods in terms of efficiency and final performance. The code of DIHCL is available at https://github.com/tianyizhou/DIHCL.
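The core mechanism described in the abstract (an exponential moving average of per-sample hardness, used to prioritize training samples) can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact algorithm: the function names (`update_dih`, `select_curriculum`), the softmax-style sampling, and the decay parameter `gamma` are illustrative assumptions; per-sample losses are assumed to be available as a byproduct of the forward/backward pass, as the abstract notes.

```python
import numpy as np

def update_dih(dih, sample_ids, instant_hardness, gamma=0.9):
    """Exponentially averaged hardness: DIH <- gamma * DIH + (1 - gamma) * instantaneous hardness."""
    dih[sample_ids] = gamma * dih[sample_ids] + (1 - gamma) * instant_hardness
    return dih

def select_curriculum(dih, k, temperature=1.0, rng=None):
    """Draw k sample indices with probability increasing in their DIH (high-DIH samples prioritized)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.exp(dih / temperature)
    probs /= probs.sum()
    return rng.choice(len(dih), size=k, replace=False, p=probs)

# Toy usage: 1000 samples, train each round on the 256 currently hardest ones.
n = 1000
dih = np.zeros(n)
for epoch in range(5):
    chosen = select_curriculum(dih, k=256)
    # In practice these losses come from the model's forward pass on `chosen`;
    # random values are used here purely for illustration.
    losses = np.abs(np.random.randn(len(chosen)))
    dih = update_dih(dih, chosen, losses)
```

Because the hardness signal is smoothed over training history rather than recomputed from scratch, the selection is cheap and more stable than ranking samples by a single noisy loss value, which is the intuition the abstract emphasizes.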
