
Retaining Knowledge for Learning with Dynamic Definition
Zichang Liu · Benjamin Coleman · Tianyi Zhang · Anshumali Shrivastava

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #103

Machine learning models are often deployed in settings where they must be constantly updated in response to changes in class definitions while retaining high accuracy on previously learned definitions. A classical use case is fraud detection, where new fraud schemes emerge one after another. While such an update can be accomplished by re-training on the complete data, the process is inefficient and prevents real-time and on-device learning. On the other hand, efficient methods that incrementally learn from new data often forget previously learned knowledge. We define this problem as Learning with Dynamic Definition (LDD) and demonstrate that popular models, such as the Vision Transformer and RoBERTa, exhibit substantial forgetting of past definitions. We present the first practical and provable solution to LDD. Our proposal is a hash-based sparse model, RIDDLE, that handles evolving definitions by associating samples only with relevant parameters. We prove that our model is a universal function approximator and theoretically bound the knowledge lost during the update process. On practical tasks with evolving class definitions in vision and natural language processing, RIDDLE outperforms baselines by up to 30% on the original dataset while providing competitive accuracy on the update dataset.
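The abstract's key mechanism is hash-based sparsity: each sample is routed, via hashing, to a small subset of parameters, so updating one class definition leaves most of the model untouched. Below is a minimal illustrative sketch of that general idea, not the paper's actual RIDDLE algorithm; the class name, bucket sizes, and routing scheme are all hypothetical choices made for this example.

```python
import numpy as np

class HashSparseLayer:
    """Toy hash-routed sparse layer (illustrative only, not RIDDLE).

    Each sample key is hashed to a few parameter "buckets"; forward
    passes and updates touch only those buckets, so parameters learned
    for other definitions are preserved by construction.
    """

    def __init__(self, num_buckets=64, bucket_dim=8, active_buckets=4, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(num_buckets, bucket_dim))
        self.num_buckets = num_buckets
        self.active_buckets = active_buckets

    def _route(self, key):
        # Hash the sample key into a fixed set of bucket indices.
        return [hash((key, i)) % self.num_buckets
                for i in range(self.active_buckets)]

    def forward(self, key, x):
        # Output depends only on the routed buckets' parameters.
        idx = self._route(key)
        return self.weights[idx] @ x

    def update(self, key, grad, lr=0.1):
        # A gradient step modifies only the routed buckets; all other
        # parameters (previously learned definitions) stay unchanged.
        idx = self._route(key)
        self.weights[idx] -= lr * grad


layer = HashSparseLayer()
before = layer.weights.copy()

# Simulate learning a new definition (e.g. a new fraud scheme).
idx = layer._route("fraud_scheme_v2")
layer.update("fraud_scheme_v2", np.ones((4, 8)))

# Every bucket outside the routed set is bit-for-bit identical,
# i.e. no forgetting of parameters tied to other definitions.
untouched = np.setdiff1d(np.arange(layer.num_buckets), idx)
assert np.allclose(layer.weights[untouched], before[untouched])
```

The point of the sketch is the locality guarantee: because the hash routing is deterministic, an update for one definition can only ever write to its own buckets, which is the intuition behind bounding knowledge loss during updates.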

Author Information

Zichang Liu (Rice University)
Benjamin Coleman (Rice University)
Tianyi Zhang (Rice University)
Anshumali Shrivastava (Rice University / ThirdAI Corp.)