Poster

Generative vs. Discriminative: Rethinking The Meta-Continual Learning

Mohammadamin Banayeeanzade · Rasoul Mirzaiezadeh · Hosein Hasani · Mahdieh Soleymani

Keywords: [ Meta Learning ] [ Continual Learning ] [ Generative Model ] [ Deep Learning ]


Abstract:

Deep neural networks have achieved human-level capabilities in various learning tasks. However, their performance generally degrades in more realistic scenarios such as continual learning. In contrast, humans can incorporate prior knowledge to learn new concepts efficiently without forgetting older ones. In this work, we leverage meta-learning to encourage the model to learn how to learn continually. Inspired by human concept learning, we develop a generative classifier that efficiently uses data-driven experience to learn new concepts, even from only a few samples, while remaining immune to forgetting. Along with cognitive and theoretical insights, extensive experiments on standard benchmarks demonstrate the effectiveness of the proposed method. The ability to remember all previous concepts, with negligible computational and structural overhead, suggests that generative models provide a natural way to alleviate catastrophic forgetting, a major drawback of discriminative models.
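To make the generative-versus-discriminative contrast concrete, below is a minimal, hypothetical sketch (not the authors' architecture; the class names and the choice of a class-conditional Gaussian with shared unit covariance are assumptions for illustration) of why a generative classifier over fixed features resists forgetting: each class's parameters are estimated independently, so learning a new class never modifies the statistics of previously seen classes, whereas a shared discriminative softmax head would be overwritten by later updates.

```python
# Illustrative sketch only: a class-conditional Gaussian ("generative") classifier
# over fixed feature vectors. Each class keeps its own running mean and count, so
# learning a new class leaves earlier classes' parameters untouched.
import numpy as np


class GenerativeClassifier:
    def __init__(self, feature_dim):
        self.feature_dim = feature_dim
        self.means = {}   # class id -> running mean of features
        self.counts = {}  # class id -> number of samples seen

    def update(self, features, label):
        """Incrementally update the statistics of a single class."""
        mean = self.means.get(label, np.zeros(self.feature_dim))
        count = self.counts.get(label, 0)
        for x in features:
            count += 1
            mean = mean + (x - mean) / count  # running-mean update
        self.means[label] = mean
        self.counts[label] = count

    def predict(self, x):
        """Assign x to the nearest class mean (Gaussian with shared unit covariance)."""
        return min(self.means, key=lambda c: np.sum((x - self.means[c]) ** 2))


# Usage: classes arrive sequentially; earlier class parameters are never overwritten.
rng = np.random.default_rng(0)
clf = GenerativeClassifier(feature_dim=8)
for class_id in range(3):  # three sequential "tasks", one new class each
    clf.update(rng.normal(loc=class_id, size=(20, 8)), label=class_id)
print(clf.predict(rng.normal(loc=0, size=8)))  # expected: 0
```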
