Workshop

Workshop on Scalable Continual Learning for Lifelong Foundation Models

Beyza Ermis · Arslan Chaudhry · Çağatay Yıldız · Matthias Bethge · Bo Liu

Meeting 109, 110

Sat 14 Dec, 8:15 a.m. PST

Continual learning (CL) was long overlooked because the problems it addresses could be handled efficiently by offline learning, where resource efficiency was not a significant bottleneck. With the advent of foundation models, however, the landscape has shifted. In the pursuit of increasingly general intelligence, current foundation models are fundamentally limited by their training on static data, which leads to outdated encoded information, saturation in knowledge accumulation, and wasteful use of compute resources. The growing size of ML models places ever more emphasis on scalable CL, as even fine-tuning large models is becoming increasingly resource-intensive and time-consuming. CL now emerges as a crucial framework in this new era, essential for coping with the evolving scale and complexity of ML models. Yet even the most recent CL methods fall short of addressing the challenges posed by current data and compute scales.

At this workshop, we discuss recent advances in scalable CL that could potentially replace static foundation-model training, enabling us to model dynamic real-world information. The workshop aims to bring together experts and researchers from various domains, including language, vision, speech, and multimodal AI, to exchange ideas and foster collaboration. We are committed to advancing the development of next-generation foundation models that can learn and adapt continuously, addressing both technical and ethical aspects. With invited and contributed talks by distinguished researchers in the area, the workshop will delve into the evolving definition of CL and how CL can enable the efficient development of foundation models. We will conclude with a panel discussion on how foundation models and their continual learning may transform ML research, as well as on their societal and environmental implications.
We will also ensure ample time for discussions that encourage networking between researchers from different sub-communities, which we hope will lead to new long-term collaborations. The workshop's program will showcase a diverse range of perspectives, reflecting the approaches pursued by these sub-communities.