In this work we extend the theoretical foundations of lifelong learning. Previous analyses of this scenario either assume that tasks are sampled i.i.d. from a task environment or are limited to strongly constrained data distributions. Instead, we study two scenarios in which lifelong learning is possible even though the observed tasks do not form an i.i.d. sample: first, when they are sampled from the same environment but possibly with dependencies, and second, when the task environment is allowed to change over time. For the first case we prove a PAC-Bayesian theorem that can be seen as a direct generalization of the analogous previous result for the i.i.d. case. For the second scenario we propose to learn an inductive bias in the form of a transfer procedure. We present a generalization bound and show on a toy example how it can be used to identify a beneficial transfer algorithm.
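For orientation, the classical single-task PAC-Bayes bound that results of this kind build on can be stated as follows; this is the standard McAllester/Maurer-style bound, not the theorem proved in the paper, which extends bounds of this type to sequences of non-i.i.d. tasks:

```latex
% Classical single-task PAC-Bayes bound (McAllester/Maurer form),
% shown for orientation only.
% With probability at least 1 - \delta over an i.i.d. sample of size m,
% simultaneously for all posterior distributions Q over hypotheses:
\[
  \mathop{\mathbb{E}}_{h \sim Q}\big[L(h)\big]
  \;\le\;
  \mathop{\mathbb{E}}_{h \sim Q}\big[\widehat{L}(h)\big]
  \;+\;
  \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
\]
% where P is a prior fixed before seeing the data, L(h) the expected
% loss of hypothesis h, and \widehat{L}(h) its empirical loss on the sample.
```

The trade-off captured by the KL term, between fitting the data and staying close to a prior, is what lifelong-learning analyses in this line of work lift from single tasks to sequences of tasks.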
Author Information
Anastasia Pentina (IST Austria)
Christoph Lampert (IST Austria)

Christoph Lampert received his PhD degree in mathematics from the University of Bonn in 2003. In 2010 he joined the Institute of Science and Technology Austria (ISTA), first as an Assistant Professor and, since 2015, as a Professor. There he leads the research group for Machine Learning and Computer Vision, and since 2019 he has also headed ISTA's ELLIS unit.
More from the Same Authors
- 2021 : SSSE: Efficiently Erasing Samples from Trained Machine Learning Models »
  Alexandra Peste · Dan Alistarh · Christoph Lampert
- 2021 : Poster: On the Impossibility of Fairness-Aware Learning from Corrupted Data »
  Nikola Konstantinov · Christoph Lampert
- 2023 Poster: Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model »
  Peter Súkeník · Marco Mondelli · Christoph Lampert
- 2022 Poster: Fairness-Aware PAC Learning from Corrupted Data »
  Nikola Konstantinov · Christoph Lampert
- 2021 : On the Impossibility of Fairness-Aware Learning from Corrupted Data »
  Nikola Konstantinov · Christoph Lampert
- 2020 Poster: Unsupervised object-centric video generation and decomposition in 3D »
  Paul Henderson · Christoph Lampert
- 2017 Workshop: Learning with Limited Labeled Data: Weak Supervision and Beyond »
  Isabelle Augenstein · Stephen Bach · Eugene Belilovsky · Matthew Blaschko · Christoph Lampert · Edouard Oyallon · Emmanouil Antonios Platanios · Alexander Ratner · Christopher Ré
- 2016 Poster: Lifelong Learning with Weighted Majority Votes »
  Anastasia Pentina · Ruth Urner
- 2015 Workshop: Transfer and Multi-Task Learning: Trends and New Perspectives »
  Anastasia Pentina · Christoph Lampert · Sinno Jialin Pan · Mingsheng Long · Judy Hoffman · Baochen Sun · Kate Saenko
- 2014 Poster: Mind the Nuisance: Gaussian Process Classification using Privileged Noise »
  Daniel Hernández-Lobato · Viktoriia Sharmanska · Kristian Kersting · Christoph Lampert · Novi Quadrianto
- 2013 Workshop: New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks »
  Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi azar · Ruth Urner · Christoph Lampert · Jonathan How
- 2012 Poster: Dynamic Pruning of Factor Graphs for Maximum Marginal Prediction »
  Christoph Lampert
- 2011 Poster: Maximum Margin Multi-Label Structured Prediction »
  Christoph Lampert