

Poster

Prospective Learning: Learning for a Dynamic Future

Ashwin De Silva · Rahul Ramesh · Rubing Yang · Joshua T Vogelstein · Pratik Chaudhari

West Ballroom A-D #5709
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In real-world applications, the distribution of the data and our goals evolve over time, and we therefore care about performance over time rather than just instantaneous performance. Yet the prevailing theoretical framework in artificial intelligence (AI) is probably approximately correct (PAC) learning, which ignores time. Existing strategies to address the dynamic nature of distributions and goals have typically treated time not formally but heuristically. We therefore enrich PAC learning by assuming the data are sampled from a stochastic process, rather than a random variable, and adjust the loss accordingly. This generalizes the notion of learning to something we call "prospective learning". We prove that time-agnostic empirical risk minimization cannot solve certain trivially simple prospective learning problems. We then prove that a simple time-aware augmentation to empirical risk minimization does solve certain prospective learning problems. Numerical experiments illustrate that a few different ways of incorporating time, including modifications of a transformer, lead to improved algorithms for prospective learning, including on visual recognition tasks constructed from MNIST and CIFAR. This framework offers a conceptual link towards both (i) improving AI solutions for currently intractable problems, and (ii) better characterizing the naturally intelligent systems that solve them.
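To make the contrast between time-agnostic and time-aware empirical risk minimization concrete, here is a minimal sketch (not the authors' code) on a toy problem where the label rule flips periodically. The process, the period-known phase encoding, and the interaction feature are all illustrative assumptions; the point is only that appending (an encoding of) time to the input lets one hypothesis fit a time-varying rule, while a time-agnostic learner is near chance on future data.

```python
# Minimal sketch of time-aware vs. time-agnostic ERM on a toy
# prospective learning problem. All names and the toy process below
# are assumptions for illustration, not the paper's construction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

T = 2000
t = np.arange(T)
x = rng.normal(size=(T, 1))
flip = (t // 100) % 2                      # regime switches every 100 steps
y = (x[:, 0] > 0).astype(int) ^ flip       # label rule depends on time

# Time-agnostic ERM: ignores t entirely, so the two regimes cancel out.
agnostic = LogisticRegression(max_iter=1000).fit(x[:1000], y[:1000])

# Time-aware ERM: append a time encoding (here, the regime phase,
# assumed known) plus an interaction term so the regime-dependent
# rule is linearly separable for a logistic model.
phase = flip.reshape(-1, 1).astype(float)
xt = np.hstack([x, phase, x * phase])
aware = LogisticRegression(max_iter=1000).fit(xt[:1000], y[:1000])

# Evaluate prospectively, i.e. only on future times.
print("time-agnostic future accuracy:", agnostic.score(x[1000:], y[1000:]))
print("time-aware   future accuracy:", aware.score(xt[1000:], y[1000:]))
```

Run as written, the time-agnostic learner scores near 0.5 on the future half of the sequence while the time-aware learner is near 1.0, mirroring the abstract's claim that time-agnostic ERM fails on trivially simple prospective problems that a time-aware augmentation handles.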
