

Poster

Learning from Snapshots of Discrete and Continuous Data Streams

Pramith Devulapalli · Steve Hanneke

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Imagine a smart camera trap selectively taking pictures to understand animal movement patterns within a particular habitat. These "snapshots", pieces of data captured from a data stream at adaptively chosen times, provide a glimpse of different animal movements unfolding through time. Learning a continuous-time process through snapshots, as with camera traps, is a central theme governing a wide array of online learning situations. In this paper, we adopt a learning-theoretic perspective to understand the fundamental nature of learning different classes of functions from both discrete and continuous data streams. In our first framework, the update-and-deploy setting, a learning algorithm discretely queries a process to update a predictor that makes predictions given the data stream as input. We construct a uniform sampling algorithm that learns, with bounded error, any concept class with finite Littlestone dimension. In our second framework, the blind-prediction setting, a learning algorithm generates predictions independently of observing the process, engaging with the process only when it chooses to make inquiries. Interestingly, we show a stark contrast in learnability: in this setting, non-trivial concept classes are unlearnable. However, we find that there are natural pattern classes, sets of time-dependent and data-dependent functions, that are learnable in both the blind-prediction and update-and-deploy settings given adaptive learning algorithms. Finally, we develop a theory of pattern classes under discrete data streams for the blind-prediction setting.
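To make the update-and-deploy setting concrete, here is a minimal toy sketch of its query loop: a learner samples snapshot times uniformly, observes the process's label at each time, and maintains a version space by discarding inconsistent concepts (a halving-style update). This is an illustrative assumption of how such a learner could look on a finite concept class of threshold functions, not the authors' algorithm or analysis.

```python
import random

# Toy concept class: thresholds on [0, 1]. A concept labels time x with 1
# iff x >= t. (Illustrative stand-in; the paper treats general classes
# with finite Littlestone dimension.)
def make_threshold(t):
    return lambda x: int(x >= t)

concept_class = [make_threshold(t / 10) for t in range(11)]
target = concept_class[7]  # the unknown process generating the stream's labels

random.seed(0)
version_space = list(concept_class)

# Update phase: query snapshots at uniformly sampled times, and keep only
# the concepts consistent with every observed snapshot.
for _ in range(50):
    x = random.random()   # uniformly sampled query time
    y = target(x)         # snapshot of the process at time x
    version_space = [h for h in version_space if h(x) == y]

# Deploy phase: the deployed predictor answers by majority vote over the
# surviving concepts, so each mistake would halve the version space.
def deploy(x):
    votes = sum(h(x) for h in version_space)
    return int(votes * 2 >= len(version_space))

# The target is never eliminated, and after enough uniform snapshots the
# deployed predictor agrees with the process on most of the time axis.
errors = sum(deploy(i / 100) != target(i / 100) for i in range(100))
```

The design choice worth noting: queries happen only in the update phase, while the deployed predictor runs without further access to the process, mirroring the separation between querying and predicting in the update-and-deploy setting.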
