Prerecorded talk in Workshop: Consequential Decisions in Dynamic Environments

Invited Talk 4: From Moderate Deviations Theory to Distributionally Robust Optimization: Learning from Correlated Data

Daniel Kuhn


Abstract:

We aim to learn a performance function of the invariant state distribution of an unknown linear dynamical system based on a single trajectory of correlated state observations. The function to be learned may represent, for example, an identification objective or a value function. To this end, we develop a distributionally robust estimation scheme that evaluates the worst- and best-case values of the given performance function across all stationary state distributions that are sufficiently likely to have generated the observed state trajectory. By leveraging new insights from moderate deviations theory, we prove that our estimation scheme offers consistent upper and lower confidence bounds whose exponential convergence rate can be actively tuned. In the special case of a quadratic cost, we show that the proposed confidence bounds can be computed efficiently by solving Riccati equations.
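As a rough illustration of the quadratic-cost setting mentioned at the end of the abstract (and not of the talk's distributionally robust scheme itself), the following sketch assumes a known stable linear system x_{t+1} = A x_t + w_t with Gaussian noise. For a quadratic performance function E[x' Q x], the value under the invariant state distribution equals trace(Q Sigma), where Sigma solves the discrete Lyapunov equation Sigma = A Sigma A' + W; a plug-in estimate from a single correlated trajectory is compared against this value. The matrices A, W, Q and the horizon T are hypothetical choices; the talk's method would instead return tunable upper and lower confidence bounds around such an estimate.

# Illustrative sketch only: nominal quadratic performance of a linear system,
# evaluated exactly via a Lyapunov equation and estimated from one trajectory.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # stable dynamics (spectral radius < 1), hypothetical
W = 0.1 * np.eye(2)                     # process-noise covariance, hypothetical
Q = np.eye(2)                           # quadratic performance weight, hypothetical

# Performance under the invariant state distribution: trace(Q Sigma),
# with Sigma solving Sigma = A Sigma A^T + W.
Sigma = solve_discrete_lyapunov(A, W)
J_true = np.trace(Q @ Sigma)

# Plug-in estimate from a single trajectory of correlated state observations.
T = 5000
x = np.zeros(2)
J_hat = 0.0
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), W)
    J_hat += x @ Q @ x
J_hat /= T

print(f"invariant-distribution value = {J_true:.3f}, trajectory estimate = {J_hat:.3f}")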
