Poster
Online Learning of Quantum States
Scott Aaronson · Xinyi Chen · Elad Hazan · Satyen Kale · Ashwin Nayak

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 517 AB #166
Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_1$, then other copies using a measurement $E_2$, and so on. At each stage $t$, we generate a current hypothesis $\omega_t$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_i \omega_t) - \operatorname{Tr}(E_i \rho)|$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $O(n/\varepsilon^2)$ times. Even in the non-realizable setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that incur at most $O(\sqrt{Tn})$ excess loss over the best possible state on the first $T$ measurements. These results generalize a 2007 theorem of Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results, using convex optimization, quantum postselection, and sequential fat-shattering dimension, which have different advantages in terms of parameters and portability.
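One way the convex-optimization route can be instantiated is an online update over density matrices in the style of matrix multiplicative weights. The sketch below is illustrative only: the function name, the squared-loss feedback model, and the fixed learning rate eta are assumptions made for this example, not necessarily the algorithm or parameters analyzed in the paper.

    import numpy as np
    from scipy.linalg import expm

    def mmw_online_learner(measurements, outcomes, eta=0.1):
        # Illustrative matrix-multiplicative-weights learner for an unknown
        # d-dimensional (d = 2^n) quantum state; a sketch, not the paper's
        # exact algorithm.
        # measurements: list of Hermitian d x d operators E_t with 0 <= E_t <= I.
        # outcomes:     list of (possibly noisy) observed values b_t in [0, 1].
        d = measurements[0].shape[0]
        grad_sum = np.zeros((d, d), dtype=complex)    # running sum of loss gradients
        hypotheses, errors = [], []
        for E, b in zip(measurements, outcomes):
            # Hypothesis state: normalized matrix exponential of the negative
            # accumulated gradients (a Gibbs-like density matrix).
            W = expm(-eta * grad_sum)
            omega = W / np.trace(W).real
            pred = np.trace(E @ omega).real           # predicted Tr(E_t omega_t)
            hypotheses.append(omega)
            errors.append(abs(pred - b))
            # Gradient of the squared loss (Tr(E omega) - b)^2 at omega_t.
            grad_sum += 2.0 * (pred - b) * E
        return hypotheses, errors

As a toy check (again an assumed setup, not taken from the paper), one can feed the learner random two-outcome measurements on a fixed single-qubit state and watch the per-round prediction error shrink:

    rng = np.random.default_rng(0)
    rho = np.array([[0.7, 0.2], [0.2, 0.3]])          # fixed single-qubit state
    Es = []
    for _ in range(100):
        A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        H = (A + A.conj().T) / 2                      # random Hermitian operator
        lo, hi = np.linalg.eigvalsh(H)[[0, -1]]
        Es.append((H - lo * np.eye(2)) / (hi - lo))   # rescale so 0 <= E <= I
    bs = [np.trace(E @ rho).real for E in Es]
    _, errs = mmw_online_learner(Es, bs)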

Author Information

Scott Aaronson (UT Austin)
Xinyi Chen (Google Brain)
Elad Hazan (Princeton University)
Satyen Kale (Google)
Ashwin Nayak (University of Waterloo)