Poster

Robust Learning of Fixed-Structure Bayesian Networks

Yu Cheng · Ilias Diakonikolas · Daniel Kane · Alistair Stewart

Room 210 #87

Keywords: [ Learning Theory ] [ Graphical Models ] [ Unsupervised Learning ]


Abstract: We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted. In this work, we study the fully observable discrete case where the structure of the network is given. Even in this basic setting, previous learning algorithms either run in exponential time or lose dimension-dependent factors in their error guarantees. We provide the first computationally efficient robust learning algorithm for this problem with dimension-independent error guarantees. Our algorithm has near-optimal sample complexity, runs in polynomial time, and achieves error that scales nearly-linearly with the fraction of adversarially corrupted samples. Finally, we show on both synthetic and semi-synthetic data that our algorithm performs well in practice.
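To make the corruption model concrete, here is a minimal sketch (not from the paper; all names and parameter values are illustrative) of the setting: samples are drawn from a toy fixed-structure binary Bayesian network, an $\epsilon$-fraction is adversarially replaced, and the naive empirical estimate of a conditional probability shifts by an $\epsilon$-scale amount, which is why a robust estimator is needed.

```python
import random

random.seed(0)

# Toy fixed-structure Bayesian network over binary variables: X1 -> X2.
# The structure is known; only the parameters below are to be learned.
# These names and values are illustrative assumptions, not the paper's.
P_X1 = 0.5                       # P(X1 = 1)
P_X2_GIVEN = {0: 0.2, 1: 0.8}    # P(X2 = 1 | X1)

def sample():
    x1 = 1 if random.random() < P_X1 else 0
    x2 = 1 if random.random() < P_X2_GIVEN[x1] else 0
    return (x1, x2)

n = 20000
eps = 0.1  # fraction of adversarially corrupted samples

clean = [sample() for _ in range(n)]

# A simple adversary: replace an eps-fraction of samples with a fixed
# outlier (1, 0), chosen to bias the estimate of P(X2 = 1 | X1 = 1).
corrupted = [(1, 0)] * int(eps * n) + clean[int(eps * n):]

def estimate_p_x2_given_x1(data, x1_val):
    """Naive empirical estimate of P(X2 = 1 | X1 = x1_val)."""
    rows = [x2 for (x1, x2) in data if x1 == x1_val]
    return sum(rows) / len(rows)

est_clean = estimate_p_x2_given_x1(clean, 1)        # close to 0.8
est_corrupt = estimate_p_x2_given_x1(corrupted, 1)  # pulled noticeably below 0.8
```

The naive estimate degrades by a dimension-dependent amount as more conditional probability tables are corrupted this way; the paper's contribution is an efficient estimator whose total error scales nearly linearly with $\epsilon$ alone.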
