We present a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. Our meta-algorithm takes a training set of probabilistic programs that describe models with observations, and attempts to learn an efficient method for inferring the posterior of a similar program. A key feature of our approach is the use of what we call a white-box inference algorithm that analyses the given program sequentially using multiple neural networks to compute an approximate posterior. The parameters of these networks are learnt from a training set by our meta-algorithm. We empirically demonstrate that the learnt inference algorithm generalises well to programs that are new in terms of both parameters and model structures, and report cases where our approach achieves greater test-time efficiency than alternatives such as HMC.
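To give a rough sense of what a sequential, network-driven analysis of a program might look like, below is a minimal sketch in PyTorch. It is not the authors' architecture: the encoding of a program as per-statement feature vectors, the GRU-based state update, and the single-latent Gaussian posterior are all illustrative assumptions.

    # Minimal sketch of a "white-box" style inference pass, NOT the paper's method:
    # all names, the program representation, and the mean-field Gaussian readout
    # are assumptions made for illustration only.
    import torch
    import torch.nn as nn

    class StatementEncoder(nn.Module):
        """Hypothetical network that updates an inference state from one statement."""
        def __init__(self, state_dim=16, feat_dim=4):
            super().__init__()
            self.rnn_cell = nn.GRUCell(feat_dim, state_dim)
            self.readout = nn.Linear(state_dim, 2)  # mean and log-std of one latent

        def forward(self, state, stmt_features):
            # One step of the sequential analysis: fold a statement into the state.
            return self.rnn_cell(stmt_features, state)

    def infer_posterior(program, encoder, state_dim=16):
        """Sequentially analyse a toy 'program', given as a list of per-statement
        feature vectors (e.g. encoding statement type and observed value), and
        read off an approximate Gaussian posterior for a single latent variable."""
        state = torch.zeros(1, state_dim)
        for stmt_features in program:
            state = encoder(state, stmt_features.unsqueeze(0))
        mean, log_std = encoder.readout(state).squeeze(0)
        return torch.distributions.Normal(mean, log_std.exp())

    # Toy usage: a two-statement "program" with 4-dimensional statement features.
    encoder = StatementEncoder()
    program = [torch.tensor([1.0, 0.0, 0.0, 0.5]),   # e.g. a sample statement
               torch.tensor([0.0, 1.0, 2.3, 0.0])]   # e.g. an observe statement
    approx_posterior = infer_posterior(program, encoder)
    print(approx_posterior.mean, approx_posterior.stddev)

In the paper's setting, the parameters of such networks would be the quantities learnt by the meta-algorithm across a training set of programs; the sketch above only shows the forward, per-program inference pass.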
Author Information
Gwonsoo Che (KAIST)
Hongseok Yang (KAIST)
More from the Same Authors
- 2022 Poster: LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation
  Geon-Hyeong Kim · Jongmin Lee · Youngsoo Jang · Hongseok Yang · Kee-Eung Kim
- 2022 Poster: Learning Symmetric Rules with SATNet
  Sangho Lim · Eun-Gyeol Oh · Hongseok Yang
- 2021: Meta-Learning an Inference Algorithm for Probabilistic Programs - Gwonsoo Che
  AIPLANS 2021 · Gwonsoo Che
- 2020 Poster: On Correctness of Automatic Differentiation for Non-Differentiable Functions
  Wonyeol Lee · Hangyeol Yu · Xavier Rival · Hongseok Yang
- 2020 Spotlight: On Correctness of Automatic Differentiation for Non-Differentiable Functions
  Wonyeol Lee · Hangyeol Yu · Xavier Rival · Hongseok Yang
- 2018 Poster: Reparameterization Gradient for Non-differentiable Models
  Wonyeol Lee · Hangyeol Yu · Hongseok Yang