Poster

A Metalearned Neural Circuit for Nonparametric Bayesian Inference

Jake Snell · Gianluca Bencomo · Tom Griffiths

East Exhibit Hall A-C #3511
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Most applications of machine learning to classification assume a closed set of balanced classes. This is at odds with the real world, where class occurrence statistics often follow a long-tailed power-law distribution and it is unlikely that all classes are seen in a single sample. Nonparametric Bayesian models naturally capture this phenomenon, but have significant practical barriers to widespread adoption, namely implementation complexity and computational inefficiency. To address this, we present a method for extracting the inductive bias from a nonparametric Bayesian model and transferring it to an artificial neural network. By simulating data with a nonparametric Bayesian prior, we can metalearn a sequence model that performs inference over an unlimited set of classes. After training, this "neural circuit" has distilled the corresponding inductive bias and can successfully perform sequential inference over an open set of classes. Our experimental results show that the metalearned neural circuit achieves comparable or better performance than particle filter-based methods for inference in these models while being faster and simpler to use than methods that explicitly incorporate Bayesian nonparametric inference.
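The abstract does not include code, but the data-simulation step it describes can be illustrated with a minimal Python sketch. It assumes the Chinese restaurant process (CRP) as the nonparametric Bayesian prior, a standard choice that yields an unbounded, long-tailed set of classes; the function names and the parameters alpha, seq_len, and obs_dim are illustrative assumptions, not the authors' implementation.

import random


def sample_crp_labels(seq_len: int, alpha: float) -> list[int]:
    """Draw a sequence of class labels from a CRP with concentration alpha."""
    labels: list[int] = []
    counts: list[int] = []  # counts[k] = number of items assigned to class k
    for n in range(seq_len):
        # A new class opens with probability alpha / (n + alpha); otherwise
        # an existing class is chosen proportional to its current count,
        # producing the long-tailed class statistics the paper targets.
        if n == 0 or random.random() < alpha / (n + alpha):
            counts.append(0)
            k = len(counts) - 1
        else:
            r = random.uniform(0, n)  # sum(counts) == n at this point
            cum = 0.0
            for k, c in enumerate(counts):
                cum += c
                if r < cum:
                    break
        counts[k] += 1
        labels.append(k)
    return labels


def sample_episode(seq_len: int = 50, alpha: float = 1.0, obs_dim: int = 2):
    """Pair each CRP label with a noisy observation of a class prototype."""
    labels = sample_crp_labels(seq_len, alpha)
    prototypes: dict[int, list[float]] = {}
    observations = []
    for k in labels:
        if k not in prototypes:
            prototypes[k] = [random.gauss(0.0, 1.0) for _ in range(obs_dim)]
        observations.append([m + random.gauss(0.0, 0.1) for m in prototypes[k]])
    return observations, labels


if __name__ == "__main__":
    xs, ys = sample_episode()
    print(f"{len(set(ys))} distinct classes in a sequence of {len(ys)} items")

A sequence model (for instance, an LSTM or a transformer) would then be metatrained on many such simulated episodes to predict each item's class assignment given the preceding items, so that the prior's inductive bias is distilled into the network's weights.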
