Learning Differentiable Programs with Admissible Neural Heuristics
Ameesh Shah · Eric Zhan · Jennifer Sun · Abhinav Verma · Yisong Yue · Swarat Chaudhuri

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #832

We study the problem of learning differentiable functions expressed as programs in a domain-specific language. Such programmatic models can offer benefits such as composability and interpretability; however, learning them requires optimizing over a combinatorial space of program "architectures". We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax. Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs, which can then be used to complete any partial program. All the parameters of this relaxed program can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search. We instantiate our approach on top of the A* and Iterative Deepening Depth-First Search algorithms and use these algorithms to learn programmatic classifiers in three sequence classification tasks. Our experiments show that the algorithms outperform state-of-the-art methods for program learning, and that they discover programmatic classifiers that yield natural interpretations and achieve competitive accuracy.
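The abstract's core idea — best-first search over top-down derivations, guided by an admissible lower bound computed for each partial program — can be illustrated with a toy sketch. Everything below is hypothetical: the grammar, the loss values, and the hole symbol `"?"` are made up, and the admissible heuristic is brute-forced over completions rather than obtained from a trained neural relaxation as in the paper.

```python
import heapq
from itertools import count

MAX_LEN = 3  # cap derivation size so the toy grammar stays finite

def expand(prog):
    """Replace the leftmost hole "?" with each production (top-down derivation)."""
    i = prog.index("?")
    for rhs in (["x"], ["y"], ["if", "?", "?"]):
        child = prog[:i] + rhs + prog[i + 1:]
        if len(child) <= MAX_LEN:
            yield child

def loss(prog):
    """Stand-in for a fully trained program's loss (made-up numbers)."""
    table = {("x",): 0.9, ("y",): 0.7,
             ("if", "x", "y"): 0.2, ("if", "y", "x"): 0.5}
    return table.get(tuple(prog), 1.0)  # unlisted complete programs score poorly

def completions(prog):
    """Enumerate all complete programs derivable from a partial program."""
    if "?" not in prog:
        yield prog
        return
    for child in expand(prog):
        yield from completions(child)

def heuristic(prog):
    """Admissible lower bound on the loss of any completion. The paper obtains
    this from the training loss of a neural relaxation; here we brute-force it."""
    return min(loss(c) for c in completions(prog))

def astar(start):
    """Best-first (A*-style) search: pop the partial program with the lowest
    heuristic value until a complete program is reached."""
    tie = count()  # unique tiebreaker so heapq never compares lists
    frontier = [(heuristic(start), next(tie), start)]
    while frontier:
        f, _, prog = heapq.heappop(frontier)
        if "?" not in prog:
            return prog, f  # heuristic equals true loss on complete programs
        for child in expand(prog):
            heapq.heappush(frontier, (heuristic(child), next(tie), child))

best, best_loss = astar(["?"])
print(best, best_loss)  # ['if', 'x', 'y'] 0.2
```

Because the heuristic never overestimates the best achievable loss, the first complete program popped is optimal under this toy cost model; the paper's contribution is making such a lower bound cheap and approximately admissible via end-to-end-trained relaxations.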

Author Information

Ameesh Shah (UC Berkeley)
Eric Zhan (Caltech)
Jennifer Sun (Caltech)
Abhinav Verma (The University of Texas at Austin)
Yisong Yue (Caltech)
Swarat Chaudhuri (The University of Texas at Austin)
