We propose an online training procedure for a transformer-based automated theorem prover. Our approach leverages a new search algorithm, HyperTree Proof Search (HTPS), that learns from previous proof searches through online training, allowing it to generalize to domains far from the training distribution. We report detailed ablations of our pipeline’s main components by studying performance on three environments of increasing complexity. In particular, we show that with HTPS alone, a model trained on annotated proofs manages to prove 65.4% of a held-out set of Metamath theorems, significantly outperforming the previous state of the art of 56.5% by GPT-f. Online training on these unproved theorems increases accuracy to 82.6%. With a similar computational budget, we improve the state of the art on the Lean-based miniF2F-curriculum dataset from 31% to 42% proving accuracy.
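For readers unfamiliar with the AND/OR structure that the abstract alludes to, the following is a minimal, hypothetical Python sketch of best-first search over a hypertree of goals. It is not the authors' implementation: the `suggest_tactics` interface, the priority scheme, and all names are assumptions made for illustration, and the actual HTPS algorithm additionally trains its policy and critic models online from the searches it performs.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of proof search over a hypertree (not the authors' code).
# A goal is proved if SOME tactic reduces it to a set of subgoals that are
# ALL proved: an AND/OR hypertree rather than an ordinary search tree.

@dataclass(order=True)
class Node:
    priority: float                      # lower = expanded earlier
    goal: str = field(compare=False)     # the goal statement itself

def hypertree_proof_search(root_goal, suggest_tactics, max_expansions=1000):
    """Best-first expansion of goals in a hypertree.

    `suggest_tactics(goal)` stands in for the policy model: it returns a list
    of (score, subgoals) pairs, where an empty subgoal list closes the goal.
    """
    proved = set()      # goals already proved
    children = {}       # goal -> list of subgoal tuples (one tuple per tactic)
    frontier = [Node(0.0, root_goal)]
    expansions = 0

    while frontier and expansions < max_expansions:
        goal = heapq.heappop(frontier).goal
        if goal in proved or goal in children:
            continue
        expansions += 1

        candidates = suggest_tactics(goal)   # assumed policy-model interface
        children[goal] = [tuple(subgoals) for _, subgoals in candidates]

        for score, subgoals in candidates:
            if not subgoals:
                proved.add(goal)             # a tactic closes the goal directly
                break
            for sub in subgoals:
                # the score stands in for a critic estimating how provable
                # the subgoal looks; higher score = expanded sooner
                heapq.heappush(frontier, Node(-score, sub))

        # propagate: a goal is proved once one tactic's subgoals are all proved
        changed = True
        while changed:
            changed = False
            for g, tactic_subgoals in children.items():
                if g not in proved and any(all(s in proved for s in subs)
                                           for subs in tactic_subgoals):
                    proved.add(g)
                    changed = True

    return root_goal in proved
```

The AND-nodes (every subgoal produced by a chosen tactic must be proved) are what distinguish a hypertree from the plain tree search used in earlier best-first provers; the sketch keeps only that structural idea and omits the visit-count statistics and online model updates described in the paper.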
Author Information
Guillaume Lample (Facebook AI Research)
Timothee Lacroix (Facebook)
Marie-Anne Lachaux (Facebook AI Research)
Aurelien Rodriguez (Facebook)
Amaury Hayat (Ecole des Ponts Paristech)
Thibaut Lavril (Facebook)
Gabriel Ebner (Vrije Universiteit Amsterdam)
Xavier Martinet (Meta)
More from the Same Authors
- 2022 Poster: End-to-end Symbolic Regression with Transformers
  Pierre-alexandre Kamienny · Stéphane d'Ascoli · Guillaume Lample · Francois Charton
- 2021 Poster: DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
  Marie-Anne Lachaux · Baptiste Roziere · Marc Szafraniec · Guillaume Lample
- 2020 Poster: Unsupervised Translation of Programming Languages
  Baptiste Roziere · Marie-Anne Lachaux · Lowik Chanussot · Guillaume Lample
- 2019 Poster: Cross-lingual Language Model Pretraining
  Alexis Conneau · Guillaume Lample
- 2019 Spotlight: Cross-lingual Language Model Pretraining
  Alexis Conneau · Guillaume Lample