Poster
Autoformalization with Large Language Models
Yuhuai Wu · Albert Q. Jiang · Wenda Li · Markus Rabe · Charles Staats · Mateja Jamnik · Christian Szegedy

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #512
Autoformalization is the process of automatically translating natural language mathematics into formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence. While this long-term goal has seemed elusive for a long time, we show that large language models provide new prospects towards it. We make the surprising observation that LLMs can translate a significant portion ($25.3\%$) of mathematical competition problems perfectly into formal specifications in Isabelle/HOL. We demonstrate the usefulness of this process by improving a previously introduced neural theorem prover via training on these autoformalized theorems. Our methodology yields a new state-of-the-art result on the MiniF2F theorem-proving benchmark, improving the proof rate from $29.6\%$ to $35.2\%$.
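To make the goal concrete, below is a small, hypothetical illustration of the kind of machine-checkable statement and proof an autoformalization system aims to produce from an informal problem such as "show that $n^2 + n$ is even for every natural number $n$". The paper's experiments target Isabelle/HOL; this sketch is written in Lean 4 with Mathlib purely for illustration, and the theorem name and proof are our own, not taken from the paper.

    -- Informal: "Show that n^2 + n is even for every natural number n."
    -- A possible autoformalized counterpart in Lean 4 / Mathlib
    -- (hypothetical example; the paper itself targets Isabelle/HOL):
    import Mathlib

    theorem even_sq_add_self (n : ℕ) : Even (n ^ 2 + n) := by
      -- Rewrite n^2 + n as the product of two consecutive numbers.
      have h : n ^ 2 + n = n * (n + 1) := by ring
      rw [h]
      -- Mathlib knows that n * (n + 1) is always even.
      exact Nat.even_mul_succ_self n

A system like the one described in the abstract would be judged on whether the formal statement it produces faithfully captures the informal problem; the proof can then be supplied by a neural theorem prover.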

Author Information

Yuhuai Wu (Google)
Albert Q. Jiang (University of Cambridge)
Wenda Li (University of Cambridge)
Markus Rabe (Google Research)
Charles Staats (Google)
Mateja Jamnik (University of Cambridge)
Christian Szegedy (Google)

Christian Szegedy is a machine learning scientist at Google Research. He has a PhD in mathematics from the University of Bonn, Germany. His most influential past works include the discovery of adversarial examples and various computer vision architectures for image recognition and object detection. He is a co-inventor of Batch Normalization. He is currently working on automated theorem proving and autoformalization of mathematics via deep learning.
