Symbolic regression, the task of predicting the mathematical expression of a function from observations of its values, is difficult and usually involves a two-step procedure: predicting the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function. The dominant approach is genetic programming, which evolves candidates by iterating this subroutine a large number of times. Neural networks have recently been tasked to predict the correct skeleton in a single try, but remain much less powerful. In this paper, we challenge this two-step procedure, and task a Transformer to directly predict the full mathematical expression, constants included. The predicted constants can subsequently be refined by feeding them to the non-convex optimizer as an informed initialization. We present ablations to show that this end-to-end approach yields better results, sometimes even without the refinement step. We evaluate our model on problems from the SRBench benchmark and show that it approaches the performance of state-of-the-art genetic programming with several orders of magnitude faster inference.
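To make the refinement step described above concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a predicted expression, constants included, is refined by a non-convex optimizer that starts from the model's predicted constants as an informed initialization. The expression family `c0 * sin(c1 * x) + c2`, the function names, and the choice of BFGS are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_expression(x, c):
    # Hypothetical output of the end-to-end model: a full expression
    # with concrete constants c = (c0, c1, c2), here c0*sin(c1*x) + c2.
    return c[0] * np.sin(c[1] * x) + c[2]

def refine_constants(x, y, c_pred):
    """Refine the predicted constants with a non-convex optimizer (BFGS),
    using the model's prediction as the informed initialization."""
    mse = lambda c: np.mean((predicted_expression(x, c) - y) ** 2)
    return minimize(mse, c_pred, method="BFGS").x

# Toy data from a ground-truth function with constants (2.0, 3.0, -1.0).
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)
y = 2.0 * np.sin(3.0 * x) - 1.0

# Suppose the model predicted (1.8, 3.1, -0.9): close enough that the
# local optimizer converges to the correct basin in a single run.
print(refine_constants(x, y, np.array([1.8, 3.1, -0.9])))  # ~[2.0, 3.0, -1.0]
```

Starting from the predicted constants rather than a random guess is what makes this step cheap: one local optimization usually suffices, whereas a skeleton-only prediction would typically need many random restarts of the same non-convex fit.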
Author Information
Pierre-Alexandre Kamienny (Meta)
Stéphane d'Ascoli (ENS Paris / Meta AI)
Currently a joint Ph.D. student at ENS (supervised by Giulio Biroli) and FAIR (supervised by Levent Sagun), working on the theory of deep learning.
Guillaume Lample (Facebook AI Research)
Francois Charton (Meta AI)
More from the Same Authors
- 2022: Symbolic-Model-Based Reinforcement Learning
  Pierre-Alexandre Kamienny · Sylvain Lamprier
- 2022: Privileged Deep Symbolic Regression
  Luca Biggio · Tommaso Bendinelli · Pierre-Alexandre Kamienny
- 2022: What is my math transformer doing? Three results on interpretability and generalization
  Francois Charton
- 2023 Poster: SALSA VERDE: a machine learning attack on LWE with sparse small secrets
  Cathy Li · Jana Sotakova · Emily Wenger · Zeyuan Allen-Zhu · Francois Charton · Kristin E. Lauter
- 2022 Poster: HyperTree Proof Search for Neural Theorem Proving
  Guillaume Lample · Timothee Lacroix · Marie-Anne Lachaux · Aurelien Rodriguez · Amaury Hayat · Thibaut Lavril · Gabriel Ebner · Xavier Martinet
- 2022 Poster: SALSA: Attacking Lattice Cryptography with Transformers
  Emily Wenger · Mingjie Chen · Francois Charton · Kristin E. Lauter
- 2021 Poster: On the interplay between data structure and loss function in classification problems
  Stéphane d'Ascoli · Marylou Gabrié · Levent Sagun · Giulio Biroli
- 2020 Poster: Triple descent and the two kinds of overfitting: where & why do they appear?
  Stéphane d'Ascoli · Levent Sagun · Giulio Biroli
- 2020 Spotlight: Triple descent and the two kinds of overfitting: where & why do they appear?
  Stéphane d'Ascoli · Levent Sagun · Giulio Biroli
- 2019 Poster: Cross-lingual Language Model Pretraining
  Alexis Conneau · Guillaume Lample
- 2019 Spotlight: Cross-lingual Language Model Pretraining
  Alexis Conneau · Guillaume Lample
- 2019 Poster: Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias
  Stéphane d'Ascoli · Levent Sagun · Giulio Biroli · Joan Bruna