We investigate the failure cases and out-of-distribution behavior of transformers trained on matrix inversion, eigen decomposition and eigenvalue calculation. We show that incorrect model predictions still retain deep mathematical properties of the solution (e.g. correct eigenvalues, unit norm of eigenvectors), and that almost all model failures can be attributed to, and predicted from, properties of the problem or solution. This demonstrates that, when in doubt, math transformers do not hallucinate crazy solutions (as was sometimes proposed) but remain "roughly right". We also show that a careful choice of training dataset can accelerate training, while allowing the model to generalize far out of its training distribution, invalidating the idea that transformers "merely interpolate" from memorized examples.
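The claim that failed predictions still satisfy structural properties of the true solution can be checked mechanically. Below is a minimal NumPy sketch of such a check for the eigen decomposition task; it is not the paper's code, and the function name, tolerance, and the stand-in "model prediction" are illustrative assumptions.

    # Sketch: verify whether a predicted eigendecomposition (V_pred, w_pred) of a
    # symmetric matrix M keeps "deep mathematical properties" -- correct eigenvalues
    # and unit-norm eigenvectors -- even when the overall reconstruction is wrong.
    import numpy as np

    def check_prediction(M, V_pred, w_pred, tol=1e-2):
        """Return diagnostics for a predicted eigendecomposition of M."""
        w_true = np.sort(np.linalg.eigvalsh(M))                     # reference eigenvalues
        eigvals_ok = np.allclose(np.sort(w_pred), w_true, atol=tol)
        unit_norm = np.allclose(np.linalg.norm(V_pred, axis=0), 1.0, atol=tol)
        recon_err = np.linalg.norm(V_pred @ np.diag(w_pred) @ V_pred.T - M)
        return {
            "correct_eigenvalues": eigvals_ok,
            "unit_norm_eigenvectors": unit_norm,
            "reconstruction_error": recon_err,
        }

    # Example on a random symmetric matrix, using numpy's own decomposition
    # as a stand-in for a (hypothetical) transformer prediction.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    M = (A + A.T) / 2
    w, V = np.linalg.eigh(M)
    print(check_prediction(M, V, w))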
Author Information
Francois Charton (Meta AI)
More from the Same Authors
- 2022 Poster: End-to-end Symbolic Regression with Transformers
  Pierre-alexandre Kamienny · Stéphane d'Ascoli · Guillaume Lample · Francois Charton
- 2022 Poster: SALSA: Attacking Lattice Cryptography with Transformers
  Emily Wenger · Mingjie Chen · Francois Charton · Kristin E. Lauter