Poster in Workshop: Machine Learning for Systems

Renamer: A Transformer Architecture Invariant to Variable Renaming

Zachary Ankner · Alex Renda · Michael Carbin


Abstract:

Many modeling tasks involve learning functions that are invariant to certain input transformations. We study one such invariance: semantics-preserving variable renaming for models of code. We show that vanilla Transformers trained on renaming-invariant tasks do not exhibit renaming invariance. We propose Renamer, a Transformer architecture that is itself invariant to semantics-preserving variable renaming. On a CPU simulation task, Renamer reduces error by between 24.79% and 52.8% compared to a vanilla Transformer.
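
For intuition, the sketch below illustrates what the invariance property means. It is not the paper's method: Renamer builds the invariance into the Transformer architecture itself, whereas this sketch obtains it through a simpler baseline, canonical alpha-renaming of the input before any model sees it. The function names and the toy instruction vocabulary here are hypothetical.

```python
# A minimal sketch of renaming invariance, not the Renamer architecture:
# the paper makes the model itself invariant; this demonstrates the
# property via canonical alpha-renaming of the input token sequence.

def canonicalize_variables(tokens, is_variable):
    """Alpha-rename variables to v0, v1, ... in order of first occurrence,
    so that any semantics-preserving renaming of the input maps to the
    same canonical token sequence."""
    mapping = {}
    canonical = []
    for tok in tokens:
        if is_variable(tok):
            if tok not in mapping:
                mapping[tok] = f"v{len(mapping)}"
            canonical.append(mapping[tok])
        else:
            canonical.append(tok)
    return canonical

def is_variable(tok):
    # Hypothetical convention: any token outside the fixed opcode
    # vocabulary is a renameable variable.
    opcodes = {"load", "add", "store", "=", ";"}
    return tok not in opcodes

# Two programs identical up to a semantics-preserving variable renaming.
prog_a = "x = load a ; y = add x b ; store y".split()
prog_b = "u = load p ; w = add u q ; store w".split()

# Both collapse to the same canonical form, so any model applied after
# canonicalization is renaming-invariant by construction.
assert canonicalize_variables(prog_a, is_variable) == \
       canonicalize_variables(prog_b, is_variable)
```

A vanilla Transformer trained on the raw token sequences would see `prog_a` and `prog_b` as distinct inputs and, as the abstract notes, is not guaranteed to treat them identically; building the invariance into the architecture avoids relying on such a preprocessing step.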
