

Poster in Workshop: Machine Learning for Systems

ComPile: A Large IR Dataset from Production Sources

Aiden Grossman · Ludger Paehler · Konstantinos Parasyris · Tal Ben-Nun · Jacob Hegna · William Moses · Mircea Trofin · Johannes Doerfert


Abstract:

Code is increasingly becoming a core data modality of modern machine learning research, impacting not only the way we write code with conversational agents like OpenAI's ChatGPT, Google's Bard, or Anthropic's Claude, and the way we translate code from one language into another, but also the compiler infrastructure underlying these languages. While modeling approaches may vary and representations differ, the targeted tasks often remain the same within the individual classes of models. Relying solely on the ability of modern models to extract information from unstructured code fails to take advantage of 70 years of programming language and compiler development, because it does not utilize the structure inherent to programs during data collection. This detracts from the performance of models working over a tokenized representation of input code and precludes the use of these models in the compiler itself. To work towards the first intermediate representation (IR) based models, we fully utilize the LLVM compiler infrastructure, shared by a number of languages, to generate a 182B-token dataset of LLVM IR. We generated this dataset from programming languages built on the shared LLVM infrastructure, including Rust, Swift, Julia, and C/C++, by hooking into LLVM code generation either through the language's package manager or the compiler directly, extracting intermediate representations from production-grade programs. Our dataset shows great promise for large language model training and machine-learned compiler components.
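To make the extraction idea concrete, the sketch below shows the simplest form of "hooking into LLVM code generation": asking an LLVM-based compiler to stop at the IR stage and emit bitcode instead of machine code. The `-emit-llvm` flag for clang and the `llvm-dis` tool are real LLVM components; the helper name, directory layout, and optimization level are illustrative assumptions, not the authors' actual ComPile pipeline, which works through package managers and compiler hooks at much larger scale.

```python
# Minimal sketch: compile a C source file to LLVM bitcode, the raw
# material of an IR dataset. Assumes clang and llvm-dis are installed;
# extract_bitcode and the "corpus" directory are hypothetical names.
import subprocess
from pathlib import Path

def extract_bitcode(source: Path, out_dir: Path) -> Path:
    """Compile a single C/C++ source file to LLVM bitcode (.bc)."""
    out_dir.mkdir(parents=True, exist_ok=True)
    bc_path = out_dir / (source.stem + ".bc")
    # clang's -emit-llvm flag stops the compilation pipeline at LLVM IR,
    # producing a bitcode file rather than a native object file.
    subprocess.run(
        ["clang", "-c", "-emit-llvm", "-O1", str(source), "-o", str(bc_path)],
        check=True,
    )
    return bc_path

if __name__ == "__main__":
    bc = extract_bitcode(Path("example.c"), Path("corpus"))
    # llvm-dis converts bitcode back to textual IR (.ll) for inspection.
    subprocess.run(["llvm-dis", str(bc)], check=True)
```

Other LLVM frontends expose analogous switches (for instance, rustc's `--emit=llvm-bc` and swiftc's `-emit-bc`), which is what makes a single shared IR dataset across Rust, Swift, Julia, and C/C++ feasible.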
