Poster - Recorded Presentation in Workshop: Machine Learning for Systems

The Case for Learning Machine Language

Guangda Liu · Chieh-Jan Mike Liang · Shijie Cao · Shuai Lu · Leendert van Doorn


Abstract:

This paper focuses on enabling modern processors to better predict the upcoming instructions they will execute, in order to improve instruction-related speculation at runtime. Although various approaches have been proposed for such runtime speculation, they remain general-purpose and rely on language-agnostic features. Using branch prediction as a case study, we take a first step by motivating the potential of learning semantic correlations in machine language (i.e., CPU instructions) and demonstrating how to apply language modeling to it. We then present a branch predictor design that takes advantage of our Transformer-based language model. Empirical results on SPEC CPU 2017 benchmarks (on RISC-V) show that language modeling can improve branch prediction accuracy by up to 11.03% and processor IPC by up to 21.16%.
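To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how a Transformer-based language model could be applied to branch prediction: a window of recent instructions, tokenized into integer IDs, is encoded and the representation at the branch position is used to predict a taken/not-taken outcome. The vocabulary size, window length, tokenization scheme, and model dimensions below are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class BranchLM(nn.Module):
    """Toy Transformer over instruction tokens that predicts whether a branch is taken.

    Assumptions (hypothetical): each instruction is tokenized into integer IDs
    (e.g., opcode/register/immediate buckets), the context window is 32 tokens,
    and the last token in the window corresponds to the branch being predicted.
    """
    def __init__(self, vocab_size=1024, d_model=128, nhead=4, num_layers=2, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)  # logit for "branch taken"

    def forward(self, tokens):                        # tokens: (batch, seq_len) int IDs
        seq_len = tokens.size(1)
        pos_ids = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos_ids)    # token + position embeddings
        h = self.encoder(x)                           # contextualize the instruction window
        return self.head(h[:, -1])                    # predict from the branch position

# Toy usage: a batch of 8 instruction windows, each 32 tokens long.
model = BranchLM()
tokens = torch.randint(0, 1024, (8, 32))
logits = model(tokens)
taken = torch.sigmoid(logits) > 0.5
print(taken.shape)  # torch.Size([8, 1])
```

In practice such a model would be trained on instruction traces with recorded branch outcomes; how the paper tokenizes RISC-V instructions and integrates the model into the predictor pipeline is described in the full text.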
