
Poster in Workshop: Machine Learning for Systems

PLPilot: Benchmark an Automated Programming Language Design Framework Enabled by Large Language Models

Kaiyan Chang · Kubn Wang · Mengdi Wang · Shengwen Liang · Yinhe Han · Huawei Li · Xiaowei Li · Ying Wang


Abstract:

The design of a new programming language traditionally requires expertise spanning both syntax and semantics. Recently, large language models (LLMs) have shown unprecedented capability in code generation, with the potential to reshape the programming language design stack, from automating the writing of compiler passes to formally defining a language's syntax and semantics. However, no framework yet exists that leverages LLMs to support programming language design. We propose a programming language design framework enabled by large language models, which decouples each stage of the language design process into a form amenable to LLMs. We further propose a set of benchmarks for LLM-based programming language design tasks. We evaluate the framework on eight decoupled programming language design stages and observe substantial productivity improvements over manually designed languages.
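The central idea, decoupling the design process into stages that an LLM can tackle independently, can be illustrated with a minimal sketch. This is not the paper's published interface: the stage names, prompt templates, and the `query_llm` placeholder below are assumptions made purely for exposition.

```python
# Illustrative sketch only: stage names, prompts, and `query_llm` are
# hypothetical placeholders, not the framework's actual API.
from dataclasses import dataclass


@dataclass
class DesignStage:
    name: str             # e.g. "syntax", "semantics", "pass"
    prompt_template: str  # task handed to the LLM for this stage


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion model."""
    raise NotImplementedError("wire this to your LLM provider of choice")


STAGES = [
    DesignStage("syntax", "Write a BNF grammar for a language with: {spec}"),
    DesignStage("semantics", "Give small-step operational semantics for: {spec}"),
    DesignStage("pass", "Write a constant-folding pass over the AST of: {spec}"),
]


def design_language(spec: str) -> dict[str, str]:
    """Run each decoupled stage independently and collect the LLM outputs."""
    return {s.name: query_llm(s.prompt_template.format(spec=spec)) for s in STAGES}
```

The point of the decomposition is that each stage becomes a self-contained task with a well-defined output, which is also what makes the stages individually benchmarkable.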
