Workshop: Machine Learning for Systems

Improving Large Language Model Hardware Generating Quality through Post-LLM Search

Kaiyan Chang · Haimeng Ren · Mengdi Wang · Shengwen Liang · Yinhe Han · Huawei Li · Xiaowei Li · Ying Wang


As large language models (LLMs) such as ChatGPT exhibit unprecedented machine intelligence, they also show great promise in helping hardware engineers realize higher-efficiency logic designs through natural-language interaction. However, due to the limitations of LLMs, existing LLM-based hardware generation frameworks produce Verilog register-transfer level (RTL) code without considering its performance, power, and area (PPA). To overcome this challenge, we design a post-LLM search approach that merges a design space exploration (DSE) process into the current LLM hardware generation workflow, enabling PPA optimization. First, our framework generates prompts for the LLM, which then produces initial Verilog programs. Second, an output manager corrects and optimizes these programs before collecting them into the final design space, which is constructed as an HDL search tree. Finally, in the post-search stage, our method searches through this space to select the optimal design under the target metrics. The evaluation shows that our approach improves the quality of the generated Verilog and offers a broader design optimization space than prior work and native LLMs alone.
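The post-search stage described above can be sketched as a traversal of a design-space tree of candidate Verilog programs, each annotated with measured PPA, picking the candidate that scores best under a target metric. The sketch below is a minimal illustration only; the node structure, the weighted-cost metric, and all names are assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of a post-LLM search over an HDL design-space tree.
# Leaves hold candidate RTL with evaluated (performance, power, area);
# inner nodes only group alternatives. The weighted cost is illustrative.

@dataclass
class DesignNode:
    verilog: str                       # candidate RTL (or a group label)
    ppa: Optional[tuple] = None        # (perf, power, area); None for inner nodes
    children: list = field(default_factory=list)

def ppa_cost(ppa, weights=(1.0, 1.0, 1.0)):
    """Lower is better: performance is negated so faster designs cost less."""
    perf, power, area = ppa
    w_perf, w_power, w_area = weights
    return -w_perf * perf + w_power * power + w_area * area

def search_best(root, weights=(1.0, 1.0, 1.0)):
    """Depth-first search returning the evaluated design with lowest cost."""
    best = None
    stack = [root]
    while stack:
        node = stack.pop()
        if node.ppa is not None:
            if best is None or ppa_cost(node.ppa, weights) < ppa_cost(best.ppa, weights):
                best = node
        stack.extend(node.children)
    return best

# Tiny example design space: two candidate adders under one root.
root = DesignNode("top")
root.children = [
    DesignNode("ripple_carry_adder", ppa=(1.0, 0.5, 0.3)),
    DesignNode("carry_lookahead_adder", ppa=(2.0, 0.9, 0.6)),
]
print(search_best(root).verilog)  # the faster adder wins under equal weights
```

Changing the weights shifts the selected design toward low power or small area instead of raw performance, which is how different target metrics would steer the same search.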
