Large Language Models Are Human-Level Prompt Engineers
Yongchao Zhou · Andrei Muresanu · Ziwen Han · Silviu Pitis · Harris Chan · Keiran Paster · Jimmy Ba
Event URL: https://openreview.net/forum?id=YdqwNaCLCx

By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired by classical program synthesis and the human approach to prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic instruction generation and selection. In our method, we treat the instruction as the "program," optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. To evaluate the quality of the selected instruction, we evaluate the zero-shot performance of another LLM following the selected instruction. Experiments on 24 NLP tasks show that our automatically generated instructions outperform the prior LLM baseline by a large margin and achieve better or comparable performance to the instructions generated by human annotators on 21/24 tasks. We conduct extensive qualitative and quantitative analyses to explore the performance of APE. We show that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts.
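To make the generate-score-select procedure described in the abstract concrete, the following is a minimal Python sketch under stated assumptions: propose_llm and score_llm are hypothetical placeholders for whatever LLM interface is available (they are not the paper's code), and while the paper scores candidates on a held-out split of examples, this sketch reuses the demonstrations for brevity.

```python
# Minimal sketch of an APE-style generate-then-select loop.
# `propose_llm` and `score_llm` are hypothetical stand-ins for an LLM API.
from typing import Callable, List, Tuple

def ape_select(
    demos: List[Tuple[str, str]],             # (input, output) demonstrations of the task
    propose_llm: Callable[[str], List[str]],  # meta-prompt -> candidate instructions
    score_llm: Callable[[str, str], str],     # (instruction, input) -> model output
    n_candidates: int = 50,
) -> str:
    """Return the candidate instruction with the highest zero-shot execution accuracy."""
    # 1. Proposal: ask an LLM to infer the instruction (the "program")
    #    from input-output demonstrations.
    demo_block = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    meta_prompt = (
        "I gave a friend an instruction. Based on the instruction they produced "
        f"the following input-output pairs:\n\n{demo_block}\n\nThe instruction was:"
    )
    candidates = propose_llm(meta_prompt)[:n_candidates]

    # 2. Scoring: measure how well another LLM performs the task zero-shot
    #    when following each candidate instruction. (The paper uses a held-out
    #    evaluation split here; the demos are reused only to keep this sketch short.)
    def accuracy(instruction: str) -> float:
        correct = sum(score_llm(instruction, x).strip() == y.strip() for x, y in demos)
        return correct / len(demos)

    # 3. Selection: keep the highest-scoring instruction.
    return max(candidates, key=accuracy)
```

The key design point this sketch illustrates is that both stages are LLM calls: one model proposes instruction candidates from demonstrations, and a score function built on another model's zero-shot behavior ranks them, so no gradient access or human prompt-writing is needed.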

Author Information

Yongchao Zhou (University of Toronto)
Andrei Muresanu (Vector Institute)
Ziwen Han (University of Toronto)
Silviu Pitis (University of Toronto)
Harris Chan (University of Toronto, Vector Institute)
Keiran Paster (University of Toronto)
Jimmy Ba (University of Toronto, Vector Institute)
