
Self-Refine: Iterative Refinement with Self-Feedback
Aman Madaan · Niket Tandon · Prakhar Gupta · Skyler Hallinan · Luyu Gao · Sarah Wiegreffe · Uri Alon · Nouha Dziri · Shrimai Prabhumoye · Yiming Yang · Shashank Gupta · Bodhisattwa Prasad Majumder · Katherine Hermann · Sean Welleck · Amir Yazdanbakhsh · Peter Clark

Wed Dec 13 03:00 PM -- 05:00 PM (PST) @ Great Hall & Hall B1+B2 #406
Event URL: https://selfrefine.info/
Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback and refinement. The main idea is to generate an initial output using an LLM; then, the same LLM provides *feedback* on its output and uses it to *refine* itself, iteratively. Self-Refine does not require any supervised training data, additional training, or reinforcement learning, and instead uses a single LLM as the generator, refiner, and feedback provider. We evaluate Self-Refine across 7 diverse tasks, ranging from dialog response generation to mathematical reasoning, using state-of-the-art LLMs (GPT-3.5, ChatGPT, and GPT-4). Across all evaluated tasks, outputs generated with Self-Refine are preferred by humans and automatic metrics over those generated with the same LLM using conventional one-step generation, improving by $\sim$20\% absolute on average in task performance. Our work demonstrates that even state-of-the-art LLMs like GPT-4 can be further improved at test-time using our simple, standalone approach.
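The generate–feedback–refine loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `generate`, `feedback`, and `refine` are hypothetical stand-ins for three prompts issued to the same LLM, and the toy functions below replace real model calls so the loop is runnable.

```python
def self_refine(generate, feedback, refine, task_input, max_iters=4):
    """Iteratively improve an output using model-generated feedback.

    generate, feedback, and refine stand in for three prompts to the
    SAME underlying model; no extra training is involved.
    """
    output = generate(task_input)
    for _ in range(max_iters):
        fb = feedback(task_input, output)
        if fb is None:  # stopping condition: feedback judges the output good
            break
        output = refine(task_input, output, fb)
    return output


# Toy stand-ins for LLM calls: the "task" is to uppercase a string.
def generate(x):
    return x  # naive first attempt

def feedback(x, out):
    return None if out.isupper() else "output is not all uppercase"

def refine(x, out, fb):
    return out.upper()  # apply the feedback

print(self_refine(generate, feedback, refine, "hello"))  # HELLO
```

The key design point is that the stopping condition comes from the model's own feedback rather than from an external verifier, which is what lets the method run without supervision at test time.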

Author Information

Aman Madaan (Carnegie Mellon University)
Niket Tandon (Allen Institute for Artificial Intelligence)
Prakhar Gupta (Google)

Prakhar is a research scientist at Google. Before that, he completed his PhD at the Language Technologies Institute, Carnegie Mellon University. His research interests include natural language generation and dialogue systems. He completed his undergraduate education in Computer Science at the Indian Institute of Technology, Roorkee. Before joining CMU, he was at Adobe Research India. He has been involved in all stages of projects, including inception, prototyping, and deployment in product.

Skyler Hallinan (University of Washington)
Luyu Gao (Carnegie Mellon University)
Sarah Wiegreffe (Allen Institute for AI)
Uri Alon (Carnegie Mellon University)
Nouha Dziri (Allen Institute for AI)

I'm a PhD student at the University of Alberta, where I investigate generative deep learning models and natural language processing methods. In particular, my research focuses on modelling an intelligent agent that can have open-ended conversations indistinguishable from human ones. I am a member of the Alberta Machine Intelligence Institute (Amii), working under the supervision of Prof. Osmar Zaiane.

Shrimai Prabhumoye (NVIDIA)
Yiming Yang (CMU)
Shashank Gupta (Allen Institute for AI (AI2))
Bodhisattwa Prasad Majumder (University of California San Diego)
Katherine Hermann (Google)
Sean Welleck (University of Washington)
Amir Yazdanbakhsh (Google Research)
Peter Clark (Allen Institute for AI)
