DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
Abstract
The ML community is rapidly exploring techniques for prompting language models (LMs), yet existing LM pipelines often rely on hard-coded “prompt templates”, i.e., lengthy strings discovered via trial and error. We introduce DSPy, a programming model that abstracts LM pipelines as imperative computation graphs in which LMs are invoked through declarative modules. DSPy modules are parameterized so that they can learn, by creating and collecting demonstrations, how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that optimizes any DSPy pipeline to maximize a given metric. In two case studies, we show that a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform both standard few-shot prompting and pipelines with expert-created demonstrations.
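To make the programming model concrete, below is a minimal sketch of a declarative module being compiled against a metric. It assumes the public DSPy Python API (dspy.OpenAI, dspy.ChainOfThought, dspy.Module, and the BootstrapFewShot teleprompter); the metric exact_match and the tiny trainset are hypothetical stand-ins for a task-specific metric and training set, not examples from the paper.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure the underlying LM (assumption: an OpenAI GPT-3.5 backend).
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

class CoT(dspy.Module):
    """A one-step pipeline: a declarative chain-of-thought QA module."""
    def __init__(self):
        super().__init__()
        # "question -> answer" is a signature; DSPy renders the prompt itself.
        self.generate_answer = dspy.ChainOfThought("question -> answer")

    def forward(self, question):
        return self.generate_answer(question=question)

# Hypothetical metric: exact match between prediction and gold answer.
def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

# Hypothetical training set; real tasks would use many labeled examples.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

# Compiling bootstraps few-shot demonstrations that maximize the metric.
compiled_cot = BootstrapFewShot(metric=exact_match).compile(CoT(), trainset=trainset)
```

Compiling here does not rewrite the program: the teleprompter runs the module on the training examples, keeps the traces that pass the metric, and installs them as demonstrations in the module's prompt, which is the self-bootstrapping the abstract refers to.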