

NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day

Mark Saroufim · Weiwei Yang · Christian Puhrsch · Luca Antiga · Greg Bowyer · Driss Guessous · Artidoro Pagnoni · Supriya Rao · Joseph Isaacson · Vicki Boykis · Geeta Chauhan · aaron gonzales · Davide Eynard

Room 356
Fri 15 Dec 11:30 a.m. PST — 2:30 p.m. PST


Large Language Models (LLMs) have been pivotal in the recent Cambrian explosion of generative AI applications. However, existing efforts to democratize access to fine-tuning and querying LLMs have been largely limited by the growing hardware costs required to adapt and serve these models. Enabling low-cost, efficient LLM fine-tuning and inference can have a significant impact on industrial and scientific applications. Here, we present a single-GPU fine-tuning and inference competition. Our goal is to accelerate the development of practical software methods that reduce the costs of utilizing LLMs. Furthermore, by advocating for goal-oriented, infrastructure-focused evaluation frameworks that stress reproducibility, we aim to make these methods accessible to the wider public.
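To make the single-GPU constraint concrete, a bit of back-of-the-envelope arithmetic shows why weight precision dominates whether a model fits at all. The 7B-parameter scale and the byte-per-parameter figures below are illustrative assumptions, not values taken from the competition rules; the sketch counts only weight memory and ignores activations, gradients, and optimizer state.

```python
def model_weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given parameter count
    and storage precision (weights only, no activations/optimizer state)."""
    return n_params * bytes_per_param / 2**30

# Hypothetical 7B-parameter model at common precisions.
n = 7e9
print(f"fp16 : {model_weight_gib(n, 2.0):5.1f} GiB")   # ~13.0 GiB
print(f"int8 : {model_weight_gib(n, 1.0):5.1f} GiB")   # ~6.5 GiB
print(f"4-bit: {model_weight_gib(n, 0.5):5.1f} GiB")   # ~3.3 GiB
```

At fp16, the weights alone of a 7B model roughly fill a 16 GiB consumer GPU before any activations or optimizer state are allocated, which is why quantization and parameter-efficient fine-tuning methods are central to competitions of this kind.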
