Carbon Literacy for Generative AI: Visualizing Training Emissions Through Human-Scale Equivalents
Abstract
Training large language models (LLMs) consumes vast amounts of energy and produces substantial carbon emissions, yet this impact remains largely invisible due to limited transparency. We compile reported and estimated training emissions (2018–2024) for 13 state-of-the-art models and reframe them through human-friendly comparisons, such as the number of trees required to absorb them and per-capita footprints, via our interactive demo. Our findings highlight both the alarming scale of emissions and the lack of standardized reporting. We position this work as a contribution to \textbf{Creative Practices}, advancing sustainability and fostering more responsible, transparent use of generative AI (GenAI). Our demo is available at \url{https://neurips-c02-viz.vercel.app/}. \vspace{-0.5em}
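The reframing the abstract describes amounts to dividing a training run's emissions by per-tree and per-person absorption factors. A minimal sketch of that conversion, with illustrative factors assumed here (roughly in line with published equivalency figures) rather than taken from the paper:

```python
# Illustrative conversion of training emissions into human-scale
# equivalents. Both factors are assumptions for illustration,
# not values reported in the paper or the demo.

TONNES_CO2_PER_TREE_10YR = 0.06   # assumed: one tree seedling grown for 10 years
PER_CAPITA_TONNES_PER_YEAR = 4.7  # assumed: rough global-average annual footprint


def tree_equivalents(emissions_tonnes: float) -> float:
    """Trees (each grown for 10 years) needed to absorb the emissions."""
    return emissions_tonnes / TONNES_CO2_PER_TREE_10YR


def per_capita_years(emissions_tonnes: float) -> float:
    """Years of an average person's footprint the emissions correspond to."""
    return emissions_tonnes / PER_CAPITA_TONNES_PER_YEAR


if __name__ == "__main__":
    emissions = 500.0  # hypothetical training run, tonnes CO2e
    print(f"{tree_equivalents(emissions):,.0f} tree-equivalents")
    print(f"{per_capita_years(emissions):.1f} person-years")
```

With different assumed factors only the constants change; the human-scale framing itself is just this pair of ratios.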