From Proposals to Enactment: The Procedural Bottleneck in AI Safety Regulation
Abstract
While AI models advance at unprecedented rates, AI safety legislation remains largely symbolic, stalled, or unrealized. Through a year-by-year analysis of AI breakthroughs, U.S. congressional policy proposals, and international legislative enactments, this study identifies a structural gap: the United States does not lack AI safety bill proposals; it lacks legislative action, with only 4.23\% of U.S. AI bills reaching any terminal outcome. We quantify enactment rates, map U.S. congressional AI bills across thematic domains, identify procedural bottlenecks, and develop a logistic regression model to test which factors predict legislative stalling. This study contributes five key advances: (1) a quantitative comparison of AI legislation versus LLM breakthroughs, (2) a comprehensive taxonomy of proposed and enacted policy subfields, (3) a dataset elucidating the structural causes of AI legislation failure, (4) statistically significant evidence that the number of sponsors negatively affects a bill's progress, and (5) policy recommendations grounded in planned adaptation, preemptive enactment, and independent AI oversight. We demonstrate that without enactment, AI safety regulation remains inert, highlighting the urgent need for actionable, coalition-backed AI safety policies in the United States.
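As an illustration of the logistic specification behind contribution (4), the following is a minimal sketch, assuming a binary stall indicator and a sponsor-count covariate; the additional controls $\mathbf{x}_i$ are placeholders rather than the study's exact covariate set:
\[
\Pr(\mathrm{stall}_i = 1 \mid \mathrm{sponsors}_i, \mathbf{x}_i)
  = \sigma\!\left(\beta_0 + \beta_1\,\mathrm{sponsors}_i + \boldsymbol{\gamma}^{\top}\mathbf{x}_i\right),
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}}.
\]
Under this coding, the reported negative effect of sponsor count on a bill's progress corresponds to an estimated $\beta_1 > 0$ when the outcome is stalling (equivalently, $\beta_1 < 0$ if the outcome is coded as advancement), holding the other covariates fixed.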