Poster in Workshop: Socially Responsible Language Modelling Research (SoLaR)

An International Consortium for AI Risk Evaluations

Ross Gruetzemacher · Alan Chan · Štěpán Los · Kevin Frazier · Simeon Campos · Matija Franklin · José Hernández-Orallo · James Fox · Christin Manning · Philip M Tomei · Kyle Kilian


Abstract:

Given rapid progress in AI and the potential risks posed by next-generation frontier AI systems, the urgent need to create and implement AI governance and regulatory schemes is apparent. A regulatory gap has permitted AI labs to conduct research, development, and deployment with minimal oversight or regulatory guidance. In response, frontier AI evaluations have been proposed as a way of assessing the risks arising from the development and deployment of frontier AI systems. Yet the budding AI risk evaluation ecosystem faces significant coordination challenges, both present and future, such as a limited diversity of evaluators, suboptimal allocation of effort, and races to the bottom. This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators. Such a consortium could play a critical role in international efforts to mitigate societal-scale risks from advanced AI. We discuss the current evaluation ecosystem and its problems, introduce the proposed consortium, review existing organizations that perform similar functions in other domains, and, finally, recommend concrete steps toward establishing the proposed consortium.
