Poster
in
Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

In Pursuit of Regulatable LLMs

Eoin Kenny · Julie A Shah


Abstract:

Large Language Models (LLMs) are arguably the biggest breakthrough in artificial intelligence to date. Recently, they entered the public zeitgeist with a surge of media attention surrounding ChatGPT, a large generative language model released by OpenAI that quickly became the fastest-growing application in history. The model achieved unparalleled human-AI conversational skills, and even passed various variants of the popular Turing test, which measures whether AI systems have achieved general intelligence. Naturally, the world at large wants to utilize these systems for various applications, but to do so in truly sensitive domains, the models must often be regulatable in order to be used legally. In this short paper, we propose one approach towards such systems by forcing them to reason using a combination of (1) human-defined concepts, (2) Case-Based Reasoning (CBR), and (3) counterfactual explanations. All three have support from user testing and psychology showing that they are understandable and useful to practitioners of AI systems. We envision that this approach can provide transparent LLMs for text classification tasks that are fully regulatable and auditable.
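To make the three ingredients concrete, the following is a minimal, hypothetical sketch of how they might compose for text classification: inputs are scored against human-defined concepts, a prediction is retrieved from labelled cases in concept space (CBR via nearest neighbour), and an explanation is produced as the smallest concept change that flips the prediction. The concept names, case base, and greedy counterfactual search below are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

# (1) Human-defined concepts (assumed, for illustration only).
CONCEPTS = ["polite", "urgent", "contains_threat"]

# (2) A tiny case base for CBR: concept-score vectors with known labels.
case_base = np.array([
    [0.9, 0.1, 0.0],   # benign
    [0.8, 0.7, 0.1],   # benign
    [0.1, 0.9, 0.9],   # flagged
    [0.2, 0.8, 0.8],   # flagged
])
case_labels = np.array([0, 0, 1, 1])  # 0 = benign, 1 = flagged

def classify_cbr(concept_scores):
    """Predict by retrieving the nearest labelled case in concept space (1-NN)."""
    dists = np.linalg.norm(case_base - concept_scores, axis=1)
    nearest = int(np.argmin(dists))
    return case_labels[nearest], nearest

def counterfactual(concept_scores, target_label, step=0.1, max_iter=100):
    """(3) Greedily nudge one concept at a time until the CBR prediction flips."""
    cf = concept_scores.copy()
    target_centroid = case_base[case_labels == target_label].mean(axis=0)
    for _ in range(max_iter):
        pred, _ = classify_cbr(cf)
        if pred == target_label:
            return cf
        # Move the concept furthest from the target class centroid one step closer.
        i = int(np.argmax(np.abs(target_centroid - cf)))
        cf[i] += step * np.sign(target_centroid[i] - cf[i])
    return cf

# Concept scores for a new input (assumed to come from an upstream LLM encoder).
x = np.array([0.7, 0.6, 0.2])
label, case_idx = classify_cbr(x)
cf = counterfactual(x, target_label=1 - label)
print("prediction:", label, "| supporting case:", case_idx)
print("counterfactual concept scores:", np.round(cf, 2))
```

Because every prediction is grounded in a retrieved case and every explanation is a concrete concept-level edit, each decision can in principle be audited against the case base, which is the property the abstract targets.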
