

Spotlight in Workshop: Socially Responsible Language Modelling Research (SoLaR)

A Simple Test of Expected Utility Theory with GPT

Mengxin Wang


Abstract:

This paper tests GPT (specifically, GPT-3.5 with the model variant text-davinci-003) on one of the classic behavioral choice experiments, the Allais paradox, to understand the mechanism behind GPT's choices. The Allais paradox is well known for exposing the irrationality of human choices. Our results show that, like humans, GPT falls into the trap of the Allais paradox by violating the independence axiom of expected utility theory, indicating that its choices are irrational. However, GPT violates the independence axiom in the opposite direction from human subjects: whereas humans tend to be more risk-seeking when presented with an opportunity gain, GPT displays more risk aversion. This observation implies that GPT's choices differ structurally from those of humans, which may serve as a caveat for developers who use LLMs to generate human-like data or to assist human decision-making.
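For context, the canonical Allais design compares two pairs of lotteries. The derivation below uses the classic payoffs from Allais (1953), which may differ from the paper's exact prompts; it shows why expected utility theory forces the same choice in both pairs, which is the independence-axiom test the abstract refers to.

```latex
% Classic Allais lotteries (illustrative; the paper's exact payoffs may differ).
\begin{align*}
A_1 &= \$1\mathrm{M} \text{ for sure}, &
B_1 &= \$1\mathrm{M} \text{ w.p. } 0.89,\ \$5\mathrm{M} \text{ w.p. } 0.10,\ \$0 \text{ w.p. } 0.01,\\
A_2 &= \$1\mathrm{M} \text{ w.p. } 0.11,\ \$0 \text{ w.p. } 0.89, &
B_2 &= \$5\mathrm{M} \text{ w.p. } 0.10,\ \$0 \text{ w.p. } 0.90.
\end{align*}
% With Bernoulli utility u (payoffs in $M), A_1 \succ B_1 means
% u(1) > 0.89 u(1) + 0.10 u(5) + 0.01 u(0); subtracting 0.89 u(1) gives
% 0.11 u(1) > 0.10 u(5) + 0.01 u(0). Simplifying A_2 \succ B_2 by
% subtracting the common 0.89 u(0) term yields the identical inequality, so:
\begin{equation*}
A_1 \succ B_1 \iff 0.11\,u(1) > 0.10\,u(5) + 0.01\,u(0) \iff A_2 \succ B_2.
\end{equation*}
```

Human subjects commonly report preferring A_1 over B_1 together with B_2 over A_2, which breaks this equivalence; the "opposite direction" reported in the abstract corresponds to the reverse violating pattern. Below is a minimal sketch of how such a test could be posed to the model, assuming the legacy (pre-1.0) openai Python client that exposed text-davinci-003 through the Completions endpoint; the prompt wording is illustrative, not taken from the paper.

```python
# Minimal sketch: pose the two Allais pairs to text-davinci-003 and check
# whether the choices satisfy the independence axiom. Assumes the legacy
# (pre-1.0) openai client; prompt wording and payoffs are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PAIRS = {
    "pair1": (
        "Choose one gamble. Answer with a single letter, A or B.\n"
        "A: $1,000,000 for sure.\n"
        "B: $1,000,000 with 89% chance, $5,000,000 with 10% chance, "
        "$0 with 1% chance.\n"
        "Answer:"
    ),
    "pair2": (
        "Choose one gamble. Answer with a single letter, A or B.\n"
        "A: $1,000,000 with 11% chance, $0 with 89% chance.\n"
        "B: $5,000,000 with 10% chance, $0 with 90% chance.\n"
        "Answer:"
    ),
}

def ask(prompt: str) -> str:
    """Return the model's stated choice ('A' or 'B') for one lottery pair."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=1,
        temperature=0,  # greedy decoding for a reproducible single-shot answer
    )
    return resp["choices"][0]["text"].strip().upper()

choices = {name: ask(prompt) for name, prompt in PAIRS.items()}
print(choices)

# Independence requires the same letter in both pairs (A with A, B with B);
# mismatched letters indicate an Allais-style violation.
if choices["pair1"] != choices["pair2"]:
    print("Independence axiom violated.")
```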
