Poster

Can Large Language Model Agents Simulate Human Trust Behavior?

Chengxing Xie · Canyu Chen · Feiran Jia · Ziyu Ye · Shiyang Lai · Kai Shu · Jindong Gu · Adel Bibi · Ziniu Hu · David Jurgens · James Evans · Philip Torr · Bernard Ghanem · Guohao Li

East Exhibit Hall A-C #4204
[ Project Page ]
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one elemental behavior in human interactions, trust, and aim to investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents can have high behavioral alignment with humans regarding trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe into the biases in agent trust and the differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including advanced reasoning strategies and external manipulations. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans. We further illustrate the broad implications of our discoveries for applications where trust is paramount. The code is available here.
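The Trust Game referenced in the abstract follows the standard behavioral-economics protocol: a trustor receives an endowment and chooses how much to send, the sent amount is multiplied (typically tripled) before reaching the trustee, and the trustee then decides how much to return. As a rough illustration only (not the authors' actual experimental setup or prompts), the sketch below wires that payoff structure around a placeholder decision function, ask_agent, which would be replaced by a real LLM query in an agent-based simulation.

# Minimal sketch of a one-shot Trust Game between two agents.
# Hypothetical illustration: payoffs follow the standard Trust Game
# (the amount sent by the trustor is tripled before the trustee decides
# how much to return); `ask_agent` is a stand-in for an LLM call and
# prompt design, which are not specified in this abstract.

ENDOWMENT = 10   # trustor's initial money units
MULTIPLIER = 3   # factor applied to the amount sent

def ask_agent(role: str, context: str) -> float:
    """Placeholder for an LLM decision (e.g., a chat-completion call).

    Returns a fraction in [0, 1]: the share the agent chooses to send
    (trustor) or return (trustee). Replace with a real model query.
    """
    return 0.5  # fixed heuristic stand-in

def trust_game() -> tuple[float, float]:
    """Play one round and return (trustor_payoff, trustee_payoff)."""
    send_frac = ask_agent("trustor", f"You have {ENDOWMENT} units...")
    sent = send_frac * ENDOWMENT
    received = sent * MULTIPLIER

    return_frac = ask_agent("trustee", f"You received {received} units...")
    returned = return_frac * received

    trustor_payoff = ENDOWMENT - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

if __name__ == "__main__":
    print(trust_game())  # e.g. (12.5, 7.5) with the 0.5 stand-in

In this framing, the fraction sent by the trustor operationalizes trust, and the fraction returned by the trustee operationalizes trustworthiness, which is how behavioral alignment between LLM agents and human participants can be compared.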
