LLM-based Agents in Supply Chain Games: The Role of Incomplete Information and Model Heterogeneity
Abstract
In the intricate landscape of global supply chains, effective collaboration is paramount for mitigating market volatility, yet complete information sharing among partners is often infeasible. This study explores the dynamics of cooperative games in supply chains by leveraging a suite of Large Language Models (LLMs) as intelligent agents under incomplete information. We design experiments that restrict information sharing to subsets of collaborating enterprises, thereby creating a more realistic simulation of business environments. A key, and counterintuitive, finding is that partial information sharing can yield systemic benefits comparable to those achieved with full transparency. This carries significant practical implications, suggesting that limited, selective information disclosure can be a potent strategy for enhancing overall system efficiency. Furthermore, a comparative analysis of different LLMs, specifically DeepSeek, Qwen, and Llama, reveals distinct decision-making propensities and differing levels of stability. Statistical tests confirm a stability hierarchy, with DeepSeek performing most consistently, followed by Qwen and then Llama, a finding consistent with the intuitive expectation that model capability increases with scale. Finally, through ensemble experiments in a Llama-based environment, we demonstrate that incorporating a higher-capability model can improve both stability and overall performance, though this effect is subject to an upper bound. Collectively, our work contributes a novel methodology to the modeling of artificial societies, demonstrating the capability of LLM-based agent simulations for exploring complex socio-economic dynamics.
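To make the experimental setup concrete, the sketch below illustrates one way the partial information-sharing condition could be encoded: each supply chain role queries an LLM-backed policy with only the states of the partners it is permitted to observe. This is a minimal illustration under stated assumptions, not the paper's implementation; the four-tier beer-game-style chain, the INFO_SUBSET map, and the llm_order_decision stub (here a naive base-stock rule so the code runs without a model backend) are all hypothetical.

```python
# Minimal sketch of partial information sharing among LLM supply chain agents.
# Assumptions: four-tier chain, no lead times, stubbed LLM policy (hypothetical).
import random

ROLES = ["retailer", "wholesaler", "distributor", "factory"]

# Partial information sharing: each role only sees the states of the partners
# listed here (an illustrative configuration, not the paper's actual subsets).
INFO_SUBSET = {
    "retailer": ["wholesaler"],
    "wholesaler": ["retailer"],
    "distributor": ["factory"],
    "factory": ["distributor"],
}

def llm_order_decision(role, own_state, visible_states):
    """Stand-in for an LLM agent call: the prompt would contain only the
    states this role is allowed to observe. Here a simple base-stock rule
    keeps the sketch runnable without any model backend."""
    target = 20
    return max(0, target - own_state["inventory"] + own_state["backlog"])

def step(states, demand):
    """One simulation period: collect orders, then propagate shipments."""
    orders = {}
    for role in ROLES:
        visible = {r: states[r] for r in INFO_SUBSET[role]}
        orders[role] = llm_order_decision(role, states[role], visible)
    # Propagate demand from the retailer upstream (simplified dynamics).
    incoming = demand
    for role in ROLES:
        s = states[role]
        shipped = min(s["inventory"], incoming + s["backlog"])
        s["backlog"] = s["backlog"] + incoming - shipped
        s["inventory"] += orders[role] - shipped
        incoming = orders[role]
    return orders

if __name__ == "__main__":
    random.seed(0)
    states = {r: {"inventory": 20, "backlog": 0} for r in ROLES}
    for week in range(8):
        demand = random.randint(4, 12)
        orders = step(states, demand)
        print(week, demand, orders)
```

Swapping the stub for calls to DeepSeek, Qwen, or Llama, and varying INFO_SUBSET between full and partial visibility, would reproduce the structure of the comparisons described in the abstract, with per-role cost or bullwhip metrics recorded for the statistical tests.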