Misalignment of LLM-Generated Personas with Human Perceptions in Low-Resource Settings
Tabia Tanzin Prama · Peter Dodds · Christopher Danforth
Abstract
Recent advancements enable Large Language Models (LLMs) to generate AI personas, yet their lack of deep contextual, cultural, and emotional understanding poses a significant limitation. This study quantitatively compared human responses with eight LLM-generated social personas (e.g., Male, Female, Muslim, Political Supporter) in a low-resource setting, Bangladesh, using culturally specific questions. Results show that responses from real human personas significantly outperformed all LLM-generated personas in question answering and across all metrics of persona perception, with particularly large gaps in empathy and credibility. Furthermore, LLM-generated content exhibited a systematic "Pollyanna Principle" bias, scoring measurably higher in positive sentiment ($\Phi_{avg} = 5.99$ for LLMs vs. $5.60$ for Humans). These findings suggest that LLM personas do not accurately reflect the authentic experiences of real people in resource-scarce environments, and that it is essential to validate LLM personas against real-world human data to ensure their alignment and reliability before deploying them in social science research.
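For readers unfamiliar with the kind of lexicon-based sentiment averaging behind the $\Phi_{avg}$ comparison above, the following is a minimal sketch: score each word against a word-happiness lexicon and average the matches for each response set. The lexicon values, texts, and function names here are hypothetical placeholders for illustration only, not the study's instrument or data.

```python
# Minimal sketch of lexicon-based average sentiment scoring.
# All lexicon values and example texts are hypothetical, not the study's data.

from typing import Dict, List


def average_sentiment(texts: List[str], lexicon: Dict[str, float]) -> float:
    """Mean sentiment score over all lexicon-matched words in a set of texts."""
    scores = [lexicon[w] for text in texts for w in text.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else float("nan")


# Hypothetical word-happiness lexicon on a 1-9 scale (labMT-style).
toy_lexicon = {"happy": 8.3, "good": 7.5, "neutral": 5.0, "sad": 2.4, "hard": 3.9}

llm_texts = ["life is good and people are happy"]     # placeholder LLM persona output
human_texts = ["life is hard but people stay happy"]  # placeholder human response

print(average_sentiment(llm_texts, toy_lexicon))    # higher mean -> more positive tone
print(average_sentiment(human_texts, toy_lexicon))
```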