PersonaX: Multimodal Datasets with LLM-Inferred Behavior Traits
Longkang Li · Wong Kang · Minghao Fu · Guangyi Chen · Zhenhao Chen · Gongxu Luo · Yuewen Sun · Salman Khan · Peter Spirtes · Kun Zhang
Abstract
Understanding human behavior traits is central to applications in human-computer interaction, computational social science, and personalized AI systems. Such understanding often requires integrating multiple modalities to capture nuanced patterns and relationships. However, existing resources rarely provide datasets that combine behavioral descriptors with complementary modalities such as facial attributes and biographical information. To address this gap, we present \texttt{Persona}$\mathbb{X}$, a curated collection of multimodal datasets designed to enable comprehensive analysis of public human traits across modalities. \texttt{Persona}$\mathbb{X}$ consists of (1) \texttt{CelebPersona}, featuring 9,444 public figures from diverse occupations, and (2) \texttt{AthlePersona}, covering 4,181 professional athletes across 7 major sports leagues. Each dataset includes behavioral trait assessments inferred by three high-performing large language models (LLMs), alongside facial imagery and structured biographical features. We analyze \texttt{Persona}$\mathbb{X}$ at two complementary levels. First, we abstract high-level trait scores from text descriptions and apply five statistical independence tests to examine their relationships with other modalities. Second, we introduce a novel causal representation learning (CRL) framework tailored to multimodal and multi-measurement data, providing theoretical identifiability guarantees. Experiments on both synthetic and real-world data demonstrate the effectiveness of our approach. By unifying structured and unstructured analysis, \texttt{Persona}$\mathbb{X}$ establishes a foundation for studying LLM-inferred behavioral traits in conjunction with visual and biographical attributes, advancing multimodal trait analysis and causal reasoning.
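As a rough illustration of the first analysis level only (not the paper's actual pipeline), the sketch below tests whether an LLM-inferred trait score is statistically independent of biographical features. The file name \texttt{athle\_persona.csv} and all column names are hypothetical placeholders, and Spearman rank correlation and Kruskal-Wallis stand in for whichever five independence tests the paper actually employs.

```python
# Minimal sketch, assuming a tabular export of AthlePersona with one
# LLM-inferred trait score per athlete plus biographical features.
# File name and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr, kruskal

df = pd.read_csv("athle_persona.csv")  # hypothetical dataset export

# Continuous vs. continuous: rank correlation between a trait score and age.
rho, p_cont = spearmanr(df["conscientiousness_score"], df["age"])
print(f"Spearman rho = {rho:.3f}, p = {p_cont:.3g}")

# Continuous vs. categorical: Kruskal-Wallis test of whether the trait
# score distribution differs across sports leagues.
groups = [g["conscientiousness_score"].to_numpy()
          for _, g in df.groupby("league")]
stat, p_cat = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.3f}, p = {p_cat:.3g}")
```

A small p-value in either test would be evidence against independence between the trait score and that biographical feature; the paper's framework goes further by combining several such tests and, at the second level, learning causal representations across modalities.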