How Do LLMs Ask Questions? A Pragmatic Comparison with Human Question-Asking
Chani Jung · Jimin Mun · Xuhui Zhou · Alice Oh · Maarten Sap · Hyunwoo Kim
Abstract
Question asking is a fundamental linguistic and cognitive skill that underpins collaboration and a wide range of social actions. Although large language models (LLMs) are known to under-use questions, often leading to misunderstandings or less productive interactions, little is known about how their question-asking behavior differs from that of humans in social contexts. To bridge this gap, we compare the distribution and characteristics of LLM-generated questions with those produced by humans in a real-world social environment, Reddit. Using a pragmatics-based taxonomy of social actions, we analyze six open- and closed-source model families. Our findings reveal that LLMs concentrate on a narrower range of question types, exhibiting significant distributional differences from human behavior. Prompting often introduces additional biases that diverge from human patterns, while the effects of alignment tuning vary across models and are inconsistent across different social actions. These results underscore the need for more fine-grained strategies to guide LLMs’ question-asking behavior, ultimately enhancing their communicative effectiveness in real-world social interactions.
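As a rough illustration of the kind of distributional comparison the abstract describes, the sketch below contrasts hypothetical human and LLM question-type distributions over a small set of placeholder social-action categories, using Jensen-Shannon distance and a chi-square goodness-of-fit test. The category labels and counts are invented for illustration; they are not the paper's taxonomy or data.

```python
# Hypothetical sketch: comparing question-type distributions between
# human- and LLM-generated questions. Labels and counts below are
# illustrative placeholders, not the paper's actual taxonomy or data.
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import chisquare

# Placeholder social-action categories.
CATEGORIES = ["request_info", "clarify", "advise", "challenge", "empathize"]

# Invented label samples: the "LLM" sample is deliberately concentrated
# on fewer categories to mimic the narrower range reported in the paper.
human_labels = (["request_info"] * 40 + ["clarify"] * 25 + ["advise"] * 15
                + ["challenge"] * 12 + ["empathize"] * 8)
llm_labels = (["request_info"] * 70 + ["clarify"] * 20 + ["advise"] * 6
              + ["challenge"] * 3 + ["empathize"] * 1)

def to_distribution(labels, categories):
    """Turn a list of category labels into a probability vector."""
    counts = Counter(labels)
    total = sum(counts[c] for c in categories)
    return np.array([counts[c] / total for c in categories])

p_human = to_distribution(human_labels, CATEGORIES)
p_llm = to_distribution(llm_labels, CATEGORIES)

# Jensen-Shannon distance (sqrt of JS divergence, base 2): 0 = identical.
jsd = jensenshannon(p_human, p_llm, base=2)

# Chi-square goodness-of-fit: do LLM counts deviate from the
# proportions expected under the human distribution?
llm_counts = np.array([Counter(llm_labels)[c] for c in CATEGORIES])
expected = p_human * llm_counts.sum()  # same total as observed counts
stat, pval = chisquare(llm_counts, f_exp=expected)

print(f"JS distance: {jsd:.3f}, chi-square p-value: {pval:.2e}")
```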