Recent advances in large language models (LLMs) have transformed the field of natural language processing (NLP). From GPT-3 to PaLM, each new model has pushed the state of the art on natural language tasks. Alongside these natural language abilities, there has been significant interest in using reasoning benchmarks to understand whether such models exhibit reasoning capabilities. However, even though the reported results seem positive, these benchmarks are simplistic, and performance on them cannot be taken as evidence for the often outlandish claims made about LLMs' reasoning capabilities. Further, they cover only a limited set of simple reasoning tasks; measuring the true limits of LLM-based systems requires more sophisticated reasoning problems. Motivated by this, we propose an extensible assessment framework to test the capabilities of LLMs on reasoning about actions and change, a central aspect of human intelligence. We provide multiple test cases that are more involved than previously established benchmarks, each evaluating a different aspect of reasoning about actions and change. Results on GPT-3 (davinci), Instruct-GPT3 (text-davinci-002) and BLOOM (176B) show subpar performance on such reasoning tasks.
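To make the evaluation setup concrete, the following is a minimal sketch (not the authors' released code) of how such an assessment harness could be structured: each test case pairs a natural-language description of an initial state, an action sequence, and a question with a gold answer, and the model is scored by exact-match accuracy over its free-form responses. The names TestCase, query_model, and evaluate are illustrative placeholders, not APIs from the paper.

```python
# Sketch of an actions-and-change test harness (illustrative only).
from dataclasses import dataclass


@dataclass
class TestCase:
    prompt: str       # initial state + action sequence + question, in natural language
    gold_answer: str  # expected answer, e.g. "yes" / "no"


def query_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion endpoint.

    Plug in a call to GPT-3 (davinci), Instruct-GPT3, or BLOOM here.
    """
    raise NotImplementedError


def evaluate(cases: list[TestCase]) -> float:
    """Exact-match accuracy of the model's answers over a set of test cases."""
    correct = 0
    for case in cases:
        answer = query_model(case.prompt).strip().lower()
        correct += int(answer == case.gold_answer.strip().lower())
    return correct / len(cases)


# Example of one (hypothetical) test case probing the effect of an action:
example = TestCase(
    prompt=(
        "Block A is on the table and block B is on block A. "
        "You pick up block B and put it on the table. "
        "Is block A clear? Answer yes or no."
    ),
    gold_answer="yes",
)
```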
Author Information
Karthik Valmeekam (Arizona State University)
Alberto Olmo (National Renewable Energy Laboratory)
Sarath Sreedharan (Colorado State University)
Subbarao Kambhampati (Arizona State University)
More from the Same Authors
- 2021 Spotlight: Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation »
  Lin Guan · Mudit Verma · Sihang Guo · Ruohan Zhang · Subbarao Kambhampati
- 2022: Physics-Driven Convolutional Autoencoder Approach for CFD Data Compressions »
  Alberto Olmo · Ahmed Zamzam · Andrew Glaws · Ryan King
- 2022: Revisiting Value Alignment Through the Lens of Human-Aware AI »
  Sarath Sreedharan · Subbarao Kambhampati
- 2022: Towards customizable reinforcement learning agents: Enabling preference specification through online vocabulary expansion »
  Utkarsh Soni · Sarath Sreedharan · Mudit Verma · Lin Guan · Matthew Marquez · Subbarao Kambhampati
- 2022: Advice Conformance Verification by Reinforcement Learning agents for Human-in-the-Loop »
  Mudit Verma · Ayush Kharkwal · Subbarao Kambhampati
- 2022: Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences »
  Lin Guan · Karthik Valmeekam · Subbarao Kambhampati
- 2021 Poster: Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation »
  Lin Guan · Mudit Verma · Sihang Guo · Ruohan Zhang · Subbarao Kambhampati
- 2020: Panel #2 »
  Oren Etzioni · Heng Ji · Subbarao Kambhampati · Victoria Lin · Jiajun Wu
- 2013 Poster: Synthesizing Robust Plans under Incomplete Domain Models »
  Tuan A Nguyen · Subbarao Kambhampati · Minh Do
- 2012 Poster: Action-Model Based Multi-agent Plan Recognition »
  Hankz Hankui Zhuo · Qiang Yang · Subbarao Kambhampati