Event URL: https://openreview.net/forum?id=tOk8eiMv5Lx
We use the Rational Speech Act (RSA) framework to examine AI explanations as a pragmatic inference process between an explainer (speaker) and a user (listener). This view reveals fatal flaws in how we currently train and deploy AI explainers. To evolve from level-0 (literal) explanations to level-1 (pragmatic) explanations, we present two proposals for data collection and training: learning from L1 listener feedback, and learning from S1 speaker supervision.
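The abstract's L0/S1/L1 terminology follows the standard RSA recursion: a literal listener L0 interprets utterances by truth conditions, a pragmatic speaker S1 picks utterances to be informative for L0, and a pragmatic listener L1 inverts S1 with Bayes' rule. A minimal sketch of that recursion on the classic "some"/"all" scalar-implicature example (illustrative only; the lexicon, priors, and variable names are assumptions, not the talk's code):

```python
import numpy as np

# Rows: utterances u; columns: meanings m = [some-not-all, all].
# lexicon[u, m] = 1 if utterance u is literally true of meaning m.
lexicon = np.array([
    [1.0, 1.0],   # "some" is true of both meanings
    [0.0, 1.0],   # "all"  is true only of the all-meaning
])
prior = np.array([0.5, 0.5])  # uniform prior over meanings
alpha = 1.0                   # speaker rationality

def normalize(x, axis):
    return x / x.sum(axis=axis, keepdims=True)

# L0: literal listener, P_L0(m | u) from truth conditions and the prior.
L0 = normalize(lexicon * prior, axis=1)

# S1: pragmatic speaker, P_S1(u | m) proportional to L0(m | u)^alpha.
S1 = normalize(L0.T ** alpha, axis=1)

# L1: pragmatic listener, P_L1(m | u) by Bayesian inversion of S1.
L1 = normalize(S1.T * prior, axis=1)

# L1 strengthens "some" toward some-not-all (0.75 vs L0's literal 0.5).
print(L0[0, 0], L1[0, 0])  # → 0.5 0.75
```

Under this reading, a level-0 explainer reports literally true content, while a level-1 explainer (the paper's target) models how its listener will interpret the explanation.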
Author Information
Shi Feng (University of Chicago)
Chenhao Tan (University of Chicago)