Poster in Workshop: Trustworthy and Socially Responsible Machine Learning

On Causal Rationalization

Wenbo Zhang · Tong Wu · Yunlong Wang · Yong Cai · Hengrui Cai


Abstract:

With recent advances in natural language processing, rationalization has become an essential self-explaining paradigm for disentangling the black box by selecting a subset of the input text that accounts for the major variation in the prediction. However, existing association-based approaches to rationalization cannot identify true rationales when two or more candidate rationales are highly intercorrelated and therefore contribute similarly to prediction accuracy, a problem known as spuriousness. To address this limitation, we leverage two causal desiderata, non-spuriousness and efficiency, bringing them into rationalization from a causal inference perspective. We formally define the probability of causation in the rationalization model and establish its identification as the main component for learning necessary and sufficient rationales. Extensive experiments on real-world review and medical datasets demonstrate the superior performance of our causal rationalization compared to state-of-the-art methods.
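As a minimal sketch of the underlying notion (not spelled out in the abstract itself), the probability of causation for a rationale is commonly formalized via Pearl's probability of necessity and sufficiency. Writing Z for an illustrative binary indicator that a candidate rationale is selected and Y for the binary prediction outcome, with potential outcomes Y_{Z=1} and Y_{Z=0}, the quantity is

PNS = P(Y_{Z=1} = 1, Y_{Z=0} = 0),

which is large only when selecting the rationale is sufficient for the prediction to hold and removing it is enough to make the prediction fail. The symbols Z and Y here are illustrative assumptions and are not taken from the poster.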
