
Spotlight Poster
Learning to Receive Help: Intervention-Aware Concept Embedding Models
Mateo Espinosa Zarlenga · Katie Collins · Krishnamurthy Dvijotham · Adrian Weller · Zohreh Shams · Mateja Jamnik

Tue Dec 12 03:15 PM -- 05:15 PM (PST) @ Great Hall & Hall B1+B2 #1505
Event URL: https://github.com/mateoespinosa/cem

Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by constructing and explaining their predictions using a set of high-level concepts. A special property of these models is that they permit concept interventions, wherein users can correct mispredicted concepts and thus improve the model's performance. Recent work, however, has shown that intervention efficacy can be highly dependent on the order in which concepts are intervened on, as well as on the model's architecture and training hyperparameters. We argue that this is rooted in a CBM's lack of train-time incentives for the model to be appropriately receptive to concept interventions. To address this, we propose Intervention-aware Concept Embedding Models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions. Our model learns a concept intervention policy in an end-to-end fashion, from which it can sample meaningful intervention trajectories at train-time. This conditions IntCEMs to effectively select and receive concept interventions when deployed at test-time. Our experiments show that IntCEMs significantly outperform state-of-the-art concept-interpretable models when provided with test-time concept interventions, demonstrating the effectiveness of our approach.
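The concept-intervention mechanism described above can be sketched as follows. This is a minimal illustration of the general CBM intervention idea, not the IntCEM implementation; the `intervene` helper, the toy probability values, and the masking convention are all assumptions for exposition:

```python
import numpy as np

def intervene(concept_probs, true_concepts, mask):
    """Replace predicted concept probabilities with ground-truth values
    for the concepts the user chose to intervene on (mask entry = 1)."""
    mask = np.asarray(mask, dtype=bool)
    # Where the mask is set, take the user-provided label; otherwise
    # keep the concept encoder's original prediction.
    return np.where(mask, true_concepts, concept_probs)

# Concept probabilities predicted by the concept encoder for one sample.
concept_probs = np.array([0.9, 0.2, 0.6])
# Ground-truth concept labels a user could supply.
true_concepts = np.array([1.0, 1.0, 0.0])
# The user corrects only the second (mispredicted) concept.
corrected = intervene(concept_probs, true_concepts, [0, 1, 0])
# corrected -> [0.9, 1.0, 0.6]; the label predictor then runs on this vector.
```

The paper's contribution is to make the model *expect* such corrections: during training, IntCEMs sample intervention trajectories from a learned policy (rather than applying interventions only at test time, as in a standard CBM), so that which concepts to intervene on, and in what order, is itself optimized.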

Author Information

Mateo Espinosa Zarlenga (University of Cambridge)
Katie Collins (University of Cambridge)
Krishnamurthy Dvijotham (DeepMind)

Krishnamurthy Dvijotham is a research scientist at Google DeepMind. Until recently, he was a research engineer at Pacific Northwest National Laboratory (PNNL) in the optimization and control group. He was previously a postdoctoral fellow at the Center for Mathematics of Information at Caltech. He received his PhD in computer science and engineering from the University of Washington, Seattle in 2014 and a bachelor's degree from IIT Bombay in 2008. His research interests span stochastic control theory, artificial intelligence, machine learning, and markets/economics, and his work is motivated primarily by problems arising in large-scale infrastructure systems like the power grid. His research has won awards at several conferences in optimization, AI, and machine learning.

Adrian Weller (Cambridge, Alan Turing Institute)

Adrian Weller MBE is a Director of Research in Machine Learning at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. He is a Turing AI Fellow in Trustworthy Machine Learning, and heads Safe and Ethical AI at The Alan Turing Institute, the UK national institute for data science and AI. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards and previously held senior roles in finance.

Zohreh Shams (Babylon Health, University of Cambridge)
Mateja Jamnik (University of Cambridge)
