Poster
Test-time Collective Prediction
Celestine Mendler-Dünner · Wenshuo Guo · Stephen Bates · Michael Jordan

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points. Agents wish to benefit from the collective expertise of the full set of agents to make better predictions than they would individually, but may not be willing to release their labeled data or model parameters. In this work, we explore a decentralized mechanism for making collective predictions at test time, inspired by the social-science literature on human consensus-making. Building on a query model to facilitate information exchange among agents, our approach leverages each agent's pre-trained model without relying on external validation, model retraining, or data pooling. A theoretical analysis shows that our approach recovers inverse mean-squared-error (MSE) weighting in the large-sample limit, which is known to be the optimal way to combine independent, unbiased estimators. Empirically, we demonstrate that our scheme effectively combines models of differing quality across the input space: the proposed consensus prediction achieves significant gains over classical model averaging, and even outperforms weighted averaging schemes that have access to additional validation data. Finally, we propose a decentralized Jackknife procedure as a tool to evaluate the sensitivity of the collective prediction to a single agent's opinion.
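The inverse-MSE weighting that the analysis recovers is the classical rule for combining independent, unbiased estimators: weight each estimate in proportion to the inverse of its variance. The sketch below illustrates only that baseline rule (not the paper's decentralized query mechanism), with the noise levels `sigmas` chosen as an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# True quantity each agent estimates, and per-agent noise levels
# (hypothetical values chosen to represent heterogeneous model quality)
theta = 2.0
sigmas = np.array([0.5, 1.0, 2.0])

# Each agent's prediction: unbiased, but with its own variance sigma_i^2
preds = theta + sigmas * rng.standard_normal(3)

# Inverse-variance weights, normalized to sum to one
weights = (1.0 / sigmas**2) / np.sum(1.0 / sigmas**2)
consensus = float(np.dot(weights, preds))

# Variance of the combined estimator is 1 / sum_i (1 / sigma_i^2),
# which is never worse than the variance of the single best agent.
var_combined = 1.0 / np.sum(1.0 / sigmas**2)
```

Under this weighting, the most accurate agent dominates the consensus, yet every agent's estimate still reduces the combined variance below that of the best individual.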

Author Information

Celestine Mendler-Dünner (Max Planck Institute for Intelligent Systems)
Wenshuo Guo (UC Berkeley)
Stephen Bates (UC Berkeley)
Michael Jordan (UC Berkeley)
