Poster

Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data

Boris van Breugel · Nabeel Seedat · Fergus Imrie · Mihaela van der Schaar

Great Hall & Hall B1+B2 (level 1) #907
Thu 14 Dec 3 p.m. PST — 5 p.m. PST

Abstract:

Evaluating the performance of machine learning models on diverse and underrepresented subgroups is essential for ensuring fairness and reliability in real-world applications. However, accurately assessing model performance becomes challenging due to two main issues: (1) a scarcity of test data, especially for small subgroups, and (2) possible distributional shifts in the model's deployment setting, which may not align with the available test data. In this work, we introduce 3S Testing, a deep generative modeling framework that facilitates model evaluation by generating synthetic test sets for small subgroups and simulating distributional shifts. Our experiments demonstrate that 3S Testing outperforms traditional baselines---including real test data alone---in estimating model performance on minority subgroups and under plausible distributional shifts. In addition, 3S Testing offers intervals around its performance estimates, exhibiting superior coverage of the ground truth compared to existing approaches. Overall, these results raise the question of whether we need a paradigm shift away from limited real test data towards synthetic test data.
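To make the core idea concrete, here is a minimal sketch of evaluating a fixed model on a scarce minority subgroup using synthetic test data. This is not the authors' 3S Testing implementation: the deep generative model is replaced by a crude per-feature Gaussian fit, and the data, labels, and "deployed" model are all invented for illustration.

```python
# Hypothetical illustration: estimating subgroup performance from a tiny
# real test set vs. a larger synthetic test set. The per-feature Gaussian
# below is a stand-in for the deep generative model used in 3S Testing.
import numpy as np

rng = np.random.default_rng(0)

# A tiny real test set for a minority subgroup: two features, binary label
# defined by whether the feature sum is positive (an assumed ground truth).
X_min_test = rng.normal(0.5, 1.0, size=(10, 2))
y_min_test = (X_min_test.sum(axis=1) > 0).astype(int)

def model(X):
    # A fixed "deployed" model under evaluation: predicts 1 when the
    # first feature is positive (deliberately imperfect).
    return (X[:, 0] > 0).astype(int)

# Naive performance estimate from the 10 real samples alone.
acc_real = (model(X_min_test) == y_min_test).mean()

# Synthetic minority test data: fit per-feature Gaussians to the real
# subgroup, sample many points, and label them with the same ground-truth
# rule. A real system would use a learned generative model instead.
mu, sigma = X_min_test.mean(axis=0), X_min_test.std(axis=0)
X_syn = rng.normal(mu, sigma, size=(5000, 2))
y_syn = (X_syn.sum(axis=1) > 0).astype(int)
acc_syn = (model(X_syn) == y_syn).mean()

print(f"estimate from 10 real samples:     {acc_real:.2f}")
print(f"estimate from 5000 synthetic ones: {acc_syn:.2f}")
```

With only 10 real points the accuracy estimate can only take coarse values (multiples of 0.1), while the synthetic set yields a finer-grained estimate; the paper's contribution is making such synthetic estimates faithful via deep generative modeling, which this toy Gaussian does not attempt.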
