Poster

Weak Supervision Performance Evaluation via Partial Identification

Felipe Maia Polo · Subha Maity · Mikhail Yurochkin · Moulinath Banerjee · Yuekai Sun

East Exhibit Hall A-C #4505
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Programmatic Weak Supervision (PWS) enables supervised model training without direct access to ground truth labels, utilizing weak labels from heuristics, crowdsourcing, or pre-trained models. However, the absence of ground truth complicates model evaluation, as traditional metrics such as accuracy, precision, and recall cannot be directly calculated. In this work, we present a novel method to address this challenge by framing model evaluation as a partial identification problem and estimating performance bounds using Fréchet bounds. Our approach derives reliable bounds on key metrics without requiring labeled data, overcoming core limitations in current weak supervision evaluation techniques. Through scalable convex optimization, we obtain accurate and computationally efficient bounds for metrics including accuracy, precision, recall, and F1-score, even in high-dimensional settings. This framework offers a robust approach to assessing model quality without ground truth labels, enhancing the practicality of weakly supervised learning for real-world applications.
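To make the partial-identification framing concrete, here is a toy sketch of the simplest Fréchet bound for binary accuracy. This is not the paper's estimator (which solves a scalable convex program and can use richer information from weak labels); it only illustrates the core idea that marginal distributions constrain, but do not identify, a joint quantity such as accuracy. The function name and the marginal-only setup are illustrative assumptions.

```python
def frechet_accuracy_bounds(p_pred, p_label):
    """Toy Fréchet bounds on binary accuracy from marginals alone.

    p_pred : P(yhat = 1), estimated from the model's predictions.
    p_label: P(y = 1), e.g. estimated from weak labels.

    Accuracy = P(yhat=1, y=1) + P(yhat=0, y=0). The joint distribution
    is not identified by the marginals, but the Fréchet inequalities
    max(0, a + b - 1) <= P(A and B) <= min(a, b) bound the joint cell,
    and hence bound accuracy.
    """
    # Fréchet bounds on the joint cell P(yhat=1, y=1).
    joint_lo = max(0.0, p_pred + p_label - 1.0)
    joint_hi = min(p_pred, p_label)
    # Accuracy rewrites as 1 - p_pred - p_label + 2 * P(yhat=1, y=1),
    # so the accuracy bounds follow by plugging in the joint bounds.
    acc_lo = 1.0 - p_pred - p_label + 2.0 * joint_lo
    acc_hi = 1.0 - p_pred - p_label + 2.0 * joint_hi
    return acc_lo, acc_hi
```

For example, with `p_pred = 0.6` and `p_label = 0.5` the interval is wide, `[0.1, 0.9]`, which is exactly why the paper's method brings in additional structure (features and multiple weak labels, via convex optimization) to tighten such bounds.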
