Resilience Outcomes Benchmark: Toward an Outcome-Labeled Coping Strategy Dataset for Precision Mental Health
Saurabh Anand
Abstract
Most AI benchmarks still measure static competence—accuracy on fixed math, coding, and knowledge-recall tasks. But the intelligence that matters in care is adaptive effectiveness: knowing which actions help which people, at what dose, and on what timeline. Mental health AI today lacks the foundational resource that transformed vision (ImageNet) and language (Common Crawl): outcome-labeled supervision. We propose the Resilience Outcomes Benchmark (ROB), a two-phase, openly shareable dataset that operationalizes outcome-supervised learning for recovery after major stressors (bereavement, divorce, job loss, illness). Phase 1 releases 10k+ expert-labeled vignettes linking context to coping strategies with effectiveness and harm-risk ratings (PHI-free), enabling contextual strategy ranking. Phase 2 is a governed outcomes cohort capturing consented, real-world strategy use with dose/adherence and validated outcomes at 30/90 days (PHQ-9, GAD-7, WHO-5), evaluated via a models-to-data server (no row-level export). ROB turns context→strategy→outcome into measurable supervision with benchmarks for NDCG@k, dose–response, and calibrated 30/90-day forecasts. By filling this gap, ROB could catalyze precision mental health—a domain with over $1 trillion in global costs.
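To make the Phase 1 evaluation concrete, the sketch below shows how NDCG@k could be computed for contextual strategy ranking: a model scores candidate coping strategies for a vignette, and its ranking is compared against the ordering implied by expert effectiveness labels. The function names, 0–3 rating scale, and toy numbers are illustrative assumptions, not ROB's reference implementation.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked items."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))  # log2(rank + 1)
    return float(np.sum(rel / discounts))

def ndcg_at_k(expert_ratings, predicted_scores, k=5):
    """NDCG@k: model ranking of strategies vs. the ideal ranking
    implied by expert effectiveness labels for one vignette."""
    order = np.argsort(predicted_scores)[::-1]          # model's ranking, best first
    ranked_rel = np.asarray(expert_ratings)[order]      # labels in model order
    ideal_rel = np.sort(expert_ratings)[::-1]           # best achievable ordering
    ideal = dcg_at_k(ideal_rel, k)
    return dcg_at_k(ranked_rel, k) / ideal if ideal > 0 else 0.0

# Toy example: hypothetical expert effectiveness ratings (0-3) for six
# candidate strategies in one vignette, and a model's predicted scores.
expert_ratings = [3, 0, 2, 1, 0, 2]
model_scores   = [0.9, 0.1, 0.4, 0.7, 0.2, 0.8]
print(f"NDCG@3 = {ndcg_at_k(expert_ratings, model_scores, k=3):.3f}")
```

In a full benchmark run, this per-vignette score would presumably be averaged over the held-out vignette set; the dose–response and 30/90-day forecast tracks described above would need separate regression and calibration metrics.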