HAMLETS: Human And Model in the Loop Evaluation and Training Strategies

Divyansh Kaushik · Bhargavi Paranjape · Forough Arabshahi · Yanai Elazar · Yixin Nie · Max Bartolo · Polina Kirichenko · Pontus Lars Erik Saito Stenetorp · Mohit Bansal · Zachary Lipton · Douwe Kiela

Sat 12 Dec, 8:15 a.m. PST

Human involvement in AI system design, development, and evaluation is critical to ensuring that the insights derived are practical and that the systems built are meaningful, reliable, and relatable to those who need them. Humans play an integral role in all stages of machine learning development, whether generating data, interactively teaching machines, or interpreting, evaluating, and debugging models. With growing interest in such "human in the loop" learning, we aim to highlight new and emerging research opportunities for the ML community that arise from the evolving need to design evaluation and training strategies for humans and models in the loop. The specific focus of this workshop is on emerging and under-explored areas of human- and model-in-the-loop learning, such as employing humans to provide richer forms of feedback than labels alone, learning from dynamic adversarial data collection in which humans find weaknesses in models, learning from human teachers who instruct computers through conversation and/or demonstration, investigating the role of humans in model interpretability, and assessing the social impact of ML systems. This workshop aims to bring together interdisciplinary researchers from academia and industry to discuss major challenges, outline recent advances, and facilitate future research in these areas.
