

Poster in Workshop: Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022

Exploratory Training: When Trainers Learn

Rajesh Shrestha · Omeed Habibelahian · Arash Termehchy · Papotti Paolo


Abstract:

AI and data systems often present examples and solicit labels from users to learn a target concept. The selection of examples may even be done actively, i.e., via active learning. Current systems assume that users always provide correct labels, with at most a fixed and small chance of error. In several settings, however, users may have to explore and learn about the underlying data in order to label examples correctly, particularly for complex target concepts and models. For example, to provide accurate labels for a model that detects noisy or abnormal values, users might need to investigate the underlying data to understand what typical and clean values look like. As users gradually learn about the target concept and the data, they may revise their labeling strategies. Because labeling errors in this setting are significant and non-stationary, current systems may use incorrect labels and learn inaccurate models from users. We report preliminary results from a user study over real-world datasets on modeling human learning while training the system, and lay out the next steps in this investigation.
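
The setting can be made concrete with a small simulation: an active learner repeatedly queries a human whose labeling error rate shrinks as they explore the data, so label noise is non-stationary rather than fixed. The sketch below is illustrative only and is not the authors' implementation; the synthetic task, decay schedule, seed examples, and model choice are all assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): uncertainty-sampling
# active learning with a simulated human labeler whose error rate decays as
# they "learn" the target concept, i.e., non-stationary label noise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary task: the true concept is a linear boundary (an assumption).
X = rng.normal(size=(2000, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

def human_label(i, step, total_steps):
    """Labeler error starts high (still exploring the data) and shrinks over
    time; the linear decay schedule here is purely illustrative."""
    p_err = 0.4 * (1 - step / total_steps)
    return 1 - y_true[i] if rng.random() < p_err else y_true[i]

# Seed with one example of each class, assumed to be labeled correctly.
labeled_idx = [int(np.argmax(y_true == 1)), int(np.argmax(y_true == 0))]
labels = [1, 0]

budget = 100
model = LogisticRegression()
for step in range(1, budget + 1):
    model.fit(X[labeled_idx], labels)
    # Uncertainty sampling: query the pool point the model is least sure about.
    pool = np.setdiff1d(np.arange(len(X)), labeled_idx)
    proba = model.predict_proba(X[pool])[:, 1]
    query = int(pool[np.argmin(np.abs(proba - 0.5))])
    labeled_idx.append(query)
    labels.append(human_label(query, step, budget))

# Labels gathered early carry more noise than recent ones; a system that
# assumes a fixed error rate weights them incorrectly.
print("accuracy on collected labels:", model.score(X[labeled_idx], labels))
print("accuracy on true labels:", model.score(X, y_true))
```

In such a simulation, early queries are answered under a high error rate, so a learner that treats all labels as equally reliable can be misled by exactly the examples it chose to query first.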
