

Poster in Workshop: Distribution shifts: connecting methods and applications (DistShift)

Improving Baselines in the Wild

Kazuki Irie · Imanol Schlag · Róbert Csordás · Jürgen Schmidhuber


Abstract:

We share our experience with the recently released WILDS benchmark, a collection of ten datasets dedicated to developing models and training strategies that are robust to domain shifts. From a handful of experiments, we make several critical observations which we believe are of general interest for any future work on WILDS. Our study focuses on two datasets: iWildCam and FMoW. We show that (1) conducting separate cross-validation for each evaluation metric is crucial for both datasets; (2) a weak correlation between validation and test performance might make model development difficult for iWildCam; (3) minor changes in the training hyper-parameters improve the baseline by a relatively large margin (mainly on FMoW); and (4) there is a strong correlation between certain domains and certain target labels (mainly on iWildCam). To the best of our knowledge, no prior work on these datasets has reported these observations despite their obvious importance.
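To make observation (1) concrete, here is a minimal sketch (not the authors' code) of per-metric model selection: instead of choosing a single checkpoint by one validation metric and reporting all test metrics from it, the best validation checkpoint is chosen separately for each evaluation metric. The variable names and metric values below (e.g. `val_history`) are hypothetical and purely illustrative.

```python
# Hypothetical validation history: one dict of metrics per training checkpoint.
val_history = [
    {"accuracy": 0.61, "macro_f1": 0.30},
    {"accuracy": 0.64, "macro_f1": 0.28},
    {"accuracy": 0.63, "macro_f1": 0.33},
]

# Single cross-validation: one checkpoint, chosen by one metric, used for all reports.
best_by_acc = max(range(len(val_history)), key=lambda i: val_history[i]["accuracy"])
print(best_by_acc)  # 1: this checkpoint would also be used to report macro F1

# Separate cross-validation: one checkpoint chosen per evaluation metric.
best_per_metric = {
    metric: max(range(len(val_history)), key=lambda i: val_history[i][metric])
    for metric in val_history[0]
}
print(best_per_metric)  # {'accuracy': 1, 'macro_f1': 2}
```

Note how the checkpoint that maximizes validation accuracy (index 1) is not the one that maximizes validation macro F1 (index 2), which is why a single selection can understate the achievable score on each metric.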
