DABS: a Domain-Agnostic Benchmark for Self-Supervised Learning
Alex Tamkin · Vincent Liu · Rongfei Lu · Daniel Fein · Colin Schultz · Noah Goodman

Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant strides in fields like natural language processing, computer vision, and speech processing. However, the domain-specificity of these algorithms means that solutions must be handcrafted for each new setting, including myriad healthcare, scientific, and multimodal domains. To catalyze progress towards more domain-agnostic methods, we introduce DABS: a Domain-Agnostic Benchmark for Self-supervised learning. To perform well on DABS, an algorithm must be pretrained on six unlabeled datasets from diverse domains (natural images, text, speech recordings, medical imaging, multichannel sensor data, and paired text and images) and then evaluated on a set of labeled tasks in each domain. We also present e-Mix and ShED, two baseline domain-agnostic algorithms; their modest performance demonstrates that significant progress is needed before self-supervised learning is an out-of-the-box solution for arbitrary domains. Code for benchmark datasets and baseline algorithms is available at [redacted].
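The evaluation protocol described above (pretrain a single domain-agnostic method on unlabeled data from each domain, then score it by transfer to that domain's labeled tasks) can be sketched as follows. This is a minimal illustrative sketch, not the actual DABS API; all function names and the placeholder scoring are assumptions.

```python
# Hypothetical sketch of the DABS protocol: one domain-agnostic
# self-supervised algorithm is applied unchanged to each domain.
# Names and return values are illustrative, not the real DABS code.

DOMAINS = [
    "natural_images", "text", "speech_recordings",
    "medical_imaging", "multichannel_sensors", "paired_text_images",
]

def pretrain(domain: str) -> dict:
    # Stand-in for self-supervised pretraining (e.g. e-Mix or ShED)
    # on the domain's unlabeled dataset; returns a trained encoder.
    return {"domain": domain, "encoder": f"encoder_for_{domain}"}

def transfer_score(model: dict, domain: str) -> float:
    # Stand-in for evaluating the pretrained encoder on the
    # domain's labeled tasks; returns a placeholder score.
    return 0.0

def evaluate_benchmark() -> float:
    # A method's DABS result reflects transfer performance
    # aggregated across all six domains.
    scores = [transfer_score(pretrain(d), d) for d in DOMAINS]
    return sum(scores) / len(scores)
```

The key design point the sketch highlights is that the same pretraining procedure, with no domain-specific handcrafting, must be run across all six domains before aggregation.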

Author Information

Alex Tamkin (Stanford University)
Vincent Liu
Rongfei Lu (Stanford University)
Daniel Fein (Stanford University)
Colin Schultz
Noah Goodman (Stanford University)