

Poster in Workshop: Meta-Learning

NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search

Julien Siems


Abstract:

Several tabular NAS benchmarks have been proposed to simulate runs of NAS methods in seconds, enabling scientifically sound empirical evaluations. However, all existing tabular NAS benchmarks are limited to extremely small architecture spaces, since they rely on exhaustively evaluating every architecture in the space. This leads to unrealistic results that do not transfer to larger search spaces. Motivated by the fact that similar architectures tend to yield comparable performance, we propose NAS-Bench-301, which covers a search space many orders of magnitude larger than any previous NAS benchmark. We achieve this by meta-learning a performance predictor that estimates how well different neural architectures facilitate base-level learning, and by using it to define a surrogate benchmark. We fit various regression models on our dataset of ~60k architecture evaluations and build surrogates via deep ensembles to also model predictive uncertainty. Benchmarking a wide range of NAS algorithms on NAS-Bench-301, we obtain results comparable to the true benchmark at a fraction of the real cost.
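
To make the surrogate idea concrete, below is a minimal, hypothetical sketch of a deep-ensemble surrogate in Python. It is not the authors' implementation: the fixed-length architecture encoding, the choice of scikit-learn's MLPRegressor as ensemble member, the ensemble size, and the placeholder data are all illustrative assumptions. The key point it demonstrates is that an ensemble of independently trained regressors yields both a performance prediction (the member mean) and an uncertainty estimate (the member spread), so a NAS method can "evaluate" an architecture in milliseconds instead of GPU-hours.

```python
# Hypothetical sketch of a deep-ensemble surrogate benchmark.
# Assumptions: architectures are pre-encoded as fixed-length feature
# vectors X with logged validation accuracies y; model and ensemble
# sizes are illustrative, not the paper's settings.
import numpy as np
from sklearn.neural_network import MLPRegressor

class DeepEnsembleSurrogate:
    """Ensemble of independently initialized regressors; the spread of
    member predictions serves as a simple uncertainty estimate."""

    def __init__(self, n_members: int = 5, seed: int = 0):
        self.members = [
            MLPRegressor(hidden_layer_sizes=(128, 128),
                         max_iter=500, random_state=seed + i)
            for i in range(n_members)
        ]

    def fit(self, X: np.ndarray, y: np.ndarray) -> "DeepEnsembleSurrogate":
        # Each member is trained on the full dataset; diversity comes
        # from random initialization (as in standard deep ensembles).
        for m in self.members:
            m.fit(X, y)
        return self

    def predict(self, X: np.ndarray):
        # Return the ensemble mean as the predicted accuracy and the
        # standard deviation across members as the uncertainty.
        preds = np.stack([m.predict(X) for m in self.members])
        return preds.mean(axis=0), preds.std(axis=0)

# Usage: fit on logged architecture evaluations, then query unseen
# architectures cheaply instead of training them from scratch.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 32)), rng.random(1000)  # placeholder data
surrogate = DeepEnsembleSurrogate().fit(X_train, y_train)
mean_acc, std_acc = surrogate.predict(rng.random((10, 32)))
```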
