Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?
Shen Yan · Yu Zheng · Wei Ao · Xiao Zeng · Mi Zhang

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1776

Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods to jointly learn architecture representations and optimize architecture search over such representations, which incurs search bias. Despite their widespread use, architecture representations learned in NAS are still poorly understood. We observe that the structural properties of neural architectures are hard to preserve in the latent space if architecture representation learning and search are coupled, resulting in less effective search performance. In this work, we find empirically that pre-training architecture representations using only neural architectures, without their accuracies as labels, improves downstream architecture search efficiency. To explain this finding, we visualize how unsupervised architecture representation learning better encourages neural architectures with similar connections and operators to cluster together. This helps map neural architectures with similar performance to the same regions in the latent space and makes the transition of architectures in the latent space relatively smooth, which considerably benefits diverse downstream search strategies.
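The core idea of the abstract is pre-training on architectures alone: learn an embedding from architecture encodings (graph connectivity plus operator labels), with no accuracy labels in the loss. The sketch below is a deliberately minimal, hypothetical illustration of that decoupling using a linear autoencoder over flattened adjacency-plus-operator encodings; the dimensions, sampling scheme, and model are illustrative assumptions, not the authors' implementation (which the paper builds on graph-based encoders).

```python
import numpy as np

# Hypothetical sketch: pre-train a representation on architecture encodings
# alone (no accuracy labels), in the spirit of unsupervised architecture
# representation learning. All names and sizes here are illustrative.

rng = np.random.default_rng(0)

NUM_NODES = 7    # nodes per cell (NAS-Bench-101-style, an assumption)
NUM_OPS = 3      # operator vocabulary size (an assumption)
ENC_DIM = NUM_NODES * NUM_NODES + NUM_NODES * NUM_OPS  # adjacency + one-hot ops
LATENT_DIM = 16

def random_architecture():
    """Sample a random DAG: upper-triangular adjacency plus one-hot operators."""
    adj = np.triu(rng.integers(0, 2, (NUM_NODES, NUM_NODES)), k=1)
    ops = np.eye(NUM_OPS)[rng.integers(0, NUM_OPS, NUM_NODES)]
    return np.concatenate([adj.ravel(), ops.ravel()]).astype(float)

# Note: no accuracies anywhere in the training data.
X = np.stack([random_architecture() for _ in range(256)])

# Linear autoencoder: z = x W_e, x_hat = z W_d, trained by gradient
# descent on reconstruction error only.
W_e = rng.normal(0.0, 0.1, (ENC_DIM, LATENT_DIM))
W_d = rng.normal(0.0, 0.1, (LATENT_DIM, ENC_DIM))

lr = 0.5
losses = []
for _ in range(300):
    Z = X @ W_e
    X_hat = Z @ W_d
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Gradients of the mean-squared reconstruction error.
    grad_Wd = 2.0 * Z.T @ err / X.size
    grad_We = 2.0 * X.T @ (err @ W_d.T) / X.size
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The latent codes `Z` could then be handed to any downstream search strategy (e.g. Bayesian optimization or reinforcement learning over the latent space), which is the decoupling the abstract argues for: representation quality is no longer shaped by the search signal.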

Author Information

Shen Yan (Michigan State University)
Yu Zheng (Michigan State University)
Wei Ao (Michigan State University)
Xiao Zeng (Michigan State University)
Mi Zhang (Michigan State University)