Most existing neural architecture search (NAS) algorithms are dedicated to and evaluated by specific downstream tasks, e.g., image classification in computer vision. However, extensive experiments have shown that prominent neural architectures, such as ResNet in computer vision and LSTM in natural language processing, are generally good at extracting patterns from the input data and perform well on different downstream tasks. In this paper, we attempt to answer two fundamental questions related to NAS. (1) Is it necessary to use the performance of specific downstream tasks to evaluate and search for good neural architectures? (2) Can we perform NAS effectively and efficiently while being agnostic to the downstream tasks? To answer these questions, we propose a novel and generic NAS framework, termed Generic NAS (GenNAS). GenNAS does not use task-specific labels but instead adopts regression on a set of manually designed synthetic signal bases for architecture evaluation. Such a self-supervised regression task can effectively evaluate the intrinsic power of an architecture to capture and transform the input signal patterns, and allows fuller use of training samples. Extensive experiments across 13 CNN search spaces and one NLP space demonstrate the remarkable efficiency of GenNAS using regression, in terms of both evaluating the neural architectures (quantified by the ranking correlation, Spearman's rho, between the approximated performances and the downstream task performances) and the convergence speed for training (within a few seconds). For example, on NAS-Bench-101, GenNAS achieves a rho of 0.85, while existing efficient methods only achieve 0.38. We then propose an automatic task search to optimize the combination of synthetic signals using limited downstream-task-specific labels, further improving the performance of GenNAS.
We also thoroughly evaluate GenNAS's generality and end-to-end NAS performance on all search spaces, where it outperforms almost all existing works with a significant speedup. For example, on NAS-Bench-201, GenNAS can find near-optimal architectures within 0.3 GPU hours.
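The abstract judges a NAS proxy by Spearman's rho, the rank correlation between proxy scores and true downstream accuracies. A minimal sketch of that evaluation step is below; the proxy scores and accuracies are hypothetical illustrative numbers, not data from the paper, and the no-ties formula for Spearman's rho is used for simplicity.

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation via the no-ties formula:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of xs[i] and ys[i]."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical example: proxy scores (e.g., from self-supervised regression)
# vs. ground-truth test accuracies for five candidate architectures.
proxy_scores = [0.62, 0.71, 0.55, 0.80, 0.68]
true_accs    = [0.91, 0.93, 0.88, 0.95, 0.92]
print(spearman_rho(proxy_scores, true_accs))  # ranks agree exactly → 1.0
```

A proxy with rho near 1.0 orders architectures almost exactly as their downstream accuracies would, which is why the paper reports rho (e.g., 0.85 on NAS-Bench-101) as its headline evaluation metric.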
Author Information
Yuhong Li (University of Illinois at Urbana-Champaign)
Cong Hao (Georgia Institute of Technology)
Pan Li (Stanford University)
Jinjun Xiong (IBM Research)
Deming Chen (University of Illinois, Urbana Champaign)
Related Events (a corresponding poster, oral, or spotlight)
-
2021 Spotlight: Generic Neural Architecture Search via Regression »
More from the Same Authors
-
2021 : Semi-supervised Graph Neural Network for Particle-level Noise Removal »
Tianchun Li · Shikun Liu · Nhan Tran · Mia Liu · Pan Li -
2022 Poster: Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation »
Haoyu Peter Wang · Nan Wu · Hang Yang · Cong Hao · Pan Li -
2022 Poster: M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design »
hanxue liang · Zhiwen Fan · Rishov Sarkar · Ziyu Jiang · Tianlong Chen · Kai Zou · Yu Cheng · Cong Hao · Zhangyang Wang -
2021 Poster: Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks »
Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2021 Poster: Local Hyper-Flow Diffusion »
Kimon Fountoulakis · Pan Li · Shenghao Yang -
2021 Poster: Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning »
Muhan Zhang · Pan Li · Yinglong Xia · Kai Wang · Long Jin -
2021 Poster: Adversarial Graph Augmentation to Improve Graph Contrastive Learning »
Susheel Suresh · Pan Li · Cong Hao · Jennifer Neville -
2021 Poster: Nested Graph Neural Networks »
Muhan Zhang · Pan Li -
2018 Poster: Revisiting Decomposable Submodular Function Minimization with Incidence Relations »
Pan Li · Olgica Milenkovic -
2018 Poster: Quadratic Decomposable Submodular Function Minimization »
Pan Li · Niao He · Olgica Milenkovic -
2017 Poster: Interpretable and Globally Optimal Prediction for Textual Grounding using Image Concepts »
Raymond A. Yeh · Jinjun Xiong · Wen-Mei Hwu · Minh Do · Alex Schwing -
2017 Poster: Inhomogeneous Hypergraph Clustering with Applications »
Pan Li · Olgica Milenkovic -
2017 Oral: Interpretable and Globally Optimal Prediction for Textual Grounding using Image Concepts »
Raymond A. Yeh · Jinjun Xiong · Wen-Mei Hwu · Minh Do · Alex Schwing -
2017 Spotlight: Inhomogeneous Hypergraph Clustering with Applications »
Pan Li · Olgica Milenkovic