

Poster

NAT: Neural Architecture Transformer for Accurate and Compact Architectures

Yong Guo · Yin Zheng · Mingkui Tan · Qi Chen · Jian Chen · Peilin Zhao · Junzhou Huang

East Exhibition Hall B, C #7

Keywords: [ Algorithms ] [ AutoML ] [ Algorithms -> Classification; Deep Learning -> CNN Architectures; Reinforcement Learning and Planning ] [ Reinforcement Learning ]


Abstract:

Designing effective architectures is one of the key factors behind the success of deep neural networks. Existing deep architectures are either manually designed or automatically searched by Neural Architecture Search (NAS) methods. However, even a well-searched architecture may still contain many redundant or insignificant modules or operations (e.g., convolution or pooling), which not only incur substantial memory consumption and computation cost but may also degrade performance. Thus, it is necessary to optimize the operations inside an architecture to improve performance without introducing extra computation cost. Unfortunately, such a constrained optimization problem is NP-hard. To make the problem feasible, we cast it as a Markov decision process (MDP) and seek to learn a Neural Architecture Transformer (NAT) that replaces redundant operations with more computationally efficient ones (e.g., a skip connection, or removing the connection entirely). Based on this MDP, we learn NAT via reinforcement learning to obtain optimization policies for different architectures. To verify the effectiveness of the proposed strategies, we apply NAT to both hand-crafted and NAS-based architectures. Extensive experiments on two benchmark datasets, i.e., CIFAR-10 and ImageNet, demonstrate that the architectures transformed by NAT significantly outperform both their original forms and the architectures optimized by existing methods.
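To make the transformation idea in the abstract concrete, below is a minimal Python sketch (not the authors' implementation): it models an architecture cell as a DAG of operations and applies a hand-written policy, standing in for NAT's learned RL policy, that maps each operation into a cheaper candidate set (keep it, replace it with a skip connection, or remove it). The operation names, cost values, and helpers (`Cell`, `apply_policy`) are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of NAT's operation-replacement idea (illustrative only).
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Rough per-edge cost estimates (hypothetical units), used only to show that
# the allowed transformations can never increase cost.
OP_COST = {
    "sep_conv_3x3": 5.0,
    "sep_conv_5x5": 9.0,
    "max_pool_3x3": 1.0,
    "skip_connect": 0.1,
    "none": 0.0,
}

# Replacements are restricted so cost never grows: a parameterized operation
# may become a skip connection or be removed; a skip connection may only be
# kept or removed.
ALLOWED = {
    "sep_conv_3x3": ["sep_conv_3x3", "skip_connect", "none"],
    "sep_conv_5x5": ["sep_conv_5x5", "skip_connect", "none"],
    "max_pool_3x3": ["max_pool_3x3", "skip_connect", "none"],
    "skip_connect": ["skip_connect", "none"],
    "none": ["none"],
}

@dataclass
class Cell:
    # Each edge is (source node, target node, operation name).
    edges: List[Tuple[int, int, str]]

    def cost(self) -> float:
        return sum(OP_COST[op] for _, _, op in self.edges)

def apply_policy(cell: Cell, policy: Dict[Tuple[int, int], str]) -> Cell:
    """Replace each edge's operation with the policy's choice, falling back
    to the original operation when the choice is not in the allowed set."""
    new_edges = []
    for src, dst, op in cell.edges:
        choice = policy.get((src, dst), op)
        new_edges.append((src, dst, choice if choice in ALLOWED[op] else op))
    return Cell(new_edges)

if __name__ == "__main__":
    original = Cell([(0, 2, "sep_conv_5x5"), (1, 2, "sep_conv_3x3"),
                     (0, 3, "max_pool_3x3"), (2, 3, "sep_conv_3x3")])
    # A hand-written policy standing in for the learned RL policy:
    # simplify two edges, keep the rest.
    policy = {(0, 2): "skip_connect", (0, 3): "none"}
    transformed = apply_policy(original, policy)
    print("original cost:   ", original.cost())
    print("transformed cost:", transformed.cost())
```

In the paper, the choice of replacement per operation is produced by a policy learned with reinforcement learning under the MDP formulation, rather than the fixed dictionary used here for illustration.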
