TY - GEN
T1 - Accuracy vs. Efficiency: Achieving Both through FPGA-Implementation Aware Neural Architecture Search
T2 - 56th Annual Design Automation Conference, DAC 2019
AU - Jiang, Weiwen
AU - Zhang, Xinyi
AU - Sha, Edwin H.M.
AU - Yang, Lei
AU - Zhuge, Qingfeng
AU - Shi, Yiyu
AU - Hu, Jingtong
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/6/2
Y1 - 2019/6/2
N2 - A fundamental question lies in almost every application of deep neural networks: what is the optimal neural architecture given a specific data set? Recently, several Neural Architecture Search (NAS) frameworks have been developed that use reinforcement learning and evolutionary algorithms to search for the solution. However, most of them take a long time to find the optimal architecture due to the huge search space and the lengthy training process needed to evaluate each candidate. In addition, most of them aim at accuracy only and do not take into consideration the hardware that will be used to implement the architecture. This will potentially lead to excessive latencies beyond specifications, rendering the resulting architectures useless. To address both issues, in this paper we use Field Programmable Gate Arrays (FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely FNAS, which will provide an optimal neural architecture with latency guaranteed to meet the specification. In addition, with a performance abstraction model to analyze the latency of neural architectures without training, our framework can quickly prune architectures that do not satisfy the specification, leading to higher efficiency. Experimental results on common data sets such as ImageNet show that in the cases where the state-of-the-art generates architectures with latencies 7.81× longer than the specification, those from FNAS can meet the specs with less than 1% accuracy loss. Moreover, FNAS also achieves up to 11.13× speedup for the search process. To the best of the authors' knowledge, this is the very first hardware-aware NAS.
AB - A fundamental question lies in almost every application of deep neural networks: what is the optimal neural architecture given a specific data set? Recently, several Neural Architecture Search (NAS) frameworks have been developed that use reinforcement learning and evolutionary algorithms to search for the solution. However, most of them take a long time to find the optimal architecture due to the huge search space and the lengthy training process needed to evaluate each candidate. In addition, most of them aim at accuracy only and do not take into consideration the hardware that will be used to implement the architecture. This will potentially lead to excessive latencies beyond specifications, rendering the resulting architectures useless. To address both issues, in this paper we use Field Programmable Gate Arrays (FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely FNAS, which will provide an optimal neural architecture with latency guaranteed to meet the specification. In addition, with a performance abstraction model to analyze the latency of neural architectures without training, our framework can quickly prune architectures that do not satisfy the specification, leading to higher efficiency. Experimental results on common data sets such as ImageNet show that in the cases where the state-of-the-art generates architectures with latencies 7.81× longer than the specification, those from FNAS can meet the specs with less than 1% accuracy loss. Moreover, FNAS also achieves up to 11.13× speedup for the search process. To the best of the authors' knowledge, this is the very first hardware-aware NAS.
UR - https://www.scopus.com/pages/publications/85067832121
U2 - 10.1145/3316781.3317757
DO - 10.1145/3316781.3317757
M3 - Conference contribution
AN - SCOPUS:85067832121
T3 - Proceedings - Design Automation Conference
BT - Proceedings of the 56th Annual Design Automation Conference 2019, DAC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 June 2019 through 6 June 2019
ER -