http://dx.doi.org/10.5909/JBE.2021.26.7.855

Training-Free Hardware-Aware Neural Architecture Search with Reinforcement Learning  

Tran, Linh Tam (Department of Computer Science and Engineering, Kyung Hee University)
Bae, Sung-Ho (Department of Computer Science and Engineering, Kyung Hee University)
Publication Information
Journal of Broadcast Engineering, v.26, no.7, 2021, pp. 855-861
Abstract
Neural Architecture Search (NAS) is a cutting-edge technology in the machine learning community. NAS Without Training (NASWOT) was recently proposed to tackle the high computational demand of NAS by leveraging indicators that predict the performance of an architecture before training. Because these indicators require no training, NASWOT reduces search time and computational cost significantly. However, NASWOT considers only a network's predicted performance, which does not guarantee fast inference on hardware devices. In this paper, we propose a multi-objective reward function that considers both the network's latency and its predicted performance, and we incorporate it into a Reinforcement Learning approach to search for the best networks under a latency budget. Unlike methods that use FLOPs as a latency proxy, which does not reflect actual latency, we obtain each network's latency from a hardware NAS benchmark. We conduct extensive experiments on NAS-Bench-201 with the CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets and show that the proposed method finds the best network under a latency constraint without training any subnetworks.
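The key ingredient described in the abstract is a reward that trades off a training-free performance indicator against measured latency. The paper's exact formula is not reproduced on this page, so the following is a minimal Python sketch assuming a MnasNet-style weighted-product reward (reference 6 below); the function name multi_objective_reward, the exponent beta, and the example numbers are illustrative assumptions. In practice the score would come from a training-free indicator such as NASWOT's (reference 15) and the latency from a lookup table such as HW-NAS-Bench (reference 1).

```python
# Illustrative sketch only: a MnasNet-style weighted-product reward
# (ref. 6); the paper's actual reward function may differ.

def multi_objective_reward(score: float,
                           latency_ms: float,
                           target_ms: float,
                           beta: float = -0.07) -> float:
    """Reward = score * (latency / target) ** beta.

    `score` is a training-free performance indicator (e.g., NASWOT's,
    ref. 15) and `latency_ms` a measured latency looked up in a table
    such as HW-NAS-Bench (ref. 1). With beta < 0, candidates slower
    than the target budget are penalized, steering the RL controller
    toward architectures that both score well and run fast on the
    target device.
    """
    return score * (latency_ms / target_ms) ** beta

# Two candidates with equal training-free scores: the one within a
# 5 ms budget receives the larger reward (values are made up).
fast = multi_objective_reward(score=1500.0, latency_ms=4.0, target_ms=5.0)
slow = multi_objective_reward(score=1500.0, latency_ms=8.0, target_ms=5.0)
assert fast > slow
```

Because both the score and the latency are obtained without any gradient steps, the controller can evaluate many candidate cells from NAS-Bench-201 without training a single subnetwork, which is the source of the claimed savings in search time.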
Keywords
Neural Architecture Search; Hardware-Aware Neural Architecture Search; NAS without Training
References
1 Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, and Yingyan Lin, "HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark," in International Conference on Learning Representations, 2021.
2 Huan Xiong, Lei Huang, Mengyang Yu, Li Liu, Fan Zhu, and Ling Shao, "On the Number of Linear Regions of Convolutional Neural Networks," in Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10514-10523, 2020.
3 Lechao Xiao, Jeffrey Pennington, and Samuel S. Schoenholz, "Disentangling Trainability and Generalization in Deep Neural Networks," in Proceedings of the 37th International Conference on Machine Learning, pp. 10462-10472, 2020.
4 Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington, "Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent," in Advances in Neural Information Processing Systems 32, 2019.
5 Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer, "FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), pp. 10734-10742, 2019.
6 Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le, "MnasNet: Platform-Aware Neural Architecture Search for Mobile," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2820-2828, 2019.
7 Terrance DeVries and Graham W. Taylor, "Improved Regularization of Convolutional Neural Networks with Cutout," arXiv:1708.04552, 2017.
8 Razvan Pascanu, Guido F. Montufar, and Yoshua Bengio, "On the number of response regions of deep feed forward networks with piece-wise linear activations," arXiv:1312.6098, 2014.
9 Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv:1704.04861, 2017.
10 Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385, 2015.
11 Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779-788, 2016.
12 Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg, "SSD: Single Shot MultiBox Detector," in European Conference on Computer Vision, pp. 21-37, 2016.
13 Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick, "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2961-2969, 2017.
14 Arthur Jacot, Franck Gabriel, and Clement Hongler, "Neural Tangent Kernel: Convergence and Generalization in Neural Networks," in Advances in Neural Information Processing Systems 31, 2018.
15 Joseph Mellor, Jack Turner, Amos Storkey, and Elliot J. Crowley, "Neural Architecture Search without Training," under review at https://openreview.net/forum?id=g4E6SAAvACo.
16 Mahsa Forouzesh, Farnood Salehi, and Patrick Thiran, "Generalization Comparison of Deep Neural Networks via Output Sensitivity," in Proceedings of the 25th International Conference on Pattern Recognition (ICPR), pp. 7411-7418, 2020.
17 Linh Tam Tran, Muhammad Salman Ali, and Sung-Ho Bae, "A Feature Fusion Based Indicator for Training-Free Neural Architecture Search," IEEE Access, vol. 9, pp. 133914-133923, 2021.
18 Xuanyi Dong and Yi Yang, "NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search," in International Conference on Learning Representations, 2020.