Application and Performance Analysis of Double Pruning Method for Deep Neural Networks

  • 이선우 (Department of Electrical and Computer Engineering, Inha University) ;
  • 양호준 (Department of Computer Engineering, Inha University) ;
  • 오승연 (Department of Computer Engineering, Inha University) ;
  • 이문형 (Department of Computer Engineering, Inha University) ;
  • 권장우 (Department of Computer Engineering, Inha University)
  • Received : 2020.06.10
  • Accepted : 2020.08.20
  • Published : 2020.08.28

Abstract

Deep learning has recently been difficult to commercialize because of the high computational load and cost of the computing resources it requires. In this paper, we apply a double pruning technique and evaluate its performance across several deep neural network models and datasets. Double pruning combines the existing Network-Slimming and Parameter-Pruning methods. It removes parameters that are unimportant to learning, improving speed without compromising accuracy. After training on each dataset, the pruning ratio was increased to reduce the size of the model. NetScore analysis showed that MobileNet-V3 achieved the highest performance. After pruning, performance on the CIFAR-10 dataset was highest for MobileNet-V3, which is built from depthwise separable convolutions, while VGGNet and ResNet, which are traditional convolutional neural networks, also improved substantially.
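For reference, the two stages that double pruning combines can be sketched as follows. This is a minimal illustration assuming PyTorch; the function names, pruning ratios, and global thresholding are our own simplifications, not the paper's exact implementation.

```python
# Sketch of double pruning: channel-level network slimming (stage 1)
# followed by magnitude-based parameter pruning (stage 2).
# Ratios are illustrative placeholders, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def slimming_channel_mask(model: nn.Module, ratio: float) -> dict:
    """Stage 1 (Network-Slimming): collect the BatchNorm scaling factors
    (gamma) of the whole model, set a global threshold, and mark the
    smallest `ratio` fraction of channels for removal."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, ratio)
    return {name: m.weight.detach().abs() > threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}


def magnitude_prune(model: nn.Module, amount: float) -> None:
    """Stage 2 (Parameter-Pruning): zero out the smallest-magnitude
    weights of every convolutional and fully connected layer."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
```

In a complete pipeline the channel mask from stage 1 would be used to rebuild a slimmer network, and each stage would be followed by fine-tuning; the sketch above only computes the masks.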

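The NetScore metric used for the model comparison (reference 27) trades accuracy off against parameter count and compute. A small helper using the default exponents from the NetScore paper; the example numbers are illustrative, not results from this study:

```python
import math


def netscore(accuracy, params_m, macs_g, alpha=2.0, beta=0.5, gamma=0.5):
    """NetScore: Omega(N) = 20 * log10(a^alpha / (p^beta * m^gamma)),
    where a is top-1 accuracy in percent, p is the number of parameters
    in millions, and m is multiply-accumulate operations in billions."""
    return 20.0 * math.log10(accuracy ** alpha
                             / (params_m ** beta * macs_g ** gamma))


# e.g. a hypothetical model with 75% accuracy, 5M parameters, 0.6G MACs:
print(round(netscore(75.0, 5.0, 0.6), 1))  # ~70.2
```

Higher scores favor models that keep accuracy high while using few parameters and few operations, which is why compact architectures such as MobileNet-V3 score well.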

References

  1. E. Real, A. Aggarwal, Y. Huang & Q. V. Le. (2019, July). Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, 33, 4780-4789. DOI : 10.1609/aaai.v33i01.33014780
  2. K. Simonyan & A. Zisserman. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  3. K. He, X. Zhang, S. Ren & J. Sun. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
  4. C. Szegedy et al. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9).
  5. A. Howard et al. (2019). Searching for mobilenetv3. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1314-1324).
  6. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov & L. C. Chen. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510-4520).
  7. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto & H. Adam. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  8. F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally & K. Keutzer. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360.
  9. X. Zhang, X. Zhou, M. Lin & J. Sun. (2018). Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6848-6856).
  10. F. Chollet. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1251-1258).
  11. A. Gholami, K. Kwon, B. Wu, Z. Tai, X. Yue, P. Jin, S. Zhao & K. Keutzer. (2018, June). SqueezeNext: Hardware-aware neural network design. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 1638-1647).
  12. Y. LeCun, J. S. Denker & S. A. Solla. (1990). Optimal brain damage. In Advances in neural information processing systems (pp. 598-605).
  13. S. Han, J. Pool, J. Tran & W. Dally. (2015). Learning both weights and connections for efficient neural network. In Advances in neural information processing systems (pp. 1135-1143).
  14. S. Han, H. Mao & W. J. Dally. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149.
  15. R. Reed. (1993). Pruning algorithms-a survey. IEEE Transactions on Neural Networks, 4(5), 740-747. DOI : 10.1109/72.248452
  16. N. Lee, T. Ajanthan & P. H. Torr. (2018). Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340.
  17. Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan & C. Zhang. (2017). Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2736-2744).
  18. Y. He, X. Zhang & J. Sun. (2017). Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1389-1397).
  19. M. Tan & Q. V. Le. (2019). Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946.
  20. M. Tan & Q. V. Le. (2019). Mixconv: Mixed depthwise convolutional kernels. arXiv preprint arXiv:1907.09595.
  21. J. H. Luo, J. Wu & W. Lin. (2017). Thinet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 5058-5066).
  22. Z. Liu, M. Sun, T. Zhou, G. Huang & T. Darrell. (2019). Rethinking the value of network pruning. International Conference on Learning Representations (ICLR).
  23. A. Morcos, H. Yu, M. Paganini & Y. Tian. (2019). One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers. In Advances in Neural Information Processing Systems (pp. 4932-4942).
  24. A. Krizhevsky & G. Hinton. (2009). Learning multiple layers of features from tiny images.
  25. I. J. Goodfellow et al. (2013, November). Challenges in representation learning: A report on three machine learning contests. In International Conference on Neural Information Processing (pp. 117-124). Springer, Berlin, Heidelberg.
  26. E. Barsoum, C. Zhang, C. C. Ferrer & Z. Zhang. (2016, October). Training deep networks for facial expression recognition with crowd-sourced label distribution. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (pp. 279-283).
  27. A. Wong. (2019, August). NetScore: Towards universal metrics for large-scale performance analysis of deep neural networks for practical on-device edge usage. In International Conference on Image Analysis and Recognition (pp. 15-26). Springer, Cham.
  28. M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard & Q. V. Le. (2019). Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2820-2828).
  29. P. Ramachandran, B. Zoph & Q. V. Le. (2017). Searching for activation functions. arXiv preprint arXiv:1710.05941.
  30. I. Loshchilov & F. Hutter. (2016). Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
  31. A. Aimar et al. (2018). Nullhop: A flexible convolutional neural network accelerator based on sparse representations of feature maps. IEEE transactions on neural networks and learning systems, 30(3), 644-656. DOI : 10.1109/TNNLS.2018.2852335
  32. M. Schmidt, G. Fung & R. Rosales. (2007, September). Fast optimization methods for l1 regularization: A comparative study and two new approaches. In European Conference on Machine Learning (pp. 286-297). Springer, Berlin, Heidelberg. DOI : 10.1007/978-3-540-74958-5_28