http://dx.doi.org/10.6109/jkiice.2021.25.9.1158

Compact CNN Accelerator Chip Design with Optimized MAC and Pooling Layers

Son, Hyun-Wook (Mixed Signal Integrated System Lab, ChungBuk National University)
Lee, Dong-Yeong (Mixed Signal Integrated System Lab, ChungBuk National University)
Kim, HyungWon (Mixed Signal Integrated System Lab, ChungBuk National University)
Abstract
This paper proposes a CNN accelerator that reduces memory size by incorporating the pooling layer operation into the Multiplication And Accumulation (MAC) operation. To optimize the memory and data-path circuits, quantized 8-bit integer weights are used instead of the 32-bit floating-point weights obtained by pre-training on the MNIST data set. To reduce chip area, the proposed CNN model is reduced to one convolutional layer, one 4×4 max-pooling layer, and two fully connected layers, and all operations use a dedicated MAC with approximate adders and multipliers. A 94% reduction in internal memory size is achieved by performing the convolution and pooling operations simultaneously in the proposed architecture. The proposed accelerator chip is designed in a TSMC 65 nm GP CMOS process and occupies 0.8 × 0.9 = 0.72 mm², about half the area of our previous design. The presented CNN accelerator chip achieves 94% accuracy and a 77 µs inference time per MNIST image.
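The architecture details are given in the full paper; as a rough illustration of the memory-saving idea, the NumPy sketch below fuses a valid convolution with non-overlapping 4×4 max pooling so that only one pooling window of convolution outputs is held at a time, instead of the whole feature map. The function names, shapes, loop order, and the symmetric int8 quantization shown here are illustrative assumptions, not the authors' implementation.

import numpy as np

def quantize_int8(w, scale):
    # Illustrative symmetric int8 quantization of floating-point weights.
    return np.clip(np.round(w / scale), -128, 127).astype(np.int8)

def fused_conv_maxpool(image, weights, pool=4):
    # Compute one output channel of a valid convolution fused with
    # non-overlapping pool x pool max pooling. Only one pool x pool block of
    # convolution outputs is materialized at a time, so the full convolution
    # feature map is never stored -- the idea behind the memory reduction
    # described in the abstract (shapes and loop order are assumptions).
    kh, kw = weights.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    pooled = np.zeros((oh // pool, ow // pool), dtype=np.int32)
    for py in range(oh // pool):
        for px in range(ow // pool):
            best = np.iinfo(np.int32).min
            for dy in range(pool):
                for dx in range(pool):
                    y, x = py * pool + dy, px * pool + dx
                    patch = image[y:y + kh, x:x + kw].astype(np.int32)
                    acc = int(np.sum(patch * weights.astype(np.int32)))  # MAC
                    best = max(best, acc)
            pooled[py, px] = best
    return pooled

# Illustrative usage on a 28x28 MNIST-sized input with a 5x5 kernel.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(28, 28), dtype=np.int16)
w_fp32 = rng.normal(size=(5, 5)).astype(np.float32)
w_int8 = quantize_int8(w_fp32, scale=np.max(np.abs(w_fp32)) / 127)
out = fused_conv_maxpool(img, w_int8, pool=4)
print(out.shape)  # (6, 6): the 24x24 convolution output pooled by 4x4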
Keywords
CNN; SoC design; MAC; MNIST;
References
1 M. V. Valueva, N. N. Nagornov, P. A. Lyakhov, G. V. Valuev, and N. I. Chervyakov, "Application of the residue number system to reduce hardware costs of the convolutional neural network implementation," Mathematics and Computers in Simulation, vol. 177, pp. 232-243, 2020.
2 A. Kyriakos, V. Kitsakis, A. Louropoulos, E. A. Papatheofanous, I. Patronas, and D. Reisis, "High Performance Accelerator for CNN Applications," 2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS), pp. 135-140, 2019. doi: 10.1109/PATMOS.2019.8862166.
3 L. Kang, H. Li, X. Li, and H. Zheng, "Design of Convolution Operation Accelerator based on FPGA," 2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), pp. 80-84, 2020. doi: 10.1109/MLBDBI51377.2020.00021.
4 L. Biewald, "Monitor and Improve GPU Usage for Training Deep Learning Models," Towards Data Science, Mar. 2019.
5 H. Nikolov, T. Stefanov, and E. Deprettere, "Efficient External Memory Interface for Multi-Processor Platforms Realized on FPGA Chips," 2007 International Conference on Field Programmable Logic and Applications, pp. 580-584, 2007.
6 ROIS-DS Center for Open Data in the Humanities. Keras Simple CNN Benchmark [Internet]. Available: https://github.com/rois-codh/kmnist/blob/master/benchmarks/kuzushiji_mnist_cnn.py.
7 H. W. Son, D. Y. Lee, M. E. Elbtity, S. Arslan, and H. W. Kim, "High Speed, Convolutional Neural Network Accelerator Based On Parallel Memory Access," Research Institute for Computer and Information Communication (RICIC), vol. 28, no. 1, pp. 45-52, 2020.
8 H. J. Kwon, P. Chatarasi, M. Pellauer, A. Parashar, V. Sarkar, and T. Krishna, "Understanding Reuse, Performance, and Hardware Cost of DNN Dataflows: A Data-Centric Approach Using MAESTRO," The 52nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-52), pp. 754-768, Oct. 2019.
9 M. E. Elbtity, H. W. Son, D. Y. Lee, and H. W. Kim, "High Speed, Approximate Arithmetic Based Convolutional Neural Network Accelerator," 2020 International SoC Design Conference (ISOCC), pp. 71-72, 2020.
10 K. R. Choi, W. Choi, K. H. Shin, and J. S. Park, "Bit-width reduction and customized register for low cost convolutional neural network accelerator," 2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), Taipei, pp. 1-6, 2017.