http://dx.doi.org/10.14372/IEMEK.2020.15.5.235

Analysis of Reduced-Width Truncated Mitchell Multiplication for Inferences Using CNNs  

Kim, HyunJin (Dankook University)
Abstract
This paper analyzes the effect of reducing the output width of truncated logarithmic multiplication and its application to inference using convolutional neural networks (CNNs). To keep hardware overhead small, the output width of the truncated Mitchell multiplier is reduced so that the number of fractional bits in the multiplication output is minimized for error-resilient applications. The analysis shows that when the output width of the truncated Mitchell multiplier is reduced, the average relative error can be kept small even though the worst-case relative error increases. In the evaluations, when 8 fractional bits are kept in the multiplication output, the target CNNs show no significant accuracy degradation compared with the exact multiplier and the original Mitchell multiplier.
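For illustration, the following Python sketch computes a Mitchell-style logarithmic multiplication while keeping only a fixed number of fractional bits in the log domain. The function name mitchell_mul, the frac_bits parameter, and the chosen truncation point are assumptions made for this sketch; it conveys the idea of a reduced-width truncated Mitchell multiplier rather than reproducing the paper's exact hardware design.

# Minimal sketch of Mitchell's approximate logarithmic multiplication with a
# reduced fractional width (assumed parameter frac_bits), not the authors' design.

def mitchell_mul(a: int, b: int, frac_bits: int = 8) -> int:
    """Approximate a * b for non-negative integers using Mitchell's algorithm,
    keeping only frac_bits fractional bits of each operand's log mantissa."""
    if a == 0 or b == 0:
        return 0

    def log_approx(v: int):
        k = v.bit_length() - 1          # leading-one position = integer part of log2(v)
        x = v - (1 << k)                # mantissa remainder, 0 <= x < 2^k
        # align the mantissa to frac_bits fractional bits; lower-order bits are truncated
        if k >= frac_bits:
            x = x >> (k - frac_bits)
        else:
            x = x << (frac_bits - k)
        return k, x                     # log2(v) ~= k + x / 2^frac_bits

    ka, xa = log_approx(a)
    kb, xb = log_approx(b)

    k_sum = ka + kb
    x_sum = xa + xb                     # add fractional parts in fixed point
    one = 1 << frac_bits
    if x_sum >= one:                    # carry into the integer part of the log sum
        k_sum += 1
        x_sum -= one

    # antilogarithm: 2^k_sum * (1 + x_sum / 2^frac_bits), realized with shifts
    mantissa = one + x_sum
    if k_sum >= frac_bits:
        return mantissa << (k_sum - frac_bits)
    return mantissa >> (frac_bits - k_sum)

if __name__ == "__main__":
    for a, b in [(123, 57), (1000, 999), (7, 9)]:
        approx = mitchell_mul(a, b, frac_bits=8)
        exact = a * b
        print(a, b, exact, approx, f"rel. err = {(approx - exact) / exact:+.4f}")

Because the mantissa cross-term is dropped and the fraction is truncated, this sketch never overestimates the exact product, and with 8 fractional bits the extra truncation error stays small, which is consistent with the abstract's claim that the average relative error can be kept low.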
Keywords
Approximate computing; Convolutional neural network; Logarithmic multiplication; Mitchell algorithm; Reduced-width multiplier;
References
1 Z. Babic, A. Avramovic, P. Bulic, "An Iterative Mitchell's Algorithm Based Multiplier," Proc. of ISSPIT, pp. 303-308, 2008.
2 Z. Babic, A. Avramovic, P. Bulic, "An Iterative Logarithmic Multiplier," Microprocessors and Microsystems, Vol. 35, No. 1, pp. 23-33, 2011.
3 S. Hashemi, R. I. Bahar, S. Reda, "DRUM: A Dynamic Range Unbiased Multiplier for Approximate Applications," Proc. of ICCAD, pp. 418-425, 2015.
4 R. Zendegani, M. Kamal, M. Bahadori, A. Afzali-Kusha, "RoBA Multiplier: A Rounding-based Approximate Multiplier for High-speed yet Energy-efficient Digital Signal Processing," IEEE Trans. VLSI Systems, Vol. 25, No. 2, pp. 393-401, 2017.
5 W. Liu, L. Qian, C. Wang, H. Jiang, J. Han, F. Lombardi, "Design of Approximate Radix-4 Booth Multipliers for Error-tolerant Computing," IEEE Trans. Computers, Vol. 66, No. 8, pp. 1435-1441, 2017.
6 M.S. Kim, A.A. Del Barrio, R. Hermida, N. Bagherzadeh, "Low-power Implementation of Mitchell's Approximate Logarithmic Multiplication for Convolutional Neural Networks," Proc. of ASP-DAC, pp. 617-622, 2018.
7 I. Alouani, H. Ahangari, O. Ozturk, S. Niar, "A Novel Heterogeneous Approximate Multiplier for Low Power and High Performance," IEEE Embedded Systems Letters, Vol. 10, No. 2, pp. 45-48, 2018.
8 J.N. Mitchell, "Computer Multiplication and Division Using Binary Logarithms," IRE Trans. Electronic Computers, No. 4, pp. 512-517, 1962.
9 M.S. Kim, A.A. Del Barrio, L.T. Oliveira, R. Hermida, N. Bagherzadeh, "Efficient Mitchell's Approximate Log Multipliers for Convolutional Neural Networks," IEEE Trans. Computers, Vol. 68, No. 5, pp. 660-675, 2019.
10 S.J. Jou, H.H. Wang, "Fixed-width Multiplier for DSP Application," Proc. of IEEE International Conference on Computer Design, pp. 318-322, 2000.
11 S.J. Jou, M.H. Tsai, Y.L. Tsao, "Low-error Reduced-width Booth Multipliers for DSP Applications," IEEE Trans. Circuits and Systems I: Fundamental Theory and Applications, Vol. 50, No. 11, pp. 1470-1474, 2003.
12 K.J. Cho, K.C. Lee, J.G. Chung, "Design of Low Error Fixed-width Modified Booth Multiplier," IEEE Trans. VLSI Systems, Vol. 12, No. 5, pp. 522-531, 2004.
13 S.S. Bhusare, V.K. Bhaaskaran, "Fixed-width Multiplier with Simple Compensation Bias," Procedia Materials Science, Vol. 10, pp. 395-402, 2015.
14 K.H. Abed, R.E. Siferd, "CMOS VLSI Implementation of a Low-power Logarithmic Converter," IEEE Trans. Computers, Vol. 52, No. 11, pp. 1421-1433, 2003.
15 K. Kunaraj, R. Seshasayanan, "Leading One Detectors and Leading One Position Detectors: An Evolutionary Design Methodology," Canadian Journal of Electrical and Computer Engineering, Vol. 36, No. 3, pp. 103-110, 2013.
16 S.E. Ahmed, S. Kadam, M. Srinivas, "An Iterative Logarithmic Multiplier with Improved Precision," Proc. of IEEE Symposium on Computer Arithmetic, pp. 104-111, 2016.
17 I. Koren, Computer Arithmetic Algorithms, A K Peters/CRC Press, 2001.
18 Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, "Caffe: Convolutional Architecture for Fast Feature Embedding," Proc. of ACM International Conference on Multimedia, pp. 675-678, 2014.
19 A. Krizhevsky, G. Hinton, "Learning Multiple Layers of Features from Tiny Images," Technical Report, University of Toronto, 2009.
20 M. Lin, Q. Chen, S. Yan, "Network in Network," arXiv:1312.4400, 2013.
21 A. Krizhevsky, I. Sutskever, G.E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
22 C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, "Going Deeper with Convolutions," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.
23 K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
24 O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A.C. Berg, L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision, Vol. 115, No. 3, pp. 211-252, 2015.