http://dx.doi.org/10.7471/ikeee.2018.22.3.805

Accuracy Analysis of Fixed Point Arithmetic for Hardware Implementation of Binary Weight Network  

Kim, Jong-Hyun (Department of Computer and Telecomm. Engineering, Yonsei University)
Yun, SangKyun (Department of Computer and Telecomm. Engineering, Yonsei University)
Publication Information
Journal of IKEEE / v.22, no.3, 2018, pp. 805-809
Abstract
In this paper, we analyze the change in accuracy when fixed-point arithmetic is used instead of floating-point arithmetic in a binary weight network (BWN). We observed the change in accuracy while varying the total bit width and the fraction bit width. If the integer part is unchanged after fixed-point approximation, there is no significant decrease in accuracy compared with floating-point operation. When overflow occurs in the integer part, saturating to the maximum or minimum value of the fixed-point representation minimizes the loss of accuracy. The results of this paper can be applied to minimizing the memory and hardware resource requirements in the implementation of an FPGA-based BWN accelerator.
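To make the approximation concrete, the sketch below (our own NumPy illustration; the function name, bit widths, and example values are assumptions, not taken from the paper) quantizes floating-point values to a signed fixed-point format with a given total bit width and fraction bit width, and handles integer-part overflow either by saturating to the representation's maximum/minimum value, as the abstract recommends, or by two's-complement wraparound for comparison.

```python
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=4, saturate=True):
    """Hypothetical helper: approximate floats in a signed fixed-point
    format with `total_bits` total width and `frac_bits` fraction bits."""
    scale = 2 ** frac_bits
    # Representable range of the signed fixed-point format.
    q_min = -(2 ** (total_bits - 1)) / scale
    q_max = (2 ** (total_bits - 1) - 1) / scale
    if saturate:
        # Saturate: clamp values whose integer part overflows to q_min/q_max.
        return np.clip(np.round(x * scale) / scale, q_min, q_max)
    # Two's-complement wraparound, shown only for comparison with saturation.
    ints = np.round(x * scale).astype(np.int64)
    ints = (ints + 2 ** (total_bits - 1)) % 2 ** total_bits - 2 ** (total_bits - 1)
    return ints / scale

# 8 total bits with 4 fraction bits represent [-8.0, 7.9375] in steps of 0.0625.
w = np.array([0.37, -1.62, 9.5, -10.2])
print(to_fixed_point(w))                  # saturated: [ 0.375 -1.625  7.9375 -8.    ]
print(to_fixed_point(w, saturate=False))  # wrapped:   [ 0.375 -1.625 -6.5    5.8125]
```

Note how wraparound maps an overflowing value to one of the opposite sign, while saturation keeps it at the nearest representable bound; saturation also costs little extra hardware (a comparison against the two bounds), which makes it a natural overflow policy for an FPGA accelerator.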
Keywords
Binary Weight Network; Low precision network; Fixed point approximation; FPGA; CNN;