DOI: http://dx.doi.org/10.14372/IEMEK.2020.15.1.1

Study on Derivation and Implementation of Quantized Gradient for Machine Learning  

Seok, Jinwuk (University of Science and Technology)
Abstract
In this paper, we propose a method for deriving a quantized gradient for machine learning on embedded systems. The proposed differentiation method derives the quantized gradient vector of an objective function and establishes its validity as a directional derivative. Moreover, mathematical analysis shows that the sequence generated by the learning equation based on the proposed quantization converges to the optimal point of the quantized objective function when the quantization parameter is sufficiently large. Simulation results show that an optimization solver based on the proposed quantization method achieves performance comparable to that of the conventional method based on floating-point arithmetic.
Keywords
Quantization; Machine learning; Learning equation; Learning rate
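
To illustrate the idea, the following is a minimal Python sketch, not the paper's implementation, of a learning step driven by a quantized gradient. The uniform quantizer with step 1/Qp, the helper names quantize and quantized_descent, and the toy quadratic objective are all illustrative assumptions; the paper's actual quantization scheme and learning equation differ in detail.

```python
import numpy as np

def quantize(v, Qp):
    # Illustrative uniform quantizer: round each component to the
    # nearest multiple of 1/Qp; larger Qp gives a finer grid.
    return np.round(np.asarray(v) * Qp) / Qp

def quantized_descent(grad_f, x0, lr=0.1, Qp=2**8, n_iter=200):
    # Learning step with a quantized gradient:
    #   x <- x - lr * Q(grad f(x))
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - lr * quantize(grad_f(x), Qp)
    return x

# Toy objective f(x) = 0.5 * ||x||^2, so grad f(x) = x and the
# optimum is the origin.
grad_f = lambda x: x
for Qp in (2**2, 2**6, 2**10):
    print(Qp, quantized_descent(grad_f, [1.0, -2.0], Qp=Qp))
```

On a coarse grid the iterate stalls as soon as every gradient component rounds to zero, so the final point lands only near the optimum; as Qp grows the stall region shrinks, which is consistent with the abstract's requirement that the quantization parameter be sufficiently large.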