
Probability distribution-based approximation matrix multiplication simplification algorithm

A Method for Simplifying Approximate Matrix Multiplication through Probability Distribution Generation

  • Kwon, Oh-Young (Department of Future Technology, Korea University of Technology & Education) ;
  • Seo, Kyoung-Taek (Office of Future Education Innovation (OFEI), Korea University of Technology & Education)
  • Received : 2022.01.27
  • Accepted : 2022.10.20
  • Published : 2022.11.30

Abstract

Matrix multiplication is a fundamental operation widely used in science and engineering. Approximate matrix multiplication is one way to reduce its computational cost: an appropriate probability distribution is determined for selecting columns and rows of the matrices, and the product is approximated using the columns and rows sampled according to that distribution. Existing methods generate this probability distribution by considering both matrices A and B participating in the multiplication. In this paper, we propose a method that generates the sampling distribution from matrix A alone. Approximate matrix multiplication was performed on 1000×1000 to 5000×5000 matrices using the existing and proposed methods. On average, the proposed method produced results 0.02% to 2.34% closer to the exact matrix multiplication than the conventional methods.

Matrix multiplication is a fundamental operation widely used in science and engineering. It is also heavily used in deep-learning training algorithms, so various algorithms have been developed to perform matrix multiplication efficiently. Among them, approximate matrix multiplication reduces the computational cost of matrix multiplication. Approximate matrix multiplication determines an appropriate probability distribution for selecting the columns and rows of the matrices, then performs the multiplication using the columns and rows sampled according to that distribution. Existing methods generate the probability distribution by considering both matrices A and B participating in the multiplication. This paper proposes a method that generates the sampling distribution from matrix A alone. Approximate matrix multiplication was performed on 1000×1000, 2000×2000, 3000×3000, 4000×4000, and 5000×5000 matrices using the existing and proposed methods. The approximate multiplication using the proposed method came, on average, 0.02% to 2.34% closer to the exact matrix multiplication result than the conventional methods.
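The sampling scheme described in the abstract can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the two probability formulas below (column/row norm products for the two-matrix case, squared column norms of A for the A-only case) are standard choices from the Monte Carlo matrix-multiplication literature and are assumptions here, not the authors' exact definitions.

```python
import numpy as np

def approx_matmul(A, B, c, probs, rng=None):
    """Monte Carlo approximate matrix multiplication.

    Samples c column/row index pairs (with replacement) according to
    probs, then rescales each sampled outer product by 1/(c * p_i) so
    the estimator of A @ B is unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    idx = rng.choice(n, size=c, p=probs)
    scale = 1.0 / (c * probs[idx])          # unbiasedness rescaling
    # sum_t A[:, i_t] B[i_t, :] / (c * p_{i_t})
    return (A[:, idx] * scale) @ B[idx, :]

def probs_ab(A, B):
    """Distribution using both matrices (the conventional approach)."""
    w = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    return w / w.sum()

def probs_a(A):
    """Distribution from A alone (the direction the paper takes;
    this particular squared-norm formula is an assumption)."""
    w = np.linalg.norm(A, axis=0) ** 2
    return w / w.sum()
```

For example, `approx_matmul(A, B, c=200, probs=probs_a(A))` approximates `A @ B` using only 200 sampled column/row pairs; the approximation error shrinks roughly as 1/√c as the sample count c grows.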


Acknowledgement

This work was supported by the 2020 Education and Research Promotion Program of KOREATECH.
