Performance Comparison of Parallel Programming Frameworks in Digital Image Transformation

  • Received : 2019.05.08
  • Accepted : 2019.05.17
  • Published : 2019.08.31

Abstract

Previously, parallel computing was used mainly in areas requiring high computing performance, but multicore CPUs and GPUs have now become widespread, so the benefits of parallel programming can be obtained even in a PC environment. Various parallel programming frameworks that use multicore CPUs, such as OpenMP and PPL, have been announced. Nvidia and AMD have developed parallel programming platforms and APIs that let developers exploit the multicore GPUs on their graphics cards. In this paper, we develop digital image transformation programs that run on each of the major parallel programming frameworks and measure their execution times. We analyze the characteristics of each framework by comparing these execution times. We also present a constant K that indicates the ratio of program execution time between different parallel computing environments; using it, a rough execution time can be predicted without implementing a parallel program.

Acknowledgement

Supported by: Seokyeong University
