Title/Summary/Keyword: GPU Computing

An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU (Coloring이 적용된 Gauss-Seidel 해법을 통한 CPU와 GPU의 연산 효율에 관한 연구)

  • Yoon, Jong Seon; Jeon, Byoung Jin; Choi, Hyoung Gwon
    • Transactions of the Korean Society of Mechanical Engineers B, v.41 no.2, pp.117-124, 2017
  • The performance of the colored Gauss-Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using various mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated on large problems, where the total memory required for computing exceeded the cache size. In contrast, the GPU performed better as the mesh size increased because of its latency hiding. GPU computation with the colored Gauss-Seidel solver was approximately 7 times faster than computation on a single CPU. Furthermore, when parallel computing was conducted on the GPU, the colored Gauss-Seidel solver was found to be approximately twice as fast as the Jacobi solver.
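
For illustration, here is a minimal CUDA sketch of the two-color (red-black) update that makes a Gauss-Seidel sweep parallelizable on the GPU: points of one color depend only on neighbors of the other color, so each color can be updated concurrently. The kernel, grid setup, and names are assumptions for a 2D finite-difference Laplace stencil, not the authors' code.

```cuda
#include <cuda_runtime.h>

// One colored Gauss-Seidel half-sweep for the 2D heat-conduction stencil
// u(i,j) = 0.25 * (left + right + down + up). A point's four neighbors all
// have the opposite color, so updating one color in parallel is race-free.
__global__ void rb_gauss_seidel(float* u, int nx, int ny, int color)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // column
    int j = blockIdx.y * blockDim.y + threadIdx.y; // row

    if (i <= 0 || i >= nx - 1 || j <= 0 || j >= ny - 1) return; // boundary
    if ((i + j) % 2 != color) return;                           // other color

    u[j * nx + i] = 0.25f * (u[j * nx + (i - 1)] + u[j * nx + (i + 1)] +
                             u[(j - 1) * nx + i] + u[(j + 1) * nx + i]);
}

// Host side: one full iteration is a red sweep followed by a black sweep.
void iterate(float* d_u, int nx, int ny)
{
    dim3 block(16, 16);
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    rb_gauss_seidel<<<grid, block>>>(d_u, nx, ny, 0); // red:  (i+j) even
    rb_gauss_seidel<<<grid, block>>>(d_u, nx, ny, 1); // black: (i+j) odd
}
```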

Development of GPU-accelerated kinematic wave model using CUDA fortran (CUDA fortran을 이용한 GPU 가속 운동파모형 개발)

  • Kim, Boram; Park, Seonryang; Kim, Dae-Hong
    • Journal of Korea Water Resources Association, v.52 no.11, pp.887-894, 2019
  • We proposed a GPU (Graphics Processing Unit) accelerated kinematic wave model for rainfall-runoff simulation and tested the accuracy and speedup of the proposed model. The governing equations are the kinematic wave equation for surface flow and the Green-Ampt model for infiltration. The kinematic wave equations were discretized using a finite volume method, and CUDA Fortran was used to implement the rainfall-runoff model. Several numerical tests were conducted. The results computed by the GPU-accelerated kinematic wave model were compared with measured data and other numerical results, and reasonable agreement was observed. The speedup of the GPU-accelerated model increased with the number of grid cells, reaching a maximum of approximately 450 times over the CPU (Central Processing Unit) version, at least for the tested computing resources.
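
The abstract does not include the authors' CUDA Fortran source, so the CUDA C sketch below only illustrates the per-cell parallel structure of an explicit finite-volume kinematic wave update; the Manning-type flux law, the treatment of rainfall excess, and all names are assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Explicit finite-volume step for the 1D kinematic wave equation
//   dh/dt + dq/dx = r,  q = alpha * h^m  (Manning: alpha = sqrt(S0)/n, m = 5/3),
// where r is rainfall excess (rainfall minus Green-Ampt infiltration).
// One thread updates one cell from the upwind fluxes at its two faces.
__global__ void kinematic_wave_step(const float* h, float* h_new,
                                    float alpha, float m, float rain_excess,
                                    float dt, float dx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n) return; // cell 0 serves as the upstream boundary

    float q_in  = alpha * powf(h[i - 1], m); // flux entering from upstream
    float q_out = alpha * powf(h[i],     m); // flux leaving the cell

    h_new[i] = fmaxf(0.0f, h[i] - dt / dx * (q_out - q_in) + dt * rain_excess);
}
```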

GPU-Accelerated Password Cracking of PDF Files

  • Kim, Keon-Woo; Lee, Sang-Su; Hong, Do-Won; Ryou, Jae-Cheol
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.11, pp.2235-2253, 2011
  • Digital document files such as Adobe Acrobat or MS-Office files are encrypted by their own ciphering algorithms with a user password. When this password is not known to a user or a forensic inspector, it must be recovered to open the encrypted file. Password cracking by brute-force search is guaranteed to discover the password but is a time-consuming process. This paper presents a new method of speeding up password recovery on the Graphics Processing Unit (GPU) using the Compute Unified Device Architecture (CUDA). PDF files are chosen as the password-cracking target, and the Adobe Acrobat password recovery algorithm is examined. Experimental results show that the proposed method gives high performance at low cost, with a cluster of GPU nodes significantly speeding up the password recovery by exploiting many computing nodes. Password cracking performance increases linearly with the number of computing nodes and GPUs.
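
The CUDA sketch below shows only the generic brute-force pattern such a cracker exploits: each thread decodes its global index into one candidate password and tests it. The actual Adobe Acrobat check (key derivation in the standard security handler) is not reproduced here; a hard-coded placeholder comparison stands in for it, and all names are assumptions.

```cuda
#include <cuda_runtime.h>

__constant__ char charset[27] = "abcdefghijklmnopqrstuvwxyz";

// Placeholder for the real check, which would derive the decryption key
// from the candidate and test it against the encrypted PDF.
__device__ bool test_password(const char* pw, int len)
{
    const char target[] = "secret";
    if (len != 6) return false;
    for (int k = 0; k < len; ++k)
        if (pw[k] != target[k]) return false;
    return true;
}

// Each thread tries exactly one candidate: its global index, interpreted as
// a base-|charset| number. 'offset' lets successive kernel launches (or
// cluster nodes) cover disjoint slices of the keyspace.
__global__ void crack(unsigned long long offset, int pw_len, int cs_len,
                      int* found, unsigned long long* found_idx)
{
    unsigned long long idx =
        offset + blockIdx.x * (unsigned long long)blockDim.x + threadIdx.x;

    char pw[16]; // pw_len <= 16 assumed
    unsigned long long n = idx;
    for (int k = 0; k < pw_len; ++k) {
        pw[k] = charset[n % cs_len];
        n /= cs_len;
    }

    if (test_password(pw, pw_len) && atomicExch(found, 1) == 0)
        *found_idx = idx; // only the first finder reports
}
```

The linear scaling reported in the paper falls out of this structure: keyspace slices are independent, so they can be spread over GPUs and nodes with no communication until a hit occurs.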

A Dual Transcoding Method for Retaining QoS of Video Streaming Services under Restricted Computing Resources (동영상 스트리밍 서비스의 QoS유지를 위한 듀얼 트랜스코딩 기법)

  • Oh, Doohwan; Ro, Won Woo
    • KIPS Transactions on Computer and Communication Systems, v.3 no.7, pp.231-240, 2014
  • Video transcoding techniques provide an efficient mechanism for adapting video content to the capabilities of a variety of clients. However, it is hard to provide an appropriate quality of service (QoS) to clients owing to the heavy workload of transcoding operations. In light of this fact, this paper presents a dual transcoding method that guarantees QoS in streaming services by maximizing resource usage in a transcoding server equipped with both CPU and GPU computing units, which have different architectural features. The proposed method estimates the workload of each incoming transcoding request and schedules the request to either the CPU or the GPU accordingly. In the performance evaluation, the proposed dual transcoding method achieved a speedup of 1.84 over the traditional transcoding approach.
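
As a loose illustration of the dispatch decision (the paper's actual workload-estimation heuristic is not given in the abstract), a host-side scheduler might route requests on an assumed cost model like this:

```cuda
#include <cstdint>

// Hypothetical dual-transcoding dispatch: estimate the workload of an
// incoming request and route heavy jobs to the throughput-oriented GPU,
// light ones to the latency-oriented CPU. Cost model and threshold are
// illustrative assumptions, not the paper's method.
struct Request {
    int     width, height, fps;
    int64_t duration_ms;
};

static double estimate_load(const Request& r)
{
    // Assumed proxy for transcoding effort: pixels to process per second.
    return static_cast<double>(r.width) * r.height * r.fps;
}

enum class Unit { CPU, GPU };

Unit schedule(const Request& r, double gpu_threshold = 1920.0 * 1080 * 30)
{
    return estimate_load(r) >= gpu_threshold ? Unit::GPU : Unit::CPU;
}
```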

Implementation of Massive FDTD Simulation Computing Model Based on MPI Cluster for Semi-conductor Process (반도체 검증을 위한 MPI 기반 클러스터에서의 대용량 FDTD 시뮬레이션 연산환경 구축)

  • Lee, Seung-Il; Kim, Yeon-Il; Lee, Sang-Gil; Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association, v.15 no.9, pp.21-28, 2015
  • In the semiconductor process, a simulation is performed to detect defects by analyzing the behavior of impurities through physical-quantity calculations of the inner elements. The Finite-Difference Time-Domain (FDTD) algorithm is used to perform this simulation. As semiconductor devices are now composed of nanoscale elements, the size of the simulation keeps growing, so a single processor such as a CPU or GPU may be unable to run it because of the massive matrices involved, and even a computer with multiple processors may be unable to handle a massive FDTD problem. Such problems have been studied with parallel/distributed computing, but past work used only a single type of processor: a GPU performs fast but has limited memory, whereas a CPU performs more slowly. To solve this problem, we implemented a computing model that can handle FDTD simulations of any size on a cluster of heterogeneous processors. We tested simulations across processors using MPI libraries based on point-to-point communication and verified that the model operates correctly regardless of the number and type of nodes. We also analyzed performance by measuring the total execution time and the time of specific stages of the simulation in each test.
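
A minimal sketch of the point-to-point exchange such a cluster model relies on is shown below, assuming a 1D slab decomposition with one ghost cell per side; in a 3D FDTD run the exchanged buffers would be whole planes rather than single cells, but the pattern is the same.

```cuda
#include <mpi.h>

// Halo exchange between neighboring ranks. Layout assumption: field[0] and
// field[slab_cells-1] are ghost cells; field[1..slab_cells-2] are owned.
// MPI_Sendrecv pairs the transfers so neighboring ranks cannot deadlock.
void exchange_halos(double* field, int slab_cells, int rank, int nranks,
                    MPI_Comm comm)
{
    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

    // First interior cell goes left; right ghost cell is filled from right.
    MPI_Sendrecv(&field[1],              1, MPI_DOUBLE, left,  0,
                 &field[slab_cells - 1], 1, MPI_DOUBLE, right, 0,
                 comm, MPI_STATUS_IGNORE);
    // Last interior cell goes right; left ghost cell is filled from left.
    MPI_Sendrecv(&field[slab_cells - 2], 1, MPI_DOUBLE, right, 1,
                 &field[0],              1, MPI_DOUBLE, left,  1,
                 comm, MPI_STATUS_IGNORE);
}
```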

Multi-GPU based Fast Multi-view Depth Map Generation Method (다중 GPU 기반의 고속 다시점 깊이맵 생성 방법)

  • Ko, Eunsang; Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2014.11a, pp.236-239, 2014
  • To produce three-dimensional video, depth information is required along with color images from multiple viewpoints. However, the ToF cameras used to acquire depth information have low resolution, and because of infrared-signal frequency interference, at most three of them can be used at once. Therefore, upsampling of the depth information is essential before it can be used with the color images. Upsampling proceeds by warping the depth information in 3D to the color camera position and filling the resulting holes with a joint bilateral filter (JBF). Upsampling takes a long time, but it can be performed quickly on graphics processing units (GPUs). In this paper, we propose a method for rapidly generating multi-view depth maps through parallel execution on multiple GPUs. The multi-GPU parallel execution uses CUDA, one of the platforms for general-purpose computing on GPU (GPGPU), and in experiments the proposed method generated multi-view depth maps at 35 frames per second using three GPUs.
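
A hedged CUDA sketch of the multi-GPU dispatch idea follows; the 3D warping and JBF stages are collapsed into a single placeholder kernel, and each view's buffers are assumed to be allocated on the GPU that processes that view.

```cuda
#include <cuda_runtime.h>

// Placeholder standing in for the real per-view work (3D warping of the
// ToF depth to the color camera position, then joint bilateral hole filling).
__global__ void upsample_view(const float* depth_in, float* depth_out,
                              int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    depth_out[y * w + x] = depth_in[y * w + x]; // warp + JBF would go here
}

// Spread the views across the available GPUs round-robin; kernel launches
// are asynchronous, so all devices work concurrently until the final sync.
void process_views(float** d_in, float** d_out, int n_views, int w, int h)
{
    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    for (int v = 0; v < n_views; ++v) {
        cudaSetDevice(v % n_gpus);
        upsample_view<<<grid, block>>>(d_in[v], d_out[v], w, h);
    }
    for (int g = 0; g < n_gpus; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();
    }
}
```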

Matrix Multiplication Acceleration with GPU and Locality (GPU와 지역성을 이용한 행렬 곱셈 가속)

  • Kwon, Oh-Young; Lee, Chang-Mug
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2009.10a, pp.902-903, 2009
  • Matrix multiplication is widely used in scientific and engineering fields. Exploiting locality can improve the execution performance of matrix multiplication. A method for accelerating matrix multiplication is presented that uses both the CPU and GPU computing power of a PC. The presented method improved execution time by about 15~30% compared with the method that uses only the GPU.
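
The abstract does not spell out the locality scheme, so the sketch below shows the standard shared-memory tiling that GPU matrix-multiplication kernels use to exploit locality: each block stages tiles of A and B once and reuses them TILE times from fast shared memory.

```cuda
#include <cuda_runtime.h>

#define TILE 16

// Tiled matrix multiply C = A * B for row-major N x N matrices.
__global__ void matmul_tiled(const float* A, const float* B, float* C, int N)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < (N + TILE - 1) / TILE; ++t) {
        int a_col = t * TILE + threadIdx.x;
        int b_row = t * TILE + threadIdx.y;
        // Stage one tile of A and B; zero-pad outside the matrix.
        As[threadIdx.y][threadIdx.x] =
            (row < N && a_col < N) ? A[row * N + a_col] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (b_row < N && col < N) ? B[b_row * N + col] : 0.0f;
        __syncthreads();

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    if (row < N && col < N)
        C[row * N + col] = acc;
}
```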

Parallel Algorithm of Conjugate Gradient Solver using OpenGL Compute Shader

  • Va, Hongly; Lee, Do-keyong; Hong, Min
    • Journal of the Korea Society of Computer and Information, v.26 no.1, pp.1-9, 2021
  • The OpenGL compute shader is a shader stage that operates differently from the other shader stages and can be used for general-purpose parallel computation on arbitrary data. This paper proposes a GPU-based parallel algorithm for solving sparse linear systems with the conjugate gradient method, an iterative method, performing the computation in an OpenGL compute shader. Such sparse linear solvers are used to solve large linear systems, for instance those with a symmetric positive-definite matrix. Four well-known matrix formats (Dense, COO, ELL, and CSR) were used for matrix storage. A performance comparison over experimental tests on eight sparse matrices shows that the GPU-based solver is much faster than the CPU-based solver, with best average computing times of 0.64 ms on the GPU versus 15.37 ms on the CPU.
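
The paper's implementation is in GLSL compute shaders; as a CUDA illustration of the kernel at the heart of each conjugate-gradient iteration, the CSR sparse matrix-vector product maps one thread to one matrix row:

```cuda
#include <cuda_runtime.h>

// y = A * x with A stored in CSR: row_ptr[n+1] marks each row's span in
// the val/col_idx arrays. One thread accumulates one row's dot product.
__global__ void spmv_csr(const int* row_ptr, const int* col_idx,
                         const float* val, const float* x, float* y, int n)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;

    float sum = 0.0f;
    for (int k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
        sum += val[k] * x[col_idx[k]];
    y[row] = sum;
}
```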

GPU Memory Management Technique to Improve the Performance of GPGPU Task of Virtual Machines in RPC-Based GPU Virtualization Environments (RPC 기반 GPU 가상화 환경에서 가상머신의 GPGPU 작업 성능 향상을 위한 GPU 메모리 관리 기법)

  • Kang, Jihun
    • KIPS Transactions on Computer and Communication Systems, v.10 no.5, pp.123-136, 2021
  • RPC (Remote Procedure Call)-based Graphics Processing Unit (GPU) virtualization is one technology for sharing GPUs among multiple user virtual machines. However, in a cloud environment, unlike CPUs or memory, typical GPUs do not provide a resource isolation mechanism that can limit the resource usage of virtual machines. In particular, in an RPC-based virtualization environment, the GPU tasks executed in each virtual machine run as multiple processes, so the lack of resource isolation causes performance degradation through resource competition. Moreover, GPU memory contention accelerates this degradation as the virtual machines' resource demands increase, and fairness suffers because equal performance between virtual machines cannot be guaranteed. This paper analyzes the performance degradation caused by resource contention in an RPC-based GPU virtualization environment when the GPU memory requirements of virtual machines exceed the available GPU memory capacity, and proposes a GPU memory management technique to solve this problem. Experiments show that the proposed technique can improve the performance of GPGPU tasks.
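
As a loose illustration of memory-aware admission (an assumption; the paper's technique is more elaborate), the RPC server could query free device memory before dispatching a virtual machine's task and defer requests that would oversubscribe it:

```cuda
#include <cuda_runtime.h>
#include <stddef.h>

// Decide whether a VM's GPU task, with a known memory requirement, can be
// dispatched now. Deferring oversized requests avoids the memory thrashing
// that multi-process contention causes. The 5% margin is illustrative.
bool admit_task(size_t bytes_required)
{
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess)
        return false; // no usable device context; do not admit

    const size_t margin = total_bytes / 20;
    return bytes_required + margin <= free_bytes;
}
```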

Performance Study of Satellite Image Processing on Graphics Processors Unit Using CUDA

  • Jeong, In-Kyu; Hong, Min-Gee; Hahn, Kwang-Soo; Choi, Joonsoo; Kim, Choen
    • Korean Journal of Remote Sensing, v.28 no.6, pp.683-691, 2012
  • High-resolution satellite images are now widely used for a variety of mapping applications, including photogrammetry, GIS data acquisition, and visualization. As the spectral and spatial data sizes of satellite images increase, greater processing power is needed to process them, and parallel systems are the solution. Parallel processing techniques have been developed to improve the performance of image processing alongside the growth of computational power, but conventional CPU-based parallel computing is often not fast enough to meet the computational demands of image processing. The GPU is a good candidate for achieving this goal: GPUs have recently been used for highly complex processing involving many loop operations, such as mathematical transforms and ray tracing. In this study we propose a technique for parallel processing of high-resolution satellite images using the GPU. We implemented a spectral radiometric processing algorithm on Landsat-7 ETM+ imagery using CUDA, a parallel computing architecture developed by NVIDIA for GPUs, and compared the performance of the algorithm on the GPU and the CPU.
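
As an example of the per-pixel parallelism involved, a CUDA kernel for the standard Landsat-7 ETM+ conversion from digital numbers to at-sensor spectral radiance could look like the sketch below; parameter names are generic, not the paper's code.

```cuda
#include <cuda_runtime.h>

// DN-to-radiance conversion, L = (Lmax - Lmin)/(QCALmax - QCALmin)
//                                 * (QCAL - QCALmin) + Lmin,
// applied independently to every pixel: one thread per pixel.
__global__ void dn_to_radiance(const unsigned char* dn, float* radiance,
                               int n_pixels, float lmin, float lmax,
                               float qcal_min, float qcal_max)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pixels) return;

    float gain = (lmax - lmin) / (qcal_max - qcal_min);
    radiance[i] = gain * ((float)dn[i] - qcal_min) + lmin;
}
```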