• Title/Summary/Keyword: GPU computation (GPU 계산)

196 search results

Improving the Performance of Document Similarity by using GPU Parallelism (GPU 병렬성을 이용한 문서 유사도 계산 성능 개선)

  • Park, Il-Nam;Bae, Byung-Gurl;Im, Eun-Jin;Kang, Seung-Shik
    • The KIPS Transactions:PartB / v.19B no.4 / pp.243-248 / 2012
  • In information retrieval systems such as vector-model implementations and document clustering, document similarity calculation accounts for a large part of the overall system performance. In this paper, GPU parallelism is explored to speed up document similarity calculation within the CUDA framework. The proposed method computes similarities almost 15 times faster than a typical CPU-based implementation, and 5.2 and 3.4 times faster than methods using CUBLAS and Thrust, respectively.
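The paper does not reproduce its kernels, so the following is only a minimal sketch of the kind of CUDA kernel such a similarity computation rests on: one thread block scores one document against the query by cosine similarity over dense term-weight vectors. The dense layout, names, and launch configuration are assumptions for illustration, not the authors' implementation (which they compare against CUBLAS- and Thrust-based variants).

```cuda
// Sketch: one thread block scores one document against the query.
// Launch with a power-of-two block size and 3 * blockDim.x * sizeof(float)
// bytes of dynamic shared memory, e.g.
//   cosineSimilarity<<<numDocs, 256, 3 * 256 * sizeof(float)>>>(d_docs, d_query, d_sim, dim);
__global__ void cosineSimilarity(const float* docs,   // numDocs x dim, row-major term weights
                                 const float* query,  // dim term weights
                                 float* sim,          // numDocs output similarities
                                 int dim)
{
    extern __shared__ float partial[];                // 3 * blockDim.x floats
    const int doc = blockIdx.x;
    float dot = 0.0f, dnorm = 0.0f, qnorm = 0.0f;

    // Each thread accumulates a strided slice of the dot product and the two norms.
    for (int t = threadIdx.x; t < dim; t += blockDim.x) {
        const float d = docs[doc * dim + t];
        const float q = query[t];
        dot += d * q;  dnorm += d * d;  qnorm += q * q;
    }
    partial[threadIdx.x]                  = dot;
    partial[threadIdx.x +     blockDim.x] = dnorm;
    partial[threadIdx.x + 2 * blockDim.x] = qnorm;
    __syncthreads();

    // Tree reduction over the block.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) {
            partial[threadIdx.x]                  += partial[threadIdx.x + s];
            partial[threadIdx.x +     blockDim.x] += partial[threadIdx.x +     blockDim.x + s];
            partial[threadIdx.x + 2 * blockDim.x] += partial[threadIdx.x + 2 * blockDim.x + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0)
        sim[doc] = partial[0] * rsqrtf(partial[blockDim.x] * partial[2 * blockDim.x] + 1e-12f);
}
```

The query norm is the same for every document and would normally be computed once on the host; it is recomputed here only to keep the sketch self-contained.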

Acceleration of computation speed for elastic wave simulation using a Graphic Processing Unit (그래픽 프로세서를 이용한 탄성파 수치모사의 계산속도 향상)

  • Nakata, Norimitsu;Tsuji, Takeshi;Matsuoka, Toshifumi
    • Geophysics and Geophysical Exploration / v.14 no.1 / pp.98-104 / 2011
  • Numerical simulation in exploration geophysics provides important insights into subsurface wave propagation phenomena. Although elastic wave simulations take longer to compute than acoustic simulations, an elastic simulator can construct more realistic wavefields, including shear components, and is therefore suitable for exploring the responses of elastic bodies. To overcome the long calculation times, we use a Graphic Processing Unit (GPU) to accelerate the elastic wave simulation. Because a GPU has many processors and a wide memory bandwidth, we can use it in a parallelised computing architecture. The GPU board used in this study is an NVIDIA Tesla C1060, which has 240 processors and a 102 GB/s memory bandwidth. Despite the availability of a parallel computing architecture (CUDA) developed by NVIDIA, we must optimise the usage of the different types of memory on the GPU device, and the sequence of calculations, to obtain a significant speedup of the computation. In this study, we simulate two-dimensional (2D) and three-dimensional (3D) elastic wave propagation using the Finite-Difference Time-Domain (FDTD) method on GPUs. In the wave propagation simulation, we adopt the staggered-grid method, one of the conventional FD schemes, since it achieves sufficient accuracy for numerical modelling in geophysics. Our simulator optimises memory usage on the GPU device to reduce data access times and uses the faster memory as much as possible, which is a key factor in GPU computing. By using one GPU device and optimising its memory usage, we reduced the computation time by a factor of more than 14 in the 2D simulation, and more than six in the 3D simulation, compared with one CPU. Furthermore, by using three GPUs, we accelerated the 3D simulation by a factor of 10.
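As a rough illustration of the staggered-grid FDTD update and the shared-memory staging the abstract describes, here is a minimal sketch of a single 2D velocity-update kernel. The constant density, first-order stencil, field names, and tile size are simplifying assumptions; the paper's optimised 2D/3D simulator is not reproduced.

```cuda
#define TILE 16

// One staggered-grid FDTD step for the horizontal velocity in 2D:
//   vx += dt/rho * (d(sxx)/dx + d(sxz)/dz)
// The stress tiles (plus a one-cell halo on the low side of each axis) are
// staged in shared memory so the finite differences read on-chip memory only.
__global__ void updateVx(float* vx, const float* sxx, const float* sxz,
                         int nx, int nz, float dt, float rho, float dx, float dz)
{
    __shared__ float s_sxx[TILE + 1][TILE + 1];
    __shared__ float s_sxz[TILE + 1][TILE + 1];

    const int ix = blockIdx.x * TILE + threadIdx.x;   // global grid indices
    const int iz = blockIdx.y * TILE + threadIdx.y;
    const int tx = threadIdx.x + 1, tz = threadIdx.y + 1;
    const bool inside = (ix < nx) && (iz < nz);
    const int g = iz * nx + ix;

    if (inside) {
        s_sxx[tz][tx] = sxx[g];
        s_sxz[tz][tx] = sxz[g];
        if (threadIdx.x == 0 && ix > 0) { s_sxx[tz][0] = sxx[g - 1];  s_sxz[tz][0] = sxz[g - 1]; }
        if (threadIdx.y == 0 && iz > 0) { s_sxx[0][tx] = sxx[g - nx]; s_sxz[0][tx] = sxz[g - nx]; }
    }
    __syncthreads();

    if (inside && ix > 0 && iz > 0)
        vx[g] += dt / rho * ((s_sxx[tz][tx] - s_sxx[tz][tx - 1]) / dx
                           + (s_sxz[tz][tx] - s_sxz[tz - 1][tx]) / dz);
}
```

A production simulator would use spatially varying density, higher-order stencils, and absorbing boundaries; the point of the sketch is only the memory staging that the abstract identifies as the key factor.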

Efficient Computation of Isosurface Curvatures on GPUs Based on the de Boor Algorithm (드 부어 알고리즘을 이용한 GPU에서의 효율적인 등가면 곡률 계산)

  • Kim, Minho
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.47-54 / 2017
  • In this paper, we propose an improved curvature-based GPU (Graphics Processing Unit) isosurface ray-casting technique. Our method adopts the fast evaluation method proposed by Sigg et al. [1] to find the isosurface, but replaces the computation of the gradient and Hessian with the de Boor algorithm. In this way, we reduce the number of additional texture fetches from 84 to 27, improving performance by up to approximately 30%, depending on the platform.
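For reference, a uniform cubic B-spline can be evaluated with de Boor's algorithm from four local coefficients by repeated convex combinations, which is the building block the abstract refers to. The sketch below is a generic illustration (written as a CUDA device function for consistency with the other examples), not the paper's shader code.

```cuda
// Evaluate a uniform cubic B-spline from four local coefficients c0..c3 at
// local parameter t in [0,1) with de Boor's algorithm (repeated convex
// combinations over shrinking knot spans).
__device__ float deBoorCubic(float c0, float c1, float c2, float c3, float t)
{
    // Level 1: knot spans of length 3.
    float d11 = ((1.0f - t) / 3.0f) * c0 + ((2.0f + t) / 3.0f) * c1;
    float d12 = ((2.0f - t) / 3.0f) * c1 + ((1.0f + t) / 3.0f) * c2;
    float d13 = ((3.0f - t) / 3.0f) * c2 + (t / 3.0f)          * c3;
    // Level 2: knot spans of length 2.
    float d22 = ((1.0f - t) / 2.0f) * d11 + ((1.0f + t) / 2.0f) * d12;
    float d23 = ((2.0f - t) / 2.0f) * d12 + (t / 2.0f)          * d13;
    // Level 3: knot span of length 1 gives the spline value.
    return (1.0f - t) * d22 + t * d23;
}
```

For uniform unit knot spacing, the derivative over the same span is a quadratic B-spline of the differenced coefficients (c1 − c0, c2 − c1, c3 − c2), so gradient and Hessian evaluation can reuse coefficients that have already been fetched; this is the general idea behind reducing the extra texture fetches.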

Optimization Technique for Vertex Programming on Programmable GPU (프로그래밍이 가능한 GPU 상에서의 버텍스 프로그래밍의 최적화 기법)

  • Oh, Jinsang;Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.8 no.3 / pp.25-34 / 2002
  • The recent advent of programmable graphics processors (GPUs) has not only improved rendering speed but also made it possible to perform a variety of graphics computations that earlier GPUs could not handle. As a result, techniques for moving onto the GPU graphics computations that previously had to run on the CPU are being actively studied. In this paper, we propose a shader-code optimization technique that helps implement various applications based on linear expressions efficiently on the GPU. The technique is designed around the instructions of the vertex shader, which offers SIMD-style parallel processing. To demonstrate its applicability, we applied it to problems such as the fourth-order Runge-Kutta method for solving differential equations, the Gauss-Seidel method for solving linear systems, and the wave equation for natural fluid modeling. The proposed optimization technique can be used in implementing compilers for vertex shaders, and we expect it to be useful for developing real-time graphics software on programmable GPUs.

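The paper targets 2002-era vertex-shader instruction sets, which are not reproduced here. As a loose modern stand-in for the idea of packing work into 4-wide SIMD registers, the following CUDA sketch advances four independent states per thread through one fourth-order Runge-Kutta step using float4 arithmetic; the right-hand side (dy/dt = -y) and all names are placeholders.

```cuda
// float4 arithmetic helpers (CUDA does not overload these operators by default).
__device__ inline float4 f4add(float4 a, float4 b) { return make_float4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w); }
__device__ inline float4 f4scale(float s, float4 a) { return make_float4(s * a.x, s * a.y, s * a.z, s * a.w); }

// Right-hand side of the ODE, evaluated on four packed states at once.
// Stand-in problem: dy/dt = -y.
__device__ inline float4 rhs(float4 y) { return f4scale(-1.0f, y); }

// One classical RK4 step on four packed states per thread.
__global__ void rk4Step(float4* y, int n, float h)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 yi = y[i];
    float4 k1 = rhs(yi);
    float4 k2 = rhs(f4add(yi, f4scale(0.5f * h, k1)));
    float4 k3 = rhs(f4add(yi, f4scale(0.5f * h, k2)));
    float4 k4 = rhs(f4add(yi, f4scale(h, k3)));
    float4 incr = f4add(f4add(k1, k4), f4scale(2.0f, f4add(k2, k3)));
    y[i] = f4add(yi, f4scale(h / 6.0f, incr));     // y += h/6 * (k1 + 2k2 + 2k3 + k4)
}
```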

Fast Self-Collision Handling in Cloth Simulations Using GPU-based Optimized BVH and R-Triangle (GPU 기반의 최적화된 BVH와 R-Triangle을 이용한 옷감 시뮬레이션에서의 빠른 자기충돌 처리)

  • Moon, Seong-Hyeok;Kim, Jong-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.373-376 / 2022
  • In this paper, we introduce a GPU-based method for accelerating self-collision handling, which is computationally expensive in triangle-mesh-based cloth simulation. To optimize the work in parallel with CUDA, we 1) propose a method to efficiently build, update, and traverse on the GPU the BVH (bounding volume hierarchy) tree whose collision tests are normally computed recursively, and 2) introduce a way to optimize the R-Triangle technique on the GPU to minimize redundant primitive collision tests on triangle meshes. As a result, the proposed technique handles both self-collisions and object collisions in cloth simulation quickly and efficiently in a GPU environment, and experiments on a variety of scenes produced fast simulation results in every case.

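The abstract describes building, updating, and traversing a BVH on the GPU. As one small, illustrative piece of such a pipeline, the sketch below refits leaf bounding boxes from the deformed cloth vertices each frame and shows the AABB overlap test used during traversal. Structures, the margin parameter, and names are assumptions; the paper's build/traversal scheme and R-Triangle filtering are not reproduced.

```cuda
struct AABB { float3 lo, hi; };

__device__ inline float3 fmin3(float3 a, float3 b) { return make_float3(fminf(a.x, b.x), fminf(a.y, b.y), fminf(a.z, b.z)); }
__device__ inline float3 fmax3(float3 a, float3 b) { return make_float3(fmaxf(a.x, b.x), fmaxf(a.y, b.y), fmaxf(a.z, b.z)); }

// Refit one leaf box per triangle from the current (deformed) vertex positions.
__global__ void refitLeaves(const float3* verts, const int3* tris,
                            AABB* leafBoxes, int numTris, float margin)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTris) return;
    float3 a = verts[tris[t].x], b = verts[tris[t].y], c = verts[tris[t].z];
    float3 lo = fmin3(a, fmin3(b, c));
    float3 hi = fmax3(a, fmax3(b, c));
    // Inflate slightly so proximity (not only exact intersection) is caught.
    leafBoxes[t].lo = make_float3(lo.x - margin, lo.y - margin, lo.z - margin);
    leafBoxes[t].hi = make_float3(hi.x + margin, hi.y + margin, hi.z + margin);
}

// AABB overlap test used while walking the hierarchy.
__device__ inline bool overlaps(const AABB& a, const AABB& b)
{
    return a.lo.x <= b.hi.x && b.lo.x <= a.hi.x &&
           a.lo.y <= b.hi.y && b.lo.y <= a.hi.y &&
           a.lo.z <= b.hi.z && b.lo.z <= a.hi.z;
}
```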

Molecular Docking System using Parallel GPU (병렬 GPU를 이용한 분자 도킹 시스템)

  • Park, Sung-Jun
    • The Journal of the Korea Contents Association / v.8 no.12 / pp.441-448 / 2008
  • A molecular docking system requires a large amount of computation and super-computing power. Since the experiments take a long time, they are usually conducted in a distributed or grid environment. Recently, research on using parallel GPUs, whose performance in scientific computing far exceeds that of CPUs, has been very active; CUDA is a technology that makes such parallel GPU programming possible. This study proposes a molecular docking system using CUDA, together with an algorithm that parallelizes the energy-minimization computation. To verify the approach, the study compares the time required for molecular docking on a general CPU with the time and performance of the proposed parallel GPU-based molecular docking.
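To make the energy-minimization workload concrete, here is a minimal sketch of the kind of scoring step a docking code parallelises on the GPU: each thread sums a pairwise Lennard-Jones plus Coulomb energy for one ligand atom over all receptor atoms. The force-field form, constants, and names are placeholders, not the paper's scoring function.

```cuda
struct Atom { float x, y, z, charge; };

// One thread per ligand atom; the per-atom sums are accumulated into a single
// total with atomicAdd (float atomics require compute capability 2.0+).
__global__ void interactionEnergy(const Atom* ligand, int nLig,
                                  const Atom* receptor, int nRec,
                                  float epsilon, float sigma, float* energy)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nLig) return;
    Atom a = ligand[i];
    float e = 0.0f;
    for (int j = 0; j < nRec; ++j) {
        Atom b = receptor[j];
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float r2 = dx * dx + dy * dy + dz * dz + 1e-6f;   // guard against r = 0
        float s2 = (sigma * sigma) / r2;
        float s6 = s2 * s2 * s2;
        e += 4.0f * epsilon * (s6 * s6 - s6)              // Lennard-Jones term
           + 332.0f * a.charge * b.charge * rsqrtf(r2);   // Coulomb term (kcal/mol-style constant)
    }
    atomicAdd(energy, e);
}
```

An energy-minimization loop would call this (or a per-pose variant) repeatedly while perturbing the ligand pose and keeping the lowest-energy configuration.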

GPU Based Incremental Connected Component Processing in Dynamic Graphs (동적 그래프에서 GPU 기반의 점진적 연결 요소 처리)

  • Kim, Nam-Young;Choi, Do-Jin;Bok, Kyoung-Soo;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.22 no.6 / pp.56-68 / 2022
  • Recently, as the demand for real-time processing increases, studies on dynamic graphs that change over time have been actively conducted. One of the algorithms for analyzing dynamic graphs is connected components processing. GPUs are suitable for large-scale graph computation because of their high memory bandwidth and computational performance. However, when computing the connected components of a dynamic graph on the GPU, frequent data exchange occurs between the CPU and the GPU during real graph processing because of the GPU's limited memory. The proposed scheme utilizes the Weighted-Quick-Union algorithm to process large-scale graphs on the GPU. It supports fast connected component computation by carrying the component size in the connected component label. It computes the connected components by determining the parts that need to be recalculated and minimizing the data transmitted to the GPU. In addition, we propose a processing structure in which the GPU and the CPU execute asynchronously to reduce the data transfer time between them. We show the effectiveness of the proposed scheme through performance evaluations on real datasets.
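For reference, the Weighted-Quick-Union structure the abstract builds on links the root of the smaller component under the root of the larger one, keeping find operations logarithmic. The sketch below is a plain host-side version of that structure only; the paper's GPU kernels, incremental recomputation, and asynchronous CPU/GPU pipeline are not shown.

```cuda
#include <utility>
#include <vector>

// Weighted Quick-Union with size tracking (host-side illustration).
struct WeightedQuickUnion {
    std::vector<int> parent, size;

    explicit WeightedQuickUnion(int n) : parent(n), size(n, 1) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int v) {
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];   // path halving keeps trees shallow
            v = parent[v];
        }
        return v;
    }
    void unite(int a, int b) {               // link roots by component size
        int ra = find(a), rb = find(b);
        if (ra == rb) return;
        if (size[ra] < size[rb]) std::swap(ra, rb);
        parent[rb] = ra;                      // smaller tree goes under the larger
        size[ra] += size[rb];
    }
};
```

Processing a dynamic graph then amounts to calling unite() for inserted edges and recomputing only the affected components when edges are deleted, which is the part the paper accelerates and partitions between CPU and GPU.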

Implementation of a GPU Cluster System using Inexpensive Graphics Devices (저가의 그래픽스 장치를 이용한 GPU 클러스터 시스템 구현)

  • Lee, Jong-Min;Lee, Jung-Hwa;Kim, Seong-Woo
    • Journal of Korea Multimedia Society / v.14 no.11 / pp.1458-1466 / 2011
  • Recently, research on GPGPU has been carried out actively as the performance of GPUs has increased rapidly. In this paper, we propose a system architecture for a cost-effective GPU cluster built from low-cost graphics devices, benchmarking existing supercomputer architectures, and implement a GPU cluster system with eight GPUs. We also build a software development environment suited to the GPU cluster and use it for performance evaluation by implementing the n-body problem. The results show that using multiple GPUs is efficient when the problem size is large, because of the communication cost. In addition, we could compute up to eight million celestial bodies by calculating block by block, which mitigates the problem-size constraint imposed by the limited resources of a GPU.
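As an illustration of the n-body workload used in the evaluation, the following sketch is the classic tiled CUDA kernel that computes gravitational accelerations by staging bodies in shared memory tile by tile; processing bodies in blocks like this is also what allows more bodies than fit in device memory at once to be handled in pieces. The softening term and parameter names are assumptions, and the cluster-level distribution across eight GPUs is not shown.

```cuda
#define BLOCK 256

// All-pairs gravitational accelerations, tiled through shared memory.
// bodies[i].xyz = position, bodies[i].w = mass (with G folded into the masses).
__global__ void accelerations(const float4* bodies, float4* accel, int n, float soft2)
{
    __shared__ float4 tile[BLOCK];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float4 pi = (i < n) ? bodies[i] : make_float4(0.f, 0.f, 0.f, 0.f);
    float ax = 0.f, ay = 0.f, az = 0.f;

    for (int base = 0; base < n; base += BLOCK) {
        int j = base + threadIdx.x;
        // Out-of-range slots get zero mass, so they contribute nothing.
        tile[threadIdx.x] = (j < n) ? bodies[j] : make_float4(0.f, 0.f, 0.f, 0.f);
        __syncthreads();
        for (int k = 0; k < BLOCK; ++k) {
            float4 pj = tile[k];
            float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
            float r2 = dx * dx + dy * dy + dz * dz + soft2;   // softened distance
            float invR = rsqrtf(r2);
            float s = pj.w * invR * invR * invR;
            ax += s * dx;  ay += s * dy;  az += s * dz;
        }
        __syncthreads();
    }
    if (i < n) accel[i] = make_float4(ax, ay, az, 0.f);
}
```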

A 2D GPU-Accelerated High Resolution Numerical Scheme for Solving Diffusive Wave Equation (고해상도 수치기법을 이용한 GPU 기반 2D 확산파 모형)

  • Park, Seonryang;Kim, Dae-Hong
    • Proceedings of the Korea Water Resources Association Conference / 2019.05a / pp.109-109 / 2019
  • In this study, we developed a GPU-based diffusive wave model for simulating the rainfall-runoff process. The finite volume method was used as the numerical scheme for solving the diffusive wave equation, and the physical quantities at each cell interface were reconstructed using the MUSCL scheme with the van Leer TVD limiter. The Horton infiltration model was used to account for infiltration. Using the developed model, rainfall-runoff processes were simulated on a 1D single overland plane and a 2D V-shaped overland, and the accuracy of the model was verified by comparing the results with an analytical solution and with numerical results from a dynamic wave model, respectively. The model was also applied to highly irregular 1D and 2D topographies to confirm that it yields a physically reasonable description of the rainfall-runoff process. Finally, it was applied to a complex real terrain, and the adequacy of the diffusive wave model for a real watershed was verified by comparison with measured values. The GPU-based model was developed using an NVIDIA Geforce GTX 1050 GPU and NVIDIA CUDA-Fortran, which exposes the GPU's parallel processing capability. Comparing the computational speed of the GPU-based model against a CPU-based model (Intel i7, 4.70 GHz) on a Windows PC, the efficiency gain of the GPU-based model grew as the grid size increased, reaching up to about 150 times that of the CPU-based model for a 3200 × 3200 grid.

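As a small illustration of the MUSCL reconstruction with the van Leer limiter mentioned above, the sketch below computes limited slopes and second-order interface values along one direction. It is written in CUDA C for consistency with the other examples (the paper uses CUDA-Fortran), and the 2D solver, fluxes, Horton infiltration, and source terms are not reproduced.

```cuda
// Van Leer slope limiter in its harmonic-mean form: the limited slope is zero
// at local extrema, which keeps the reconstruction TVD.
__device__ inline float vanLeerSlope(float left, float center, float right)
{
    float a = center - left;    // backward difference
    float b = right - center;   // forward difference
    return (a * b > 0.0f) ? 2.0f * a * b / (a + b) : 0.0f;
}

// MUSCL reconstruction: extrapolate the cell-centered value h[i] to the two
// faces of cell i using the limited slope (1D layout for illustration).
__global__ void reconstructInterfaces(const float* h, float* hL, float* hR, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 1 || i >= n - 1) return;             // skip boundary cells
    float s = vanLeerSlope(h[i - 1], h[i], h[i + 1]);
    hL[i] = h[i] + 0.5f * s;   // value at the right face of cell i
    hR[i] = h[i] - 0.5f * s;   // value at the left face of cell i
}
```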

Parallel Design and Implementation of Shot Boundary Detection Algorithm (샷 경계 탐지 알고리즘의 병렬 설계와 구현)

  • Lee, Joon-Goo;Kim, SeungHyun;You, Byoung-Moon;Hwang, DooSung
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.76-84 / 2014
  • As the number of high-density videos increases, parallel processing approaches are necessary to process large-scale video data. When a video processing method consists of thousands of simple operations, GPU-based parallel processing is preferred to CPU-based parallel processing as a way of reducing the time and space complexity of the computation. This paper studies the parallel design and implementation of a shot-boundary detection algorithm. The proposed algorithm uses pixel-brightness comparisons and global histogram data among the blocks of frames, and the computation of these data exhibits a high degree of parallelism. To exploit this parallelism, the pixel-brightness and histogram computations are designed in parallel and implemented on an NVIDIA GPU. The GPU-based shot detection method is tested with 10 videos from the National Archives of Korea. In the experiments, the detection rate is similar to that of the CPU-based algorithm, but the computation is about 10 times faster.
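As an illustration of the histogram part of the workload, the sketch below builds a 256-bin brightness histogram per frame using per-block shared-memory bins, and a host helper compares consecutive frames' histograms; a shot boundary would be flagged when the distance exceeds a threshold. The bin count, distance measure, and threshold policy are assumptions, and the paper's block-wise pixel-brightness comparison is not reproduced.

```cuda
#define BINS 256

// Grid-stride histogram: each block accumulates into shared-memory bins and
// then merges them into the global histogram with atomics.
__global__ void brightnessHistogram(const unsigned char* frame, int numPixels,
                                    unsigned int* hist)
{
    __shared__ unsigned int local[BINS];
    for (int b = threadIdx.x; b < BINS; b += blockDim.x) local[b] = 0u;
    __syncthreads();

    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < numPixels;
         i += gridDim.x * blockDim.x)
        atomicAdd(&local[frame[i]], 1u);
    __syncthreads();

    for (int b = threadIdx.x; b < BINS; b += blockDim.x)
        atomicAdd(&hist[b], local[b]);
}

// Host-side comparison: normalized L1 distance between consecutive histograms;
// a value above a chosen threshold suggests a shot boundary.
float histogramDistance(const unsigned int* h1, const unsigned int* h2, int numPixels)
{
    long long diff = 0;
    for (int b = 0; b < BINS; ++b) {
        long long d = (long long)h1[b] - (long long)h2[b];
        diff += (d < 0) ? -d : d;
    }
    return (float)diff / (2.0f * numPixels);
}
```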