• Title/Summary/Keyword: GPU Parallelism (GPU 병렬성)

Performance Evaluation of the GPU Architecture Executing Parallel Applications (병렬 응용프로그램 실행 시 GPU 구조에 따른 성능 분석)

  • Choi, Hong-Jun; Kim, Cheol-Hong
    • The Journal of the Korea Contents Association / v.12 no.5 / pp.10-21 / 2012
  • The role of the GPU has evolved from graphics-specific processing to general-purpose processing with the development of the unified shader core architecture. In particular, execution methods for general-purpose parallel applications on the GPU have been researched intensively, since the parallel hardware can be utilized efficiently when such applications are executed. However, the current GPU architecture still has limitations in executing general-purpose parallel applications, because the GPU is not yet specialized for general-purpose computing. To improve GPU performance for general-purpose parallel applications, the GPU architecture should evolve. In this work, we analyze GPU performance as the architecture varies in the number of cores and in clock frequency. Our simulation results show that GPU performance improves by up to 125.8% as the number of cores increases and by up to 16.2% as the clock frequency increases. However, the improvement saturates even as the number of cores and the clock frequency continue to grow, because the limited memory bandwidth cannot supply data to the GPU fast enough. Consequently, to achieve high performance efficiency on the GPU, computational resources must be provisioned with the memory system in mind.
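
The saturation effect described in this abstract can be illustrated with a simple roofline-style estimate. This is not the paper's GPGPU-Sim methodology; every number below (flops per cycle, clock, bandwidth, arithmetic intensity) is an assumed placeholder. The point is only that once the compute peak grows past what the memory system can feed, adding cores no longer raises attainable throughput.

```cuda
// Back-of-the-envelope roofline illustration (host-side C++, compiles with
// any C++ compiler or nvcc). All constants are assumptions for illustration.
#include <algorithm>
#include <cstdio>

int main() {
    const double flops_per_core_per_cycle = 2.0;   // assumed (one FMA per cycle)
    const double clock_ghz                = 1.0;   // assumed shader clock
    const double bandwidth_gbs            = 150.0; // assumed DRAM bandwidth
    const double arith_intensity          = 4.0;   // assumed FLOP per byte of a kernel

    for (int cores = 128; cores <= 2048; cores *= 2) {
        double compute_peak = cores * flops_per_core_per_cycle * clock_ghz; // GFLOP/s
        double memory_peak  = bandwidth_gbs * arith_intensity;              // GFLOP/s
        // Attainable throughput is capped by whichever resource runs out first.
        printf("%4d cores: attainable %.1f GFLOP/s\n",
               cores, std::min(compute_peak, memory_peak));
    }
    return 0;
}
```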

A Study on the Database Research Trends of Exploiting GPU Capabilities (데이터베이스에서의 GPU 활용 동향 연구)

  • Choi, Young-Hwan; Yeo, Eunji; Lee, Hyungseok; Lim, Hyo-sang
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.747-750 / 2015
  • The high computational power and parallelism of the GPU (Graphic Processing Unit) are being exploited not only in graphics but in many other fields. This paper surveys studies that apply GPUs in the database field, showing the importance of GPU utilization for databases and the need for such research to be pursued actively. It also analyzes how each study exploits GPU parallelism, providing important clues as to how GPUs can be applied in other database-related research.

GPU for Multi-Layer Perceptron (다층 신경망 구현에서의 GPU 사용)

  • 정기철; 오경수
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.736-738 / 2004
  • Much effort has been devoted to running the test phase of neural networks in real time. This paper implements a faster neural network on commodity graphics hardware and validates its usefulness by applying the system to image-processing tasks. The GPU is more effective than the CPU for parallel arithmetic. To exploit GPU parallelism efficiently, many neural-network input vectors and weight vectors are gathered so that a large number of inner products are replaced by a single matrix multiplication, and the sigmoid and bias-addition operations are also parallelized in pixel shaders. Implemented on an ATI RADEON 9800 XT board, the neural network system achieves roughly a 30-fold speedup over the existing CPU-based system with no loss of accuracy.
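
A minimal CUDA sketch of the batching idea described in this abstract, for illustration only: the paper's implementation ran in pixel shaders on an ATI RADEON 9800 XT, not CUDA, and the layer sizes, data, and kernel below are assumptions. It gathers many input vectors into the columns of one matrix so that all dot products become a single matrix multiply, with the bias addition and sigmoid fused into the same kernel.

```cuda
#include <cuda_runtime.h>
#include <cmath>
#include <cstdio>

// Y[o][s] = sigmoid( sum_i W[o][i] * X[i][s] + b[o] )
// W: outputs x inputs (row-major), X: inputs x samples (one column per input vector).
__global__ void mlp_layer(const float* W, const float* X, const float* b,
                          float* Y, int outputs, int inputs, int samples) {
    int o = blockIdx.y * blockDim.y + threadIdx.y;   // output neuron
    int s = blockIdx.x * blockDim.x + threadIdx.x;   // sample (input vector)
    if (o >= outputs || s >= samples) return;

    float acc = b[o];
    for (int i = 0; i < inputs; ++i)
        acc += W[o * inputs + i] * X[i * samples + s];
    Y[o * samples + s] = 1.0f / (1.0f + expf(-acc)); // sigmoid fused in
}

int main() {
    const int outputs = 64, inputs = 256, samples = 1024;  // assumed sizes
    float *W, *X, *b, *Y;                 // unified memory keeps the demo short
    cudaMallocManaged(&W, outputs * inputs  * sizeof(float));
    cudaMallocManaged(&X, inputs  * samples * sizeof(float));
    cudaMallocManaged(&b, outputs * sizeof(float));
    cudaMallocManaged(&Y, outputs * samples * sizeof(float));
    for (int i = 0; i < outputs * inputs;  ++i) W[i] = 0.01f;  // placeholder weights
    for (int i = 0; i < inputs  * samples; ++i) X[i] = 1.0f;   // placeholder inputs
    for (int i = 0; i < outputs;           ++i) b[i] = 0.0f;

    dim3 block(16, 16);
    dim3 grid((samples + 15) / 16, (outputs + 15) / 16);
    mlp_layer<<<grid, block>>>(W, X, b, Y, outputs, inputs, samples);
    cudaDeviceSynchronize();
    printf("Y[0] = %f\n", Y[0]);
    cudaFree(W); cudaFree(X); cudaFree(b); cudaFree(Y);
    return 0;
}
```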

Analysis of GPU Performance and Memory Efficiency according to Task Processing Units (작업 처리 단위 변화에 따른 GPU 성능과 메모리 접근 시간의 관계 분석)

  • Son, Dong Oh; Sim, Gyu Yeon; Kim, Cheol Hong
    • Smart Media Journal / v.4 no.4 / pp.56-63 / 2015
  • Modern GPUs execute massively parallel computations by exploiting many GPU cores. The GPGPU architecture, one approach to exploiting the GPU's abundant computational resources, executes general-purpose applications as well as graphics applications effectively. In this paper, we investigate how the number of CTAs (Cooperative Thread Arrays) per SM (Streaming Multiprocessor) affects memory efficiency and performance, since analyzing this relation gives researchers studying GPUs insight into improving performance. Our simulation results show that most benchmarks gain performance as the number of CTAs per SM increases. Some benchmarks, however, show no improvement, either because the kernel generates only a few CTAs or because not enough CTAs execute simultaneously. To classify the performance behavior more precisely, we also analyze the relation between performance and memory stalls, DRAM stalls caused by interconnect congestion, and pipeline stalls at the memory stage. We expect these results to help studies aimed at improving parallelism and memory efficiency in GPGPU architectures.
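
On real hardware (as opposed to the simulator used in the study above), the closest handle on "CTAs per SM" is the CUDA occupancy API. The sketch below is only an illustration with an invented placeholder kernel: it reports how many CTAs of that kernel can be resident on one SM for different block sizes, which is the knob the paper's analysis varies.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder kernel (assumption): its register and shared-memory footprint
// is what limits how many of its CTAs fit on one SM.
__global__ void dummy_kernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    for (int blockSize = 64; blockSize <= 1024; blockSize *= 2) {
        int ctasPerSM = 0;
        // Ask the runtime how many CTAs of dummy_kernel can be resident per SM.
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(
            &ctasPerSM, dummy_kernel, blockSize, /*dynamicSMemSize=*/0);
        printf("block size %4d -> %d CTAs per SM (%d resident threads)\n",
               blockSize, ctasPerSM, ctasPerSM * blockSize);
    }
    return 0;
}
```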

An MPI-CUDA Implementation for Parallel Scalability on Multi-GPU Clusters (멀티-GPU 기반 MPI-CUDA 병렬 성능 확장성)

  • Yi, Hong-Suk; Lee, Seung-Min
    • Proceedings of the Korean Information Science Society Conference / 2012.06a / pp.13-15 / 2012
  • Thanks to very high performance at low development cost, modern GPUs have become an essential resource for large-scale computational science. In this paper, we develop a large-scale Monte Carlo algorithm that applies GPU computing on a multi-GPU cluster system. By combining MPI and CUDA, parallel scalability is obtained up to eight GPUs. The scalability analysis shows that, on a multi-GPU cluster, data communication between GPUs is a decisive factor in the overall performance of the program.
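
A compact MPI+CUDA sketch of the pattern this abstract describes: one MPI rank per GPU, an independent Monte Carlo kernel on each device, and an MPI reduction of the partial results. The paper's actual Monte Carlo application is not specified here, so a pi estimate stands in; the rank-to-GPU mapping, seeds, and sample counts are assumptions.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>

// Each thread draws its own random points and counts hits inside the unit circle.
__global__ void count_hits(unsigned long long* hits, long samples_per_thread,
                           unsigned long long seed) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState st;
    curand_init(seed, tid, 0, &st);
    unsigned long long local = 0;
    for (long i = 0; i < samples_per_thread; ++i) {
        float x = curand_uniform(&st), y = curand_uniform(&st);
        if (x * x + y * y <= 1.0f) ++local;
    }
    atomicAdd(hits, local);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int nGpus = 0;
    cudaGetDeviceCount(&nGpus);
    cudaSetDevice(rank % nGpus);            // one GPU per rank (assumption)

    const int blocks = 128, threads = 256;
    const long perThread = 10000;
    unsigned long long* d_hits;
    cudaMalloc(&d_hits, sizeof(unsigned long long));
    cudaMemset(d_hits, 0, sizeof(unsigned long long));
    count_hits<<<blocks, threads>>>(d_hits, perThread, 1234ULL + rank);
    unsigned long long h_hits = 0;
    cudaMemcpy(&h_hits, d_hits, sizeof(h_hits), cudaMemcpyDeviceToHost);

    // Combine per-GPU partial counts on rank 0.
    unsigned long long total = 0;
    MPI_Reduce(&h_hits, &total, 1, MPI_UNSIGNED_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        double samples = (double)size * blocks * threads * perThread;
        printf("pi ~ %f\n", 4.0 * (double)total / samples);
    }
    cudaFree(d_hits);
    MPI_Finalize();
    return 0;
}
```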

Performance Enhancement of GPU Parallelism Algorithm including Memory Loading Time (메모리 로딩 시간을 고려한 GPU 병렬 알고리즘의 성능 개선 방안)

  • Bae, Byunggul; Lee, Jinwoo; Park, Il-Nam; Im, Eun-Jin; Kang, Seung-Shik
    • Annual Conference on Human and Language Technology / 2012.10a / pp.119-120 / 2012
  • The overall performance of a GPU parallel algorithm depends on which type of memory it uses. This paper studies how memory loading time affects overall system performance when parallel processing in the CUDA framework on a GPU is used to speed up a document classification system. We measure the speedup over an existing CPU implementation and, unlike previous work, compare realistic execution times that include the time required to load memory. The experimental results show that the document classification system is faster when implemented with the GPU's texture memory than when implemented on the CPU.
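
A hedged sketch of the texture-memory path this abstract reports on: the actual document classifier is not shown, so a plain gather through a CUDA texture object stands in, and the table size and (zero-filled) index buffer are placeholders. Routing read-only lookups through the texture cache is what distinguishes this variant from a plain global-memory implementation.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Read table entries through the texture cache (read-only path).
__global__ void gather_tex(cudaTextureObject_t tex, const int* idx,
                           float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch<float>(tex, idx[i]);
}

int main() {
    const int n = 1 << 20;                 // assumed table size
    float* d_table;  int* d_idx;  float* d_out;
    cudaMalloc(&d_table, n * sizeof(float));
    cudaMalloc(&d_idx,   n * sizeof(int));
    cudaMalloc(&d_out,   n * sizeof(float));
    cudaMemset(d_table, 0, n * sizeof(float));  // placeholder data
    cudaMemset(d_idx,   0, n * sizeof(int));    // placeholder indices (all zero)

    // Describe the linear device buffer as a texture resource.
    cudaResourceDesc res = {};
    res.resType                = cudaResourceTypeLinear;
    res.res.linear.devPtr      = d_table;
    res.res.linear.desc        = cudaCreateChannelDesc<float>();
    res.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc td = {};
    td.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &res, &td, nullptr);

    gather_tex<<<(n + 255) / 256, 256>>>(tex, d_idx, d_out, n);
    cudaDeviceSynchronize();
    printf("done\n");

    cudaDestroyTextureObject(tex);
    cudaFree(d_table); cudaFree(d_idx); cudaFree(d_out);
    return 0;
}
```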

MSHR-Aware Dynamic Warp Scheduler for High Performance GPUs (GPU 성능 향상을 위한 MSHR 활용률 기반 동적 워프 스케줄러)

  • Kim, Gwang Bok; Kim, Jong Myon; Kim, Cheol Hong
    • KIPS Transactions on Computer and Communication Systems / v.8 no.5 / pp.111-118 / 2019
  • Recent graphics processing units (GPUs) provide high throughput by using powerful hardware resources. However, massive memory accesses cause GPU performance degradation due to cache inefficiency. Therefore, GPU performance can be improved by reducing thread parallelism when the cache suffers from memory contention. In this paper, we propose a dynamic warp scheduler that controls thread parallelism according to the degree of cache contention. In general, the greedy-then-oldest (GTO) warp issue policy exhibits lower parallelism than the loose round-robin (LRR) policy. Therefore, the proposed warp scheduler employs the LRR policy when Miss Status Holding Register (MSHR) utilization is low, and switches to the GTO policy to reduce thread parallelism when MSHR utilization is high. By selecting the more efficient scheduling policy dynamically, the proposed technique outperforms both LRR and GTO. According to our experimental results, it improves IPC by 12.8% and 3.5% over LRR and GTO on average, respectively.
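
The scheduler itself lives in the SM's issue stage (the paper evaluates it in simulation), so it cannot be written as runnable device code. The plain host-side C++ sketch below (compiles with any C++ compiler or nvcc) only illustrates the decision rule, with an invented utilization threshold and a simplified "oldest warp" stand-in for full greedy-then-oldest behavior.

```cuda
#include <vector>
#include <algorithm>
#include <cstdio>

struct Warp { int id; long long last_issue_cycle; };

// Pick the next warp to issue: loose round-robin while MSHRs are lightly used,
// a GTO-like "stick to the oldest warp" choice once utilization crosses a threshold.
int pick_warp(const std::vector<Warp>& ready, double mshr_utilization,
              int& rr_cursor) {
    const double kHighUtil = 0.8;                         // assumed threshold
    if (mshr_utilization < kHighUtil) {
        rr_cursor = (rr_cursor + 1) % (int)ready.size();  // LRR rotation
        return ready[rr_cursor].id;
    }
    // Concentrate issue on the oldest warp to shrink the memory footprint.
    auto oldest = std::min_element(ready.begin(), ready.end(),
        [](const Warp& a, const Warp& b) {
            return a.last_issue_cycle < b.last_issue_cycle;
        });
    return oldest->id;
}

int main() {
    std::vector<Warp> ready = {{0, 5}, {1, 2}, {2, 9}, {3, 1}};
    int cursor = 0;
    printf("low MSHR load  -> issue warp %d\n", pick_warp(ready, 0.30, cursor));
    printf("high MSHR load -> issue warp %d\n", pick_warp(ready, 0.95, cursor));
    return 0;
}
```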

Multi GPU Based Image Registration for Cerebrovascular Extraction and Interactive Visualization (뇌혈관 추출과 대화형 가시화를 위한 다중 GPU기반 영상정합)

  • Park, Seong-Jin; Shin, Yeong-Gil
    • Journal of KIISE: Computing Practices and Letters / v.15 no.6 / pp.445-449 / 2009
  • In this paper, we propose a computationally efficient, multi-GPU-accelerated image registration technique to correct the motion difference between a pre-contrast CT image and a post-contrast CTA image. Our method consists of two steps: multi-GPU-based image registration and cerebrovascular visualization. First, it computes a similarity measure for voxel-based registration, exploiting parallelism across the GPUs as well as within each GPU. Then, it subtracts the CT image, transformed by the optimal transformation matrix, from the CTA image and visualizes the subtracted volume with a GPU-based volume rendering technique. We compare the proposed method with existing methods on 5 pairs of pre-contrast brain CT and post-contrast brain CTA images with respect to visual quality and computation time. Experimental results show that our method visualizes cerebral vessels well, which helps in diagnosing vascular disease. In total processing time, the multi-GPU approach is 11.6 times faster than the CPU-based approach and 1.4 times faster than the single-GPU approach.
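
A hedged sketch of the multi-GPU split behind the similarity-measure step above: the paper's similarity measure and volume sizes are not given here, so a sum of squared differences over zero-filled placeholder slabs stands in, with each GPU reducing its own slab and the host combining the partial sums.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Per-voxel squared difference, accumulated with a simple (not the fastest) atomic reduction.
__global__ void ssd_partial(const float* a, const float* b, float* sum, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float d = a[i] - b[i];
        atomicAdd(sum, d * d);
    }
}

int main() {
    const int voxels = 1 << 22;                 // assumed volume size
    int nGpus = 0;
    cudaGetDeviceCount(&nGpus);
    if (nGpus == 0) { printf("no CUDA device\n"); return 0; }

    int chunk = voxels / nGpus;                 // one slab per GPU (assumes even split)
    std::vector<float*> dA(nGpus), dB(nGpus), dSum(nGpus);

    // Launch one slab per GPU; the kernels run concurrently across devices.
    for (int g = 0; g < nGpus; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&dA[g],   chunk * sizeof(float));
        cudaMalloc(&dB[g],   chunk * sizeof(float));
        cudaMalloc(&dSum[g], sizeof(float));
        cudaMemset(dA[g], 0, chunk * sizeof(float));    // placeholder volumes
        cudaMemset(dB[g], 0, chunk * sizeof(float));
        cudaMemset(dSum[g], 0, sizeof(float));
        ssd_partial<<<(chunk + 255) / 256, 256>>>(dA[g], dB[g], dSum[g], chunk);
    }

    // Gather the per-GPU partial sums on the host and combine them.
    double total = 0.0;
    for (int g = 0; g < nGpus; ++g) {
        cudaSetDevice(g);
        float partial = 0.0f;
        cudaMemcpy(&partial, dSum[g], sizeof(float), cudaMemcpyDeviceToHost);
        total += partial;
        cudaFree(dA[g]); cudaFree(dB[g]); cudaFree(dSum[g]);
    }
    printf("SSD = %f\n", total);
    return 0;
}
```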

Parallel Design and Implementation of Shot Boundary Detection Algorithm (샷 경계 탐지 알고리즘의 병렬 설계와 구현)

  • Lee, Joon-Goo; Kim, SeungHyun; You, Byoung-Moon; Hwang, DooSung
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.76-84 / 2014
  • As the amount of high-density video grows, parallel processing approaches are necessary to handle large-scale video data. When a video-processing method requires thousands of simple operations, GPU-based parallel processing is preferred to CPU-based parallel processing because it reduces the time and space complexity of the computation. This paper studies the parallel design and implementation of a shot-boundary detection algorithm. The proposed algorithm uses pixel brightness comparisons and global histogram data across blocks of frames, and these computations exhibit a high degree of parallelism. To exploit this parallelism fully, the pixel brightness and histogram computations are designed for parallel execution and implemented on an NVIDIA GPU. The GPU-based shot detection method is tested with 10 videos from the National Archives of Korea. In the experiments, the detection rate is similar to that of the CPU-based algorithm, while the computation is about 10 times faster.
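
A sketch of the histogram half of the pipeline described above (the block-wise pixel brightness comparisons and the boundary threshold are left out, and the frame here is a synthetic solid-gray buffer): each block builds a privatized histogram in shared memory and merges it into a global one, and the host then differences the histograms of consecutive frames.

```cuda
#include <cuda_runtime.h>
#include <cmath>
#include <cstdio>

constexpr int kBins = 256;

__global__ void brightness_histogram(const unsigned char* frame, int pixels,
                                     unsigned int* hist) {
    __shared__ unsigned int local[kBins];
    for (int b = threadIdx.x; b < kBins; b += blockDim.x) local[b] = 0;
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    for (; i < pixels; i += stride)
        atomicAdd(&local[frame[i]], 1u);   // per-block histogram in shared memory
    __syncthreads();

    for (int b = threadIdx.x; b < kBins; b += blockDim.x)
        atomicAdd(&hist[b], local[b]);     // merge into the global histogram
}

// Host-side histogram difference between two frames; a large value suggests
// a shot boundary (the threshold choice is application-specific).
double hist_diff(const unsigned int* h1, const unsigned int* h2, int pixels) {
    double d = 0.0;
    for (int b = 0; b < kBins; ++b)
        d += std::fabs((double)h1[b] - (double)h2[b]);
    return d / pixels;
}

int main() {
    const int pixels = 1920 * 1080;
    unsigned char* d_frame;  unsigned int* d_hist;
    cudaMalloc(&d_frame, pixels);
    cudaMalloc(&d_hist, kBins * sizeof(unsigned int));
    cudaMemset(d_frame, 128, pixels);                 // flat gray test frame
    cudaMemset(d_hist, 0, kBins * sizeof(unsigned int));

    brightness_histogram<<<256, 256>>>(d_frame, pixels, d_hist);

    unsigned int h[kBins];
    cudaMemcpy(h, d_hist, sizeof(h), cudaMemcpyDeviceToHost);
    printf("pixels in bin 128: %u\n", h[128]);
    printf("diff vs identical frame: %.3f\n", hist_diff(h, h, pixels));
    cudaFree(d_frame); cudaFree(d_hist);
    return 0;
}
```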

Improving the Performance of Document Similarity by using GPU Parallelism (GPU 병렬성을 이용한 문서 유사도 계산 성능 개선)

  • Park, Il-Nam; Bae, Byung-Gurl; Im, Eun-Jin; Kang, Seung-Shik
    • The KIPS Transactions: Part B / v.19B no.4 / pp.243-248 / 2012
  • In information retrieval systems such as vector-model implementations and document clustering, document similarity calculation accounts for a large part of overall system performance. In this paper, GPU parallelism is explored to increase the speed of document similarity calculation within a CUDA framework. The proposed method makes the similarity calculation almost 15 times faster than a typical CPU-based implementation, and 5.2 and 3.4 times faster than methods using CUBLAS and Thrust, respectively.
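
As a point of reference for the CUBLAS-based variant mentioned in this abstract (the paper's own, faster kernel is not shown, and the term/document counts, layout, and placeholder data are assumptions), the sketch below computes all pairwise document similarities in one call: with unit-length document vectors as the columns of D, the Gram matrix D^T D is exactly the cosine-similarity matrix.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int terms = 4096, docs = 512;              // assumed sizes
    float *dD, *dS;
    cudaMalloc(&dD, terms * docs * sizeof(float));   // column-major: terms x docs
    cudaMalloc(&dS, docs * docs * sizeof(float));    // docs x docs similarity matrix
    cudaMemset(dD, 0, terms * docs * sizeof(float)); // placeholder document vectors

    cublasHandle_t handle;
    cublasCreate(&handle);

    // S = D^T * D : all pairwise dot products of (pre-normalized) document columns.
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                docs, docs, terms,
                &alpha, dD, terms, dD, terms,
                &beta, dS, docs);

    float s00 = 0.0f;
    cudaMemcpy(&s00, dS, sizeof(float), cudaMemcpyDeviceToHost);
    printf("similarity(doc0, doc0) = %f\n", s00);

    cublasDestroy(handle);
    cudaFree(dD); cudaFree(dS);
    return 0;
}
```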