• Title/Summary/Keyword: GPU Memory

Parallel Range Query processing on R-tree with Graphics Processing Units (GPU를 이용한 R-tree에서의 범위 질의의 병렬 처리)

  • Yu, Bo-Seon; Kim, Hyun-Duk; Choi, Won-Ik; Kwon, Dong-Seop
    • Journal of Korea Multimedia Society / v.14 no.5 / pp.669-680 / 2011
  • R-trees are widely used in areas such as geographical information systems, CAD systems, and spatial databases to efficiently index multi-dimensional data. As the data sets used in these areas grow in size and complexity, however, range query operations on R-trees need to become even faster to meet area-specific constraints. To address this problem, there have been various research efforts to accelerate query processing on R-trees by using buffer mechanisms or by parallelizing query processing across multiple disks and processors. As part of these efforts, approaches that parallelize query processing on R-trees with Graphics Processing Units (GPUs) have been explored. The use of GPUs can deliver improved performance through faster calculations and reduced disk accesses, but it may also incur additional overhead caused by high memory access latencies and the low data exchange rate between the GPU and the CPU. In this paper, to address these overheads and exploit GPUs efficiently, we propose a novel approach that uses a GPU as a buffer to parallelize query processing on R-trees. The buffering algorithm improves performance by reducing the number of disk accesses and maximizing coalesced memory accesses, thereby minimizing GPU memory access latencies. Through extensive performance studies, we observed that the proposed approach achieves up to 5 times higher query performance than the original CPU-based R-trees.
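The performance claim above hinges on coalesced memory access: when the threads of a warp read consecutive addresses, the hardware merges their loads into a few wide transactions. The sketch below only illustrates that access pattern on a flat structure-of-arrays list of bounding rectangles scanned in parallel; the layout and names are assumptions for illustration, not the authors' buffer algorithm.

```cuda
// Hypothetical sketch (not the paper's code): bounding rectangles kept as a
// structure-of-arrays so consecutive threads read consecutive addresses and
// the range-query scan enjoys coalesced global-memory access.
#include <cuda_runtime.h>

struct EntriesSoA {                 // one array per rectangle side
    const float *xmin, *ymin, *xmax, *ymax;
};

__global__ void rangeQueryScan(EntriesSoA e, int n,
                               float qxmin, float qymin, float qxmax, float qymax,
                               int *hit)                // hit[i] = 1 on overlap
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Within a warp, the four loads below touch consecutive words of each
    // array, so they coalesce into a handful of memory transactions.
    bool overlap = e.xmin[i] <= qxmax && e.xmax[i] >= qxmin &&
                   e.ymin[i] <= qymax && e.ymax[i] >= qymin;
    hit[i] = overlap ? 1 : 0;
}
```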

Development of high-speed image interpolation method using CUDA (GPU를 이용한 고속 영상 보간법 개발)

  • Cui, Xue-Nan; Park, Eun-Soo; Kim, Jun-Chui; Jung, Young-Han; Kim, Hak-Il
    • Proceedings of the KIEE Conference / 2008.10b / pp.300-301 / 2008
  • This paper presents a method for developing high-speed image interpolation using the GPU. GPUs are typically used for graphics computation, but general-purpose computing on GPUs (GPGPU) has recently been attracting attention. In particular, CUDA, released by NVIDIA, makes GPUs easy to access and program, so GPUs are now used in many fields. In this paper, we implemented several interpolation algorithms in CUDA and evaluated their performance. Comparing the CPU implementations with the CUDA implementations, the pure processing time, excluding memory allocation and transfer, was far better on the GPU; when memory allocation and transfer were included, small images actually performed worse on the GPU, while large images showed good performance.

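For context on what such a comparison measures, the sketch below shows a generic bilinear up-scaling kernel whose "pure processing" time can be timed separately from the cudaMemcpy transfers (for example with cudaEvent timers). All names are assumptions for illustration; this is not the paper's implementation.

```cuda
// Illustrative bilinear interpolation kernel (assumed, not the paper's code).
// Timing this kernel apart from the host-device copies reproduces the split
// between "pure processing" time and total time that the abstract discusses.
// Assumes dw, dh > 1.
#include <cuda_runtime.h>

__global__ void bilinear(const unsigned char *src, int sw, int sh,
                         unsigned char *dst, int dw, int dh)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dw || y >= dh) return;
    float fx = x * (sw - 1) / (float)(dw - 1);          // source position
    float fy = y * (sh - 1) / (float)(dh - 1);
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = min(x0 + 1, sw - 1), y1 = min(y0 + 1, sh - 1);
    float ax = fx - x0, ay = fy - y0;                   // blend weights
    float v = (1 - ax) * (1 - ay) * src[y0 * sw + x0]
            + ax       * (1 - ay) * src[y0 * sw + x1]
            + (1 - ax) * ay       * src[y1 * sw + x0]
            + ax       * ay       * src[y1 * sw + x1];
    dst[y * dw + x] = (unsigned char)(v + 0.5f);        // round to nearest
}
```

For small images the two host-device copies dominate the total time, which matches the counter-effect the abstract reports.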

Bandwidth-Effective Rendering Scheme for 3D Texture-based Volume Visualization on GPU (3차원 텍스쳐 기반 볼륨 가시화를 위한 GPU 대역폭 효과적인 렌더링 기법)

  • Lee, Won-Jong; Han, Tack-Don
    • Proceedings of the Korean Information Science Society Conference / 2005.07a / pp.673-675 / 2005
  • This paper proposes a GPU bandwidth-effective rendering technique for 3D texture-based volume visualization. In a preprocessing step, the original volume data is hierarchically partitioned into uniform-sized blocks using an octree, so that only the occupied regions are detected effectively. At rendering time, the octree is traversed in visibility order, and each subvolume at a leaf node is processed by the texture mapping unit and composited by the blending unit. Processing small subvolumes ($16^3$ or $32^3$) increases the utilization of the texture and pixel caches and enables empty-space skipping, which greatly reduces GPU memory bandwidth and thus accelerates rendering. Extensive experimental results and performance analyses, covering cache efficiency, memory traffic, and rendering time, are provided. The results show that the proposed technique reduces bandwidth by an average factor of 11 and renders 3 times faster than conventional rendering methods, making it an effective approach for GPU-based volume rendering.

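As a rough illustration of the traversal described above, the host-side sketch below visits octree children in front-to-back order and skips empty subvolumes. Everything here is assumed for illustration: renderBrick is a stub standing in for binding and slicing a 3D-texture brick, and childOrder uses the standard XOR trick for orthographic front-to-back ordering.

```cuda
// Assumed host-side sketch of visibility-ordered octree traversal with
// empty-space skipping; only occupied 16^3/32^3 bricks reach the GPU.
#include <cstdio>

struct OctNode {
    bool     empty;         // no visible voxels in this subvolume
    OctNode *child[8];      // all null at a leaf
    int      brickId;       // texture brick to draw at a leaf
};

static void renderBrick(int brickId)     // stub: would bind the 3D-texture
{                                        // brick and draw view-aligned slices
    std::printf("draw brick %d\n", brickId);
}

static void childOrder(const float d[3], int order[8])
{
    // XOR trick: 'nearChild' is the octant closest to the viewer; enumerating
    // nearChild ^ i for i = 0..7 yields a valid front-to-back order.
    int nearChild = (d[0] < 0 ? 1 : 0) | (d[1] < 0 ? 2 : 0) | (d[2] < 0 ? 4 : 0);
    for (int i = 0; i < 8; ++i) order[i] = nearChild ^ i;
}

static void traverse(const OctNode *n, const float viewDir[3])
{
    if (!n || n->empty) return;                    // empty-space skipping
    if (!n->child[0]) { renderBrick(n->brickId); return; }
    int order[8];
    childOrder(viewDir, order);
    for (int i = 0; i < 8; ++i)
        traverse(n->child[order[i]], viewDir);     // front-to-back compositing
}
```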

Performance Enhancement and Evaluation of a Deep Learning Framework on Embedded Systems using Unified Memory (통합메모리를 이용한 임베디드 환경에서의 딥러닝 프레임워크 성능 개선과 평가)

  • Lee, Minhak; Kang, Woochul
    • KIISE Transactions on Computing Practices / v.23 no.7 / pp.417-423 / 2017
  • Recently, many embedded devices with the computing capability required for deep learning have become available; hence, many new applications using these devices are emerging. However, these embedded devices have an architecture different from that of PCs and high-performance servers. In this paper, we propose a method that improves the performance of a deep-learning framework by exploiting the architecture of embedded devices that share memory between the CPU and the GPU. The proposed method is implemented in Caffe, an open-source deep-learning framework, and is evaluated on an NVIDIA Jetson TK1 embedded device. In the experiments, we investigate the image recognition performance of several state-of-the-art deep-learning networks, including AlexNet, VGGNet, and GoogLeNet. Our results show that the proposed method achieves significant performance gains. For instance, with AlexNet, it reduces image recognition latency by about 33% and energy consumption by about 50%.
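On systems where the CPU and GPU share one physical DRAM, such as the Jetson TK1, one way to avoid redundant copies is CUDA unified memory; whether this matches the paper's exact mechanism is not stated in the abstract, so the sketch below is only a minimal generic example, not the modified Caffe code.

```cuda
// Minimal unified-memory sketch (assumed example, not the paper's code): one
// cudaMallocManaged allocation is used by both CPU and GPU, so no cudaMemcpy
// is needed on a shared-memory SoC.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

int main()
{
    const int n = 1 << 20;
    float *buf = nullptr;
    cudaMallocManaged(&buf, n * sizeof(float));   // visible to CPU and GPU
    for (int i = 0; i < n; ++i) buf[i] = 1.0f;    // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(buf, n, 0.5f);
    cudaDeviceSynchronize();                      // wait before the CPU reads
    std::printf("buf[0] = %f\n", buf[0]);
    cudaFree(buf);
    return 0;
}
```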

Analysis on Memory Characteristics of Graphics Processing Units for Designing Memory System of General-Purpose Computing on Graphics Processing Units (범용 그래픽 처리 장치의 메모리 설계를 위한 그래픽 처리 장치의 메모리 특성 분석)

  • Choi, Hongjun; Kim, Cheolhong
    • Smart Media Journal / v.3 no.1 / pp.33-38 / 2014
  • Although microprocessor performance continues to improve, the performance of computing systems has become harder to increase because of drawbacks such as increased power consumption. To address this problem, general-purpose computing on graphics processing units (GPGPU), which executes general-purpose applications on the specialized parallel-processing hardware of graphics processing units (GPUs), has attracted attention. However, the characteristics of graphics applications differ substantially from those of general-purpose applications; as a result, GPUs cannot fully exploit their outstanding computational resources when executing general-purpose applications. When designing GPUs for GPGPU, the memory system is crucial to exploiting the GPU effectively, since general-purpose applications typically require more memory accesses than graphics applications. In particular, external memory accesses, with their long latencies, impose a large overhead on GPU performance. GPU performance can therefore be improved by applying a hierarchical memory architecture that reduces the number of external memory accesses. For this reason, we analyze GPU performance under different hierarchical cache architectures while executing various benchmarks.
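One simple way to see the effect the abstract describes, in the spirit of such analyses but not the paper's methodology, is a strided-access microbenchmark: the kernel below issues the same number of loads at different strides, and strides that defeat cache-line reuse increase external-memory traffic and kernel time.

```cuda
// Assumed microbenchmark sketch: larger strides break cache-line reuse, so
// more requests go to external memory and the kernel takes longer.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void strided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = (int)((long long)i * stride % n);     // stride 1 = sequential
    out[i] = in[j] * 2.0f;
}

int main()
{
    const int n = 1 << 24;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    int strides[] = {1, 2, 4, 8, 16, 32};
    for (int stride : strides) {
        cudaEventRecord(t0);
        strided<<<n / 256, 256>>>(in, out, n, stride);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.f;
        cudaEventElapsedTime(&ms, t0, t1);
        std::printf("stride %2d: %.3f ms\n", stride, ms); // longer = more DRAM traffic
    }
    cudaEventDestroy(t0); cudaEventDestroy(t1);
    cudaFree(in); cudaFree(out);
    return 0;
}
```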

Acceleration of computation speed for elastic wave simulation using a Graphic Processing Unit (그래픽 프로세서를 이용한 탄성파 수치모사의 계산속도 향상)

  • Nakata, Norimitsu; Tsuji, Takeshi; Matsuoka, Toshifumi
    • Geophysics and Geophysical Exploration / v.14 no.1 / pp.98-104 / 2011
  • Numerical simulation in exploration geophysics provides important insights into subsurface wave propagation phenomena. Although elastic wave simulations take longer to compute than acoustic simulations, an elastic simulator can construct more realistic wavefields, including shear components; it is therefore suitable for exploring the responses of elastic bodies. To overcome the long calculation times, we use a Graphics Processing Unit (GPU) to accelerate the elastic wave simulation. Because a GPU has many processors and a wide memory bandwidth, we can use it as a parallel computing architecture. The GPU board used in this study is an NVIDIA Tesla C1060, which has 240 processors and a 102 GB/s memory bandwidth. Although NVIDIA provides a parallel computing architecture (CUDA), we must optimise the usage of the different types of memory on the GPU device, and the sequence of calculations, to obtain a significant speedup. In this study, we simulate two-dimensional (2D) and three-dimensional (3D) elastic wave propagation using the Finite-Difference Time-Domain (FDTD) method on GPUs. In the wave propagation simulation, we adopt the staggered-grid method, one of the conventional FD schemes, since it achieves sufficient accuracy for numerical modelling in geophysics. Our simulator optimises memory usage on the GPU device to reduce data access times, using faster memory as much as possible; this is a key factor in GPU computing. By using one GPU device and optimising its memory usage, we improved the computation time by more than 14 times in the 2D simulation, and by more than six times in the 3D simulation, compared with one CPU. Furthermore, by using three GPUs, we accelerated the 3D simulation by a factor of 10.
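The "use faster memory as much as possible" point can be made concrete with a tiled stencil: each block stages a tile of the wavefield in on-chip shared memory so neighbouring threads stop re-reading the same global values. The sketch below is a simplified scalar update under assumed names, not the authors' staggered-grid elastic code; it assumes nx and ny are multiples of the tile size.

```cuda
// Assumed sketch of shared-memory tiling for a finite-difference update;
// each global value is loaded once per block instead of once per neighbour.
#include <cuda_runtime.h>

#define TILE 16  // launch with blockDim = (TILE, TILE); nx, ny multiples of TILE

__global__ void fdStep(const float *u, float *out, int nx, int ny, float c)
{
    __shared__ float s[TILE + 2][TILE + 2];       // tile plus one-cell halo
    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    int lx = threadIdx.x + 1, ly = threadIdx.y + 1;

    s[ly][lx] = u[y * nx + x];                    // interior load
    if (threadIdx.x == 0)        s[ly][0]        = (x > 0)      ? u[y*nx + x-1]   : 0.f;
    if (threadIdx.x == TILE - 1) s[ly][TILE + 1] = (x < nx - 1) ? u[y*nx + x+1]   : 0.f;
    if (threadIdx.y == 0)        s[0][lx]        = (y > 0)      ? u[(y-1)*nx + x] : 0.f;
    if (threadIdx.y == TILE - 1) s[TILE + 1][lx] = (y < ny - 1) ? u[(y+1)*nx + x] : 0.f;
    __syncthreads();

    // Five-point Laplacian read entirely from fast shared memory.
    float lap = s[ly][lx-1] + s[ly][lx+1] + s[ly-1][lx] + s[ly+1][lx] - 4.f * s[ly][lx];
    out[y * nx + x] = s[ly][lx] + c * lap;
}
```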

GPU-based Shift-FFT Implementation for Ultra-High Resolution Hologram Generation (초고해상도 홀로그램 생성을 위한 GPU 기반 Shift-FFT 처리 구현)

  • Lee, Jaehong; Kang, Homin; Yeom, Han-ju; Cheon, Sanghoon; Park, Joongki; Kim, Duksu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.563-566 / 2020
  • This paper proposes an efficient GPU-based implementation of the 2D shift-FFT for generating ultra-high-resolution computer-generated holograms. The proposed algorithm reduces the existing six-step processing pipeline to five steps, eliminating a memory access pattern that is inefficient for parallel processing. In addition, by using a pinned-memory buffer as the CPU-GPU data communication channel and employing multiple streams, it reduces the data transfer overhead, a major bottleneck in GPU utilization, and raises GPU utilization efficiency. To demonstrate the effectiveness of the proposed algorithm, we implemented it on two different systems and measured 2D-FFT performance for matrices of various sizes. As a result, it achieved up to 3 times higher performance than the CPU-based FFTW library and up to 1.5 times higher performance than the cuFFT library running on the same GPU. These results demonstrate the utility of the proposed algorithm.

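The pinned-buffer and multi-stream ingredients mentioned above can be sketched in isolation: page-locked host memory allows truly asynchronous copies, and per-stream cuFFT plans let transfers overlap FFT execution. The sketch below processes 1D chunks under assumed names; it is not the paper's five-step 2D shift-FFT pipeline, and host is assumed to have been allocated with cudaHostAlloc.

```cuda
// Assumed sketch of double-buffered FFTs: two streams alternate so the copy
// of one chunk overlaps the transform of the previous one. 'host' must point
// to pinned (cudaHostAlloc) memory holding 'chunks' blocks of n elements.
#include <cuda_runtime.h>
#include <cufft.h>

void fftChunks(cufftComplex *host, int n, int chunks)
{
    cudaStream_t stream[2];
    cufftHandle  plan[2];
    cufftComplex *dev[2];
    for (int i = 0; i < 2; ++i) {
        cudaStreamCreate(&stream[i]);
        cudaMalloc(&dev[i], sizeof(cufftComplex) * n);
        cufftPlan1d(&plan[i], n, CUFFT_C2C, 1);
        cufftSetStream(plan[i], stream[i]);     // transform runs in this stream
    }
    for (int c = 0; c < chunks; ++c) {
        int s = c % 2;                          // alternate between two streams
        cudaMemcpyAsync(dev[s], host + (size_t)c * n, sizeof(cufftComplex) * n,
                        cudaMemcpyHostToDevice, stream[s]);
        cufftExecC2C(plan[s], dev[s], dev[s], CUFFT_FORWARD);
        cudaMemcpyAsync(host + (size_t)c * n, dev[s], sizeof(cufftComplex) * n,
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    for (int i = 0; i < 2; ++i) {               // drain and release resources
        cudaStreamSynchronize(stream[i]);
        cufftDestroy(plan[i]);
        cudaFree(dev[i]);
        cudaStreamDestroy(stream[i]);
    }
}
```

Because operations within one stream execute in order, reusing each device buffer for every second chunk is safe without extra synchronization.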

Optimizing Shared Memory Accesses for GPGPU Computations (GPGPU를 위한 공유 메모리 최적화)

  • Tran, Nhat-Phuong; Lee, Myungho; Hong, Sugwon
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.197-199 / 2012
  • Recently, many general-purpose application programs, in addition to graphics applications, have been parallelized to boost their performance using the excellent floating-point performance of Graphics Processing Units (GPUs). To maximize application performance on GPUs, it is essential to optimize the memory hierarchy and the on-chip caches, such as the shared memory. In this paper, we propose techniques to optimize the shared memory and verify their effectiveness using a pattern-matching application program.
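One shared-memory optimization that is common for pattern matching, and consistent with (though not necessarily identical to) what the abstract describes, is to stage the pattern in shared memory: every thread compares against it repeatedly, so the copy turns most global reads into on-chip reads. The kernel below is an assumed naive exact-match search for illustration, not the paper's code.

```cuda
// Assumed sketch: the pattern is cooperatively copied into shared memory once
// per block, so the inner comparison loop reads on-chip memory. 'found' is
// expected to be zero-initialized before launch.
#include <cuda_runtime.h>

__global__ void match(const char *text, int n, const char *pattern, int m,
                      int *found)
{
    extern __shared__ char sp[];                  // dynamic shared memory
    for (int i = threadIdx.x; i < m; i += blockDim.x)
        sp[i] = pattern[i];                       // cooperative copy
    __syncthreads();

    int pos = blockIdx.x * blockDim.x + threadIdx.x;
    if (pos > n - m) return;
    for (int j = 0; j < m; ++j)
        if (text[pos + j] != sp[j]) return;       // mismatch: give up
    found[pos] = 1;                               // match starts at pos
}
// Launch: match<<<blocks, threads, m /* shared bytes */>>>(text, n, pat, m, found);
```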

Lightweight Network for Multi-exposure High Dynamic Range Imaging (다중 노출 High Dynamic Range 이미징을 위한 경량화 네트워크)

  • Lee, Keuntek; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.70-73 / 2021
  • As deep neural networks (DNNs) have recently been applied to a variety of image and video tasks, DNN models that outperform conventional methods have also appeared in High Dynamic Range (HDR) imaging. However, DNN-based methods suffer from large computational cost and high GPU memory usage, which limits their applicability in real-world settings. In this paper, we propose a lightweight deep neural network for multi-exposure HDR imaging that can be used even under constrained computation and GPU memory budgets. Evaluated against existing HDR models on the Kalantari dataset, the proposed network shows superior PSNR-µ and PSNR-l scores relative to its GPU memory usage.

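A claim of performance per unit of GPU memory implies measuring the memory footprint; one generic way to bracket it, purely an assumed helper unrelated to any specific model, is to query free device memory before and after the network allocates its buffers.

```cuda
// Assumed helper sketch: cudaMemGetInfo reports free/total device memory, so
// sampling it around model setup brackets the model's GPU memory footprint.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);              // bytes free / total on device
    std::printf("GPU memory: %.1f MiB free of %.1f MiB\n",
                freeB / 1048576.0, totalB / 1048576.0);
    return 0;
}
```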

Performance Analysis and Enhancing Techniques of Kd-Tree Traversal Methods on GPU (GPU용 Kd-트리 탐색 방법의 성능 분석 및 향상 기법)

  • Chang, Byung-Joon; Ihm, In-Sung
    • Journal of KIISE: Computing Practices and Letters / v.16 no.2 / pp.177-185 / 2010
  • Ray-object intersection is an important element of ray tracing that takes up a substantial amount of computing time. In general, spatial data structures such as the kd-tree have frequently been used for static scenes to accelerate the intersection computation. Recently, a few kd-tree traversal variants suitable for the GPU, which has a relatively restricted computing architecture compared to the CPU, have been proposed. In this article, we propose two further implementation techniques that improve on the previous ones. First, we present a cached-stack method aimed at reducing the costly global memory access time incurred when the stack is allocated in global memory. Second, we present a rope-with-short-stack method that eases the substantial memory requirement often incurred by the previous rope method. To show the effectiveness of our techniques, we compare their performance with that of the previous GPU traversal methods. The experimental results provide prospective GPU ray tracer developers with valuable information, helping them choose a proper kd-tree traversal method.
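The cached-stack idea can be shown in isolation: the per-thread traversal stack lives in on-chip shared memory rather than global memory, so pushes and pops avoid long-latency off-chip accesses. The sketch below omits ray tests, ropes, and the short-stack variant, and its node layout and names are assumptions rather than the authors' code.

```cuda
// Assumed sketch of a cached (shared-memory) kd-tree traversal stack. The
// level-major layout keeps a warp's accesses in distinct banks.
#include <cuda_runtime.h>

#define THREADS     128          // assumed threads per block
#define STACK_DEPTH 24           // assumed bound on tree depth

struct KdNode { int left, right, axis; float split; };   // illustrative layout

__global__ void walk(const KdNode *nodes, int root, int *leafOut)
{
    __shared__ int stack[STACK_DEPTH][THREADS];   // one column per thread
    int sp = 0;
    stack[sp++][threadIdx.x] = root;

    while (sp > 0) {
        int idx = stack[--sp][threadIdx.x];       // pop from the cached stack
        KdNode n = nodes[idx];
        if (n.axis < 0) {                         // leaf: record and continue
            leafOut[blockIdx.x * blockDim.x + threadIdx.x] = idx;
            continue;
        }
        // A real ray tracer orders the children along the ray and often pushes
        // only the far child; this sketch conservatively pushes both.
        if (sp + 2 <= STACK_DEPTH) {
            stack[sp++][threadIdx.x] = n.right;
            stack[sp++][threadIdx.x] = n.left;
        }
    }
}
```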