• Title/Summary/Keyword: Graphics Processing Unit (GPU)

Search results: 154

Implementation of a 3D Graphics Simulator for GP-GPU (GP-GPU 개발을 위한 3차원 그래픽 시뮬레이터 구현)

  • Yeo, Dong-young;Kim, Woo-young;Jung, Hyung-Ki;Lee, Kwang-Yeob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.337-340 / 2009
  • The performance of the GPU (Graphics Processing Unit), a hardware accelerator for 3D graphics processing, has been improving constantly. The GPU was introduced as an efficient way to handle complex graphics applications, but its resources are rarely utilized to the full. The GP-GPU (general-purpose GPU), in which graphics operations and general-purpose operations are handled by the same processor, is drawing attention because the distribution of its resources can be controlled effectively. In this paper, a simulator that provides a virtual GP-GPU environment for program design and debugging was implemented. It supports a co-design development environment in which design and verification can proceed simultaneously, quickly, and reliably, and it can be used to build a 3D graphics display interface.

  • PDF

An Efficient Technique for Processing of Spatial Data Using GPU (GPU를 사용한 효율적인 공간 데이터 처리)

  • Lee, Jae-Il;Oh, Byoung-Woo
    • Spatial Information Research / v.17 no.3 / pp.371-379 / 2009
  • Recently, the GPU (Graphics Processing Unit) has improved rapidly, driven by the demand for speed in gaming. As a result, the GPU contains multiple ALUs (Arithmetic Logic Units) for parallel processing of large amounts of graphics data in operations such as transforms and ray tracing. This paper proposes a technique for parallel processing of spatial data using the GPU. Spatial data consists of multiple coordinates, and each coordinate contains x and y values. To display spatial data, graphics operations must be applied to a large number of coordinates. Because the graphics operation is identical for all of them and the coordinates are multiple data items, the SIMD (Single Instruction Multiple Data) parallelism of the GPU can be used to improve the performance of spatial data processing. SIMD parallel processing of spatial data was implemented using two kinds of SDK (Software Development Kit): CUDA for NVIDIA GPUs and ATI Stream for ATI GPUs. Experiments measuring the computation time of graphics operations were carried out to observe the performance enhancement. The experimental results show that the proposed method can improve performance by up to 1,162% for graphics operations. The proposed method, which uses parallel processing on the GPU for spatial data, can be applied generally to improve the performance of applications that deal with large amounts of spatial data. (A minimal CUDA sketch of this coordinate-transform pattern follows this entry.)

  • PDF
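Below is a minimal CUDA sketch of the SIMD pattern described in this abstract: one thread applies the same transform to one coordinate pair. The kernel name transformCoords, the specific affine transform, and the launch configuration are illustrative assumptions, not the authors' implementation (which also targets ATI Stream).

```c
#include <cuda_runtime.h>
#include <stdio.h>

// One thread per coordinate: apply the same affine (world-to-screen) transform
// to every (x, y) pair -- the SIMD pattern described in the abstract.
__global__ void transformCoords(const float2* in, float2* out, int n,
                                float sx, float sy, float tx, float ty) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i].x = in[i].x * sx + tx;   // scale then translate in x
        out[i].y = in[i].y * sy + ty;   // scale then translate in y
    }
}

int main(void) {
    const int n = 1 << 20;                      // 1M coordinates (assumed size)
    size_t bytes = n * sizeof(float2);
    float2 *h = (float2*)malloc(bytes), *dIn, *dOut;
    for (int i = 0; i < n; ++i) h[i] = make_float2((float)i, (float)(n - i));

    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dIn, h, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    transformCoords<<<blocks, threads>>>(dIn, dOut, n, 0.5f, 0.5f, 100.0f, 50.0f);
    cudaDeviceSynchronize();

    cudaMemcpy(h, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("first transformed point: (%f, %f)\n", h[0].x, h[0].y);

    cudaFree(dIn); cudaFree(dOut); free(h);
    return 0;
}
```

Because every thread executes the same instruction sequence on different coordinates, the kernel maps directly onto the SIMD execution model the abstract relies on.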

Systematic Evaluation of Island based Real-Valued Genetic Algorithm with Graphics Processing Unit (Graphics Processing Unit를 이용한 섬기반 Real-Valued Genetic Algorithm의 체계적 평가)

  • Park, Hyun-Soo;Kim, Kyung-Joong
    • Proceedings of the Korean Information Science Society Conference / 2010.06c / pp.328-333 / 2010
  • The GA (Genetic Algorithm), one of the effective methods for finding optimal solutions, requires a long computation time to obtain high-quality solutions, but it can be processed quickly by exploiting the high data-parallel processing capability and strong floating-point performance of the GPU (Graphics Processing Unit). In this paper, an island-based RVGA (Real-Valued Genetic Algorithm) accelerated with the GPU is compared and evaluated against an RVGA that does not use the GPU, and also against a GPU-accelerated Simple GA rather than an RVGA. The results show that using the GPU improves the speed, and that the RVGA achieves a larger speedup than the Simple GA. (A hedged CUDA sketch of per-individual fitness evaluation follows this entry.)

  • PDF
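The sketch below illustrates, in CUDA, the kind of per-individual parallel fitness evaluation that makes GA acceleration on the GPU attractive. It is not the authors' island-based RVGA: the Sphere function is an assumed placeholder fitness, and island migration, selection, crossover, and mutation are omitted.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// One thread per individual: each thread computes the fitness of one
// real-valued chromosome. The Sphere function (sum of squares) is used
// here only as a placeholder benchmark.
__global__ void evaluateFitness(const float* population, float* fitness,
                                int popSize, int genes) {
    int ind = blockIdx.x * blockDim.x + threadIdx.x;
    if (ind < popSize) {
        float sum = 0.0f;
        for (int g = 0; g < genes; ++g) {
            float x = population[ind * genes + g];
            sum += x * x;
        }
        fitness[ind] = sum;   // lower is better for the Sphere function
    }
}

int main(void) {
    const int popSize = 4096, genes = 32;
    size_t popBytes = (size_t)popSize * genes * sizeof(float);
    float *hPop = (float*)malloc(popBytes);
    float *hFit = (float*)malloc(popSize * sizeof(float));
    for (int i = 0; i < popSize * genes; ++i) hPop[i] = 0.01f * (i % 100);

    float *dPop, *dFit;
    cudaMalloc(&dPop, popBytes);
    cudaMalloc(&dFit, popSize * sizeof(float));
    cudaMemcpy(dPop, hPop, popBytes, cudaMemcpyHostToDevice);

    evaluateFitness<<<(popSize + 255) / 256, 256>>>(dPop, dFit, popSize, genes);
    cudaMemcpy(hFit, dFit, popSize * sizeof(float), cudaMemcpyDeviceToHost);
    printf("fitness of individual 0: %f\n", hFit[0]);

    cudaFree(dPop); cudaFree(dFit); free(hPop); free(hFit);
    return 0;
}
```

Fitness evaluation is usually the dominant cost of a GA, so parallelizing it across the population is the step where the floating-point throughput of the GPU pays off most directly.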

The Need of Cache Partitioning on Shared Cache of Integrated Graphics Processor between CPU and GPU (내장형 GPU 환경에서 CPU-GPU 간의 공유 캐시에서의 캐시 분할 방식의 필요성)

  • Sung, Hanul;Eom, Hyeonsang;Yeom, HeonYoung
    • KIISE Transactions on Computing Practices / v.20 no.9 / pp.507-512 / 2014
  • Recently, heterogeneous computing that uses both the CPU (Central Processing Unit) and the GPU (Graphics Processing Unit) has begun to be used to improve performance and to overcome the dark-silicon problem, in which not all transistors can be used at once because of power limitations. In an integrated graphics processor, the CPU and GPU share memory and the last-level cache (LLC). However, there are no rules governing LLC access between the CPU and GPU, so when CPU and GPU processes run at the same time, the performance of both degrades because of contention on the LLC. This paper provides evidence for the need for cache partitioning and discusses a cache-partitioning design that uses page coloring to allocate part of the L3 cache exclusively to the GPU process in order to guarantee GPU process performance.
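To make the page-coloring idea concrete, the host-side snippet below (plain C, compilable with nvcc or any C compiler) derives a page color from a physical address under assumed cache parameters. The sizes and the function name page_color are illustrative and not taken from the paper; the point is only that pages sharing a color compete for the same group of LLC sets, so a range of colors can be reserved for GPU-process pages.

```c
#include <stdio.h>
#include <stdint.h>

// Illustrative cache parameters (assumed, not from the paper):
// an 8 MiB 16-way LLC with 64-byte lines -> 8192 sets, and 4 KiB pages.
#define LINE_SIZE      64u
#define NUM_SETS       8192u
#define PAGE_SIZE      4096u
#define LINES_PER_PAGE (PAGE_SIZE / LINE_SIZE)          /* 64 lines per page  */
#define NUM_COLORS     (NUM_SETS / LINES_PER_PAGE)      /* 128 page colors    */

// The color of a physical page is determined by the frame-number bits that
// overlap the cache set index; pages of the same color map to the same group
// of LLC sets and therefore evict each other.
static unsigned page_color(uint64_t phys_addr) {
    uint64_t frame = phys_addr / PAGE_SIZE;
    return (unsigned)(frame % NUM_COLORS);
}

int main(void) {
    // Pages 0 and 128 share a color (they wrap around), page 1 does not,
    // so an allocator can steer GPU-process pages to a reserved color range.
    printf("color of 0x00000000: %u\n", page_color(0x00000000ull));
    printf("color of 0x00001000: %u\n", page_color(0x00001000ull));
    printf("color of 0x00080000: %u\n", page_color(0x00080000ull));
    return 0;
}
```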

GPGPU Task Management Technique to Mitigate Performance Degradation of Virtual Machines due to GPU Operation in Cloud Environments (클라우드 환경에서 GPU 연산으로 인한 가상머신의 성능 저하를 완화하는 GPGPU 작업 관리 기법)

  • Kang, Jihun;Gil, Joon-Min
    • KIPS Transactions on Computer and Communication Systems / v.9 no.9 / pp.189-196 / 2020
  • Recently, GPU cloud computing technology, in which GPU (Graphics Processing Unit) devices are assigned to virtual machines, has been widely used in cloud environments. GPU devices assigned to virtual machines can perform computations faster than CPUs through massively parallel processing, which provides many benefits when operating high-performance computing services in a variety of fields in a cloud environment. Although a GPU device can improve the performance of a virtual machine, the virtual machine scheduler, which is based on the CPU usage time of a virtual machine, does not take GPU usage time into account, and this affects the performance of other virtual machines. In this paper, we test and analyze the performance degradation of other virtual machines caused by a virtual machine that performs GPGPU (General-Purpose computing on Graphics Processing Units) tasks in a direct-path-based GPU virtualization environment, which is commonly used when assigning GPUs to virtual machines in cloud environments. We then propose a GPGPU task management technique for the virtual machine to mitigate this problem.

Computationally Efficient Implementation of a Hamming Code Decoder Using Graphics Processing Unit

  • Islam, Md Shohidul;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of Communications and Networks / v.17 no.2 / pp.198-202 / 2015
  • This paper presents a computationally efficient implementation of a Hamming code decoder on a graphics processing unit (GPU) to support real-time software-defined radio, a software alternative for realizing wireless communication. The Hamming code algorithm is challenging to parallelize effectively on a GPU because it works on sparsely located data items with several conditional statements, leading to non-coalesced, long-latency global memory accesses and heavy thread divergence. To address these issues, we propose an optimized implementation of the Hamming code on the GPU that exploits the parallelism inherent in the algorithm. Experimental results using a compute unified device architecture (CUDA)-enabled NVIDIA GeForce GTX 560 with 335 cores show that the proposed approach achieves a 99x speedup over the equivalent CPU-based implementation.
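For orientation only, the kernel below performs textbook Hamming(7,4) syndrome decoding with one thread per codeword. It is not the paper's optimized decoder: the byte-per-codeword packing and the kernel name hammingDecode are assumptions, and the conditional correction step is exactly the kind of divergence the paper works to avoid.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// One thread per received 7-bit codeword (stored in the low bits of a byte,
// bit 0 = position 1). Standard Hamming(7,4) syndrome decoding: compute the
// three parity checks, flip the indicated bit, then extract the 4 data bits.
__global__ void hammingDecode(const unsigned char* rx, unsigned char* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    unsigned c = rx[i];
    unsigned b[8];                               // b[1..7] are codeword bits
    for (int p = 1; p <= 7; ++p) b[p] = (c >> (p - 1)) & 1u;

    unsigned s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    unsigned s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    unsigned s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    unsigned syndrome = s1 | (s2 << 1) | (s3 << 2);

    if (syndrome != 0) b[syndrome] ^= 1u;        // correct the single-bit error

    // Data bits sit at positions 3, 5, 6, 7.
    data[i] = (unsigned char)(b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3));
}

int main(void) {
    const int n = 1024;
    unsigned char *hRx = (unsigned char*)malloc(n), *hOut = (unsigned char*)malloc(n);
    unsigned char *dRx, *dOut;
    for (int i = 0; i < n; ++i) hRx[i] = (unsigned char)(i & 0x7F);

    cudaMalloc(&dRx, n); cudaMalloc(&dOut, n);
    cudaMemcpy(dRx, hRx, n, cudaMemcpyHostToDevice);
    hammingDecode<<<(n + 255) / 256, 256>>>(dRx, dOut, n);
    cudaMemcpy(hOut, dOut, n, cudaMemcpyDeviceToHost);
    printf("decoded nibble of codeword 0: 0x%x\n", hOut[0]);

    cudaFree(dRx); cudaFree(dOut); free(hRx); free(hOut);
    return 0;
}
```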

Development of Real-Time Image Processing System Using GPU (GPU를 이용한 실시간 이미지 프로세싱 시스템)

  • Oh Jae-Hong;Kang Hoon;Lee Ja-Yong
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.393-397 / 2005
  • When a real-time image processing application is implemented on a general-purpose computer, the CPU (Central Processing Unit) is usually heavily loaded, and in many cases the CPU alone cannot meet the real-time requirement at all. Most modern computers are equipped with powerful Graphics Processing Units (GPUs) to accelerate graphics operations, and there is a trend for the power of the GPU to outgrow that of the CPU. If the powerful GPU is used for general operations beyond pure graphics, the processing time can be reduced. In this study, we present techniques that apply the GPU to general operations such as image processing procedures. Our experimental results show that a significant speed-up can be achieved by using the GPU.
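This 2005 work predates CUDA and would have relied on programmable shaders; purely as a modern illustration of offloading a per-pixel image operation to the GPU, the sketch below converts an RGB frame to grayscale with one thread per pixel. The frame size and the kernel name rgbToGray are assumptions.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// One thread per pixel: a typical per-pixel image processing operation
// (RGB to grayscale) offloaded to the GPU.
__global__ void rgbToGray(const uchar3* rgb, unsigned char* gray, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h) {
        uchar3 p = rgb[y * w + x];
        // Standard luminance weights.
        gray[y * w + x] = (unsigned char)(0.299f * p.x + 0.587f * p.y + 0.114f * p.z);
    }
}

int main(void) {
    const int w = 640, h = 480;                 // assumed frame size
    uchar3* hImg = (uchar3*)malloc(w * h * sizeof(uchar3));
    unsigned char* hGray = (unsigned char*)malloc(w * h);
    for (int i = 0; i < w * h; ++i)
        hImg[i] = make_uchar3((unsigned char)(i & 255), 128, 64);

    uchar3* dImg; unsigned char* dGray;
    cudaMalloc(&dImg, w * h * sizeof(uchar3));
    cudaMalloc(&dGray, w * h);
    cudaMemcpy(dImg, hImg, w * h * sizeof(uchar3), cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    rgbToGray<<<grid, block>>>(dImg, dGray, w, h);
    cudaMemcpy(hGray, dGray, w * h, cudaMemcpyDeviceToHost);
    printf("gray value of pixel (0,0): %d\n", hGray[0]);

    cudaFree(dImg); cudaFree(dGray); free(hImg); free(hGray);
    return 0;
}
```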

Real-Time Object Segmentation in Image Sequences (연속 영상 기반 실시간 객체 분할)

  • Kang, Eui-Seon;Yoo, Seung-Hun
    • The KIPS Transactions: Part B / v.18B no.4 / pp.173-180 / 2011
  • This paper presents an approach to real-time object segmentation in image sequences on the GPU (Graphics Processing Unit) using CUDA (Compute Unified Device Architecture). Recently, many applications, such as monitoring systems, motion analysis, and object tracking, require real-time processing, and performing object segmentation in real time on the CPU alone is not practical. NVIDIA provides the CUDA platform for general-purpose parallel computation to overcome the limitations of fixed-function graphics hardware. In this paper, adaptive Gaussian mixture background modeling is used in the object-extraction step, and CCL (Connected Component Labeling) is used for classification. The speeds of the GPU and CPU versions were compared and evaluated on a 2.4 GHz Core 2 Quad processor; the GPU version achieved a speedup of 3x-4x over the CPU version.
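The paper uses adaptive Gaussian mixture background modeling followed by CCL; the sketch below is a deliberately simplified single-Gaussian background model (per-pixel running mean and variance with a k-sigma test), meant only to show the one-thread-per-pixel foreground-mask pattern. All parameter values and names are assumptions, and the CCL step is omitted.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// Simplified per-pixel background model: each pixel keeps a running mean and
// variance; a pixel is marked foreground when it deviates from the mean by
// more than k standard deviations. The paper's adaptive Gaussian *mixture*
// model maintains several such components per pixel.
__global__ void segmentFrame(const unsigned char* frame, float* mean, float* var,
                             unsigned char* mask, int numPixels,
                             float alpha, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float x = (float)frame[i];
    float d = x - mean[i];
    float sigma = sqrtf(var[i]);

    mask[i] = (fabsf(d) > k * sigma) ? 255 : 0;   // foreground / background

    // Update the background model with learning rate alpha.
    mean[i] += alpha * d;
    var[i]  += alpha * (d * d - var[i]);
}

int main(void) {
    const int numPixels = 320 * 240;
    unsigned char* hFrame = (unsigned char*)malloc(numPixels);
    float* hInit = (float*)malloc(numPixels * sizeof(float));
    for (int i = 0; i < numPixels; ++i) hFrame[i] = (i % 7 == 0) ? 200 : 100;

    unsigned char *dFrame, *dMask;
    float *dMean, *dVar;
    cudaMalloc(&dFrame, numPixels);
    cudaMalloc(&dMask, numPixels);
    cudaMalloc(&dMean, numPixels * sizeof(float));
    cudaMalloc(&dVar, numPixels * sizeof(float));

    // Initialize the model: mean = 100 (the dominant value), variance = 20.
    for (int i = 0; i < numPixels; ++i) hInit[i] = 100.0f;
    cudaMemcpy(dMean, hInit, numPixels * sizeof(float), cudaMemcpyHostToDevice);
    for (int i = 0; i < numPixels; ++i) hInit[i] = 20.0f;
    cudaMemcpy(dVar, hInit, numPixels * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dFrame, hFrame, numPixels, cudaMemcpyHostToDevice);

    segmentFrame<<<(numPixels + 255) / 256, 256>>>(dFrame, dMean, dVar, dMask,
                                                   numPixels, 0.01f, 2.5f);

    unsigned char* hMask = (unsigned char*)malloc(numPixels);
    cudaMemcpy(hMask, dMask, numPixels, cudaMemcpyDeviceToHost);
    printf("pixel 0 foreground? %s\n", hMask[0] ? "yes" : "no");

    cudaFree(dFrame); cudaFree(dMask); cudaFree(dMean); cudaFree(dVar);
    free(hFrame); free(hInit); free(hMask);
    return 0;
}
```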

A Study on a Declines in Performance by Memory Copy in CUDA (CUDA의 메모리 복사로 인한 성능 저하 연구)

  • Kang, Jihun;Lee, DaeWon;Kang, InSung;Yu, HeonChang
    • Annual Conference of KIPS / 2013.11a / pp.135-138 / 2013
  • CUDA (Compute Unified Device Architecture), a GPGPU (General-Purpose Graphics Processing Unit) parallel processing system, has been widely used for high-speed computation on computers. To process computations with CUDA, its characteristics must be understood. CUDA has a Host region, processed by the CPU (Central Processing Unit), and a Device region, processed by the GPU (Graphics Processing Unit), and computation proceeds by copying data between these two regions. Because of this structure, input data must be transferred from main memory to GPU memory before the GPU can perform the computation. However, this also means that, separately from the computation time, there is data copy time between main memory and GPU memory, and this additional memory copy time introduces overhead. In this paper, we discuss, through experiments, how memory copy time, the number of iterations of a computation, and the complexity of the computation affect overall performance.
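The sketch below times the three phases the abstract describes, the host-to-device copy, the kernel execution, and the device-to-host copy, with CUDA events, so the copy overhead can be compared with the computation time. The kernel, data size, and launch configuration are assumptions chosen only to make the copy/compute split visible.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// A deliberately light kernel so that the memory copy time dominates.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

static float elapsedMs(cudaEvent_t a, cudaEvent_t b) {
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, a, b);
    return ms;
}

int main(void) {
    const int n = 1 << 24;                       // 16M floats (64 MiB), assumed
    size_t bytes = n * sizeof(float);
    float* h = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float* d;
    cudaMalloc(&d, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // Host -> Device copy
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);       // computation
    cudaEventRecord(t2);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // Device -> Host copy
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    printf("H->D copy: %.3f ms\n", elapsedMs(t0, t1));
    printf("kernel   : %.3f ms\n", elapsedMs(t1, t2));
    printf("D->H copy: %.3f ms\n", elapsedMs(t2, t3));

    cudaFree(d); free(h);
    return 0;
}
```

Repeating the kernel launch many times between the two copies, or increasing the arithmetic done per element, shifts the balance toward computation, which is exactly the trade-off the paper examines.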

Real-time BCC Volume Isosurface Ray Casting on the GPU (GPU를 이용한 실시간 BCC 볼륨 등가면 레이 캐스팅)

  • Kim, Minho;Lee, Young-Joon
    • Journal of the Korea Computer Graphics Society / v.18 no.4 / pp.25-34 / 2012
  • This paper presents a real-time GPU (graphics processing unit) ray casting scheme for rendering isosurfaces of BCC (body-centered cubic) volume datasets. A quartic spline field is built using the 7-direction box-spline filter together with a quasi-interpolation prefilter. To obtain an interactive rendering speed on the graphics hardware, the shader code was optimized to avoid lookup tables and conditional branches and to minimize data-fetch overhead. Compared to previous implementations, our work outperforms the comparable one by more than 20%, and the rendering quality is superior to that of the others.
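As an orientation sketch only, the kernel below marches one ray per pixel and detects an isosurface crossing by a sign change of f(x) - isovalue, with a linear refinement of the hit position. It substitutes a procedural scalar field for the volume data and uses an orthographic camera; the paper instead evaluates a quartic 7-direction box-spline field over a BCC lattice, so this is not the authors' method.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// A procedural scalar field stands in for the reconstructed volume; the paper
// instead evaluates a quartic box-spline over BCC samples.
__device__ float field(float3 p) {
    return p.x * p.x + p.y * p.y + p.z * p.z;   // f(p) = |p|^2
}

// One thread per pixel: march an orthographic ray along +z and record the
// depth of the first crossing of the isovalue (sign change of f - iso).
__global__ void rayCast(float* depth, int w, int h, float iso) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float3 p = make_float3((x + 0.5f) / w * 2.0f - 1.0f,
                           (y + 0.5f) / h * 2.0f - 1.0f,
                           -2.0f);                      // ray start
    const float dt = 0.01f;
    float prev = field(p) - iso;
    float hit = -1.0f;                                  // -1 means "no hit"

    for (int i = 0; i < 400; ++i) {                     // march along +z
        p.z += dt;
        float cur = field(p) - iso;
        if (prev * cur < 0.0f) {                        // isosurface crossed
            hit = p.z - dt * cur / (cur - prev);        // linear refinement
            break;
        }
        prev = cur;
    }
    depth[y * w + x] = hit;
}

int main(void) {
    const int w = 64, h = 64;
    float* d; cudaMalloc(&d, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    rayCast<<<grid, block>>>(d, w, h, 0.25f);           // isosurface |p| = 0.5

    float* hDepth = (float*)malloc(w * h * sizeof(float));
    cudaMemcpy(hDepth, d, w * h * sizeof(float), cudaMemcpyDeviceToHost);
    printf("hit depth at center pixel: %f\n", hDepth[(h / 2) * w + w / 2]);
    cudaFree(d); free(hDepth);
    return 0;
}
```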