• Title/Summary/Keyword: Graphics processing unit


Design of a Floating Point Unit for 3D Graphics Geometry Engine (3D 그래픽 Geometry Engine을 위한 부동소수점 연산기의 설계)

  • Kim, Myeong Hwm;Oh, Min Seok;Lee, Kwang Yeob;Kim, Won Jong;Cho, Han Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.10 s.340 / pp.55-64 / 2005
  • In this paper, we designed floating-point units to accelerate real-time 3D graphics geometry processing. The designed floating-point units support the IEEE-754 single-precision format, and on a Xilinx Virtex-2 device we confirmed operation at 100 MHz for the floating-point add/multiply unit, 120 MHz for the floating-point Newton-Raphson (NR) inverse-division unit, 200 MHz for the floating-point power unit, and 120 MHz for the floating-point inverse-square-root unit. Using these floating-point units, we also designed a geometry processor and verified 3D graphics data processing.
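
The Newton-Raphson (NR) iteration mentioned for the inverse-division and inverse-square-root units can be illustrated in software. The CUDA sketch below is not the authors' hardware design; it only shows, with an assumed bit-level seed and a fixed number of refinement steps, the kind of iteration such units pipeline.

```cuda
// Minimal illustration of Newton-Raphson refinement for 1/sqrt(a) and 1/a.
// Seed choice and step counts are assumptions for illustration only.
__device__ float nr_rsqrt(float a) {
    // Classic bit-level seed for 1/sqrt(a), then two NR refinement steps.
    float x = __int_as_float(0x5f3759df - (__float_as_int(a) >> 1));
    x = x * (1.5f - 0.5f * a * x * x);   // x <- x * (1.5 - 0.5*a*x^2)
    x = x * (1.5f - 0.5f * a * x * x);
    return x;
}

__device__ float nr_reciprocal(float a) {
    // For a > 0, 1/a = (1/sqrt(a))^2; refine with one NR division step.
    float x = nr_rsqrt(a);
    x = x * x;
    x = x * (2.0f - a * x);              // x <- x * (2 - a*x)
    return x;
}

__global__ void nr_demo(const float* in, float* recip, float* rsq, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && in[i] > 0.0f) {
        rsq[i]   = nr_rsqrt(in[i]);
        recip[i] = nr_reciprocal(in[i]);
    }
}
```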

The Need of Cache Partitioning on Shared Cache of Integrated Graphics Processor between CPU and GPU (내장형 GPU 환경에서 CPU-GPU 간의 공유 캐시에서의 캐시 분할 방식의 필요성)

  • Sung, Hanul;Eom, Hyeonsang;Yeom, HeonYoung
    • KIISE Transactions on Computing Practices / v.20 no.9 / pp.507-512 / 2014
  • Recently, distributed computing has begun to use both the CPU (central processing unit) and the GPU (graphics processing unit) to improve performance and to cope with the dark-silicon problem, in which not all transistors on a chip can be powered at once because of power limitations. In an integrated graphics processor, the CPU and GPU share memory and the last-level cache (LLC). However, there are no LLC access rules between the CPU and GPU, so when CPU and GPU processes run at the same time, the performance of both degrades because of contention on the LLC. This paper gives evidence for the need for cache partitioning and describes a cache-partitioning design that uses page coloring to allocate L3 cache space exclusively to the GPU process, in order to guarantee its performance.
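
Page coloring, the partitioning mechanism mentioned in the abstract, exploits the physical-address bits shared between the page number and the cache set index. The host-side sketch below illustrates only that mapping; the cache geometry and the color split between CPU and GPU pools are assumptions for illustration, not the paper's configuration.

```cuda
#include <cstdint>
#include <cstdio>

// Assumed cache geometry (not the paper's): 4 KiB pages, 64 B lines,
// 8 MiB 16-way LLC. The "color" of a page is the set-index bits that
// lie above the page offset.
constexpr uint64_t kPageShift  = 12;
constexpr uint64_t kLineShift  = 6;
constexpr uint64_t kCacheBytes = 8ull << 20;
constexpr uint64_t kWays       = 16;

constexpr uint64_t kSets   = kCacheBytes / (kWays << kLineShift);
constexpr uint64_t kColors = (kSets << kLineShift) >> kPageShift;

// Color of a physical page frame number (PFN).
uint64_t page_color(uint64_t pfn) { return pfn % kColors; }

int main() {
    // An allocator enforcing the partition would hand the GPU process only
    // frames whose color falls in its reserved range, e.g. colors [0, 8).
    for (uint64_t pfn = 0; pfn < 16; ++pfn)
        printf("pfn %llu -> color %llu (%s)\n",
               (unsigned long long)pfn, (unsigned long long)page_color(pfn),
               page_color(pfn) < 8 ? "GPU pool" : "CPU pool");
    return 0;
}
```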

An Efficient Block Cipher Implementation on Many-Core Graphics Processing Units

  • Lee, Sang-Pil;Kim, Deok-Ho;Yi, Jae-Young;Ro, Won-Woo
    • Journal of Information Processing Systems / v.8 no.1 / pp.159-174 / 2012
  • This paper presents a study on a high-performance design for a block cipher algorithm implemented on modern many-core graphics processing units (GPUs). Advances in VLSI technology make it feasible to fabricate multiple processing cores on a single chip and enable general-purpose computation on a GPU (GPGPU). The GPGPU approach offers significant performance improvements for general-purpose computation and can support a broad variety of applications, including cryptography. We propose an efficient implementation of the encryption/decryption operations of the SEED block cipher algorithm on off-the-shelf NVIDIA many-core graphics processors. In a thorough experiment, we achieved performance capable of supporting a network speed of up to 9.5 Gbps on an NVIDIA GTX285 system (which has 240 processing cores). Our implementation provides up to 4.75 times higher encoding and decoding throughput than an Intel 8-core system.
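
Block ciphers map well to GPUs because independent blocks can be encrypted by independent threads. The sketch below shows only that parallelization pattern; `seed_encrypt_block` is a stub standing in for the real SEED key schedule and 16-round Feistel network, which are not reproduced here.

```cuda
#include <cstdint>

// Stub standing in for the real SEED block encryption; here we only mix
// the words with a few round keys so the sketch compiles and runs.
__device__ void seed_encrypt_block(const uint32_t rk[32],
                                   const uint32_t in[4], uint32_t out[4]) {
    for (int i = 0; i < 4; ++i) out[i] = in[i] ^ rk[i];
}

// One thread encrypts one independent 128-bit block (ECB/CTR-style
// parallelism), which is what makes block ciphers map well to GPUs.
__global__ void seed_encrypt_kernel(const uint32_t* __restrict__ rk,
                                    const uint32_t* __restrict__ pt,
                                    uint32_t* __restrict__ ct,
                                    size_t num_blocks) {
    size_t b = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (b < num_blocks)
        seed_encrypt_block(rk, &pt[4 * b], &ct[4 * b]);
}
```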

GPGPU Task Management Technique to Mitigate Performance Degradation of Virtual Machines due to GPU Operation in Cloud Environments (클라우드 환경에서 GPU 연산으로 인한 가상머신의 성능 저하를 완화하는 GPGPU 작업 관리 기법)

  • Kang, Jihun;Gil, Joon-Min
    • KIPS Transactions on Computer and Communication Systems / v.9 no.9 / pp.189-196 / 2020
  • Recently, GPU cloud computing, in which GPU (Graphics Processing Unit) devices are assigned to virtual machines, has become widely used. In a cloud environment, a GPU assigned to a virtual machine can perform computations faster than the CPU through massive parallelism, which provides many benefits when operating high-performance computing services in a variety of fields. However, the virtual machine scheduler, which is based on a virtual machine's CPU usage time, does not take GPU usage time into account, and this affects the performance of other virtual machines. In this paper, we measure and analyze the performance degradation suffered by other virtual machines because of a virtual machine that performs GPGPU (General-Purpose computing on Graphics Processing Units) tasks in a direct-path-based GPU virtualization environment, which is commonly used when assigning GPUs to virtual machines in the cloud. To solve this problem, we then propose a GPGPU task management technique for virtual machines.

An Implementation of a Video-Equipped Real-Time Fire Detection Algorithm Using GPGPU (GPGPU를 이용한 비디오 기반 실시간 화재감지 알고리즘 구현)

  • Shon, Dong-Koo;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.19 no.8 / pp.1-10 / 2014
  • This paper proposes a parallel implementation of a video-based, four-stage fire detection algorithm using a general-purpose graphics processing unit (GPGPU) to support real-time processing of this computationally intensive algorithm. In addition, this paper compares the performance of the GPGPU-based fire detection implementation with that of a CPU implementation to show the effectiveness of the proposed method. In experiments using five fire videos at SXGA (1400×1050) resolution, the proposed GPGPU implementation achieves 6.6x better performance than the CPU implementation, taking 30.53 ms per frame, which satisfies the real-time processing requirement (30 frames per second, 30 fps) of the fire detection algorithm.
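
As a rough illustration of how such a pipeline parallelizes per pixel, the kernel below computes a simple color-rule candidate mask over an RGB frame; the rule and thresholds are generic assumptions, not the paper's actual four-stage algorithm.

```cuda
#include <cstdint>

// Illustrative first stage of a fire-detection pipeline: a per-pixel,
// color-rule candidate mask. Pixels are assumed to be packed RGB (uchar3).
__global__ void fire_candidate_mask(const uchar3* __restrict__ frame,
                                    uint8_t* __restrict__ mask,
                                    int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    uchar3 p = frame[y * width + x];
    // Generic flame heuristic: red-dominant and reasonably bright.
    bool candidate = (p.x > 180) && (p.x >= p.y) && (p.y > p.z);
    mask[y * width + x] = candidate ? 255 : 0;
}
```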

Parallel Implementation and Performance Evaluation of the SIFT Algorithm Using a Many-Core Processor (매니코어 프로세서를 이용한 SIFT 알고리즘 병렬구현 및 성능분석)

  • Kim, Jae-Young;Son, Dong-Koo;Kim, Jong-Myon;Jun, Heesung
    • Journal of the Korea Society of Computer and Information / v.18 no.9 / pp.1-10 / 2013
  • In this paper, we implement the SIFT (Scale-Invariant Feature Transform) algorithm for feature point extraction on a many-core processor and analyze the performance, area efficiency, and system-level area efficiency of the many-core processor. In addition, we demonstrate the potential of the proposed many-core processor by comparing its performance with that of a high-performance CPU and a GPU (Graphics Processing Unit). Experimental results indicate that the accuracy of the SIFT algorithm on the many-core processor is the same as that of OpenCV, and that the many-core processor outperforms the CPU and GPU in terms of execution time. Moreover, this paper proposes an optimal model of the SIFT algorithm on the many-core processor by analyzing energy efficiency and area efficiency for different octave sizes.
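
For a sense of the data-parallel structure that makes SIFT amenable to many-core hardware, the CUDA fragment below computes the per-pixel difference-of-Gaussians (DoG) used in SIFT's scale-space stage. It is only one small illustrative piece, and the paper itself targets a many-core processor rather than a GPU.

```cuda
// Per-pixel difference of two pre-blurred images (one DoG level).
// Gaussian blurring, extrema detection, and descriptors are not shown.
__global__ void dog_kernel(const float* __restrict__ blur_a,
                           const float* __restrict__ blur_b,
                           float* __restrict__ dog, int num_pixels) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_pixels) dog[i] = blur_a[i] - blur_b[i];
}
```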

Accelerating Depth Image-Based Rendering Using GPU (GPU를 이용한 깊이 영상기반 렌더링의 가속)

  • Lee, Man-Hee;Park, In-Kyu
    • Journal of KIISE: Computer Systems and Theory / v.33 no.11 / pp.853-858 / 2006
  • In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphic objects using the graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, namely that it is slow because it is hardly assisted by graphics hardware and that surface lighting is static. Utilizing the features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface shading in response to varying illumination is performed in the vertex shader, while adaptive point splatting is performed in the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
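
The paper's split between vertex-shader lighting and fragment-shader splatting is specific to the OpenGL pipeline; to keep the examples in one language, the CUDA sketch below shows only the core DIBR step of back-projecting depth pixels into 3D points through an assumed pinhole camera model, not the shaders themselves.

```cuda
// Assumed pinhole intrinsics; these parameters are illustrative only.
struct Intrinsics { float fx, fy, cx, cy; };

// Back-project each depth pixel (x, y, z) to a 3D point. A full DIBR
// renderer would then light and splat these points into the target view.
__global__ void backproject(const float* __restrict__ depth,
                            float3* __restrict__ points,
                            Intrinsics K, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float z = depth[y * width + x];
    points[y * width + x] = make_float3((x - K.cx) * z / K.fx,
                                        (y - K.cy) * z / K.fy,
                                        z);
}
```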

Benchmark Results of a Radio Spectrometer Based on Graphics Processing Unit

  • Kim, Jongsoo;Wagner, Jan
    • The Bulletin of The Korean Astronomical Society / v.40 no.2 / pp.44.1-44.1 / 2015
  • We set up a project to build spectrometers for single-dish observations with the Korean VLBI Network (KVN), a new future multi-beam receiver of the ASTE (Atacama Submillimeter Telescope Experiment), and the total power (TP) antennas of the Atacama Large Millimeter/submillimeter Array (ALMA). Traditionally, spectrometers based on ASICs (Application-Specific Integrated Circuits) and FPGAs (Field-Programmable Gate Arrays) have been used in radio astronomy. However, Graphics Processing Unit (GPU) technology is now viable for spectrometers owing to the rapid improvement in its performance. A high-resolution spectrometer should provide the following functions: a poly-phase filter, data-bit conversion, fast Fourier transform, and complex multiplication. We wrote a program based on CUDA (Compute Unified Device Architecture) for a GPU spectrometer and measured its performance using two NVIDIA GPU cards, a Titan X and a K40m. A non-optimized GPU code can process a data stream of around 2 GHz bandwidth, which is enough for the KVN spectrometer and promising for the ASTE and ALMA TP spectrometers.
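
Of the stages listed (poly-phase filter, bit conversion, FFT, complex multiplication), the FFT and power-accumulation steps can be sketched directly with cuFFT. The code below is an illustrative pass over batched spectra, not the project's actual spectrometer code; the channel count and batch size are left as parameters, and the poly-phase filter and bit conversion are omitted.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Per-channel power after the FFT stage: |X[k]|^2, accumulated into an
// integration buffer.
__global__ void accumulate_power(const cufftComplex* __restrict__ spec,
                                 float* __restrict__ power, int nchan) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k < nchan)
        power[k] += spec[k].x * spec[k].x + spec[k].y * spec[k].y;
}

// One FFT + accumulate pass over `batch` spectra of `nchan` channels each.
void spectrometer_pass(cufftComplex* d_samples, float* d_power,
                       int nchan, int batch) {
    cufftHandle plan;
    cufftPlan1d(&plan, nchan, CUFFT_C2C, batch);          // batched 1-D FFT
    cufftExecC2C(plan, d_samples, d_samples, CUFFT_FORWARD);

    int threads = 256;
    for (int b = 0; b < batch; ++b)
        accumulate_power<<<(nchan + threads - 1) / threads, threads>>>(
            d_samples + (size_t)b * nchan, d_power, nchan);

    cudaDeviceSynchronize();
    cufftDestroy(plan);
}
```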


Fast Double Random Phase Encoding by Using Graphics Processing Unit (GPU 컴퓨팅에 의한 고속 Double Random Phase Encoding)

  • Saifullah, Saifullah;Moon, In-Kyu
    • Proceedings of the Korea Multimedia Society Conference / 2012.05a / pp.343-344 / 2012
  • With the increase of sensitive data and the need for their secure transmission and storage, the use of encryption techniques has become widespread. The performance of an encoding scheme depends largely on its computation time, so a system with a shorter computation time is preferable. Double Random Phase Encoding (DRPE) is an algorithm with many sub-functions that consumes considerable time when executed serially; the computation time can be significantly reduced by implementing the important functions in parallel on a Graphics Processing Unit (GPU). Computing the convolution using the fast Fourier transform is the most important part of the DRPE algorithm, and the paper shows that performing this portion on the GPU reduces the execution time of the process by a substantial amount, compared against MATLAB for performance analysis. An NVIDIA GeForce 310 graphics card is used, with CUDA C as the programming language.
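
DRPE multiplies the input image by a random phase mask, Fourier transforms it, applies a second mask in the Fourier domain, and inverse transforms. The sketch below shows that sequence with cuFFT; the mask representation (phase angles in radians) and the omission of inverse-FFT normalization are simplifying assumptions, and this is not the paper's implementation.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Multiply a complex field element-wise by exp(i*phase): the "random phase
// mask" operation in DRPE. Masks are assumed stored as phase angles.
__global__ void apply_phase_mask(cufftComplex* __restrict__ field,
                                 const float* __restrict__ phase, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float s, c;
        sincosf(phase[i], &s, &c);
        cufftComplex v = field[i];
        field[i].x = v.x * c - v.y * s;   // complex multiply by (c + i*s)
        field[i].y = v.x * s + v.y * c;
    }
}

// DRPE encryption sketch: mask in the spatial domain, FFT, mask in the
// Fourier domain, inverse FFT. Inverse-FFT normalization is omitted.
void drpe_encrypt(cufftComplex* d_img, const float* d_mask1,
                  const float* d_mask2, int width, int height) {
    int n = width * height, threads = 256, blocks = (n + threads - 1) / threads;
    cufftHandle plan;
    cufftPlan2d(&plan, height, width, CUFFT_C2C);

    apply_phase_mask<<<blocks, threads>>>(d_img, d_mask1, n);
    cufftExecC2C(plan, d_img, d_img, CUFFT_FORWARD);
    apply_phase_mask<<<blocks, threads>>>(d_img, d_mask2, n);
    cufftExecC2C(plan, d_img, d_img, CUFFT_INVERSE);

    cudaDeviceSynchronize();
    cufftDestroy(plan);
}
```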
