• Title/Summary/Keyword: GPU implementation


Implementation of real-time FD-OCT system based on asynchronous triple buffering and parallel processing using GPU (GPU 병렬처리와 비동기 트리플 버퍼를 적용한 실시간 FD-OCT 시스템 구현)

  • Jeon, Jun-Young;Kim, Young-Bong
    • Annual Conference of KIPS
    • /
    • 2014.04a
    • /
    • pp.858-860
    • /
    • 2014
  • With recent advances in image processing techniques and hardware, the medical field uses a variety of imaging systems for diagnosing disease. OCT in particular has drawn much attention because it can acquire high-resolution images of human tissue and measure blood flow velocity at the same time, making it applicable to many areas of medicine. As more algorithms and filters are applied to obtain sharper OCT images, faster processing is required. In this paper, on a system with at least a dual-core CPU, the data processing module and the rendering module are multithreaded asynchronously through a triple buffer, and the data processing itself is accelerated with GPU-based parallel processing. As a result, clear real-time OCT images could be observed during optical camera acquisition.
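
The decoupling of the processing and rendering threads that the abstract describes is essentially the classic triple-buffer pattern. The host-side sketch below is a minimal illustration of that pattern under assumed names (TripleBuffer, OctFrame); it is not the authors' code.

```cuda
// Minimal sketch of an asynchronous triple buffer (hypothetical, not the paper's code).
// The producer (data-processing thread) always writes into a free slot; the consumer
// (rendering thread) always picks up the most recently completed frame, so neither
// thread blocks waiting for the other.
#include <array>
#include <mutex>
#include <vector>

template <typename Frame>
class TripleBuffer {
public:
    // Producer side: fill the back buffer, then publish it.
    Frame& backBuffer() { return slots_[back_]; }
    void publish() {
        std::lock_guard<std::mutex> lock(m_);
        std::swap(back_, pending_);   // the completed frame becomes "pending"
        fresh_ = true;
    }
    // Consumer side: grab the latest published frame if there is one.
    Frame& frontBuffer() {
        std::lock_guard<std::mutex> lock(m_);
        if (fresh_) {
            std::swap(front_, pending_);
            fresh_ = false;
        }
        return slots_[front_];
    }
private:
    std::array<Frame, 3> slots_;
    int back_ = 0, pending_ = 1, front_ = 2;
    bool fresh_ = false;
    std::mutex m_;
};

// Example: the processing thread publishes GPU-processed A-scan frames,
// and the rendering thread consumes them at its own rate.
using OctFrame = std::vector<float>;
// TripleBuffer<OctFrame> buffers;  // shared between the two threads
```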

The Implementation of Fast 3D Object Tracking using GPU (GPU를 이용한 3차원 고속 물체 추적 알고리즘 구현)

  • Kim, Su-Hyun;Jo, Chang-woo;Jeong, Chang-sung
    • Annual Conference of KIPS
    • /
    • 2013.05a
    • /
    • pp.374-376
    • /
    • 2013
  • As interest in augmented reality (AR) grows, developing fast and robust object tracking techniques has become a major issue. In particular, robust markerless 3D tracking, which must achieve both tracking speed and accuracy without the help of markers, is being studied extensively. In this paper, we propose a highly accurate object tracking method based on feature point extraction and matching with SIFT (Scale Invariant Feature Transform). The slow feature extraction and matching stages of SIFT, which make it hard to apply in real time, are improved through GPU-based parallelization, resulting in a higher tracking speed.
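
The matching stage that the abstract identifies as the GPU target is commonly parallelized by giving each query descriptor its own thread. The kernel below is a generic brute-force sketch of that idea for 128-dimensional SIFT descriptors (names and data layout are assumptions, not the authors' implementation).

```cuda
// Brute-force SIFT descriptor matching on the GPU: one thread per query descriptor
// scans all reference descriptors and keeps the nearest neighbor (squared L2 distance).
#include <cfloat>
#include <cuda_runtime.h>

__global__ void matchDescriptors(const float* query,  // [numQuery x 128]
                                 const float* ref,    // [numRef   x 128]
                                 int* bestIdx, float* bestDist,
                                 int numQuery, int numRef)
{
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= numQuery) return;

    const float* qd = query + q * 128;
    float best = FLT_MAX;
    int   idx  = -1;
    for (int r = 0; r < numRef; ++r) {
        const float* rd = ref + r * 128;
        float d = 0.0f;
        for (int k = 0; k < 128; ++k) {
            float diff = qd[k] - rd[k];
            d += diff * diff;
        }
        if (d < best) { best = d; idx = r; }
    }
    bestIdx[q]  = idx;
    bestDist[q] = best;
}

// Launch example (descriptor arrays already on the device):
// int threads = 128, blocks = (numQuery + threads - 1) / threads;
// matchDescriptors<<<blocks, threads>>>(dQuery, dRef, dIdx, dDist, numQuery, numRef);
```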

Implementation of fast moving detection using CUDA (CUDA를 이용한 고속 움직임 탐지 구현)

  • Lee, Seong-Yeon;Park, Seong-Mo;Kim, Jong-Nam
    • Annual Conference of KIPS
    • /
    • 2009.04a
    • /
    • pp.132-133
    • /
    • 2009
  • Motion detection systems are widely used to prevent unnecessary recording by surveillance cameras. With the high-definition CCTV cameras released recently, however, real-time processing is difficult because of the computational complexity. To address this, we implement a fast motion detection system using CUDA. Conventional motion detection systems have been limited to low detection speeds by their processing throughput, and making them run fast required expensive components, placing a burden on users. Implementing the motion detection system on today's rapidly improving high-speed GPUs instead can deliver better performance at a lower price. We therefore implement the system using NVIDIA's CUDA, a general-purpose GPU computing technology. In experiments, the GPU-based system was about 80 times faster than the CPU-based system. The proposed method can serve as a high-speed surveillance camera server on systems equipped with an NVIDIA graphics card.
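
The core of such a detector is per-pixel frame differencing, which maps directly onto one CUDA thread per pixel. The kernel below is a minimal generic sketch (the simple thresholding rule and parameter names are assumptions, not the paper's code).

```cuda
// Frame-differencing motion detection: one thread per pixel compares the current
// frame against the previous one and marks pixels whose absolute difference
// exceeds a threshold.
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void diffFrames(const unsigned char* curr, const unsigned char* prev,
                           unsigned char* mask, int width, int height,
                           int threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    int diff = abs((int)curr[i] - (int)prev[i]);
    mask[i] = (diff > threshold) ? 255 : 0;   // 255 = motion, 0 = static
}

// Launch example for a grayscale 1920x1080 frame:
// dim3 block(16, 16);
// dim3 grid((1920 + 15) / 16, (1080 + 15) / 16);
// diffFrames<<<grid, block>>>(dCurr, dPrev, dMask, 1920, 1080, 25);
```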

FLUID SIMULATION METHODS FOR COMPUTER GRAPHICS SPECIAL EFFECTS (컴퓨터 그래픽스 특수효과를 위한 유체시뮬레이션 기법들)

  • Jung, Moon-Ryul
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference
    • /
    • 2009.11a
    • /
    • pp.1-1
    • /
    • 2009
  • In this presentation, I talk about the various fluid simulation methods that have been developed for computer graphics special effects since 1996. They are all based on CFD but sacrifice physical reality for visual plausibility and speed. As the speed of computers increases rapidly and the capability of the GPU (graphics processing unit) improves, however, methods aiming at more physical realism have been tried. In this talk, I focus on four aspects of fluid simulation methods for computer graphics: (1) particle level-set methods, (2) particle-based simulation, (3) methods for exact satisfaction of the incompressibility constraint, and (4) GPU-based simulation. (1) Particle level-set methods evolve the surface of the fluid by means of the zero-level set and a band of massless marker particles on both sides of it. The evolution of the zero-level set captures the surface approximately, the evolution of the marker particles captures the fine details of the surface, and the zero-level set is corrected from the particle positions at each step of the evolution. (2) Recently the particle-based Lagrangian approach to fluid simulation has gained popularity, because it automatically respects mass conservation and the difficulty of tracking the surface geometry has been somewhat addressed. (3) Until recently, fluid simulation algorithms were dominated by approximate fractional step methods. They split the Navier-Stokes equations in two, so that the first step solves the equations without considering the incompressibility constraint and the second finds the pressure that satisfies the constraint. In this approach the first step inevitably introduces error, producing numerical diffusion in the solution. Recently, however, exact fractional step methods without this error have been developed (by fluid mechanics researchers), and another method was introduced that satisfies the incompressibility constraint by formulating the fluid in terms of the vorticity field rather than the velocity field (by computer graphics researchers). (4) Finally, I want to mention GPU implementations of fluid simulation, which take advantage of the fact that the discretized fluid equations can be solved in parallel.
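
Of the four topics, (4) is the most directly code-shaped: in a grid-based solver, the pressure projection reduces to a sparse linear system whose Jacobi iteration assigns one thread per cell. The 2D kernel below is a minimal generic sketch under assumed names, not any particular solver from the talk.

```cuda
// One Jacobi iteration of the 2D pressure Poisson equation laplacian(p) = div on a
// uniform grid, one thread per interior cell. Repeating this kernel many times
// (ping-ponging pIn/pOut) approximates the pressure that enforces incompressibility
// in a fractional-step solver.
#include <cuda_runtime.h>

__global__ void jacobiPressure(const float* pIn, float* pOut,
                               const float* div, int nx, int ny, float dx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // skip boundary cells

    int idx = j * nx + i;
    float left  = pIn[idx - 1],  right = pIn[idx + 1];
    float down  = pIn[idx - nx], up    = pIn[idx + nx];
    // Jacobi update for the 5-point Laplacian stencil.
    pOut[idx] = 0.25f * (left + right + down + up - dx * dx * div[idx]);
}

// Typical usage: run a few hundred iterations, swapping pIn/pOut each time, then
// subtract grad(p) from the velocity field to project it to a divergence-free state.
```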

Development of a Remote Rendering System using Direct3D API (Direct3D API의 원격 실시간 실행 시스템 개발)

  • Lim, Choong-Gyoo
    • Journal of Korea Game Society
    • /
    • v.14 no.5
    • /
    • pp.117-126
    • /
    • 2014
  • A remote execution system for legacy 3D APIs has various kinds of applications. It can be used to implement a cloud gaming service based on real-time video streaming, or to implement GPU virtualization for rendering many different 3D applications simultaneously. The OpenGL API consists of independent global functions, while the Direct3D API consists of Microsoft COM-based interfaces and their member functions, which makes a remote rendering system more difficult to implement. The purpose of the paper is to show that the technology applies to any legacy 3D API by designing and implementing a remote rendering system for the Direct3D API. The implementation is applied to a sample Direct3D application, and a few experiments demonstrate its technical feasibility.
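
Because Direct3D calls go through COM interfaces rather than free functions, a remote renderer typically wraps each interface in a proxy that serializes every member-function call and ships it to the server. The sketch below illustrates that proxy pattern with a deliberately simplified, hypothetical interface; it does not use the real IDirect3DDevice9 signatures and is not the paper's design.

```cuda
// Hypothetical illustration of the proxy pattern a remote renderer needs: each
// COM-style interface is wrapped so that member-function calls are serialized and
// forwarded to the remote server instead of the local driver.
#include <cstdint>
#include <vector>

struct Command { uint32_t opcode; std::vector<uint8_t> args; };

// Simplified stand-in for a Direct3D-like device interface (NOT the real API).
struct IRenderDevice {
    virtual void Clear(uint32_t color) = 0;
    virtual void DrawPrimitive(uint32_t primType, uint32_t startVertex,
                               uint32_t primCount) = 0;
    virtual ~IRenderDevice() = default;
};

// Client-side proxy: records calls into a command stream instead of executing them.
class RemoteDeviceProxy : public IRenderDevice {
public:
    explicit RemoteDeviceProxy(std::vector<Command>& stream) : stream_(stream) {}
    void Clear(uint32_t color) override {
        stream_.push_back({0x01, pack(color)});
    }
    void DrawPrimitive(uint32_t primType, uint32_t startVertex,
                       uint32_t primCount) override {
        stream_.push_back({0x02, pack(primType, startVertex, primCount)});
    }
private:
    template <typename... Ts>
    static std::vector<uint8_t> pack(Ts... vals) {
        std::vector<uint8_t> out;
        auto append = [&out](uint32_t v) {
            const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
            out.insert(out.end(), p, p + sizeof(v));
        };
        (append(static_cast<uint32_t>(vals)), ...);   // serialize each argument
        return out;
    }
    std::vector<Command>& stream_;
};

// The server side replays the command stream against a real device and streams
// the rendered frames back to the client as video.
```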

Parallel Approximate String Matching with k-Mismatches for Multiple Fixed-Length Patterns in DNA Sequences on Graphics Processing Units (GPU을 이용한 다중 고정 길이 패턴을 갖는 DNA 시퀀스에 대한 k-Mismatches에 의한 근사적 병열 스트링 매칭)

  • Ho, ThienLuan;Kim, HyunJin;Oh, SeungRohk
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.6
    • /
    • pp.955-961
    • /
    • 2017
  • In this paper, we propose a parallel approximate string matching algorithm with k-mismatches for multiple fixed-length patterns (PMASM) in DNA sequences. PMASM is developed from parallel single-pattern approximate string matching algorithms to efficiently calculate the Hamming distances for multiple patterns of a fixed length. In the preprocessing phase of PMASM, all target patterns are binary encoded and stored in a look-up memory. With each input character from the input string, the Hamming distances between a substring and all patterns can be updated at the same time based on the binary encoding information in the look-up memory. Moreover, PMASM adopts graphics processing units (GPUs) to perform the data computations in parallel. This paper presents three PMASM implementation methods on GPUs: the thread PMASM, block-thread PMASM, and shared-mem PMASM methods. The shared-mem PMASM method shows how to make effective use of the GPU's parallel capacity, and it also exploits features of the CUDA (Compute Unified Device Architecture) memory hierarchy to optimize performance. In experiments with DNA sequences, the proposed PMASM on the GPU is 385, 77, and 64 times faster than the traditional naive algorithm, the shift-add algorithm, and the single-thread PMASM implementation on the CPU, respectively. On the same NVIDIA GPU model, the performance of the proposed approach is up to 44% and 21% higher than that of the naive and shift-add algorithms, respectively.
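
The simplest of the three variants, the thread PMASM, assigns one alignment position per thread. The kernel below is a rough sketch of that idea using plain character comparison (the paper's binary encoding and look-up memory are omitted; layout and names are assumptions).

```cuda
// Per-thread approximate matching with k mismatches: each thread takes one starting
// position in the DNA sequence, compares the substring of length m against every
// pattern, and reports the first pattern whose Hamming distance is at most k.
#include <cuda_runtime.h>

__global__ void hammingMatch(const char* text, int textLen,
                             const char* patterns,  // [numPatterns x m]
                             int numPatterns, int m, int k,
                             int* matchPattern)     // per position: -1 or pattern id
{
    int pos = blockIdx.x * blockDim.x + threadIdx.x;
    if (pos > textLen - m) return;

    int found = -1;
    for (int p = 0; p < numPatterns && found < 0; ++p) {
        int mismatches = 0;
        for (int j = 0; j < m && mismatches <= k; ++j)
            if (text[pos + j] != patterns[p * m + j]) ++mismatches;
        if (mismatches <= k) found = p;
    }
    matchPattern[pos] = found;
}

// Launch example for patterns of length m = 32 and k = 2 allowed mismatches:
// int threads = 256, blocks = (textLen - m + threads) / threads;
// hammingMatch<<<blocks, threads>>>(dText, textLen, dPat, numPat, 32, 2, dOut);
```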

Correct Implementation of Sub-warp Parallel Prefix Operations based on GPU Hardware Architecture (GPU 하드웨어 아키텍처 기반 sub-warp 단위 병렬 프리픽스(prefix) 연산의 정확한 구현)

  • Park, Taejung
    • Journal of Digital Contents Society
    • /
    • v.18 no.3
    • /
    • pp.613-619
    • /
    • 2017
  • This paper presents CUDA (Compute Unified Device Architecture) code that produces correct results for GPU parallel segmented prefix operations with segment lengths of less than 32 over large data arrays. Mark Harris and Michael Garland published CUDA code for this task. This paper shows that their code does not generate correct results when the local segment length is less than 32, discusses the cause of the problem, and presents CUDA code that does generate correct results. The segmented parallel prefix operation presented here can be applied as a building block in various large-scale parallel algorithms, including k-nearest neighbor search.
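
A warp-level segmented scan of the kind discussed here can be built from ballot and shuffle intrinsics. The sketch below is one correct formulation of a segmented inclusive sum within a single warp, given per-element head flags; it is a generic illustration of the technique, not the code analyzed or proposed in the paper.

```cuda
// Segmented inclusive prefix sum inside one warp (segments shorter than 32).
// 'flag' is 1 on the first element of each segment, 0 elsewhere; all 32 lanes
// of the warp are assumed active.
#include <cuda_runtime.h>

__device__ float segmentedWarpScan(float val, int flag)
{
    const unsigned FULL = 0xffffffffu;
    int lane = threadIdx.x & 31;

    // Lane index where this thread's segment begins: the highest lane <= 'lane'
    // whose head flag is set (0 if no flag is set at or below this lane).
    unsigned heads = __ballot_sync(FULL, flag);
    unsigned below = heads & ((2u << lane) - 1u);   // head flags in lanes [0, lane]
    int segStart   = below ? (31 - __clz(below)) : 0;

    // Kogge-Stone scan that never accumulates across the segment boundary.
    for (int offset = 1; offset < 32; offset <<= 1) {
        float up = __shfl_up_sync(FULL, val, offset);
        if (lane - offset >= segStart) val += up;
    }
    return val;   // inclusive prefix sum within this thread's segment
}
```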

The Implementation of Fast Object Recognition Using Parallel Processing on CPU and GPU (CPU와 GPU의 병렬 처리를 이용한 고속 물체 인식 알고리즘 구현)

  • Kim, Jun-Chul;Jung, Young-Han;Park, Eun-Soo;Cui, Xue-Nan;Kim, Hak-Il;Huh, Uk-Youl
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.488-495
    • /
    • 2009
  • This paper presents a fast feature extraction method for autonomous mobile robots that uses parallel processing based on OpenMP, SSE (Streaming SIMD Extensions), and CUDA programming. In the first step, the CPU version of the algorithms and code is optimized and then parallelized. The parallel algorithms are debugged to maintain the same level of performance, and the stages that extract key points and obtain the dominant orientation of each key point are parallelized. After extraction, the descriptor is constructed in parallel using SSE instructions. The GPU version is likewise implemented with parallel processing, using CUDA and based on SIFT. The GPU-parallel descriptor achieves a speedup of up to five times over the CPU-parallel descriptor, but shows lower performance than the CPU version. The CPU version is also about four and a half times faster than the original SIFT while maintaining robust performance.
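
The keypoint-level parallelism mentioned in the abstract (orientation assignment and descriptor construction are independent per keypoint) is the natural place for the OpenMP split. The sketch below shows that host-side pattern with hypothetical stand-in routines; it is not the authors' implementation.

```cuda
// CPU-side parallelization pattern: keypoints are independent, so OpenMP can
// distribute orientation assignment and descriptor construction across cores.
// (Structure, names, and the trivial stand-in routines are illustrative assumptions.)
#include <omp.h>
#include <vector>

struct Keypoint   { float x, y, scale, orientation; };
struct Descriptor { float v[128]; };

// Trivial stand-ins so the sketch is self-contained; the real SIFT computations go here.
static float dominantOrientation(const float*, int, int, const Keypoint& kp) { return kp.orientation; }
static Descriptor buildDescriptor(const float*, int, int, const Keypoint&)   { return Descriptor{}; }

void describeKeypoints(const float* image, int width, int height,
                       std::vector<Keypoint>& kps, std::vector<Descriptor>& out)
{
    out.resize(kps.size());
    // Each iteration touches only its own keypoint and output slot,
    // so the loop parallelizes without any synchronization.
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < static_cast<int>(kps.size()); ++i) {
        kps[i].orientation = dominantOrientation(image, width, height, kps[i]);
        out[i] = buildDescriptor(image, width, height, kps[i]);
    }
}
```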

GPU Implementation of TMIV Decoder for Real-time Playback (실시간 재생을 위한 TMIV 디코더의 GPU 구현)

  • Lee, Sangho;Shin, Hongchang;Lee, Gwangsoon;Seo, Jeongil
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.122-125
    • /
    • 2022
  • The TMIV reference model provides three renderer implementations: the View Weighting Synthesizer (VWS), the Additive Synthesizer (AS), and the Multiplane Image Synthesizer (MPIS). This paper focuses on the VWS and presents the improvement in decoding performance obtained by implementing it on the GPU. GPU implementations of the AS and MPIS are still in progress, and because this work targets TMIV reference model version 8.0.1, it cannot be applied directly to the latest versions 11 or 12; however, the detailed implementation techniques and submodules are reusable enough to be applied to the other renderers and to accelerated implementations of later versions. In the TMIV 8.0.1 decoder, with two atlases of size 1920×4640, the rendering time per frame was reduced from about 4 seconds to an average of 25 ms or less, a speedup of more than about 150 times, and with the addition of a rendering pipeline the decoder reached playback at 30 fps, the rate generally regarded as real time.
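
The view-weighting idea at the heart of the VWS, blending candidate pixels from several warped source views according to per-view weights, maps naturally onto one GPU thread per output pixel. The kernel below is only a conceptual sketch of such a blend; it is not the TMIV 8.0.1 VWS algorithm or the implementation described in the paper.

```cuda
// Conceptual sketch of weighted blending of warped source views into one output
// pixel per thread (illustrates the general view-weighting idea only).
#include <cuda_runtime.h>

__global__ void blendViews(const float* warped,   // [numViews x width x height], luma
                           const float* weights,  // [numViews x width x height]
                           float* output, int width, int height, int numViews)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int pixel = y * width + x;
    float acc = 0.0f, wSum = 0.0f;
    for (int v = 0; v < numViews; ++v) {
        int idx = v * width * height + pixel;
        acc  += weights[idx] * warped[idx];
        wSum += weights[idx];
    }
    output[pixel] = (wSum > 0.0f) ? acc / wSum : 0.0f;   // hole if no view contributes
}
```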

From WiFi to WiMAX: Efficient GPU-based Parameterized Transceiver across Different OFDM Protocols

  • Li, Rongchun;Dou, Yong;Zhou, Jie;Li, Baofeng;Xu, Jinbo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.8
    • /
    • pp.1911-1932
    • /
    • 2013
  • Orthogonal frequency-division multiplexing (OFDM) has become a popular modulation scheme for wireless protocols because of its spectral efficiency and robustness against multipath interference. Although the components of various OFDM protocols are functionally similar, they remain distinct because of the characteristics of their environments. Recently, graphics processing units (GPUs) have been used to accelerate signal processing in the physical layer (PHY) because of their great computational power, high development efficiency, and flexibility. In this paper, we describe the implementation of parameterized baseband modules on GPUs for two different OFDM protocols, namely 802.11a and 802.16. First, we introduce the modules in the modulator/demodulator parts of the transmitter and receiver and analyze the computational complexity of each module. We then describe the integration of the GPU-based baseband modules of the two protocols using the parameterized method. The GPU-based implementations are discussed to explain how the baseband processing is accelerated to achieve real-time throughput. Finally, the performance of each signal processing module is evaluated and analyzed. The experiments show that the GPU-based 802.11a and 802.16 PHYs meet the real-time requirement and demonstrate good bit error ratio (BER) performance. The performance comparison indicates that our GPU-based modules offer better flexibility and throughput than current ones.
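
The FFT at the center of every OFDM demodulator is the part that maps most directly onto the GPU, and cuFFT can transform a whole batch of received symbols in one call. The sketch below assumes an in-place batched C2C transform with illustrative parameter names; the paper's parameterized transceiver covers far more of the PHY than this.

```cuda
// OFDM demodulation FFT stage on the GPU using cuFFT: a batch of received
// time-domain symbols is transformed to the frequency domain in one call.
// (Error checking omitted for brevity; link with -lcufft.)
#include <cufft.h>
#include <cuda_runtime.h>

void demodulateSymbols(cufftComplex* dSymbols,   // device buffer, transformed in place
                       int fftSize,              // e.g. 64 for 802.11a, 256 for 802.16 OFDM
                       int numSymbols)           // number of OFDM symbols in the batch
{
    cufftHandle plan;
    // One 1-D complex-to-complex plan executed over the whole batch of symbols.
    cufftPlan1d(&plan, fftSize, CUFFT_C2C, numSymbols);
    cufftExecC2C(plan, dSymbols, dSymbols, CUFFT_FORWARD);  // in-place FFT per symbol
    cufftDestroy(plan);
    // Subsequent kernels would remove pilots, equalize, and demap the subcarriers.
}
```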