Title/Summary/Keyword: NVIDIA CUDA

Benchmark Results of a Radio Spectrometer Based on Graphics Processing Unit

  • Kim, Jongsoo; Wagner, Jan
    • The Bulletin of The Korean Astronomical Society, v.40 no.2, pp.44.1-44.1, 2015
  • We set up a project to build spectrometers for single-dish observations of the Korean VLBI Network (KVN), a future multi-beam receiver of the ASTE (Atacama Submillimeter Telescope Experiment), and the total power (TP) antennas of the Atacama Large Millimeter/submillimeter Array (ALMA). Traditionally, spectrometers based on ASICs (Application-Specific Integrated Circuits) and FPGAs (Field-Programmable Gate Arrays) have been used in radio astronomy. However, Graphics Processing Unit (GPU) technology has now become a viable option for spectrometers thanks to the rapid improvement of its performance. A high-resolution spectrometer should provide the following functions: a poly-phase filter, data-bit conversion, fast Fourier transform, and complex multiplication. We wrote a program based on CUDA (Compute Unified Device Architecture) for a GPU spectrometer and measured its performance using two NVIDIA GPU cards, a Titan X and a K40m. Even a non-optimized GPU code can process a data stream of around 2 GHz bandwidth, which is sufficient for the KVN spectrometer and promising for the ASTE and ALMA TP spectrometers.

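The processing chain named in the abstract (polyphase filter, bit conversion, FFT, complex multiplication) is not published as code; the following is a minimal sketch of just the batched-FFT and power-accumulation stages using cuFFT, with all sizes and array names assumed for illustration.

    // Minimal sketch of the FFT and power-accumulation stages of a GPU
    // spectrometer; the polyphase filter and bit-unpacking stages are
    // omitted. Build with: nvcc spectrometer.cu -lcufft
    #include <cufft.h>
    #include <cuda_runtime.h>

    // Accumulate |X(f)|^2 per channel across a batch of spectra.
    __global__ void accumulatePower(const cufftComplex* spec, float* power,
                                    int nchan, int nspectra) {
        int ch = blockIdx.x * blockDim.x + threadIdx.x;
        if (ch >= nchan) return;
        float acc = 0.0f;
        for (int s = 0; s < nspectra; ++s) {
            cufftComplex v = spec[s * nchan + ch];
            acc += v.x * v.x + v.y * v.y;
        }
        power[ch] += acc;
    }

    int main() {
        const int nsamp = 8192, nspectra = 1024;   // illustrative sizes
        const int nchan = nsamp / 2 + 1;           // R2C output length
        float* d_time;  cufftComplex* d_spec;  float* d_power;
        cudaMalloc(&d_time,  sizeof(float) * nsamp * nspectra);
        cudaMalloc(&d_spec,  sizeof(cufftComplex) * nchan * nspectra);
        cudaMalloc(&d_power, sizeof(float) * nchan);
        cudaMemset(d_time, 0, sizeof(float) * nsamp * nspectra);  // stand-in data
        cudaMemset(d_power, 0, sizeof(float) * nchan);

        cufftHandle plan;                          // batched real-to-complex FFT
        cufftPlan1d(&plan, nsamp, CUFFT_R2C, nspectra);
        cufftExecR2C(plan, d_time, d_spec);
        accumulatePower<<<(nchan + 255) / 256, 256>>>(d_spec, d_power,
                                                      nchan, nspectra);
        cudaDeviceSynchronize();

        cufftDestroy(plan);
        cudaFree(d_time); cudaFree(d_spec); cudaFree(d_power);
        return 0;
    }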

Fast Double Random Phase Encoding by Using Graphics Processing Unit (GPU 컴퓨팅에 의한 고속 Double Random Phase Encoding)

  • Saifullah, Saifullah; Moon, In-Kyu
    • Proceedings of the Korea Multimedia Society Conference, 2012.05a, pp.343-344, 2012
  • With the growth of sensitive data that must be transmitted and stored securely, encryption techniques have become widespread. Encoding performance depends largely on computation time, so a system with a shorter computation time is preferable. Double Random Phase Encoding (DRPE) is an algorithm with many sub-functions that consumes considerable time when executed serially; its computation time can be significantly reduced by implementing the important functions in parallel on a Graphics Processing Unit (GPU). Computing the convolution via the fast Fourier transform is the most demanding part of DRPE, and the paper shows that performing this portion on the GPU reduces the execution time of the process by a substantial amount, with a MATLAB implementation used as the baseline for performance analysis. An NVIDIA GeForce 310 graphics card is used with CUDA C as the programming language.

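DRPE multiplies the image by one random phase mask in the spatial domain and a second in the Fourier domain, with the transforms computed by FFT; that structure is what the GPU accelerates. A minimal cuFFT sketch under those assumptions (mask generation, normalization, and image I/O omitted; names are illustrative, not the paper's code):

    // Sketch of the two phase-mask stages of Double Random Phase Encoding
    // on the GPU. Build with: nvcc drpe.cu -lcufft
    #include <cufft.h>
    #include <cuda_runtime.h>

    // Multiply each pixel by exp(i*2*pi*phase), the core DRPE operation.
    __global__ void applyPhaseMask(cufftComplex* img, const float* phase, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float s, c;
        sincosf(2.0f * 3.14159265f * phase[i], &s, &c);
        cufftComplex v = img[i];
        img[i].x = v.x * c - v.y * s;   // complex multiply by (c + i*s)
        img[i].y = v.x * s + v.y * c;
    }

    void drpeEncrypt(cufftComplex* d_img, const float* d_mask1,
                     const float* d_mask2, int w, int h) {
        int n = w * h, threads = 256, blocks = (n + threads - 1) / threads;
        cufftHandle plan;
        cufftPlan2d(&plan, h, w, CUFFT_C2C);

        applyPhaseMask<<<blocks, threads>>>(d_img, d_mask1, n);  // spatial mask
        cufftExecC2C(plan, d_img, d_img, CUFFT_FORWARD);         // to Fourier domain
        applyPhaseMask<<<blocks, threads>>>(d_img, d_mask2, n);  // Fourier mask
        cufftExecC2C(plan, d_img, d_img, CUFFT_INVERSE);         // back to space
        // Note: cuFFT's inverse transform is unnormalized; a real
        // implementation would also scale the result by 1/(w*h).
        cufftDestroy(plan);
    }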

The GPU-based Parallel Processing Algorithm for Fast Inspection of Semiconductor Wafers (반도체 웨이퍼 고속 검사를 위한 GPU 기반 병렬처리 알고리즘)

  • Park, Youngdae; Kim, Joon Seek; Joo, Hyonam
    • Journal of Institute of Control, Robotics and Systems, v.19 no.12, pp.1072-1080, 2013
  • Today, many vision inspection techniques are used in industrial production. In the semiconductor industry in particular, the vision inspection system for wafers is critically important, and inspection techniques for semiconductor wafer production must deliver both high precision and high speed. To achieve these objectives, parallel processing of the inspection algorithm is essential. In this paper, we propose a GPU (Graphics Processing Unit)-based parallel processing algorithm for the fast inspection of semiconductor wafers. The proposed algorithm is implemented on NVIDIA GPU boards. Its defect detection performance is the same as that of a single-CPU implementation, but it runs about 210 times faster than the single-CPU version.
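
The abstract does not describe the inspection algorithm itself, so the kernel below is only a hypothetical per-pixel die-to-reference comparison illustrating the embarrassingly parallel structure such inspection exploits; every name and the threshold test are assumptions.

    // Hypothetical per-pixel comparison of a test image against a golden
    // reference; one thread per pixel, no inter-thread dependencies.
    __global__ void markDefects(const unsigned char* test,
                                const unsigned char* golden,
                                unsigned char* defectMap,
                                int width, int height, int threshold) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        int idx = y * width + x;
        int diff = abs((int)test[idx] - (int)golden[idx]);
        defectMap[idx] = (diff > threshold) ? 255 : 0;  // flag defect pixels
    }

    // Launch example:
    //   dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    //   markDefects<<<grid, block>>>(d_test, d_golden, d_map, w, h, 30);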

Computationally Efficient Implementation of a Hamming Code Decoder Using Graphics Processing Unit

  • Islam, Md Shohidul; Kim, Cheol-Hong; Kim, Jong-Myon
    • Journal of Communications and Networks, v.17 no.2, pp.198-202, 2015
  • This paper presents a computationally efficient implementation of a Hamming code decoder on a graphics processing unit (GPU) to support real-time software-defined radio, a software alternative for realizing wireless communication. The Hamming code algorithm is challenging to parallelize effectively on a GPU because it works on sparsely located data items with several conditional statements, leading to non-coalesced, long-latency global memory accesses and heavy thread divergence. To address these issues, we propose an optimized implementation of the Hamming code on the GPU that exploits the higher parallelism inherent in the algorithm. Experimental results using a compute unified device architecture (CUDA)-enabled NVIDIA GeForce GTX 560 with 335 cores revealed that the proposed approach achieved a 99x speedup over the equivalent CPU-based implementation.
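
One common way to attack the thread-divergence problem the authors describe is to decode one codeword per thread using predicated (branch-free) bit manipulation. The sketch below shows that generic technique for Hamming(7,4); it is not the paper's implementation, and the bit layout is an assumption.

    // Branch-free Hamming(7,4) syndrome decoder, one codeword per thread.
    // Assumed layout: bit k of code[i] holds codeword position k+1, i.e.
    // [p1 p2 d1 p3 d2 d3 d4] in bits 0..6.
    __global__ void hammingDecode74(const unsigned char* code,
                                    unsigned char* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        unsigned int c = code[i];
        unsigned int b1 = (c >> 0) & 1, b2 = (c >> 1) & 1, b3 = (c >> 2) & 1;
        unsigned int b4 = (c >> 3) & 1, b5 = (c >> 4) & 1, b6 = (c >> 5) & 1;
        unsigned int b7 = (c >> 6) & 1;
        // The syndrome is the 1-based position of a single-bit error (0 = none).
        unsigned int s1 = b1 ^ b3 ^ b5 ^ b7;        // checks positions 1,3,5,7
        unsigned int s2 = b2 ^ b3 ^ b6 ^ b7;        // checks positions 2,3,6,7
        unsigned int s4 = b4 ^ b5 ^ b6 ^ b7;        // checks positions 4,5,6,7
        unsigned int pos = s1 + 2 * s2 + 4 * s4;
        unsigned int flip = (pos != 0);             // 1 if an error was found
        c ^= flip << (pos - flip);                  // flip bit (pos-1) when pos>0
        // Extract data bits d1..d4 from positions 3, 5, 6, 7.
        data[i] = (unsigned char)(((c >> 2) & 1) | (((c >> 4) & 1) << 1) |
                                  (((c >> 5) & 1) << 2) | (((c >> 6) & 1) << 3));
    }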

Performance Comparison of Parallel Programming Frameworks in Digital Image Transformation

  • Shin, Woochang
    • International Journal of Internet, Broadcasting and Communication, v.11 no.3, pp.1-7, 2019
  • Previously, parallel computing was mainly used in areas requiring high computing performance, but multicore CPUs and GPUs have since become widespread, and the advantages of parallel programming can now be obtained even in a PC environment. Various parallel programming frameworks for multicore CPUs, such as OpenMP and PPL, have been announced, and NVIDIA and AMD have developed parallel programming platforms and APIs that let developers exploit the multicore GPUs on their graphics cards. In this paper, we develop digital image transformation programs that run on each of the major parallel programming frameworks and measure their execution times. We analyze the characteristics of each framework by comparing the execution times, and we present a constant K indicating the ratio of program execution times between different parallel computing environments. Using K, a rough execution time can be predicted without implementing the parallel program.
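
The abstract does not name the transformation that was benchmarked; as an illustration of the kind of per-pixel work being timed across frameworks, here is a hypothetical CUDA nearest-neighbor rotation kernel. On the CPU side, the equivalent doubly nested loop over pixels would be parallelized with OpenMP or PPL for the cross-framework comparison.

    // Hypothetical image transformation kernel: rotate a grayscale image
    // about its center using nearest-neighbor sampling, one thread per
    // destination pixel.
    __global__ void rotateNearest(const unsigned char* src, unsigned char* dst,
                                  int w, int h, float cosA, float sinA) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;
        float cx = w * 0.5f, cy = h * 0.5f;
        int sx = (int)( cosA * (x - cx) + sinA * (y - cy) + cx);
        int sy = (int)(-sinA * (x - cx) + cosA * (y - cy) + cy);
        dst[y * w + x] = (sx >= 0 && sx < w && sy >= 0 && sy < h)
                             ? src[sy * w + sx] : 0;  // black outside source
    }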

Efficient Parallel Bilateral Filter using GPGPU (GPGPU 를 이용한 양 방향성 필터의 병렬 구현 및 성능 평가)

  • Chang, Ki Joon; Ro, Won Woo
    • Proceedings of the Korea Information Processing Society Conference, 2011.11a, pp.369-372, 2011
  • The bilateral filter performs well at image surface smoothing and noise removal, but its inherent computational complexity makes it slow. In this paper, we therefore implement an efficient bilateral filter, modified to suit the highly parallel Graphics Processing Unit (GPU), on a GTX 285 GPU using NVIDIA's CUDA. The parallelization is based on approximating the filter over an adjacent, contiguous neighborhood instead of referencing the entire image; a shared-memory buffer with low memory usage, fast access, and minimized conflicts; and warp-aware coalesced memory access. As a result, we obtained a bilateral filter that runs roughly 34 to 76 times faster than the equivalent sequential algorithm while maintaining a PSNR of around 30 dB.
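
The computational core being parallelized is the windowed bilateral filter below; this is a plain global-memory sketch with illustrative parameter names, omitting the paper's shared-memory staging and warp-aware coalesced access.

    // Windowed bilateral filter, one thread per output pixel: each output
    // is a normalized sum of neighbors weighted by both spatial distance
    // and intensity (range) difference.
    __global__ void bilateral(const float* src, float* dst, int w, int h,
                              int radius, float sigmaS, float sigmaR) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;
        float center = src[y * w + x], sum = 0.0f, norm = 0.0f;
        for (int dy = -radius; dy <= radius; ++dy) {
            for (int dx = -radius; dx <= radius; ++dx) {
                int nx = min(max(x + dx, 0), w - 1);  // clamp at borders
                int ny = min(max(y + dy, 0), h - 1);
                float v  = src[ny * w + nx];
                float ds = (float)(dx * dx + dy * dy);      // spatial term
                float dr = (v - center) * (v - center);     // range term
                float wgt = __expf(-ds / (2.0f * sigmaS * sigmaS)
                                   - dr / (2.0f * sigmaR * sigmaR));
                sum += wgt * v;  norm += wgt;
            }
        }
        dst[y * w + x] = sum / norm;
    }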

Evaluating the groundwater prediction using LSTM model (LSTM 모형을 이용한 지하수위 예측 평가)

  • Park, Changhui; Chung, Il-Moon
    • Journal of Korea Water Resources Association, v.53 no.4, pp.273-283, 2020
  • Quantitative forecasting of groundwater levels is very important for assessing groundwater variation and vulnerability, and various time series analysis and machine learning techniques have been used for this purpose. In this study, we developed a prediction model based on LSTM (Long Short-Term Memory), one of the artificial neural network (ANN) algorithms, for predicting the daily groundwater level of 11 groundwater wells in Hankyung-myeon, Jeju Island. In general, the groundwater level in Jeju Island is highly autocorrelated with tides and reflects the effects of precipitation. To construct input and output variables that capture these characteristics of the data, precipitation data for the corresponding period were added to the groundwater level data. The LSTM neural network was trained using the initial 365 days of data, covering all four seasons, and the remaining data were used for verification to evaluate the fitness of the predictive model. The model was developed using Keras, a Python-based deep learning framework, with the NVIDIA CUDA architecture used to accelerate training. Learning and verifying the groundwater level variation with the LSTM neural network yielded a coefficient of determination (R2) of 0.98 on average, indicating that the predictive model is very accurate.

An Accelerated Approach to Dose Distribution Calculation in Inverse Treatment Planning for Brachytherapy (근접 치료에서 역방향 치료 계획의 선량분포 계산 가속화 방법)

  • Byungdu Jo
    • Journal of the Korean Society of Radiology, v.17 no.5, pp.633-640, 2023
  • With the recent development of static and dynamic modulated brachytherapy methods, which use radiation shielding to modulate the delivered dose distribution, the number of parameters and the amount of data required for dose calculation in inverse treatment planning and in treatment plan optimization algorithms for directional-beam intensity-modulated brachytherapy are increasing. Intensity-modulated brachytherapy enables accurate dose delivery, but the added parameters and data lengthen the time required for dose calculation. In this study, a GPU-based, CUDA-accelerated dose calculation algorithm was built to counter this increase in elapsed time. The calculation was accelerated by parallelizing both the computation of the system matrix of the volume of interest and the dose calculation itself. All algorithms were run in the same computing environment, with an Intel (3.7 GHz, 6-core) CPU and a single NVIDIA GTX 1080 Ti graphics card, and only the dose calculation time was measured, excluding the additional time required for loading data from disk and for preprocessing. The accelerated algorithm reduced the dose calculation time by a factor of about 30 compared with the CPU-only calculation. This acceleration can be expected to speed up treatment planning when new plans must be created to account for daily variations in applicator position, as in adaptive radiotherapy, or when dose calculation must track changing parameters, as in dynamically modulated brachytherapy.
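
The parallelization described, in which each voxel's dose is an independent weighted sum over source dwell positions, can be sketched as a one-thread-per-voxel kernel. The matrix layout, names, and the column-major choice (made here so that consecutive threads read consecutive addresses) are assumptions, not the paper's code:

    // Parallel dose accumulation: one thread per voxel computes
    // dose[v] = sum_j A[j][v] * t[j], where A is the precomputed system
    // matrix (dose rate from dwell position j to voxel v) and t holds the
    // dwell times. A is stored column-major so that threads in a warp read
    // consecutive addresses (coalesced access).
    __global__ void accumulateDose(const float* A,        // nDwell x nVoxels
                                   const float* dwellTime,
                                   float* dose, int nVoxels, int nDwell) {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v >= nVoxels) return;
        float d = 0.0f;
        for (int j = 0; j < nDwell; ++j)
            d += A[(size_t)j * nVoxels + v] * dwellTime[j];
        dose[v] = d;
    }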

Parallel Design and Implementation of Shot Boundary Detection Algorithm (샷 경계 탐지 알고리즘의 병렬 설계와 구현)

  • Lee, Joon-Goo; Kim, SeungHyun; You, Byoung-Moon; Hwang, DooSung
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.2, pp.76-84, 2014
  • As the number of high-definition videos increases, parallel processing approaches are necessary to handle large volumes of video data. When a video processing method consists of thousands of simple operations, GPU-based parallel processing is preferred to CPU-based parallel processing because it reduces the time and space complexity of the computation. This paper studies the parallel design and implementation of a shot-boundary detection algorithm. The proposed algorithm uses pixel brightness comparisons and global histogram data among the blocks of frames, and the computation of these data exhibits high parallelism. To exploit this parallelism fully, the pixel brightness and histogram computations are designed in parallel and implemented on an NVIDIA GPU. The GPU-based shot detection method is tested with 10 videos from the National Archives of Korea. In experiments, the detection rate is similar to that of the CPU-based algorithm, but the computation is about 10 times faster.
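
Of the two data-parallel computations mentioned, the histogram is the classic GPU pattern: each thread block accumulates a private shared-memory histogram and merges it into global memory once, limiting atomic contention. A generic sketch with an assumed 256-bin brightness histogram (not the paper's code):

    // Per-frame 256-bin brightness histogram. The caller must zero `hist`
    // before launch; a grid-stride loop covers frames of any size.
    __global__ void frameHistogram(const unsigned char* frame, int nPixels,
                                   unsigned int* hist /* 256 bins */) {
        __shared__ unsigned int local[256];
        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            local[i] = 0;
        __syncthreads();
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < nPixels; i += gridDim.x * blockDim.x)
            atomicAdd(&local[frame[i]], 1u);       // cheap shared-memory atomic
        __syncthreads();
        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            atomicAdd(&hist[i], local[i]);         // one global merge per block
    }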

Implementation of Viterbi Decoder on Massively Parallel GPU for DVB-T Receiver (DVB-T 수신기를 위한 대규모 병렬처리 GPU 기반의 비터비 복호기 구현)

  • Lee, KyuHyung; Lee, Ho-Kyoung; Heo, Seo Weon
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.9, pp.3-11, 2013
  • Recently, plenty of research has been conducted on using the massively parallel processing of GPUs to implement communication systems. In this paper, we reduce software simulation time by applying a GPU with a sliding-block method to the Viterbi decoder in the DVB-T system, one of the European DTV standards. First, we implement the DVB-T system on a CPU and measure the time it takes to process one OFDM symbol. Second, we implement the Viterbi decoder in software on NVIDIA's massively parallel GPU. In our work, stream processing is applied to reduce the overhead of data transfer between the CPU and GPU, coalescing is used to lower the global memory access time, and the data structures are designed to maximize shared memory usage. Consequently, the proposed Viterbi decoder is approximately 11 times faster in 2K mode and 60 times faster in 8K mode.
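
The recursion at the heart of any Viterbi decoder is the add-compare-select (ACS) step, which maps naturally to one thread per trellis state. Below is a generic sketch of one ACS stage for a rate-1/2 convolutional code; branch-metric computation, traceback, and the paper's sliding-block overlap handling are omitted, and all names and the metric layout are assumptions.

    // One add-compare-select (ACS) trellis stage, one thread per state.
    // For a shift-register code where nextState = ((cur << 1) | bit) mod
    // nStates, destination state s is reached from predecessors
    // p0 = s >> 1 and p1 = (s >> 1) + nStates/2, with input bit s & 1.
    __global__ void acsStage(const float* pathMetric,    // metrics at time t
                             const float* branchMetric,  // 2 per state at t+1
                             float* nextMetric,          // metrics at time t+1
                             unsigned char* decision,    // survivor choice per state
                             int nStates) {
        int s = blockIdx.x * blockDim.x + threadIdx.x;   // destination state
        if (s >= nStates) return;
        int p0 = s >> 1, p1 = p0 + (nStates >> 1);
        float m0 = pathMetric[p0] + branchMetric[2 * s];
        float m1 = pathMetric[p1] + branchMetric[2 * s + 1];
        decision[s]   = (m1 < m0);                       // which predecessor won
        nextMetric[s] = (m1 < m0) ? m1 : m0;             // keep the smaller metric
    }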