• Title/Summary/Keyword: gpu


Analyzing Fine-Grained Resource Utilization for Efficient GPU Workload Allocation (GPU 작업 배치의 효율화를 위한 자원 이용률 상세 분석)

  • Park, Yunjoo; Shin, Donghee; Cho, Kyungwoon; Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.1 / pp.111-116 / 2019
  • Recently, GPUs have expanded their application domain from graphics processing to various kinds of parallel workloads. However, current GPU systems focus on maximizing each workload's parallelism through simplified control rather than considering the characteristics of individual workloads. This paper classifies the resource usage characteristics of GPU workloads into computing-bound, memory-bound, and dependency-latency-bound, and quantifies the fine-grained bottleneck of each for efficient workload allocation. For example, we identify the exact bottleneck resource, such as the single-precision function unit, the double-precision function unit, or the special function unit, even among workloads that are all computing-bound. Our analysis implies that workloads can be allocated together if their fine-grained bottleneck resources differ, even when all of them are computing-bound, which can eventually contribute to efficient workload allocation on GPUs.
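
A minimal sketch of what fine-grained "computing-bound" can mean in practice: the three toy kernels below are all computing-bound, yet each saturates a different function unit (single-precision, double-precision, or the special function unit). These microbenchmarks are an illustration only, not the paper's workloads.

```cuda
// Each kernel keeps one function unit busy in a long dependent chain.
__global__ void stress_fp32(float *out, int iters) {      // single-precision unit
    float x = threadIdx.x;
    for (int i = 0; i < iters; ++i) x = x * 1.000001f + 0.5f;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}
__global__ void stress_fp64(double *out, int iters) {     // double-precision unit
    double x = threadIdx.x;
    for (int i = 0; i < iters; ++i) x = x * 1.000001 + 0.5;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}
__global__ void stress_sfu(float *out, int iters) {       // special function unit
    float x = threadIdx.x + 1.0f;
    for (int i = 0; i < iters; ++i) x = __sinf(x) + __expf(-x);
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}
```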

Acceleration of ECC Computation for Robust Massive Data Reception under GPU-based Embedded Systems (GPU 기반 임베디드 시스템에서 대용량 데이터의 안정적 수신을 위한 ECC 연산의 가속화)

  • Kwon, Jisu; Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.7 / pp.956-962 / 2020
  • Recently, as the size of the data used in embedded systems increases, the need for ECC decoding to receive massive data robustly has been emphasized. In this paper, we propose a method to accelerate the computation that derives syndrome vectors when ECC decoding is performed with a Hamming code on an embedded system with a built-in GPU. The proposed acceleration method expresses the decoding operation as a matrix-vector multiplication using the CSR format, one of the data structures for representing sparse matrices, and executes it in parallel in a CUDA kernel on the GPU. We evaluated the proposed method on a target embedded board with a GPU, and the results show that the execution time is reduced when the ECC decoding operation is accelerated on the GPU rather than run on the CPU alone.
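
The abstract describes syndrome computation as a CSR sparse matrix-vector product run in a CUDA kernel. The sketch below illustrates that idea under stated assumptions: one thread per parity-check row, and, over GF(2) as with Hamming codes, the multiply-add reducing to XOR of selected codeword bits. Names and layout are illustrative, not the paper's code.

```cuda
__global__ void csr_syndrome(const int *row_ptr, const int *col_idx,
                             const unsigned char *codeword,
                             unsigned char *syndrome, int num_rows) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= num_rows) return;
    unsigned char acc = 0;
    // Entries of the parity-check matrix H are implicitly 1 in CSR over GF(2),
    // so each stored column index just selects a codeword bit to XOR in.
    for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
        acc ^= codeword[col_idx[j]];
    syndrome[row] = acc;  // nonzero syndrome bits flag detected errors
}
```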

Design and Implementation of High-Performance Cryptanalysis System Based on GPUDirect RDMA (GPUDirect RDMA 기반의 고성능 암호 분석 시스템 설계 및 구현)

  • Lee, Seokmin; Shin, Youngjoo
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.6 / pp.1127-1137 / 2022
  • Cryptographic analysis and decryption technology utilizing the parallel computation of GPUs has been studied with the aim of shortening the computation time of cryptanalysis systems. Such studies focus on optimizing code to speed up cryptanalytic operations on a single GPU, or simply on increasing the number of GPUs to enhance parallelism. However, using a large number of GPUs without optimizing data transmission causes longer transmission latency than using a single GPU and increases the overall computation time of the cryptanalysis system. In this paper, we investigate GPUDirect RDMA and related technologies used for high-performance data processing in deep learning and HPC research on GPU clusters. We then present a method of designing a high-performance cryptanalysis system using these technologies. Furthermore, based on the suggested system topology, we present an implementation of the cryptanalysis system using password cracking and GPU reduction. Finally, we report performance evaluation results showing the effect of applying the high-performance technology to the implemented system, along with the expected benefits of the proposed design.
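
For readers unfamiliar with GPUDirect RDMA, its core mechanism is registering GPU memory directly with the RDMA NIC so incoming data lands in VRAM without a host bounce buffer. Below is a hedged sketch of that general pattern, assuming the nvidia-peermem kernel module and a verbs protection domain set up elsewhere; it is not the paper's implementation, and error checks are omitted.

```cuda
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t bytes,
                                   void **dev_ptr_out) {
    void *dev_ptr = NULL;
    cudaMalloc(&dev_ptr, bytes);           // buffer lives in GPU memory
    // With GPUDirect RDMA enabled, ibv_reg_mr accepts a device pointer, so
    // the NIC can DMA incoming data straight into VRAM.
    struct ibv_mr *mr = ibv_reg_mr(pd, dev_ptr, bytes,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    *dev_ptr_out = dev_ptr;
    return mr;
}
```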

Development of the sediment transport model using GPU arithmetic (GPU 연산을 활용한 유사이송 예측모형 개발)

  • Noh, Junsu; Son, Sangyoung
    • Journal of Korea Water Resources Association / v.56 no.7 / pp.431-438 / 2023
  • Many shorelines are facing beach erosion. Considering climate change and the growth of coastal populations, the erosion problem could accelerate. To address this issue, developing a sediment transport model that rapidly predicts terrain change is crucial. In this study, a sediment transport model based on GPU parallel computation was introduced, with the expectation that it would simulate terrain change well at a higher computing speed than a CPU-based model. We also investigated the model's performance and the computational efficiency of the GPU. We applied several dam-break cases to verify the model and found that the simulated results were close to the observations. The computational efficiency of the GPU was defined by comparison against the operation time of the CPU-based model, and the results show that the GPU-based model is more efficient than the CPU-based model.
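
The abstract does not give the model's governing equations, so the following is only a generic illustration of why such models map well to GPUs: an explicit per-cell stencil update (here a 1-D Exner-type bed change, an assumption on our part) assigns one thread per grid cell, and all cells advance in parallel each time step.

```cuda
// dz/dt = -(1/(1-porosity)) * dq_s/dx, discretized with a central difference.
// Variable names and the scheme are illustrative, not the paper's model.
__global__ void bed_update(const float *qs,   // sediment flux per cell
                           float *z,          // bed elevation, updated in place
                           int n, float dx, float dt, float porosity) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n - 1) return;  // skip boundary cells
    float dqdx = (qs[i + 1] - qs[i - 1]) / (2.0f * dx);  // central difference
    z[i] -= dt / (1.0f - porosity) * dqdx;
}
```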

Acceleration of Mesh Denoising Using GPU Parallel Processing (GPU의 병렬 처리 기능을 이용한 메쉬 평탄화 가속 방법)

  • Lee, Sang-Gil; Shin, Byeong-Seok
    • Journal of Korea Game Society / v.9 no.2 / pp.135-142 / 2009
  • Mesh denoising removes noise by applying various filters. However, such methods usually take much time because filtering is performed on the CPU. Since the GPU is specialized for floating-point operations and faster than the CPU, real-time processing of complex operations becomes possible. Mesh denoising is especially well suited to GPU parallel processing because it repeats the same operations over all vertices or triangles. In this paper, we propose a mesh denoising algorithm based on bilateral filtering that uses GPU parallel processing to reduce processing time. For each vertex, it finds the neighboring triangles needed to apply the bilateral filter and computes the vertex's normal vector. It then performs bilateral filtering to estimate the new vertex position and update its normal vector.
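
A rough sketch of the per-vertex bilateral step the abstract describes, with one thread per vertex: each thread gathers its neighbors, weights them by spatial closeness and by offset along the vertex normal, and moves the vertex along that normal. Neighbor-list layout, sigma parameters, and helper names are assumptions, not the paper's implementation.

```cuda
__global__ void bilateral_denoise(const float3 *pos, const float3 *normal,
                                  const int *nbr_ptr, const int *nbr_idx,
                                  float3 *new_pos, int num_verts,
                                  float sigma_c, float sigma_s) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= num_verts) return;
    float sum = 0.0f, norm = 0.0f;
    for (int k = nbr_ptr[v]; k < nbr_ptr[v + 1]; ++k) {
        int u = nbr_idx[k];
        float3 d = make_float3(pos[u].x - pos[v].x, pos[u].y - pos[v].y,
                               pos[u].z - pos[v].z);
        float t = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);   // spatial distance
        float h = d.x * normal[v].x + d.y * normal[v].y       // offset along
                + d.z * normal[v].z;                          // the normal
        float wc = expf(-t * t / (2.0f * sigma_c * sigma_c)); // closeness weight
        float ws = expf(-h * h / (2.0f * sigma_s * sigma_s)); // similarity weight
        sum  += wc * ws * h;
        norm += wc * ws;
    }
    float off = (norm > 0.0f) ? (sum / norm) : 0.0f;
    new_pos[v] = make_float3(pos[v].x + off * normal[v].x,    // move vertex
                             pos[v].y + off * normal[v].y,    // along its
                             pos[v].z + off * normal[v].z);   // normal
}
```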


Optimization of Lightweight Encryption Algorithm (LEA) using Threads and Shared Memory of GPU (GPU의 스레드와 공유메모리를 이용한 LEA 최적화 방안)

  • Park, Moo Kyu; Yoon, Ji Won
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.4 / pp.719-726 / 2015
  • As big data and cloud security technologies become popular, much research has recently been conducted on faster and lighter encryption. As a result, the National Security Research Institute developed LEA, a lightweight and fast block cipher. To date, there have been various studies on speeding up the lightweight encryption algorithm (LEA) with GPUs rather than conventional CPUs. However, there is little guidance on how to use the GPU efficiently for LEA. Therefore, we introduce a guideline which explains how to design and implement an optimized LEA on the GPU.
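
As a concrete illustration of the kind of optimization discussed, the sketch below encrypts one LEA-128 block per thread and stages the round keys in shared memory so every thread in the block reads them from on-chip storage. The round function follows the published LEA specification; the kernel structure itself is an assumption, not the paper's guideline.

```cuda
#define LEA_ROUNDS 24
__device__ __forceinline__ unsigned int rol(unsigned int x, int s) {
    return (x << s) | (x >> (32 - s));
}
__global__ void lea128_encrypt(const unsigned int *pt, unsigned int *ct,
                               const unsigned int *rk, int num_blocks) {
    __shared__ unsigned int srk[LEA_ROUNDS * 6];
    // Cooperatively copy all round keys into shared memory once per block.
    for (int i = threadIdx.x; i < LEA_ROUNDS * 6; i += blockDim.x)
        srk[i] = rk[i];
    __syncthreads();
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= num_blocks) return;
    unsigned int x0 = pt[4*b], x1 = pt[4*b+1], x2 = pt[4*b+2], x3 = pt[4*b+3];
    for (int r = 0; r < LEA_ROUNDS; ++r) {
        const unsigned int *k = &srk[6 * r];   // 6 round-key words per round
        unsigned int t0 = rol((x0 ^ k[0]) + (x1 ^ k[1]), 9);
        unsigned int t1 = rol((x1 ^ k[2]) + (x2 ^ k[3]), 27);  // ROR 5
        unsigned int t2 = rol((x2 ^ k[4]) + (x3 ^ k[5]), 29);  // ROR 3
        x3 = x0; x0 = t0; x1 = t1; x2 = t2;
    }
    ct[4*b] = x0; ct[4*b+1] = x1; ct[4*b+2] = x2; ct[4*b+3] = x3;
}
```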

A Performance Enhancement of a Naval Multi-Function Radar Signal Processor (GPU를 이용한 함정용 다기능레이다 신호처리기 성능 개선 연구)

  • Kwon, Se-Woong; Hong, Sung-Min; Ryu, Seong-Hyun; Jung, Chae-Hyun; Sohn, Sung-Hwan; Lee, Ki-Won; Kang, Yeon-Duk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.2 / pp.141-147 / 2020
  • We studied a GPU-based signal processor for a naval multi-function radar. We implemented the processing software on both a DSP and a GPU, and compared their computation performance and power consumption. As a result, computation performance improved by 1.2 to 4.1 times compared with the DSP implementation. The results indicate that a GPU can replace a DSP-based signal processor as a common radar processor, even for a naval multi-function radar.
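
The abstract does not detail the processing chain, but a typical radar stage such as batched FFTs (e.g., for pulse compression) ports to the GPU almost directly through cuFFT; the fragment below is purely illustrative, and its names and sizes are assumptions.

```cuda
#include <cufft.h>

// Transform num_pulses pulses of fft_len complex samples in one batched call.
void batched_fft(cufftComplex *d_pulses, int fft_len, int num_pulses) {
    cufftHandle plan;
    cufftPlan1d(&plan, fft_len, CUFFT_C2C, num_pulses);  // one plan, all pulses
    cufftExecC2C(plan, d_pulses, d_pulses, CUFFT_FORWARD);  // in-place forward
    cufftDestroy(plan);
}
```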

The study on the Efficient methodology to apply the GPU for military information system improvement (국방정보시스템 성능향상을 위한 효율적인 GPU적용방안 연구)

  • Kauh, Janghyuk; Lee, Dongho
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.1 / pp.27-35 / 2015
  • As the number of GPU (Graphics Processing Unit) cores increases, studies on high-performance computing platforms using GPUs have recently been conducted actively. This trend has led to the development of GPGPU (General-Purpose GPU) computing and the CUDA (Compute Unified Device Architecture) framework. In this paper, we explain the many benefits of GPU-based systems and propose the ICIDF (Identify Compute-Intensive Data set and Function) methodology for applying GPU technology to legacy military information systems to improve their performance. To demonstrate the efficiency of this methodology, we applied it to a CPU-based AES program obtained from a web site. Simply changing the data structure improved the performance of the AES program. As a result, the performance of the GPU-based AES program improved by up to 10 times. Depending on the developer's skill, additional performance improvement can be expected. A remaining concern is heat, but this problem has been much alleviated by advances in cooling technology.
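
The abstract notes that changing the data structure alone improved performance. A common example of such a change on GPUs is moving from an array-of-structures (AoS) to a structure-of-arrays (SoA) layout so that adjacent threads access adjacent words (coalesced memory access). The toy kernels below, with hypothetical names, contrast the two layouts; they are an illustration, not the paper's AES code.

```cuda
struct StateAoS { unsigned int w0, w1, w2, w3; };  // one 128-bit state per element

__global__ void touch_aos(const StateAoS *in, unsigned int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i].w0;  // threads read words 16 bytes apart: strided
}

// SoA: each field in its own array, so thread i reads w0[i] next to w0[i+1].
__global__ void touch_soa(const unsigned int *w0, unsigned int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = w0[i];     // fully coalesced 4-byte accesses
}
```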

Pedestrian Inference Convolution Neural Network Using GP-GPU (GP-GPU를 이용한 보행자 추론 CNN)

  • Jeong, Junmo
    • Journal of IKEEE / v.21 no.3 / pp.244-247 / 2017
  • In this paper, we implemented a convolutional neural network on a GP-GPU. After defining the network structure, the CNN performed inference on the GP-GPU with 256 threads, developed in our previous study, using the weights obtained from training. Training used an Intel i7-4470 CPU and MATLAB, and the Daimler Pedestrian Dataset was used. The GP-GPU is controlled by the PC over PCIe and runs on an FPGA. We assigned threads according to the depth and size of each layer. For the pooling layer, we used overlapping pooling to perform additional operations on the horizontal and vertical regions. One inference takes about 12 ms.
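
Overlapping pooling, as mentioned in the abstract, uses a pooling window larger than its stride (e.g., a 3x3 window with stride 2), so adjacent windows share rows and columns. A minimal sketch with one thread per output element follows; dimensions and names are assumptions, not the paper's design.

```cuda
__global__ void overlap_maxpool(const float *in, float *out,
                                int in_w, int in_h, int out_w, int out_h,
                                int win, int stride) {
    int ox = blockIdx.x * blockDim.x + threadIdx.x;
    int oy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ox >= out_w || oy >= out_h) return;
    float best = -1e30f;
    // When win > stride, neighboring output elements scan overlapping regions.
    for (int dy = 0; dy < win; ++dy)
        for (int dx = 0; dx < win; ++dx) {
            int x = ox * stride + dx, y = oy * stride + dy;
            if (x < in_w && y < in_h)
                best = fmaxf(best, in[y * in_w + x]);
        }
    out[oy * out_w + ox] = best;
}
```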

Optimal Implementation of Lightweight Block Cipher PIPO on CUDA GPGPU (CUDA GPGPU 상에서 경량 블록 암호 PIPO의 최적 구현)

  • Kim, Hyun-Jun; Eum, Si-Woo; Seo, Hwa-Jeong
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.6 / pp.1035-1043 / 2022
  • With the spread of the Internet of Things (IoT), cloud computing, and big data, the need for high-speed encryption in applications is emerging. GPU optimization can be used to validate, in a reasonable time, cryptanalytic results or reduced-round variants that were obtained theoretically. In this paper, the PIPO lightweight cipher, which has been implemented in various environments, was implemented on the GPU and optimized with a brute-force attack on PIPO in mind. In particular, the implementation applies the bit-slicing technique and exploits GPU resources as much as possible. As a result, the proposed implementation achieved a throughput of about 19.5 billion PIPO encryptions per second on an RTX 3060, about 122 times higher than the previous study.
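
Bit-slicing, the technique the abstract highlights, packs bit j of 32 independent blocks into one 32-bit word so that a single bitwise instruction advances all 32 blocks at once. The sketch below shows the pattern with a toy 4-bit S-layer; the Boolean equations are placeholders, not PIPO's actual S-box.

```cuda
__global__ void bitsliced_sboxlayer(unsigned int *slices, int num_groups) {
    int g = blockIdx.x * blockDim.x + threadIdx.x;
    if (g >= num_groups) return;
    // 4 bit-planes per group; word s[j] holds bit j of 32 independent blocks.
    unsigned int *s = &slices[4 * g];
    unsigned int t0 = s[0] ^ (s[1] & s[2]);  // toy Boolean equations standing in
    unsigned int t1 = s[1] ^ (s[2] | s[3]);  // for a real cipher's S-box, which
    unsigned int t2 = s[2] ^ t0;             // would be derived from its truth
    unsigned int t3 = s[3] ^ (t0 & t1);      // table
    s[0] = t0; s[1] = t1; s[2] = t2; s[3] = t3;
}
```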