• Title/Summary/Keyword: CUDA

Search Results: 295

A Road Region Extraction Using OpenCV CUDA To Advance The Processing Speed (처리 속도 향상을 위해 OpenCV CUDA를 활용한 도로 영역 검출)

  • Lee, Tae-Hee;Hwang, Bo-Hyun;Yun, Jong-Ho;Choi, Myung-Ryul
    • Journal of Digital Convergence
    • /
    • v.12 no.6
    • /
    • pp.231-236
    • /
    • 2014
  • In this paper, we improve the processing speed of road region extraction by adding device-based (graphics card) parallel processing to a host-based (PC) serial implementation. The OpenCV CUDA module exposes many parallelized functions by coupling conventional OpenCV with CUDA, and these functions are already optimized for the specifications of the user's graphics card, which simplifies algorithm verification and the derivation of simulation results. Experiments using OpenCV CUDA on an NVIDIA GeForce GTX 560 Ti graphics card show that the proposed method runs about 3.09 times faster than the conventional serial method.
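
The entry above relies on OpenCV's CUDA module to offload per-pixel image operations to the graphics card. As a minimal, hedged sketch of that general pattern (not the paper's actual road-extraction pipeline), the fragment below uploads a frame to the GPU, converts it to grayscale, and runs a Canny edge detector through the cv::cuda API; the file name and threshold values are illustrative assumptions.

```cuda
// Minimal OpenCV CUDA sketch: host image -> GPU -> grayscale -> Canny -> host.
// Requires OpenCV built with the CUDA modules (cudaimgproc).
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/cudaimgproc.hpp>

int main() {
    cv::Mat frame = cv::imread("road.png");        // hypothetical input frame
    if (frame.empty()) return 1;

    cv::cuda::GpuMat d_frame, d_gray, d_edges;
    d_frame.upload(frame);                         // host -> device transfer

    // Color conversion and edge detection run as CUDA kernels inside OpenCV.
    cv::cuda::cvtColor(d_frame, d_gray, cv::COLOR_BGR2GRAY);
    auto canny = cv::cuda::createCannyEdgeDetector(50.0, 150.0);  // illustrative thresholds
    canny->detect(d_gray, d_edges);

    cv::Mat edges;
    d_edges.download(edges);                       // device -> host transfer
    cv::imwrite("road_edges.png", edges);
    return 0;
}
```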

A Simulation Framework for CUDA Computing on Non-x86 Platforms based on QEMU and GPGPU-Sim (비x86 플랫폼 상에서의 CUDA 컴퓨팅을 위한 QEMU 및 GPGPU-Sim 기반 시뮬레이션 프레임워크 개발)

  • Hwang, Jaemin;Choi, Jong-Wook;Choi, Seongrim;Nam, Byeong-Gyu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.19 no.2
    • /
    • pp.15-22
    • /
    • 2014
  • This paper proposes a CUDA simulation framework for non-x86 computing platforms based on QEMU and GPGPU-Sim. Previous simulators for heterogeneous computing platforms supported neither non-x86 CPU models nor the CUDA computing platform. In this work, we combine QEMU and GPGPU-Sim to support non-x86 CPU models and the CUDA platform, respectively, yielding a simulation framework for CUDA computing on non-x86 CPU models.

Implementation of Particle Swarm Optimization Method Using CUDA (CUDA를 이용한 Particle Swarm Optimization 구현)

  • Kim, Jo-Hwan;Kim, Eun-Su;Kim, Jong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.5
    • /
    • pp.1019-1024
    • /
    • 2009
  • In this paper, particle swarm optimization (PSO) is implemented with CUDA (Compute Unified Device Architecture) and applied to function optimization on several benchmark functions. CUDA is a programming architecture that solves computationally demanding problems on the GPU (Graphics Processing Unit) rather than the CPU by exploiting the GPU's parallel processing capacity, and it makes GPU software convenient to develop. Compared with PSO executed on a general CPU, the CUDA implementation reduces PSO running time by about 38% on average, which suggests that CUDA is a promising framework for real-time optimization and control.
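
In a CUDA implementation of PSO like the one summarized above, the per-particle velocity and position updates are independent, so each particle can be mapped to one GPU thread. The kernel below is a minimal sketch of that mapping for a single update step, not the paper's code; the inertia and acceleration coefficients, the sphere fitness function, and the assumption that random numbers are precomputed per step are all illustrative.

```cuda
// Hypothetical one-step PSO update: one thread per particle, one dimension per
// particle for brevity. Random numbers r1/r2 are assumed precomputed each step.
__global__ void psoUpdate(float* x, float* v, float* pbest, const float* gbest,
                          const float* r1, const float* r2, int n)
{
    const float w = 0.729f, c1 = 1.494f, c2 = 1.494f;  // illustrative coefficients
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Standard PSO velocity and position update.
    v[i] = w * v[i] + c1 * r1[i] * (pbest[i] - x[i]) + c2 * r2[i] * (gbest[0] - x[i]);
    x[i] += v[i];

    // Sphere function f(x) = x^2 as a stand-in benchmark: keep the personal best position.
    if (x[i] * x[i] < pbest[i] * pbest[i]) pbest[i] = x[i];
}

// Host-side launch for n particles (global-best reduction and error checks omitted):
//   psoUpdate<<<(n + 255) / 256, 256>>>(d_x, d_v, d_pbest, d_gbest, d_r1, d_r2, n);
```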

Correlation between adolescents' dietary safety management competency and value recognition, efficacy, and competency of convergence using dietary area: a descriptive study (청소년의 식생활 안전관리역량과 식생활 영역을 활용한 융합의 가치 인식 및 효능감, 역량과의 상관성 연구)

  • Yunhwa Kim;Yeon-Kyung Lee
    • Korean Journal of Community Nutrition
    • /
    • v.28 no.4
    • /
    • pp.317-328
    • /
    • 2023
  • Objectives: This study aimed to investigate the correlation between adolescents' dietary safety management competency, value recognition, efficacy, and competency of convergence using the dietary area (CUDA). Methods: Data were collected from 480 middle and high school students in Daegu, Gyeongbuk and Seoul, Gyeonggi using a self-administered five-point Likert scale questionnaire from May to July 2021. A questionnaire was used to investigate dietary safety management competency, awareness of convergence, recognition of the benefits, efficacy, and competency of CUDA. Results: We conducted factor, reliability, correlation, and regression analyses using SPSS 25. The average scores for each factor were: dietary significance (3.68); dietary safety management knowledge (3.34); food selection and cooking (3.72); nutrition management (3.38); weight management (3.28); risk dietary management (3.13); CUDA interest (2.98); convergence necessity (3.50); benefits in specialized areas (3.31); benefits in everyday life (3.48); efficacy of science and technology convergence (3.35); convergence efficacy with humanities, social science, and arts (3.31); and CUDA competency (3.41). The score for interest in CUDA was lower than that for the recognition of CUDA benefits. Significant positive correlations were observed between all factors except between risk dietary management and both nutrition and weight management (P < 0.01). Interest in CUDA and recognition of the need for convergence exhibited a positive and significant effect on all factors of the perception of CUDA benefits and efficacy. The subgroup factors of dietary safety management competency and the recognition of CUDA had a positive effect on the CUDA competency (P < 0.001, R2= 0.58). Conclusions: Strengthening dietary safety management competency and increasing the awareness of CUDA can enhance adolescents' convergence competency. Therefore, CUDA and targeted education must be actively promoted among adolescents.

Analysis of Programming Techniques for Creating Optimized CUDA Software (최적화된 CUDA 소프트웨어 제작을 위한 프로그래밍 기법 분석)

  • Kim, Sung-Soo;Kim, Dong-Heon;Woo, Sang-Kyu;Ihm, In-Sung
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.7
    • /
    • pp.775-787
    • /
    • 2010
  • Unlike general-purpose CPUs, GPUs have evolved into specialized many-core streaming processors and are increasingly replacing CPUs in a growing range of computations thanks to their outstanding parallel computing capacity. To respond to this trend, NVIDIA has issued a parallel computing architecture called CUDA (Compute Unified Device Architecture), which offers a flexible GPU programming environment for GPGPU (General-Purpose GPU) computing. When programmers use the CUDA API, they must clearly understand many aspects of the GPU's computing architecture to produce efficient parallel software. In this article, we explain several optimization techniques for CUDA programming that we have verified through extensive experiments and trial and error, and we review how those techniques affect execution performance. In particular, we use a specific problem as an example to analyze several elements that affect performance, such as effective access to the hierarchical memory system, processor occupancy, and latency hiding. In conclusion, we present several guidelines that can be applied effectively to CUDA-based parallel programming.
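
One of the techniques the abstract lists, effective access to the hierarchical memory system, is commonly illustrated with shared-memory tiling: threads in a block stage a tile of global memory in fast on-chip shared memory, synchronize, and then reuse it many times. The kernel below is a generic example of that pattern (a tiled matrix multiply), not code from the paper; the tile size of 16 is an illustrative choice.

```cuda
#define TILE 16  // illustrative tile width

// Tiled matrix multiply C = A * B for square n x n matrices (n divisible by TILE).
// Each block cooperatively loads one tile of A and one tile of B into shared
// memory, so every global element is read once per tile instead of once per thread.
__global__ void matMulTiled(const float* A, const float* B, float* C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];  // coalesced load
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();                       // tile fully staged before use

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();                       // finish reads before the next overwrite
    }
    C[row * n + col] = acc;
}
```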

An Image Processing Speed Enhancement in a Multi-Frame Super Resolution Algorithm by a CUDA Method (CUDA를 이용한 초해상도 기법의 영상처리 속도개선 방법)

  • Kim, Mi-Jeong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.14 no.4
    • /
    • pp.663-668
    • /
    • 2011
  • Although the multi-frame super resolution algorithm has many merits, it demands a great deal of computation time. Research has shown that image processing time can be reduced using CUDA (Compute Unified Device Architecture), one of the GPGPU (General-Purpose computing on Graphics Processing Units) models. In this paper, we show that the processing time of the multi-frame super resolution algorithm can be reduced by employing CUDA, applied not to the whole program but only to its most time-consuming parts. The simulation results show that using CUDA reduces the operation time dramatically, so the multi-frame super resolution algorithm could be implemented in real time by building libraries of image processing routines with CUDA.
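
The approach above offloads only the most time-consuming stages of the super-resolution pipeline to the GPU. A typical hot spot in multi-frame super resolution is accumulating several registered low-resolution frames onto the high-resolution grid; the kernel below is a sketch of such an accumulation step under the simplifying assumptions that the frames are already registered and upscaled by an integer factor, and it is not the paper's algorithm.

```cuda
// Hypothetical accumulation step: average 'numFrames' registered low-resolution
// frames onto a high-resolution grid using nearest-neighbor mapping.
// frames: numFrames contiguous images of size (lw x lh); out: (lw*scale) x (lh*scale).
__global__ void accumulateFrames(const float* frames, float* out,
                                 int lw, int lh, int scale, int numFrames)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // high-res column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // high-res row
    int hw = lw * scale, hh = lh * scale;
    if (x >= hw || y >= hh) return;

    int lx = x / scale, ly = y / scale;              // source pixel in each LR frame
    float sum = 0.0f;
    for (int f = 0; f < numFrames; ++f)
        sum += frames[f * lw * lh + ly * lw + lx];

    out[y * hw + x] = sum / numFrames;               // simple average of all frames
}
```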

Kinematic Wave Rainfall-Runoff Model Using CUDA FORTRAN (CUDA FORTRAN을 이용한 운동파 강우유출모형)

  • Kim, Boram;Kim, Dae-Hong
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2018.05a
    • /
    • pp.271-271
    • /
    • 2018
  • A graphics processing unit (GPU) contains a large number of arithmetic logic units (ALUs) specialized for graphics processing, together with the related instructions, so it can perform computations much faster than a central processing unit (CPU). Recently, many numerical models implemented in FORTRAN have come to require heavier computational loads and longer computation times as modeling methods become more realistic. In this study, a kinematic wave rainfall-runoff model based on general-purpose computing on graphics processing units (GPGPU) was implemented using CUDA (Compute Unified Device Architecture) FORTRAN. The results of the CUDA FORTRAN kinematic wave rainfall-runoff model were verified against those of a validated CPU-based model and showed good agreement. The CUDA FORTRAN model ran about 20 times faster than the CPU-based model, and its computational efficiency relative to the CPU version improved further as the computational domain grew larger.
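
A kinematic wave rainfall-runoff model updates every grid cell independently within a time step, which is why it maps well to one GPU thread per cell. The kernel below is a rough sketch of one explicit 1-D update of the continuity equation dA/dt + dQ/dx = q with a Manning-type rating curve, written in CUDA C/C++ rather than the paper's CUDA FORTRAN for consistency with the other examples in this section; the discretization and variable names are illustrative assumptions, not the authors' scheme.

```cuda
// Hypothetical explicit kinematic-wave update for a 1-D cascade of cells:
//   dA/dt + dQ/dx = q,   Q = alpha * A^m   (Manning-type rating, m ~ 5/3).
// One thread updates one cell; Anew is written so the old state stays intact.
__global__ void kinematicWaveStep(const float* A, float* Anew, const float* q,
                                  float alpha, float m, float dx, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float Qi  = alpha * powf(A[i], m);                       // outflow of cell i
    float Qup = (i > 0) ? alpha * powf(A[i - 1], m) : 0.0f;  // inflow from the upstream cell

    // Upwind finite difference in space, explicit Euler in time.
    Anew[i] = A[i] - dt / dx * (Qi - Qup) + dt * q[i];
    if (Anew[i] < 0.0f) Anew[i] = 0.0f;                      // keep the flow area non-negative
}
```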


CUDA based Lossless Asynchronous Compression of Ultra High Definition Game Scenes using DPCM-GR (DPCM-GR 방식을 이용한 CUDA 기반 초고해상도 게임 영상 무손실 비동기 압축)

  • Kim, Youngsik
    • Journal of Korea Game Society
    • /
    • v.14 no.6
    • /
    • pp.59-68
    • /
    • 2014
  • The memory bandwidth requirements of UHD (Ultra High Definition, 4096×2160) game scenes keep growing. This paper presents a lossless DPCM-GR based compression algorithm using CUDA that addresses the memory bandwidth problem without sacrificing image quality; it is modified from DDPCM-GR [4] to support bit-parallel pipelining. Memory bandwidth efficiency increases through the use of CUDA shared memory, and various asynchronous transfer configurations that overlap kernel execution with host-device data transfers are implemented with page-locked host memory. Experimental results show a maximum speedup of 31.3 relative to the CPU time, and computation time decreases by up to 30.3% among the various configurations.
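
The asynchronous transfer configurations described above rest on two standard CUDA facilities: page-locked (pinned) host memory and streams, which allow cudaMemcpyAsync calls to overlap with kernel execution. The fragment below is a generic sketch of that overlap pattern with a placeholder kernel, not the paper's DPCM-GR compressor; the chunking and the two-stream layout are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

__global__ void compressChunk(const unsigned char* in, unsigned char* out, int n)
{
    // Placeholder for the per-chunk compression kernel (e.g., a DPCM-GR pass).
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Overlap host->device copies, kernel execution, and device->host copies by
// cycling chunks of the frame through two CUDA streams.
void processFrame(const unsigned char* h_in, unsigned char* h_out,  // must be pinned memory
                  unsigned char* d_in, unsigned char* d_out,
                  int frameBytes, int chunkBytes)
{
    cudaStream_t s[2];
    for (int i = 0; i < 2; ++i) cudaStreamCreate(&s[i]);

    for (int off = 0, c = 0; off < frameBytes; off += chunkBytes, ++c) {
        cudaStream_t st = s[c % 2];
        int n = (frameBytes - off < chunkBytes) ? frameBytes - off : chunkBytes;
        cudaMemcpyAsync(d_in + off, h_in + off, n, cudaMemcpyHostToDevice, st);
        compressChunk<<<(n + 255) / 256, 256, 0, st>>>(d_in + off, d_out + off, n);
        cudaMemcpyAsync(h_out + off, d_out + off, n, cudaMemcpyDeviceToHost, st);
    }
    cudaDeviceSynchronize();                       // wait for both streams to drain
    for (int i = 0; i < 2; ++i) cudaStreamDestroy(s[i]);
}

// Host buffers must come from cudaMallocHost/cudaHostAlloc for true asynchronous overlap:
//   cudaMallocHost(&h_in, frameBytes);  cudaMallocHost(&h_out, frameBytes);
```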

Real-Time Free Viewpoint TV System Using CUDA (CUDA 를 이용한 실시간 Free Viewpoint TV System 구현)

  • Yang, Yun Mo;Lee, Jin Hyeok;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.11a
    • /
    • pp.71-73
    • /
    • 2015
  • In this paper, we propose a real-time Free Viewpoint TV system using multiple Microsoft Kinects and NVIDIA's CUDA GPGPU library. The system generates a virtual view between two views in real time from the color and depth images acquired by the Kinects. To reduce the cost of the coordinate transformations and of the nearest-neighbor hole filling required by IR pattern interference, we parallelize these steps using CUDA. We observe that the CUDA-based system generates more frames in the same amount of time than the CPU-based system.
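
The per-pixel work the authors parallelize, warping depth pixels into the virtual view and filling the resulting holes by nearest neighbor, is independent for every pixel, so each pixel maps naturally to one CUDA thread. The kernel below is a heavily simplified sketch of only the hole-filling half (a fixed-radius nearest-valid-neighbor search); the window radius and the use of 0 as the hole marker are illustrative assumptions, not the paper's method.

```cuda
// Hypothetical nearest-neighbor hole filling for a warped depth map.
// A value of 0 marks a hole; each thread scans a small window around its pixel
// and copies the nearest non-zero value it finds.
__global__ void fillHoles(const unsigned short* in, unsigned short* out,
                          int width, int height, int radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    unsigned short v = in[y * width + x];
    if (v != 0) { out[y * width + x] = v; return; }   // not a hole, copy through

    int bestDist = radius * radius + 1;
    unsigned short best = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            unsigned short nv = in[ny * width + nx];
            int d = dx * dx + dy * dy;
            if (nv != 0 && d < bestDist) { bestDist = d; best = nv; }
        }
    }
    out[y * width + x] = best;                        // stays 0 if no valid neighbor was found
}
```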


Development of Diffusive Wave Rainfall-Runoff Model Based on CUDA FORTRAN (CUDA FORTRAN기반 확산파 강우유출모형 개발)

  • Kim, Boram;Kim, Hyeong-Jun;Yoon, Kwang Seok
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.287-287
    • /
    • 2021
  • In this study, a diffusive wave rainfall-runoff model was developed using CUDA (Compute Unified Device Architecture) Fortran. CUDA Fortran is a GPGPU (General-Purpose Computing on Graphics Processing Units) technology that allows parallel algorithms executed on the graphics processing unit (GPU) to be written in the Fortran language. Because a GPU consists of many arithmetic logic units (ALUs) specialized for graphics workloads, it can execute more operations at once than a central processing unit (CPU), so a CUDA Fortran-based diffusive wave model can shorten the simulation time of a distributed rainfall-runoff model. The governing equations of the distributed model consist of the diffusive wave model and the Green-Ampt model, and the diffusive wave equations were discretized with the finite volume method. The accuracy of the CUDA Fortran diffusive wave model was evaluated against published hydraulic experiment results and a CPU-based rainfall-runoff model, and its computational efficiency was compared with the CPU-based diffusive wave model. The CUDA Fortran model produced results similar to the hydraulic experiments and the CPU-based model, and its computation time was up to about 100 times shorter than that of the CPU-based model on the hardware used in this study.
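
In the distributed model above, both the Green-Ampt infiltration update and the diffusive wave flux computation are evaluated cell by cell, which is what makes them candidates for one-thread-per-cell GPU kernels. The kernel below sketches only the Green-Ampt part, an explicit update of cumulative infiltration F with capacity f = K(1 + psi*dtheta/F), written in CUDA C/C++ rather than the paper's CUDA FORTRAN for consistency with the other examples in this section; the explicit time stepping and parameter names are illustrative assumptions, not the authors' scheme.

```cuda
// Hypothetical explicit Green-Ampt step, one thread per grid cell.
// F: cumulative infiltration [m], rain: rainfall intensity [m/s],
// K: saturated hydraulic conductivity, psi: suction head, dtheta: moisture deficit.
__global__ void greenAmptStep(float* F, float* infil, const float* rain,
                              float K, float psi, float dtheta, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float Fi = fmaxf(F[i], 1e-6f);                    // avoid division by zero at F = 0
    float fc = K * (1.0f + psi * dtheta / Fi);        // infiltration capacity f = K(1 + psi*dtheta/F)
    float f  = fminf(fc, rain[i]);                    // actual rate limited by the rainfall supply

    infil[i] = f;                                     // infiltration rate; runoff excess = rain - f
    F[i]     = F[i] + f * dt;                         // advance cumulative infiltration
}
```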
