• Title/Summary/Keyword: GPU Memory


Accelerating GPU-based Volume Ray-casting Using Brick Vertex (브릭 정점을 이용한 GPU 기반 볼륨 광선투사법 가속화)

  • Chae, Su-Pyeong; Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society / v.17 no.3 / pp.1-7 / 2011
  • Recently, various methods have been proposed to accelerate GPU-based volume ray-casting. However, these methods may cause several problems, such as a data-transmission bottleneck between the CPU and GPU, additional video memory required for hierarchical structures, and increased processing time whenever the opacity transfer function changes. In this paper, we propose an efficient GPU-based empty space skipping technique to solve these problems. We store the maximum density of each brick of the volume dataset in a vertex element. Then we delete the vertices regarded as transparent by the opacity transfer function in a geometry shader. The remaining vertices are used to generate bounding boxes of non-transparent regions, which help rays traverse efficiently. Although these vertices are independent of the viewing conditions, they need to be regenerated whenever the opacity transfer function changes. Our technique provides fast generation of opaque vertices for interactive processing, since the generation stage runs in the GPU pipeline. The rendering results of our algorithm are identical to those of conventional GPU ray-casting, but performance can be more than 10 times higher.
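  • A minimal CPU-side sketch of the preprocessing idea above, assuming a scalar 8-bit volume: compute the per-brick maximum density, then drop bricks that a given opacity transfer function leaves fully transparent. The struct and function names are illustrative assumptions, and the paper itself performs this culling on vertices inside a geometry shader rather than on the CPU; note that the per-brick maxima do not depend on the transfer function, so only the culling pass needs to be repeated when it changes.

      #include <algorithm>
      #include <cstdint>
      #include <cstdio>
      #include <functional>
      #include <vector>

      struct Brick {            // one candidate vertex: brick origin + max density
          int x, y, z;
          uint8_t maxDensity;
      };

      // Compute the per-brick maximum density, then drop bricks that the opacity
      // transfer function maps to zero opacity. The survivors correspond to the
      // "opaque vertices" used to build tight bounding boxes for the rays.
      std::vector<Brick> buildOpaqueBricks(const std::vector<uint8_t>& volume,
                                           int dimX, int dimY, int dimZ, int brick,
                                           const std::function<float(uint8_t)>& opacityTF)
      {
          std::vector<Brick> result;
          for (int bz = 0; bz < dimZ; bz += brick)
              for (int by = 0; by < dimY; by += brick)
                  for (int bx = 0; bx < dimX; bx += brick) {
                      uint8_t maxD = 0;
                      for (int z = bz; z < std::min(bz + brick, dimZ); ++z)
                          for (int y = by; y < std::min(by + brick, dimY); ++y)
                              for (int x = bx; x < std::min(bx + brick, dimX); ++x)
                                  maxD = std::max(maxD, volume[(z * dimY + y) * dimX + x]);
                      if (opacityTF(maxD) > 0.0f)       // transparent bricks are culled
                          result.push_back({bx, by, bz, maxD});
                  }
          return result;
      }

      int main() {
          const int dim = 64;
          std::vector<uint8_t> volume(dim * dim * dim, 0);      // mostly empty volume
          volume[(32 * dim + 32) * dim + 32] = 200;             // one dense voxel
          auto tf = [](uint8_t d) { return d > 100 ? 1.0f : 0.0f; };  // simple step transfer function
          auto bricks = buildOpaqueBricks(volume, dim, dim, dim, 8, tf);
          std::printf("non-transparent bricks: %zu of %d\n",
                      bricks.size(), (dim / 8) * (dim / 8) * (dim / 8));
      }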

Spatial Data Structure for Efficient Representation of Very Large Sparse Volume Data for 3D Reconstruction (3차원 복원을 위한 대용량 희소 볼륨 데이터의 효율적인 저장을 위한 공간자료구조)

  • An, Jae Pung; Shin, Seungmi; Seo, Woong; Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.19-29 / 2017
  • When a fixed-size memory allocation method is used for sparse volume data, a considerable amount of memory is generally wasted, which becomes more serious for large, high-resolution volumes. In this paper, in order to reduce such unnecessary memory consumption, we propose a volume representation that stores mainly the voxels carrying valid information rather than every voxel in a fixed volume space. Our method is then compared with the conventional static memory allocation method, an octree-based representation, and a voxel hashing method in terms of memory usage and computation speed. In particular, we compare the proposed method and the voxel hashing method with respect to the implementation of a GPU-based Marching Cubes algorithm.
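  • The storage idea can be illustrated with a small hash-based sparse voxel container in the spirit of the voxel hashing that the paper uses for comparison: only voxels carrying valid information are stored, keyed by packed coordinates. The class name, key packing, and float payload are assumptions for illustration, not the paper's data structure.

      #include <cstdint>
      #include <cstdio>
      #include <optional>
      #include <unordered_map>

      // Store only voxels that carry valid information, keyed by packed coordinates.
      // A dense array of the same extent would allocate dimX*dimY*dimZ entries
      // regardless of how sparse the data actually is.
      class SparseVolume {
      public:
          void set(uint32_t x, uint32_t y, uint32_t z, float value) {
              voxels_[pack(x, y, z)] = value;
          }
          std::optional<float> get(uint32_t x, uint32_t y, uint32_t z) const {
              auto it = voxels_.find(pack(x, y, z));
              if (it == voxels_.end()) return std::nullopt;   // empty space
              return it->second;
          }
          std::size_t storedVoxels() const { return voxels_.size(); }

      private:
          // Pack three 21-bit coordinates into one 64-bit key (up to 2^21 voxels per axis).
          static uint64_t pack(uint64_t x, uint64_t y, uint64_t z) {
              return (x & 0x1FFFFF) | ((y & 0x1FFFFF) << 21) | ((z & 0x1FFFFF) << 42);
          }
          std::unordered_map<uint64_t, float> voxels_;
      };

      int main() {
          SparseVolume v;
          v.set(10, 20, 30, 0.5f);                 // only one voxel carries data
          std::printf("stored voxels: %zu, value: %f\n",
                      v.storedVoxels(), v.get(10, 20, 30).value_or(0.0f));
      }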

Profiler Design for Evaluating Performance of WebCL Applications (WebCL 기반 애플리케이션의 성능 평가를 위한 프로파일러 설계 및 구현)

  • Kim, Cheolwon; Cho, Hyeonjoong
    • KIPS Transactions on Computer and Communication Systems / v.4 no.8 / pp.239-244 / 2015
  • WebCL was proposed for highly complex computing in JavaScript. Since WebCL-based applications are distributed and executed on an unspecified number of general clients, it is important to profile their performance on different clients. Several profilers have been introduced for various programming languages, but a WebCL profiler has not been developed yet. In this paper, we present a WebCL profiler that evaluates WebCL-based applications and monitors the status of the GPU on which they run. The profiler reports the execution time of applications, memory read/write time, and GPU status such as power consumption, temperature, and clock speed.
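  • WebCL inherits OpenCL's event-based profiling, so the kind of timing the profiler reports (kernel and memory read/write time) can be sketched with the OpenCL host API; the example below times a host-to-device buffer write on a queue created with CL_QUEUE_PROFILING_ENABLE. GPU power, temperature, and clock would have to come from vendor-specific interfaces, which are not shown; error handling is omitted for brevity.

      #include <CL/cl.h>
      #include <cstdio>
      #include <vector>

      int main() {
          cl_platform_id platform;  clGetPlatformIDs(1, &platform, nullptr);
          cl_device_id device;      clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
          cl_int err;
          cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
          // Profiling must be enabled on the command queue to obtain event timestamps.
          cl_command_queue queue = clCreateCommandQueue(ctx, device, CL_QUEUE_PROFILING_ENABLE, &err);

          std::vector<float> host(1 << 20, 1.0f);
          cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, host.size() * sizeof(float), nullptr, &err);

          cl_event ev;
          clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                               host.data(), 0, nullptr, &ev);
          clWaitForEvents(1, &ev);

          cl_ulong start = 0, end = 0;   // device timestamps in nanoseconds
          clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof(start), &start, nullptr);
          clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,   sizeof(end),   &end,   nullptr);
          std::printf("host-to-device write: %.3f ms\n", (end - start) * 1e-6);

          clReleaseEvent(ev);
          clReleaseMemObject(buf);
          clReleaseCommandQueue(queue);
          clReleaseContext(ctx);
          return 0;
      }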

New Two-Level L1 Data Cache Bypassing Technique for High Performance GPUs

  • Kim, Gwang Bok; Kim, Cheol Hong
    • Journal of Information Processing Systems / v.17 no.1 / pp.51-62 / 2021
  • On-chip caches of graphics processing units (GPUs) have contributed to improved GPU performance by reducing long memory access latency. However, cache efficiency remains low even though recent GPUs have considerably mitigated the bottleneck problem of the L1 data cache. Although the cache miss rate is a reasonable metric of cache efficiency, it is not necessarily proportional to GPU performance. In this study, we introduce a second key determinant to overcome the inaccuracy of predicting the performance gain of the L1 data cache from the miss rate alone. The proposed technique estimates the benefit of the cache by measuring the balance between cache efficiency and throughput. The throughput of the cache is predicted from the warp occupancy information in the warp pool. The warp occupancy is then used in a second bypass phase when workloads show an ambiguous miss rate. In our proposed architecture, the L1 data cache is turned off for a long period when the warp occupancy is not high. Our two-level bypassing technique can be applied to recent GPU models and improves performance by 6% on average compared to the architecture without bypassing. Moreover, it outperforms conventional bottleneck-based bypassing techniques.
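  • The two-level decision itself can be summarized in a few lines: the miss rate decides when it is clearly low or clearly high, and warp occupancy breaks the tie in the ambiguous band. The sketch below is only a software schematic of that control flow; the thresholds and field names are invented for illustration, and the real mechanism sits in hardware, not in application code.

      #include <cstdio>

      // Schematic of a two-level L1D bypass decision: the miss rate is consulted first,
      // and warp-occupancy information breaks ties when the miss rate is ambiguous.
      // All thresholds are placeholders for illustration only.
      struct CacheStats {
          double missRate;        // observed L1D miss rate for the sampling period
          double warpOccupancy;   // fraction of warp-pool slots holding ready warps
      };

      enum class Decision { UseL1D, BypassL1D };

      Decision decideBypass(const CacheStats& s) {
          constexpr double kLowMiss      = 0.3;   // clearly cache-friendly
          constexpr double kHighMiss     = 0.9;   // clearly cache-averse
          constexpr double kMinOccupancy = 0.5;

          if (s.missRate < kLowMiss)  return Decision::UseL1D;     // level 1: miss rate decides
          if (s.missRate > kHighMiss) return Decision::BypassL1D;

          // Level 2: ambiguous miss rate -- low warp occupancy suggests the cache
          // is not keeping enough warps ready, so turn it off for a long period.
          return (s.warpOccupancy < kMinOccupancy) ? Decision::BypassL1D
                                                   : Decision::UseL1D;
      }

      int main() {
          CacheStats s{0.6, 0.3};   // ambiguous miss rate, low occupancy
          std::printf("bypass L1D: %s\n", decideBypass(s) == Decision::BypassL1D ? "yes" : "no");
      }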

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.27 no.1 / pp.104-123 / 2022
  • Style transfer based on neural networks reflects the high-level structural characteristics of images and provides very high-quality results, and has therefore recently attracted great attention. This paper deals with the resolution limitation imposed by GPU memory when performing such neural style transfer. Because the receptive field has a fixed size, the gradient computation for style transfer on a partial image can be expected to produce the same result as the gradient computation on the entire image. Based on this idea, each component of the style transfer loss function is analyzed to obtain the conditions required for partitioning and padding, and to identify which of the information required for gradient calculation depends on the entire input. By structuring this information so that it can serve as auxiliary constant input for partition-based gradient calculation, this paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method performs style transfer by partitioning the input image into pieces of a size the GPU can handle, it can perform style transfer without the input-resolution limit otherwise imposed by the GPU memory size. With such super high-resolution support, the proposed method can reproduce the distinctive style characteristics of detailed areas, which can only be appreciated in super high-resolution style transfer.
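  • The partitioning mechanism can be sketched as follows: the image is split into tiles small enough for GPU memory, and each tile is padded by the receptive-field radius (clamped at the image border) so that per-tile gradients agree with full-image gradients. Tile size, halo width, and names below are illustrative; the paper's loss-specific auxiliary constant inputs are not modeled here.

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      // One tile of a large image: the padded region is what is actually fed to the
      // network, the inner region is the part whose gradients are kept.
      struct Tile {
          int x0, y0, x1, y1;             // inner (valid) region, half-open
          int px0, py0, px1, py1;         // padded region including the halo
      };

      // Split a width x height image into tiles of at most tileSize x tileSize,
      // each padded by the receptive-field radius (clamped to the image border).
      std::vector<Tile> partitionWithHalo(int width, int height, int tileSize, int halo) {
          std::vector<Tile> tiles;
          for (int y = 0; y < height; y += tileSize)
              for (int x = 0; x < width; x += tileSize) {
                  Tile t;
                  t.x0 = x;  t.y0 = y;
                  t.x1 = std::min(x + tileSize, width);
                  t.y1 = std::min(y + tileSize, height);
                  t.px0 = std::max(t.x0 - halo, 0);
                  t.py0 = std::max(t.y0 - halo, 0);
                  t.px1 = std::min(t.x1 + halo, width);
                  t.py1 = std::min(t.y1 + halo, height);
                  tiles.push_back(t);
              }
          return tiles;
      }

      int main() {
          auto tiles = partitionWithHalo(8192, 8192, 1024, 128);   // 8K image, 1K tiles, 128-px halo
          std::printf("tiles: %zu\n", tiles.size());
      }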

Fast GPU Implementation for the Solution of Tridiagonal Matrix Systems (삼중대각행렬 시스템 풀이의 빠른 GPU 구현)

  • Kim, Yong-Hee; Lee, Sung-Kee
    • Journal of KIISE: Computer Systems and Theory / v.32 no.11_12 / pp.692-704 / 2005
  • With the improvement of computer hardware, GPUs (graphics processing units) offer tremendous memory bandwidth and computation power, which has led to their use in general-purpose computation. In particular, GPU implementations of compute-intensive, physics-based simulations are actively studied. When solving the differential equations that underlie physics simulations, tridiagonal matrix systems arise repeatedly from finite-difference approximation, so their fast solution is an important research topic for physics-based simulation. We propose a fast GPU implementation for the solution of tridiagonal matrix systems. In this paper, we implement the cyclic reduction (also known as odd-even reduction) algorithm, a popular choice for vector processors. We obtained a considerable performance improvement for solving tridiagonal matrix systems over the Thomas algorithm and the conjugate gradient method; the Thomas algorithm is the standard method for solving tridiagonal systems on the CPU, while the conjugate gradient method has shown good results on the GPU. We evaluated the proposed method by applying it to heat conduction, advection-diffusion, and shallow water simulations, which achieved a remarkable performance of over 35 frames per second on a 1024x1024 grid.
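  • A serial reference version of cyclic reduction is sketched below for a system with a power-of-two number of unknowns; all equations at a given stride can be updated independently, which is exactly the parallelism a GPU implementation exploits. The code assumes a well-conditioned (e.g. diagonally dominant) system and is an illustration, not the paper's GPU kernel; sizes that are not a power of two can be padded with decoupled identity rows.

      #include <cstdio>
      #include <vector>

      // Serial cyclic (odd-even) reduction for a tridiagonal system A x = d with
      // sub/main/super diagonals a, b, c. n must be a power of two; a[0] and c[n-1]
      // are treated as 0. Coefficients are taken by value because they are modified in place.
      std::vector<double> cyclicReduction(std::vector<double> a, std::vector<double> b,
                                          std::vector<double> c, std::vector<double> d)
      {
          const int n = static_cast<int>(b.size());
          a[0] = 0.0; c[n - 1] = 0.0;

          // Forward reduction: halve the number of active equations at each stride.
          for (int stride = 1; stride < n / 2; stride *= 2) {
              for (int i = 2 * stride - 1; i < n; i += 2 * stride) {
                  const int lo = i - stride, hi = i + stride;
                  const double alpha = -a[i] / b[lo];
                  const double gamma = (hi < n) ? -c[i] / b[hi] : 0.0;
                  a[i] = alpha * a[lo];
                  b[i] = b[i] + alpha * c[lo] + ((hi < n) ? gamma * a[hi] : 0.0);
                  c[i] = (hi < n) ? gamma * c[hi] : 0.0;
                  d[i] = d[i] + alpha * d[lo] + ((hi < n) ? gamma * d[hi] : 0.0);
              }
          }

          // Solve the remaining 2x2 system in x[n/2 - 1] and x[n - 1].
          std::vector<double> x(n, 0.0);
          const int i1 = n / 2 - 1, i2 = n - 1;
          const double det = b[i1] * b[i2] - c[i1] * a[i2];
          x[i1] = (d[i1] * b[i2] - c[i1] * d[i2]) / det;
          x[i2] = (b[i1] * d[i2] - d[i1] * a[i2]) / det;

          // Backward substitution: recover the eliminated unknowns, stride by stride.
          for (int stride = n / 4; stride >= 1; stride /= 2) {
              for (int i = stride - 1; i < n; i += 2 * stride) {
                  const double xlo = (i - stride >= 0) ? x[i - stride] : 0.0;
                  const double xhi = (i + stride < n) ? x[i + stride] : 0.0;
                  x[i] = (d[i] - a[i] * xlo - c[i] * xhi) / b[i];
              }
          }
          return x;
      }

      int main() {
          // Example: a small diagonally dominant system, -x[i-1] + 4 x[i] - x[i+1] = 2.
          const int n = 8;
          std::vector<double> a(n, -1.0), b(n, 4.0), c(n, -1.0), d(n, 2.0);
          std::vector<double> x = cyclicReduction(a, b, c, d);
          std::printf("x[0] = %f, x[%d] = %f\n", x[0], n - 1, x[n - 1]);
      }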

Synthesis of Ocean Wave Models and Simulation Using GPU (바다물결 모형의 합성 및 GPU를 이용한 시뮬레이션)

  • Lee, Dong-Min; Lee, Sung-Kee
    • The KIPS Transactions: Part A / v.14A no.7 / pp.421-434 / 2007
  • Among the many natural scenes generated with computer graphics, the representation of ocean surfaces is one of the most complicated and time-consuming problems because of their large extent and complex surface movement. We present a hybrid method to represent and animate unbounded deep-water ocean surfaces by using the graphics processor as both the simulation and rendering core. Our technique is mainly based on spectral approaches that generate a highly detailed height field using a Fourier transform on a 2D regular grid. Additionally, we incorporate the Gerstner model and generate a low-detail height field on a 2D projected grid in order to represent large waves and the main structure of the ocean surface. There is no interruption between the CPU and GPU, and no need to transfer simulation results from system memory to graphics hardware, because the entire simulation and rendering process runs on the graphics processor. As a result, we can synthesize and render realistic water surfaces in real time. The proposed techniques are readily applicable to real-time applications such as computer games, which place a heavy workload on the CPU but still demand plausible natural scenes.
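  • The Gerstner component mentioned above can be sketched as a sum of trochoidal waves that displace each grid point horizontally toward the crests as well as vertically; the wave-parameter struct and function names below are illustrative, and the FFT-based spectral component and the GPU (shader) implementation are not shown.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      struct Wave {             // one Gerstner (trochoidal) wave component
          float dirX, dirZ;     // unit propagation direction
          float amplitude;      // A
          float wavelength;     // lambda
          float steepness;      // Q in [0, 1]
          float speed;          // phase speed
      };

      struct Float3 { float x, y, z; };

      // Displace one grid point (x0, z0) at time t by a sum of Gerstner waves.
      // The horizontal displacement toward the crest is what gives the waves their
      // characteristic sharp tops, unlike a pure height field.
      Float3 gerstner(float x0, float z0, float t, const std::vector<Wave>& waves) {
          Float3 p{x0, 0.0f, z0};
          for (const Wave& w : waves) {
              const float k = 2.0f * 3.14159265f / w.wavelength;     // wavenumber
              const float phase = k * (w.dirX * x0 + w.dirZ * z0) + w.speed * k * t;
              p.x += w.steepness * w.amplitude * w.dirX * std::cos(phase);
              p.z += w.steepness * w.amplitude * w.dirZ * std::cos(phase);
              p.y += w.amplitude * std::sin(phase);
          }
          return p;
      }

      int main() {
          std::vector<Wave> waves = {{1.0f, 0.0f, 0.5f, 20.0f, 0.6f, 2.0f},
                                     {0.7071f, 0.7071f, 0.2f, 7.0f, 0.8f, 1.2f}};
          Float3 p = gerstner(10.0f, 5.0f, 3.0f, waves);   // grid point (10, 5) at t = 3 s
          std::printf("displaced point: (%f, %f, %f)\n", p.x, p.y, p.z);
      }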

A GPU-enabled Face Detection System in the Hadoop Platform Considering Big Data for Images (이미지 빅데이터를 고려한 하둡 플랫폼 환경에서 GPU 기반의 얼굴 검출 시스템)

  • Bae, Yuseok; Park, Jongyoul
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.20-25 / 2016
  • With the advent of the era of digital big data, the Hadoop platform has become widely used in various fields. However, the Hadoop MapReduce framework suffers from growth in the name node's main memory usage and in the number of map tasks when processing a large number of small files. In addition, a method for running C++-based tasks in the MapReduce framework is required in order to exploit GPUs, which provide hardware-based data parallelism, within that framework. Therefore, in this paper, we present a face detection system that packs images into a sequence file in order to handle image big data on the Hadoop platform, and that runs GPU-based face detection tasks in the MapReduce framework using Hadoop Pipes. We demonstrate a performance increase of around 6.8-fold compared to a single CPU process.
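  • How a C++ task plugs into MapReduce via Hadoop Pipes can be illustrated with a mapper/reducer pair in the style of the standard Pipes word-count example; the face-detection call below uses OpenCV's CPU cascade classifier as a stand-in for the paper's GPU detector, and the class names, key/value layout of the image sequence file, and model path are assumptions rather than the paper's code.

      #include "hadoop/Pipes.hh"
      #include "hadoop/TemplateFactory.hh"
      #include <opencv2/imgcodecs.hpp>
      #include <opencv2/objdetect.hpp>
      #include <string>
      #include <vector>

      // Mapper: key = image name, value = encoded image bytes read from the
      // sequence file; emits "<image name> <number of detected faces>".
      class FaceDetectMapper : public HadoopPipes::Mapper {
      public:
          explicit FaceDetectMapper(HadoopPipes::TaskContext&) {
              detector_.load("haarcascade_frontalface_default.xml");   // model path is an assumption
          }
          void map(HadoopPipes::MapContext& context) override {
              const std::string& bytes = context.getInputValue();
              std::vector<unsigned char> buf(bytes.begin(), bytes.end());
              cv::Mat gray = cv::imdecode(buf, cv::IMREAD_GRAYSCALE);
              std::vector<cv::Rect> faces;
              if (!gray.empty())
                  detector_.detectMultiScale(gray, faces);             // a GPU detector would go here
              context.emit(context.getInputKey(), std::to_string(faces.size()));
          }
      private:
          cv::CascadeClassifier detector_;
      };

      // Identity-style reducer: forwards the per-image counts unchanged.
      class PassThroughReducer : public HadoopPipes::Reducer {
      public:
          explicit PassThroughReducer(HadoopPipes::TaskContext&) {}
          void reduce(HadoopPipes::ReduceContext& context) override {
              while (context.nextValue())
                  context.emit(context.getInputKey(), context.getInputValue());
          }
      };

      int main() {
          return HadoopPipes::runTask(
              HadoopPipes::TemplateFactory<FaceDetectMapper, PassThroughReducer>());
      }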

Accelerating Medical Image Processing on Integrated GPU Using OpenCL (OpenCL을 이용한 내장형 GPU에서의 의학영상처리 가속화)

  • Kim, Beom-Jun; Shin, Byeong-seok
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.1-10 / 2017
  • A variety of filters are applied to improve the quality of noisy, low-resolution medical images. This is necessary to reduce the radiation dose delivered to the patient and to make better use of existing imaging equipment. Conventionally, such filtering is performed on the CPU of a PC. However, it is difficult to produce results in real time when applying computationally heavy filters to high-resolution human images using only the CPU performance of a PC available in a hospital. In this paper, we analyze the structure and performance of the Intel GPU integrated with the CPU and propose a method to perform image filtering using OpenCL parallel processing. By applying complex filters with high computational cost to medical images, high-quality images can be generated in real time.
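  • A compact OpenCL host-plus-kernel sketch of the approach: a 3x3 averaging filter dispatched to the GPU (an integrated GPU is selected like any other CL_DEVICE_TYPE_GPU device). It stands in for the paper's more complex medical filters; kernel and buffer names are illustrative and error handling is omitted.

      #include <CL/cl.h>
      #include <cstdio>
      #include <vector>

      // 3x3 box filter in OpenCL C; the host-side structure (build program,
      // set arguments, enqueue) is the same for more complex filters.
      static const char* kSource = R"CLC(
      __kernel void box3x3(__global const float* src, __global float* dst,
                           const int width, const int height) {
          int x = get_global_id(0), y = get_global_id(1);
          if (x >= width || y >= height) return;
          float sum = 0.0f; int count = 0;
          for (int dy = -1; dy <= 1; ++dy)
              for (int dx = -1; dx <= 1; ++dx) {
                  int nx = x + dx, ny = y + dy;
                  if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                      sum += src[ny * width + nx]; ++count;
                  }
              }
          dst[y * width + x] = sum / count;
      }
      )CLC";

      int main() {
          const int width = 512, height = 512;
          std::vector<float> image(width * height, 1.0f), result(width * height);

          cl_platform_id platform; clGetPlatformIDs(1, &platform, nullptr);
          cl_device_id device;     clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
          cl_int err;
          cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
          cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

          cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
          clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
          cl_kernel kernel = clCreateKernel(prog, "box3x3", &err);

          const size_t bytes = image.size() * sizeof(float);
          cl_mem src = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, image.data(), &err);
          cl_mem dst = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, &err);

          clSetKernelArg(kernel, 0, sizeof(cl_mem), &src);
          clSetKernelArg(kernel, 1, sizeof(cl_mem), &dst);
          clSetKernelArg(kernel, 2, sizeof(int), &width);
          clSetKernelArg(kernel, 3, sizeof(int), &height);

          size_t global[2] = {static_cast<size_t>(width), static_cast<size_t>(height)};
          clEnqueueNDRangeKernel(queue, kernel, 2, nullptr, global, nullptr, 0, nullptr, nullptr);
          clEnqueueReadBuffer(queue, dst, CL_TRUE, 0, bytes, result.data(), 0, nullptr, nullptr);

          std::printf("center pixel after filtering: %f\n", result[(height / 2) * width + width / 2]);

          clReleaseMemObject(src); clReleaseMemObject(dst);
          clReleaseKernel(kernel); clReleaseProgram(prog);
          clReleaseCommandQueue(queue); clReleaseContext(ctx);
          return 0;
      }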

Large-Scale Ultrasound Volume Rendering using Bricking (블리킹을 이용한 대용량 초음파 볼륨 데이터 렌더링)

  • Kim, Ju-Hwan; Kwon, Koo-Joo; Shin, Byeong-Seok
    • Journal of the Korea Society of Computer and Information / v.13 no.7 / pp.117-126 / 2008
  • Recent advances in medical imaging technologies have enabled high-resolution data acquisition, so visualization of such large data sets on standard graphics hardware has become a popular research theme. Among many visualization techniques, we focus on the bricking method, which divides the entire volume into smaller bricks and renders them in order. Since it swaps bricks between main memory and GPU memory on the fly, the number of these memory swaps has to be minimized to achieve good performance. Because the original bricking algorithm was designed for regular volume data such as CT and MR, applying it to ultrasound volume data, which is based on a toroidal coordinate space, reveals some performance degradation: in some areas near brick boundaries, an orthogonal viewing ray intersects the same brick twice, so that a single brick has to be uploaded to GPU memory twice in one frame. To avoid this redundancy, we divide the volume into bricks that are allowed to overlap. In this paper, we derive a formula to determine an appropriate size for the shared area between bricks. Using this formula, we minimize the memory bandwidth and, at the same time, achieve better rendering performance.
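  • The overlapping-brick idea can be sketched per axis as below: brick ranges share a configurable number of boundary voxels, so a ray crossing a boundary region can be served from a single resident brick. The overlap value is a placeholder parameter; the paper's formula for the appropriate size of the shared area is not reproduced here.

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      struct BrickRange {       // half-open voxel range [begin, end) along one axis
          int begin, end;
      };

      // Partition one axis of length `extent` into bricks of size `brickSize`
      // that overlap their neighbors by `overlap` voxels on each shared face.
      std::vector<BrickRange> makeOverlappingBricks(int extent, int brickSize, int overlap) {
          std::vector<BrickRange> bricks;
          for (int start = 0; start < extent; start += brickSize - overlap) {
              BrickRange b;
              b.begin = start;
              b.end = std::min(start + brickSize, extent);
              bricks.push_back(b);
              if (b.end == extent) break;       // last brick reaches the end of the axis
          }
          return bricks;
      }

      int main() {
          auto bricks = makeOverlappingBricks(400, 64, 8);   // 400-voxel axis, 64-voxel bricks, 8-voxel overlap
          std::printf("bricks along this axis: %zu\n", bricks.size());
      }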
