• Title/Summary/Keyword: pixel shader


MPEG-I RVS Software Speed-up for Real-time Application (An MPEG-I RVS acceleration technique for real-time rendering)

  • Ahn, Heejune; Lee, Myeong-jin
    • Journal of Broadcast Engineering / v.25 no.5 / pp.655-664 / 2020
  • Free-viewpoint image synthesis is one of the key technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used by the MPEG group, is a DIBR (depth-image-based rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple input viewpoints. RVS renders mesh surfaces in the manner of computer graphics and outperforms the previous pixel-based method by 2.5 dB or more. Although its OpenGL version is about 10 times faster than the non-OpenGL one, it still falls short of real time, running at 0.75 fps on two 2K-resolution input images. In this paper, we analyze the internals of the RVS implementation and restructure it, achieving a 34-times speed-up and thus real-time performance (22-26 fps) through three key improvements: 1) reuse of OpenGL buffers and texture objects, 2) parallelization of file I/O and OpenGL execution, and 3) parallelization of the GPU shader program and buffer transfers.
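As a sketch of the first improvement, the pattern below allocates OpenGL buffer and texture objects once and only updates their contents each frame with glTexSubImage2D and glBufferSubData, instead of recreating them per frame. The structure and sizes are illustrative, not taken from the RVS source; a current GL context and a loader such as GLAD are assumed.

```cpp
// Sketch: allocate OpenGL objects once, reuse them every frame.
// Assumes a valid OpenGL context; identifiers are illustrative.
#include <glad/glad.h>   // or any other GL function loader
#include <cstddef>
#include <cstdint>

struct FrameResources {
    GLuint tex = 0;      // color texture for one input view
    GLuint vbo = 0;      // vertex buffer for the depth mesh
    int width = 0, height = 0;

    void allocateOnce(int w, int h, size_t vboBytes) {
        width = w; height = h;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // Storage is created once and never reallocated.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vboBytes, nullptr, GL_DYNAMIC_DRAW);
    }

    // Per frame: only upload new pixel/vertex data, no create/destroy.
    void update(const uint8_t* pixels, const void* verts, size_t vertBytes) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0, vertBytes, verts);
    }
};
```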

Implementation of Neural Networks using GPU

  • Oh, Kyoung-su; Jung, Keechul
    • The KIPS Transactions: Part B / v.11B no.6 / pp.735-742 / 2004
  • We present a new use of common graphics hardware to run artificial neural networks faster, and we examine how the GPU improves the time performance of an image-processing system based on a neural network. When multiple input sets are computed in parallel, the vector-matrix products become matrix-matrix multiplications, so the parallelism of the GPU can be fully utilized. The sigmoid operation and bias-term addition are also implemented in a pixel shader on the GPU. Our preliminary results show a performance improvement of about thirty times on an ATI RADEON 9800 XT board.
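A minimal CPU sketch of the batching idea: stacking B input vectors as the rows of a matrix X turns B vector-matrix products into one matrix-matrix product XW, which is what maps well onto GPU parallelism. The paper evaluates this formulation in a pixel shader; the function below only illustrates the arithmetic.

```cpp
// Y = sigmoid(X * W + b): X is B x N (one row per input set),
// W is N x M, b has length M, Y is B x M.
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> forwardLayer(const std::vector<float>& X, int B, int N,
                                const std::vector<float>& W, int M,
                                const std::vector<float>& b) {
    std::vector<float> Y(static_cast<std::size_t>(B) * M);
    for (int i = 0; i < B; ++i) {          // one row per input set
        for (int j = 0; j < M; ++j) {      // one column per output neuron
            float acc = b[j];              // bias-term addition
            for (int k = 0; k < N; ++k)
                acc += X[i * N + k] * W[k * M + j];
            Y[i * M + j] = 1.0f / (1.0f + std::exp(-acc)); // sigmoid
        }
    }
    return Y;
}
```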

Real-time Soft Shadowing of Dynamic Height Maps Using a Shadow Height Map (Real-time shadows using a shadow height map)

  • Lee, Sung-Ho; Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society / v.14 no.1 / pp.11-16 / 2008
  • This paper introduces a novel real-time soft-shadowing method for height maps. As well as supporting self-shadowing of the height map, our method allows shadows to be cast on other objects. Because it requires no precomputation, the method is well suited to dynamically changing height maps. A shadow height map (SHM) is a new structure that stores the height of the shadow at each discretized coordinate of the height map. Constructing the SHM is O(n), where n is the number of texels in the SHM. Shadows can then be computed from this map quickly and simply in a pixel shader. Examples demonstrate good real-time performance and plausible visual quality.
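The abstract does not give the SHM recurrence, so the following is one plausible O(n) construction, assuming a light swept along the +x axis of the grid: each texel's shadow height is the larger of the terrain height and the previous shadow height lowered by the light's slope over one texel, and a fragment is shadowed when it lies below that height.

```cpp
// Illustrative O(n) shadow-height-map construction; the recurrence and
// names are assumptions, not taken from the paper.
#include <algorithm>
#include <vector>

std::vector<float> buildSHM(const std::vector<float>& height, int w, int h,
                            float lightSlope /* height drop per texel */) {
    std::vector<float> shm(height.size());
    for (int y = 0; y < h; ++y) {
        float carried = -1e30f;                 // no shadow entering the row
        for (int x = 0; x < w; ++x) {
            float z = height[y * w + x];
            carried = std::max(z, carried - lightSlope);
            shm[y * w + x] = carried;           // shadow top at this texel
        }
    }
    return shm;
}

// A surface point is shadowed when it lies below the shadow height;
// a pixel shader would perform the same comparison per fragment.
bool inShadow(const std::vector<float>& shm, int w, int x, int y, float z) {
    return z < shm[y * w + x];
}
```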


PBR (Physically Based Rendering) Simulation Considering a Mathematical Fresnel Model for Game Improvement (PBR simulation of a mathematical Fresnel model for efficient game improvement)

  • Kim, Seongdong
    • Journal of Korea Game Society / v.16 no.1 / pp.111-118 / 2016
  • This paper proposes a mathematical model of the Fresnel effect used to illuminate and simulate the surface of a character model for defense-game play. The term illumination denotes the process by which the amount of light reaching the surface of the character model is determined. Character surface shaders generally use a mathematical model to predict how light will reflect off triangles, and shading denotes the methods used to determine the color and intensity of the light reflected toward the viewer for each pixel of the character surface. The proposed model computes reflection and transmission coefficients and compares the simulated results to the Fresnel equations with a view to real game improvement.
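For reference, these are the Fresnel equations such a model is compared against, in their standard form (the abstract does not give the paper's exact parameterization): for light crossing from refractive index n1 to n2 with incidence angle θi and transmission angle θt related by Snell's law, the s- and p-polarized reflectances, the unpolarized average, and the complementary transmittance are:

```latex
% Snell's law relates the two angles:
\[ n_1 \sin\theta_i = n_2 \sin\theta_t \]
% Fresnel reflectances for s- and p-polarized light:
\[
R_s = \left| \frac{n_1\cos\theta_i - n_2\cos\theta_t}
                  {n_1\cos\theta_i + n_2\cos\theta_t} \right|^2,
\qquad
R_p = \left| \frac{n_1\cos\theta_t - n_2\cos\theta_i}
                  {n_1\cos\theta_t + n_2\cos\theta_i} \right|^2
\]
\[
R = \tfrac{1}{2}\,(R_s + R_p), \qquad T = 1 - R
\]
% Real-time shaders commonly use Schlick's approximation instead:
\[
F(\theta) \approx F_0 + (1 - F_0)(1 - \cos\theta)^5,
\qquad
F_0 = \left( \frac{n_1 - n_2}{n_1 + n_2} \right)^2
\]
```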

Virtual Engraving and Frottage Simulation with Haptic Feedback (Virtual simulation of engraving and rubbing techniques using the sense of touch)

  • Lee, Dong-Wook; Park, Ye-Seul; Park, Jin-Ah
    • Proceedings of the Korean Information Science Society Conference / 2007.10b / pp.206-211 / 2007
  • Advances in modern graphics hardware now deliver interactively, through real-time rendering, the kinds of images that until a few years ago could only be produced with pre-rendered methods. These advances have driven change in many fields, including games, simulation, and media art, and will continue to do so. One such change is the real-time reproduction of art techniques, a domain where generating images in real time used to be difficult. This paper proposes Virtual Engraving and Virtual Frottage, applications that simulate the engraving (printmaking) and frottage (rubbing) techniques in a virtual environment. Virtual Engraving gives the user the experience of carving a virtual object in 3D space by means of a 3D input/output device and bump mapping (a sketch of the bump-mapping ingredient follows below), while Virtual Frottage takes the rubbing target as an input image and offers the user an engaging frottage experience through image processing and pixel-shader rendering. Both applications convey the art techniques to the user visually; Virtual Engraving additionally provides tactile information through the 3D input/output device, and haptic feedback for Virtual Frottage is under development. Such simulations of art techniques can offer users not only a more realistic experience but also effects that are not possible in physical space.
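As a hedged illustration of the bump-mapping ingredient mentioned above: a carved height map can be turned into perturbed surface normals by central differences. The grid layout and the strength factor are assumptions for illustration; the abstract does not specify the paper's pipeline.

```cpp
// Derive a tangent-space normal from a height map by central differences.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 bumpNormal(const std::vector<float>& height, int w, int h,
                int x, int y, float strength) {
    auto at = [&](int ix, int iy) {
        ix = std::clamp(ix, 0, w - 1);   // clamp lookups to the border
        iy = std::clamp(iy, 0, h - 1);
        return height[iy * w + ix];
    };
    float dx = (at(x + 1, y) - at(x - 1, y)) * 0.5f * strength;
    float dy = (at(x, y + 1) - at(x, y - 1)) * 0.5f * strength;
    float len = std::sqrt(dx * dx + dy * dy + 1.0f);
    return { -dx / len, -dy / len, 1.0f / len }; // unit normal
}
```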


GPU-based Dynamic Point-Light Particle Rendering Using 3D Textures for Real-time Rendering

  • Kim, Byeong Jin; Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles is itself a light source. Two 3D textures are used to determine the range of influence of each light dynamically: the first stores light color and the second stores light-direction information. Each frame proceeds in two steps. The first step, based on a compute shader, initializes the 3D textures and updates the particle information required for rendering: each particle position is converted to 3D-texture sampling coordinates, and for the affected voxels, the color sum of the particle lights is accumulated into the first 3D texture and the sum of the direction vectors from the voxel to the particle lights into the second. The second step runs in the ordinary rendering pipeline: from the world position of the polygon being rendered, the exact sampling coordinates of the 3D textures updated in the first step are computed. Because the sampling coordinates correspond 1:1 between the 3D texture and the game world, the world coordinates of the pixel are used directly as the sampling coordinates, and lighting is evaluated from the sampled color and light-direction vector. The 3D texture maps 1:1 onto the actual game world with a minimum unit of 1 m, so at scales below 1 m, resolution limits cause staircase artifacts; interpolation and supersampling during texture sampling mitigate these problems. Per-frame timings show 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particle lights, and 214 ms (forward) and 104 ms (deferred) with 1,024,766 particle lights.
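A minimal CPU sketch of the first step, with plain arrays standing in for the two 3D textures: each particle light is splatted into the voxels it influences, accumulating its color into the first grid and the unnormalized voxel-to-light direction into the second. Grid size, the influence radius, and all names are illustrative; the paper performs this accumulation in a compute shader on GPU textures.

```cpp
// Accumulate particle lights into two world-aligned 3D grids
// (stand-ins for the color and direction 3D textures).
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
constexpr int N = 64;                       // voxels per axis, 1 voxel = 1 m
inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

struct LightGrids {
    std::vector<Vec3> colorSum{ size_t(N) * N * N, {0, 0, 0} }; // texture 1
    std::vector<Vec3> dirSum  { size_t(N) * N * N, {0, 0, 0} }; // texture 2

    void clear() {                          // per-frame initialization
        std::fill(colorSum.begin(), colorSum.end(), Vec3{0, 0, 0});
        std::fill(dirSum.begin(), dirSum.end(), Vec3{0, 0, 0});
    }

    // World coordinates map 1:1 to voxel coordinates (origin at 0).
    void splat(const Vec3& lightPos, const Vec3& lightColor) {
        int cx = int(lightPos.x), cy = int(lightPos.y), cz = int(lightPos.z);
        const int r = 2;                    // influence radius in voxels
        for (int z = cz - r; z <= cz + r; ++z)
          for (int y = cy - r; y <= cy + r; ++y)
            for (int x = cx - r; x <= cx + r; ++x) {
                if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
                    continue;
                int i = idx(x, y, z);
                colorSum[i].x += lightColor.x;          // sum of light colors
                colorSum[i].y += lightColor.y;
                colorSum[i].z += lightColor.z;
                dirSum[i].x += lightPos.x - (x + 0.5f); // voxel-to-light
                dirSum[i].y += lightPos.y - (y + 0.5f); // direction, summed
                dirSum[i].z += lightPos.z - (z + 0.5f); // unnormalized
            }
    }
};
```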

Image-Based Relighting Rendering System (Image-based real-time relighting rendering system)

  • Kim, Soon-Hyun; Lee, Joo-Haeng; Kyung, Min-Ho
    • Journal of the HCI Society of Korea / v.2 no.1 / pp.25-31 / 2007
  • We develop an interactive relighting renderer that allows camera-view changes, based on a deep-frame-buffer approach. The renderer first caches the rendering parameters for a given 3D scene in an auxiliary buffer of the same size as the output image; the parameters that are independent of light changes are selected from the shading models used to shade the pixels. Then, as the user interactively edits one light at a time, the relighting renderer instantly re-shades each pixel by updating the contribution of the changed light using the shading parameters cached in the deep-frame buffer. When the camera moves, the cached values become obsolete and must be recomputed; we present a novel method to synthesize them quickly from the cache images of user-specified cameras using an image-based technique. These computations are all performed on the GPU to achieve real-time performance.
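A minimal sketch of the deep-frame-buffer update, using a simple Lambertian term as a stand-in shading model (the paper's shading models are not given in the abstract): light-independent parameters are cached per pixel, and when one light is edited, only that light's contribution is swapped, leaving the other lights' contributions untouched. The paper runs the equivalent update on the GPU.

```cpp
// Re-shade by swapping one light's contribution per pixel.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 dir; Vec3 color; };           // directional light
struct PixelCache { Vec3 albedo; Vec3 normal; };  // light-independent part

static Vec3 shade(const PixelCache& p, const Light& l) {
    float ndotl = std::max(0.0f,
        p.normal.x * l.dir.x + p.normal.y * l.dir.y + p.normal.z * l.dir.z);
    return { p.albedo.x * l.color.x * ndotl,
             p.albedo.y * l.color.y * ndotl,
             p.albedo.z * l.color.z * ndotl };
}

// One light changed: update every pixel by (new - old) contribution,
// without re-evaluating any of the other lights in the scene.
void relight(std::vector<Vec3>& image, const std::vector<PixelCache>& cache,
             const Light& oldLight, const Light& newLight) {
    for (size_t i = 0; i < image.size(); ++i) {
        Vec3 oldC = shade(cache[i], oldLight);
        Vec3 newC = shade(cache[i], newLight);
        image[i].x += newC.x - oldC.x;
        image[i].y += newC.y - oldC.y;
        image[i].z += newC.z - oldC.z;
    }
}
```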
