• Title/Summary/Keyword: 렌더링 품질 (rendering quality)


A Contribution Culling Method for Fast Rendering of Complex Urban Scenes (복잡한 도시장면의 고속 렌더링을 위한 기여도 컬링 기법)

  • Lee, Bum-Jong;Park, Jong-Seung
    • Journal of Korea Game Society
    • /
    • v.7 no.1
    • /
    • pp.43-52
    • /
    • 2007
  • This article describes a new contribution culling method for fast rendering of large, complex urban scenes. A view frustum culling technique is used for fast rendering of complex scenes. To support levels of detail, we subdivide the image regions and construct a weighted quadtree. Only the objects visible at the current camera position contribute to the current quadtree, and a weight is assigned to each object in the quadtree. The weight is proportional to the image area of the projected object, so large buildings in the far distance are less likely to be culled than small buildings in the near distance. The rendering time is nearly constant, independent of the number of visible objects. The proposed method has been applied to a new metropolitan region that is currently under development. Experimental results showed that the rendering quality of the proposed method is barely distinguishable from that of the original method, while the proposed method reduces the number of polygons by about 9%. The results also indicate that the proposed rendering method is appropriate for real-time rendering applications of large, complex scenes.
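
The abstract does not include code, but the core idea of contribution culling, weighting each object by its projected screen area and discarding objects that contribute too few pixels, can be sketched briefly. The following Python snippet is only an illustration under assumed interfaces (AABB corners, a combined view-projection matrix, and a hypothetical pixel-area threshold `min_area`); it does not reproduce the paper's weighted quadtree.

```python
import numpy as np

def projected_area_weight(aabb_min, aabb_max, view_proj, screen_w, screen_h):
    """Approximate an object's screen-space area by projecting its AABB corners."""
    corners = np.array([[x, y, z, 1.0]
                        for x in (aabb_min[0], aabb_max[0])
                        for y in (aabb_min[1], aabb_max[1])
                        for z in (aabb_min[2], aabb_max[2])])
    clip = corners @ view_proj.T
    w = np.clip(clip[:, 3], 1e-6, None)           # crude guard against w <= 0
    ndc = clip[:, :2] / w[:, None]                # normalized device coordinates in [-1, 1]
    px = (ndc * 0.5 + 0.5) * np.array([screen_w, screen_h])
    width, height = px.max(axis=0) - px.min(axis=0)
    return max(width, 0.0) * max(height, 0.0)     # screen-space bounding-box area (weight)

def contribution_cull(objects, view_proj, screen_w, screen_h, min_area=16.0):
    """Keep only objects whose projected area exceeds a pixel-area threshold."""
    return [o for o in objects
            if projected_area_weight(o["aabb_min"], o["aabb_max"],
                                     view_proj, screen_w, screen_h) >= min_area]
```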


SURE-based À-Trous Wavelet Filter for Interactive Monte Carlo Rendering (몬테카를로 렌더링을 위한 슈어기반 실시간 에이트러스 웨이블릿 필터)

  • Kim, Soomin;Moon, Bochang;Yoon, Sung-Eui
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.835-840
    • /
    • 2016
  • Monte Carlo ray tracing has been widely used for simulating a diverse set of photo-realistic effects. However, this technique typically produces noise when an insufficient number of samples is used. As the number of samples allocated per pixel is increased, the rendered images converge, but generating a sufficient number of samples requires prohibitive rendering time. To solve this problem, image filtering can be applied to rendered images: instead of naively generating additional rays, the noisy image rendered with a low sample count is filtered to obtain a smoothed image. In this paper, we propose a Stein's Unbiased Risk Estimator (SURE) based À-Trous wavelet filter that removes the noise in rendered images at a near-interactive rate. Based on SURE, we can estimate the filtering errors associated with the À-Trous wavelet and identify the wavelet coefficients that reduce filtering errors. Our approach showed improvement, up to 6:1, over the original À-Trous filter on various regions of the image, while maintaining only a minor computational overhead. We have integrated our proposed filtering method with the recent interactive ray tracing system, Embree, and demonstrated its benefits.
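
As a rough illustration of the filtering side of this work, the sketch below implements a plain à-trous ("with holes") smoothing pass with the usual B3-spline kernel in Python. The SURE-based estimation of filtering error and the per-coefficient selection described in the paper are not reproduced; the function names and the number of levels are illustrative assumptions.

```python
import numpy as np

# 1-D B3-spline kernel used by the classic a-trous wavelet transform.
B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def atrous_pass(img, level):
    """One a-trous smoothing pass: the kernel taps are spaced 2**level pixels apart."""
    step = 2 ** level
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    for ky, wy in zip((-2, -1, 0, 1, 2), B3):
        for kx, wx in zip((-2, -1, 0, 1, 2), B3):
            ys = np.clip(np.arange(h) + ky * step, 0, h - 1)   # clamp at the border
            xs = np.clip(np.arange(w) + kx * step, 0, w - 1)
            out += wy * wx * img[np.ix_(ys, xs)]
    return out

def atrous_filter(noisy, levels=4):
    """Repeated passes with doubling hole size give a wide smooth of the noisy render."""
    img = noisy.astype(np.float64)
    for level in range(levels):
        img = atrous_pass(img, level)
    return img
```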

Adaptive Foveated Ray Tracing Based on Time-Constrained Rendering for Head-Mounted Display (헤드 마운티드 디스플레이를 위한 시간 제약 렌더링을 이용한 적응적 포비티드 광선 추적법)

  • Kim, Youngwook;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.3
    • /
    • pp.113-123
    • /
    • 2022
  • Ray tracing-based rendering creates far more realistic images than traditional rasterization-based rendering. However, it is still burdensome when implemented for a Head-Mounted Display (HMD) system that demands a wide field of view and a high display refresh rate. Furthermore, to present high-quality images on the HMD screen, a sufficient number of ray samples should be taken per pixel to alleviate visually annoying spatial and temporal aliasing. In this paper, we extend the recent selective foveated ray tracing technique by Kim et al. [1] and propose an improved real-time rendering technique that realizes the rendering effect of classic Whitted-style ray tracing on an HMD system. In particular, by combining a ray tracing hardware-based acceleration technique with a time-constrained rendering scheme, we show that fast HMD ray tracing that is well suited to the human visual system is possible.
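
A minimal sketch of the foveated-sampling idea follows: the per-pixel ray budget decreases with angular distance from the tracked gaze point, and an optional scale factor stands in for the time-constrained budget. This is an assumed, simplified model, not the sample-allocation scheme from the paper.

```python
import math

def samples_per_pixel(px, py, gaze_px, gaze_py, fovea_radius_px,
                      max_spp=4, min_spp=1, budget_scale=1.0):
    """Allocate more ray samples near the gaze point and fewer in the periphery.
    budget_scale below 1.0 (e.g. after a frame that missed its time budget)
    shrinks the whole sample budget."""
    peak = max(min_spp, max_spp * budget_scale)
    dist = math.hypot(px - gaze_px, py - gaze_py)
    if dist <= fovea_radius_px:
        return int(round(peak))
    # Exponential falloff of the sample count with eccentricity.
    t = (dist - fovea_radius_px) / fovea_radius_px
    return max(min_spp, int(round(peak * math.exp(-t))))
```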

Sample thread based real-time BRDF rendering (샘플 쓰레드 기반 실시간 BRDF 렌더링)

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.3
    • /
    • pp.1-10
    • /
    • 2010
  • In this paper, we propose a novel noiseless method for real-time BRDF rendering on the GPU. Illumination at a surface point is formulated as an integral, over the hemisphere, of the BRDF multiplied by the incident radiance. The most popular way to compute this integral is the Monte Carlo method, which needs a large number of samples to achieve good image quality; however, this increases the rendering time, while a small number of sample points causes serious image noise. The main contribution of our work is a new importance sampling scheme that produces a set of incoming-ray samples varying continuously with respect to the eye ray. An incoming ray is importance-sampled at different latitude angles of the eye ray, and the ray samples are then linearly connected to form a curve, called a thread. These threads give continuously moving incident rays as the eye ray changes, so they do not produce image noise. Since even a small number of threads can achieve plausible quality and can also be precomputed before rendering, they enable real-time BRDF rendering on the GPU.
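
The thread construction itself is specific to the paper, but the two ingredients it builds on, importance sampling of incoming directions and continuous blending of sample sets across eye-ray latitudes, can be hinted at with a small Python sketch. The cosine-weighted sampler and the linear blending below are generic stand-ins, not the authors' sampling scheme.

```python
import numpy as np

def cosine_weighted_sample(u1, u2):
    """Importance-sample an incoming direction on the hemisphere with pdf = cos(theta)/pi."""
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(max(1.0 - u1, 0.0))])

def thread_sample(samples_at_lat0, samples_at_lat1, t):
    """Linearly blend corresponding samples taken at two eye-ray latitudes so the incoming
    directions vary continuously as the eye ray moves (a crude stand-in for a 'thread')."""
    blended = (1.0 - t) * samples_at_lat0 + t * samples_at_lat1
    norms = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.clip(norms, 1e-8, None)
```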

Real-time Fluid Animation using Particle Dynamics Simulation and Pre-integrated Volume Rendering (입자 동역학 시뮬레이션과 선적분 볼륨 렌더링을 이용한 실시간 유체 애니메이션)

  • Lee Jeongjin;Kang Moon Koo;Kim Dongho;Shin Yeong Gil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.29-38
    • /
    • 2005
  • The fluid animation procedure consists of physical simulation and visual rendering. In the physical simulation of fluids, the most frequently used practices are the numerical simulation of fluid particles using particle dynamics equations and the continuum analysis of flow via the Navier-Stokes equations. The particle dynamics method is fast to compute, but the resulting fluid motion is realistic only under certain conditions. The method using the Navier-Stokes equations, on the contrary, yields lifelike fluid motion when properly conditioned, yet its computational complexity prevents it from being used in real-time applications. Global illumination is generally successful in producing premium-quality rendered images, but it is also excessively slow for real-time applications. In this paper, we propose a rapid fluid animation method incorporating an enhanced particle dynamics simulation and a pre-integrated volume rendering technique. The particle dynamics simulation of fluid flow was conducted in real time using the Lennard-Jones model, and the computational efficiency was enhanced so that a small number of particles can represent a significant volume. For real-time rendering, the pre-integrated volume rendering method was used so that fewer slices than before can produce seamless inter-slice shading. The proposed method could successfully simulate and render fluid motion in real time at an acceptable speed and visual quality.
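
The Lennard-Jones model mentioned above has a standard closed form: the pair potential is V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6], and its negative gradient gives the inter-particle force. The Python reference implementation below is a plain O(N^2) force loop for illustration; the paper's enhancements for representing a large volume with few particles are not shown.

```python
import numpy as np

def lennard_jones_forces(pos, epsilon=1.0, sigma=1.0, cutoff=2.5):
    """Pairwise Lennard-Jones forces for an (N, 3) float array of particle positions."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]                  # vector from particle j to particle i
            r2 = float(np.dot(rij, rij))
            if r2 == 0.0 or r2 > cutoff * cutoff:
                continue
            sr6 = (sigma * sigma / r2) ** 3        # (sigma/r)^6
            # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * rij (repulsive at short range)
            f = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2 * rij
            forces[i] += f
            forces[j] -= f                         # Newton's third law
    return forces
```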

Efficient Visualization Method for Large Volume Dataset using 3D Texture Mapping and Texture Coordinate Tweaking (3차원 텍스쳐 맵핑 및 텍스쳐 좌표 조작을 통한 대용량 볼륨 데이터의 효과적인 가시화 기법)

  • 이중연
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10b
    • /
    • pp.652-654
    • /
    • 2004
  • With the rapid advancement of PC graphics hardware, there have been continuing attempts to perform, on commodity PCs, the volume rendering of large datasets that was once possible only on supercomputer-class machines. In particular, the vertex and pixel shaders of PC graphics hardware have attracted much attention because they let the user program parts of the rendering process, departing from the fixed graphics pipeline. However, visualization of volume data larger than the texture memory of the graphics hardware is still not fast enough, and the image quality suffers from texture compression. In this paper, we use these programmable features of graphics hardware, namely manipulation of vertex and texture coordinates together with Phong shading computed in the pixel shader, to visualize, at high quality, large volume data that exceeds the memory capacity of the graphics hardware.
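
As a very loose illustration of slice-based volume rendering, the Python sketch below samples axis-aligned slices of a scalar volume and composites them back to front with the "over" operator. On the GPU this is done with 3D texture lookups and programmable shaders; the paper's texture-coordinate manipulation and handling of data larger than texture memory are not reproduced, and the trivial alpha transfer function is an assumption.

```python
import numpy as np

def composite_slices(volume, n_slices=128, alpha_scale=0.05):
    """Sample slices of a (d, h, w) scalar volume and alpha-composite them back to front."""
    d, h, w = volume.shape
    image = np.zeros((h, w))
    alpha_acc = np.zeros((h, w))
    zs = np.linspace(d - 1, 0, n_slices)                    # back-to-front along z
    for z in zs:
        z0 = int(np.floor(z))
        z1 = min(z0 + 1, d - 1)
        t = z - np.floor(z)
        slice_ = (1.0 - t) * volume[z0] + t * volume[z1]    # interpolate between slabs
        alpha = np.clip(slice_ * alpha_scale, 0.0, 1.0)     # toy transfer function
        image = slice_ * alpha + image * (1.0 - alpha)      # "over" compositing
        alpha_acc = alpha + alpha_acc * (1.0 - alpha)
    return image, alpha_acc
```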


Group-based Adaptive Rendering for 6DoF Immersive Video Streaming (6DoF 몰입형 비디오 스트리밍을 위한 그룹 분할 기반 적응적 렌더링 기법)

  • Lee, Soonbin;Jeong, Jong-Beom;Ryu, Eun-Seok
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.216-227
    • /
    • 2022
  • The MPEG-I (Immersive) group is working on a standardization project for immersive video that provides six degrees of freedom (6DoF). The MPEG Immersive Video (MIV) standard is intended to provide limited 6DoF based on depth image-based rendering (DIBR) techniques. Many efficient coding methods have been suggested for MIV, but efficient transmission strategies have received little attention in MPEG-I. This paper proposes a group-based adaptive rendering method for immersive video streaming. Each group can be transmitted independently using group-based encoding, enabling adaptive transmission depending on the user's viewport. In the rendering process, the proposed method derives per-group weights for view synthesis and allocates high-quality bitstreams according to the given viewport. The proposed method is implemented in the Test Model for Immersive Video (TMIV). In the experiments, the proposed method demonstrates 17.0% Bjontegaard-delta rate (BD-rate) savings in terms of peak signal-to-noise ratio (PSNR) and 14.6% in terms of Immersive Video PSNR (IV-PSNR) across various end-to-end evaluation metrics.
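
A toy version of viewport-adaptive quality allocation might look like the Python sketch below: each group is represented by a view direction, weighted by its alignment with the user's viewport, and only the best-aligned groups receive the high-quality bitstream. The representation of groups, the cosine weighting, and the budget are illustrative assumptions rather than the TMIV weighting used in the paper.

```python
import numpy as np

def allocate_group_quality(group_view_dirs, viewport_dir, high_quality_budget=2):
    """Weight groups by alignment with the viewport and mark the best ones as high quality."""
    viewport_dir = viewport_dir / np.linalg.norm(viewport_dir)
    weights = []
    for d in group_view_dirs:
        d = d / np.linalg.norm(d)
        weights.append(max(float(np.dot(d, viewport_dir)), 0.0))   # cosine similarity
    order = np.argsort(weights)[::-1]                              # best-aligned first
    quality = ["low"] * len(group_view_dirs)
    for idx in order[:high_quality_budget]:
        quality[idx] = "high"
    return weights, quality
```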

Rendering Quality Improvement Method based on Depth and Inverse Warping (깊이정보와 역변환 기반의 포인트 클라우드 렌더링 품질 향상 방법)

  • Lee, Heejea;Yun, Junyoung;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.714-724
    • /
    • 2021
  • Point cloud content is immersive content that records a real environment and objects as points with three-dimensional position information and their corresponding colors. When point cloud content consisting of three-dimensional points with position and color information is enlarged and rendered, the gaps between the points widen and empty holes occur. In this paper, we propose a method for improving the quality of point cloud content by finding the holes that arise from the gaps between points during enlargement and filling them through inverse-warping-based interpolation using depth information. Points belonging to the back side of the point cloud are rendered inside the holes created by the gaps between points, which hinders application of the interpolation method. To solve this, we first remove the points corresponding to the back side of the point cloud. Next, a depth map is extracted for the viewpoint at which the empty holes occur. Finally, an inverse warp is performed to fetch pixels from the original data. Rendering content with the proposed method improved the rendering quality by 1.2 dB in average PSNR compared to the conventional method of enlarging point size to fill the blank areas.
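
The inverse-warping step can be outlined in a few lines: a hole pixel in the target view is back-projected with its depth, transformed into the source view, and the corresponding source pixel is fetched. The Python sketch below assumes camera-to-world pose matrices and pinhole intrinsics; it is a simplified illustration, not the authors' pipeline.

```python
import numpy as np

def fill_hole_by_inverse_warping(hole_px, depth, K_target, pose_target,
                                 K_source, pose_source, source_image):
    """Fetch a color for a hole pixel by back-projecting it with its depth and
    re-projecting the resulting 3-D point into the source view."""
    u, v = hole_px
    # Back-project the target pixel to a 3-D point in the target camera frame.
    p_cam = depth * (np.linalg.inv(K_target) @ np.array([u, v, 1.0]))
    # Target camera frame -> world -> source camera frame (poses are camera-to-world).
    p_world = pose_target[:3, :3] @ p_cam + pose_target[:3, 3]
    R_s, t_s = pose_source[:3, :3], pose_source[:3, 3]
    p_src = R_s.T @ (p_world - t_s)
    if p_src[2] <= 0.0:
        return None                                   # point lies behind the source camera
    uv = K_source @ (p_src / p_src[2])
    x, y = int(round(uv[0])), int(round(uv[1]))
    h, w = source_image.shape[:2]
    if 0 <= x < w and 0 <= y < h:
        return source_image[y, x]
    return None
```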

Real-time BCC Volume Isosurface Ray Casting on the GPU (GPU를 이용한 실시간 BCC 볼륨 등가면 레이 캐스팅)

  • Kim, Minho;Lee, Young-Joon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.4
    • /
    • pp.25-34
    • /
    • 2012
  • This paper presents a real-time GPU (graphics processing unit) ray casting scheme for rendering isosurfaces of BCC (body-centered cubic) volume datasets. A quartic spline field is built using the 7-direction box-spline filter combined with a quasi-interpolation prefilter. To obtain an interactive rendering speed on the graphics hardware, the shader code was optimized to avoid lookup tables and conditional branches and to minimize the data fetch overhead. Compared to previous implementations, our work outperforms a comparable one by more than 20%, and the rendering quality is superior to that of the others.
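
The box-spline reconstruction on the BCC lattice is the technical core of this paper and is not condensed here; the Python sketch below only illustrates the surrounding isosurface ray-casting loop on an ordinary Cartesian grid with trilinear reconstruction. The step size, hit refinement, and bounds handling are illustrative assumptions, and the ray origin is assumed to lie inside the volume.

```python
import numpy as np

def trilinear(volume, p):
    """Trilinear reconstruction on a Cartesian grid (stand-in for the BCC box-spline filter)."""
    x0, y0, z0 = np.floor(p).astype(int)
    x1, y1, z1 = np.minimum([x0 + 1, y0 + 1, z0 + 1], np.array(volume.shape) - 1)
    fx, fy, fz = p - np.floor(p)
    value = 0.0
    for xi, wx in ((x0, 1 - fx), (x1, fx)):
        for yi, wy in ((y0, 1 - fy), (y1, fy)):
            for zi, wz in ((z0, 1 - fz), (z1, fz)):
                value += wx * wy * wz * volume[xi, yi, zi]
    return value

def cast_isosurface_ray(volume, origin, direction, iso, step=0.5, max_t=512.0):
    """March along the ray and return the parameter of the first isovalue crossing."""
    direction = direction / np.linalg.norm(direction)
    prev = trilinear(volume, origin)
    t = step
    while t < max_t:
        p = origin + t * direction
        if np.any(p < 0) or np.any(p >= np.array(volume.shape) - 1):
            return None                                   # ray left the volume
        cur = trilinear(volume, p)
        if (prev - iso) * (cur - iso) < 0:                # sign change: surface crossed
            return t - step * (cur - iso) / (cur - prev)  # linear refinement of the hit
        prev, t = cur, t + step
    return None
```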

A Hybrid Generation Method of Visual Effects for Mobile Entertainment Applications (모바일 엔터테인먼트 애플리케이션을 위한 혼합적 시각 효과 생성 방법)

  • Kim, Byung-Cheol
    • Journal of Digital Convergence
    • /
    • v.13 no.12
    • /
    • pp.367-380
    • /
    • 2015
  • This paper proposes a hybrid rendering method that combines pre-computed global illumination results with interactive local illumination techniques and can thus interactively produce photo-realistic visual effects for mobile entertainment applications. The proposed method uses the programmable shading capability of OpenGL, a de facto standard computer graphics library, so that it can be deployed in real-world development environments. Also, it adds only a negligible amount to the normal rendering time, since the pre-computed results are used as operands of simple arithmetic operations. It is therefore expected to be applicable in practice to mobile games that require real-time responsiveness.
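
The combination of baked and interactive terms can be illustrated with a small shading function: a precomputed indirect-illumination value is treated as a cheap per-pixel operand and added to a dynamically evaluated Blinn-Phong direct term. The Python sketch below is a CPU-side illustration of the idea, not the paper's OpenGL shader code, and all parameter names are hypothetical.

```python
import numpy as np

def shade(albedo, normal, view_dir, light_dir, light_color, precomputed_indirect,
          shininess=32.0):
    """Combine a baked (precomputed) indirect term with a dynamic Blinn-Phong direct term."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = l + v
    h = h / max(np.linalg.norm(h), 1e-8)                   # half vector for Blinn-Phong
    diffuse = albedo * light_color * max(float(np.dot(n, l)), 0.0)
    specular = light_color * max(float(np.dot(n, h)), 0.0) ** shininess
    # The precomputed global-illumination result is simply added as a cheap operand.
    return np.clip(precomputed_indirect * albedo + diffuse + specular, 0.0, 1.0)
```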