• Title/Summary/Keyword: Rendering quality

A Data Structure for Real-time Volume Ray Casting (실시간 볼륨 광선 투사법을 위한 자료구조)

  • Lim, Suk-Hyun; Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society, v.11 no.1, pp.40-49, 2005
  • Several optimization techniques have been proposed for volume ray casting, but they cannot achieve real-time frame rates, and it is difficult to apply them to applications that require perspective projection. Recently, hardware-based methods using 3D texture mapping have been used for real-time volume rendering. Although their rendering speed approaches real time, larger volumes require more swapping of volume bricks in and out of the limited texture memory, and image quality deteriorates compared with conventional volume ray casting. In this paper, we propose a data structure for real-time volume ray casting named PERM (Precomputed dEnsity and gRadient Map). The PERM stores the interpolated density and gradient vector for each quantized cell. Since the information that requires time-consuming computation is stored in the PERM, our method can ensure interactive frame rates on a consumer PC platform. It also normally produces high-quality images because it is based on conventional volume ray casting (see the sketch below).
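
The core of the PERM idea, as the abstract describes it, is to precompute the quantities volume ray casting repeatedly needs: the density and the gradient at quantized sample positions. Below is a minimal NumPy sketch of that precomputation and lookup; the cell layout, quantization, and all function names are illustrative assumptions, not the paper's actual PERM format.

```python
import numpy as np

def build_perm(volume):
    """Precompute a density map and a central-difference gradient map for a
    scalar volume so the ray caster can look both up instead of recomputing
    them per sample.  A rough stand-in for the paper's PERM; the real data
    structure quantizes cells and packs the data for cache efficiency."""
    density = volume.astype(np.float32)
    gz, gy, gx = np.gradient(density)              # gradients along z, y, x axes
    gradient = np.stack([gx, gy, gz], axis=-1)     # one 3-vector per voxel
    return {"density": density, "gradient": gradient}

def sample_perm(perm, point):
    """Nearest-cell lookup of the precomputed density and gradient at `point`
    (voxel coordinates, z/y/x order)."""
    iz, iy, ix = (int(round(c)) for c in point)
    return perm["density"][iz, iy, ix], perm["gradient"][iz, iy, ix]

if __name__ == "__main__":
    vol = np.random.rand(32, 32, 32)               # toy volume
    perm = build_perm(vol)
    d, g = sample_perm(perm, (16.2, 15.7, 16.0))
    print("density:", d, "gradient:", g)
```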

Evaluation of Artificial Intelligence-Based Denoising Methods for Global Illumination

  • Faradounbeh, Soroor Malekmohammadi; Kim, SeongKi
    • Journal of Information Processing Systems, v.17 no.4, pp.737-753, 2021
  • As the demand for high-quality rendering in mixed reality, video games, and simulation has increased, global illumination has been actively researched. Monte Carlo path tracing can realize global illumination and produce photorealistic scenes that include critical effects such as color bleeding, caustics, multiple lights, and shadows. If the sampling rate is insufficient, however, the rendered results contain a large amount of noise. The most successful approach to eliminating or reducing Monte Carlo noise uses a feature-based filter, which exploits scene characteristics such as the position in world coordinates and the shading normal. These techniques generally operate on either pixels or individual samples and are computationally expensive, and their main challenge is to find appropriate weights for every feature while preserving the details of the scene. In this paper, we compare recent algorithms for removing Monte Carlo noise in terms of performance and quality, and we describe their advantages and disadvantages. To our knowledge, this study is the first to compare artificial-intelligence-based denoising methods for Monte Carlo rendering (see the sketch below).
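
As a concrete illustration of the "feature-based filter" idea the survey discusses, here is a toy cross-bilateral filter in NumPy: each neighbour's contribution is weighted by colour similarity and by similarity of auxiliary features such as the shading normal or world position. This is a generic sketch with made-up parameter names, not any specific method compared in the paper.

```python
import numpy as np

def feature_weighted_denoise(color, features, radius=3, sigma_c=0.2, sigma_f=0.3):
    """Weigh each neighbour by colour similarity and auxiliary-feature
    similarity, then average.  Illustrative cross-bilateral filtering only;
    the learned weights of the surveyed AI methods are not modelled."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    dc = np.sum((color[y, x] - color[ny, nx]) ** 2)
                    df = np.sum((features[y, x] - features[ny, nx]) ** 2)
                    wgt = np.exp(-dc / (2 * sigma_c ** 2) - df / (2 * sigma_f ** 2))
                    acc += wgt * color[ny, nx]
                    wsum += wgt
            out[y, x] = acc / wsum        # centre pixel always contributes, so wsum > 0
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = rng.random((32, 32, 3))
    feats = rng.random((32, 32, 6))       # e.g. normal + world position per pixel
    print(feature_weighted_denoise(noisy, feats).shape)
```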

Sample thread based real-time BRDF rendering (샘플 쓰레드 기반 실시간 BRDF 렌더링)

  • Kim, Soon-Hyun; Kyung, Min-Ho; Lee, Joo-Haeng
    • Journal of the Korea Computer Graphics Society, v.16 no.3, pp.1-10, 2010
  • In this paper, we propose a novel noise-free method for real-time BRDF rendering on the GPU. Illumination at a surface point is formulated as an integral of the BRDF multiplied by the incident radiance over the hemisphere. The most popular way to compute this integral is the Monte Carlo method, which needs a large number of samples to achieve good image quality; this increases rendering time, while a small number of samples causes serious image noise. The main contribution of our work is a new importance-sampling scheme that produces a set of incoming-ray samples varying continuously with the eye ray. An incoming ray is importance-sampled at different latitude angles of the eye ray, and the resulting samples are linearly connected to form a curve called a thread. These threads give incident rays that move continuously as the eye ray changes, so they do not introduce image noise. Since even a small number of threads achieves plausible quality and the threads can be precomputed before rendering, they enable real-time BRDF rendering on the GPU (see the sketch below).
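
The key property the abstract claims is that incoming-ray samples vary continuously with the eye ray, because samples precomputed at a few quantized eye latitudes are linearly connected into "threads" and then interpolated. The sketch below mimics that structure with generic cosine-weighted hemisphere samples; the per-latitude sampling rule, the azimuth shift, and all names are illustrative assumptions rather than the paper's actual importance-sampling scheme.

```python
import numpy as np

def cosine_sample(u1, u2):
    """Cosine-weighted hemisphere sample (a common, generic importance choice)."""
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(max(0.0, 1.0 - u1))])

def build_threads(num_threads=16, eye_latitudes=8, seed=0):
    """Precompute one incoming-ray sample per thread per quantized eye latitude.
    Connecting a thread's samples across latitudes and interpolating gives
    directions that move continuously with the eye ray, which is what removes
    per-pixel noise.  The small azimuth shift below is only a stand-in for
    re-importance-sampling at each latitude."""
    rng = np.random.default_rng(seed)
    u = rng.random((num_threads, 2))
    knots = []
    for k in range(eye_latitudes):
        shift = 0.05 * k                      # keeps neighbouring knots close
        knots.append([cosine_sample(u1, (u2 + shift) % 1.0) for u1, u2 in u])
    return np.asarray(knots)                  # shape: (eye_latitudes, num_threads, 3)

def samples_for_eye(threads, eye_latitude01):
    """Linearly interpolate every thread at a continuous eye latitude in [0, 1]."""
    n = threads.shape[0] - 1
    f = eye_latitude01 * n
    i = int(np.floor(f)); t = f - i; j = min(i + 1, n)
    d = (1.0 - t) * threads[i] + t * threads[j]
    return d / np.linalg.norm(d, axis=1, keepdims=True)

if __name__ == "__main__":
    threads = build_threads()
    print(samples_for_eye(threads, 0.37).shape)   # (16, 3) unit directions
```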

Study on the Methods of Enhancing the Quality of DIBR-based Multiview Intermediate Images using Depth Expansion and Mesh Construction (깊이 정보 확장과 메쉬 구성을 이용한 DIBR 기반 다시점 중간 영상 화질 향상 방법에 관한 연구)

  • Park, Kyoung Shin; Kim, Jiseong; Cho, Yongjoo
    • Journal of the Korea Institute of Information and Communication Engineering, v.19 no.1, pp.127-135, 2015
  • In this research, we evaluated a depth-information expansion method, a surface (mesh) reconstruction method, and the interaction between the two, in order to enhance the intermediate-view images obtained with DIBR (Depth-Image-Based Rendering). We evaluated the experimental conditions on Microsoft's "Ballet" and "Break Dancer" data sets with three different hole-filling algorithms. The results showed that quality improved the most when both depth expansion and surface reconstruction were applied, compared with using point clouds alone. In addition, applying depth expansion alone vastly improved the intermediate images when no hole-filling algorithm was used (see the sketch below).
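
For readers unfamiliar with DIBR, the sketch below shows the two pieces the study varies in toy form: a "depth expansion" step (here just a grey-scale erosion that spreads the nearer depth values outward) and a simple forward warp that shifts pixels by disparity = baseline × focal / depth, leaving the holes that hole-filling must later repair. The expansion radius, the purely horizontal camera shift, and all names are illustrative assumptions; the paper's exact expansion and mesh-construction steps are not reproduced.

```python
import numpy as np
from scipy.ndimage import grey_erosion

def expand_depth(depth, size=3):
    """Toy depth expansion: spread the nearer (smaller) depth values outward so
    foreground silhouettes grow slightly and leave fewer cracks after warping."""
    return grey_erosion(depth, size=(size, size))

def dibr_forward_warp(color, depth, baseline_times_focal=8.0):
    """Warp a colour image to a horizontally shifted virtual view.  Each pixel
    moves by disparity = baseline * focal / depth; unmapped pixels stay black
    (the holes that hole-filling algorithms repair)."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    disparity = baseline_times_focal / np.maximum(depth, 1e-3)
    for y in range(h):
        for x in range(w):
            nx = int(round(x + disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = color[y, x]
    return out

if __name__ == "__main__":
    rgb = np.random.rand(48, 64, 3)
    depth = np.full((48, 64), 5.0)
    depth[10:30, 20:40] = 2.0                       # a "foreground" block
    view = dibr_forward_warp(rgb, expand_depth(depth))
    print(view.shape)
```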

Service Rendering Study for Adaptive Service of 3D Graphics Contents in Middleware (3D 그래픽 콘텐츠의 적응적 서비스를 위한 미들웨어에서의 서비스 렌더링 연구)

  • Kim, Hak-Ran; Park, Hwa-Jin; Yoon, Yong-Ik
    • The KIPS Transactions: Part A, v.14A no.5, pp.279-286, 2007
  • The need for content adaptation in ubiquitous environments is growing in order to support multiple target platforms for 3D graphics content. Since 3D graphics involve large data sets and high performance requirements, service adaptation to changing contexts is needed to handle graphics content across devices such as desktops, laptops, PDAs, and mobile phones. In this paper, we propose a service adaptation middleware based on a service rendering algorithm, which provides flexible, customized services for user-centric 3D graphics content. The middleware consists of Service Adaptation (SA), which analyzes the environment, and Service Rendering (SR), which reconfigures customized services by processing customized data. Together they can intelligently and dynamically deliver the same graphics content with good quality when the user's environment changes (see the sketch below).
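
As a rough illustration of the SA/SR split described above, here is a minimal Python sketch in which a Service Adaptation step inspects a device context and a Service Rendering step reconfigures the content parameters accordingly. All class names, fields, and thresholds are invented for illustration; the paper does not specify this API.

```python
from dataclasses import dataclass

@dataclass
class Context:
    device: str          # e.g. "desktop", "pda", "phone"  (hypothetical field)
    bandwidth_kbps: int

def service_adaptation(ctx: Context) -> dict:
    """Toy SA step: analyse the environment and choose target parameters."""
    if ctx.device == "desktop" and ctx.bandwidth_kbps > 1000:
        return {"lod": "high", "texture_res": 1024}
    if ctx.device == "pda":
        return {"lod": "medium", "texture_res": 256}
    return {"lod": "low", "texture_res": 128}

def service_rendering(scene: dict, params: dict) -> dict:
    """Toy SR step: reconfigure the same content for the chosen parameters."""
    return {"scene": scene["name"], **params}

if __name__ == "__main__":
    print(service_rendering({"name": "lobby"},
                            service_adaptation(Context("pda", 300))))
```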

Real-Time Rendering of a Displacement Map using an Image Pyramid (이미지 피라미드를 이용한 변위 맵의 실시간 렌더링)

  • Oh, Kyoung-Su; Ki, Hyun-Woo
    • Journal of KIISE: Computer Systems and Theory, v.34 no.5_6, pp.228-237, 2007
  • Displacement mapping enables us to add realistic detail to polygonal meshes without changing their geometry. We present a real-time, artifact-free inverse displacement mapping method. For each pixel, we construct a ray and trace it through the displacement map to find an intersection. To skip empty regions safely, we traverse the image pyramid of the displacement map in top-down order. Furthermore, when the displacement map is magnified, the intersection with the bilinearly interpolated displacement map can be found, and when it is viewed from a distance, our method supports mipmap-like prefiltering to improve image quality and speed. Experimental results show that our method produces correct images even at grazing view angles. A test scene renders at several hundred frames per second, and the resolution of the displacement map has little influence on rendering speed. Our method is simple enough to be added easily to existing virtual reality systems (see the sketch below).
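
The heart of the method, as the abstract describes it, is hierarchical empty-space skipping: a max pyramid of the displacement (height) map lets the ray take a large, safe step whenever it is provably above everything in the current cell, and refine otherwise. Below is a much-simplified CPU sketch of that traversal. The addressing, the approximate hit test at the finest level (no bilinear refinement), and all names are assumptions, not the paper's GPU implementation; the ray is assumed to have a nonzero horizontal component.

```python
import numpy as np

def build_max_pyramid(height):
    """Max pyramid of a square, power-of-two heightfield.  Level k stores, per
    cell, the maximum height of the 2^k x 2^k texels it covers."""
    levels = [np.asarray(height, dtype=float)]
    while levels[-1].shape[0] > 1:
        h = levels[-1]
        levels.append(h.reshape(h.shape[0] // 2, 2, h.shape[1] // 2, 2).max(axis=(1, 3)))
    return levels

def _dist_to_cell_exit(p, d, size):
    """Parametric distance until the ray leaves the axis-aligned cell of side
    `size` containing (p[0], p[1])."""
    ts = []
    for a in (0, 1):
        if d[a] > 0:
            ts.append(((np.floor(p[a] / size) + 1) * size - p[a]) / d[a])
        elif d[a] < 0:
            ts.append((np.floor(p[a] / size) * size - p[a]) / d[a])
    return min(ts) + 1e-4                      # nudge just past the boundary

def pyramid_trace(levels, origin, direction, max_steps=1000):
    """Top-down traversal: if the ray stays above the stored maximum across the
    current cell, skip the whole cell; otherwise descend one level.  At the
    finest level, failing the emptiness test counts as an (approximate) hit."""
    p, d = np.array(origin, float), np.array(direction, float)
    lvl, n = len(levels) - 1, levels[0].shape[0]
    for _ in range(max_steps):
        if not (0 <= p[0] < n and 0 <= p[1] < n):
            return None                        # left the heightfield: no hit
        size = 2 ** lvl
        t_exit = _dist_to_cell_exit(p, d, size)
        cell_max = levels[lvl][int(p[0]) // size, int(p[1]) // size]
        if min(p[2], p[2] + d[2] * t_exit) > cell_max:
            p = p + d * t_exit                 # whole cell provably empty: skip it
            lvl = len(levels) - 1              # restart skipping from the top
        elif lvl > 0:
            lvl -= 1                           # cannot prove emptiness: refine
        else:
            return p                           # approximate hit at the finest level
    return None

if __name__ == "__main__":
    hmap = np.random.rand(64, 64) * 10.0
    levels = build_max_pyramid(hmap)
    print("hit at:", pyramid_trace(levels, (1.0, 1.0, 20.0), (0.7, 0.5, -0.4)))
```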

Reconfigurable Architecture Design for H.264 Motion Estimation and 3D Graphics Rendering of Mobile Applications (이동통신 단말기를 위한 재구성 가능한 구조의 H.264 인코더의 움직임 추정기와 3차원 그래픽 렌더링 가속기 설계)

  • Park, Jung-Ae; Yoon, Mi-Sun; Shin, Hyun-Chul
    • Journal of KIISE: Computer Systems and Theory, v.34 no.1, pp.10-18, 2007
  • Mobile communication devices such as PDAs and cellular phones need to perform several computation-intensive functions, including H.264 encoding/decoding and 3D graphics processing. In this paper, a new reconfigurable architecture is described that can perform either motion estimation for H.264 or rendering for 3D graphics. The proposed motion estimation techniques use a new, efficient SAD computation ordering together with the DAU and FDVS algorithms. The new approach reduces computation by 70% on average compared with JM 8.2, without affecting quality. In 3D rendering, a midline traversal algorithm is used for parallel processing to increase throughput. Memories are partitioned into 8 blocks so that 2.4 Mbits (47%) of memory are shared and selective power shutdown is possible during motion estimation and 3D graphics rendering. Processing elements are also shared, further reducing the chip area by 7% (see the sketch below).
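
The SAD (sum of absolute differences) cost that dominates motion estimation is a simple accumulation, and the usual benefit of reordering its computation is that a candidate block can be rejected as soon as its partial sum exceeds the best cost seen so far. The sketch below shows that generic pruning idea in Python; the paper's specific SAD ordering and its DAU and FDVS algorithms are not reproduced, and all names are illustrative.

```python
import numpy as np

def sad(block, candidate, best_so_far=np.inf):
    """Sum of absolute differences, accumulated row by row with early
    termination: stop as soon as the partial sum exceeds the best SAD found
    so far.  This is the generic motivation for SAD computation ordering."""
    total = 0
    for r in range(block.shape[0]):
        total += int(np.abs(block[r].astype(int) - candidate[r].astype(int)).sum())
        if total >= best_so_far:
            return None                       # prune this candidate early
    return total

def full_search(cur_block, ref, top, left, search=8):
    """Exhaustive search over a +/-search window, using SAD pruning."""
    best, best_mv = np.inf, (0, 0)
    n = cur_block.shape[0]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue
            s = sad(cur_block, ref[y:y + n, x:x + n], best)
            if s is not None and s < best:
                best, best_mv = s, (dy, dx)
    return best_mv, best

if __name__ == "__main__":
    ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    cur = ref[20:36, 24:40]                   # a block that exists at offset (20, 24)
    print(full_search(cur, ref, 18, 26))      # should recover roughly (2, -2)
```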

Ambient Occlusion Volume Rendering using Multi-Range Statistics (다중 영역 통계량을 이용한 환경-광 가림 볼륨 가시화)

  • Nam, Jinhyun; Kye, Heewon
    • Journal of the Korea Computer Graphics Society, v.21 no.3, pp.27-35, 2015
  • This study presents a volume rendering method using ambient occlusion, one of the global illumination techniques. By modeling the volume density distribution as a normal distribution, ambient occlusion can be computed at real-time speed regardless of changes to the opacity transfer function. At preprocessing time, we compute and store the mean and standard deviation of the densities in a block centered at each voxel. During rendering, we determine the illumination value by estimating the nearby opacity. We generalize the theoretical model and generate better-quality images than our previous work: transfer functions of various shapes can be used thanks to the proposed model, and we introduce a multi-range model that gives nearer objects more weight. As a result, more realistic volume-rendered images can be generated at real-time speed by mixing local shading with ambient occlusion (see the sketch below).
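
A rough NumPy sketch of the statistics-based occlusion estimate described above: box means and standard deviations of the density are precomputed at several block radii, and at render time the probability that a neighbouring voxel is opaque is read off the normal CDF. The single opacity threshold, the radii, and the range weights are illustrative assumptions; the paper handles general transfer functions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.special import erf

def precompute_stats(volume, radii=(2, 4, 8)):
    """Per-voxel mean and standard deviation of density over cubic blocks of
    several radii (the 'multi-range statistics'), computed once in
    preprocessing.  std comes from E[x^2] - E[x]^2 under a box filter."""
    volume = volume.astype(np.float32)
    stats = []
    for r in radii:
        size = 2 * r + 1
        mean = uniform_filter(volume, size)
        mean_sq = uniform_filter(volume * volume, size)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        stats.append((mean, std))
    return stats

def ambient_light(stats, opacity_threshold, weights=(0.5, 0.3, 0.2)):
    """Treat the densities around each voxel as normally distributed: the
    chance that a neighbour exceeds the opacity threshold is 1 - CDF, and
    nearer ranges get larger weights.  The single threshold is a stand-in for
    the paper's general transfer-function handling."""
    occlusion = 0.0
    for (mean, std), w in zip(stats, weights):
        z = (opacity_threshold - mean) / np.maximum(std, 1e-6)
        p_opaque = 1.0 - 0.5 * (1.0 + erf(z / np.sqrt(2.0)))
        occlusion = occlusion + w * p_opaque
    return 1.0 - occlusion        # fraction of ambient light reaching each voxel

if __name__ == "__main__":
    vol = np.random.rand(64, 64, 64)
    print(ambient_light(precompute_stats(vol), opacity_threshold=0.7).mean())
```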

Acceleration of GPU-based Volume Rendering Using Vertex Splitting (정점분할을 이용한 GPU 기반 볼륨 렌더링의 가속 기법)

  • Yoo, Seong-Yeol; Lee, Eun-Seok; Shin, Byeong-Seok
    • Journal of Korea Game Society, v.12 no.2, pp.53-62, 2012
  • Among visualization methods, ray casting of a volume dataset provides high-quality images, but rendering takes a long time because volume datasets are huge. Recently, various methods have been proposed to accelerate GPU-based volume rendering and solve this problem. In this paper, we propose an efficient GPU-based empty-space-skipping technique that accelerates volume ray casting using octree traversal. The method creates a min-max octree and searches for empty space using vertex splitting. It minimizes the bounding polyhedron by eliminating the empty space found in the octree traversal step. The rendering results of our method are identical to those of previous GPU-based volume ray casting, with the advantage of faster run times thanks to the minimized bounding polyhedron (see the sketch below).
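
The empty-space test behind the acceleration is easy to state: store the minimum and maximum density of each brick, and skip any brick whose whole range maps to zero opacity under the current transfer function. The sketch below shows only that CPU-side classification with a flat min-max grid rather than a full octree; the vertex-splitting step that shrinks the bounding polyhedron on the GPU is not modelled, and all names are illustrative.

```python
import numpy as np

def min_max_bricks(volume, brick=8):
    """Per-brick minimum and maximum density (a flat, single-level stand-in
    for the paper's min-max octree).  Assumes the volume dimensions are
    multiples of the brick size."""
    z, y, x = (s // brick for s in volume.shape)
    v = volume[:z * brick, :y * brick, :x * brick]
    v = v.reshape(z, brick, y, brick, x, brick)
    return v.min(axis=(1, 3, 5)), v.max(axis=(1, 3, 5))

def empty_bricks(mins, maxs, opaque_low, opaque_high):
    """A brick can be skipped when its entire density range lies outside the
    interval [opaque_low, opaque_high] where the transfer function is opaque."""
    return (maxs < opaque_low) | (mins > opaque_high)

if __name__ == "__main__":
    vol = np.random.rand(64, 64, 64)
    mins, maxs = min_max_bricks(vol)
    print("skippable bricks:", int(empty_bricks(mins, maxs, 0.95, 1.0).sum()),
          "of", mins.size)
```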

3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Min-Su Yu; Tae-Won Jung; GyoungHyun Kim; Soonchul Kwon; Kye-Dong Jung
    • International Journal of Advanced Smart Convergence, v.12 no.4, pp.142-146, 2023
  • We present a method for generating 3D structures and rendering objects by combining a VAE (Variational Autoencoder) and a GAN (Generative Adversarial Network). The approach focuses on generating and rendering 3D models of improved quality by using residual learning in the encoder. We stack the encoder layers deeply to capture image features accurately and apply residual blocks to address the problems of deep layers, which improves encoder performance and mitigates the vanishing and exploding gradients that arise when constructing deep neural networks. The deep encoder with residual learning thus models the image with more detailed information. The generated model has more detailed voxels for a more accurate representation, is rendered with materials and lighting, and is finally converted into a mesh model. The resulting 3D models have excellent visual quality and accuracy, making them useful in fields such as virtual reality, game development, and the metaverse (see the sketch below).
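
The abstract's key architectural ingredient is residual learning in a deep encoder. Below is a minimal PyTorch residual block of that general kind; the channel counts, the use of 2D rather than 3D convolutions, and the class name are illustrative assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutions plus an identity (or 1x1-projected) shortcut, so the
    block learns a residual on top of its input.  The shortcut keeps gradients
    flowing through a deeply stacked encoder, which is the property the
    abstract relies on."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(out_ch), nn.BatchNorm2d(out_ch)
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return self.act(h + self.skip(x))     # residual connection

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    print(ResidualBlock(3, 32)(x).shape)      # torch.Size([1, 32, 64, 64])
```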