• Title/Summary/Keyword: Image based Rendering


Adaptive Rendering Image Processing Based on Mobile (모바일을 기반으로 하는 적응적인 렌더링 영상 처리)

  • Ju, Heon-Sig;Kim, Ha-Jin
    • The KIPS Transactions:PartA / v.10A no.5 / pp.425-432 / 2003
  • This paper presents an EMR (Electronic Medical Record) chart system for efficient use on PDAs through mobile-based quad-tree image rendering. By using an intermediate image space algorithm instead of the final image space for volume rendering, we solve the hole problems that arise from point-to-point mapping. The quad-tree built on the delta-tree represents the volume efficiently and yields higher compression. With this volume rendering approach, we reduce the rendering time and obtain higher quality and efficiency on the PDA through image-based rendering.
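
The abstract names a quad-tree representation as the source of the compression effect but does not describe the paper's delta-tree variant. The following is a minimal, hypothetical sketch of how a plain quadtree collapses uniform blocks of an intermediate image into single leaves; the uniformity tolerance and power-of-two block layout are assumptions, not the paper's scheme.

```python
# Plain quadtree over a grayscale intermediate image: uniform blocks become
# single leaves, which is where the compression effect comes from.
import numpy as np

def build_quadtree(img, x, y, size, tol=4):
    """Return ('leaf', mean) for a near-uniform block, else ('node', children)."""
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        return ('leaf', float(block.mean()))
    h = size // 2
    return ('node', (build_quadtree(img, x,     y,     h, tol),
                     build_quadtree(img, x + h, y,     h, tol),
                     build_quadtree(img, x,     y + h, h, tol),
                     build_quadtree(img, x + h, y + h, h, tol)))

def count_leaves(node):
    if node[0] == 'leaf':
        return 1
    return sum(count_leaves(c) for c in node[1])

if __name__ == '__main__':
    img = np.zeros((64, 64), dtype=np.uint8)
    img[16:32, 16:48] = 200                     # one bright rectangle
    tree = build_quadtree(img, 0, 0, 64)
    print(count_leaves(tree), 'leaves vs', 64 * 64, 'pixels')
```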

Accelerating Depth Image-Based Rendering Using GPU (GPU를 이용한 깊이 영상기반 렌더링의 가속)

  • Lee, Man-Hee;Park, In-Kyu
    • Journal of KIISE:Computer Systems and Theory / v.33 no.11 / pp.853-858 / 2006
  • In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphics objects using the graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, namely that it is slow because it is hardly assisted by graphics hardware and that surface lighting is static. Utilizing the new features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface shading in response to varying illumination is performed in the vertex shader, while adaptive point splatting is performed in the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
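
As a rough illustration of the forward warping and adaptive point splatting that the paper performs in GPU shaders, here is a CPU sketch. The pinhole camera with a pure horizontal baseline shift and the splat-size heuristic (nearer points get larger splats) are illustrative assumptions, not the paper's exact formulation.

```python
# CPU reference of DIBR forward warping with adaptive point splatting.
import numpy as np

def dibr_warp(color, depth, focal, baseline, splat_base=1.0):
    """Warp a color+depth image to a horizontally shifted virtual view."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            z = depth[y, x]
            if z <= 0:
                continue
            xs = int(round(x + focal * baseline / z))              # disparity shift
            half = max(0, int(round(splat_base * focal / z)) // 2) # nearer -> bigger splat
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    u, v = xs + dx, y + dy
                    if 0 <= u < w and 0 <= v < h and z < zbuf[v, u]:
                        zbuf[v, u] = z                             # keep the nearest surface
                        out[v, u] = color[y, x]
    return out
```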

Post-Rendering 3D Warping using Projective Texture (투영 텍스춰를 이용한 렌더링 후 3차원 와핑)

  • Park, Hui-Won;Ihm, In-Seong
    • Journal of KIISE:Computer Systems and Theory / v.29 no.8 / pp.431-439 / 2002
  • Despite recent advances in graphics hardware, real-time rendering of complex scenes is still a challenging task. Research on image-based rendering has produced rendering schemes based on post-rendering 3D warping, which in general produce good results. However, they are not appropriate for real-time rendering since their time-consuming algorithms are not easily accelerated by the graphics subsystem. To resolve this problem of post-rendering 3D warping, we present a new real-time scheme based on projective texturing. In our method, two reference images obtained by rendering complicated objects at two consecutive points of time are used. High-quality images for intermediate points of time are obtained by projecting the reference images onto a simplified object and then blending the resulting images. Our technique will be useful in developing real-time graphics applications such as 3D games and virtual-reality software.
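
A small sketch of the two ingredients named above: computing projective texture coordinates of the simplified proxy object with a reference camera's matrix, and blending the two projected reference images for an intermediate point of time. The matrix conventions (row-vector points, clip space in [-1, 1]) and the purely linear time weight are assumptions.

```python
import numpy as np

def projective_texcoord(points_h, ref_view_proj):
    """Map homogeneous 3D points (Nx4) to [0,1]^2 texture coords of a reference view."""
    clip = points_h @ ref_view_proj.T
    ndc = clip[:, :2] / clip[:, 3:4]        # perspective divide
    return 0.5 * ndc + 0.5

def blend_intermediate(img_t0, img_t1, t, t0, t1):
    """Linear blend of two projected reference images for time t in [t0, t1]."""
    w = (t - t0) / (t1 - t0)                # 0 at t0, 1 at t1
    return (1.0 - w) * img_t0 + w * img_t1
```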

Incremental Image-Based Motion Rendering Technique for Implementation of Realistic Computer Animation (사실적인 컴퓨터 애니메이션 구현을 위한 증분형 영상 기반 운동 렌더링 기법)

  • Han, Young-Mo
    • The KIPS Transactions:PartB / v.15B no.2 / pp.103-112 / 2008
  • Image-based motion capture technology is often used to produce realistic computer animation. In this paper, we implement image-based motion rendering with a camera attached to a PC. Existing image-based rendering algorithms suffer either from a high computational burden or from low accuracy; the former leads to excessively long animation production times, while the latter degrades the realism of the resulting animation. To compensate for these disadvantages, this paper presents an image-based motion rendering algorithm with low computational load and high estimation accuracy. In the proposed approach, an incremental motion rendering algorithm with low computational load is analyzed from the viewpoint of optimal control theory and revised so that its estimation accuracy is enhanced. When applied to optical motion capture systems, the proposed approach offers the additional advantages that motion capture can be performed without markers and at low cost in terms of equipment and space.

High-quality Shear-warp Volume Rendering Using Efficient Supersampling and Pre-integration Technique (효율적인 수퍼샘플링과 선-적분을 이용한 고화질 쉬어-왑 분해 볼륨 렌더링)

  • Kye, Hee-Won;Kim, Tae-Young
    • Journal of Korea Multimedia Society / v.9 no.8 / pp.971-981 / 2006
  • Although shear-warp volume rendering is the fastest rendering method among software-based approaches, its image quality is not as good as that of other high-quality rendering methods. In this paper, we propose two methods to improve the image quality of shear-warp volume rendering without sacrificing computational efficiency. First, supersampling is performed in the intermediate image space; we propose an efficient method to transform between volume and image coordinates at an arbitrary sampling ratio. Second, we apply the pre-integrated rendering technique to shear-warp rendering and propose a new data structure called the overlapped min-max map. Using this structure, empty-space leaping can be performed, so the rendering speed is maintained even though pre-integrated rendering is applied. Consequently, shear-warp rendering can generate high-quality images comparable to those generated by ray casting without degrading speed.
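
The abstract does not spell out the "overlapped min-max map", so the sketch below shows the ordinary idea it builds on: a per-block min/max table that lets the renderer skip blocks whose value range maps to zero opacity under the transfer function. Block size and the transfer-function lookup are assumptions.

```python
# Block min/max map and an empty-space test for skipping fully transparent blocks.
import numpy as np

def build_minmax_map(volume, block=8):
    zb, yb, xb = (s // block for s in volume.shape)
    mins = np.empty((zb, yb, xb)); maxs = np.empty((zb, yb, xb))
    for z in range(zb):
        for y in range(yb):
            for x in range(xb):
                b = volume[z*block:(z+1)*block,
                           y*block:(y+1)*block,
                           x*block:(x+1)*block]
                mins[z, y, x], maxs[z, y, x] = b.min(), b.max()
    return mins, maxs

def block_is_empty(vmin, vmax, opacity_tf):
    """True if no scalar value in [vmin, vmax] has non-zero opacity."""
    lo, hi = int(vmin), int(vmax)
    return opacity_tf[lo:hi + 1].max() == 0.0
```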


Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo;Seo, Kiyoung;Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.7 / pp.2434-2448 / 2014
  • This research proposes domain-division depth map quantization for multiview intermediate image generation using Depth Image-Based Rendering (DIBR). The technique quantizes depth per pixel according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment was conducted to investigate the potential benefits of the proposed method against linear depth quantization for DIBR multiview intermediate image generation. The experiment evaluated three quantization methods with computer-generated 3D scenes of various complexities and backgrounds while varying the depth resolution. The results show that the proposed domain-division depth quantization outperforms the linear method on 7-bit or lower depth maps, especially in scenes with large objects.
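
One possible reading of "assigning a percentage of the depth codes to domains of the depth range" is sketched below against plain linear quantization. The region-of-interest boundaries and the 70/30 code split are invented for illustration and are not the paper's parameters.

```python
import numpy as np

def quantize_linear(depth, bits):
    """Uniform quantization of a depth map onto 2**bits codes."""
    levels = 2 ** bits
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-9)
    return np.round(d * (levels - 1)).astype(np.int32)

def quantize_domain_division(depth, bits, roi_near, roi_far, roi_share=0.7):
    """Spend roi_share of the codes on [roi_near, roi_far], the rest outside."""
    levels = 2 ** bits
    roi_levels = max(1, int(levels * roi_share))
    rest_levels = max(1, levels - roi_levels)
    q = np.empty(depth.shape, dtype=np.int32)
    inside = (depth >= roi_near) & (depth <= roi_far)
    # fine steps inside the region of interest
    q[inside] = np.round((depth[inside] - roi_near) / (roi_far - roi_near + 1e-9)
                         * (roi_levels - 1)).astype(np.int32)
    # coarse steps outside, offset past the ROI codes
    out = depth[~inside]
    if out.size:
        o = (out - out.min()) / (np.ptp(out) + 1e-9)
        q[~inside] = (roi_levels + np.round(o * (rest_levels - 1))).astype(np.int32)
    return q
```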

Image-Based Relighting Rendering System (영상 기반 실시간 재조명 렌더링 시스템)

  • Kim, Soon-Hyun;Lee, Joo-Haeng;Kyung, Min-Ho
    • Journal of the HCI Society of Korea / v.2 no.1 / pp.25-31 / 2007
  • We develop an interactive relighting renderer that allows camera view changes, based on a deep-framebuffer approach. The renderer first caches the rendering parameters for a given 3D scene in an auxiliary buffer of the same size as the output image; the parameters that are independent of light changes are selected from the shading models used to shade the pixels. Then, as the user interactively edits one light at a time, the relighting renderer instantly re-shades each pixel by updating the contribution of the changed light using the shading parameters cached in the deep framebuffer. When the camera moves, the cached values become obsolete and must be recomputed. We present a novel method to synthesize them quickly from the cache images of user-specified cameras using an image-based technique. All of these computations are performed on the GPU to achieve real-time performance.
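
The core of the deep-framebuffer idea is that the per-pixel, light-independent shading inputs are cached once and only the edited light's contribution is re-evaluated. The sketch below uses a simple Lambertian point-light term as a stand-in for the paper's actual shading models; the cache layout and light description are assumptions.

```python
# Deep-framebuffer relighting sketch: cache position/normal/albedo per pixel,
# then re-shade by summing only the lights that are currently active.
import numpy as np

def relight(cache, lights):
    """cache: dict of HxWx3 arrays 'position', 'normal', 'albedo'.
    lights: list of {'pos': (3,), 'color': (3,)} point lights."""
    h, w, _ = cache['albedo'].shape
    img = np.zeros((h, w, 3))
    for light in lights:
        l = light['pos'] - cache['position']                       # pixel-to-light vectors
        l /= np.linalg.norm(l, axis=2, keepdims=True) + 1e-9
        ndotl = np.clip((cache['normal'] * l).sum(axis=2, keepdims=True), 0, None)
        img += cache['albedo'] * ndotl * light['color']            # Lambertian term
    return np.clip(img, 0, 1)
```

Editing one light then amounts to subtracting its old contribution and adding the new one, without touching the cached geometry and material terms.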


Image-Based Relighting Rendering System (영상 기반 실시간 재조명 렌더링 시스템)

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.38-43 / 2007
  • Relighting rendering is the process of efficiently computing how an image changes when a new light source is added to a scene or the attributes of an existing light source are modified. In this paper, the light-independent parameters of the shading computation are cached in advance as texture images to reduce the amount of computation during relighting. These cache images of shading parameters must be regenerated whenever the user changes the camera viewpoint, which takes considerable time. We present a method that obtains the cache images for a new viewpoint in real time using image-based rendering. First, cache images for several designated camera viewpoints are generated in advance. For the desired viewpoint, the 3D surface point projected onto each pixel is recovered through the inverse viewing transform, and this point is re-projected into the designated camera viewpoints to find the corresponding pixels in their cache images. The parameter values of the corresponding pixels are averaged and written to the new cache image. These steps are performed in real time using the fragment shader of the graphics hardware.
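
A CPU sketch of the per-pixel unproject/re-project step described above. The paper runs this in the fragment shader and averages over several designated cameras; this sketch resamples from a single reference camera, and the generic pinhole matrices (intrinsics `K`, 4x4 camera/world transforms) are assumptions.

```python
import numpy as np

def synthesize_cache(depth_new, K_new, cam_to_world_new,
                     K_ref, world_to_cam_ref, cache_ref):
    """Resample one reference cache image into the new viewpoint."""
    h, w = depth_new.shape
    out = np.zeros((h, w) + cache_ref.shape[2:])
    K_new_inv = np.linalg.inv(K_new)
    for v in range(h):
        for u in range(w):
            z = depth_new[v, u]
            if z <= 0:
                continue
            # inverse viewing transform: pixel -> camera ray -> world point
            p_cam = K_new_inv @ np.array([u, v, 1.0]) * z
            p_world = cam_to_world_new @ np.append(p_cam, 1.0)
            # re-project the surface point into the designated reference camera
            p_ref = world_to_cam_ref @ p_world
            if p_ref[2] <= 0:
                continue
            q = K_ref @ (p_ref[:3] / p_ref[2])
            ur, vr = int(round(q[0])), int(round(q[1]))
            if 0 <= ur < w and 0 <= vr < h:
                out[v, u] = cache_ref[vr, ur]   # with several references, average here
    return out
```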


Omnidirectional Camera-based Image Rendering Synchronization System Using Head Mounted Display (헤드마운티드 디스플레이를 활용한 전방위 카메라 기반 영상 렌더링 동기화 시스템)

  • Lee, Seungjoon;Kang, Suk-Ju
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.6 / pp.782-788 / 2018
  • This paper proposes a novel omnidirectional camera-based image rendering synchronization system using a head-mounted display. The proposed system has two main processes. The first is rendering remotely captured 360-degree images on the head-mounted display. This is based on the transmission control protocol/internet protocol (TCP/IP): sequential images are rapidly captured and transmitted to the server over TCP/IP as byte arrays, the server collects the byte-array data and reconstructs the images, and the observer views them on the head-mounted display. The second process is displaying the specific region selected by the user's head rotation: after extracting the user's head Euler angles from the head-mounted display's inertial measurement unit (IMU) sensor, the system displays the region corresponding to these angles. The experimental results show that, in a given network environment, rendering the original image at the same resolution causes a loss of frame rate, while rendering at the same frame rate results in a loss of resolution; therefore, optimal parameters must be selected with the environmental requirements in mind.
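
As a rough illustration of the second step, the sketch below selects the displayed region of an equirectangular 360-degree frame from the HMD's yaw and pitch angles. The field of view, the equirectangular layout, and the simple crop (no spherical reprojection) are assumptions, not the paper's implementation.

```python
import numpy as np

def crop_viewport(equirect, yaw_deg, pitch_deg, fov_h=90.0, fov_v=60.0):
    """Return the sub-image centred on the viewing direction given by yaw/pitch."""
    h, w = equirect.shape[:2]
    cx = int(((yaw_deg % 360.0) / 360.0) * w)        # yaw -> horizontal centre
    cy = int(((90.0 - pitch_deg) / 180.0) * h)       # pitch -> vertical centre
    half_w = int(w * fov_h / 360.0 / 2)
    half_h = int(h * fov_v / 180.0 / 2)
    cols = np.arange(cx - half_w, cx + half_w) % w   # wrap around horizontally
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)
    return equirect[np.ix_(rows, cols)]
```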

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.11 / pp.2175-2190 / 2011
  • This paper presents the effects of depth map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map, but it needs precise depth information to generate reliable and accurate intermediate views for multiview 3D display systems. Previous work has extensively studied pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the depth map quantization that still affords acceptable image quality for DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes in which multiview images captured directly from the scene are compared to multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results show that quantizing the depth map from 16 bits down to 7 bits (more specifically, 96 levels) has no significant effect on DIBR; hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
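
For concreteness, the independent variable of this experiment is the number of depth codes: a 16-bit depth map is requantized to a coarser level count (e.g. 7 bits = 128 levels, or the 96-level case mentioned above) before being fed to DIBR. The sketch below shows that requantization step and the resulting depth error; the random test data is purely illustrative.

```python
import numpy as np

def requantize_depth(depth16, levels):
    """Map a uint16 depth map onto `levels` evenly spaced codes and back."""
    d = depth16.astype(np.float64) / 65535.0
    codes = np.round(d * (levels - 1))
    return (codes / (levels - 1) * 65535.0).astype(np.uint16)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    depth = rng.integers(0, 65536, size=(480, 640), dtype=np.uint16)
    for levels in (65536, 256, 128, 96, 64):
        err = np.abs(requantize_depth(depth, levels).astype(int) - depth.astype(int))
        print(levels, 'levels -> mean abs depth error', err.mean())
```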