• Title/Summary/Keyword: Multiple viewpoint rendering


Parallel Processing for Integral Imaging Pickup Using Multiple Threads

  • Jang, Young-Hee;Park, Chan;Park, Jae-Hyeung;Kim, Nam;Yoo, Kwan-Hee
    • International Journal of Contents
    • /
    • v.5 no.4
    • /
    • pp.30-34
    • /
    • 2009
  • Many studies have addressed integral imaging pickup, whose objective is to efficiently obtain elemental images from a lens array for three-dimensional (3D) objects. In the integral imaging pickup process, an elemental image must be rendered for each elemental lens in the lens array, and the results must then be combined into one total image. Multiple viewpoint rendering (MVR) is one of several methods for integral imaging pickup; however, its computing and rendering time becomes a problem when elemental images must be obtained from a large number of elemental lenses. To solve this problem, this paper proposes a parallel MVR (PMVR) method that generates elemental images in parallel by distributing the elemental lenses across multiple threads. As a result, the computation time of integral imaging using PMVR is reduced significantly compared to a sequential approach, showing that PMVR is very useful.
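The abstract describes distributing elemental lenses across threads and stitching the per-lens renders into one total image. Below is a minimal sketch of that pickup loop; the function names and grid/image sizes are illustrative assumptions, not the paper's implementation, and in CPython the threads only yield a real speed-up if the per-lens render call releases the GIL (e.g., native or GPU rendering code).

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

LENS_ROWS, LENS_COLS = 30, 30      # elemental lenses in the array (assumed)
ELEM_W, ELEM_H = 64, 64            # pixels per elemental image (assumed)

def render_elemental_image(row, col):
    """Hypothetical per-lens renderer: a real one would place a camera at
    the lens position and rasterize the 3D scene; here, a placeholder."""
    return np.zeros((ELEM_H, ELEM_W, 3), dtype=np.uint8)

def pickup_parallel(num_threads=8):
    total = np.zeros((LENS_ROWS * ELEM_H, LENS_COLS * ELEM_W, 3), np.uint8)
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        # Each elemental lens gets its own rendering task.
        futures = {(r, c): pool.submit(render_elemental_image, r, c)
                   for r in range(LENS_ROWS) for c in range(LENS_COLS)}
    # Stitch every elemental image into its slot in the total image.
    for (r, c), fut in futures.items():
        total[r*ELEM_H:(r+1)*ELEM_H, c*ELEM_W:(c+1)*ELEM_W] = fut.result()
    return total
```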

Seamless Image Blending based on Multiple TIP models (다수 시점의 TIP 영상기반렌더링)

  • Roh, Chang-Hyun
    • Journal of Korea Game Society
    • /
    • v.3 no.2
    • /
    • pp.30-34
    • /
    • 2003
  • Image-based rendering is an approach to generating realistic images in real time without modeling explicit 3D geometry. In particular, TIP (Tour Into the Picture) is preferred for its simplicity in constructing a 3D background scene. However, TIP has a limitation: the viewpoint cannot move far from the origin of the TIP because of the lack of geometric information. In this paper, we propose a method of interpolating TIP images to generate smooth and realistic navigation. We construct multiple TIP models over a wide area of the virtual environment, and then interpolate foreground and background objects respectively to generate smooth navigation results.
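A minimal sketch of the blending step the abstract describes, assuming each TIP model can render the current viewpoint into separate background (RGB) and foreground (RGBA) layers; the model objects and their render methods are hypothetical stand-ins, not the authors' API.

```python
import numpy as np

def blend_tip_views(tip_a, tip_b, viewpoint):
    # Weight each TIP model by inverse distance from the viewpoint to the
    # model's origin, so the nearer model dominates the blend.
    da = np.linalg.norm(viewpoint - tip_a.origin)
    db = np.linalg.norm(viewpoint - tip_b.origin)
    wa = db / (da + db)
    wb = 1.0 - wa

    # Interpolate background and foreground layers separately.
    bg = wa * tip_a.render_background(viewpoint) \
       + wb * tip_b.render_background(viewpoint)     # HxWx3 floats in [0,1]
    fg = wa * tip_a.render_foreground(viewpoint) \
       + wb * tip_b.render_foreground(viewpoint)     # HxWx4 (RGBA)

    # Composite the blended foreground over the blended background.
    rgb, alpha = fg[..., :3], fg[..., 3:4]
    return rgb * alpha + bg * (1.0 - alpha)
```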


Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal
    • /
    • v.31 no.4
    • /
    • pp.429-437
    • /
    • 2009
  • This paper proposes a novel technique for 3D scene relighting with interactive viewpoint changes. The proposed technique is based on a deep framebuffer framework for fast relighting computation, which adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required for the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep framebuffer pixel, and then computes illumination at each pixel from the cached values taken from the deep framebuffers. All relighting computations except the deep framebuffer pre-computation are carried out at interactive rates on the GPU.
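To make the per-pixel relighting step concrete, here is a minimal numpy sketch that shades every pixel from cached deep-framebuffer values. A Blinn-Phong model with a single point light stands in for the paper's arbitrary surface shaders, and the buffer names are assumptions; the actual renderer performs this pass on the GPU.

```python
import numpy as np

def relight(gbuf, light_pos, light_color):
    """gbuf: dict of HxWxC arrays cached during preprocessing."""
    n = gbuf["normal"]                              # unit surface normals
    p = gbuf["position"]                            # world-space positions
    l = light_pos - p                               # pixel -> light
    l /= np.linalg.norm(l, axis=-1, keepdims=True)
    v = gbuf["view_dir"]                            # pixel -> camera, unit
    h = l + v                                       # Blinn half vector
    h /= np.linalg.norm(h, axis=-1, keepdims=True)

    ndotl = np.clip((n * l).sum(-1, keepdims=True), 0.0, 1.0)
    ndoth = np.clip((n * h).sum(-1, keepdims=True), 0.0, 1.0)

    ambient  = gbuf["ka"] * gbuf["color"]
    diffuse  = gbuf["kd"] * gbuf["color"] * ndotl
    specular = gbuf["ks"] * ndoth ** gbuf["shininess"]
    return ambient + (diffuse + specular) * light_color
```

Because everything on the right-hand side except `light_pos` and `light_color` is cached, changing the lighting setup only re-runs this cheap per-pixel pass, which is what makes interactive relighting possible.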

MPEG-I RVS Software Speed-up for Real-time Application (실시간 렌더링을 위한 MPEG-I RVS 가속화 기법)

  • Ahn, Heejune;Lee, Myeong-jin
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.655-664
    • /
    • 2020
  • Free viewpoint image synthesis technology is one of the important technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used by the MPEG group, is a DIBR (depth image-based rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple viewpoints' inputs. RVS uses a mesh-surface method based on computer graphics and outperforms previous pixel-based methods by 2.5 dB or more. Even though its OpenGL version provides a tenfold speed-up over the non-OpenGL one, it still falls short of real-time processing, running at 0.75 fps on two 2K-resolution input images. In this paper, we analyze the internals of the RVS implementation and modify its structure, achieving a 34-fold speed-up and thus real-time performance (22-26 fps) through three key improvements: 1) reuse of OpenGL buffers and texture objects; 2) parallelization of file I/O and OpenGL execution; 3) parallelization of the GPU shader program and buffer transfers.
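Improvement 2) is classic pipelining: loading the next frame's images from disk overlaps with rendering the current one, so the GPU never idles waiting on I/O. A minimal sketch of that pattern, where load_frame and render_view are hypothetical stand-ins for RVS's actual loading and OpenGL synthesis steps:

```python
from concurrent.futures import ThreadPoolExecutor

def synthesize_sequence(frame_paths, load_frame, render_view):
    with ThreadPoolExecutor(max_workers=1) as io_pool:
        pending = io_pool.submit(load_frame, frame_paths[0])  # prefetch
        for next_path in list(frame_paths[1:]) + [None]:
            frame = pending.result()          # wait for the prefetched frame
            if next_path is not None:         # kick off the next file read
                pending = io_pool.submit(load_frame, next_path)
            render_view(frame)                # OpenGL work overlaps next I/O
```

Improvement 1) follows the same philosophy on the GPU side: allocate OpenGL buffers and texture objects once and update their contents each frame, rather than re-creating them per frame.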

Multiple TIP Images Blending for Wide Virtual Environment (넓은 가상환경 구축을 위한 다수의 TIP (Tour into the Picture) 영상 합성)

  • Roh, Chang-Hyun;Lee, Wan-Bok;Ryu, Dae-Hyun;Kang, Jung-Jin
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.42 no.1
    • /
    • pp.61-68
    • /
    • 2005
  • Image-based rendering is an approach to generating realistic images in real time without modeling explicit 3D geometry. In particular, owing to its simplicity, TIP (Tour Into the Picture) is preferred for constructing a 3D background scene. Because existing TIP methods lack geometric information, an accurate scene cannot be expected if the viewpoint is far from the origin of the TIP. In this paper, we propose a method of constructing a wide-area virtual environment by blending multiple TIP images. First, we construct multiple TIP models of the virtual environment; then we interpolate foreground and background objects respectively to generate smooth navigation images. The proposed method can be applied to various industrial applications, such as computer games and 3D car navigation.
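For a wide area covered by many TIP models, a natural first step (under the same assumptions as the pairwise blend sketched earlier, with each hypothetical model object exposing an `origin` attribute) is to pick the models nearest the current viewpoint and blend only those:

```python
import numpy as np

def nearest_tip_models(models, viewpoint, k=2):
    # Sort TIP models by distance from the viewpoint to each model's
    # origin and keep the k nearest ones for blending.
    dists = [np.linalg.norm(viewpoint - m.origin) for m in models]
    order = np.argsort(dists)
    return [models[i] for i in order[:k]]
```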

Scene Generation of CNC Tools Utilizing Instant NGP and Rendering Performance Evaluation (Instant NGP를 활용한 CNC Tool의 장면 생성 및 렌더링 성능 평가)

  • Taeyeong Jung;Youngjun Yoo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.2
    • /
    • pp.83-90
    • /
    • 2024
  • CNC tools contribute to the production of high-precision and consistent results. However, employing damaged CNC tools or compromised numerical control can lead to significant issues, including equipment damage, overheating, and system-wide errors. Typically, the assessment of external damage to CNC tools involves capturing a single viewpoint through a camera to evaluate tool wear. This study aims to enhance existing methods by using only a single manually focused microscope camera to enable comprehensive external analysis from multiple perspectives. Applying the NeRF (Neural Radiance Fields) algorithm to images captured with this camera, we construct a 3D rendering system. Through this system, it is possible to generate scenes of areas that cannot be captured even with a fixed camera setup, thereby assisting in the analysis of exterior features. However, the NeRF model requires considerable training time, ranging from several hours to over two days. To overcome these limitations of NeRF, various subsequent models have been developed. Therefore, this study compares and applies Instant NGP, Mip-NeRF, and DS-NeRF, which have garnered attention following NeRF.
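For context, the computation that NeRF and the compared successors (Instant NGP, Mip-NeRF, DS-NeRF) all share is volume rendering along each camera ray: sampled densities and colors are alpha-composited into one pixel color. A minimal numpy sketch of that compositing step (the sampling and network parts are omitted):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """densities: (N,) sigma per sample; colors: (N, 3); deltas: (N,) spacing."""
    alpha = 1.0 - np.exp(-densities * deltas)        # opacity of each sample
    # Transmittance: how much light survives past the preceding samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                          # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)   # final RGB for the ray
```

The successor models differ mainly in how the densities and colors are produced (multiresolution hash encoding in Instant NGP, cone tracing in Mip-NeRF, depth supervision in DS-NeRF), which is what drives the training-time differences the study evaluates.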

A Method of Patch Merging for Atlas Construction in 3DoF+ Video Coding

  • Im, Sung-Gyune;Kim, Hyun-Ho;Lee, Gwangsoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.259-260
    • /
    • 2019
  • The MPEG-I Visual group is actively working on enhancing immersive experiences with up to six degrees of freedom (6DoF). In the virtual space of 3DoF+, which is defined as an extension of 360 video with limited changes of the view position in a sitting position, looking at the scene from another viewpoint (another position in space) requires rendering additional viewpoints using multiple videos taken at different locations at the same time. In the MPEG-I Visual workgroup, methods for efficient coding and transmission of 3DoF+ video are being studied, and the group recently released the Test Model for Immersive Video (TMIV). This paper presents an enhanced clustering method that packs patches into the atlas efficiently in TMIV. The experimental results show that the proposed method achieves significant BD-rate reduction in terms of various end-to-end evaluation methods.
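The atlas-construction problem the abstract addresses is packing many irregular patches into fixed-size atlas frames. The paper's contribution is a clustering-based merging step; purely for illustration of the underlying packing problem, here is a minimal shelf (row-by-row) packer, with all names assumed:

```python
def shelf_pack(patches, atlas_w, atlas_h):
    """patches: list of (width, height); returns (index, x, y) placements."""
    placements, x, y, shelf_h = [], 0, 0, 0
    # Packing taller patches first tends to waste less shelf space.
    for i, (w, h) in sorted(enumerate(patches), key=lambda p: -p[1][1]):
        if x + w > atlas_w:               # current shelf full: open a new one
            x, y, shelf_h = 0, y + shelf_h, 0
        if y + h > atlas_h:
            raise ValueError("atlas overflow: start a new atlas")
        placements.append((i, x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements
```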


A Patch Packing Method Using Guardband for Efficient 3DoF+ Video Coding (3DoF+ 비디오의 효율적인 부호화를 위한 보호대역을 사용한 패치 패킹 기법)

  • Kim, Hyun-Ho;Kim, Yong-Ju;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.2
    • /
    • pp.185-191
    • /
    • 2020
  • MPEG-I is actively working on standardizing immersive video coding, which provides up to six degrees of freedom (6DoF) in terms of viewpoint. In the virtual space of 3DoF+, which is defined as an extension of 360 video with motion parallax, looking at the scene from another viewpoint (another position in space) requires rendering an additional viewpoint using the multiple videos included in the 3DoF+ video. In the MPEG-I Visual workgroup, efficient coding methods for 3DoF+ video are being studied, and the group recently released the Test Model for Immersive Video (TMIV). This paper presents a patch packing method that packs patches into atlases efficiently to improve the coding efficiency of 3DoF+ video in TMIV. The proposed method improves the reconstructed view quality and reduces coding artifacts by introducing guardbands between patches in the atlas.
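The guardband idea can be sketched on top of the shelf packer shown earlier: each patch is placed with a margin of unused pixels on every side, so block-based coding artifacts at patch borders do not bleed into neighboring patches during view reconstruction. The guardband width here is illustrative, not the paper's value:

```python
GUARD = 4  # guardband width in pixels on each side of a patch (assumed)

def pack_with_guardband(patches, atlas_w, atlas_h):
    # Pack the patches as if each were enlarged by the guardband...
    padded = [(w + 2 * GUARD, h + 2 * GUARD) for (w, h) in patches]
    placements = shelf_pack(padded, atlas_w, atlas_h)
    # ...then shift each placement so (x, y) points at the patch itself,
    # leaving the guardband pixels around it unoccupied.
    return [(i, x + GUARD, y + GUARD) for (i, x, y) in placements]
```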