• Title/Summary/Keyword: virtual view rendering

Search results: 53

VIRTUAL VIEW RENDERING USING MULTIPLE STEREO IMAGES

  • Ham, Bum-Sub;Min, Dong-Bo;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.233-237 / 2009
  • This paper presents a new approach that addresses the quality degradation of a synthesized view when a virtual camera moves forward. Generally, a virtual view is synthesized by interpolating between the two neighboring views. Because the size of an object increases as the virtual camera moves forward, most methods rely on interpolation to synthesize the virtual view; however, this produces a degraded, blurred image. We prevent the synthesized view from being blurred by using more cameras in a multiview camera configuration. That is, we apply the super-resolution concept, which reconstructs a high-resolution image from several low-resolution images: data fusion is performed by geometrically warping the multiple images according to their disparities, followed by a deblurring operation. Experimental results show that image quality is further improved by reducing blur, in comparison with the interpolation method.
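The warp-fuse-deblur idea described in this abstract can be sketched in a few lines. This is a toy 1-D illustration, not the paper's method: the scanline values, disparities, and unsharp-masking "deblur" step are all invented for demonstration.

```python
# Sketch of the abstract's fusion idea: several views are geometrically
# aligned (warped) by their disparities, fused by averaging, then sharpened.
# All data and the unsharp-mask deblur are toy assumptions.

def warp_by_disparity(view, disparity):
    """Shift a 1-D scanline horizontally by an integer disparity."""
    w = len(view)
    out = [None] * w
    for x, v in enumerate(view):
        tx = x + disparity
        if 0 <= tx < w:
            out[tx] = v
    return out

def fuse(views, disparities):
    """Average all warped samples that land on each target pixel."""
    warped = [warp_by_disparity(v, d) for v, d in zip(views, disparities)]
    fused = []
    for samples in zip(*warped):
        valid = [s for s in samples if s is not None]
        fused.append(sum(valid) / len(valid) if valid else 0.0)
    return fused

def deblur(signal, amount=0.5):
    """Simple unsharp masking as a stand-in for the deblur step."""
    out = list(signal)
    for x in range(1, len(signal) - 1):
        laplacian = 2 * signal[x] - signal[x - 1] - signal[x + 1]
        out[x] = signal[x] + amount * laplacian
    return out

scanline = [10, 10, 50, 50, 10, 10]      # toy reference scanline
views = [scanline, scanline, scanline]   # identical toy "cameras"
disparities = [0, 1, -1]                 # per-view integer disparities
result = deblur(fuse(views, disparities))
```

The fusion step is where the super-resolution concept enters: each target pixel averages samples contributed by several warped views rather than interpolating between two.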


RAY-SPACE INTERPOLATION BY WARPING DISPARITY MAPS

  • Mori, Yuji;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.583-587 / 2009
  • In this paper, we propose a new method of depth-image-based rendering (DIBR) for free-viewpoint TV (FTV). In the proposed method, virtual viewpoint images are rendered with 3D warping instead of estimating view-dependent depth, since depth estimation is usually costly and it is desirable to eliminate it from the rendering process. However, 3D warping causes problems that do not occur with view-dependent depth estimation, such as the appearance of holes in the rendered image and depth discontinuities on object surfaces at the virtual image plane, which produce artifacts in the rendered image. In this paper, these problems are solved by reconstructing the disparity information at the virtual camera position from the two neighboring real cameras. In the experiments, high-quality arbitrary-viewpoint images were obtained.
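The hole problem this abstract mentions is easy to demonstrate on a toy 1-D disparity map: forward-warping leaves target pixels that no source pixel lands on. The sketch below (invented numbers; the z-buffer rule and background-side filling are common DIBR conventions, not necessarily the paper's exact scheme) shows holes appearing and a simple disparity reconstruction at the virtual position.

```python
# Toy sketch: a real camera's disparity map is forward-warped to a virtual
# view a fraction `alpha` along the baseline; unmapped pixels become holes.
# Data and the background-side hole fill are illustrative assumptions.

def warp_disparity(disp, alpha):
    """Forward-warp a 1-D disparity map to a virtual view at offset alpha."""
    w = len(disp)
    virtual = [None] * w            # None marks a hole
    for x, d in enumerate(disp):
        tx = x + round(alpha * d)   # position of this pixel in the virtual view
        if 0 <= tx < w:
            # z-buffer rule: keep the larger disparity (closer surface)
            if virtual[tx] is None or d > virtual[tx]:
                virtual[tx] = d
    return virtual

def fill_holes(virtual):
    """Fill holes with the smaller neighboring disparity (background side)."""
    filled = list(virtual)
    for x, d in enumerate(filled):
        if d is None:
            left = filled[x - 1] if x > 0 else None
            right = virtual[x + 1] if x + 1 < len(virtual) else None
            neighbours = [v for v in (left, right) if v is not None]
            filled[x] = min(neighbours) if neighbours else 0
    return filled

disp = [2, 2, 6, 6, 2, 2]   # foreground object (disparity 6) on background (2)
virtual = warp_disparity(disp, alpha=0.5)
filled = fill_holes(virtual)
```

Running this, `virtual` contains `None` entries where the foreground pulled away from the background, and `filled` extends the background into them; filling with the smaller disparity avoids smearing the foreground into disoccluded regions.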


A GPU based Rendering Method for Multiple-view Autostereoscopic Display (무안경식 다시점 입체 디스플레이를 위한 GPU기반 렌더링 기법)

  • Ahn, Jong-Gil;Kim, Jin-Wook
    • Journal of the HCI Society of Korea / v.4 no.2 / pp.9-16 / 2009
  • 3D stereo display systems have gained increasing interest recently. A multiple-view autostereoscopic display system enables observers to watch stereo images from multiple viewpoints without wearing special devices such as shutter glasses or an HMD. Therefore, multiple-view autostereoscopic displays are being spotlighted in fields such as virtual reality, mobile devices, and 3D TV. However, a critical disadvantage of these systems is that the observer can enjoy them only within the small designated area for which the system is designed to work properly. This research provides an effective GPU-based rendering technique that presents a seamless 3D stereo experience from an arbitrary observer position.


Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.29-35 / 2012
  • Depth-image-based rendering (DIBR) is a process for rendering virtual views from a color image and the corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called disocclusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which utilizes infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and median-filtered to reduce truncation errors. Then, the color image is back-projected to the virtual viewpoint using the warped depth map. In order to fill the remaining holes caused by disocclusion, we perform a background-based image in-painting operation. Finally, we obtain a synthesized image without any disocclusion. Experimental results show that the proposed algorithm generates very natural images in real time.
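One concrete step of the pipeline in this abstract, the median filtering of the warped depth map, can be sketched directly: integer rounding during warping leaves one-pixel cracks in the warped depth, which a small sliding median removes. The window size and depth values below are toy choices, not the paper's parameters.

```python
# Minimal sketch of median-filtering a warped depth scanline to remove the
# one-pixel cracks left by integer rounding ("truncation error") during
# warping. Radius and data are illustrative assumptions.

from statistics import median

def median_filter(depth, radius=1):
    """Sliding-window median over a 1-D depth scanline."""
    out = []
    for x in range(len(depth)):
        lo = max(0, x - radius)
        hi = min(len(depth), x + radius + 1)
        out.append(median(depth[lo:hi]))
    return out

# A warped depth row where rounding left spurious zero-valued cracks:
warped_depth = [40, 40, 0, 40, 40, 0, 40, 40]
clean_depth = median_filter(warped_depth)
```

After filtering, the isolated zeros are replaced by the surrounding depth, so the subsequent back-projection of color pixels no longer samples through the cracks.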


Hole-Filling Method for Depth-Image-Based Rendering for which Modified-Patch Matching is Used (개선된 패치 매칭을 이용한 깊이 영상 기반 렌더링의 홀 채움 방법)

  • Cho, Jea-Hyung;Song, Wonseok;Choi, Hyuk
    • Journal of KIISE / v.44 no.2 / pp.186-194 / 2017
  • Depth-image-based rendering is a technique that can be applied in a variety of 3D display systems. It generates images as if captured from virtual viewpoints by using a depth map. However, filling the disoccluded holes remains a challenging issue, as newly exposed areas appear in the virtual view. Image inpainting is a popular approach for filling the hole region. This paper presents a robust hole-filling method that reduces errors and generates a high-quality virtual view. First, an adaptive patch size is decided using the color and depth information. In addition, a partial filling method based on patch similarity is proposed. These measures reduce error occurrence and propagation. The experimental results show that the proposed method synthesizes the virtual view with higher visual comfort than existing methods.
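The patch-similarity ingredient this abstract mentions can be illustrated with a toy scoring function: candidate source patches are ranked over the known target pixels only (a "partial" comparison), combining color and depth differences so that background patches are preferred for disoccluded holes. The weighting and patches below are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch of exemplar-based patch matching for hole filling:
# rank candidate patches by color+depth SSD over known pixels only.
# The 0.5 depth weight and toy patches are invented for illustration.

def patch_cost(target, candidate, depth_t, depth_c, depth_weight=0.5):
    """Mean squared difference over known target pixels (None = hole)."""
    cost, known = 0.0, 0
    for t, c, dt, dc in zip(target, candidate, depth_t, depth_c):
        if t is None:
            continue                  # hole pixel: skipped, filled later
        cost += (t - c) ** 2 + depth_weight * (dt - dc) ** 2
        known += 1
    return cost / known if known else float("inf")

target  = [100, 110, None, None]      # patch straddling a hole
depth_t = [30, 30, 30, 30]            # hole lies at background depth
candidates = [([100, 112, 115, 118], [30, 30, 30, 30]),   # background patch
              ([200, 205, 210, 215], [80, 80, 80, 80])]   # foreground patch
best = min(candidates,
           key=lambda c: patch_cost(target, c[0], depth_t, c[1]))
```

Because the depth term penalizes the foreground candidate, the background patch wins, which matches the usual DIBR intuition that disocclusions should be filled from the background.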

Multimodal Interaction on Automultiscopic Content with Mobile Surface Haptics

  • Kim, Jin Ryong;Shin, Seunghyup;Choi, Seungho;Yoo, Yeonwoo
    • ETRI Journal / v.38 no.6 / pp.1085-1094 / 2016
  • In this work, we present interactive automultiscopic content with mobile surface haptics for multimodal interaction. Our system consists of a 40-view automultiscopic display and a tablet supporting surface haptics in an immersive room. Animated graphics are projected onto the walls of the room, and the 40-view automultiscopic display is placed at the center of the front wall. The haptic tablet is installed at a mobile station so that the user can interact with it. Real-time 40-view rendering and multiplexing are performed by establishing virtual cameras in a convergence layout. Surface-haptics rendering is synchronized with the three-dimensional (3D) objects on the display for real-time haptic interaction. We conducted an experiment to evaluate user experiences with the proposed system. The results demonstrate that the system's multimodal interaction provides positive user experiences of immersion, control, user-interface intuitiveness, and 3D effects.

Synthesis of Multi-View Images Based on a Convergence Camera Model

  • Choi, Hyun-Jun
    • Journal of Information and Communication Convergence Engineering / v.9 no.2 / pp.197-200 / 2011
  • In this paper, we propose a multi-view stereoscopic image synthesis algorithm for 3DTV systems that uses depth information together with an RGB texture from a depth camera. The proposed algorithm synthesizes the multi-view images that a virtual convergence camera model could generate. Experimental results show that the proposed algorithm performs better than conventional methods.

Enhancement Method of Depth Accuracy in DIBR-Based Multiview Image Generation (다시점 영상 생성을 위한 DIBR 기반의 깊이 정확도 향상 방법)

  • Kim, Minyoung;Cho, Yongjoo;Park, Kyoung Shin
    • KIPS Transactions on Computer and Communication Systems / v.5 no.9 / pp.237-246 / 2016
  • DIBR (depth-image-based rendering) is a multimedia technique that generates virtual multi-view images from a color image and a depth image, and it is used for creating glasses-free 3D display content. This research examines the effect of depth accuracy on the objective quality of DIBR-based multi-view images. It first evaluates the minimum number of depth quantization bits at which the distortion is small enough that viewers cannot perceive the quality degradation. It then presents a comparative analysis of non-uniform, domain-division quantization versus regular linear quantization, to determine how effectively the accuracy of the depth information can be expressed at the same quantization levels according to scene properties.
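The uniform-versus-non-uniform comparison in this abstract can be made concrete with a toy experiment. As a stand-in for domain-division quantization (the paper's actual scheme may differ), the sketch below uses an inverse-depth (1/z) mapping, which spends more quantization levels near the camera; the depth range and bit count are arbitrary.

```python
# Toy comparison: linear depth quantization vs. a non-uniform (inverse-depth)
# scheme at the same bit budget. The 1/z mapping is an illustrative assumption,
# not necessarily the paper's domain-division quantizer.

Z_NEAR, Z_FAR = 1.0, 100.0   # toy depth range

def quantize_linear(z, bits):
    """Quantize z uniformly over [Z_NEAR, Z_FAR] and reconstruct it."""
    levels = (1 << bits) - 1
    q = round((z - Z_NEAR) / (Z_FAR - Z_NEAR) * levels)
    return Z_NEAR + q / levels * (Z_FAR - Z_NEAR)

def quantize_inverse(z, bits):
    """Quantize 1/z uniformly, giving finer steps near the camera."""
    levels = (1 << bits) - 1
    lo, hi = 1.0 / Z_FAR, 1.0 / Z_NEAR
    q = round((1.0 / z - lo) / (hi - lo) * levels)
    return 1.0 / (lo + q / levels * (hi - lo))

z = 2.0                                    # a depth close to the camera
err_linear = abs(z - quantize_linear(z, 8))
err_inverse = abs(z - quantize_inverse(z, 8))
```

At 8 bits, the inverse-depth reconstruction error at this near depth is far smaller than the linear one, which is why non-uniform allocation can matter for DIBR, where near-depth errors translate into large warping errors.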

A New Copyright Protection Scheme for Depth Map in 3D Video

  • Li, Zhaotian;Zhu, Yuesheng;Luo, Guibo;Guo, Biao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3558-3577 / 2017
  • In the 2D-to-3D video conversion process, virtual left and right views can be generated from a 2D video and its corresponding depth map by depth-image-based rendering (DIBR). The depth map plays an important role in the conversion system, so copyright protection for the depth map is necessary. However, the provided virtual views may be distributed illegally, while the depth map itself is not directly exposed to viewers; in previous works, copyright information embedded into the depth map could not be extracted from the virtual views after the DIBR process. In this paper, a new copyright protection scheme for the depth map is proposed, in which the copyright information can be detected from the virtual views even without the depth map. Experimental results show that the proposed method is robust against JPEG attacks, filtering, and noise.

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering is meant to generate synthetic images by processing the camera view with a graphics engine, little has been known about how to supply the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space from camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views in real time, together with their depth images.
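The geometric core of such view synthesis, independent of OpenGL, is back-projecting each pixel through a pinhole camera using its depth and re-projecting the resulting 3D point into the virtual camera. The sketch below uses invented toy intrinsics and baseline; a real system would use the calibrated camera parameters the abstract refers to.

```python
# Sketch of per-pixel view synthesis geometry: back-project a pixel with
# known depth into 3-D, then project it into a virtual camera translated
# along the x-axis. Intrinsics (FX, FY, CX, CY) and the baseline are toy
# assumptions, not values from the paper.

FX = FY = 500.0            # toy focal lengths in pixels
CX, CY = 320.0, 240.0      # toy principal point

def back_project(u, v, z):
    """Pixel (u, v) with depth z -> 3-D point in camera coordinates."""
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)

def project(point, baseline_x=0.0):
    """Project a 3-D point into a camera shifted by baseline_x along x."""
    x, y, z = point
    x -= baseline_x
    return (FX * x / z + CX, FY * y / z + CY)

p3d = back_project(400.0, 240.0, 2.0)              # reconstruct the 3-D point
u_virtual, v_virtual = project(p3d, baseline_x=0.1)  # re-project into virtual view
```

With a zero baseline the round trip returns the original pixel; with a nonzero baseline the pixel shifts inversely with depth, which is exactly the disparity that a graphics engine reproduces when the reconstructed scene is rendered from the virtual camera.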