• Title/Summary/Keyword: intermediate view synthesis

View synthesis in uncalibrated images (임의 카메라 구조에서의 영상 합성)

  • Kang, Ji-Hyun; Kim, Dong-Hyun; Sohn, Kwang-Hoon
    • Proceedings of the IEEK Conference / 2006.06a / pp.437-438 / 2006
  • Virtual view synthesis, which exploits the motion parallax cue, is essential for 3DTV systems. In this paper, we propose a multi-step view synthesis algorithm to efficiently reconstruct an arbitrary view from a limited number of known views of a 3D scene. We describe an efficient image rectification procedure which guarantees that the interpolation process produces valid views. This rectification method can deal with all possible camera motions; the idea is to use a polar parameterization of the image around the epipole. Then, to generate intermediate views, we use an efficient dense disparity estimation algorithm that considers the features of stereo image pairs. The main concept of the algorithm is region-dividing bidirectional pixel matching. The estimated disparities are used to synthesize the intermediate view of the stereo images. Computer simulations show the results of the proposed algorithm.
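The interpolation stage outlined in this abstract can be illustrated with a minimal sketch, assuming rectified views and a precomputed dense disparity map: an intermediate view at position alpha between the two cameras is obtained by shifting each pixel by a fraction of its disparity. The function name and conventions below are illustrative, not taken from the paper.

```python
import numpy as np

def interpolate_view(left, disparity, alpha):
    """Forward-warp a rectified left image to a virtual viewpoint.

    left      : HxWx3 uint8 image (rectified)
    disparity : HxW float array, left-to-right disparity in pixels
    alpha     : position of the virtual camera in [0, 1]
                (0 = left view, 1 = right view)
    """
    h, w = disparity.shape
    virtual = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # shift each pixel by a fraction of its disparity
            xv = int(round(x - alpha * disparity[y, x]))
            if 0 <= xv < w:
                virtual[y, xv] = left[y, x]
                filled[y, xv] = True
    return virtual, filled  # unfilled pixels are disocclusions
```

Pixels that remain unfilled are disocclusions; in practice they are taken from the other view or in-painted.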

A Quadtree-based Disparity Estimation for 3D Intermediate View Synthesis (3차원 중간영상의 합성을 위한 쿼드트리기반 변이추정 방법)

  • 성준호; 이성주; 김성식; 하태현; 김재석
    • Journal of Broadcast Engineering / v.9 no.3 / pp.257-273 / 2004
  • In stereoscopic or multi-view three-dimensional display systems, the synthesis of intermediate sequences is needed to ensure look-around capability and continuous motion parallax, which enhance comfortable 3D perception. Quadtree-based disparity estimation is one of the most notable methods for synthesizing intermediate sequences because of the simplicity of its algorithm and hardware implementation. In this paper, we propose two ideas to reduce the annoying flicker at object boundaries in intermediate sequences synthesized by quadtree-based disparity estimation. First, a new split scheme provides more consistent quadtree splitting during disparity estimation. Second, adaptive temporal smoothing using the correlation between the present and previous frames relieves disparity estimation errors. The two proposed ideas are tested on several stereoscopic sequences, and the annoying flickering is remarkably reduced.
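A hedged sketch of the quadtree idea, assuming a plain block-matching cost: a block keeps a single disparity when it is matched well enough, and is otherwise split into four children and processed recursively. The threshold, minimum block size, and cost function are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def block_cost(left, right, y, x, size, d):
    """Mean absolute difference for one block at disparity d."""
    patch_l = left[y:y + size, x:x + size].astype(np.float32)
    patch_r = right[y:y + size, x - d:x - d + size].astype(np.float32)
    return np.abs(patch_l - patch_r).mean()

def quadtree_disparity(left, right, y, x, size, d_max, tau=8.0, min_size=4):
    """Recursively split a block until it is matched well enough.

    Assumes the block (y, x, size) lies fully inside both images, e.g. the
    image dimensions are multiples of the initial block size.
    """
    costs = [block_cost(left, right, y, x, size, d)
             for d in range(min(d_max, x) + 1)]
    best_d = int(np.argmin(costs))
    if costs[best_d] <= tau or size <= min_size:
        return [(y, x, size, best_d)]          # leaf: one disparity per block
    half = size // 2                            # split into four children
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_disparity(left, right, y + dy, x + dx,
                                         half, d_max, tau, min_size)
    return leaves
```

The paper's second idea, adaptive temporal smoothing, would then blend each leaf's disparity with the co-located disparity of the previous frame according to how strongly the two frames are correlated.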

Fast View Synthesis Using GPGPU (GPGPU를 이용한 고속 영상 합성 기법)

  • Shin, Hong-Chang; Park, Han-Hoon; Park, Jong-Il
    • Journal of Broadcast Engineering / v.13 no.6 / pp.859-874 / 2008
  • In this paper, we develop a fast view synthesis method that generates multiple intermediate views in real time for a 3D display system when the camera geometry and depth maps of the reference views are given in advance. The proposed method achieves faster view synthesis than previous GPU approaches by performing the entire computation required for view synthesis in parallel. Specifically, we use CUDA (by NVIDIA) to control the GPU device. To increase the processing speed, we adapted all view synthesis processes to the single-instruction multiple-data (SIMD) structure that is a main feature of CUDA, maximized the use of the high-speed memories on the GPU device, and optimized the implementation. As a result, we could synthesize 9 intermediate view images of 720 by 480 pixels within 0.128 seconds.
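The computation is fast on a GPU because every output pixel can be produced independently. The sketch below expresses that per-pixel rule with vectorized NumPy array operations rather than CUDA, and simplifies the geometry to a purely horizontal camera shift with the reference-view disparity reused for the lookup; it illustrates the data-parallel structure, not the paper's implementation, and all parameter names are assumptions.

```python
import numpy as np

def warp_to_virtual_view(color, depth, focal, baseline):
    """Render one horizontally shifted virtual view from a reference camera.

    Each output pixel depends only on its own coordinates, so the same rule
    maps directly onto the SIMD structure of a GPU; here it is written with
    whole-array NumPy operations instead of a CUDA kernel.

    color    : HxWx3 uint8 reference image
    depth    : HxW depth map of the reference view
    focal    : focal length in pixels (assumed)
    baseline : distance between reference and virtual cameras (assumed)
    """
    h, w = depth.shape
    # disparity in pixels; reusing the reference-view disparity for the
    # lookup is a simplification of full forward/backward warping
    disparity = focal * baseline / np.maximum(depth.astype(np.float32), 1e-6)
    ys, xs = np.indices((h, w))
    src_x = np.round(xs + disparity).astype(int)
    valid = (src_x >= 0) & (src_x < w)
    virtual = np.zeros_like(color)
    virtual[valid] = color[ys[valid], np.clip(src_x, 0, w - 1)[valid]]
    return virtual
```

Generating the nine views reported in the abstract would amount to calling this once per baseline fraction; on a GPU each call corresponds to one kernel launch over all pixels.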

Multi-view Synthesis Algorithm for the Better Efficiency of Codec (부복호화기 효율을 고려한 다시점 영상 합성 기법)

  • Choi, In-kyu; Cheong, Won-sik; Lee, Gwangsoon; Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.375-384 / 2016
  • In this paper, given a stereo image pair, satellite views, and the corresponding depth maps as input, we propose a new method that converts these data into a format suitable for compression and then synthesizes intermediate views from that format. At the transmitter, the depth maps are merged into a global depth map, and the satellite views are converted into residual images covering only the hole regions, i.e., out-of-frame areas and occlusion regions. These images are subsampled to reduce the amount of data and, together with the main-view stereo images, encoded by an HEVC codec and transmitted. At the receiver, intermediate views between the stereo images, and between the stereo images and the satellite views, are synthesized using the decoded global depth map, residual images, and stereo images. Experiments confirm that, for a given total bit-rate, the intermediate views synthesized from the proposed format show good subjective and objective quality compared with those synthesized from the MVD format.
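A minimal sketch of the residual-image idea, under assumed interfaces: parts of a satellite view that can already be predicted by warping the main view with the global depth map are discarded, and only the hole regions (out-of-frame areas and occlusions) are kept for coding. The warped main view and hole mask are assumed to be computed elsewhere; the names are illustrative.

```python
import numpy as np

def make_residual(satellite_view, warped_main, hole_mask):
    """Keep satellite-view texture only where the warped main view has holes.

    satellite_view : HxWx3 uint8, the additional camera view
    warped_main    : HxWx3 uint8, main view warped to the satellite camera
                     with the global depth map (computed elsewhere)
    hole_mask      : HxW bool, True where warping left no valid pixel
                     (out-of-frame or occluded regions)
    """
    residual = np.zeros_like(satellite_view)
    residual[hole_mask] = satellite_view[hole_mask]
    return residual  # sparse image; subsampled and HEVC-coded in the paper
```

The receiver would reverse the process: warp the decoded main view with the decoded global depth map and fill the holes from the decoded residual image.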

Depth Estimation and Intermediate View Synthesis for Three-dimensional Video Generation (3차원 영상 생성을 위한 깊이맵 추정 및 중간시점 영상합성 방법)

  • Lee, Sang-Beom; Lee, Cheon; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1070-1075 / 2009
  • In this paper, we propose new depth estimation and intermediate view synthesis algorithms for three-dimensional video generation. In order to improve the temporal consistency of the depth map sequence, we add a temporal weighting function to the conventional matching function when computing the matching cost for depth estimation. In addition, we propose a boundary noise removal method for the view synthesis operation: after finding boundary noise areas using the depth map, we replace them with the corresponding texture information from the other reference image. Experimental results show that the proposed algorithm improves the temporal consistency of the depth sequence and reduces flickering artifacts in the virtual view. It also improves the visual quality of the synthesized virtual views by removing boundary noise.
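The temporal weighting can be written as a small additive term on a standard matching cost volume. The exact weighting function of the paper is not reproduced here; the version below, a penalty on deviation from the previous frame's depth estimate, is an assumption for illustration.

```python
import numpy as np

def temporal_matching_cost(data_cost, prev_depth, depth_labels, lam=0.1):
    """Add a temporal consistency term to a per-pixel matching cost volume.

    data_cost    : HxWxD array, conventional matching cost per depth label
    prev_depth   : HxW array of depth labels estimated for the previous frame
    depth_labels : length-D array of candidate depth labels
    lam          : weight of the temporal term (assumed value)
    """
    # penalty grows with the distance from the previous frame's estimate
    temporal = lam * np.abs(depth_labels[None, None, :] -
                            prev_depth[:, :, None])
    return data_cost + temporal
```

The final depth map is then the label minimizing the combined cost at each pixel.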

A New Intermediate View Reconstruction Scheme based-on Stereo Image Rectification Algorithm (스테레오 영상 보정 알고리즘에 기반한 새로운 중간시점 영상합성 기법)

  • 박창주; 고정환; 김은수
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5C / pp.632-641 / 2004
  • In this paper, a new intermediate view reconstruction method employing a stereo image rectification algorithm, by which an uncalibrated input stereo image pair can be transformed into a calibrated one, is suggested and its performance is analyzed. In the proposed method, feature points are extracted from the stereo image pair through detection of corners and similarities between the pixels of the stereo images. Using these detected feature points, the moving vectors between the stereo images and the epipolar lines are extracted. Finally, the input stereo image pair is rectified by aligning the extracted epipolar lines between the stereo images in the horizontal direction, and intermediate views are reconstructed from the rectified stereo images. Experiments on intermediate view synthesis with three stereo image pairs, the CCETT stereo image 'Man' and the stereo images 'Face' and 'Car' captured by a real camera, show that the PSNRs of the intermediate views reconstructed from images calibrated with the proposed rectification algorithm are improved by 2.5 dB for 'Man', 4.26 dB for 'Face', and 3.85 dB for 'Car' over those of the uncalibrated ones. These results suggest the possibility of practical application of the proposed rectification-based intermediate view reconstruction to uncalibrated stereo images.
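The rectification pipeline described in the abstract (feature extraction, epipolar geometry, horizontal alignment) can be sketched with standard OpenCV routines. This is a generic substitute using ORB features and cv2.stereoRectifyUncalibrated, not the authors' corner-and-similarity detector.

```python
import cv2
import numpy as np

def rectify_uncalibrated(img_left, img_right):
    """Rectify an uncalibrated stereo pair so epipolar lines become horizontal."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray_l, None)
    kp2, des2 = orb.detectAndCompute(gray_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # fundamental matrix (epipolar geometry) from the matched feature points
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inliers = inlier_mask.ravel().astype(bool)
    h, w = gray_l.shape
    # homographies that map corresponding epipolar lines to the same scanline
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inliers], pts2[inliers],
                                              F, (w, h))
    return (cv2.warpPerspective(img_left, H1, (w, h)),
            cv2.warpPerspective(img_right, H2, (w, h)))
```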

A New Rectification Scheme for Uncalibrated Stereo Image Pairs and Its Application to Intermediate View Reconstruction

  • Ko, Jung-Hwan; Jung, Yong-Woo; Kim, Eun-Soo
    • Journal of Information Display / v.6 no.4 / pp.26-34 / 2005
  • In this paper, a new rectification scheme to transform an uncalibrated stereo image pair into a calibrated one is suggested, and its performance is analyzed by applying it to the reconstruction of intermediate views for multi-view stereoscopic display. In the proposed method, feature points are extracted from the stereo image pair by detecting corners and similarities between the pixels of the stereo image pair. These detected feature points are then used to extract the moving vectors between the stereo image pair and the epipolar lines. Finally, the input stereo image pair is rectified by aligning the extracted epipolar lines between the stereo images in the horizontal direction. Experiments on the synthesis of intermediate views from stereo image pairs calibrated with the proposed rectification algorithm and from uncalibrated ones, for the three stereo image pairs 'Man', 'Face', and 'Car', show that the PSNRs of the intermediate views reconstructed from the calibrated images improve by about 2.5 to 3.26 dB over those of the uncalibrated ones.
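Both this entry and the previous one report their gains as PSNR differences between intermediate views synthesized from calibrated and uncalibrated pairs. For reference, a minimal PSNR computation against a ground-truth camera image (assuming 8-bit images) looks like this.

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    ref = reference.astype(np.float64)
    syn = synthesized.astype(np.float64)
    mse = np.mean((ref - syn) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```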

Multi-view Video Coding using View Interpolation (영상 보간을 이용한 다시점 비디오 부호화 방법)

  • Lee, Cheon; Oh, Kwan-Jung; Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.12 no.2 / pp.128-136 / 2007
  • Since a multi-view video is a set of video sequences captured by a camera array for the same three-dimensional scene, it can provide images at multiple viewpoints through geometric manipulation and intermediate view generation. Although multi-view video gives a more realistic experience with a wide range of views, the amount of data to be processed increases in proportion to the number of cameras; therefore, efficient coding methods are needed. One possible approach to multi-view video coding is to generate an intermediate image by view interpolation and to use the interpolated image as an additional reference frame. The previous view interpolation method for multi-view video coding employs fixed-size block matching over a pre-determined disparity search range; however, if the disparity search range is not proper, disparity errors may occur. In this paper, we propose an efficient view interpolation method using initial disparity estimation, variable block-based estimation, and pixel-level estimation with adjusted search ranges. In addition, we propose a multi-view video coding method based on H.264/AVC that exploits the intermediate image. With the proposed method, intermediate images are improved by about 1 to 4 dB compared with the previous view interpolation method, and the coding efficiency is improved by about 0.5 dB compared with the reference model.
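A hedged sketch of the coarse-to-fine estimation named above: an initial block-level disparity restricts the search range of the finer, pixel-level pass. The SAD cost, window size, and search radius are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sad(left, right, y, x, d, win):
    """Sum of absolute differences over a window centred at (y, x)."""
    l = left[y - win:y + win + 1, x - win:x + win + 1].astype(np.float32)
    r = right[y - win:y + win + 1,
              x - d - win:x - d + win + 1].astype(np.float32)
    return np.abs(l - r).sum()

def refine_disparity(left, right, y, x, d_init, radius=2, win=2):
    """Pixel-level refinement restricted to a small range around d_init.

    Assumes (y, x) lies at least `win` pixels inside both images; d_init is
    the disparity of the enclosing block from the coarser pass.
    """
    candidates = range(max(0, d_init - radius), d_init + radius + 1)
    costs = {d: sad(left, right, y, x, d, win)
             for d in candidates if x - d - win >= 0}
    return min(costs, key=costs.get) if costs else d_init
```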

Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom; Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.29-35 / 2012
  • Depth image-based rendering (DIBR) renders virtual views from a color image and the corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called disocclusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which utilizes infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and filtered by median filtering to reduce truncation errors. Then, the color image is back-projected to the virtual viewpoint using the warped depth map. In order to fill the remaining holes caused by disocclusion, we perform a background-based image in-painting operation. Finally, we obtain a synthesized image without any disocclusion. Experimental results show that the proposed algorithm generates natural images in real time.
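A compact sketch of the DIBR pipeline summarized above, assuming a rectified, purely horizontal camera shift so that warping reduces to a per-pixel column offset; median filtering and in-painting reuse OpenCV functions, no z-buffering is done for brevity, and parameter names are illustrative.

```python
import cv2
import numpy as np

def dibr_virtual_view(color, depth, focal, baseline):
    """Render a horizontally shifted virtual view from one color+depth pair."""
    h, w = depth.shape
    disparity = focal * baseline / np.maximum(depth.astype(np.float32), 1e-6)

    # 1) forward-warp the disparity (depth) map to the virtual viewpoint
    warped_disp = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.indices((h, w))
    xv = np.round(xs - disparity).astype(int)
    ok = (xv >= 0) & (xv < w)
    warped_disp[ys[ok], xv[ok]] = disparity[ok]

    # 2) median filtering to close small cracks / truncation errors
    warped_disp = cv2.medianBlur(warped_disp, 3)

    # 3) back-project the color image with the warped disparity
    virtual = np.zeros_like(color)          # color assumed HxWx3 uint8
    hole = warped_disp <= 0
    src_x = np.clip(np.round(xs + warped_disp).astype(int), 0, w - 1)
    virtual[~hole] = color[ys[~hole], src_x[~hole]]

    # 4) fill the remaining disocclusions by in-painting
    mask = hole.astype(np.uint8) * 255
    return cv2.inpaint(virtual, mask, 3, cv2.INPAINT_TELEA)
```

The paper's background-based in-painting is replaced here by OpenCV's generic Telea in-painting as a stand-in.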

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon; Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.36-42 / 2012
  • Recently, smart applications such as smart phones and smart TVs have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video involves stereoscopic or multi-view images that provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints of a multi-view video is limited, 3D display devices must generate arbitrary viewpoint images using the available adjacent view images. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate view synthesis errors around object boundaries. We describe a 3D warping technique exploiting the depth map for viewpoint shifting and a hole filling method using multi-view images. Then, we propose an algorithm to remove boundary noise that is generated by mismatches between object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.
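A minimal sketch of the boundary-noise removal idea: pixels in a narrow band around depth discontinuities of the synthesized view are treated as suspect and replaced with the co-located pixels rendered from the other reference view. The gradient threshold, band width, and the availability of a second rendered view are assumptions for illustration, not the paper's settings.

```python
import cv2
import numpy as np

def remove_boundary_noise(synth, depth, synth_from_other,
                          grad_thresh=8.0, band=3):
    """Replace textures near depth edges with pixels from the other reference.

    synth            : HxWx3 uint8, view synthesized from the main reference
    depth            : HxW float, depth map warped to the virtual viewpoint
    synth_from_other : HxWx3 uint8, same viewpoint rendered from the other
                       reference view (computed elsewhere)
    """
    depth32 = depth.astype(np.float32)
    # locate depth discontinuities, where color/depth edge mismatches
    # smear foreground texture into the background
    gx = cv2.Sobel(depth32, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth32, cv2.CV_32F, 0, 1, ksize=3)
    edges = (np.hypot(gx, gy) > grad_thresh).astype(np.uint8)

    # widen the edge into a small band of suspect pixels
    kernel = np.ones((band, band), np.uint8)
    suspect = cv2.dilate(edges, kernel).astype(bool)

    cleaned = synth.copy()
    cleaned[suspect] = synth_from_other[suspect]
    return cleaned
```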
