• Title/Abstract/Keyword: View synthesis

Search results: 237

3D View Synthesis with Feature-Based Warping

  • Hu, Ningning; Zhao, Yao; Bai, Huihui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.11 / pp.5506-5521 / 2017
  • Three-dimensional video (3DV), as the new generation of video format standard, can provide viewers with a vivid sense of the screen and a realistic stereo impression, so view synthesis has become an important issue for 3DV applications. Unlike conventional depth-based methods, the view synthesis algorithm proposed in this paper exploits the correlation among views and warps in the image domain only. There are two main contributions. One is the incorporation of Sobel edge points into feature extraction and matching, which yields a more stable homography, and hence a more visually comfortable synthesized view, than SIFT points alone. The other is a novel image blending method that produces a better synthesized image. Experimental results demonstrate that the proposed method improves synthesis quality both subjectively and objectively.
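
The edge-augmented homography idea lends itself to a short illustration. The sketch below is our own reading of the abstract, not the authors' code: it adds keypoints sampled from Sobel edges to the SIFT keypoints before descriptor matching, then warps and blends the reference view; the thresholds and the naive 50/50 blend are assumptions.

```python
import cv2
import numpy as np

def edge_keypoints(img, thresh=120.0, step=8):
    # Keypoints sampled from strong Sobel edges (threshold/step assumed).
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    ys, xs = np.where(mag > thresh)
    return [cv2.KeyPoint(float(x), float(y), 8.0)
            for x, y in zip(xs[::step], ys[::step])]

def synthesize(ref, tgt):
    # ref/tgt: 8-bit grayscale views of the same scene.
    sift = cv2.SIFT_create()
    kp1 = list(sift.detect(ref, None)) + edge_keypoints(ref)
    kp2 = list(sift.detect(tgt, None)) + edge_keypoints(tgt)
    kp1, d1 = sift.compute(ref, kp1)   # SIFT descriptors at both point sets
    kp2, d2 = sift.compute(tgt, kp2)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust estimate
    warped = cv2.warpPerspective(ref, H, (tgt.shape[1], tgt.shape[0]))
    return cv2.addWeighted(warped, 0.5, tgt, 0.5, 0)      # naive 50/50 blend
```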

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model

  • Kim, Soowoong; Kang, Jungwon
    • ETRI Journal / v.44 no.1 / pp.51-61 / 2022
  • In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization assigns a precomputed UV map to each voxel through a UV-map lookup table, enabling efficient, high-quality texture mapping without a complex unwrapping process. Leveraging this convenient parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients for an orthogonal specular map basis. The view-dependent specular maps for an arbitrary view are then estimated by combining the specular weights of the source views according to the locations of the arbitrary and source viewpoints. The experimental results demonstrate that the proposed method effectively synthesizes textures for arbitrary views, enabling the visualization of view-dependent effects such as specularity and mirror reflection.
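
As one concrete reading of the specular-weight combination, the sketch below re-synthesizes a voxel's texture for a novel view. It is not the paper's exact formulation: the inverse-distance weighting over source viewpoints is our assumption.

```python
import numpy as np

def view_dependent_texture(diffuse, basis, view_weights, view_pos, query_pos):
    # diffuse: (H, W) view-independent map; basis: (K, H, W) orthogonal
    # specular maps; view_weights: (V, K) coefficients per source view;
    # view_pos: (V, 3) source viewpoints; query_pos: (3,) novel viewpoint.
    d = np.linalg.norm(view_pos - query_pos, axis=1)
    w = 1.0 / (d + 1e-6)                 # inverse-distance weights (assumed)
    w /= w.sum()
    coeffs = w @ view_weights            # (K,) blended specular coefficients
    specular = np.tensordot(coeffs, basis, axes=1)   # (H, W) specular map
    return diffuse + specular
```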

Synthesis of Multi-View Images Based on a Convergence Camera Model

  • Choi, Hyun-Jun
    • Journal of Information and Communication Convergence Engineering / v.9 no.2 / pp.197-200 / 2011
  • In this paper, we propose a multi-view stereoscopic image synthesis algorithm for 3DTV systems that uses depth information together with an RGB texture from a depth camera. The proposed algorithm synthesizes the multi-view images that a virtual convergence camera model would generate. Experimental results show that the proposed algorithm outperforms conventional methods.
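
Depth-plus-texture synthesis of this kind follows the usual depth-image-based rendering pipeline. Below is a minimal sketch under our own assumptions (pinhole model, forward point-splatting, no hole filling); for a convergence camera model, the virtual pose [R|t] would be chosen so that each virtual camera is toed in toward a common convergence point.

```python
import numpy as np

def synthesize_view(color, depth, K, R, t):
    # color, depth: (H, W) reference view; K: 3x3 intrinsics;
    # [R|t]: pose of the virtual camera relative to the reference one.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)   # back-project to 3D
    proj = K @ (R @ pts + t[:, None])                  # into the virtual view
    uu = np.round(proj[0] / proj[2]).astype(int)
    vv = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(color)
    ok = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H) & (proj[2] > 0)
    out[vv[ok], uu[ok]] = color.reshape(-1)[ok]        # forward point splat
    return out
```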

Fast View Synthesis Using GPGPU (GPGPU를 이용한 고속 영상 합성 기법)

  • Shin, Hong-Chang; Park, Han-Hoon; Park, Jong-Il
    • Journal of Broadcast Engineering / v.13 no.6 / pp.859-874 / 2008
  • In this paper, we develop a fast view synthesis method that generates multiple intermediate views in real time for a 3D display system, given the camera geometry and depth maps of the reference views in advance. The proposed method achieves faster view synthesis than previous GPU approaches by performing the entire computation in parallel on the GPU. Specifically, we use NVIDIA's CUDA™ to control the GPU device. To increase processing speed, we adapted all view synthesis steps to the single-instruction multiple-data (SIMD) structure that is the main feature of CUDA, maximized the use of the high-speed on-device memories, and optimized the implementation. As a result, we could synthesize nine intermediate views of 720×480 pixels within 0.128 seconds.
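
The key point is that every output pixel is an independent unit of work, which is what makes the SIMD mapping natural. The following is a rough sketch of a CUDA-style decomposition, not the paper's kernel: the block size and indexing are our assumptions, and the per-pixel body would be a warp such as the depth-based sketch above.

```python
# Mimic a CUDA launch over a 720x480 output view: one thread per pixel.
W, H = 720, 480
BLOCK = (16, 16)                              # threads per block (assumed)
GRID = ((W + BLOCK[0] - 1) // BLOCK[0],       # ceil-divide the image
        (H + BLOCK[1] - 1) // BLOCK[1])

def kernel(bx, by, tx, ty, warp_pixel):
    # One "thread": compute a single output pixel, the SIMD unit of work.
    x = bx * BLOCK[0] + tx
    y = by * BLOCK[1] + ty
    if x < W and y < H:                       # guard partial edge blocks
        warp_pixel(x, y)

def launch(warp_pixel):
    # On the GPU all grid-by-block threads run concurrently; serially here.
    for bx in range(GRID[0]):
        for by in range(GRID[1]):
            for tx in range(BLOCK[0]):
                for ty in range(BLOCK[1]):
                    kernel(bx, by, tx, ty, warp_pixel)
```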

View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk; Ho, Yo-Sung
    • Journal of Multimedia Information System / v.1 no.1 / pp.1-10 / 2014
  • In this paper, we propose a new view synthesis technique for coding multi-view color and depth data captured under arbitrary camera arrangements. We treat each camera position as a 3-D point in world coordinates and cluster those vertices. The color and depth data within a cluster are gathered into one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range; the visual quality of views reconstructed from multiple LDIs is therefore higher than that from a single LDI. Experimental results show that the proposed scheme achieves better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality.
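
The layered depth image is the central data structure here. The sketch below shows one assumed minimal form, not the paper's hierarchical representation: each pixel ray of the reference camera keeps every surface it crosses, so warped samples from several clustered cameras can be merged into one LDI.

```python
class LayeredDepthImage:
    # One (depth, color) list per pixel ray of the reference camera.
    def __init__(self, width, height):
        self.layers = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, x, y, depth, color):
        # Warped samples from any clustered camera land on the same ray;
        # keep them sorted front-to-back.
        self.layers[y][x].append((depth, color))
        self.layers[y][x].sort(key=lambda s: s[0])

    def front(self, x, y):
        # The nearest surface, i.e. what a plain depth image would store.
        return self.layers[y][x][0] if self.layers[y][x] else None
```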

View synthesis in uncalibrated images (임의 카메라 구조에서의 영상 합성)

  • Kang, Ji-Hyun; Kim, Dong-Hyun; Sohn, Kwang-Hoon
    • Proceedings of the IEEK Conference / 2006.06a / pp.437-438 / 2006
  • Virtual view synthesis is essential for 3DTV systems, which exploit the motion parallax cue. In this paper, we propose a multi-step view synthesis algorithm that efficiently reconstructs an arbitrary view from a limited number of known views of a 3D scene. We describe an efficient image rectification procedure that guarantees the interpolation process produces valid views; it handles all possible camera motions by using a polar parameterization of the image around the epipole. To generate intermediate views, we then use an efficient dense disparity estimation algorithm that considers the features of stereo image pairs, based on region-dividing bidirectional pixel matching. The estimated disparities are used to synthesize intermediate views of the stereo images. Computer simulations show the results of the proposed algorithm.
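
Once dense disparities are available on the rectified pair, the intermediate view itself is a fractional-disparity resampling. A hedged sketch follows; the backward-sampling approximation, rounding, and distance weighting are our simplifications, not the paper's exact scheme.

```python
import numpy as np

def intermediate_view(left, right, disp, a=0.5):
    # left/right: rectified grayscale views (H, W); disp: left-image
    # disparities, so a left pixel x matches right pixel x - disp.
    # a in [0, 1] places the virtual view between left (0) and right (1).
    H, W = disp.shape
    xs = np.arange(W)
    out = np.zeros_like(left, dtype=np.float64)
    for y in range(H):
        # Approximate the disparity at the intermediate pixel by the
        # disparity at the same column (a common simplification).
        xl = np.clip(np.round(xs + a * disp[y]).astype(int), 0, W - 1)
        xr = np.clip(np.round(xs - (1 - a) * disp[y]).astype(int), 0, W - 1)
        out[y] = (1 - a) * left[y, xl] + a * right[y, xr]  # bidirectional mix
    return out
```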

Performance Analysis on View Synthesis of 360 Videos for Omnidirectional 6DoF in MPEG-I (MPEG-I의 6DoF를 위한 360 비디오 가상시점 합성 성능 분석)

  • Kim, Hyun-Ho; Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.24 no.2 / pp.273-280 / 2019
  • 360 video is attracting attention as an immersive medium with the spread of VR applications, and the MPEG-I (Immersive) Visual group is actively working on standardization to support immersive media experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, defined as the case that provides 6DoF within a restricted area, viewing the scene from an arbitrary position requires rendering additional viewpoints, called virtual omnidirectional viewpoints, by view synthesis. This paper presents and analyzes view synthesis results obtained in the exploration experiments (EEs) on omnidirectional 6DoF in MPEG-I. Specifically, it reports results under various synthesis conditions, such as the distance between the input views and the virtual view to be synthesized, and the number of input views selected from the given set of 360 videos providing omnidirectional 6DoF.

Fast Multi-View Synthesis Using Duplex Forward Mapping and Parallel Processing (순차적 이중 전방 사상의 병렬 처리를 통한 다중 시점 고속 영상 합성)

  • Choi, Ji-Youn; Ryu, Sae-Woon; Shin, Hong-Chang; Park, Jong-Il
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11B / pp.1303-1310 / 2009
  • A glasses-free 3D display requires multiple images of a scene taken from different viewpoints. The simplest way to obtain a multi-view image is to use as many cameras as there are required views, but synchronizing the cameras and computing and transmitting the resulting data volume become critical problems. Generating a large number of viewpoint images efficiently is thus emerging as a key technique in 3D video technology. Image-based view synthesis is an algorithm for generating various virtual viewpoint images from a limited number of views and depth maps. Because a virtual view can be expressed as a transformation of a real view under a depth constraint, we propose an algorithm that computes multi-view synthesis from two reference views and their depth maps by stepwise duplex forward mapping. Moreover, because the geometric relationship between the real and virtual views is applied repeatedly for every pixel, we implement the algorithm in the OpenGL Shading Language on a programmable GPU, whose parallel processing improves computation time. We demonstrate the effectiveness of our algorithm for fast view synthesis through a variety of experiments with real data.
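
The merging step after the two forward warps can be written compactly. A rough sketch under our assumptions: zeros mark unmapped pixels, and the weights are inversely proportional to the reference-to-virtual distances, which is one common choice rather than the paper's exact rule.

```python
import numpy as np

def merge_two_warps(warp_l, warp_r, dist_l, dist_r):
    # warp_l/warp_r: the two reference views forward-mapped to the virtual
    # viewpoint; dist_l/dist_r: reference-to-virtual baseline lengths.
    wl = dist_r / (dist_l + dist_r)      # nearer reference weighs more
    wr = dist_l / (dist_l + dist_r)
    hole_l = warp_l == 0                 # 0 marks unmapped pixels (assumed)
    hole_r = warp_r == 0
    out = wl * warp_l + wr * warp_r
    out[hole_l] = warp_r[hole_l]         # fill each view's disocclusions
    out[hole_r] = warp_l[hole_r]         # from the other warped view
    return out
```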

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung; Hur, Nam-Ho; Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. Although image-based rendering aims to generate synthetic images by processing camera views with a graphics engine, little has been reported on how to feed the given images and depth information into the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space from the camera parameters, reconstructing the 3D scene from the color and depth images, and synthesizing virtual views, together with their depth images, in real time.
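
The non-obvious step here is handing calibrated cameras to the graphics engine. One common mapping builds an OpenGL-style projection matrix from the pinhole intrinsics K; this is a sketch under standard conventions, not code from the paper, and the signs of the third-column terms vary with the chosen image origin.

```python
import numpy as np

def gl_projection(K, width, height, near=0.1, far=100.0):
    # Map pinhole intrinsics to an OpenGL-style projection matrix
    # (row-major here; transpose as needed when uploading to OpenGL).
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([
        [2 * fx / width, 0.0, (width - 2 * cx) / width, 0.0],
        [0.0, 2 * fy / height, (2 * cy - height) / height, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```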

Depth Estimation and Intermediate View Synthesis for Three-dimensional Video Generation (3차원 영상 생성을 위한 깊이맵 추정 및 중간시점 영상합성 방법)

  • Lee, Sang-Beom; Lee, Cheon; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10B / pp.1070-1075 / 2009
  • In this paper, we propose new depth estimation and intermediate view synthesis algorithms for three-dimensional video generation. To improve the temporal consistency of the depth map sequence, we add a temporal weighting function to the conventional matching function when computing the matching cost for depth estimation. In addition, we propose a boundary noise removal method for the view synthesis operation: after locating boundary noise areas using the depth map, we replace them with the corresponding texture from the other reference image. Experimental results show that the proposed algorithm improves the temporal consistency of the depth sequence and reduces flickering artifacts in the virtual view; it also improves the visual quality of the synthesized views by removing boundary noise.
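
The temporal term can be illustrated in a few lines. This is a hedged sketch: the abstract only states that a temporal weighting function is added to the matching cost, so the absolute-difference penalty and the weight value are our assumptions.

```python
import numpy as np

def matching_cost(data_cost, prev_depth, d, temporal_weight=0.05):
    # data_cost: (H, W) conventional matching cost for depth label d;
    # prev_depth: (H, W) depth map estimated for the previous frame.
    temporal = np.abs(prev_depth - d)    # deviation from the last frame
    return data_cost + temporal_weight * temporal
```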