• Title/Summary/Keyword: Viewpoint interpolation

Viewpoint interpolation of face images using an ellipsoid model (타원체 MODEL을 사용한 얼굴 영상의 시점합성에 관한 연구)

  • Yoon, Na-Ree;Lee, Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.6C / pp.572-578 / 2007
  • To establish eye contact in video teleconferencing, it is necessary to synthesize a front-view image by viewpoint interpolation. We can find the viewing direction of a user and interpolate an image seen from that viewpoint, which results in a face image observed from the front. Previous research falls into two categories: image-based methods and model-based methods. The former are simple to compute but show limited performance for complex objects, while the latter are robust to noise but computationally expensive. We propose to approximate face images as ellipses, match the ellipses to build an ellipsoid, and then synthesize a new image from a given virtual camera position. Various experiments show that the method is simple and robust.
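
The abstract only outlines the ellipsoid idea, so here is a minimal sketch of how an ellipsoidal head model can be used to resample a face image from a virtual camera. It is not the authors' implementation: it assumes calibrated pinhole cameras (K, R, t), a known ellipsoid centre and radii, and nearest-neighbour sampling without occlusion handling; all names are illustrative.

```python
import numpy as np

def ray_ellipsoid_hit(origin, direction, center, radii):
    """Nearest positive intersection of the ray origin + t*direction with an
    axis-aligned ellipsoid ((X - center) / radii lies on the unit sphere)."""
    p = (origin - center) / radii
    q = direction / radii
    a, b, c = q @ q, 2.0 * p @ q, p @ p - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                                   # ray misses the ellipsoid
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return origin + t * direction if t > 0.0 else None

def synthesize_view(src, K_src, R_src, t_src, K_vir, R_vir, t_vir,
                    center, radii, out_shape):
    """For every pixel of the virtual camera, cast a ray, intersect it with the
    ellipsoidal head model, and sample the colour the source camera sees there."""
    h, w = out_shape
    out = np.zeros((h, w, 3), dtype=src.dtype)
    cam_pos = -R_vir.T @ t_vir                        # virtual camera centre
    K_vir_inv = np.linalg.inv(K_vir)
    for v in range(h):
        for u in range(w):
            d = R_vir.T @ (K_vir_inv @ np.array([u, v, 1.0]))
            X = ray_ellipsoid_hit(cam_pos, d / np.linalg.norm(d), center, radii)
            if X is None:
                continue
            x = K_src @ (R_src @ X + t_src)           # project into the source view
            us, vs = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
            if 0 <= us < src.shape[1] and 0 <= vs < src.shape[0]:
                out[v, u] = src[vs, us]
    return out
```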

Modified Ray-space Interpolation for Free Viewpoint Video System (Free Viewpoint 비디오 시스템을 위한 Ray-space 보간 기법 보완 연구)

  • Seo, Kang-Uk;Kim, Dong-Wook;Kim, Hwa-Sung;Yoo, Ji-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.41-43 / 2006
  • FTV (Free Viewpoint TV) is a next-generation TV that lets users freely choose the viewpoint they want to watch. It can also create new viewpoints where no camera was placed during image acquisition. FTV is therefore a promising application in personal, industrial, social, and medical fields. The ray-space representation is one candidate data format for FTV and has clear advantages for composing arbitrary-viewpoint images in real time. The techniques used in the ray-space are purely signal processing, not computer graphics. Since scalable and hierarchical structures can be expressed in the ray-space, it can form a new platform for video processing and extend the concept of video. This paper proposes a new technique that improves on the existing interpolation method for generating arbitrary-viewpoint images from ray-space data, with the aim of obtaining more natural images.
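
As a point of reference for the interpolation being refined, the following is a minimal sketch (not the paper's method) of the naive baseline: an intermediate view is produced by blending the two nearest rectified camera views in the ray-space stack. The array layout is an assumption; a disparity-compensated variant would shift pixels along the EPI lines before blending.

```python
import numpy as np

def interpolate_ray_space(views, s):
    """Naive ray-space view interpolation.

    views : (N, H, W, 3) array of rectified camera images stacked along the
            camera axis (one slice per real viewpoint).
    s     : fractional camera position in [0, N-1].

    Returns a linearly blended intermediate view; disparity compensation
    would warp views[i] and views[i+1] toward position s before blending."""
    i = int(np.floor(s))
    a = float(s) - i
    if a == 0.0:
        return views[i].copy()
    blend = (1.0 - a) * views[i].astype(np.float32) + a * views[i + 1].astype(np.float32)
    return blend.astype(views.dtype)
```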

Ray-space Interpolation by Warping Disparity Maps

  • Mori, Yuji;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.583-587 / 2009
  • In this paper, we propose a new method of Depth-Image-Based Rendering (DIBR) for Free-viewpoint TV (FTV). In the proposed method, virtual viewpoint images are rendered with 3D warping instead of estimating view-dependent depth, since depth estimation is usually costly and it is desirable to eliminate it from the rendering process. However, 3D warping causes problems that do not occur in methods with view-dependent depth estimation, such as holes in the rendered image and depth discontinuities on object surfaces in the virtual image plane, which produce artifacts in the rendered image. In this paper, these problems are solved by reconstructing disparity information at the virtual camera position from the two neighboring real cameras. In the experiments, high-quality arbitrary viewpoint images were obtained.
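
A rough sketch of the core step described above: disparity maps of the two real cameras are warped to the virtual position and then used for backward colour fetching. It assumes rectified views and a virtual camera at fraction alpha of the baseline; hole filling and blending are simplified, so this is illustrative rather than the authors' algorithm.

```python
import numpy as np

def forward_warp_disparity(disp, shift):
    """Forward-warp a disparity map by shift * disparity pixels along each
    scanline; nearer surfaces (larger disparity) win the z-test, -1 marks holes."""
    h, w = disp.shape
    out = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            xv = int(round(x + shift * d))
            if 0 <= xv < w and d > out[y, xv]:
                out[y, xv] = d
    return out

def render_virtual(left, right, disp_l, disp_r, alpha):
    """Reconstruct disparity at the virtual camera (alpha in [0,1], 0 = left)
    from both real cameras, then backward-warp colours; pixels missing from
    one camera are filled from the other."""
    dv_l = forward_warp_disparity(disp_l, -alpha)          # left  -> virtual
    dv_r = forward_warp_disparity(disp_r, 1.0 - alpha)     # right -> virtual
    h, w, _ = left.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            if dv_l[y, x] >= 0:
                xl = min(max(int(round(x + alpha * dv_l[y, x])), 0), w - 1)
                out[y, x] = left[y, xl]
            elif dv_r[y, x] >= 0:
                xr = min(max(int(round(x - (1 - alpha) * dv_r[y, x])), 0), w - 1)
                out[y, x] = right[y, xr]
    return out
```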

A Method for Reproducing Stereo Images to Adjust Screen Parallax on a 3D Display (3D 디스플레이에서의 화면 시차 제어를 위한 입체 영상재생성 기법)

  • Rhee, Seon-Min;Choi, Jong-Moo;Choi, Soo-Mi
    • Journal of the Korea Computer Graphics Society / v.16 no.4 / pp.1-10 / 2010
  • We present a method to reproduce in-between views from captured stereo images in order to control the depth feeling that a user perceives on a 3D display. Stereo images captured by a pair of cameras have a fixed viewpoint and a screen parallax that depend on the physical positions of the cameras and the distance between them. In this paper, we produce stereo images at an intermediate viewpoint between the two original cameras by view interpolation on the input stereo images. Furthermore, the camera separation of the reproduced stereo images can be controlled through the linear interpolation coefficient used in the view interpolation. With the proposed method, stereo images can be reproduced whose depth feeling and three-dimensional effect suit an individual's eye separation or the characteristics of an application.
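
A compact sketch of the underlying idea: one view of the pair is regenerated with the screen parallax scaled by a coefficient. It assumes rectified input and a precomputed disparity map of the left image; the function name and the hole handling are illustrative and simpler than the paper's view interpolation.

```python
import numpy as np

def scale_parallax(left, disp, scale):
    """Forward-warp the left image into a new right view whose parallax is
    scale * disp (0 < scale < 1 narrows the virtual camera separation).
    A z-test keeps the nearest surface; holes stay black for later filling."""
    h, w, _ = left.shape
    out = np.zeros_like(left)
    zbuf = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            xn = int(round(x - scale * disp[y, x]))
            if 0 <= xn < w and disp[y, x] > zbuf[y, xn]:
                zbuf[y, xn] = disp[y, x]
                out[y, xn] = left[y, x]
    return out

# e.g. keep the captured left image and pair it with scale_parallax(left, disp, 0.6)
# to present roughly 60 % of the original screen parallax on the 3D display.
```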

Real-time Virtual View Synthesis using Virtual Viewpoint Disparity Estimation and Convergence Check (가상 변이맵 탐색과 수렴 조건 판단을 이용한 실시간 가상시점 생성 방법)

  • Shin, In-Yong;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.1A / pp.57-63 / 2012
  • In this paper, we propose a real-time view interpolation method using virtual viewpoint disparity estimation and a convergence check. For real-time processing, we estimate a disparity map at the virtual viewpoint from stereo images using the belief propagation method; this requires only one disparity map, whereas conventional methods need two. In the view synthesis part, we warp pixels from the reference images to the virtual viewpoint image using the disparity map at the virtual viewpoint. For real-time acceleration, we use high-speed GPU parallel programming with CUDA. As a result, we can interpolate virtual viewpoint images in real time.
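
The key saving is that a single disparity map, defined at the virtual viewpoint, drives backward warping from both references. The sketch below shows only that warping step (the belief-propagation estimation and the CUDA acceleration are omitted); rectified cameras and a normalized baseline position alpha are assumed.

```python
import numpy as np

def synthesize_from_virtual_disparity(left, right, disp_v, alpha=0.5):
    """Backward warping with one disparity map disp_v defined at the virtual
    viewpoint (alpha in [0,1], 0 = left camera): each virtual pixel fetches
    its colour from both reference images and blends them."""
    h, w, _ = left.shape
    out = np.empty_like(left)
    for y in range(h):
        for x in range(w):
            d = disp_v[y, x]
            xl = min(max(int(round(x + alpha * d)), 0), w - 1)
            xr = min(max(int(round(x - (1.0 - alpha) * d)), 0), w - 1)
            blend = ((1.0 - alpha) * left[y, xl].astype(np.float32)
                     + alpha * right[y, xr].astype(np.float32))
            out[y, x] = blend.astype(left.dtype)
    return out
```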

Navigation based on Multi Cylindrical Environment Map

  • Park, Youngsup;Ko, Hyekyung;Cho, Cheungwoon;Yoon, Kyunghyun
    • Institute of Control, Robotics and Systems (제어로봇시스템학회) Conference Proceedings / 2001.10a / pp.167.6-167 / 2001
  • The cylindrical environment maps of image-based representation methods make high-quality, simple, and low-cost real-time navigation possible. In this paper, we propose a method to navigate from one viewpoint to the next in a virtual indoor space composed of several cylindrical environment maps. Our system consists of two modules. The first is a panoramic image viewer that employs rotation and zoom-in/out to navigate the virtual indoor space, similar to QuickTime VR. The other provides smooth real-time navigation using cubic mesh interpolation when the viewpoint moves from one environment map to another in the virtual space.
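
To make the first module concrete, here is a minimal sketch of re-projecting a cylindrical environment map into a planar view for a given heading and field of view; rotation is then a change of yaw and zoom a change of fov. It assumes a full 360-degree panorama with the horizon at mid-height and is not the paper's viewer.

```python
import numpy as np

def render_from_cylinder(pano, yaw, fov, out_w, out_h):
    """Planar view of a 360-degree cylindrical panorama `pano` (H, W, 3)
    seen with heading `yaw` (radians) and horizontal field of view `fov`."""
    ph, pw, _ = pano.shape
    f = (out_w / 2.0) / np.tan(fov / 2.0)      # output focal length in pixels
    f_pano = pw / (2.0 * np.pi)                # cylinder radius in panorama pixels
    out = np.zeros((out_h, out_w, 3), dtype=pano.dtype)
    for v in range(out_h):
        for u in range(out_w):
            x, y = u - out_w / 2.0, v - out_h / 2.0
            theta = (yaw + np.arctan2(x, f)) % (2.0 * np.pi)   # ray azimuth
            height = y / np.hypot(x, f)                        # height on the unit cylinder
            pu = int(theta * f_pano) % pw
            pv = int(ph / 2.0 + height * f_pano)
            if 0 <= pv < ph:
                out[v, u] = pano[pv, pu]
    return out
```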

A Method for Surface Reconstruction and Synthesizing Intermediate Images for Multi-viewpoint 3-D Displays

  • Fujii, Mahito;Ito, Takayuki;Miyake, Sei
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06b / pp.35-40 / 1996
  • In this paper, a method for 3-D surface reconstruction with two real cameras is presented. The method, which combines the extraction of binocular disparity with its interpolation, can be applied to the synthesis of images from virtual viewpoints. The synthesized virtual images are as natural as real images, even when they are observed as stereoscopic images. The method opens up many applications, such as synthesizing input images for multi-viewpoint 3-D displays and enhancing the depth impression in 2-D images. We have also developed a video-rate stereo machine able to obtain binocular disparity in 1/30 sec with two cameras, and we show the performance of the machine.
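
The combination of disparity extraction and interpolation can be illustrated by densifying sparse stereo matches before view synthesis. This sketch uses SciPy's griddata and only illustrates the interpolation step under assumed inputs; it is not the video-rate stereo machine described above.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_disparity(matches, shape):
    """Turn sparse binocular matches into a dense disparity map.

    matches : iterable of (x, y, disparity) tuples from a stereo matcher
    shape   : (H, W) of the reference image."""
    pts = np.array([(m[1], m[0]) for m in matches], dtype=np.float32)   # (row, col)
    vals = np.array([m[2] for m in matches], dtype=np.float32)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    dense = griddata(pts, vals, (rows, cols), method='linear')
    nearest = griddata(pts, vals, (rows, cols), method='nearest')
    return np.where(np.isnan(dense), nearest, dense)   # fill outside the convex hull
```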

Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management / v.7 no.2 / pp.91-100 / 2011
  • Free viewpoint TV can provide images from the viewing angles that viewers request. In practice, however, images cannot be captured from every angle; only a limited number of viewpoints are captured, one per camera, and the group of captured images is called a multi-view image. Free viewpoint TV therefore has to produce virtual intermediate viewpoint images from the captured viewpoint images, and interpolation methods are the usual solution to this problem. Producing an interpolated image at the correct viewpoint requires the depth images of the multi-view image. Unfortunately, multi-view video that includes depth images demands a new compression encoding technique for storage and transmission because of the huge amount of data. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that merges the multi-view color and depth images. This paper proposes an enhanced compression method using the layered depth image representation together with H.264/AVC video coding. Experimental results confirm high compression performance and good quality of the reconstructed images.
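
For readers unfamiliar with the representation, the following is a minimal sketch of a layered depth image container: one list of (colour, depth) layers per reference-view pixel, into which warped multi-view samples are merged so redundant surfaces are stored once. It is an assumption-level illustration, not the paper's coding scheme.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LDISample:
    color: Tuple[int, int, int]     # RGB of the surface point
    depth: float                    # depth along the reference-view ray

class LayeredDepthImage:
    """One ray per reference pixel; each ray keeps every surface it crosses."""
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.layers: List[List[List[LDISample]]] = [
            [[] for _ in range(width)] for _ in range(height)]

    def insert(self, x: int, y: int, color, depth: float, tol: float = 1e-3):
        """Merge a warped multi-view sample; near-duplicate depths are dropped,
        and layers stay sorted front to back for later rendering or coding."""
        ray = self.layers[y][x]
        if all(abs(s.depth - depth) > tol for s in ray):
            ray.append(LDISample(tuple(color), float(depth)))
            ray.sort(key=lambda s: s.depth)
```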

Geometrically Non-linear Analysis Method by Curvature-Based Flexibility Matrix (유연도 매트릭스를 사용한 기하학적 비선형 해석방법)

  • Kim, Jin Sup;Kwon, Min Ho
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.15 no.2 / pp.125-135 / 2011
  • Recent work on finite element formulations and computational techniques has progressed widely. The classical method of formulating frame elements for geometrically nonlinear analysis derives the geometric stiffness directly from the governing differential equation for bending with axial force. From the computational viewpoint of this paper, the most common approach is the finite element method. Commonly, the formulation of frame elements for geometrically nonlinear structures is based on appropriate interpolation functions for the transverse and axial displacements of the member. The formulation of flexibility-based elements, on the other hand, is based on interpolation functions for the internal forces. In this paper, a new method is presented in which the interpolation functions for the displacements are obtained from the curvatures by Lagrangian interpolation. The flexibility matrix is derived from these displacement functions, and its application is examined. Using the flexibility matrix, a program for geometrically nonlinear analysis is applied to common problems.
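
To show the central step, interpolating the displacement field from section curvatures, here is a small numerical sketch: curvatures sampled at a few sections are interpolated with Lagrange polynomials and integrated twice to recover the transverse displacement. A cantilever with v(0) = v'(0) = 0 is assumed purely for illustration; the paper's flexibility-matrix derivation is not reproduced.

```python
import numpy as np

def lagrange_value(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, yj in enumerate(ys):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xs[j] - xm)
        total += yj * basis
    return total

def displacement_from_curvature(xs, kappas, x, n=401):
    """Transverse displacement v(x) obtained by integrating the Lagrangian
    curvature interpolation twice (v'' = kappa, with v(0) = v'(0) = 0)."""
    grid = np.linspace(0.0, x, n)
    kappa = np.array([lagrange_value(xs, kappas, g) for g in grid])
    # cumulative trapezoidal integration: slope first, then deflection
    slope = np.concatenate(([0.0],
                            np.cumsum((kappa[1:] + kappa[:-1]) / 2.0 * np.diff(grid))))
    return float(np.trapz(slope, grid))

# Sanity check: constant curvature k over length L gives v(L) = k * L**2 / 2,
# e.g. displacement_from_curvature([0.0, 0.5, 1.0], [0.01, 0.01, 0.01], 1.0) ~ 0.005
```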

Multi-view Video Coding using View Interpolation (영상 보간을 이용한 다시점 비디오 부호화 방법)

  • Lee, Cheon;Oh, Kwan-Jung;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.12 no.2 / pp.128-136 / 2007
  • Since a multi-view video is a set of video sequences captured by an array of cameras observing the same three-dimensional scene, it can provide multiple viewpoint images through geometric manipulation and intermediate view generation. Although multi-view video lets us experience a more realistic feeling over a wide range of views, the amount of data to be processed increases in proportion to the number of cameras, so efficient coding methods are needed. One possible approach to multi-view video coding is to generate an intermediate image using a view interpolation method and to use the interpolated image as an additional reference frame. The previous view interpolation method for multi-view video coding employs fixed-size block matching over a pre-determined disparity search range; if the search range is not appropriate, disparity errors occur. In this paper, we propose an efficient view interpolation method using initial disparity estimation, variable block-based estimation, and pixel-level estimation with adjusted search ranges. In addition, we propose a multi-view video coding method based on H.264/AVC that exploits the intermediate image. Intermediate images are improved by about 1~4 dB with the proposed method compared to the previous view interpolation method, and coding efficiency is improved by about 0.5 dB compared to the reference model.
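
The starting point of the coarse-to-fine scheme described above is ordinary fixed-size block matching over a disparity search range, sketched below under the assumption of rectified views and a left-reference convention; the variable-block and pixel-level refinements with adjusted ranges would then subdivide and re-search poorly matched blocks.

```python
import numpy as np

def block_disparity(left, right, block=16, max_disp=64):
    """Fixed-size block matching by SAD: for each block of the left image,
    search the same scanline of the right image over disparities [0, max_disp]."""
    h, w = left.shape[:2]
    L, R = left.astype(np.int32), right.astype(np.int32)
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = L[y0:y0 + block, x0:x0 + block]
            best_sad, best_d = None, 0
            for d in range(0, min(max_disp, x0) + 1):
                cand = R[y0:y0 + block, x0 - d:x0 - d + block]
                sad = int(np.abs(ref - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp
```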