• Title/Summary/Keyword: View Synthesis Technology

View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of Multimedia Information System / v.1 no.1 / pp.1-10 / 2014
  • In this paper, we propose a new view synthesis technique for coding of multi-view color and depth data in arbitrary camera arrangements. We treat each camera position as a 3-D point in world coordinates and build clusters of those vertices. Color and depth data within a cluster are gathered into one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range. Therefore, we can enhance the visual quality of the views reconstructed from multiple LDIs compared with that from a single LDI. Experimental results show that the proposed scheme achieves better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality.
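
The clustering-plus-LDI organization described in this abstract can be pictured with the rough sketch below. This is a minimal illustration under our own assumptions, not the authors' implementation: camera centers are grouped with a naive k-means, and each pixel of a `LayeredDepthImage` keeps a depth-sorted list of (depth, color) layers gathered from the cameras of its cluster.

```python
# Minimal sketch of the multiple-LDI idea (illustrative only, not the paper's code).
import numpy as np

def cluster_cameras(centers, n_clusters, iters=20):
    """Naive k-means over camera centers treated as 3-D points in world coordinates."""
    rng = np.random.default_rng(0)
    refs = centers[rng.choice(len(centers), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(centers[:, None] - refs[None], axis=2), axis=1)
        refs = np.array([centers[labels == k].mean(axis=0) if np.any(labels == k) else refs[k]
                         for k in range(n_clusters)])
    return labels

class LayeredDepthImage:
    """One LDI per reference camera: every pixel holds a depth-sorted list of layers."""
    def __init__(self, height, width):
        self.layers = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, y, x, depth, color, eps=1e-3):
        cell = self.layers[y][x]
        for i, (d, _) in enumerate(cell):
            if abs(d - depth) < eps:      # samples at (almost) the same depth are merged
                cell[i] = (d, color)
                return
        cell.append((depth, color))       # otherwise a new layer is added
        cell.sort(key=lambda layer: layer[0])
```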


View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering has generally meant generating synthetic images by processing the camera view with a graphics engine, little has been known about how to apply the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space with camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views and their depth images in real time.
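
The geometry that such an approach hands to the graphics engine can be written compactly in NumPy, as in the hedged sketch below. It assumes the usual pinhole convention x = K(RX + t); the function and variable names are ours, and the paper's actual implementation renders with OpenGL rather than a per-pixel Python loop.

```python
# Back-project a reference depth image to 3-D and re-project into a virtual camera
# (a rough stand-in for what the OpenGL pipeline does with the same inputs).
import numpy as np

def synthesize_view(color, depth, K_ref, R_ref, t_ref, K_vir, R_vir, t_vir):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels

    # Back-project to world coordinates: X = R^T (z * K^-1 * pix - t)
    rays = np.linalg.inv(K_ref) @ pix
    X = R_ref.T @ (rays * depth.reshape(1, -1) - t_ref.reshape(3, 1))

    # Project into the virtual camera and resolve collisions with a z-buffer.
    x = K_vir @ (R_vir @ X + t_vir.reshape(3, 1))
    z = x[2]
    uv = np.round(x[:2] / z).astype(int)

    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    for (cu, cv), cz, c in zip(uv.T, z, color.reshape(-1, color.shape[-1])):
        if 0 <= cu < w and 0 <= cv < h and 0 < cz < zbuf[cv, cu]:
            zbuf[cv, cu] = cz
            out[cv, cu] = c
    return out
```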

Impact Angle Control Guidance Synthesis for Evasive Maneuver against Intercept Missile

  • Yogaswara, Y.H.;Hong, Seong-Min;Tahk, Min-Jea;Shin, Hyo-Sang
    • International Journal of Aeronautical and Space Sciences / v.18 no.4 / pp.719-728 / 2017
  • This paper proposes the synthesis of a new guidance law to generate an evasive maneuver against an enemy's missile interception while considering impact angle, acceleration, and field-of-view constraints. The first component of the synthesis is a new repulsive Artificial Potential Field function that generates the evasive maneuver as real-time dynamic obstacle avoidance. Compliance with the terminal impact angle and terminal acceleration constraints is handled by Time-to-Go Polynomial Guidance, the second component. The last component is a Logarithmic Barrier Function that satisfies the field-of-view limitation by compensating for excessive total acceleration commands. These three components are synthesized into a new guidance law involving three design parameter gains. A parameter study and numerical simulations are presented to demonstrate the performance of the proposed repulsive function and guidance law. The simulations show that the guidance law achieves zero terminal miss distance while performing an evasive maneuver against the intercept missile and simultaneously satisfying the impact angle, acceleration, and field-of-view constraints.
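
As a point of reference for the first component, a classical repulsive Artificial Potential Field term can be written as in the short sketch below. This only illustrates the general APF mechanism; the paper's own repulsive function, the Time-to-Go Polynomial Guidance term, and the logarithmic barrier compensation are not reproduced here, and the gains are purely illustrative.

```python
# Classical repulsive APF term (illustrative; not the paper's specific function).
import numpy as np

def repulsive_accel(evader_pos, threat_pos, k_rep=1.0, d0=2000.0):
    """Acceleration pushing the evader away from the threat while it is within range d0."""
    diff = np.asarray(evader_pos, float) - np.asarray(threat_pos, float)
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(diff)
    # Negative gradient of U(d) = 0.5 * k_rep * (1/d - 1/d0)^2, directed away from the threat.
    return k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
```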

Fast Multi-View Synthesis Using Duplex Forward Mapping and Parallel Processing (순차적 이중 전방 사상의 병렬 처리를 통한 다중 시점 고속 영상 합성)

  • Choi, Ji-Youn;Ryu, Sae-Woon;Shin, Hong-Chang;Park, Jong-Il
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11B / pp.1303-1310 / 2009
  • A glasses-free 3D display requires multiple images of a scene taken from different viewpoints. The simplest way to obtain multi-view images is to use as many cameras as the number of required views, but synchronizing the cameras and computing and transmitting the resulting large amount of data then become critical problems. Thus, generating such a large number of viewpoint images efficiently is emerging as a key technique in 3D video technology. Image-based view synthesis is an algorithm for generating various virtual viewpoint images from a limited number of views and depth maps. In this paper, because a virtual view image can be expressed as a transformation of a real view under a given depth condition, we propose an algorithm that computes multi-view synthesis from two reference view images and their depth maps by stepwise duplex forward mapping. In addition, because the geometric relationship between the real and virtual views is applied repeatedly, we implement the algorithm in the OpenGL Shading Language on a programmable graphics processing unit, exploiting parallel processing to reduce computation time. We demonstrate the effectiveness of our algorithm for fast view synthesis through a variety of experiments with real data.
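
The duplex forward-mapping step can be pictured with the rough CPU sketch below (rectified cameras with horizontal disparity are assumed, and the sign conventions and blending rule are ours). On the GPU, each iteration of these loops would become one shader invocation, which is what makes the GLSL implementation fast.

```python
# Forward-map two rectified reference views into an intermediate viewpoint
# using their disparity maps (illustrative CPU version of the idea, not the GLSL code).
import numpy as np

def forward_map(image, disparity, shift_scale):
    """Forward-warp one reference view by shift_scale * disparity with a disparity z-buffer."""
    h, w = disparity.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xt = int(round(x + shift_scale * d))
            if 0 <= xt < w and d > zbuf[y, xt]:   # nearer pixels (larger disparity) win
                zbuf[y, xt] = d
                out[y, xt] = image[y, x]
    return out, zbuf

def synthesize_intermediate(left, d_left, right, d_right, alpha=0.5):
    """alpha in [0, 1] is the virtual camera position between the left and right views."""
    warped_l, z_l = forward_map(left, d_left, -alpha)
    warped_r, z_r = forward_map(right, d_right, 1.0 - alpha)
    return np.where((z_l >= z_r)[..., None], warped_l, warped_r)
```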

Disparity Refinement near the Object Boundaries for Virtual-View Quality Enhancement

  • Lee, Gyu-cheol;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.10 no.5 / pp.2189-2196 / 2015
  • A stereo matching algorithm is usually used to obtain a disparity map from a pair of images. However, the disparity map obtained by stereo matching contains many noisy and erroneous regions. In this paper, we propose a virtual-view synthesis algorithm that uses disparity refinement in order to improve the quality of the synthesized image. First, error regions are detected by examining the consistency of the disparity maps. Then, motion information is acquired by applying optical flow to the texture component of the image, which improves the performance of the optical flow, and the occlusion region is found from this motion information. The refined disparity map is finally used to synthesize the virtual-view image. The experimental results show that the proposed algorithm improves the quality of the generated virtual view.
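
The consistency check used to flag error regions can be sketched as below. This is a generic left-right consistency test with an illustrative tolerance parameter of ours; the optical-flow-based refinement of the flagged regions is not reproduced.

```python
# Generic left-right disparity consistency check (illustrative sketch).
import numpy as np

def consistency_errors(disp_left, disp_right, tol=1.0):
    """Mark pixels whose left and right disparities disagree by more than tol."""
    h, w = disp_left.shape
    errors = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp_left[y, x]))   # corresponding column in the right view
            if 0 <= xr < w:
                errors[y, x] = abs(disp_left[y, x] - disp_right[y, xr]) > tol
            else:
                errors[y, x] = True                # no correspondence inside the image
    return errors
```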

COLOR CORRECTION METHOD USING GRAY GRADIENT BAR FOR MULTI-VIEW CAMERA SYSTEM

  • Jung, Jae-Il;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.1-6 / 2009
  • Due to differing camera properties in a multi-view camera system, the color properties of the captured images can be inconsistent. This inconsistency makes post-processing such as depth estimation, view synthesis, and compression difficult. In this paper, a method to correct the different color properties of multi-view images is proposed. We utilize a gray gradient bar on a display device to extract the color sensitivity property of each camera and calculate a look-up table based on that sensitivity property. The colors in the target image are then converted by a mapping technique that refers to the look-up table. In experimental results, the proposed algorithm shows good subjective quality and reduces the mean absolute error among the color values of the multi-view images by 72% on average.
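
The look-up-table construction can be sketched as follows, assuming each camera's per-channel response to the displayed gray gradient bar has been sampled into a 256-entry, monotonically increasing curve (the variable names and the interpolation are ours, not the paper's exact procedure).

```python
# Build a per-channel LUT that maps a target camera's code values onto the
# reference camera's color response, then apply it to the target image.
import numpy as np

def build_lut(target_response, reference_response):
    """Responses: measured channel value for each displayed gray level (0..255)."""
    levels = np.arange(256)
    codes = np.arange(256)
    # Invert the target camera's (assumed monotonic) response: which gray level produced this code?
    gray_for_code = np.interp(codes, target_response, levels)
    # Look up what the reference camera reports at that gray level.
    return np.clip(np.interp(gray_for_code, levels, reference_response), 0, 255).astype(np.uint8)

def correct_image(image, luts):
    """Apply one 256-entry LUT per color channel of the target camera's image."""
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = luts[c][image[..., c]]
    return out
```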


Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management / v.7 no.2 / pp.91-100 / 2011
  • Free viewpoint TV can provide viewpoint images from any angle a viewer wants. In the real world, however, images cannot be captured from every angle; each camera captures only a few viewpoints, and the group of captured images is called a multi-view image. Free viewpoint TV therefore needs to produce virtual intermediate viewpoint images from the captured views. Interpolation methods are the general solution to this problem, and producing an interpolated viewpoint image at the correct angle requires the depth images of the multi-view data. Unfortunately, multi-view video that includes depth images requires a new compression encoding technique for storage and transmission because of its huge amount of data. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that merges the multi-view color and depth images. This paper proposes an enhanced compression method using the layered depth image representation and H.264/AVC video coding technology. Experimental results confirm high compression performance and good quality of the reconstructed images.

Real-time Virtual-viewpoint Image Synthesis Algorithm Using Kinect Camera

  • Lee, Gyu-Cheol;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.9 no.3 / pp.1016-1022 / 2014
  • Kinect is a motion-sensing camera released by Microsoft in November 2010 for the Xbox 360 that produces depth and color images. Because Kinect uses an infrared pattern, it generates holes and noise around object boundaries in the obtained images; flickering and unmatched edges also occur. In this paper, we propose a real-time virtual-view video synthesis algorithm that produces a high-quality virtual view by solving the problems stated above. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
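
A simple way to see the boundary-hole problem and one common remedy is the inpainting sketch below. It only illustrates hole filling on the depth map with OpenCV; the paper's own real-time processing and its handling of flicker and unmatched edges are not reproduced here.

```python
# Fill zero-valued (hole) pixels of a 16-bit Kinect depth map by inpainting
# (a generic remedy for illustration, not the paper's algorithm).
import cv2
import numpy as np

def fill_depth_holes(depth16):
    holes = (depth16 == 0).astype(np.uint8)                        # mask of missing depth
    scale = 255.0 / max(int(depth16.max()), 1)
    depth8 = cv2.convertScaleAbs(depth16, alpha=scale)             # 8-bit copy for inpainting
    filled8 = cv2.inpaint(depth8, holes, 3, cv2.INPAINT_TELEA)     # fill from the neighborhood
    return (filled8.astype(np.float32) / scale).astype(np.uint16)  # coarse rescale to 16 bits
```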

Design and Implementation of Multiple View Image Synthesis Scheme based on RAM Disk for Real-Time 3D Browsing System (실시간 3D 브라우징 시스템을 위한 램 디스크 기반의 다시점 영상 합성 기법의 설계 및 구현)

  • Sim, Chun-Bo;Lim, Eun-Cheon
    • The Journal of the Korea Contents Association / v.9 no.5 / pp.13-23 / 2009
  • One of the main purposes of multi-view image processing technology is to provide realistic 3D images to users through multi-viewpoint display devices and compressed-data restoration devices. This paper proposes a RAM-disk-based multiple view image synthesis scheme that makes it possible to browse 3D images generated by applying an effective composition method to real-time input stereo images. The proposed scheme first converts the input images to binary images. We apply edge detection algorithms such as the Sobel and Prewitt operators to find edges, which are used to evaluate disparities among the images of the four cameras. In addition, we make use of the time interval between the hardware trigger and the software trigger to solve the synchronization problem, which has been stated only ambiguously in related studies. We use a unique identifier on each snapshot of the images for the distributed environment. In terms of performance, the proposed scheme takes 0.67 seconds per binary array to transfer the entire left and right images with disparity information for high-quality 3D image browsing. We conclude that the proposed scheme is suitable for real-time 3D applications.
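
The binarization and Sobel/Prewitt edge step mentioned above can be sketched with OpenCV as below; the threshold and kernels are illustrative choices of ours, not the paper's parameters.

```python
# Binarize an 8-bit grayscale image and extract an edge magnitude map with
# either the Sobel or the Prewitt operator (illustrative parameters).
import cv2
import numpy as np

def edge_map(gray, use_prewitt=False, thresh=128):
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    if use_prewitt:
        kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
        gx = cv2.filter2D(binary, cv2.CV_32F, kx)      # horizontal Prewitt gradient
        gy = cv2.filter2D(binary, cv2.CV_32F, kx.T)    # vertical Prewitt gradient
    else:
        gx = cv2.Sobel(binary, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(binary, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))
```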

Free view video synthesis using multi-view 360-degree videos (다시점 360도 영상을 사용한 자유시점 영상 생성 방법)

  • Cho, Young-Gwang;Ahn, Heejune
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.600-603 / 2020
  • 360-degree video supports 3DoF (3 Degrees of Freedom), in which the viewer chooses the viewing direction. In this study, we propose a 6DoF (6 Degrees of Freedom) video production technique that acquires depth information from multiple 360-degree videos and uses DIBR (depth-based image rendering) to provide arbitrary-viewpoint viewing. To this end, we extend existing planar multi-view techniques to design and implement a method for estimating camera parameters and extracting depth maps from 360 ERP-projected images, investigate its performance, and apply DIBR using the OpenGL-based RVS (Reference View Synthesizer) library.
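
The key extension from planar DIBR to 360-degree content is back-projecting ERP pixels through spherical coordinates, roughly as in the sketch below (the conventions are ours; the actual rendering in the work above is done with the OpenGL-based RVS library).

```python
# Back-project every pixel of an equirectangular (ERP) depth map to a 3-D point:
# longitude/latitude come from the pixel position, the radius from the depth value.
import numpy as np

def erp_to_points(depth):
    height, width = depth.shape
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    lon = (u / width - 0.5) * 2.0 * np.pi          # -pi .. pi across the image width
    lat = (0.5 - v / height) * np.pi               # +pi/2 (top) .. -pi/2 (bottom)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1) * depth[..., None]
```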