• Title/Summary/Keyword: Depth영상 (depth image)

Search Result 1,543, Processing Time 0.022 seconds

Real-time Multiple Stereo Image Synthesis using Depth Information (깊이 정보를 이용한 실시간 다시점 스테레오 영상 합성)

  • Jang Se hoon;Han Chung shin;Bae Jin woo;Yoo Ji sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.4C / pp.239-246 / 2005
  • In this paper, we generate a virtual right image corresponding to the input left image by using given RGB texture data and 8-bit gray-scale depth data. We first transform the depth data into disparity data and then produce the virtual right image from this disparity. We also propose a stereo image synthesis algorithm that adapts to the viewer's position, and a real-time processing algorithm based on a fast LUT (look-up table) method. Finally, we could synthesize a total of eleven stereo images with different viewpoints in real time for an SD-quality texture image with 8-bit depth information.
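The depth-to-disparity conversion and LUT-based right-view synthesis described above can be sketched as follows; the linear mapping, the maximum disparity value, and the hole-filling rule are illustrative assumptions, not the paper's exact parameters:

```python
# Sketch: map 8-bit depth to a pixel shift, then forward-warp the left
# row to synthesize the right view. Assumes larger depth value = nearer.

def depth_to_disparity(depth, max_disparity=16):
    """Map an 8-bit depth value (0..255) to an integer pixel shift."""
    return round(depth / 255 * max_disparity)

# The fast LUT method amounts to precomputing the mapping once:
LUT = [depth_to_disparity(v) for v in range(256)]

def synthesize_right_view(left_row, depth_row, max_disparity=16):
    """Shift each left-image pixel by its disparity to form the right view."""
    width = len(left_row)
    right = [None] * width
    for x in range(width):
        d = depth_to_disparity(depth_row[x], max_disparity)
        if 0 <= x - d < width:
            right[x - d] = left_row[x]
    # Fill disoccluded holes from the nearest neighbour on the left.
    for x in range(width):
        if right[x] is None:
            right[x] = right[x - 1] if x > 0 and right[x - 1] is not None else left_row[x]
    return right
```

A per-pixel table lookup (`LUT[depth]`) replaces the division and rounding in the inner loop, which is what makes real-time synthesis of many viewpoints feasible.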

H.264 Encoding Technique of Multi-view Image expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 영상에 대한 H.264 부호화 기술)

  • Kim, Min-Tae;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.81-90 / 2010
  • This paper presents H.264 coding schemes for multi-view video using the concept of layered depth image (LDI) representation, together with an efficient compression technique for LDI. After converting the data to the proposed representation, we encode the color, depth, and auxiliary data representing the hierarchical structure, respectively. Two kinds of preprocessing approaches are proposed for the multiple color and depth components. In order to compress the auxiliary data, we employ a near-lossless coding method. Finally, we successfully reconstruct the original viewpoints from the decoded data, which shows that the approach is useful for dealing with multiple color and depth data simultaneously.
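A layered depth image stores, at each pixel position, a list of (color, depth) layers gathered from several views warped to one reference view. A minimal sketch of that data structure follows; the class name and the depth-merge tolerance are illustrative assumptions, not the paper's encoder:

```python
# Minimal layered depth image (LDI): one list of (color, depth) layers
# per pixel, nearest surface first. Samples from different views that
# land at the same pixel with similar depth are merged into one layer.

class LayeredDepthImage:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.layers = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, x, y, color, depth, depth_tolerance=1.0):
        """Add a warped sample; merge with an existing layer at similar depth."""
        cell = self.layers[y][x]
        for i, (_, d) in enumerate(cell):
            if abs(d - depth) <= depth_tolerance:
                cell[i] = (color, min(d, depth))  # keep the nearer depth
                return
        cell.append((color, depth))
        cell.sort(key=lambda layer: layer[1])  # nearest first

    def num_layers(self, x, y):
        return len(self.layers[y][x])
```

The per-pixel layer counts form the "auxiliary data representing the hierarchical structure" that the abstract says is coded near-losslessly, while the color and depth layers go to the video codec.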

A Study on Compensation of Disparity for Incorrect 3D Depth in the Triple Fresnel Lenses floating Image System (심중 프렌넬 렌즈 시스템에서 재생된 입체부양영상의 올바른 깊이감을 구현하기 위한 시차보정 방법에 대한 연구)

  • Lee, K.H.;Kim, S.H.;Yoon, Y.S.;Kim, S.K.
    • Korean Journal of Optics and Photonics / v.18 no.4 / pp.246-255 / 2007
  • The floating image system (FIS) is a device that displays an input source in the space between the front surface of the display and an observer; it provides a pseudo 3D depth to the observer when the input source, a real object or a 2D image, is displayed through the optical lens system of the FIS. The advanced floating image system (AFIS) was designed to give a more effective 3D depth than the existing FIS by adding front and rear depth cues to the displayed stereogram, which is used as the input source. The magnitude of disparity and the size of the stereogram are strongly related to each other, and they have been optimized for presenting 3D depth in non-optical lens systems. Thus, when they are used in an optical lens system, these parameters are reduced or magnified, leading to problems such as incorrect 3D depth cues being presented to the observer. Although the size of the stereogram and its disparity are demagnified by the total magnifying power of the optical system, the viewing distance (VD) from the display to the observer and the base distance (BD) between the eyes remain fixed. For this reason, the disparity of the stereogram displayed through the existing FIS does not follow the magnifying power of the total optical system. Therefore, we propose a method that provides correct 3D depth to the observer by compensating the disparity of the stereogram so that the total magnifying power of the optical lens system is preserved in the AFIS. Consequently, the AFIS provides a good floating depth (pseudo 3D) with correct front and rear 3D depth cues to the observer.
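The compensation amounts to pre-scaling the stereogram disparity by the inverse of the total optical magnification, so that the disparity reaching the observer, together with the fixed VD and BD, yields the intended depth. A minimal sketch under that reading; the numeric values and the standard stereoscopic depth formula are illustrative assumptions, not the paper's calibration:

```python
# Sketch of the disparity compensation idea: the optics magnify the
# stereogram (and its disparity) by a factor M, while viewing distance
# (VD) and eye base (BD) stay fixed. Pre-dividing the drawn disparity
# by M keeps the displayed disparity, hence the perceived depth, correct.

def compensate_disparity(target_disparity_mm, magnification):
    """Disparity to draw so that, after M-times magnification by the
    lens system, the observer sees the target disparity."""
    return target_disparity_mm / magnification

def perceived_depth_mm(disparity_mm, vd_mm=600.0, bd_mm=65.0):
    """Depth behind the image plane for an uncrossed disparity
    (standard stereoscopic geometry with fixed VD and BD)."""
    return vd_mm * disparity_mm / (bd_mm - disparity_mm)
```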

Performance Comparisons of Depth Map Post Processing for 3D Video System (3 차원 영상 시스템의 깊이영상 후처리 필터 성능 비교)

  • Lee, Do Hoon;Yoon, Eun Ji;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.81-83 / 2014
  • This paper provides a performance comparison of selected post filters for the depth map in a 3D video system. The compared filters are the dilation filter currently adopted in the 3D-ATM reference software, the bilateral filter, and the depth-oriented depth boundary reconstruction filter. We first introduce these filters in detail, and then show experimental results for each used as a post filter in the 3D video system.
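Of the compared filters, the bilateral filter is the easiest to sketch: each depth sample is replaced by a weighted average of its neighbours, with weights that fall off with both spatial distance and depth difference, so noise is smoothed while depth discontinuities stay sharp. A minimal 1-D version (window size and sigmas are illustrative assumptions):

```python
import math

# 1-D bilateral filter over a row of depth samples. The spatial kernel
# weights near neighbours; the range kernel suppresses neighbours whose
# depth differs a lot, preserving object boundaries in the depth map.

def bilateral_filter_1d(depth, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((depth[i] - depth[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out
```

On a step edge the range kernel gives far-side samples near-zero weight, which is exactly the property that makes bilateral filtering attractive for depth maps, where blurred boundaries turn into synthesis artifacts.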


Depth compression method for 3D video (3차원 영상을 위한 깊이 영상 압축 방법)

  • Nam, Jung-Hak;Hwang, Neung-Joo;Cho, Gwang-Shin;Sim, Dong-Gyu;Lee, Soo-Youn;Bang, Gun;Hur, Nam-Ho
    • Journal of Broadcast Engineering / v.15 no.5 / pp.703-706 / 2010
  • Recently, the need to encode depth images has been rising with the deployment of 3D video services, and the 3DV/FTV group in MPEG has been standardizing the compression of depth map images. Because conventional depth map coding methods encode the depth image independently, without referencing the color image, their coding performance is poor. In this letter, we propose a novel method that rearranges the modes of depth blocks according to the modes of the corresponding color blocks, exploiting the correlation between the color and depth images. In experimental results, the proposed method achieves a bit reduction of 2.2% compared with a coding method based on JSVM.
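One way to read the mode-rearrangement idea: the candidate mode list for a depth block is reordered so that the co-located color block's mode comes first, and since earlier list positions get shorter codes, correlated mode choices cost fewer bits. A minimal sketch under that reading; the mode names and the unary index code are illustrative assumptions, not the letter's exact scheme:

```python
# Reorder a depth block's candidate mode list so the co-located color
# block's mode is first; signal the chosen mode by its list index with
# a simple unary code (index 0 -> 1 bit, index 1 -> 2 bits, ...).

def rearrange_modes(candidate_modes, color_mode):
    """Move the color block's mode to the front of the candidate list."""
    if color_mode in candidate_modes:
        return [color_mode] + [m for m in candidate_modes if m != color_mode]
    return list(candidate_modes)

def mode_index_bits(candidate_modes, chosen_mode):
    """Unary-coded index length: earlier in the list = fewer bits."""
    return candidate_modes.index(chosen_mode) + 1
```

When the depth block does pick the same mode as its color block (the common, correlated case), the rearranged list signals it in 1 bit instead of several.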

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.277-282 / 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. In order to generate the eye contact image, we capture a pair of color and depth videos. Then, the single foreground user is separated from the background. Since the raw depth data include several types of noise, we apply a joint bilateral filtering method. We then apply a discontinuity-adaptive depth filter to the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system is efficient in realizing eye contact, providing realistic telepresence.
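The foreground/background separation step can be sketched very simply for the single-user case: pixels with a valid depth nearer than a threshold are kept as the user. The threshold value and the 0-means-invalid convention (Kinect reports 0 mm for unmeasured pixels) are assumptions for this illustration, not the paper's segmentation:

```python
# Depth-threshold segmentation for a single user in front of a Kinect:
# keep pixels whose depth is valid (non-zero) and nearer than threshold.

def segment_foreground(depth_mm, threshold_mm=1500):
    """Return a per-pixel mask: True where the user (foreground) is."""
    return [[0 < d < threshold_mm for d in row] for row in depth_mm]
```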

Intermediate Depth Image Generation using Disparity Increment of Stereo Depth Images (스테레오 깊이영상의 변위증분을 이용한 중간시점 깊이영상 생성)

  • Koo, Ja-Myung;Seo, Young-Ho;Choi, Hyun-Jun;Yoo, Ji-Sang;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.17 no.2 / pp.363-373 / 2012
  • This paper proposes a method to generate a depth image at an arbitrary intermediate viewpoint, targeting video services such as free-viewpoint video, auto-stereoscopy, and holography. It assumes that the leftmost and rightmost depth images are given and that both have been camera-calibrated and image-rectified. The method calculates and uses a disparity increment per depth value, obtained here by stereo matching between the two given depth images so as to cover more general cases. The disparity increment is used to find, for each depth in the given images, its location in the intermediate-viewpoint depth image (IVPD). Thus, the method produces two IVPDs, one from the left image and one from the right. Noise is removed and holes are filled in each IVPD, and the two results are combined into the final IVPD. The proposed method was implemented and applied to several test sequences. The results show that the quality of the generated IVPD corresponds to 33.84 dB PSNR on average, and that generating an HD IVPD takes about 1 second. We consider this image quality quite good given the low correspondence among the left, intermediate, and right images in the test sequences. If the execution speed is improved, we believe the proposed method can be very useful for generating an IVPD at an arbitrary viewpoint.
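The core warping step can be sketched in 1-D: each depth pixel is shifted by a fraction alpha of its disparity, where the disparity is the depth value times the per-depth disparity increment. The linear disparity model, the nearer-sample-wins collision rule, and the left-neighbour hole filling are simplifying assumptions for illustration:

```python
# Forward-warp a left depth row to an intermediate viewpoint.
# alpha = 0 keeps the left view, alpha = 1 reaches the right view.

def warp_to_intermediate(depth_row, disparity_increment, alpha):
    width = len(depth_row)
    mid = [None] * width
    for x in range(width):
        shift = round(alpha * disparity_increment * depth_row[x])
        if 0 <= x - shift < width:
            target = x - shift
            # On collisions keep the nearer sample (larger depth value).
            if mid[target] is None or depth_row[x] > mid[target]:
                mid[target] = depth_row[x]
    # Fill holes from the nearest warped neighbour on the left.
    for x in range(1, width):
        if mid[x] is None:
            mid[x] = mid[x - 1]
    return mid
```

Doing this once from the left image (alpha) and once from the right image (1 - alpha, with shifts reversed) gives the two IVPDs that the paper then denoises and merges.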

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions: Part B / v.8B no.5 / pp.549-556 / 2001
  • The general problem of recovering 3D from 2D imagery requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, so as to implement 3D video in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on its depth. Full-search motion estimation based on a block matching algorithm is exploited as a first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing the monocular motion picture, and that calculates the average frame depth and each region's depth relative to that average. Simulation results show that the depths assigned to regions belonging to near and distant objects accord with the relative depths that the human visual system perceives.
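The two building blocks above can be sketched in 1-D: a full-search block match to get motion vectors, and the depth-from-translation property that image motion is inversely proportional to depth, so relative depth can be taken as average motion over region motion. Both functions are illustrative sketches under those assumptions, not the paper's implementation:

```python
# Full-search block matching (1-D): best integer shift by SAD.
def full_search_shift(cur_block, ref_row, pos, search_range):
    n = len(cur_block)
    best_shift, best_sad = 0, float('inf')
    for s in range(-search_range, search_range + 1):
        start = pos + s
        if start < 0 or start + n > len(ref_row):
            continue
        sad = sum(abs(a - b) for a, b in zip(cur_block, ref_row[start:start + n]))
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift

# Under pure camera translation, motion ~ 1/depth, so relative depth of
# a region = average motion magnitude / region motion magnitude.
def relative_depths(motion_magnitudes):
    moving = [m for m in motion_magnitudes if m > 0]
    avg = sum(moving) / len(moving)
    return [avg / m if m > 0 else float('inf') for m in motion_magnitudes]
```

A region moving faster than the frame average thus gets a relative depth below 1 (nearer), and a slower region a depth above 1 (farther), matching the abstract's near/far observation.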


H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.2 / pp.43-51 / 2014
  • Because multi-view video including depth images involves a huge amount of data, a new compression encoding technique is necessary for its storage and transmission. Layered depth image (LDI) is an efficient representation of multi-view video data: it builds a data structure that merges the multi-view color and depth images. This paper suggests an efficient method to compress such content by using the layered depth image representation and applying video compression encoding with 3D warping. We propose an enhanced compression method that combines the layered depth image representation with H.264/AVC video coding. Experimental results confirm high compression performance and good quality of the reconstructed images.

Depth Video Coding Method for Spherical Object (구형 객체의 깊이 영상 부호화 방법)

  • Kwon, Soon-Kak;Lee, Dong-Seok;Park, Yoo-Hyun
    • Journal of Korea Society of Industrial Information Systems / v.21 no.6 / pp.23-29 / 2016
  • In this paper, we propose a depth video coding method that finds the closest sphere to a captured spherical object from the depth information. For each block, we fit the closest sphere to the captured spherical object using the method of least squares. Then, we estimate the depth values from the fitted sphere and encode the depth video as the difference between the measured and estimated depth values. We also encode the parameters of the estimated sphere together with the encoded pixels of the block. The proposed method improves coding efficiency by up to 81% compared with the conventional DPCM method.
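The least-squares sphere fit can be made linear with a standard trick: expanding (x-a)^2 + (y-b)^2 + (z-c)^2 = r^2 gives x^2 + y^2 + z^2 = 2ax + 2by + 2cz + (r^2 - a^2 - b^2 - c^2), which is linear in the unknowns (2a, 2b, 2c, k). A minimal sketch under that formulation follows; only the fit is sketched, the residual coding against the fitted sphere is omitted:

```python
# Algebraic least-squares sphere fit: solve the normal equations
# A^T A u = A^T f for u = (2a, 2b, 2c, k), where each point contributes
# the row (x, y, z, 1) and the target f = x^2 + y^2 + z^2.

def solve4(m, v):
    """Gaussian elimination with partial pivoting for a 4x4 system."""
    n = 4
    aug = [m[i][:] + [v[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

def fit_sphere(points):
    """Fit a sphere to 3-D points; returns (center, radius)."""
    ata = [[0.0] * 4 for _ in range(4)]
    atf = [0.0] * 4
    for x, y, z in points:
        row = (x, y, z, 1.0)
        f = x * x + y * y + z * z
        for i in range(4):
            atf[i] += row[i] * f
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    u = solve4(ata, atf)
    a, b, c = u[0] / 2, u[1] / 2, u[2] / 2
    r = (u[3] + a * a + b * b + c * c) ** 0.5
    return (a, b, c), r
```

The encoder would then predict each block's depth from the fitted sphere and code only the small measured-minus-predicted residuals plus the four sphere parameters, which is where the gain over plain DPCM comes from.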