• Title/Summary/Keyword: Depth image (Depth영상)

Low-Resolution Depth Map Upsampling Method Using Depth-Discontinuity Information (깊이 불연속 정보를 이용한 저해상도 깊이 영상의 업샘플링 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.875-880 / 2013
  • When we generate 3D video that provides an immersive and realistic feeling to users, depth information of the scene is essential. Since the resolution of the depth map captured by a depth sensor is lower than that of the color image, we need to upsample the low-resolution depth map for high-resolution 3D video generation. In this paper, we propose a depth upsampling method using depth-discontinuity information. Using the high-resolution color image and the low-resolution depth map, we detect depth-discontinuity regions. Then, we define an energy function for the depth map upsampling and optimize it using the belief propagation method. Experimental results show that the proposed method outperforms other depth upsampling methods in terms of the bad pixel rate.
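
The sketch below illustrates the general idea of color-guided depth upsampling described above. A joint bilateral upsampling filter stands in for the paper's energy minimization with belief propagation; the function name and parameters are illustrative assumptions, not the authors' implementation.

```python
# Color-guided depth upsampling via joint bilateral upsampling (illustrative stand-in).
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, radius=4,
                             sigma_space=2.0, sigma_color=10.0):
    """Upsample a low-resolution depth map guided by a high-resolution color image."""
    H, W = color_hr.shape[:2]
    h, w = depth_lr.shape
    sy, sx = H / h, W / w                        # upsampling factors
    depth_hr = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            cy, cx = int(y / sy), int(x / sx)    # matching low-resolution coordinate
            y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
            x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
            patch = depth_lr[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # spatial weights on the low-resolution grid
            w_s = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma_space ** 2))
            # range weights from the high-resolution color image
            c_center = color_hr[y, x].astype(np.float64)
            c_patch = color_hr[(yy * sy).astype(int).clip(0, H - 1),
                               (xx * sx).astype(int).clip(0, W - 1)].astype(np.float64)
            w_c = np.exp(-np.sum((c_patch - c_center) ** 2, axis=-1) / (2 * sigma_color ** 2))
            w_total = w_s * w_c
            depth_hr[y, x] = np.sum(w_total * patch) / (np.sum(w_total) + 1e-8)
    return depth_hr
```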

Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.29-35 / 2012
  • Depth image-based rendering (DIBR) is a technique for rendering virtual views from a color image and the corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called dis-occlusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which utilizes infrared structured light. After we capture a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and filtered by median filtering to reduce the truncation error. Then, the color image is back-projected to the virtual viewpoint using the warped depth map. To fill in the remaining holes caused by dis-occlusion, we perform a background-based image in-painting operation. Finally, we obtain the synthesized image without any dis-occlusion. Experimental results show that the proposed algorithm generates very natural images in real time.
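
A minimal sketch of the warp-and-back-project stages of a DIBR pipeline like the one described above, assuming rectified parallel cameras (so warping reduces to a horizontal disparity shift) and an 8-bit depth map in which larger values are closer. The median filtering of the warped depth is omitted and the background-based in-painting is simplified to a left-neighbor fill; all names are illustrative.

```python
# Simplified DIBR: forward-warp depth, back-project color, fill dis-occlusion holes.
import numpy as np

def warp_depth(depth, baseline_scale):
    """Forward-warp an inverse-depth map by a disparity proportional to its value."""
    H, W = depth.shape
    warped = np.zeros_like(depth)
    disparity = (baseline_scale * depth).astype(int)
    for y in range(H):
        for x in range(W):
            xv = x + disparity[y, x]
            if 0 <= xv < W and depth[y, x] > warped[y, xv]:
                warped[y, xv] = depth[y, x]       # keep the closest surface (z-buffer)
    return warped

def synthesize_view(color, depth, baseline_scale):
    H, W = depth.shape
    warped_depth = warp_depth(depth, baseline_scale)
    view = np.zeros_like(color)
    hole = np.ones((H, W), dtype=bool)
    disparity = (baseline_scale * warped_depth).astype(int)
    for y in range(H):
        for x in range(W):
            if warped_depth[y, x] > 0:
                xs = x - disparity[y, x]          # back-project into the source view
                if 0 <= xs < W:
                    view[y, x] = color[y, xs]
                    hole[y, x] = False
    # crude background fill: copy the nearest valid pixel from the left
    for y in range(H):
        for x in range(1, W):
            if hole[y, x] and not hole[y, x - 1]:
                view[y, x] = view[y, x - 1]
                hole[y, x] = False
    return view
```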


The Enhancement of the Boundary-Based Depth Image (경계 기반의 깊이 영상 개선)

  • Ahn, Yang-Keun;Hong, Ji-Man
    • Journal of the Korea Society of Computer and Information / v.17 no.4 / pp.51-58 / 2012
  • Recently, 3D technology based on depth images has been widely used in various fields, including 3D space recognition, image acquisition, interaction, and games. A depth camera is used to produce the depth image, and various efforts have been made to improve its quality. In this paper, we suggest using an area-based Canny edge detector to improve the depth image when applying depth-camera-based 3D technology. The suggested method provides an improved depth image through pre-processing and post-processing that correct the quality deterioration which may occur when acquiring a depth image in a constrained environment. For objective image quality evaluation, we confirmed an improvement of up to 0.42 dB by applying the improved depth image to the virtual view reference software and comparing the results. In addition, the DSCQS (Double Stimulus Continuous Quality Scale) method confirmed the effectiveness of the improved depth image through a quantitative evaluation of subjective quality.
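
A brief sketch of how a Canny-edge-guided refinement might look: boundaries detected in the color image protect depth discontinuities while the rest of the depth map is denoised. The thresholds, kernel sizes, and the median-filter choice are assumptions for illustration, not the paper's pre-/post-processing steps.

```python
# Edge-guided depth refinement: denoise away from color-image boundaries.
import cv2
import numpy as np

def refine_depth_with_edges(depth, color, low=50, high=150, edge_band=3):
    """depth: single-channel 8-bit depth image (assumption); color: BGR image."""
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    # dilate the edge map so a small band around each boundary is protected
    band = cv2.dilate(edges, np.ones((edge_band, edge_band), np.uint8))
    smoothed = cv2.medianBlur(depth, 5)            # suppress depth noise
    refined = np.where(band > 0, depth, smoothed)  # keep original values at boundaries
    return refined
```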

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering aims to generate synthetic images by processing the camera view with a graphics engine, little has been known about how to feed the given images and depth information into the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space with camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views and their depth images in real time.
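
The sketch below shows the geometry such a pipeline implements, independent of OpenGL: each pixel is back-projected with the source intrinsics and depth, transformed into the virtual camera, and re-projected. The matrix names follow the usual pinhole-camera convention and are not taken from the paper; splatting and occlusion handling are omitted.

```python
# Back-project pixels with depth, transform to a virtual camera, and re-project.
import numpy as np

def synthesize_points(depth, color, K_src, K_virt, R, t):
    """Return projected pixel coordinates and colors for the virtual view.

    depth: metric depth per pixel (H, W); K_src, K_virt: 3x3 intrinsics;
    R, t: rotation and translation from the source to the virtual camera.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    Z = depth.reshape(-1)
    X_src = np.linalg.inv(K_src) @ pix * Z          # back-projection to 3D
    X_virt = R @ X_src + t.reshape(3, 1)            # change of camera frame
    p = K_virt @ X_virt
    uv = (p[:2] / p[2:]).T                          # perspective divide
    return uv, color.reshape(-1, color.shape[-1])
```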

Depth Interpolation Method using Random Walk Probability Model (랜덤워크 확률 모델을 이용한 깊이 영상 보간 방법)

  • Lee, Gyo-Yoon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.12C / pp.738-743 / 2011
  • Depth maps are important data for high-quality 3D broadcasting. Although commercially available depth cameras capture high-accuracy depth maps in real time, their resolutions are much lower than those of the corresponding color images due to technical limitations. In this paper, we propose a depth map up-sampling method using a high-resolution color image and a low-resolution depth map. We define a random-walk probability model over an operation unit that contains the nearest seed pixels. The proposed method is well suited to matching boundaries between the color image and the depth map. Experimental results show that our method enhances the depth map resolution successfully.
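
A rough sketch of the propagation idea: sparse seed depths from the low-resolution map are diffused over the high-resolution grid with weights derived from color similarity, which approximates a random-walk solution by simple iteration. The weighting scheme and the iterative solver are simplifying assumptions rather than the paper's formulation.

```python
# Color-weighted diffusion of sparse depth seeds (random-walk-like approximation).
import numpy as np

def diffuse_depth(seed_depth, seed_mask, color, iters=200, sigma=10.0):
    """seed_depth/seed_mask: sparse known depths and their locations; color: (H, W, 3) guide."""
    depth = seed_depth.astype(np.float64)
    color = color.astype(np.float64)
    for _ in range(iters):
        acc = np.zeros_like(depth)
        wsum = np.zeros_like(depth)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            shifted_d = np.roll(depth, (dy, dx), axis=(0, 1))
            shifted_c = np.roll(color, (dy, dx), axis=(0, 1))
            # neighbors with similar color get larger weights
            w = np.exp(-np.sum((color - shifted_c) ** 2, axis=-1) / (2 * sigma ** 2))
            acc += w * shifted_d
            wsum += w
        depth = acc / (wsum + 1e-8)
        depth[seed_mask] = seed_depth[seed_mask]   # keep seed pixels fixed
    return depth
```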

Development of a Multi-view Image Generation Simulation Program Using Kinect (키넥트를 이용한 다시점 영상 생성 시뮬레이션 프로그램 개발)

  • Lee, Deok Jae;Kim, Minyoung;Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.818-819 / 2014
  • Recently, many studies have been conducted on utilizing DIBR (depth-image-based rendering) intermediate images for three-dimensional displays that do not require stereoscopic glasses. However, prior works have used expensive depth cameras to obtain high-resolution depth images, since DIBR-based intermediate image generation requires accurate depth information. In this study, we developed a simulation program that generates multi-view intermediate images from the depth and color images of a Microsoft Kinect. The simulation supports the acquisition of multi-view intermediate images from the Kinect's low-resolution depth and color images and provides an integrated service for evaluating the quality of the intermediate images. This paper describes the architecture and system implementation of the simulation program.
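
A compact sketch of how such a simulation might drive view generation: the Kinect's metric depth is converted to the common 8-bit inverse-depth representation, and a set of intermediate views is produced by scaling the virtual baseline. The shift-based warp omits z-buffering and hole filling and stands in for a full DIBR pipeline; all names and parameter values are assumptions.

```python
# Generate N intermediate views from Kinect depth by scaling the virtual baseline.
import numpy as np

def to_inverse_depth(depth_mm, z_near=500.0, z_far=4000.0):
    """Map metric Kinect depth (mm) to the usual 8-bit inverse-depth representation."""
    z = np.clip(depth_mm.astype(np.float64), z_near, z_far)
    return np.round(255.0 * (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)).astype(np.uint8)

def generate_views(color, depth_mm, n_views=9, max_disparity=16.0):
    inv_depth = to_inverse_depth(depth_mm)
    H, W = inv_depth.shape
    views = []
    for i in range(n_views):
        scale = (i - (n_views - 1) / 2.0) / ((n_views - 1) / 2.0)   # baseline scale in [-1, 1]
        disparity = (scale * max_disparity * inv_depth / 255.0).astype(int)
        view = np.zeros_like(color)                                 # unfilled pixels stay black
        for y in range(H):
            for x in range(W):
                xv = x + disparity[y, x]
                if 0 <= xv < W:
                    view[y, xv] = color[y, x]
        views.append(view)
    return views
```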


Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using the 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates 10 additional views at 7 fps.
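
A short sketch of the two clean-up steps mentioned above, assuming an 8-bit warped depth map (larger values are closer, zeros mark warping holes) and the availability of opencv-contrib (cv2.ximgproc): narrow warping holes are filled from surrounding background depths, then a joint bilateral filter aligns depth boundaries with the color image. Kernel sizes and filter parameters are illustrative.

```python
# Hole filling from background depths, then joint bilateral refinement of warped depth.
import cv2
import numpy as np

def fill_holes_with_background(warped_depth, hole_value=0):
    """Fill narrow hole pixels with surrounding background (farthest) depth values."""
    depth = warped_depth.copy()                # assumed uint8, larger = closer
    holes = depth == hole_value
    tmp = depth.copy()
    tmp[holes] = 255                           # make holes invisible to the min filter
    # erosion (min filter) propagates the farthest surrounding depth into the holes
    background = cv2.erode(tmp, np.ones((5, 5), np.uint8), iterations=3)
    depth[holes] = background[holes]
    return depth

def refine_warped_depth(warped_depth, color, d=9, sigma_color=25, sigma_space=9):
    filled = fill_holes_with_background(warped_depth, hole_value=0)
    # guided (joint) bilateral filtering: color edges steer the depth smoothing
    return cv2.ximgproc.jointBilateralFilter(color, filled, d, sigma_color, sigma_space)
```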

A Preprocessing Algorithm for Layered Depth Image Coding (계층적 깊이영상 정보의 압축 부호화를 위한 전처리 방법)

  • 윤승욱;김성열;호요성
    • Journal of Broadcast Engineering / v.9 no.3 / pp.207-213 / 2004
  • The layered depth image (LDI) is an efficient approach to representing three-dimensional objects with complex geometry for image-based rendering (IBR). An LDI contains several attribute values together with multiple layers at each pixel location. In this paper, we propose an efficient preprocessing algorithm to compress the depth information of LDI. Considering each depth value as a point in the two-dimensional space, we compute the minimum distance between a straight line passing through the previous two values and the current depth value. Finally, the minimum distance replaces the current attribute value. The proposed algorithm reduces the variance of the depth information; therefore, it improves the transform and coding efficiency.
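
A direct sketch of the prediction step described above: each depth value is treated as a point (index, value), and the value stored at position i becomes the perpendicular distance from that point to the line through the two previous points. Using a signed distance, so that the original values remain recoverable at the decoder, is an added assumption.

```python
# Replace each depth value with its distance to the line through the two previous values.
import numpy as np

def ldi_depth_residuals(depth_line):
    """depth_line: 1-D array of depth values along one LDI scan direction."""
    d = np.asarray(depth_line, dtype=np.float64)
    out = d.copy()                              # first two values are kept as-is
    for i in range(2, len(d)):
        # line through the previous two points (i-2, d[i-2]) and (i-1, d[i-1])
        p0 = np.array([i - 2, d[i - 2]])
        p1 = np.array([i - 1, d[i - 1]])
        p = np.array([i, d[i]])
        dx, dy = p1 - p0
        px, py = p - p0
        # signed perpendicular distance from the current point to that line
        out[i] = (dx * py - dy * px) / np.hypot(dx, dy)
    return out
```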

Analysis of Depth Map Resolution for Coding Performance in 3D Video System (깊이영상 해상도 조절에 따른 3 차원 비디오 부호화 성능 분석)

  • Lee, Do Hoon;Yang, Yun mo;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.452-454 / 2015
  • This paper provides coding performance comparisons for different depth map resolutions in a 3D video system. In a multiview-plus-depth system, the depth map is used for synthesized-view rendering and affects the quality of the synthesized views. In this paper, we present experimental results according to the depth map resolution in the 3D video system and show the performance variation with a dilation filter.
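
A small sketch of the kind of preprocessing implied above: the depth map is downsampled and a dilation filter is applied before coding, then upsampled again on the decoder side. The scale factor, interpolation mode, and kernel size are assumptions made for illustration.

```python
# Downsample and dilate a depth map before coding; upsample after decoding.
import cv2
import numpy as np

def prepare_depth_for_coding(depth, scale=0.5, dilate_ksize=3):
    """Reduce depth map resolution and dilate it prior to encoding."""
    small = cv2.resize(depth, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
    kernel = np.ones((dilate_ksize, dilate_ksize), np.uint8)
    # dilation expands near-depth (large) values, protecting foreground boundaries
    return cv2.dilate(small, kernel)

def restore_depth_after_decoding(decoded_small, original_shape):
    h, w = original_shape
    return cv2.resize(decoded_small, (w, h), interpolation=cv2.INTER_NEAREST)
```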


Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera (깊이 카메라를 이용한 객체 분리 및 고해상도 깊이 맵 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.9 / pp.751-756 / 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, it suffers from inherent noise and distortion. Therefore, we perform several preprocessing steps such as image enhancement, segmentation, and 3D warping, and then use the TOF depth data to obtain the depth-discontinuity regions. Then, we extract the foreground object and generate a depth map corresponding to the color image. The experimental results show that the proposed method efficiently generates the depth map even around object boundaries and in textureless regions.
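
A minimal sketch of a depth-threshold-based foreground extraction consistent with the description above: TOF noise is reduced with a median filter, a depth range separates the near object from the background, and morphology cleans up the mask. The threshold values and kernel size are illustrative assumptions, not the paper's parameters.

```python
# Foreground mask from TOF depth: denoise, threshold by depth range, clean with morphology.
import cv2
import numpy as np

def extract_foreground(tof_depth_mm, near_mm=500, far_mm=1500):
    """tof_depth_mm: single-channel TOF depth in millimetres."""
    depth = cv2.medianBlur(tof_depth_mm.astype(np.float32), 5)    # suppress TOF noise
    mask = ((depth > near_mm) & (depth < far_mm)).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)        # fill small gaps
    return mask
```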