• Title/Summary/Keyword: Depth Map Generation (깊이맵 생성)


Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In;Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.1 / pp.31-43 / 2014
  • A depth map is an important component of stereoscopic image generation. Since the depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied for the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric. In addition, the subjective quality is compared using virtual views generated by DIBR (depth image based rendering). However, works analyzing the relation between depth map upsampling and stereoscopic images are relatively few. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross correlation and linear regression. Experimental results demonstrate that edge PSNR has the highest correlation with visual fatigue and the blur metric the lowest. Further, from the linear regression we obtain the relative weights of the objective measurements, and we introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.
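The analysis above pairs objective upsampling scores with subjective 3D fatigue scores via cross correlation and linear regression. Below is a minimal sketch of that kind of analysis with NumPy; the metric values, the mean-opinion scores, and the set of metrics are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Objective scores of N upsampling results (hypothetical values).
edge_psnr = np.array([31.2, 28.7, 33.5, 30.1, 29.4])
sharpness = np.array([0.62, 0.55, 0.70, 0.60, 0.58])
blur      = np.array([0.41, 0.47, 0.35, 0.43, 0.45])
# Subjective 3D comfort scores (mean opinion scores) for the same results (hypothetical).
mos       = np.array([3.8, 3.1, 4.2, 3.6, 3.3])

# Cross correlation between each objective metric and the subjective score.
for name, metric in [("edge PSNR", edge_psnr), ("sharpness", sharpness), ("blur", blur)]:
    r = np.corrcoef(metric, mos)[0, 1]
    print(f"{name:10s} r = {r:+.3f}")

# Linear regression mos ~ w0 + w1*edge_psnr + w2*sharpness + w3*blur;
# the fitted weights indicate the relative importance of each objective measure.
X = np.column_stack([np.ones_like(mos), edge_psnr, sharpness, blur])
w, *_ = np.linalg.lstsq(X, mos, rcond=None)
print("regression weights:", w)
```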

Stereoscopic Image Generation with Optimal Disparity using Depth Map Preprocessing and Depth Information Analysis (깊이맵의 전처리와 깊이 정보의 기하학적 분석을 통한 최적의 스테레오스코픽 영상 자동 생성 기법)

  • Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.14 no.2 / pp.164-177 / 2009
  • The DIBR (depth image-based rendering) method gives viewers a sense of depth using one color image and its corresponding depth image. The quality of the generated left and right images depends on the baseline distance of the virtual cameras corresponding to their views. In this paper, we present a novel method for enhancing the sense of depth by adjusting the baseline distance of the virtual cameras. Geometric analysis shows that the sense of depth improves as disparity increases, thanks to the reduction of image distortion, but this analysis does not account for the image degradation that larger disparity entails. Experimental results show that there is a maximum bound on the disparity increase, imposed by image degradation and the visual field. Since reducing the image degradation raises that bound, we add a depth map preprocessing step. Because interactive services in which viewers control the disparity and view position can also be provided, the proposed method can be applied to mobile broadcasting systems such as DMB as well as to 3DTV systems.
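To illustrate the role of the virtual-camera baseline in DIBR, here is a rough sketch of forward-warping a color image by a disparity derived from depth; the camera parameters, the depth-to-disparity mapping, and the lack of occlusion and hole handling are simplifying assumptions, not the paper's geometric analysis or preprocessing.

```python
import numpy as np

def render_view(color, depth, baseline, focal, sign=+1):
    """Forward-warp a color image using disparity = sign * baseline * focal / depth (pixels).

    A larger baseline gives larger disparities and a stronger depth sensation,
    but also more disocclusion holes (left as zeros here).
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    disparity = np.rint(sign * baseline * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xt = x + disparity[y, x]
            if 0 <= xt < w:
                out[y, xt] = color[y, x]
    return out

# Hypothetical color image and metric depth map.
color = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
depth = np.random.uniform(1.0, 5.0, (120, 160))
left  = render_view(color, depth, baseline=0.03, focal=500, sign=+1)
right = render_view(color, depth, baseline=0.03, focal=500, sign=-1)
```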

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.8-17 / 2011
  • Recently, fusion camera systems that consist of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. The physical limitations of depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module comprising camera calibration, three-dimensional warping, and hole filling is needed to generate a high-resolution depth map aligned with the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm guarantees a high-quality result in the presence of camera calibration errors, and experimental comparison with other data fusion techniques shows its superiority.
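A simplified sketch of the confidence-gated idea is given below: depth samples whose estimated confidence falls under a threshold are excluded from a color-guided, kernel-weighted aggregation. The function name, parameters, and the joint-bilateral-style kernel are illustrative stand-ins for the paper's modified kernel regression.

```python
import numpy as np

def upsample_depth(color_hr, depth_pre, confidence, radius=4,
                   sigma_s=3.0, sigma_c=0.1, conf_thresh=0.5):
    """color_hr: HxWx3 high-res color in [0,1], depth_pre: HxW pre-upsampled (warped) depth,
    confidence: HxW in [0,1] estimated from the color/depth interrelation."""
    h, w = depth_pre.shape
    out = np.copy(depth_pre)
    gray = color_hr.mean(axis=2)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if confidence[yy, xx] < conf_thresh:
                        continue                      # drop unreliable depth samples
                    ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    wc = np.exp(-((gray[y, x] - gray[yy, xx]) ** 2) / (2 * sigma_c ** 2))
                    acc  += ws * wc * depth_pre[yy, xx]
                    wsum += ws * wc
            if wsum > 0:
                out[y, x] = acc / wsum
    return out

# Hypothetical inputs for a quick run.
color_hr   = np.random.rand(64, 64, 3)
depth_pre  = np.random.rand(64, 64)
confidence = np.random.rand(64, 64)
refined = upsample_depth(color_hr, depth_pre, confidence)
```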

Anaglyph Image Generation Using Binocular Disparity and Depth Information (양안시차와 깊이 정보를 이용한 애너그리프 영상 생성)

  • Mok, Seung-Jun;Jung, Kyung-Boo;Kim, Il-Moek;Choi, Byung-Uk
    • Proceedings of the Korean Information Science Society Conference / 2010.06c / pp.521-524 / 2010
  • Methods for viewing images stereoscopically include glasses-based approaches such as polarization and time-sequential display, glasses-free approaches such as parallax barrier, lenticular, and multi-view displays, and fully three-dimensional methods. Among these, the anaglyph method is commonly used because it is the easiest and cheapest to produce with glasses. Many ways of generating anaglyph images exist, and research continues on producing images that maximize the stereoscopic effect while minimizing eye fatigue. In this paper, we propose a new anaglyph image generation method that satisfies these conditions. The proposed method computes a depth map and binocular disparity to generate an anaglyph image with the strongest stereoscopic effect. The disparity information obtained from the depth map and the experimentally determined binocular disparity of the human eyes are applied to the left image. The left and right images are then combined with an optimized color-mixing method to produce the proposed anaglyph image. By using the depth map, the proposed method resolves the problem of conventional anaglyph methods that ignore focus, and it reduces eye fatigue because it considers binocular disparity and uses optimized color mixing.
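The following sketch illustrates depth-driven anaglyph generation in the spirit of the abstract: the left view is synthesized by shifting pixels according to a disparity derived from the depth map, and the views are merged per color channel. The plain red/cyan mixing used here is an assumption; the paper's optimized color-mixing weights are not reproduced.

```python
import numpy as np

def make_anaglyph(image, depth, max_disp=12):
    """image: HxWx3 uint8, depth: HxW in [0,1] (1.0 = nearest). Returns a red/cyan anaglyph."""
    h, w, _ = image.shape
    disparity = (depth * max_disp).astype(int)
    left = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xt = x - disparity[y, x]            # shift the left view according to depth
            if 0 <= xt < w:
                left[y, xt] = image[y, x]
    right = image                               # the original acts as the right view
    anaglyph = np.dstack([left[..., 0],         # red channel from the left image
                          right[..., 1],        # green channel from the right image
                          right[..., 2]])       # blue channel from the right image
    return anaglyph

image = np.random.randint(0, 255, (80, 120, 3), dtype=np.uint8)
depth = np.random.rand(80, 120)                 # hypothetical normalized depth map
result = make_anaglyph(image, depth)
```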


Boundary Noise Removal in Synthesized Intermediate Viewpoint Images for 3D Video (3차원 비디오의 중간시점 합성영상의 경계 잡음 제거 방법)

  • Lee, Cheon;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2008.11a / pp.109-112 / 2008
  • The 3D video system currently being standardized by MPEG (Moving Picture Experts Group) is a next-generation broadcasting system that uses multi-view video together with depth video so that users can select an arbitrary viewpoint or view 3D images on 3D display devices such as stereoscopic displays. To provide more viewpoints than the limited number of input views, an intermediate-view interpolation module is essential. View shifting is straightforward using the depth values supplied to this system, and the quality of the interpolated image is determined by the accuracy of those depth values. Depth maps are usually obtained by computer-vision-based stereo matching, and depth errors mainly occur in depth-discontinuity regions such as object boundaries. Such errors create unwanted noise in the background of the synthesized intermediate image. Previous methods synthesized the intermediate image under the assumption that the object boundaries of the measured depth map coincide with the object boundaries of the image; in practice the two boundaries do not coincide because of the depth measurement process, so part of the foreground is synthesized into the background and produces noise. In this paper, we propose a method for handling the boundary noise that arises when interpolating intermediate-view images from a depth map. After synthesizing the disoccluded regions, we identify the regions along them where boundary noise can occur and then remove the noise by using a noise-free reference image. Experimental results show natural synthesized images with the background noise removed.
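A rough sketch of the boundary-noise handling idea follows: after synthesis, a band around the disoccluded (hole) regions is located and those pixels are taken from a noise-free reference view. The morphological band width, the SciPy dilation, and the simple pixel replacement are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def suppress_boundary_noise(synth, hole_mask, reference, band=3):
    """synth: HxWx3 synthesized view, hole_mask: HxW bool (True where disoccluded),
    reference: HxWx3 noise-free reference view warped to the same viewpoint."""
    band_mask = binary_dilation(hole_mask, iterations=band) & ~hole_mask
    out = synth.copy()
    out[band_mask] = reference[band_mask]       # replace suspect boundary pixels
    return out

synth     = np.random.randint(0, 255, (80, 120, 3), dtype=np.uint8)
reference = np.random.randint(0, 255, (80, 120, 3), dtype=np.uint8)
holes     = np.zeros((80, 120), dtype=bool)
holes[:, 50:55] = True                          # pretend disocclusion band
cleaned = suppress_boundary_noise(synth, holes, reference)
```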


Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering / v.20 no.3 / pp.368-378 / 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV) because it can easily provide 3D content. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (depth image based rendering) to produce a stereoscopic image. However, except for some particular images, such depth cues are rare, so consistent depth map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video content. From this viewpoint, this paper proposes a novel method applicable to general types of image, utilizing saliency as well as edges. To generate a depth map, geometric perspective, an affinity model, and a binomial filter are used. In the experiments, the proposed method was applied to 24 video clips with a variety of content. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D content.
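As a loose illustration of combining a geometric-perspective prior with a saliency map and smoothing the result with a binomial filter, consider the sketch below; the blending weight, the bottom-is-near perspective assumption, and the treatment of saliency as a given input are all illustrative choices, not the paper's affinity model.

```python
import numpy as np
from math import comb

def binomial_kernel(n=7):
    """1D binomial filter coefficients (an approximation of a Gaussian)."""
    k = np.array([comb(n - 1, i) for i in range(n)], dtype=float)
    return k / k.sum()

def depth_from_saliency(saliency, alpha=0.6):
    """saliency: HxW values in [0, 1]; the geometric prior assumes lower rows are nearer."""
    h, w = saliency.shape
    perspective = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))  # top (far) -> bottom (near)
    depth = alpha * saliency + (1.0 - alpha) * perspective
    k = binomial_kernel()
    depth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, depth)  # smooth rows
    depth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, depth)  # smooth columns
    return depth

saliency = np.random.rand(90, 160)   # placeholder saliency map
depth_map = depth_from_saliency(saliency)
```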

Depth Map Generation Using Infocused and Defocused Images (초점 영상 및 비초점 영상으로부터 깊이맵을 생성하는 방법)

  • Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.3 / pp.362-371 / 2014
  • Blur variation caused by camera defocusing provides a useful cue for depth estimation. The depth-from-defocus (DFD) technique calculates the amount of blur present in an image, given that the blur amount is directly related to scene depth. Conventional DFD methods use two defocused images, which can yield a low-quality estimated depth map as well as a low-quality reconstructed infocused image. To solve this, a new DFD methodology based on an infocused and a defocused image is proposed in this paper. In the proposed method, the outcome of Subbarao's DFD is combined with a novel edge blur estimation method so that improved blur estimation can be achieved. In addition, a saliency map mitigates the ill-posed nature of blur estimation in regions with low intensity variation. To validate the feasibility of the proposed method, twenty sets of infocused and defocused images at 2K FHD resolution were acquired from a camera with focus control. The 3D stereoscopic images generated from the estimated depth maps and the input infocused images delivered satisfactory 3D perception in terms of the spatial depth of scene objects.
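The defocus cue can be illustrated with a simple edge-based blur indicator computed from the gradient ratio between the infocused and defocused images, as sketched below; this indicator and its normalization are assumptions for illustration and do not reproduce Subbarao's DFD or the paper's edge blur estimator.

```python
import numpy as np

def edge_blur_indicator(infocused, defocused, eps=1e-6):
    """infocused, defocused: HxW float grayscale images of the same scene.

    At edges, the defocused image has weaker gradients than the infocused one,
    so the gradient ratio grows with the amount of defocus blur. The value is
    used here only as a monotone blur (and hence depth) indicator.
    """
    gy_i, gx_i = np.gradient(infocused)
    gy_d, gx_d = np.gradient(defocused)
    g_i = np.hypot(gx_i, gy_i)
    g_d = np.hypot(gx_d, gy_d)
    ratio = g_i / (g_d + eps)
    return np.clip(ratio - 1.0, 0.0, None)   # 0 where the two images are equally sharp

infocused = np.random.rand(100, 120)
defocused = np.random.rand(100, 120)
blur_map = edge_blur_indicator(infocused, defocused)
```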

Stereoscopic Effect of 3D images according to the Quality of the Depth Map and the Change in the Depth of a Subject (깊이맵의 상세도와 주피사체의 깊이 변화에 따른 3D 이미지의 입체효과)

  • Lee, Won-Jae;Choi, Yoo-Joo;Lee, Ju-Hwan
    • Science of Emotion and Sensibility / v.16 no.1 / pp.29-42 / 2013
  • In this paper, we analyze depth perception, volume perception, and visual discomfort according to changes in the quality of the depth map and the depth of the major object. For the analysis, a 2D image was converted into eighteen 3D images using depth maps generated from different depth positions of the major object and the background, each represented at three levels of detail. A subjective test was carried out with the eighteen 3D images to investigate the degrees of depth perception, volume perception, and visual discomfort reported by the subjects according to the depth position of the major object and the quality of the depth map. The absolute depth position of the major object and the relative depth difference between the background and the major object were each adjusted over three levels, and the detail of the depth map was also represented at three levels. Experimental results showed that the quality of the depth map affected depth perception, volume perception, and visual discomfort differently depending on the absolute and relative depth position of the major object. A cardboard-style depth map severely damaged volume perception regardless of the depth position of the major object. In particular, depth perception deteriorated more severely with the cardboard depth map when the major object was located inside (behind) the screen than in front of it. Furthermore, the subjects did not feel any difference in depth perception, volume perception, or visual comfort between the 3D images generated from the detailed depth map and those from the rough depth map. As a result, an excessively detailed depth map is not necessary for enhancing stereoscopic perception in 2D-to-3D conversion.


Real-time Stereo Video Generation using Graphics Processing Unit (GPU를 이용한 실시간 양안식 영상 생성 방법)

  • Shin, In-Yong;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.16 no.4 / pp.596-601 / 2011
  • In this paper, we propose a fast depth-image-based rendering method that generates virtual view images in real time using a graphics processing unit (GPU) for a 3D broadcasting system. Before transmission, the input 2D-plus-depth video is encoded with the H.264 coding standard. At the receiver, the received bitstream is decoded and a stereo video is generated on a GPU, which computes in parallel. We apply a simple and efficient hole filling method to reduce decoder complexity and hole filling errors. In addition, we design a vertical parallel structure for the forward mapping process to take advantage of the single-instruction multiple-thread structure of the GPU, and we utilize high-speed GPU memory to boost the computation speed. As a result, we can generate virtual view images 15 times faster than CPU-based processing.
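Below is a CPU-side sketch of the forward mapping plus simple horizontal hole filling described above, written so that each row is processed independently, which is what makes the step easy to map to one GPU thread per row. The depth-to-disparity scaling and the hole filling rule are hypothetical; the paper's CUDA kernels and GPU memory optimizations are omitted.

```python
import numpy as np

def forward_map_row(color_row, disp_row, width):
    """Warp one scanline; rows are independent, so each can be handled by one GPU thread."""
    out = np.zeros_like(color_row)
    filled = np.zeros(width, dtype=bool)
    for x in range(width):
        xt = x + disp_row[x]
        if 0 <= xt < width:
            out[xt] = color_row[x]
            filled[xt] = True
    # Simple hole filling: propagate the last valid pixel along the scanline.
    last = color_row[0]
    for x in range(width):
        if filled[x]:
            last = out[x]
        else:
            out[x] = last
    return out

def render_virtual_view(color, depth, scale=0.05):
    """color: HxWx3 uint8, depth: HxW uint8 where larger values are assumed nearer."""
    h, w, _ = color.shape
    disparity = np.rint(scale * depth.astype(float)).astype(int)   # hypothetical mapping
    out = np.empty_like(color)
    for y in range(h):                                             # independent rows: GPU-friendly
        out[y] = forward_map_row(color[y], disparity[y], w)
    return out

color = np.random.randint(0, 255, (90, 160, 3), dtype=np.uint8)
depth = np.random.randint(0, 255, (90, 160), dtype=np.uint8)
virtual = render_virtual_view(color, depth)
```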

Single-Image Depth Estimation Based on CNN Using Edge Map (에지 맵을 이용한 CNN 기반 단일 영상의 깊이 추정)

  • Ko, Yeong-Kwon;Moon, Hyeon-Cheol;Kim, Hyun-Ho;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.695-696 / 2020
  • Convolutional neural networks (CNNs) show excellent performance in many areas of computer vision and also outperform conventional techniques for depth estimation from a single image. However, because the information a network can obtain from a single image is limited, the achievable improvement is bounded compared with depth estimation from the left/right images of a stereo camera. In this paper, we therefore propose an improved CNN-based single-image depth estimation technique that uses an edge map. The proposed method first preprocesses the single image to generate an edge map and a bilateral-filtered image, which are fed to the CNN as inputs, and we confirm that this improves performance over an existing single-image depth estimation technique.
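The preprocessing step can be sketched as follows: an edge map and a bilateral-filtered image are generated from the single input image and stacked as extra channels of the CNN input. The OpenCV calls, the toy network, and the channel layout are assumptions for illustration, not the architecture evaluated in the paper.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def preprocess(image_bgr):
    """image_bgr: HxWx3 uint8. Returns a 1x5xHxW tensor: RGB + edge map + bilateral-filtered gray."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edge = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
    smooth = cv2.bilateralFilter(gray, 9, 75, 75).astype(np.float32) / 255.0
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    x = np.concatenate([rgb, edge[..., None], smooth[..., None]], axis=2)
    return torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)

class TinyDepthNet(nn.Module):
    """A toy stand-in for the depth network: 5 input channels, 1-channel depth output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

image = np.random.randint(0, 255, (128, 160, 3), dtype=np.uint8)
depth_pred = TinyDepthNet()(preprocess(image))   # shape: 1 x 1 x 128 x 160
```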
