• Title/Summary/Keyword: Depth resolution


Resolution-independent Up-sampling for Depth Map Using Fractal Transforms

  • Liu, Meiqin;Zhao, Yao;Lin, Chunyu;Bai, Huihui;Yao, Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2730-2747 / 2016
  • Due to limited bandwidth and the restricted capture resolution of depth cameras, low-resolution depth maps must be up-sampled to high resolution so that they correspond to their texture images. In this paper, a novel depth map up-sampling algorithm is proposed that exploits the fractal internal self-referential feature. Fractal parameters extracted from a depth map describe its internal self-referential structure; they introduce no inherent scale and retain only the relational information of the depth map. In other words, fractal transforms provide a resolution-independent description of depth maps and can up-sample them to an arbitrarily high resolution. An enhancement method is also proposed to further improve the quality of the up-sampled depth map. The experimental results demonstrate that better synthesized-view quality is achieved in both objective and subjective evaluations. Most importantly, depth maps of arbitrary resolution can be obtained with the aid of the proposed scheme.
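
As a rough illustration of how fractal parameters can give a resolution-independent description, the sketch below implements a simplified PIFS-style encoder/decoder in Python: range blocks are matched to averaged domain blocks with a contrast/brightness fit, and the stored parameters are replayed at an arbitrary integer scale. The block size, the brute-force search, and the clipping of the contrast term are illustrative assumptions, not the transform proposed in the paper.

```python
# Simplified PIFS-style fractal coding of a depth map (illustrative sketch only).
import numpy as np

R = 4  # range block size; domain blocks are 2R x 2R (assumption)

def downsample2(block):
    """Average 2x2 neighbourhoods, mapping a 2R x 2R block to R x R."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def encode(depth):
    """Store (domain position, contrast s, brightness o) for every range block."""
    depth = depth.astype(float)
    H, W = depth.shape
    params = {}
    for ry in range(0, H, R):
        for rx in range(0, W, R):
            r = depth[ry:ry + R, rx:rx + R]
            best = None
            for dy in range(0, H - 2 * R + 1, R):          # brute-force domain search
                for dx in range(0, W - 2 * R + 1, R):
                    d = downsample2(depth[dy:dy + 2 * R, dx:dx + 2 * R])
                    var = ((d - d.mean()) ** 2).sum()
                    s = 0.0 if var < 1e-9 else ((d - d.mean()) * (r - r.mean())).sum() / var
                    s = float(np.clip(s, -0.9, 0.9))        # keep the map contractive
                    o = r.mean() - s * d.mean()
                    err = ((s * d + o - r) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, dy, dx, s, o)
            params[(ry, rx)] = best[1:]
    return params, (H, W)

def decode(params, shape, scale=2, n_iter=8):
    """Iterate the stored maps on a grid `scale` times larger than the coded one."""
    img = np.zeros((shape[0] * scale, shape[1] * scale))
    Rs = R * scale
    for _ in range(n_iter):
        out = np.empty_like(img)
        for (ry, rx), (dy, dx, s, o) in params.items():
            d = downsample2(img[dy * scale:dy * scale + 2 * Rs,
                                dx * scale:dx * scale + 2 * Rs])
            out[ry * scale:ry * scale + Rs, rx * scale:rx * scale + Rs] = s * d + o
        img = out
    return img

depth = np.tile(np.linspace(0, 255, 32), (32, 1))   # toy 32 x 32 depth ramp
params, shape = encode(depth)
up4 = decode(params, shape, scale=4)                 # 128 x 128 from the same parameters
```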

Low-Resolution Depth Map Upsampling Method Using Depth-Discontinuity Information (깊이 불연속 정보를 이용한 저해상도 깊이 영상의 업샘플링 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.875-880 / 2013
  • When we generate 3D video that provides an immersive and realistic experience to users, depth information of the scene is essential. Since the resolution of the depth map captured by a depth sensor is lower than that of the color image, we need to upsample the low-resolution depth map for high-resolution 3D video generation. In this paper, we propose a depth upsampling method that uses depth-discontinuity information. Using the high-resolution color image and the low-resolution depth map, we detect depth-discontinuity regions. We then define an energy function for depth map upsampling and optimize it using the belief propagation method. Experimental results show that the proposed method outperforms other depth upsampling methods in terms of the bad pixel rate.
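
The abstract does not give the exact energy, but a typical form for guided depth upsampling with discontinuity-aware smoothing, of the kind loopy belief propagation can minimize, might look like the following; the quadratic data term, truncated-linear smoothness term, and color-based weight are assumptions, not the paper's actual formulation.

```latex
% A generic MRF energy for guided depth upsampling (an assumption; the paper's exact
% terms are not given in the abstract). The data term keeps the upsampled depth D close
% to the low-resolution observation \tilde{D}, and the smoothness weight w_{pq} is
% reduced across detected depth discontinuities so edges are not blurred.
\begin{equation}
E(D) \;=\; \sum_{p} \bigl(D_p - \tilde{D}_p\bigr)^2
      \;+\; \lambda \sum_{(p,q)\in\mathcal{N}} w_{pq}\,\min\!\bigl(|D_p - D_q|,\,\tau\bigr),
\qquad
w_{pq} \;=\; \exp\!\Bigl(-\tfrac{\|I_p - I_q\|^2}{2\sigma^2}\Bigr)
\end{equation}
```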

Improving the Nitrogen Depth Resolution in Tunnel Oxide Using a SIMS Glancing Angle (SIMS glancing angle을 적용한 tunnel oxide 내 Nitrogen 깊이 분해능 향상 연구)

  • Lee, Jong-Pil;Choe, Geun-Yeong;Kim, Gyeong-Won;Kim, Ho-Jeong;Han, O-Seok
    • Proceedings of the Korean Vacuum Society Conference / 2011.02a / pp.41-41 / 2011
  • In flash memory, the tunnel oxide film serves as the path that transfers charge to the gate through electron tunnelling. In particular, charge trapping and impurities inside the tunnel oxide film directly affect device characteristics, so nitrogen is incorporated at the SiO2/Si interface during a subsequent N2O/NO anneal to improve the properties of the tunnel oxide film. Accurate evaluation of the N concentration and distribution in the tunnel oxide film is therefore essential for optimizing the N2O/NO annealing process [1]. In this work, low-energy magnetic-sector SIMS was used to evaluate the N concentration in an N2O-annealed tunnel oxide film more accurately. The samples were prepared by oxidizing a Si substrate and then annealing it in N2O to form the tunnel oxide. An impact energy of 250 eV was used to minimize surface effects and obtain the best depth resolution, and the CsN signal was detected in MCs+ cluster mode [2] to avoid matrix effects and mass interference. The experiments showed that at certain primary-beam incidence angles the nitrogen depth resolution degraded and the SIMS crater surface became very rough. To remedy this degradation, the secondary extraction voltage was varied under extreme glancing-incidence conditions, and the optimal impact energy and primary-beam incidence angle for improved depth resolution were identified. As a result, a nitrogen depth resolution of 1.6 nm was achieved, enabling a more accurate evaluation of the N concentration and distribution.
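
For context, depth resolution figures such as the 1.6 nm quoted above are commonly derived from the broadening of an ideally abrupt interface; one widely used convention (an assumption here, since the abstract does not state which definition was applied) is:

```latex
% A common convention for quoting SIMS depth resolution (an assumption; the abstract does
% not state which definition was used): for an ideally abrupt interface broadened into an
% error-function-shaped profile, the resolution is the depth interval over which the
% normalized signal falls from 84% to 16% of its plateau value.
\begin{equation}
\Delta z \;=\; z_{16\%} - z_{84\%} \;\approx\; 2\sigma
\end{equation}
```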


Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is suggested due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye are available. This may appear to be a weakness of our system; however, we exploit this gap with a shift of perspective. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. The CNN-upsampled depth data and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for the embedded system.
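
A minimal PyTorch sketch of this kind of supervised depth-upsampling CNN is shown below; the layer widths, the 4x scale factor, and the bicubic pre-upsampling with residual refinement are illustrative assumptions rather than the network used in the paper.

```python
# Sketch: CNN that upsamples a low-resolution ToF depth map, supervised by a
# higher-resolution depth sensor used as label data (illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthUpsampleCNN(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, lr_depth):
        # bicubic pre-upsampling followed by a learned residual refinement
        up = F.interpolate(lr_depth, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        return up + self.net(up)

# toy training step: 64 x 32 SPAD depth upsampled x4, supervised by a higher-res sensor
model = DepthUpsampleCNN(scale=4)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
lr_depth = torch.rand(1, 1, 32, 64)      # low-resolution ToF depth (batch, ch, H, W)
hr_label = torch.rand(1, 1, 128, 256)    # label from a higher-resolution depth sensor
optim.zero_grad()
loss = F.l1_loss(model(lr_depth), hr_label)
loss.backward()
optim.step()
```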

Depth Resolution Analysis of Axially Distributed Stereo Camera Systems under Fixed Constrained Resources

  • Cho, Myungjin;Shin, Donghak
    • Journal of the Optical Society of Korea / v.17 no.6 / pp.500-505 / 2013
  • In this paper, we propose a novel framework to evaluate the depth resolution of axially distributed stereo sensing (ADSS) under fixed resource constraints. The proposed framework can evaluate the performance of ADSS systems based on various sensing parameters such as the number of cameras, the total number of pixels, the pixel size, and so on. Monte Carlo simulations of the proposed framework are performed and the evaluation results are presented.
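
The sketch below illustrates the general idea of Monte Carlo evaluation of depth resolution under pixel quantization, using a simple two-camera lateral stereo model; it does not reproduce the axially distributed (ADSS) geometry or the fixed-resource constraints analyzed in the paper, and all parameter values are assumptions.

```python
# Monte Carlo estimate of depth error caused by integer-pixel disparity quantization
# for a basic two-camera stereo model (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
f = 0.05          # focal length [m]
B = 0.10          # baseline [m]
pixel = 5e-6      # pixel pitch [m]
n_trials = 100_000

z_true = rng.uniform(1.0, 5.0, n_trials)          # true depths [m]
disparity = f * B / z_true                        # image-plane disparity [m]
disparity_px = np.round(disparity / pixel)        # quantized to whole pixels
disparity_px = np.maximum(disparity_px, 1.0)      # avoid division by zero
z_est = f * B / (disparity_px * pixel)            # depth recovered from quantized disparity

err = np.abs(z_est - z_true)
print(f"mean |error| = {err.mean()*100:.2f} cm, "
      f"95th percentile = {np.percentile(err, 95)*100:.2f} cm")
```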

Generation of ROI Enhanced High-resolution Depth Maps in Hybrid Camera System (복합형 카메라 시스템에서 관심영역이 향상된 고해상도 깊이맵 생성 방법)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.13 no.5 / pp.596-601 / 2008
  • In this paper, we propose a new scheme to generate region-of-interest (ROI) enhanced depth maps in a hybrid camera system composed of a low-resolution depth camera and a high-resolution stereoscopic camera. The proposed method creates an ROI depth map for the left image by applying a three-dimensional (3-D) warping operation to the depth information obtained from the depth camera. Then, we generate a background depth map for the left image by applying a stereo matching algorithm to the left and right images captured by the stereoscopic camera. Finally, we merge the ROI depth map with the background depth map to create the final depth map. The proposed method provides higher-quality depth information in the ROI than previous methods.
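
The final merging step can be pictured with the small sketch below, where the 3-D-warped ROI depth overrides the stereo-matched background depth wherever it has a valid sample; the validity test (non-zero depth), the hole value, and the array shapes are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of merging a warped depth-camera (ROI) map with a stereo-matched background map.
import numpy as np

def merge_depth(roi_depth, background_depth, hole_value=0):
    """ROI depth (3-D-warped from the depth camera) wins wherever it has a valid sample."""
    roi_valid = roi_depth != hole_value
    return np.where(roi_valid, roi_depth, background_depth)

# toy example: a 4 x 4 background depth with an ROI patch covering the centre
background = np.full((4, 4), 80, dtype=np.uint8)   # far background from stereo matching
roi = np.zeros((4, 4), dtype=np.uint8)             # holes where the depth camera saw nothing
roi[1:3, 1:3] = 200                                # near object captured by the depth camera
print(merge_depth(roi, background))
```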

RECONSTRUCTING A SUPER-RESOLUTION IMAGE FOR DEPTH-VARYING SCENES

  • Yokoyama, Ami;Kubota, Akira;Hatori, Yoshinori
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.446-449 / 2009
  • In this paper, we present a novel method for reconstructing a super-resolution image from multi-view low-resolution images captured of a depth-varying scene, without requiring complex analysis such as depth estimation or feature matching. The proposed method is based on the iterative back-projection technique, extended to a 3D volume domain (i.e., space + depth), unlike conventional super-resolution methods that handle only 2D translations among the captured images.
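
For reference, the conventional 2-D iterative back-projection baseline that the paper extends to a space+depth volume can be sketched as follows; the box-average observation model, nearest-neighbour back-projection, and step size are assumptions chosen for brevity.

```python
# Conventional 2-D iterative back-projection (IBP) sketch: refine a high-resolution
# estimate so its simulated low-resolution observations match the captured images.
import numpy as np

def downsample(img, s):
    """Box-average by a factor of s (simple observation model)."""
    H, W = img.shape
    return img[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def upsample(img, s):
    """Nearest-neighbour back-projection of a low-resolution residual."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def ibp(lr_images, s, n_iter=20, step=1.0):
    hr = upsample(lr_images[0], s).astype(float)            # initial estimate
    for _ in range(n_iter):
        for lr in lr_images:
            residual = lr.astype(float) - downsample(hr, s)  # observation error
            hr += step * upsample(residual, s) / len(lr_images)
    return hr

lr = [np.random.rand(16, 16) for _ in range(4)]              # toy low-resolution observations
print(ibp(lr, s=2).shape)                                    # (32, 32)
```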


A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication / v.7 no.1 / pp.15-19 / 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, depth keying, etc. There are various ways to acquire depth information, such as using a stereo camera, a time-of-flight (TOF) depth camera, 3D modeling software, a 3D scanner, or a structured-light device such as Microsoft's Kinect. In particular, a TOF depth camera measures distance using infrared light, and the TOF sensor depends on the light sensitivity of the image sensor (CCD/CMOS). Existing image sensors therefore have to form the infrared image by bundling several pixels together, which reduces the resolution of the image. This paper proposes a method to acquire a high-resolution image by gradually moving the bundling area while acquiring low-resolution images through the pixel bundling method. In this way, image information with improved light sensitivity (lux) and resolution can be obtained without increasing the performance of the image sensor, whereas conventional pixel bundling resolves low illumination (lux) at the cost of image resolution.
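
The sketch below illustrates the underlying idea: each low-resolution frame bins pixel bundles for sensitivity, but the binning window is gradually shifted so that combining the shifted frames recovers finer detail. The 2x2 bundle size, the shift pattern, and the shift-and-add combination are assumptions for illustration, not the sensor's actual readout scheme.

```python
# Gradual pixel bundling sketch: shifted 2x2 binning passes are accumulated back onto
# the full-resolution grid (illustrative assumptions only).
import numpy as np

def bundle(frame, dy, dx, k=2):
    """Bin k x k pixel bundles starting at offset (dy, dx)."""
    sub = frame[dy:, dx:]
    H, W = sub.shape
    H, W = H - H % k, W - W % k
    return sub[:H, :W].reshape(H // k, k, W // k, k).mean(axis=(1, 3))

def gradual_bundling(frame, k=2):
    """Accumulate k*k shifted bundled frames back onto the full-resolution grid."""
    acc = np.zeros_like(frame, dtype=float)
    cnt = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low = bundle(frame, dy, dx, k)
            up = np.repeat(np.repeat(low, k, axis=0), k, axis=1)  # place bundles back
            acc[dy:dy + up.shape[0], dx:dx + up.shape[1]] += up
            cnt[dy:dy + up.shape[0], dx:dx + up.shape[1]] += 1
    return acc / np.maximum(cnt, 1)

scene = np.random.rand(64, 64)             # stand-in for the infrared intensity on the sensor
print(gradual_bundling(scene).shape)        # (64, 64): bundled sensitivity, recovered grid
```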

Iterative Deep Convolutional Grid Warping Network for Joint Depth Upsampling (반복적인 격자 워핑 기법을 이용한 깊이 영상 초해상도 기술)

  • Yang, Yoonmo;Kim, Dongsin;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.205-207 / 2020
  • This paper proposes a novel deep learning-based method to upsample a depth map. Most conventional methods estimate the high-resolution depth map by modifying the pixel values of the given depth map using a high-resolution color image and the low-resolution depth map. However, these methods suffer from under- or over-shooting problems that limit further performance improvement. To overcome these problems, the proposed method iteratively performs a grid-warping scheme that shifts pixel values to restore the blurred image when estimating the high-resolution depth map. Experimental results show that the proposed method improves both quantitative and visual quality compared to existing methods.
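
As a rough illustration of the grid-warping idea, the sketch below performs a single shock-filter-style warping pass in which sampling coordinates are displaced along the depth gradient, away from the edge centre, so that blurred depth transitions are steepened rather than blended. The displacement rule, its strength, and the use of SciPy interpolation are assumptions; they are not the learned iterative warping network proposed in the paper.

```python
# One shock-filter-style grid-warping pass on a depth map (illustrative sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, map_coordinates

def grid_warp_step(depth, strength=1.0, sigma=1.0):
    """Resample so each pixel pulls its value from further away from the edge centre,
    which steepens blurred depth transitions."""
    smooth = gaussian_filter(depth, sigma)
    gy, gx = np.gradient(smooth)                 # depth gradient (rows, cols)
    mag = np.hypot(gy, gx) + 1e-8
    side = np.sign(laplace(smooth))              # +1 on the low side of an edge, -1 on the high side
    yy, xx = np.mgrid[0:depth.shape[0], 0:depth.shape[1]].astype(float)
    coords = np.stack([yy - strength * side * gy / mag,
                       xx - strength * side * gx / mag])
    return map_coordinates(depth, coords, order=1, mode="nearest")

# toy example: a blurred vertical step edge becomes steeper after a few warping passes
edge = gaussian_filter(np.repeat([[0.0] * 16 + [1.0] * 16], 32, axis=0), 3.0)
for _ in range(3):
    edge = grid_warp_step(edge)
```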


A Study on Super Resolution Image Reconstruction for Acquired Images from Naval Combat System using Generative Adversarial Networks (생성적 적대 신경망을 이용한 함정전투체계 획득 영상의 초고해상도 영상 복원 연구)

  • Kim, Dongyoung
    • Journal of Digital Contents Society / v.19 no.6 / pp.1197-1205 / 2018
  • In this paper, we perform single image super-resolution (SISR) on images acquired by the EOTS or IRST of a naval combat system. To conduct super-resolution, we use generative adversarial networks (GANs), which consist of a generative model that creates a super-resolution image from a given low-resolution image and a discriminative model that determines whether the generated super-resolution image qualifies as a high-resolution image, while various learning parameters are adjusted. The learning parameters are the crop size of the input image, the depth of the sub-pixel layer, and the type of training images. For evaluation, we apply not only general image quality metrics but also feature descriptor methods. As a result, a larger crop size, a deeper sub-pixel layer, and high-resolution training images yield good performance.
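
A minimal PyTorch sketch of an SRGAN-style generator tail is shown below, where the "depth of the sub-pixel layer" is interpreted as the number of stacked convolution + PixelShuffle stages, each doubling the resolution; the channel counts, kernel sizes, and omission of the discriminator are assumptions rather than the network trained in the paper.

```python
# Generator tail with a configurable sub-pixel (PixelShuffle) depth (illustrative sketch).
import torch
import torch.nn as nn

class SubPixelGenerator(nn.Module):
    def __init__(self, subpixel_depth=2, channels=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 9, padding=4), nn.PReLU())
        stages = []
        for _ in range(subpixel_depth):                     # each stage upsamples by 2x
            stages += [nn.Conv2d(channels, channels * 4, 3, padding=1),
                       nn.PixelShuffle(2), nn.PReLU()]
        self.upsample = nn.Sequential(*stages)
        self.tail = nn.Conv2d(channels, 3, 9, padding=4)

    def forward(self, x):
        return self.tail(self.upsample(self.head(x)))

lr_patch = torch.rand(1, 3, 96, 96)                 # a cropped low-resolution EO/IR patch
sr_patch = SubPixelGenerator(subpixel_depth=2)(lr_patch)
print(sr_patch.shape)                                # torch.Size([1, 3, 384, 384]) for depth 2 (4x)
```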