• Title/Summary/Keyword: 3D depth


Analysis of Depth Map Resolution for Coding Performance in 3D Video System (깊이영상 해상도 조절에 따른 3 차원 비디오 부호화 성능 분석)

  • Lee, Do Hoon; Yang, Yun mo; Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2015.07a, pp.452-454, 2015
  • This paper compares the coding performance of different depth map resolutions in a 3D video system. In a multiview-plus-depth system, the depth map is used to render synthesized views and therefore affects their quality. We report experimental results for varying depth map resolutions and show how coding performance changes when a dilation filter is applied.
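
As a rough illustration of the kind of preprocessing the abstract studies, the sketch below downsamples a depth map and applies a morphological dilation before it would be handed to an encoder; the scale factor, kernel size, and use of OpenCV are assumptions rather than the paper's actual configuration.

```python
import cv2
import numpy as np

def prepare_depth_for_coding(depth, scale=0.5, dilate_ksize=3):
    """Downsample a depth map and apply a dilation filter (illustrative only)."""
    h, w = depth.shape
    # Nearest-neighbour resampling avoids inventing intermediate depth values.
    reduced = cv2.resize(depth, (int(w * scale), int(h * scale)),
                         interpolation=cv2.INTER_NEAREST)
    # Dilation expands near (foreground) regions, which can help protect object
    # boundaries in the synthesized views after lossy coding.
    kernel = np.ones((dilate_ksize, dilate_ksize), np.uint8)
    return cv2.dilate(reduced, kernel)

# Example with a synthetic 8-bit depth map containing one near object.
depth = np.full((240, 320), 64, np.uint8)
depth[80:160, 120:200] = 200
coded_input = prepare_depth_for_coding(depth)
print(coded_input.shape)  # (120, 160)
```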


View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung; Hur, Nam-Ho; Seo, Yong-Duek
    • Journal of Broadcast Engineering, v.11 no.4 s.33, pp.507-520, 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering is meant to generate synthetic images by processing the camera view with a graphics engine, little has been known about how to feed the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space from camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views and their depth images in real time.
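
The paper performs the synthesis with OpenGL; the NumPy sketch below only illustrates the underlying geometry such a renderer relies on: back-projecting each pixel into 3D with the camera intrinsics and its depth, then projecting the points into a virtual camera. The pinhole model, identity rotation, and baseline value are assumptions made for the illustration.

```python
import numpy as np

def synthesize_view(depth, K, R, t):
    """Back-project pixels using their depth, then project the resulting 3D
    points into a virtual camera with rotation R and translation t."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    # Rays through each pixel scaled by depth give 3D points in the reference camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Rigid transform into the virtual camera, then perspective projection.
    pts_v = R @ pts + t.reshape(3, 1)
    proj = K @ pts_v
    return proj[:2] / proj[2]  # 2 x N warped pixel coordinates

# Toy example: constant-depth 4x4 map, simple intrinsics, 5 cm horizontal baseline.
K = np.array([[500.0, 0.0, 2.0], [0.0, 500.0, 2.0], [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 2.0)
coords = synthesize_view(depth, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
print(coords.shape)  # (2, 16)
```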

Influence of Depth Differences by Setting 3D Stereoscopic Convergence Point on Presence, Display Perception, and Negative Experiences (스테레오 영상의 깊이감에 따른 프레즌스, 지각된 특성, 부정적 경험의 차이)

  • Lee, SangWook; Chung, Donghun
    • Journal of Broadcast Engineering, v.19 no.1, pp.44-55, 2014
  • The goal of 3D stereoscopy is not only to maximize positive experiences (such as the sense of realism) by adding depth information to 2D video but also to minimize negative experiences (such as fatigue). This study examines the impact of different depth levels, induced by adjusting 3D camera convergence, on positive and negative experiences, and seeks an optimal parameter for viewers. The results show significant differences among depth levels in spatial involvement, realistic immersion, presence, depth perception, screen transmission, materiality, shape perception, spatial extension, and display perception, as well as significant differences in fatigue and unnaturalness. The study suggests that reducing the camera convergence angle so that the convergence point falls 0.17° behind the object is the optimal parameter in a 3D stereoscopic setting.

A Fast 3D Depth Estimation Algorithm Using On-line Stereo Matching of Intensity Functionals (영상휘도 함수의 온 라인 스테레오 매칭을 이용한 고속 3차원 거리 추정 알고리즘)

  • Kwon, H.Y.; Bien, Z.; Suh, I.H.
    • Proceedings of the KIEE Conference, 1989.11a, pp.484-487, 1989
  • A fast stereo algorithm for 3D depth estimation is presented. We propose an on-line stereo matching technique in which the flows of the stereo image signals are dynamically controlled to satisfy the matching condition of the intensity functionals. The disparity is rapidly estimated from the control of the signal flows, and the 3D depth is determined from the disparity.
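
The abstract recovers depth from the estimated disparity; for rectified stereo this is the standard triangulation relation Z = f·B/d, sketched below with placeholder camera parameters.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Placeholder camera: 700 px focal length, 12 cm baseline, 35 px disparity.
print(depth_from_disparity(35, 700, 0.12))  # 2.4 metres
```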


Underwater Localization using EM Wave Attenuation with Depth Information (전자기파의 감쇠패턴 및 깊이 정보 취득을 이용한 수중 위치추정 기법)

  • Kwak, Kyungmin; Park, Daegil; Chung, Wan Kyun; Kim, Jinhyun
    • The Journal of Korea Robotics Society, v.11 no.3, pp.156-162, 2016
  • For underwater localization, acoustic sensor systems are widely used because acoustic signals penetrate well in underwater environments. On the other hand, this good penetration also causes multipath and interference effects in structured environments. To overcome this drawback, several studies have proposed localization based on the attenuation of electromagnetic (EM) waves, with remarkable results in distance estimation and 2D localization experiments. In 3D localization, however, estimation becomes more difficult because of the nonuniform (doughnut-like) radiation pattern of an omni-directional antenna along the depth direction. To address this problem, we added a depth sensor to improve underwater 3D localization with the EM wave method. A micro-scale pressure sensor is mounted on the mobile node antenna, and the depth data from the pressure sensor are calibrated with a curve-fitting algorithm. The depth (z) data are incorporated into the 3D EM wave pattern model to reduce localization error. Finally, experiments demonstrate 3D localization with fast computation and fewer errors.
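
The paper's calibrated EM attenuation model is not reproduced in this listing, so the sketch below uses a generic log-distance attenuation model for range estimation and shows how a known depth from a pressure sensor reduces 3D localization to a least-squares problem in the horizontal plane. All constants, anchor positions, and function names are placeholders.

```python
import numpy as np

def distance_from_attenuation(rssi_db, rssi_ref_db, path_loss_exp, ref_dist=1.0):
    """Generic log-distance attenuation model (not the paper's calibrated model)."""
    return ref_dist * 10 ** ((rssi_ref_db - rssi_db) / (10 * path_loss_exp))

def localize_xy(anchors, dists, z):
    """Least-squares position in the horizontal plane, given the depth z
    from a pressure sensor; anchors are (x, y, z) of fixed nodes."""
    # Reduce each 3D range to its horizontal component using the known depth.
    horiz = np.sqrt(np.maximum(dists**2 - (anchors[:, 2] - z) ** 2, 0.0))
    # Linearize by subtracting the first anchor's circle equation.
    x0, y0 = anchors[0, :2]
    A = 2 * (anchors[1:, :2] - anchors[0, :2])
    b = (horiz[0] ** 2 - horiz[1:] ** 2
         + np.sum(anchors[1:, :2] ** 2, axis=1) - (x0**2 + y0**2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

print(distance_from_attenuation(-60, -40, 2.0))  # ~10 m for a 20 dB drop

anchors = np.array([[0.0, 0, 0], [4, 0, 0], [0, 4, 0], [4, 4, 0]])
true_pos = np.array([1.5, 2.0, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(localize_xy(anchors, dists, z=1.0))  # ~[1.5, 2.0]
```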

3D Fingertip Estimation based on the TOF Camera for Virtual Touch Screen System (가상 터치스크린 시스템을 위한 TOF 카메라 기반 3차원 손 끝 추정)

  • Kim, Min-Wook; Ahn, Yang-Keun; Jung, Kwang-Mo; Lee, Chil-Woo
    • The KIPS Transactions: Part B, v.17B no.4, pp.287-294, 2010
  • The TOF technique is one way to obtain an object's 3D depth information. However, the depth image has low resolution and the fingertip occupies only a very small region, so it is difficult to find the fingertip's precise 3D position using the depth image from the TOF camera alone. In this paper, we estimate the fingertip's 3D location using an arm model together with a reliable 3D hand location refined with a hexahedral hand model. With the proposed method we obtain more precise fingertip 3D information than when using the depth image only.

Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo; Seo, Kiyoung; Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.7, pp.2434-2448, 2014
  • This research proposed domain-division depth map quantization for multiview intermediate image generation using Depth Image-Based Rendering (DIBR). The technique applies per-pixel depth quantization according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment investigated the potential benefits of the proposed method against linear depth quantization for DIBR multiview intermediate image generation. The experiment evaluated three quantization methods on computer-generated 3D scenes of varying complexity and background, while varying the depth resolution. The results showed that the proposed domain-division depth quantization outperformed the linear method on 7-bit or lower depth maps, especially in scenes with large objects.
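
The abstract's core idea is to give different portions of the depth range different shares of the quantization levels instead of quantizing uniformly. The sketch below is a minimal illustration of that contrast; the domain boundaries (the near 30% of the range) and the 60/40 level split are assumptions, not the paper's actual parameters.

```python
import numpy as np

def linear_quantize(depth, bits):
    """Uniform quantization of a [0, 1) normalized depth map."""
    levels = 2 ** bits
    return np.floor(depth * levels) / levels

def domain_quantize(depth, bits, domains=((0.0, 0.3, 0.6), (0.3, 1.0, 0.4))):
    """Give each depth domain a share of the levels (illustrative split:
    60% of the levels to the near 30% of the range, 40% to the rest)."""
    total = 2 ** bits
    out = np.empty_like(depth)
    for lo, hi, share in domains:
        levels = max(int(total * share), 1)
        mask = (depth >= lo) & (depth < hi)
        # Quantize within the domain, then map back to absolute depth.
        out[mask] = lo + (np.floor((depth[mask] - lo) / (hi - lo) * levels)
                          / levels) * (hi - lo)
    return out

# Compare mean quantization error of the two schemes on a random depth map.
depth = np.random.default_rng(0).random((4, 4))
print(np.abs(domain_quantize(depth, 7) - depth).mean(),
      np.abs(linear_quantize(depth, 7) - depth).mean())
```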

Comparison of Objective Metrics and 3D Evaluation Using Upsampled Depth Map (깊이맵 업샘플링을 이용한 객관적 메트릭과 3D 평가의 비교)

  • Mahmoudpour, Saeed; Choi, Changyeol; Kim, Manbae
    • Journal of Broadcast Engineering, v.20 no.2, pp.204-214, 2015
  • Depth map upsampling is an approach to increasing the spatial resolution of depth maps obtained from a depth camera. Depth map quality is closely related to the 3D perception of stereoscopic images, multi-view images, and holography. In general, the performance of an upsampled depth map is evaluated by PSNR (Peak Signal-to-Noise Ratio). On the other hand, time-consuming 3D subjective tests requiring human subjects are carried out to examine 3D perception as well as the visual fatigue caused by 3D content. Therefore, if an objective metric correlates closely with the subjective test, the latter can be replaced by the former. To this end, this paper seeks the best metric by investigating the relationship between diverse objective metrics and 3D subjective tests. Diverse reference and no-reference metrics are adopted to evaluate the performance of upsampled depth maps, and the subjective test is performed using the DSCQS method. From the analysis of three kinds of correlations, we validate that SSIM and Edge-PSNR can replace the subjective test.
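
A minimal sketch of how the objective side of such a study can be computed: full-reference metrics on upsampled depth maps, then correlation against subjective scores. The scikit-image and SciPy calls are standard, but the synthetic depth maps and the placeholder DSCQS-style scores are assumptions, and Edge-PSNR is omitted because its exact definition is not reproduced in this listing.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(1)
reference = rng.random((64, 64))                     # stand-in ground-truth depth
upsampled = [np.clip(reference + rng.normal(0, s, reference.shape), 0, 1)
             for s in (0.02, 0.05, 0.10, 0.20)]      # increasingly degraded maps

psnr = [peak_signal_noise_ratio(reference, d, data_range=1.0) for d in upsampled]
ssim = [structural_similarity(d, reference, data_range=1.0) for d in upsampled]

# Placeholder subjective scores (e.g. DSCQS mean opinion scores) for the same maps.
mos = [4.6, 4.1, 3.2, 2.1]
for name, scores in (("PSNR", psnr), ("SSIM", ssim)):
    print(name, pearsonr(scores, mos)[0], spearmanr(scores, mos)[0],
          kendalltau(scores, mos)[0])
```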

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng; Jiang, Yifeng; Huang, Zhuandi; Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.10, pp.4968-4986, 2017
  • In this paper, we primarily address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order, which is both quantitatively accurate and visually pleasing. Our technique, which is fundamentally based on the existing DepthTransfer algorithm, transfers depth information at the level of superpixels within a framework that replaces the pixel basis with instance-based learning. A key superpixel feature that enhances matching precision is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified cross bilateral filter is leveraged to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can be used to automatically convert 2D images into stereo for 3D visualization, producing anaglyph images that are more realistic and more immersive.

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi; Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.8, pp.3136-3150, 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in human-computer interaction and in understanding the control of a robot hand. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since the 3D hand pose has 23 degrees of freedom, hand articulation tracking incurs an excessive computational burden when minimizing the 3D shape discrepancy between an observed hand and a 3D hand model. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Experiments show that the proposed method improves hand part recognition and runs at 20-30 fps, confirming its practical use in classifying the hand area and in tracking and recovering the 3D hand pose in real time.
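
A loosely analogous pipeline, not the paper's implementation: rectangle-difference (Haar-like) responses computed on depth patches feed a scikit-learn Random Forest that labels hand parts. The patch size, feature definitions, and random placeholder data (standing in for depth images rendered from the 17-part hand model) are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PATCH = 16
rng = np.random.default_rng(0)

def make_haar_defs(n_feats=32):
    """Fixed random rectangle pairs; a feature is the difference of mean depth
    inside the two rectangles (a crude Haar-like response)."""
    defs = []
    for _ in range(n_feats):
        boxes = []
        for _ in range(2):
            y1, y2 = np.sort(rng.choice(PATCH, 2, replace=False))
            x1, x2 = np.sort(rng.choice(PATCH, 2, replace=False))
            boxes.append((y1, y2 + 1, x1, x2 + 1))
        defs.append(boxes)
    return defs

def features(patch, defs):
    return [patch[a:b, c:d].mean() - patch[e:f, g:h].mean()
            for (a, b, c, d), (e, f, g, h) in defs]

# Placeholder training data: random depth patches with random part labels
# standing in for patches rendered from the articulated 3D hand model.
defs = make_haar_defs()
patches = rng.random((200, PATCH, PATCH))
labels = rng.integers(0, 17, 200)          # 17 hand parts, as in the paper
X = np.array([features(p, defs) for p in patches])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```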