• Title/Summary/Keyword: depth conversion

2D/3D conversion method using depth map based on haze and relative height cue (실안개와 상대적 높이 단서 기반의 깊이 지도를 이용한 2D/3D 변환 기법)

  • Han, Sung-Ho;Kim, Yo-Sup;Lee, Jong-Yong;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.10 no.9
    • /
    • pp.351-356
    • /
    • 2012
  • This paper presents a 2D/3D conversion technique that uses a depth map generated from haze and relative height cues. When only conventional haze information is used, errors can arise in images that contain no haze. To reduce such errors, a new approach is proposed that combines the haze information with a depth map constructed from the relative height cue. In addition, the gray-scale image obtained by Mean Shift Segmentation is combined with the haze-based depth map to sharpen object contours and improve the quality of the 3D image. Left and right view images are generated by DIBR (Depth Image Based Rendering) from the input image and the final depth map, and are then combined into a red-cyan 3D image; the result is verified by measuring the PSNR between the depth maps.
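
A minimal sketch of the two rendering steps named in the abstract, DIBR view synthesis followed by red-cyan anaglyph composition, assuming an 8-bit RGB image `img` and a per-pixel depth map `depth` scaled to [0, 1]. The depth-estimation stage (haze and relative height cues, Mean Shift Segmentation) is not reproduced; the forward-warping scheme and the `max_disp` parameter are illustrative assumptions, not the authors' exact renderer.

```python
import numpy as np

def dibr_views(img, depth, max_disp=16):
    """Shift pixels horizontally by a disparity proportional to depth to
    synthesize left/right views (simple forward warping, no hole filling)."""
    h, w, _ = img.shape
    disp = (depth * max_disp).astype(np.int32)       # per-pixel disparity
    left = np.zeros_like(img)
    right = np.zeros_like(img)
    cols = np.arange(w)
    for y in range(h):
        xl = np.clip(cols + disp[y] // 2, 0, w - 1)  # left view shifts right
        xr = np.clip(cols - disp[y] // 2, 0, w - 1)  # right view shifts left
        left[y, xl] = img[y]
        right[y, xr] = img[y]
    return left, right

def red_cyan_anaglyph(left, right):
    """Red channel from the left view, green/blue channels from the right view."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out
```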

Motion Depth Generation Using MHI for 3D Video Conversion (3D 동영상 변환을 위한 MHI 기반 모션 깊이맵 생성)

  • Kim, Won Hoi;Gil, Jong In;Choi, Changyeol;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.22 no.4
    • /
    • pp.429-437
    • /
    • 2017
  • 2D-to-3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. Motion is also an important cue for depth estimation and is typically estimated by block-based motion estimation, optical flow, and so forth. This paper proposes a new method for motion depth generation using the Motion History Image (MHI) and evaluates the feasibility of using the MHI. In the experiments, the proposed method was applied to eight video clips covering a variety of motion classes. A qualitative test on the motion depth maps, together with a comparison of processing times, validated the feasibility of the proposed method.
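
A minimal sketch of the classic Motion History Image update and a motion depth map derived from it, assuming grayscale frames as NumPy arrays. The threshold, decay constant `tau`, and the direct mapping from MHI value to depth are illustrative assumptions and not the paper's tuned pipeline.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=15, motion_thresh=20):
    """Classic MHI update: moving pixels are set to tau, all others decay by 1."""
    moving = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > motion_thresh
    return np.where(moving, tau, np.maximum(mhi - 1, 0))

def motion_depth_from_mhi(mhi, tau=15):
    """Map recent motion to near depth (255) and old/no motion to far depth (0)."""
    return (mhi.astype(np.float32) / tau * 255).astype(np.uint8)

# usage on a hypothetical sequence of grayscale frames:
# mhi = np.zeros(frames[0].shape, dtype=np.int32)
# for prev, cur in zip(frames[:-1], frames[1:]):
#     mhi = update_mhi(mhi, prev, cur)
#     depth = motion_depth_from_mhi(mhi)
```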

Foreground Extraction and Depth Map Creation Method based on Analyzing Focus/Defocus for 2D/3D Video Conversion (2D/3D 동영상 변환을 위한 초점/비초점 분석 기반의 전경 영역 추출과 깊이 정보 생성 기법)

  • Han, Hyun-Ho;Chung, Gye-Dong;Park, Young-Soo;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.11 no.1
    • /
    • pp.243-248
    • /
    • 2013
  • In this paper, a method is proposed for 2D/3D video conversion that estimates foreground depth from focus and motion information, with the foreground analyzed by focus measurement and color-based grouping. A candidate foreground image is generated from the estimated motion of the image's focus information in order to extract the foreground from the 2D video. The foreground region is then completed by a color-analysis filling process applied to hole areas inside objects of the candidate foreground image. Depth information is generated by analyzing the focus values in the current frame to assign depth to the extracted foreground region, and the assigned depth is weighted by the motion information. The quality of the generated depth information is evaluated by comparing the proposed method with a previously proposed algorithm.
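
A minimal sketch of a focus/defocus measure that separates an in-focus foreground from a defocused background, assuming a grayscale float image. The Laplacian-energy focus measure, window size, and quantile threshold are generic illustrative choices; the paper's color-analysis hole filling and motion weighting are not shown.

```python
import numpy as np
from scipy import ndimage

def focus_map(gray, window=15):
    """Local energy of the Laplacian: high where the image is sharply focused."""
    lap = ndimage.laplace(gray.astype(np.float64))
    return ndimage.uniform_filter(lap ** 2, size=window)

def foreground_mask(gray, window=15, quantile=0.7):
    """Threshold the focus map to obtain a candidate (in-focus) foreground region."""
    fm = focus_map(gray, window)
    return fm > np.quantile(fm, quantile)
```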

A Study on 2D/3D image Conversion Method using Create Depth Map (2D/3D 변환을 위한 깊이정보 생성기법에 관한 연구)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.4
    • /
    • pp.1897-1903
    • /
    • 2011
  • This paper discusses 2D/3D image conversion using techniques such as object extraction and depth-map creation. The general procedure for converting 2D images into a 3D image is to extract objects from the 2D image, estimate the distance of each point, generate the 3D image, and correct it to reduce noise. This paper proposes modified methods for creating a depth map from a 2D image and for estimating the distance of the objects in it. The depth map, which determines the distance of objects, is the key data for creating a 3D image from 2D images. To obtain more accurate depth-map data, noise filtering is applied to the optical flow. With the proposed method, better depth-map information is calculated and a better 3D image is constructed.
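
A minimal sketch of an optical-flow-based depth map with median filtering as the noise-reduction step, assuming two consecutive grayscale frames as uint8 arrays. The Farneback parameters and the median-filter kernel size are generic illustrative values, not the paper's exact filtering scheme.

```python
import cv2
import numpy as np
from scipy import ndimage

def flow_depth(prev_gray, cur_gray, ksize=5):
    """Estimate dense optical flow, filter its magnitude, and map it to depth."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)             # motion magnitude per pixel
    mag = ndimage.median_filter(mag, size=ksize)   # suppress flow noise
    # larger apparent motion -> treated as closer -> larger depth value
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```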

Computational integral imaging reconstruction of 3D object using a depth conversion technique

  • Tan, Chun-Wei;Shin, Dong-Hak;Lee, Byung-Gook;Kim, Eun-Soo
    • Korean Information Display Society: Conference Proceedings
    • /
    • 2008.10a
    • /
    • pp.730-733
    • /
    • 2008
  • In this paper, a novel computational integral imaging (CII) method using a depth conversion technique is proposed. The proposed method can move a distant 3D object near the lenslet array and reduce the computational cost dramatically. To show the usefulness of the proposed method, we carry out a preliminary experiment and present its results.
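
A minimal shift-and-sum sketch of computational integral imaging reconstruction at a single depth plane, assuming `elemental` is a 4-D array of shape (rows, cols, h, w) holding grayscale elemental images. This is a generic simplified back-projection; the paper's depth conversion step (re-mapping a distant object near the lenslet array) is not reproduced, and `shift_per_lens` is an assumed stand-in for the depth-dependent magnification.

```python
import numpy as np

def ciir_plane(elemental, shift_per_lens):
    """Overlap elemental images with a per-lenslet shift chosen by the depth
    plane and average the accumulated intensities."""
    rows, cols, h, w = elemental.shape
    out_h = h + rows * shift_per_lens
    out_w = w + cols * shift_per_lens
    acc = np.zeros((out_h, out_w))
    cnt = np.zeros((out_h, out_w))
    for i in range(rows):
        for j in range(cols):
            y, x = i * shift_per_lens, j * shift_per_lens
            acc[y:y + h, x:x + w] += elemental[i, j]
            cnt[y:y + h, x:x + w] += 1
    return acc / np.maximum(cnt, 1)
```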

Pipeline defect detection with depth identification using PZT array and time-reversal method

  • Yang Xu;Mingzhang Luo;Guofeng Du
    • Smart Structures and Systems
    • /
    • v.32 no.4
    • /
    • pp.253-266
    • /
    • 2023
  • The time-reversal method is employed to improve pipeline defect detection, and a new approach for identifying pipeline defect depth is proposed in this research. When the L(0,2)-mode ultrasonic guided wave excited through a lead zirconate titanate (PZT) transducer array propagates along a pipeline with a defect, it interacts with the defect and is partially converted into flexural F(n,m) modes and the longitudinal L(0,1) mode. Using a receiving PZT array attached axisymmetrically around the pipeline, the L(0,2) reflection signal and the mode conversion signals at the defect are obtained. An appropriate rectangular window is used to extract the L(0,2) reflection signal and the mode conversion signals from the direct detection signals. The extracted signals are time-reversed and re-excited in the pipeline, so that the guided-wave energy focuses on the pipeline defect and the L(0,2) reflection and L(0,1) mode conversion signals are enhanced to a higher level, especially for small defects in the early crack stage. Besides the L(0,2) reflection signal, the L(0,1) mode conversion signal also contains useful defect information, and the defect depth can be identified by monitoring the variation trends of the L(0,2) and L(0,1) reflection coefficients. Finite element method (FEM) simulations and experimental results are given in the paper; the enhancement of defect reflection signals by the time-reversal method is evident, and the proposed way of identifying pipeline defect depth is demonstrated to be effective.
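
A minimal sketch of the signal-processing steps named in the abstract: windowing a guided-wave reflection, time-reversing it for re-excitation, and forming a reflection coefficient as an amplitude ratio. The rectangular window bounds, signal names, and the peak-amplitude definition of the coefficient are illustrative assumptions rather than the paper's experimental settings.

```python
import numpy as np

def window_signal(signal, start, end):
    """Rectangular window that keeps only the reflection of interest."""
    out = np.zeros_like(signal)
    out[start:end] = signal[start:end]
    return out

def time_reverse(signal):
    """Time-reversed copy used as the re-excitation waveform."""
    return signal[::-1].copy()

def reflection_coefficient(reflected, incident):
    """Peak-amplitude ratio used to track defect depth trends."""
    return np.max(np.abs(reflected)) / np.max(np.abs(incident))
```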

Stress Conversion Factor on Penetration Depth of Knoop Indentation for Assessment of Nano Residual Stress (나노 잔류응력 측정을 위한 비등방 압입자의 깊이별 응력환산계수 분석)

  • Kim, Won Jun;Kim, Yeong Jin;Kim, Young-Cheon
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.26 no.4
    • /
    • pp.95-100
    • /
    • 2019
  • Nanoindentation has been widely used for evaluating the mechanical properties of nano-devices, from MEMS to packaging modules. Residual stress is also estimated from indentation tests, especially with the Knoop indenter, which is used to determine residual stress directionality. According to previous research, the ratio of the two stress conversion factors of Knoop indentation is a constant of approximately 0.34. However, this ratio is supported by insufficient quantitative analysis and only a few experimental results that vary the indentation depth, which creates a barrier to in-field application. In this research, the ratio of the two conversion factors is analyzed as a function of indentation depth using the finite element method. The magnitude of each conversion factor was computed in a uniaxial stress state from a modeled theoretical Knoop indenter and specimen. A model to estimate the two stress conversion factors of the long and short axes of the Knoop indenter at various indentation depths is proposed and analyzed.
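
An illustrative sketch only: it assumes a linear model in which the normalized indentation load difference is a weighted sum of the two in-plane residual stresses, with weights given by long- and short-axis conversion factors whose ratio is taken as the ~0.34 value quoted above. With two Knoop measurements in orthogonal orientations, the biaxial stresses follow from a 2x2 linear system. The model form, orientation convention, and all parameter names are assumptions for illustration, not the paper's calibrated relations.

```python
import numpy as np

def biaxial_stress(dL_x, dL_y, contact_area, alpha_long, ratio=0.34):
    """Solve for (sigma_x, sigma_y) from load differences measured with the
    Knoop long axis along x and along y (illustrative linear model only)."""
    alpha_short = alpha_long * ratio
    A = np.array([[alpha_long, alpha_short],   # long axis along x
                  [alpha_short, alpha_long]])  # long axis along y
    b = np.array([dL_x, dL_y]) / contact_area
    return np.linalg.solve(A, b)
```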

Stereoscopic Image Conversion Algorithm using Object Segmentation and Motion Parallax (객체 분할과 운동 시차를 이용한 입체 영상 변환 알고리즘)

  • Jung, Jae-Sung;Cho, Hwa-Hyun;Yoon, Jong-Ho;Choi, Myung-Ryul
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.1129-1132
    • /
    • 2005
  • In this paper, we propose a real-time stereoscopic image conversion algorithm using object segmentation and motion parallax. The proposed algorithm separates objects using the luminance of the image, extracts the moving objects among them using motion parallax, and generates a depth map; the parallax processing is then performed based on this depth map. The proposed method has been evaluated using a visual test and the APD (Absolute Parallax Difference) to compare its stereoscopic image with that of the MTD method. The proposed method offers a realistic stereoscopic conversion effect regardless of the direction and velocity of motion in the 2D image.
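
A minimal sketch of per-object depth assignment from motion parallax, assuming objects have already been separated into an integer label map (for example by luminance thresholding) with label 0 as background; faster-moving objects receive nearer (larger) depth values. The labeling step and the final parallax-based view synthesis are not reproduced here.

```python
import numpy as np

def depth_from_motion_parallax(labels, prev_gray, cur_gray):
    """Assign one depth value per labeled object from its mean frame difference."""
    diff = np.abs(cur_gray.astype(np.float32) - prev_gray.astype(np.float32))
    depth = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        if lab == 0:                      # assumed background label
            continue
        mask = labels == lab
        depth[mask] = diff[mask].mean()   # larger motion -> treated as nearer
    m = depth.max()
    return (depth / m * 255).astype(np.uint8) if m > 0 else depth.astype(np.uint8)
```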

Orthoscopic real image reconstruction in integral imaging by modifying coordinate of elemental image (집적영상에서 요소영상의 좌표변환을 이용한 정치실영상 구현)

  • Jang, Jae-young;Cho, Myungjin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.7
    • /
    • pp.1646-1652
    • /
    • 2015
  • In this paper, we propose a depth conversion method for orthoscopic real image reconstruction in integral imaging. The pseudoscopic image has been regarded as a problem in conventional integral imaging, and the depth of the reconstructed image depends on the coordinates of the elemental image. Conversion from pseudoscopic to orthoscopic becomes possible by analyzing the geometrical relation between the pickup and reconstruction systems of the elemental image. The feasibility of the proposed method has been confirmed through preliminary experiments as well as ray-optical analysis.
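
A minimal sketch of the classical elemental-image coordinate transform commonly used for pseudoscopic-to-orthoscopic conversion: each elemental image is rotated by 180 degrees in place. The paper's depth-conversion variant of the coordinate mapping is not reproduced; `elemental` is assumed to be a (rows, cols, h, w) array of elemental images.

```python
import numpy as np

def po_convert(elemental):
    """Rotate every elemental image by 180 degrees (flip both image axes)."""
    return elemental[:, :, ::-1, ::-1].copy()
```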

High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting for given objects is labor intensive, as each frame must be painted. A human is one of the most challenging objects for high-quality conversion, since the human body is an articulated figure with many degrees of freedom (DOF), and various styles of clothes, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human at every frame of a monocular video. First, a 3D template model is matched to a person in the monocular video using a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing the use of a fully skinned 3D model with a large number of joints and DOFs. The initial depth of the 2D object in the video is assigned from the matching results and then propagated toward areas where depth is missing to produce a complete depth map. For effective handling of complex silhouettes and appearances, we introduce a partial depth propagation method based on color segmentation to preserve the detail of the results. We compared our results with depth maps painted by experienced artists; the comparison shows that our method efficiently produces viable depth maps of humans in monocular videos.
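
A minimal sketch of depth propagation guided by color segmentation, assuming a precomputed segment label map and a sparse depth map in which 0 marks missing values; each segment is filled with the mean of its known depths. The template matching and pose-estimation stages of the paper, and its exact propagation rule, are not reproduced here.

```python
import numpy as np

def propagate_depth(labels, sparse_depth):
    """Fill missing depth within each color segment from its known samples."""
    dense = sparse_depth.astype(np.float32).copy()
    for lab in np.unique(labels):
        mask = labels == lab
        known = sparse_depth[mask]
        known = known[known > 0]
        if known.size:
            dense[mask & (sparse_depth == 0)] = known.mean()
    return dense
```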