• Title/Summary/Keyword: 3D depth map (3 차원 깊이 지도)


Curvature Estimation Based Depth Map Generation (곡률 계산에 기반한 깊이지도 생성 알고리즘)

  • Soh, Yongseok;Sim, Jae-Young;Lee, Sang-Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.343-344 / 2011
  • Recently, with advances in 3D display technology, demand for 3D content has also been growing. While 3D content is produced using stereoscopic lenses and research on 3D reconstruction from multiple 2D images is actively underway, this paper proposes an algorithm that obtains a depth map from a single 2D image. Motivated by the human visual system's ability to perceive 3D structure from a single image, we propose an algorithm that extracts depth information from a single image. Among depth cues, we introduce the occlusion cue and combine it with additional depth cues used by the human visual system within a machine learning framework. Experiments confirm that the proposed algorithm produces high-quality depth maps using object boundary information.


3D Spatial Info Development using Layered Depth Images (계층적 깊이 영상을 활용한 3차원 공간정보 구현)

  • Song, Sang-Hun;Jo, Myung-Hee
    • Proceedings of the KSRS Conference / 2007.03a / pp.97-102 / 2007
  • Because 3D spatial information offers far greater spatial realism than 2D, interest in it has recently grown in fields such as landscape analysis, urban planning, and web-based map services. However, due to its geometric characteristics, 3D spatial information produces far larger data volumes than conventional 2D spatial information, which causes many problems for producing further content and for fast, efficient processing. To address these problems, this paper sets a camera at each position based on 3D terrain information generated from DEM (Digital Elevation Model) data acquired from satellite and aerial platforms, together with information obtained through city modeling and texture mapping, and computes a camera matrix from each camera position. The acquired camera information contains depth information; based on this depth information, a layered depth image (LDI) is generated through 3D warping, and 3D spatial information is implemented using the generated LDI.

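
The core data structure the abstract builds, a layered depth image, stores several depth/color samples per pixel instead of one, so that surfaces hidden in one view survive warping to another. The following is a minimal sketch of such a structure under my own assumptions (class and method names are illustrative, not the paper's code):

```python
from collections import defaultdict

class LayeredDepthImage:
    """Minimal layered depth image (LDI): each pixel keeps a list of
    (depth, color) samples, so warped views can retain surfaces that a
    single-valued depth map would occlude."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.layers = defaultdict(list)   # (x, y) -> [(depth, color), ...]

    def insert(self, x, y, depth, color):
        samples = self.layers[(x, y)]
        samples.append((depth, color))
        samples.sort()                    # keep samples ordered front-to-back

    def front(self, x, y):
        """Nearest sample at a pixel, or None if the pixel is empty."""
        samples = self.layers.get((x, y))
        return samples[0] if samples else None
```

Warping each source camera's depth samples into this structure, then reading the front sample per pixel, yields the novel-view rendering step the abstract describes.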

Hybrid disparity map generation method based on reliability (신뢰도를 기반한 혼합형 변위 지도 생성 방법)

  • Jang, Woo-Seok;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.73-74 / 2015
  • 3D content production is a field receiving much attention. Depth information, which can be represented as a disparity map, is essential for generating 3D content. This paper proposes a method for generating an accurate disparity map using a depth camera and a stereo camera. To estimate the disparity between the stereo images, the proposed method projects the depth camera data onto the left and right camera positions by 3D warping. The projected depth data is then upsampled to match the resolution of the stereo images. Finally, the upsampled depth camera data and the stereo data are combined to generate an accurate disparity map. Experimental results show that the proposed method produces more accurate results than conventional single-sensor approaches.

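
The upsample-then-combine pipeline in the abstract can be sketched as follows. This is a simplified stand-in, assuming nearest-neighbor upsampling and a confidence-thresholded fusion rule (the threshold and fusion rule are my assumptions, not the paper's):

```python
import numpy as np

def upsample_depth(depth_lowres, target_shape):
    """Nearest-neighbor upsampling of a low-resolution depth map to the
    stereo image resolution (stand-in for the paper's upsampling step)."""
    h, w = depth_lowres.shape
    H, W = target_shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return depth_lowres[np.ix_(rows, cols)]

def fuse_disparity(stereo_disp, sensor_disp, stereo_conf, threshold=0.5):
    """Keep the stereo-matching disparity where its confidence is high,
    fall back to the (upsampled) depth-sensor disparity elsewhere."""
    return np.where(stereo_conf >= threshold, stereo_disp, sensor_disp)
```

In this sketch, regions where stereo matching is unreliable (textureless or occluded areas) inherit the warped depth-camera measurement, which is the complementary-sensor idea the abstract describes.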

Stereoscopic Conversion of Monoscopic Video using Edge Direction Histogram (에지 방향성 히스토그램을 이용한 2차원 동영상의 3차원 입체변환기법)

  • Kim, Jee-Hong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.8C / pp.782-789 / 2009
  • In this paper, we propose an algorithm for converting monoscopic video into stereoscopic video. In a 2D perspective image, parallel straight lines in 3D space appear to converge as they recede from the viewer, finally meeting at a single point called the vanishing point. Since the vanishing point is the farthest point from the viewer's viewpoint, it serves as a depth perception cue: the viewer estimates the vanishing point from geometric features of a monoscopic image and perceives depth from the relationship between the position of the vanishing point and the viewpoint. We propose a method that estimates the vanishing point of a general monoscopic image using an edge direction histogram and creates a depth map according to the position of the vanishing point. Experimental results show that the proposed conversion method achieves stable stereoscopic conversion of a given monoscopic video.
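
The final step, turning a vanishing-point position into a depth map, can be sketched as below. The linear distance-to-depth mapping is an illustrative assumption; the paper's exact assignment rule is not given in the abstract:

```python
import numpy as np

def depth_from_vanishing_point(shape, vp):
    """Assign depth by distance to the vanishing point: pixels at the
    vanishing point are farthest from the viewer, pixels far from it are
    nearest. `vp` is the (x, y) vanishing-point position."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.hypot(xs - vp[0], ys - vp[1])
    return dist / dist.max()   # 0 = farthest (at the VP), 1 = nearest
```

Such a depth map, combined with DIBR-style view synthesis, gives the left/right view pair for the stereoscopic output.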

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE:Software and Applications / v.31 no.5 / pp.617-624 / 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on the 3D facial surface. In general, humans recognize the relative deepness or shallowness of regions by comparing the neighboring depth information among the regions of an object: the larger the depth difference between regions, the easier each region is to recognize. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input. ADD are obtained by differencing two range values separated by a fixed distance, in both the horizontal and vertical directions. The ADD and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of the 3D facial surface, is then localized effectively and accurately.
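
The ADD computation itself is a simple shifted difference, sketched below; the offset `d` is an assumed parameter, not a value taken from the paper:

```python
import numpy as np

def adjacent_depth_differences(range_img, d=3):
    """Adjacent Depth Differences (ADD): absolute difference between two
    range values separated by a fixed offset d, computed in both the
    horizontal and vertical directions."""
    add_h = np.abs(range_img[:, d:] - range_img[:, :-d])   # horizontal ADD
    add_v = np.abs(range_img[d:, :] - range_img[:-d, :])   # vertical ADD
    return add_h, add_v
```

Peaks in the two ADD maps mark abrupt depth transitions, which is why a protruding feature like the nose stands out in them.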

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association / v.12 no.12 / pp.1-13 / 2012
  • 3D video is regarded as the next-generation content for numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a fully mature 3D video era. In 2D-to-3D conversion, after the depth image of each scene in a 2D video is estimated, stereoscopic video is synthesized using DIBR (Depth Image Based Rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates multiple depth cues contained in a 2D video to generate stereoscopic video. For proper depth fusion, each cue is first tested for reliability in the current scene. Based on the results of the reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion is applied to combine the reliable depth cues into the final depth information. Simulation results show that each depth cue is utilized reasonably according to the scene type and that the final depth is generated from the cues that best represent the current scene.
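
The reliability-gated fusion step can be sketched as follows. The threshold test and reliability-weighted averaging are illustrative assumptions; the paper's exact scene classification and fusion rules are not given in the abstract:

```python
import numpy as np

def fuse_depth_cues(cues, reliabilities, threshold=0.5):
    """Scene-adaptive depth fusion (sketch): discard cues whose
    reliability falls below a threshold, then average the remaining
    depth maps weighted by their reliability."""
    kept = [(c, r) for c, r in zip(cues, reliabilities) if r >= threshold]
    if not kept:
        raise ValueError("no reliable depth cue for this scene")
    total = sum(r for _, r in kept)
    return sum(c * r for c, r in kept) / total
```

With per-scene reliability scores, a scene where, say, the motion cue fails (a static shot) automatically falls back on the remaining cues.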

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.96-105 / 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth of every region from a single-view image taken under camera translation. The paper is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates the relative depth of each region with respect to the average frame depth. Simulation results show that the estimated depth of regions belonging to near or far objects is consistent with the relative depth that a human observer perceives.

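
The depth-from-translation relation in the abstract reduces to a simple inverse law: under pure camera translation, apparent motion is inversely proportional to depth. A minimal sketch, assuming per-region average motion magnitudes are already available:

```python
import numpy as np

def relative_region_depth(motion_mags):
    """Given the average motion magnitude of each region (after
    compensating for camera rotation and zoom), estimate depth up to
    scale as 1/|motion| and normalize by the frame-average depth."""
    mags = np.asarray(motion_mags, dtype=float)
    depth = 1.0 / np.maximum(mags, 1e-6)   # guard against zero motion
    return depth / depth.mean()            # relative to the frame average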

Reconstruction of the Lost Hair Depth for 3D Human Actor Modeling (3차원 배우 모델링을 위한 깊이 영상의 손실된 머리카락 영역 복원)

  • Cho, Ji-Ho;Chang, In-Yeop;Lee, Kwan-H.
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.1-9 / 2007
  • In this paper, we propose a technique for reconstructing the lost hair region for 3D human actor modeling. An active depth sensor system can capture both color and geometry information of an object simultaneously in real time, but it cannot acquire regions whose surfaces are shiny or dark. To obtain a natural 3D human model, the lost regions in the depth image, especially the hair, must therefore be recovered. The recovery uses both the color and depth images: we first detect the hair region in the color image, and after the boundary of the hair region is estimated, its interior is recovered using interpolation and a morphological closing operation. A 3D mesh model is then generated through a series of operations including adaptive sampling, triangulation, mesh smoothing, and texture mapping. The proposed method generates the recovered 3D mesh stream automatically, and the final 3D human model allows view interaction or haptic interaction in a realistic broadcasting system.

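
The morphological closing step the abstract mentions (dilation followed by erosion, to fill small gaps in the detected hair mask) can be sketched with a cross-shaped 3x3 structuring element; the element shape and iteration count are my assumptions:

```python
import numpy as np

def dilate(mask):
    """3x3 cross-shaped binary dilation via shifted ORs."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """3x3 cross-shaped binary erosion, as the dual of dilation."""
    return ~dilate(~mask)

def close_holes(mask, iterations=1):
    """Morphological closing (dilation then erosion) to fill small gaps
    inside a detected hair mask before depth interpolation."""
    out = mask.astype(bool)
    for _ in range(iterations):
        out = dilate(out)
    for _ in range(iterations):
        out = erode(out)
    return out
```

Once the mask is closed, depth values inside it can be filled by interpolating from the surrounding valid depth samples.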

Recent Trends of Weakly-supervised Deep Learning for Monocular 3D Reconstruction (단일 영상 기반 3차원 복원을 위한 약교사 인공지능 기술 동향)

  • Kim, Seungryong
    • Journal of Broadcast Engineering / v.26 no.1 / pp.70-78 / 2021
  • Estimating 3D information from a single image is an essential problem in numerous applications. Since a 2D image may originate from an infinite number of different 3D scenes, 3D reconstruction from a single image is notoriously challenging. This challenge has been addressed by recent deep convolutional neural networks (CNNs), which model the mapping function between a 2D image and 3D information. Training such deep CNNs, however, demands massive training data, which is difficult or even impossible to collect. Recent work therefore aims at deep learning techniques that can be trained in a weakly-supervised manner, using meta-data rather than ground-truth depth data. In this article, we introduce recent developments in weakly-supervised deep learning, categorized into scene 3D reconstruction and object 3D reconstruction, and discuss limitations and future directions.

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB / v.8B no.5 / pp.549-556 / 2001
  • Recovering 3D from 2D imagery generally requires depth information for each picture element, and manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth of every region from a monocular image sequence taken under camera translation, so that 3D video can be implemented in real time. The paper is based on the property that the motion of every point in an image taken under camera translation depends on its depth. Full-search motion estimation based on block matching is applied first, and the resulting motion vectors are then compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing the monocular motion picture and calculates the average frame depth and the relative depth of each region with respect to that average. Simulation results show that the estimated depth of regions belonging to near or distant objects accords with the relative depth perceived by the human visual system.
