• Title/Summary/Keyword: Depth Map (깊이 맵)


2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.3
    • /
    • pp.129-135
    • /
    • 2011
  • In this paper, we propose an algorithm that uses optical flow and machine-learning-based segmentation for the 3D conversion of 2D video. For segmentation that enables successful 3D conversion, we design a new energy function in which color/texture features are included through a machine learning method and optical flow is introduced to focus on regions with motion. The depth map is then calculated from the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
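
The per-region depth assignment described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the motion-to-depth mapping (stronger motion means nearer) and the 8-bit normalization are assumptions.

```python
import numpy as np

def depth_from_flow(flow, labels):
    """Assign each segmented region a single depth value proportional to its
    mean optical-flow magnitude (stronger apparent motion -> nearer)."""
    mag = np.hypot(flow[..., 0], flow[..., 1])   # per-pixel flow magnitude
    depth = np.zeros_like(mag)
    for region in np.unique(labels):
        mask = labels == region
        depth[mask] = mag[mask].mean()           # one depth value per region
    # normalize to an 8-bit depth map (0 = far, 255 = near)
    span = depth.max() - depth.min()
    return np.round(255 * (depth - depth.min()) / (span + 1e-8)).astype(np.uint8)
```

Left/right views would then be synthesized by shifting pixels horizontally in proportion to this depth map.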

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.1-7
    • /
    • 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data is then regularized and provided as a depth image. This depth image is combined with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected because of the technical limitations of the TOF depth camera. The corrected depth image is combined with the color image in various ways to obtain a high-resolution depth map of the scene. In this paper, we introduce the principles and various sensor-fusion techniques for high-quality depth generation using multiple cameras together with depth cameras.
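
One common way to combine a low-resolution TOF depth image with a high-resolution color image is joint bilateral upsampling; the sketch below is illustrative only (pure NumPy, brute-force loops, and assumed parameter values, not the paper's method). Each output depth is a weighted average of nearby low-res depth samples, weighted by spatial closeness and by color similarity in the high-res guide image.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_s=2.0, sigma_r=20.0, radius=2):
    """Upsample a low-resolution TOF depth map, guided by high-res color."""
    H, W = color_hr.shape[:2]
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale        # position on the low-res grid
            w_sum = d_sum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(yl) + dy, int(xl) + dx
                    if not (0 <= qy < h and 0 <= qx < w):
                        continue
                    spatial = np.exp(-((yl - qy) ** 2 + (xl - qx) ** 2)
                                     / (2 * sigma_s ** 2))
                    # range term: color difference in the high-res guide image
                    gy = min(int(qy * scale), H - 1)
                    gx = min(int(qx * scale), W - 1)
                    diff = color_hr[y, x].astype(float) - color_hr[gy, gx]
                    rng = np.exp(-np.sum(diff ** 2) / (2 * sigma_r ** 2))
                    w_sum += spatial * rng
                    d_sum += spatial * rng * depth_lr[qy, qx]
            out[y, x] = d_sum / (w_sum + 1e-8)
    return out
```

The color-similarity term keeps depth edges aligned with color edges, which is why such fusion preserves object boundaries better than plain interpolation.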

Automatic 3D Map-Object Generation Using Texture Analysis Table (텍스처 분석 테이블을 이용한 3D 지형 객체 자동 생성)

  • 선영범;김태용;이원형
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2003.11b
    • /
    • pp.634-637
    • /
    • 2003
  • This paper proposes an algorithm for terrain-centric games that efficiently generates terrain objects defined by height, using a depth-level-based Texture Analysis Table (TAT). In existing methods, terrain textures and terrain objects such as trees and rocks were edited manually in a map editor to achieve realistic terrain representation. With the proposed algorithm, a wide variety of terrain textures can be generated from only a minimal set of terrain textures per depth level, and natural objects can be generated automatically using the depth values from the TAT. This reduces unnecessary work in producing game terrain, so that more time can be invested in creating artificial objects.
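
The table-driven generation described above might look like the following sketch. The table contents, level count, and field names are hypothetical; the paper's actual TAT structure is not given in this abstract.

```python
import random

# Hypothetical Texture Analysis Table: depth level -> (texture, natural objects)
TAT = {
    0: ("water", []),
    1: ("sand",  ["shell"]),
    2: ("grass", ["tree", "bush"]),
    3: ("rock",  ["boulder"]),
}

def generate_terrain(heightmap, levels=4, seed=0):
    """For each cell, derive a depth level from its height and look up the
    texture and a (randomly chosen) natural object in the TAT."""
    rng = random.Random(seed)
    hmin = min(min(row) for row in heightmap)
    hmax = max(max(row) for row in heightmap)
    out = []
    for row in heightmap:
        out_row = []
        for h in row:
            level = min(levels - 1,
                        int(levels * (h - hmin) / (hmax - hmin + 1e-9)))
            tex, objs = TAT[level]
            obj = rng.choice(objs) if objs else None
            out_row.append((tex, obj))
        out.append(out_row)
    return out
```

A designer then only needs one representative texture per depth level instead of hand-placing every object.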


Structured lights Calibration for Depth Map Acquisition System (깊이맵 획득을 위한 가시구조광 캘리브레이션)

  • Yang, Seung-Jun;Choo, Hyon-Gon;Cha, Jihun;Kim, Jinwoong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2011.11a
    • /
    • pp.242-243
    • /
    • 2011
  • In structured-light depth acquisition, the color information of the coded pattern is used to decode the pattern in the captured image and to recover the object's depth from the pattern's phase shift, so it is important that the structured-light patterns are projected accurately onto the target. Depending on the projector's characteristics, however, the RGB channels of the pattern are often misaligned. This paper proposes a method for calibrating color structured light according to the projector's characteristics. During the calibration of a time-varying visible structured-light system, the proposed method extracts the RGB pattern channels from the projected image, determines in which direction each channel has bled by examining its histogram, and realigns the channels against the original pattern. Experimental results show that the proposed method calibrates visible structured-light patterns more simply than existing methods.
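
The channel-realignment idea can be illustrated with a simplified sketch. Here the per-channel shift is estimated by cross-correlating 1D column-intensity profiles rather than the paper's histogram analysis, and the search range of ±5 pixels is an assumption.

```python
import numpy as np

def realign_channels(captured, reference):
    """Estimate each RGB channel's horizontal shift relative to the reference
    pattern via 1D profile cross-correlation, then roll it back into place."""
    aligned = np.empty_like(captured)
    for c in range(3):
        prof_c = captured[:, :, c].mean(axis=0)   # column intensity profile
        prof_r = reference[:, :, c].mean(axis=0)
        corr = [np.dot(np.roll(prof_c, -s), prof_r) for s in range(-5, 6)]
        shift = range(-5, 6)[int(np.argmax(corr))]
        aligned[:, :, c] = np.roll(captured[:, :, c], -shift, axis=1)
    return aligned
```

Once the channels are re-registered, the coded color pattern can be decoded reliably for depth recovery.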


Stereo Matching Algorithm Using TAD-Adaptive Census Transform Based on Multi Sparse Windows (Multi Sparse Windows 기반의 TAD-Adaptive Census Transform을 이용한 스테레오 정합 알고리즘)

  • Lee, Ingyu;Moon, Byungin
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1559-1562
    • /
    • 2015
  • As 3D depth information is used in a growing number of applications, research into extracting accurate depth information continues. In particular, ASW (Adaptive Support Weight) is widely used to improve the accuracy of conventional area-based algorithms. Among such methods, ACT (Adaptive Census Transform) suffers from low accuracy in occluded regions and at object boundaries. This paper proposes a stereo matching algorithm that improves on conventional ACT to extract an accurate depth map. Based on MSW (Multiple Sparse Windows), which is robust to noise and highly reusable, it uses two matching measures simultaneously, TAD (Truncated Absolute Difference) and ACT, to overcome the low accuracy of existing methods in occluded regions and at object boundaries. Simulation results on the Middlebury benchmark images show that the proposed method achieves an error rate about 1.9% lower on average than the existing method.
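
The census transform at the core of ACT can be sketched as below. This is the textbook full-window census with a Hamming-distance cost, not the authors' MSW/TAD variant; the 3x3 window and the wrap-around border handling via np.roll are simplifications.

```python
import numpy as np

def census_transform(img, win=3):
    """Encode each pixel as a bit string: one bit per neighbour in a
    win x win window, set when that neighbour is darker than the centre."""
    r = win // 2
    H, W = img.shape
    codes = np.zeros((H, W), dtype=np.uint32)  # uint32 holds up to 5x5 windows
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << 1) | (shifted < img).astype(np.uint32)
    return codes

def hamming_cost(codes_l, codes_r, d):
    """Matching cost at disparity d = Hamming distance between census codes."""
    x = codes_l ^ np.roll(codes_r, d, axis=1)
    cost = np.zeros(x.shape, dtype=np.uint8)
    while x.any():
        cost += (x & 1).astype(np.uint8)   # count set bits one plane at a time
        x = x >> 1
    return cost
```

Because the census code depends only on intensity ordering, the cost is robust to radiometric differences between the two cameras, which is why census-based costs are popular in hardware stereo pipelines.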

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.36-42
    • /
    • 2012
  • Recently, smart applications such as smart phones and smart TVs have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video involves stereoscopic or multi-view images to provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints of multi-view video is limited, 3D display devices should generate arbitrary viewpoint images using the available adjacent-view images. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate for view synthesis errors around object boundaries. We describe a 3D warping technique exploiting the depth map for viewpoint shifting and a hole-filling method using multi-view images. Then, we propose an algorithm to remove the boundary noises that are generated by mismatches of object edges between the color and depth images. The proposed method reduces annoying boundary noises near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.
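
The depth-based warping and hole-marking steps described here can be illustrated with a simplified horizontal-disparity version. The camera parameters and 8-bit depth quantization below are hypothetical; real DIBR warps with full projection matrices.

```python
import numpy as np

def dibr_warp(color, depth, baseline, focal, z_near, z_far):
    """Warp a reference view to a virtual view shifted by `baseline`,
    using per-pixel disparity derived from an 8-bit depth map."""
    H, W = depth.shape
    # convert 8-bit depth to metric Z, then to pixel disparity
    z = 1.0 / (depth / 255.0 * (1 / z_near - 1 / z_far) + 1 / z_far)
    disp = np.round(focal * baseline / z).astype(int)
    warped = np.zeros_like(color)
    hole = np.ones((H, W), dtype=bool)
    # paint far pixels first so nearer pixels overwrite them (z-buffering)
    order = np.argsort(-z, axis=None)
    ys, xs = np.unravel_index(order, (H, W))
    for y, x in zip(ys, xs):
        nx = x - disp[y, x]
        if 0 <= nx < W:
            warped[y, nx] = color[y, x]
            hole[y, nx] = False
    return warped, hole   # `hole` marks disocclusions to be filled later
```

The `hole` mask is exactly where hole filling from the other reference view (and the paper's boundary-noise removal) would operate.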


Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.3
    • /
    • pp.1302-1309
    • /
    • 2012
  • In this paper, we propose applying differential depth techniques for 2D/3D video conversion to objects grouped by depth information. One problem in converting 2D images to 3D images with techniques that track pixel motion is that objects that do not move between adjacent frames provide no depth information. This problem can be solved by applying a relative height cue only to the objects that have no motion information between frames, after splitting the background from the objects and extracting depth information from the motion vectors between objects. Using this technique, the background and every object obtain their own depth information. The proposed method is used to generate a depth map for producing 3D images with DIBR (Depth Image Based Rendering), and we verified that objects with no movement between frames also obtain depth information.
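
The fallback from motion-based depth to the relative height cue might be sketched as follows. The object record fields and the scaling constants are hypothetical, since the abstract does not specify them.

```python
import math

def assign_object_depth(objects, H):
    """Give moving objects motion-based depth; give static objects a relative
    height cue (lower in the frame -> nearer). `objects` is a list of dicts
    with hypothetical fields 'motion' (mean |motion vector|) and 'bottom_y'
    (lowest pixel row of the object); H is the frame height."""
    depths = []
    for obj in objects:
        if obj["motion"] > 0:
            # faster apparent motion -> nearer (scale factor is illustrative)
            depth = min(255, int(round(obj["motion"] * 25.5)))
        else:
            # static object: objects lower in the frame are assumed nearer
            depth = int(round(255 * obj["bottom_y"] / (H - 1)))
        depths.append(depth)
    return depths
```

With this rule, every grouped object ends up with a depth value, so the DIBR stage never receives a depthless region.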

The Design and Implementation of a Depth Map-based Real-time Virtual Image Synthesis System (깊이 맵 기반의 실시간 가상 영상합성 시스템의 설계 및 구현)

  • Lee, Hye-Mi;Ryu, Nam-Hoon;Roh, Gwhan-Sung;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.11
    • /
    • pp.1317-1322
    • /
    • 2014
  • To complete an image, the actual actor's motion must be captured and composited with a virtual environment. Due to excessive production costs or a lack of post-processing technology, however, this is mostly done by manual labor. The actor plays his role relying on his own imagination in the virtual chroma-key studio, where he has to move while considering possible collisions with, or reactions to, objects that do not exist. In the CG compositing stage, when the actor's motion does not match the virtual environment, the original footage may have to be discarded and the scene reshot. This study proposes and implements a depth-map-based real-time 3D virtual image composition system to reduce the reshoot ratio, shorten production time, and lower production costs. Because the virtual background, the 3D model, and the real actor are composited in real time on the set, mutual collisions and reactions can be checked, and the actor's wrong position or acting can be corrected instantly on the spot.

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1307-1312
    • /
    • 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and a LiDAR (Light Detection and Ranging) sensor to address the core components of autonomous driving perception: object recognition and distance measurement. Using the proposed hybrid camera system, we extract objects within the scene and generate precise location and distance information for them. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps that provide object positions and distance information. To enhance distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensor into the generated depth maps. This paper introduces an autonomous vehicle platform that can perceive its surroundings more accurately during operation based on the proposed hybrid camera system, and that provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
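
The LiDAR/depth-map integration might look like the minimal sketch below: each LiDAR point is projected into the image with a pinhole model and its range overwrites the less accurate camera depth at that pixel. The intrinsics `K` and the overwrite rule are assumptions; the abstract does not detail the fusion method.

```python
import numpy as np

def fuse_lidar_depth(depth_map, lidar_pts, K):
    """Project LiDAR points (X, Y, Z in the camera frame) into the image
    with intrinsics K and overwrite the camera depth at those pixels."""
    fused = depth_map.copy()
    for X, Y, Z in lidar_pts:
        if Z <= 0:
            continue   # behind the camera
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= v < fused.shape[0] and 0 <= u < fused.shape[1]:
            fused[v, u] = Z   # trust LiDAR range over camera-derived depth
    return fused
```

Because LiDAR returns are sparse, a real system would then propagate these anchor depths into the surrounding depth-map pixels rather than leaving them as isolated corrections.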

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.11
    • /
    • pp.447-454
    • /
    • 2018
  • This paper proposes a depth image generation algorithm for stereo images using a deep learning model composed of CNNs (convolutional neural networks). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each view image, and a depth learning unit, which learns disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each view through the Xception module and the ASPP (Atrous Spatial Pyramid Pooling) module, which are composed of 2D CNN layers. The feature maps for the two views are then stacked into a 3D volume according to disparity, and the depth image is estimated after passing through the depth learning unit, which learns depth-estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
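
The disparity-wise stacking of per-view feature maps into a volume can be sketched as follows (NumPy for illustration; in the actual model the features are CNN tensors and the volume is consumed by learned 3D convolutions, and the exact concatenation scheme is an assumption).

```python
import numpy as np

def build_feature_volume(feat_l, feat_r, max_disp):
    """Concatenate left features with right features shifted by each candidate
    disparity d, producing a (2C, max_disp, H, W) volume for a 3D CNN."""
    C, H, W = feat_l.shape
    vol = np.zeros((2 * C, max_disp, H, W), dtype=feat_l.dtype)
    for d in range(max_disp):
        vol[:C, d, :, d:] = feat_l[:, :, d:]       # left features
        vol[C:, d, :, d:] = feat_r[:, :, :W - d]   # right features shifted by d
    return vol
```

The 3D CNN then scores, for every pixel, how well the two views agree at each candidate disparity, and the depth image is read off from the best-matching disparity.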