• Title/Summary/Keyword: Depth Camera (깊이 카메라)

Search Results: 470

Alignment of Convergent Multi-view Depth Maps Based on the Camera Intrinsic Parameters (카메라의 내부 파라미터를 고려한 수렴형 다중 깊이 지도의 정렬)

  • Lee, Kanghoon;Park, Jong-Il;Shin, Hong-Chang;Bang, Gun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.457-459 / 2015
  • In this paper, we propose a method for aligning depth maps generated from images of multiple RGB cameras arranged along a circular arc. Ideally, the optical axes of cameras arranged along such an arc converge at a single point; however, the camera parameters show that the axes do not actually converge. Moreover, because the camera parameters contain errors and the intrinsic parameters differ between cameras, horizontal and vertical misalignments arise between the camera images. To solve these problems, we first align the depth images by correcting the extrinsic camera parameters so that the optical axes converge at one point. Second, we reduce the horizontal and vertical errors between the depth images by adjusting the intrinsic parameters. In general, to obtain aligned depth maps, rectification is performed on the initial RGB camera images and depth images are then generated from the rectified results. However, if alignment is done by correcting the rotation and position of the cameras in the RGB images, applying the depth-map changes caused by the camera position changes becomes complicated: fractional (sub-pixel) values are lost during the alignment computation, which affects the values of the final depth map. Therefore, we first generate depth maps from the RGB images, warp them using the initial RGB camera parameters, and then perform alignment on the warped depth-map values.
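The warping step described above (reprojecting depth values using the camera parameters) can be sketched as a generic pinhole-camera 3-D warp. This is a minimal illustration under assumed conventions; the names `K_src`, `K_dst`, `R`, and `t` are placeholders, not values from the paper.

```python
import numpy as np

def warp_depth_map(depth, K_src, K_dst, R, t):
    """Warp a depth map from a source camera into a destination camera.

    depth : (H, W) array of metric depth values (0 = invalid)
    K_src, K_dst : 3x3 intrinsic matrices
    R, t : rotation (3x3) and translation (3,) from source to destination
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # homogeneous pixels

    # Back-project each pixel to a 3-D point in the source camera frame.
    rays = np.linalg.inv(K_src) @ pix
    pts = rays * depth.ravel()

    # Rigid transform into the destination camera frame, then project.
    pts_dst = R @ pts + t[:, None]
    proj = K_dst @ pts_dst
    z = proj[2]
    valid = z > 0
    u_dst = np.round(proj[0, valid] / z[valid]).astype(int)
    v_dst = np.round(proj[1, valid] / z[valid]).astype(int)

    # Splat the transformed depth values into the destination image grid.
    warped = np.zeros_like(depth)
    inside = (u_dst >= 0) & (u_dst < W) & (v_dst >= 0) & (v_dst < H)
    warped[v_dst[inside], u_dst[inside]] = z[valid][inside]
    return warped
```

Splatting rounded pixel coordinates like this leaves holes and ignores occlusion ordering; practical systems add z-buffering and hole filling on top.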


Virtual View-point Depth Image Synthesis System for CGH (CGH를 위한 가상시점 깊이영상 합성 시스템)

  • Kim, Taek-Beom;Ko, Min-Soo;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.7 / pp.1477-1486 / 2012
  • In this paper, we propose a multi-view CGH generation system based on virtual view-point depth image synthesis. We acquire a reliable depth image using a TOF depth camera and extract the parameters of the reference-view cameras. Once the position of the virtual view-point camera is defined, we select the optimal reference-view camera by considering its position and its distance from the virtual view-point camera. Setting the reference-view camera on the opposite side of the primary reference-view camera as the sub reference-view, we generate the depth image of the virtual view-point, and we compensate the occlusion boundaries of the virtual view-point depth image using the depth image of the sub reference-view. In this step, remaining hole boundaries are filled with the minimum values of their neighborhoods. We then generate the final depth image of the virtual view-point and, from it, the CGH. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
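The hole-compensation step mentioned above (filling remaining holes with the minimum value of the neighborhood) might look like the following sketch; the 8-neighborhood and the zero hole marker are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fill_holes_min_neighbor(depth, hole_value=0, max_iters=100):
    """Fill hole pixels with the minimum valid depth among their 8 neighbors.

    Filling happens in place, so holes filled earlier in a pass can feed
    later ones; iteration repeats until no hole remains or nothing changes.
    """
    depth = depth.astype(float).copy()
    H, W = depth.shape
    for _ in range(max_iters):
        holes = np.argwhere(depth == hole_value)
        if holes.size == 0:
            break
        filled_any = False
        for y, x in holes:
            ys = slice(max(y - 1, 0), min(y + 2, H))
            xs = slice(max(x - 1, 0), min(x + 2, W))
            window = depth[ys, xs]
            valid = window[window != hole_value]
            if valid.size:
                depth[y, x] = valid.min()
                filled_any = True
        if not filled_any:
            break  # isolated holes with no valid neighbors anywhere
    return depth
```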

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera (깊이 카메라를 이용한 객체 분리 및 고해상도 깊이 맵 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.9 / pp.751-756 / 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, its output contains noise and distortion. Therefore, we perform several preprocessing steps such as image enhancement, segmentation, and 3D warping, and then use the TOF depth data to detect the depth-discontinuity regions. Next, we extract the foreground object and generate a depth map at the resolution of the color image. The experimental results show that the proposed method efficiently generates the depth map even for object boundaries and textureless regions.

Generation of ROI Enhanced High-resolution Depth Maps in Hybrid Camera System (복합형 카메라 시스템에서 관심영역이 향상된 고해상도 깊이맵 생성 방법)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.13 no.5 / pp.596-601 / 2008
  • In this paper, we propose a new scheme to generate region-of-interest (ROI) enhanced depth maps in a hybrid camera system composed of a low-resolution depth camera and a high-resolution stereoscopic camera. The proposed method creates an ROI depth map for the left image by carrying out a three-dimensional (3-D) warping operation on the depth information obtained from the depth camera. Then, we generate a background depth map for the left image by applying a stereo matching algorithm to the left and right images captured by the stereoscopic camera. Finally, we merge the ROI map with the background one to create the final depth map. The proposed method provides higher-quality depth information in the ROI than the previous methods.
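The final merge of the warped ROI depth map with the stereo-matched background depth map is essentially a masked selection. A minimal sketch, assuming (as an illustrative convention) that invalid ROI pixels are marked with 0:

```python
import numpy as np

def merge_depth_maps(roi_depth, background_depth, invalid=0):
    """Merge an ROI depth map (from the depth camera, warped to the left
    view) with a background depth map (from stereo matching).

    Wherever the depth camera provided a valid ROI value, keep it;
    everywhere else fall back to the stereo-matched background depth.
    """
    roi_depth = np.asarray(roi_depth)
    background_depth = np.asarray(background_depth)
    return np.where(roi_depth != invalid, roi_depth, background_depth)
```

Keeping the ROI value wherever one exists preserves the higher-quality depth-camera measurements while the stereo result fills in the background.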

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • The depth camera measures the range information of a scene in real time using time-of-flight (TOF) technology. The measured depth data are then regularized and provided as a depth image, which is combined with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected due to the technical limitations of the TOF depth camera. The corrected depth image is combined with the color images in various ways to obtain the high-resolution depth of the scene. In this paper, we introduce the principle of, and various techniques for, sensor fusion for high-quality depth generation using multiple cameras together with depth cameras.

Specification and Limitation of ToF Cameras (ToF 카메라의 특성과 그 한계)

  • Hong, Su-Min;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.12-15 / 2016
  • Recently, the demand for 3D content has been steadily increasing. Since the quality of 3D content strongly depends on the depth information of the scene, obtaining accurate depth information is very important. Depth acquisition methods are broadly divided into passive and active approaches; because passive methods involve complex computation and do not guarantee the quality of the depth map, active methods are widely used. Active methods obtain depth information directly with a depth camera, usually based on time-of-flight (ToF) technology. In this paper, to analyze the characteristics of real depth maps captured by ToF depth cameras, we compared the depth-map quality of an SR4000 depth camera and a Kinect v2 sensor under various capture environments and objects. The experimental results showed that it was difficult to obtain accurate depth information for materials and surfaces that reflect infrared light poorly, at boundary regions, in dark regions, and in hair regions, and that accurate depth information could not be acquired in outdoor environments.


High-quality 3-D Video Generation using Scale Space (계위 공간을 이용한 고품질 3차원 비디오 생성 방법 -다단계 계위공간 개념을 이용해 깊이맵의 경계영역을 정제하는 고화질 복합형 카메라 시스템과 고품질 3차원 스캐너를 결합하여 고품질 깊이맵을 생성하는 방법-)

  • Lee, Eun-Kyung;Jung, Young-Ki;Ho, Yo-Sung
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.620-624 / 2009
  • In this paper, we present a new camera system that combines a high-quality 3-D scanner with a hybrid camera system to generate multiview video-plus-depth. To obtain the 3-D video, we first acquire the depth information for the background region from the 3-D scanner, and then obtain the depth map for the foreground area from the hybrid camera system. Initial depths of each view image are estimated by performing 3-D warping with this depth information. Thereafter, multiview depth estimation using the initial depths is carried out to obtain an initial disparity map for each view. We correct the initial disparity maps using a belief propagation algorithm to generate high-quality multiview disparity maps. Finally, we refine the depths of the foreground boundary using extracted edge information. Experimental results show that the proposed depth map generation method produces a 3-D video with more accurate multiview depths and supports more natural 3-D views than previous works.


Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important because it determines the quality of the generated virtual view images. Many works enhance such depth data by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
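The joint bilateral filtering step used to align depth edges with color edges can be sketched as a textbook joint (cross) bilateral filter with a grayscale guide image; the window size and sigma values here are illustrative placeholders, not the authors' settings.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth a warped depth map using a grayscale image as the guide.

    Spatial weights come from pixel distance; range weights come from the
    guide image, so depth edges snap to color edges rather than depth noise.
    """
    H, W = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed kernel
    d = np.pad(depth.astype(float), radius, mode='edge')
    g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(H):
        for x in range(W):
            dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights from guide-image similarity to the center pixel.
            rng = np.exp(-((gwin - guide[y, x])**2) / (2 * sigma_r**2))
            w = spatial * rng
            out[y, x] = (w * dwin).sum() / w.sum()
    return out
```

The per-pixel Python loops are for clarity only; a real-time system would use a vectorized or GPU implementation.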

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.13-18 / 2011
  • In this paper, we explain the capturing, postprocessing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures the scene's depth in real time, the output depth images contain noise and lens distortion, and the correlation between the multi-view color images and the depth images is low. Therefore, it is essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
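As a baseline for the stereo matching this abstract builds on, a minimal sum-of-absolute-differences (SAD) block matcher over rectified images looks like the sketch below; the window size and disparity range are arbitrary illustrative choices, and this is a generic baseline, not the paper's method.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, block=3):
    """Estimate a disparity map by SAD block matching between two
    rectified grayscale images (left and right)."""
    H, W = left.shape
    half = block // 2
    disp = np.zeros((H, W), dtype=int)
    for y in range(half, H - half):
        for x in range(half, W - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best, best_d = None, 0
            # Search candidate disparities along the same scanline.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

A disparity prior from a depth camera would narrow the per-pixel search range in the inner loop, which is one way such sensor fusion can improve both speed and accuracy.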

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering / v.24 no.2 / pp.281-291 / 2019
  • Depth from defocus estimates 3D depth by using the phenomenon in which an object in the focal plane of the camera forms a sharp image while an object away from the focal plane produces a blurred one. In this paper, algorithms are studied that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimal object range was obtained by 3D depth estimation derived from depth from defocus using either one image from a single camera or two images of different focus from a single camera. For depth estimation using one image, the best performance was achieved at a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation using two images showed the best 3D depth estimation range when the focal length was set to 150 mm and 250 mm for smartphone camera images and to 200 mm and 300 mm for DSLR camera images.
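The blur-to-depth relation that depth from defocus relies on follows from the thin-lens equation: the circle of confusion grows as an object moves away from the focus distance. A minimal sketch, with all numbers illustrative and not taken from the paper:

```python
def blur_diameter(s, f, s_focus, aperture_d):
    """Diameter of the circle of confusion for an object at distance s,
    for a thin lens of focal length f focused at distance s_focus,
    with aperture diameter aperture_d (all in the same units, e.g. mm).
    """
    # Image distances from the thin-lens equation 1/f = 1/s + 1/v.
    v = 1.0 / (1.0 / f - 1.0 / s)              # image of the object at s
    v_focus = 1.0 / (1.0 / f - 1.0 / s_focus)  # sensor plane position
    # Similar triangles between the aperture and the defocused light cone.
    return abs(aperture_d * (v - v_focus) / v)
```

Inverting this relation, a measured blur diameter constrains the object distance to two candidates (nearer or farther than the focus plane), which is why capturing two images at different focus settings, as in the two-image method above, disambiguates the depth.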