Depth Generation Method Using Multiple Color and Depth Cameras

A Method for Generating Depth Information of a 3D Scene Using Multi-view Cameras and Depth Cameras

  • Kang, Yun-Suk (Realistic Broadcasting Research Center, Gwangju Institute of Science and Technology) ;
  • Ho, Yo-Sung (Realistic Broadcasting Research Center, Gwangju Institute of Science and Technology)
  • Received : 2010.09.13
  • Accepted : 2011.01.26
  • Published : 2011.05.25

Abstract

In this paper, we describe capturing, postprocessing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures the depth of a scene in real time, its output depth images suffer from noise and lens distortion, and their correlation with the multi-view color images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching initialized with the disparity information obtained from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
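As a rough illustration of the postprocessing the abstract refers to, the sketch below filters noise from a single TOF depth image and then removes lens distortion. It is a minimal example under assumed calibration data, not the authors' implementation: the camera matrix `K`, the distortion coefficients `dist_coeffs`, and the choice of a median filter combined with OpenCV's standard `undistort` are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV is an assumed dependency, used only for standard undistortion

def postprocess_depth(depth, K, dist_coeffs, ksize=5):
    """Illustrative cleanup of one TOF depth image:
    median filtering against sensor noise, then lens undistortion."""
    # Median filtering suppresses isolated noisy depth samples.
    filtered = cv2.medianBlur(depth.astype(np.float32), ksize)
    # Remap the image with the (assumed known) intrinsic matrix and
    # radial/tangential distortion coefficients of the depth camera.
    return cv2.undistort(filtered, K, dist_coeffs)

# Example with hypothetical calibration values and a random depth map.
K = np.array([[570.0,   0.0, 160.0],
              [  0.0, 570.0, 120.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])
depth = np.random.uniform(1.0, 5.0, (240, 320)).astype(np.float32)
clean_depth = postprocess_depth(depth, K, dist_coeffs)
```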

In this paper, we propose postprocessing methods for images captured with multi-view color cameras and multi-view depth cameras, together with a method for generating the depth information of a 3D scene. Depth cameras can measure the depth of a scene in real time, but their output contains noise and distortion, and its correlation with the color images is low. We therefore postprocess the multi-view depth images and then combine them with the multi-view color images to generate 3D depth information. Stereo matching based on the initial disparity information obtained from the depth camera at each viewpoint showed better performance than the previous method.
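The depth-generation step the abstracts describe relies on converting each depth-camera measurement into an initial disparity for stereo matching. For a rectified pair with focal length f (in pixels) and baseline B, the usual relation is d = f·B / Z. The sketch below is an assumed, simplified illustration of how such an initial disparity could limit the search range of a basic block-matching step; the function names, the ±3 pixel margin, and the SAD cost are illustrative choices, not the paper's actual matching algorithm.

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline):
    """d = f * B / Z for a rectified pair; invalid (zero) depth maps to zero disparity."""
    return np.where(depth > 0, focal_px * baseline / np.maximum(depth, 1e-6), 0.0)

def constrained_block_matching(left, right, init_disp, margin=3, block=5):
    """SAD block matching that searches only +/- margin pixels
    around the disparity predicted by the depth camera."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            d0 = int(round(init_disp[y, x]))
            best_cost, best_d = np.inf, float(d0)
            for d in range(max(0, d0 - margin), d0 + margin + 1):
                if x - d - half < 0:
                    continue  # candidate block would fall outside the right image
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, float(d)
            disp[y, x] = best_d
    return disp
```

Restricting the search to a narrow band around the depth-camera prediction is what keeps the match reliable in textureless or occluded regions, where an unconstrained search would be ambiguous.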

