
Constructing 3D Outlines of Objects based on Feature Points using Monocular Camera

  • Sang-Hyun Park (Dept. of Computer and Radio Communications Engineering, Korea University) ;
  • Jung-Wook Lee (Dept. of Aerospace Information Engineering, Konkuk University) ;
  • Doo-Kwon Baik (Dept. of Computer and Radio Communications Engineering, Korea University)
  • Received : 2010.09.17
  • Accepted : 2010.10.29
  • Published : 2010.12.31

Abstract

This paper presents a method for extracting the 3D outline of an object from images obtained with a monocular camera. The rough outline of the object is detected with the MOPS (Multi-Scale Oriented Patches) algorithm, and the spatial coordinates of the feature points lying on that outline are obtained. At the same time, the spatial coordinates of feature points inside the object outline are obtained with the SIFT (Scale Invariant Feature Transform) algorithm. The shape of the object is then recovered by merging the spatial coordinates of the outline points and the SIFT feature points. Because the proposed method constructs only a rough outline of the object, it allows fast computation, and because the SIFT feature points supplement the interior of the outline, it can also gather detailed 3D information about the object.
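The sketch below illustrates one plausible reading of this pipeline in Python with OpenCV; it is not the authors' implementation. MOPS is not provided by OpenCV, so the outline feature points are approximated here by corner detection restricted to a Canny edge map, and the 3x4 projection matrices P1 and P2 of the two camera views are assumed to be known from calibration and pose estimation, which the abstract does not detail.

```python
# Illustrative sketch only (assumed names and parameters, not from the paper).
import cv2
import numpy as np

def outline_points(gray):
    """Rough outline feature points: corners restricted to Canny edge pixels
    (a stand-in here for the MOPS outline detection used in the paper)."""
    edges = cv2.Canny(gray, 100, 200)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                      minDistance=5, mask=edges)
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))

def sift_matches(gray1, gray2):
    """SIFT keypoints matched between two frames of the moving monocular camera."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2

def triangulate(pts1, pts2, P1, P2):
    """Spatial (3D) coordinates of matched image points from the two views."""
    hom = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4 x N homogeneous
    return (hom[:3] / hom[3]).T                          # N x 3 Euclidean

def object_outline_3d(gray1, gray2, P1, P2):
    """Merge interior SIFT points (triangulated) with outline points; in the
    full method the outline points would also be matched and triangulated."""
    pts1, pts2 = sift_matches(gray1, gray2)
    interior_3d = triangulate(pts1, pts2, P1, P2)
    outline_2d = outline_points(gray1)
    return interior_3d, outline_2d
```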

