
An Improved RANSAC Algorithm Based on Correspondence Point Information for Calculating Correct Conversion of Image Stitching


  • Received : 2017.07.24
  • Accepted : 2017.09.25
  • Published : 2018.01.31

Abstract

Recently, as virtual reality content has grown, the use of image stitching technology has been increasing. Image stitching is a method of registering multiple images to produce a high-resolution, wide-field-of-view image, and it is applied in various fields to overcome the limitations of images captured by a single camera. To register multiple images, image stitching detects feature points and corresponding points, and then estimates the homography between images using the RANSAC algorithm. In general, corresponding points are required to compute this transformation. However, the set of correspondences contains various kinds of noise caused by false assumptions about, or errors in, the transformation, and this noise hinders accurate estimation of the transformation. Because commonly used matching methods can produce incorrect correspondences, the RANSAC algorithm is used to build an accurate transformation despite the outliers that interfere with estimating the model parameters. In this paper, we propose an algorithm that extracts more accurate inliers and computes a more accurate transformation by exploiting the relation information among the corresponding points used in the RANSAC algorithm. This relation information is the distance ratio between corresponding points obtained during image matching. The goal of this paper is to reduce processing time while maintaining the same performance as the conventional RANSAC algorithm.
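To make the described pipeline concrete, the following is a minimal Python/OpenCV sketch, not the paper's exact algorithm: it reuses the descriptor-distance ratio from the matching step as correspondence-relation information to rank and prefilter matches before a standard RANSAC homography fit. The image file names, the 0.75 ratio threshold, and the 5.0-pixel reprojection threshold are illustrative assumptions.

import cv2
import numpy as np

# Load the two overlapping views to be stitched (file names are placeholders).
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect feature points and compute descriptors (ORB used here for simplicity).
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. k-NN descriptor matching; the ratio of best to second-best distance acts as
#    per-correspondence quality information.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_pairs = matcher.knnMatch(des1, des2, k=2)

scored = []
for pair in knn_pairs:
    if len(pair) < 2:
        continue
    m, n = pair
    ratio = m.distance / (n.distance + 1e-9)
    if ratio < 0.75:          # ratio test rejects ambiguous correspondences
        scored.append((ratio, m))

# 3. Sort correspondences so the most distinctive matches come first; this is one
#    simple way to exploit the matching-stage distance ratio before RANSAC runs.
scored.sort(key=lambda t: t[0])
matches = [m for _, m in scored]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# 4. Estimate the homography with RANSAC; 'mask' marks the surviving inliers.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
print("Inliers:", int(mask.sum()), "of", len(matches))

Note that this sketch only orders and prefilters the input correspondences; the paper's contribution of using the relation information inside RANSAC itself to extract inliers with reduced processing time is not reproduced here.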


