http://dx.doi.org/10.7848/ksgpc.2020.38.2.165

A Fast Image Matching Method for Oblique Video Captured with UAV Platform  

Byun, Young Gi (Spatial Information Research Institute, Korea Land and Geospatial Informatix Corp.)
Kim, Dae Sung (Agency for Defence Development)
Publication Information
Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38, no.2, 2020, pp. 165-172
Abstract
There is growing interest in vision-based video image matching owing to the rapid development of unmanned systems technology. The purpose of this paper is to develop a fast and effective matching technique for oblique video images captured with a UAV platform. We first extracted initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improved its computational efficiency by means of integral images. Furthermore, we developed a triangulation-based outlier removal algorithm to extract more robust matching points from the initial set. To evaluate the performance of the proposed method, we compared it quantitatively with existing image matching approaches. Experimental results demonstrated that the proposed method can process 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. The proposed method therefore has good potential for various video-based applications that require image matching as a pre-processing step.
Keywords
Video Image Processing; Feature Point Extraction; Feature Image Matching; Outlier Removal; Sensor Modeling;
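As context for the abstract above: NCC over many candidate windows is expensive because every window needs its own sum and sum of squares, and an integral image (summed-area table) reduces those window statistics to O(1) per window. The sketch below is a minimal Python/NumPy illustration of that idea only, not the authors' implementation; the function names, the exhaustive window scan, and the numerical guards are assumptions made for the example, and only the denominator terms are accelerated here (the classic fast-NCC formulation).

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy window sums."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    sat[1:, 1:] = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return sat

def window_sum(sat, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) using the summed-area table."""
    return sat[r + h, c + w] - sat[r, c + w] - sat[r + h, c] + sat[r, c]

def ncc_integral(ref_patch, search_img):
    """NCC of a reference patch against every window of a search image,
    with window means/variances obtained from integral images."""
    h, w = ref_patch.shape
    n = h * w
    t_zero = ref_patch.astype(np.float64) - ref_patch.mean()
    t_norm = np.sqrt((t_zero ** 2).sum())

    sat = integral_image(search_img)                     # running sums
    sat2 = integral_image(search_img.astype(np.float64) ** 2)  # running sums of squares

    H, W = search_img.shape
    img = search_img.astype(np.float64)
    out = np.full((H - h + 1, W - w + 1), -1.0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = window_sum(sat, r, c, h, w)
            s2 = window_sum(sat2, r, c, h, w)
            var = s2 - s * s / n                 # sum of squared deviations of the window
            if var <= 1e-12 or t_norm <= 1e-12:
                continue
            num = (img[r:r + h, c:c + w] * t_zero).sum()  # cross term (not accelerated here)
            out[r, c] = num / (np.sqrt(var) * t_norm)
    return out
```

In practice the Python loops above would be vectorized or replaced by an optimized routine such as OpenCV's cv2.matchTemplate with cv2.TM_CCOEFF_NORMED; the point of the sketch is only how the integral images remove the per-window mean/variance passes.

The paper's triangulation-based outlier removal is not detailed on this page. Purely as an illustration of the general idea of vetting matches with a triangulation, the following sketch builds a Delaunay triangulation on the matched points in one frame (scipy.spatial.Delaunay) and drops matches whose triangles flip orientation in the next frame; the voting rule and threshold are assumptions for the example, not the published algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def signed_area(tri_pts):
    """Twice the signed area of a triangle given as a 3x2 array of vertices."""
    a, b, c = tri_pts
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def triangulation_outlier_filter(pts1, pts2, vote_ratio=0.5):
    """Flag matches whose Delaunay triangles flip orientation between frames.

    pts1, pts2: (N, 2) arrays of corresponding points in consecutive frames.
    A match is kept if fewer than `vote_ratio` of the triangles it belongs to
    reverse orientation across the two frames.
    """
    tri = Delaunay(pts1)
    bad_votes = np.zeros(len(pts1))
    total_votes = np.zeros(len(pts1))
    for simplex in tri.simplices:          # vertex indices of each triangle
        a1 = signed_area(pts1[simplex])
        a2 = signed_area(pts2[simplex])
        total_votes[simplex] += 1
        if a1 * a2 < 0:                    # orientation flipped between frames
            bad_votes[simplex] += 1
    return bad_votes < vote_ratio * np.maximum(total_votes, 1)
```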