Performance Comparison and Analysis between Keypoints Extraction Algorithms using Drone Images

  • Lee, Chung Ho (Department of Spatial Information Engineering, Namseoul University)
  • Kim, Eui Myoung (Department of Drone·GIS Engineering, Namseoul University)
  • Received : 2022.03.03
  • Accepted : 2022.04.14
  • Published : 2022.04.30

Abstract

Images taken by drones can be used to quickly construct high-quality 3D spatial information for small areas and are therefore applied in fields that require rapid decision-making. To construct spatial information from drone images, the relationship between images must be determined by extracting keypoints from adjacent drone images and performing image matching. In this study, three study regions photographed by drone were selected: a region where a parking lot and a lake coexist, a downtown region with buildings, and a field region of natural terrain. For each region, the performance of the AKAZE (Accelerated-KAZE), BRISK (Binary Robust Invariant Scalable Keypoints), KAZE, ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale Invariant Feature Transform), and SURF (Speeded Up Robust Features) algorithms was analyzed. The algorithms were compared in terms of the distribution of extracted keypoints, the distribution of matched points, processing time, and matching accuracy. In the region where the parking lot and lake coexist, the BRISK algorithm was the fastest, while the SURF algorithm performed best in the distribution of keypoints and matched points and in matching accuracy. In the downtown region with buildings, the AKAZE algorithm was the fastest, while the SURF algorithm again performed best in the distribution of keypoints and matched points and in matching accuracy. In the field region of natural terrain, the keypoints and matched points of the SURF algorithm were the most evenly distributed across the drone images, but the AKAZE algorithm achieved the highest matching accuracy and the fastest processing speed.
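
The abstract describes the comparison only at a high level. The following is a minimal sketch of how such a comparison can be run, assuming a Python/OpenCV implementation (the paper does not state its tooling); the image file names, the 0.75 ratio-test threshold, and the 3-pixel RANSAC threshold are hypothetical choices, not values from the study. Each detector extracts keypoints from a pair of adjacent drone images, descriptors are matched with a brute-force matcher, ambiguous matches are removed with Lowe's ratio test, and a RANSAC homography (Fischler and Bolles, 1981) yields an inlier count as a rough proxy for matching accuracy.

```python
# Minimal comparison sketch, assuming Python + OpenCV (not stated in the paper).
# Image paths, the ratio-test threshold, and the RANSAC threshold are placeholders.
import time
import cv2
import numpy as np

def compare_detectors(img1_path="drone_left.jpg", img2_path="drone_right.jpg"):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    detectors = {
        "AKAZE": cv2.AKAZE_create(),
        "BRISK": cv2.BRISK_create(),
        "KAZE": cv2.KAZE_create(),
        "ORB": cv2.ORB_create(nfeatures=5000),
        "SIFT": cv2.SIFT_create(),
        # SURF requires an opencv-contrib build with nonfree modules enabled:
        # "SURF": cv2.xfeatures2d.SURF_create(),
    }

    for name, det in detectors.items():
        start = time.time()
        kp1, des1 = det.detectAndCompute(img1, None)
        kp2, des2 = det.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            continue

        # Binary descriptors (AKAZE, BRISK, ORB) are matched with Hamming
        # distance; float descriptors (KAZE, SIFT, SURF) with L2.
        norm = cv2.NORM_HAMMING if des1.dtype == np.uint8 else cv2.NORM_L2
        matcher = cv2.BFMatcher(norm)
        pairs = matcher.knnMatch(des1, des2, k=2)

        # Lowe's ratio test removes ambiguous correspondences.
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < 0.75 * n.distance]

        # RANSAC homography: the inlier count is a rough matching-accuracy proxy.
        inliers = 0
        if len(good) >= 4:
            src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            inliers = int(mask.sum()) if mask is not None else 0

        print(f"{name}: {len(kp1)}/{len(kp2)} keypoints, {len(good)} matches, "
              f"{inliers} RANSAC inliers, {time.time() - start:.2f} s")

if __name__ == "__main__":
    compare_detectors()
```

The remaining criterion in the study, the spatial distribution of keypoints and matched points over the image, can be inspected visually, for example with cv2.drawKeypoints on each input image.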

Keywords

References

  1. Alcantarilla, P.F., Bartoli, A., and Davison, A.J. (2012), KAZE features, 12th European Conference on Computer Vision, pp. 214-227.
  2. Alcantarilla, P.F., Nuevo, J., and Bartoli, A. (2013), Fast explicit diffusion for accelerated features in nonlinear scale spaces, British Machine Vision Conference 2013, pp. 1-13.
  3. Babri, U.M., Tanvir, M., and Khurshid, K. (2016), Feature based correspondence: a comparative study on image matching algorithms, International Journal of Advanced Computer Science and Applications, Vol. 7, No. 3, pp. 206-210.
  4. Bay, H., Tuytelaars, T., and Gool, L.V. (2006), SURF: speeded up robust features, European Conference on Computer Vision, Vol. 3951, pp. 404-417.
  5. Choi, H.S. and Kim, E.M. (2019), Automatic geo-referencing of sequential drone images using linear features and distinct points, Korea Society of Surveying, Geodesy, Photogrammetry, and Cartography, Vol. 37, No. 1, pp. 19-28. (in Korean with English abstract)
  6. Fischler, M.A., and Bolles, R.C. (1981), Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, Vol. 24, No. 6, pp. 381-395.
  7. He, F., and Habib, A. (2016), Automated relative orientation of UAV-based imagery in the presence of prior information for the flight trajectory, Photogrammetric Engineering and Remote Sensing, Vol. 82, No. 11, pp. 879-891. https://doi.org/10.14358/PERS.82.11.879
  8. Hong, S.C., and Shin, H.S. (2020), Comparative performance analysis of feature detection and matching methods for lunar terrain images, Surveying and Geo-Spatial Information Engineering, Vol. 40, No. 4, pp. 437-444. (in Korean with English abstract)
  9. Jang, H.S., Kim, S.K., Lee, J.S., Yoo, S.H., Hong, S.H., Kim, M.K., and Sohn, H.G. (2020), Improved image matching method based on affine transformation using nadir and oblique-looking drone imagery, Korea Society of Surveying, Geodesy, Photogrammetry, and Cartography, Vol. 38, No. 5, pp. 477-486.
  10. Karami, E., Prasad, S., and Shehata, M. (2015), Image matching using SIFT, SURF, BRIEF and ORB: performance comparison for distorted images, 2015 Newfoundland Electrical and Computer Engineering Conference, arXiv:1710.02726.
  11. Kim, E.M. (2020), Techniques of Photogrammetry and Computer Vision, Goomibook, Seoul.
  12. Kim, D.P., Back, K.S., and Kim, S.B. (2021), Production and accuracy analysis of topographic status map using drone images, Korean Geo-Environmental Society, Vol. 22, No. 2, pp. 35-39. (in Korean with English abstract)
  13. Kim, Y.W., Kim, D.S., and Kim, S.H. (2021), Development of recommendation model for image keypoint detection and descriptor extraction algorithm, Korean Institute of Information Technology, Vol. 19, No. 4, pp. 27-35. (in Korean with English abstract)
  14. Lee, Y.H., and Kim, H.J. (2015), Evaluation of feature extraction and matching algorithms for the use of mobile application, Journal of the Semiconductor & Display Technology, Vol. 14, No. 4, pp. 56-60. (in Korean with English abstract)
  15. Leutenegger, S., Chli, M., and Siegwart, R.Y. (2011), BRISK: binary robust invariant scalable keypoints, 2011 International Conference on Computer Vision, pp. 2548-2555.
  16. Lowe, D.G. (2004), Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, Vol. 60, pp. 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  17. Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011), ORB: an efficient alternative to SIFT or SURF, 2011 International Conference on Computer Vision, pp. 2564-2571.
  18. Seong, J.H., Lee, K.R., Han, Y.K., and Lee, W.H. (2019), Geometric correction of none-GCP UAV orthophoto using feature points of reference image, Korean Society for Geospatial Information Science, Vol. 27, No. 6, pp. 27-34. (in Korean with English abstract)
  19. Skoczylas, M. (2014), Vision analysis system for autonomous landing of micro drone, Acta Mechanica et Automatica, Vol. 8, No. 4, pp. 199-203. https://doi.org/10.2478/ama-2014-0036
  20. Xi, W., Shi, Z., and Li, D. (2017), Comparisons of feature extraction algorithm based on unmanned aerial vehicle image, Open Physics, Vol. 15, pp. 472-478. https://doi.org/10.1515/phys-2017-0053