Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV

  • Received : 2019.11.02
  • Accepted : 2019.11.29
  • Published : 2019.12.31

Abstract

Recently, as the demand for 3D geospatial information has grown, rapid and accurate data construction has become increasingly important. Although many studies have registered UAV (Unmanned Aerial Vehicle) imagery to LiDAR (Light Detection and Ranging) data, which enables precise 3D data construction, studies using LiDAR data acquired by an MMS (Mobile Mapping System) remain insufficient. Therefore, this study compared and analyzed nine feature-point-based matching algorithms for registering reflectance imagery, converted from LiDAR point cloud intensity data acquired by an MMS, with UAV image data. Our results indicated that the SIFT (Scale Invariant Feature Transform) algorithm stably secured high matching accuracy and extracted sufficient conjugate points even in various road environments. In the registration accuracy analysis, the SIFT algorithm achieved an accuracy of about 10 pixels, except where the overlapping area was small and the same pattern was repeated. This is a reasonable result considering that distortion caused by the UAV's attitude at the time of image capture is included. Therefore, the results of this study are expected to serve as basic research for the 3D registration of LiDAR point cloud intensity data and UAV imagery.
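The registration pipeline summarized above, extracting feature points, matching them into conjugate-point pairs, and rejecting mismatches before estimating a transform, can be sketched in NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: real SIFT descriptors and imagery are replaced by synthetic data, `match_descriptors` and `ransac_affine` are hypothetical helper names, and a 2D affine model stands in for whatever transform the study fitted. The ratio test follows Lowe (2004) and the outlier rejection follows the RANSAC paradigm of Fischler and Bolles (1981), both cited in the reference list.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: accept a match only when the nearest descriptor
    in desc_b is clearly closer than the second-nearest one."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, int(j)))
    return matches

def ransac_affine(src, dst, n_iter=500, tol=3.0, seed=0):
    """Fit a 2D affine transform to matched conjugate points with RANSAC
    (Fischler and Bolles, 1981), then refit on the consensus inliers."""
    rng = np.random.default_rng(seed)
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])          # homogeneous coordinates
    best = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(n, 3, replace=False)      # minimal 3-point sample
        try:
            M = np.linalg.solve(A[idx], dst[idx])  # 3x2 affine matrix
        except np.linalg.LinAlgError:
            continue                               # degenerate (collinear) sample
        resid = np.linalg.norm(A @ M - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    # least-squares refit over all inliers; report residual RMSE in pixels
    M, *_ = np.linalg.lstsq(A[best], dst[best], rcond=None)
    rmse = float(np.sqrt(np.mean(
        np.linalg.norm(A[best] @ M - dst[best], axis=1) ** 2)))
    return M, best, rmse

# Synthetic conjugate points: 30 matches, the first 5 made gross mismatches,
# standing in for feature matches between an intensity image and a UAV image.
rng = np.random.default_rng(1)
src = rng.uniform(0, 500, (30, 2))
M_true = np.array([[0.98, -0.05], [0.05, 0.98], [12.0, -7.0]])
dst = np.hstack([src, np.ones((30, 1))]) @ M_true + rng.normal(0, 0.3, (30, 2))
dst[:5] += 60.0                                    # simulated mismatches
M, inliers, rmse = ransac_affine(src, dst)
print(inliers.sum(), rmse)                         # mismatches rejected, rmse < 1 px
```

The RANSAC stage matters most in exactly the failure case the abstract notes: road scenes with repeated markings produce many locally plausible but globally inconsistent matches, which the consensus step discards.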

References

  1. Abayowa, B.O., Yilmaz, A., and Hardie, R.C. (2015), Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 106, pp. 68-81. https://doi.org/10.1016/j.isprsjprs.2015.05.006
  2. Abedini, A., Hahn, M., and Samadzadegan, F. (2008), An investigation into the registration of LiDAR intensity data and aerial images using the SIFT approach, In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, pp. 169-176.
  3. Alcantarilla, P.F., Bartoli, A., and Davison, A.J. (2012), KAZE features, In European Conference on Computer Vision, Springer, Berlin, Heidelberg, pp. 214-227.
  4. Bohm, J. and Becker, S. (2007), Automatic marker-free registration of terrestrial laser scans using reflectance, In Proceedings of the 8th Conference on Optical 3D Measurement Techniques, Zurich, Switzerland, pp. 9-12.
  5. Commercializations Promotion Agency for R&D Outcomes (COMPA), (2017), LiDAR Technology and Market Trends, S&T Market Report, Vol. 54, 16p. (in Korean)
  6. Conte, G. and Doherty, P. (2008), An integrated UAV navigation system based on aerial image matching, In 2008 IEEE Aerospace Conference, IEEE, pp. 1-10.
  7. Fernandez, J.C., Singhania, A., Caceres, J., Slatton, K.C., Starek, M., and Kumar, R. (2007), An Overview of Lidar Point Cloud Processing Software, GEM Center Report No. Rep_2007-12-001, University of Florida, 27p.
  8. Fischler, M.A. and Bolles, R.C. (1981), Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, Vol. 24, No. 6, pp. 381-395. https://doi.org/10.1145/358669.358692
  9. Guan, H., Li, J., Yu, Y., Wang, C., Chapman, M., and Yang, B. (2014), Using mobile laser scanning data for automated extraction of road markings, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 87, pp. 93-107. https://doi.org/10.1016/j.isprsjprs.2013.11.005
  10. Harris, C.G. and Stephens, M. (1988), A combined corner and edge detector, In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, pp. 147-151.
  11. Hong, S., Park, I., Lee, J., Lim, K., Choi, Y., and Sohn, H.G. (2017), Utilization of a terrestrial laser scanner for the calibration of mobile mapping systems, Sensors, Vol. 17, No. 3, 24p.
  12. Kim, T. and Im, Y.J. (2003), Automatic satellite image registration by combination of matching and random sample consensus, IEEE Transactions on Geoscience and Remote Sensing, Vol. 41, No. 5, pp. 1111-1117. https://doi.org/10.1109/TGRS.2003.811994
  13. Kim, M. (2005), The Study on Road Extraction Using LiDAR Data, Master's thesis, Inha University, Incheon, Korea, 62p. (in Korean with English abstract)
  14. Kim, S., Yoo, H., and Sohn, K. (2012), FAST and BRIEF based real-time feature matching algorithms, In Proceedings of the Korean Society of Broadcast Engineers Conference, pp. 1-4. (in Korean)
  15. Li, Q., Wang, G., Liu, J., and Chen, S. (2009), Robust scale-invariant feature matching for remote sensing image registration, IEEE Geoscience and Remote Sensing Letters, Vol. 6, No. 2, pp. 287-291. https://doi.org/10.1109/LGRS.2008.2011751
  16. Lindeberg, T. (2015), Image matching using generalized scale-space interest points, Journal of Mathematical Imaging and Vision, Vol. 52, No. 1, pp. 3-36. https://doi.org/10.1007/s10851-014-0541-0
  17. Liu, S., Tong, X., Chen, J., Liu, X., Sun, W., Xie, H., Chen, P., Jin, Y., and Ye, Z. (2016), A linear feature-based approach for the registration of unmanned aerial vehicle remotely-sensed images and airborne LiDAR data, Remote Sensing, Vol. 8, No. 2, 15p.
  18. Lowe, D. (2004), Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  19. Matas, J., Chum, O., Urban, M., and Pajdla, T. (2004), Robust wide-baseline stereo from maximally stable extremal regions, Image and Vision Computing, Vol. 22, pp. 761-767. https://doi.org/10.1016/j.imavis.2004.02.006
  20. Mikolajczyk, K. and Schmid, C. (2005), A performance evaluation of local descriptors, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, pp. 1615-1630. https://doi.org/10.1109/TPAMI.2005.188
  21. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T., and Van Gool, L. (2005), A Comparison of Affine Region Detectors, International Journal of Computer Vision, Vol. 65, pp. 43-72. https://doi.org/10.1007/s11263-005-3848-x
  22. Miksik, O. and Mikolajczyk, K. (2012), Evaluation of local detectors and descriptors for fast feature matching, In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), IEEE, pp. 2681-2684.
  23. Moravec, H. (1980), Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover, No. STAN-CS-80-813, Stanford University, California, USA.
  24. Nho, H. (2018), Fast Geocoding Processing for Low-cost Unmanned Aerial Vehicle Imagery, Master's thesis, Yonsei University, Seoul, Korea, 69p.
  25. Palenichka, R.M. and Zaremba, M.B. (2010), Automatic extraction of control points for the registration of optical satellite and LiDAR images, IEEE Transactions on Geoscience and Remote Sensing, Vol. 48, No. 7, pp. 2864-2879. https://doi.org/10.1109/TGRS.2010.2043677
  26. Park, S., Kim, J., and Yoo, J. (2015), Fast stitching algorithm by using feature tracking, Journal of Broadcast Engineering, Vol. 20, No. 5, pp. 728-737. (in Korean with English abstract) https://doi.org/10.5909/JBE.2015.20.5.728
  27. Park, J., Kim, P., Cho, Y.K., and Kang, J. (2019), Framework for automated registration of UAV and UGV point clouds using local features in images, Automation in Construction, Vol. 98, pp. 175-182. https://doi.org/10.1016/j.autcon.2018.11.024
  28. Peng, W.H., Lee, M.Y., Li, T.H., Huang, C.H., and Lin, P.C. (2016), Performance comparison of image keypoint detection, description, and matching methods, In 2016 IEEE 5th Global Conference on Consumer Electronics, IEEE, pp. 1-2.
  29. Persad, R.A. and Armenakis, C. (2016), Co-registration of DSMs generated by UAV and terrestrial laser scanning systems, The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLI-B1, pp. 985-990. https://doi.org/10.5194/isprs-archives-XLI-B1-985-2016
  30. Rosten, E. and Drummond, T. (2006), Machine learning for high speed corner detection, In 9th European Conference on Computer Vision, Vol. 1, pp. 430-443.
  31. Schmid, C., Mohr, R., and Bauckhage, C. (2000), Evaluation of interest point detectors, International Journal of Computer Vision, Vol. 37, No. 2, pp. 151-172. https://doi.org/10.1023/A:1008199403446
  32. Shi, J. and Tomasi, C. (1994), Good features to track, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, pp. 593-600.
  33. Tareen, S.A.K. and Saleem, Z. (2018), A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK, In 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), IEEE, pp. 1-10.
  34. Tsai, C.H. and Lin, Y.C. (2017), An accelerated image matching technique for UAV orthoimage registration, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 128, pp. 130-145. https://doi.org/10.1016/j.isprsjprs.2017.03.017
  35. Vedaldi, A. and Fulkerson, B. (2010), VLFeat: An open and portable library of computer vision algorithms, In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, pp. 25-29.
  36. Verma, S.B. and Chandran, S. (2016), Comparative Study of FAST MSER and Harris for Palmprint Verification System, International Journal of Scientific & Engineering Research, Vol. 7, No. 12, pp. 855-858.
  37. Yang, B. and Chen, C. (2015), Automatic registration of UAV-borne sequent images and LiDAR data, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 101, pp. 262-274. https://doi.org/10.1016/j.isprsjprs.2014.12.025