Comparative Study on Feature Extraction Schemes for Feature-based Structural Displacement Measurement

  • Junho Gong (Department of Future & Smart Construction Research, Korea Institute of Civil Engineering and Building Technology)
  • Received : 2024.05.30
  • Accepted : 2024.06.09
  • Published : 2024.06.30

Abstract

In this study, the feature point detection and displacement measurement performance of a feature-based displacement measurement algorithm were compared and analyzed for different feature extraction algorithms, environmental conditions, and target types. A three-story frame structure was designed for the performance evaluation, and the displacement response of the structure was recorded at FHD (1920×1080) resolution. For the performance analysis, the initial measurement distance was set to 10 m and increased in 10 m increments up to 40 m, and the experiments were conducted under two illuminance conditions, 450 lux and 120 lux. The artificial and natural targets mounted on the structure were set as regions of interest and used for feature point detection, and several feature detection algorithms were implemented for the performance comparison. The analysis of feature point detection performance showed that the Shi-Tomasi corner and KAZE algorithms were robust to the target type, illuminance change, and increase in measurement distance, and the displacement measurement accuracy obtained with these two algorithms was also the highest. However, the displacement measurement accuracy with natural targets was lower than with artificial targets, indicating a limitation in extracting feature points as the resolution of the natural targets decreases with increasing measurement distance.

This study was conducted to compare and analyze feature point detection performance under environmental changes and for different target types in a feature-based displacement measurement algorithm, and to compare the displacement measurement accuracy obtained with each feature detection algorithm. A three-story shear frame structure was designed for the performance evaluation, and its displacement response was recorded with an FHD (1920×1080) camera. To analyze the effects of increasing measurement distance and changing illuminance, the initial measurement distance was set to 10 m and increased in 10 m increments up to 40 m, and two illuminance conditions (450 lux and 120 lux) were created. The artificial targets and the natural targets (bolted connections and the slab cross-section) installed on the structure were set as regions of interest, and feature points were detected with the Shi-Tomasi corner, SURF, BRISK, and KAZE algorithms. The analysis of feature point detection performance showed that the Shi-Tomasi corner and KAZE algorithms were robust to the target type, illuminance change, and increase in measurement distance, and the displacement measurement accuracy obtained with these two algorithms was also the highest. However, the displacement measurement accuracy with natural targets was lower than with artificial targets, and when the slab cross-section, which had the lowest brightness contrast, was used as the target, the operating distance of the vision sensor was limited to 20 m. This is because the resolution of the natural targets degrades as the measurement distance increases, limiting feature point extraction.
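The feature detection comparison summarized above can be illustrated with a short script. The following is a minimal sketch, not the implementation used in this study: it counts the feature points that each of the four compared detectors finds inside a target region of interest, assuming OpenCV in Python. The image file name, ROI coordinates, and detector parameters are placeholders, and SURF additionally requires an opencv-contrib build with the non-free modules enabled.

    # Minimal sketch (not the authors' implementation): count the feature points
    # each detector finds inside a target region of interest (ROI).
    import cv2
    import numpy as np

    def detect_in_roi(gray, roi):
        """Return the number of keypoints each detector finds inside `roi`.

        gray : single-channel uint8 frame
        roi  : (x, y, w, h) rectangle around the artificial or natural target
        """
        x, y, w, h = roi
        mask = np.zeros(gray.shape, dtype=np.uint8)
        mask[y:y + h, x:x + w] = 255  # restrict detection to the target ROI

        counts = {}

        # Shi-Tomasi corners (minimum-eigenvalue criterion)
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                          qualityLevel=0.01, minDistance=5,
                                          mask=mask)
        counts["Shi-Tomasi"] = 0 if corners is None else len(corners)

        # BRISK and KAZE ship with the main OpenCV package
        counts["BRISK"] = len(cv2.BRISK_create().detect(gray, mask))
        counts["KAZE"] = len(cv2.KAZE_create().detect(gray, mask))

        # SURF is patented and only available in contrib builds compiled with
        # OPENCV_ENABLE_NONFREE; guard the call so the sketch runs without it.
        try:
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
            counts["SURF"] = len(surf.detect(gray, mask))
        except (AttributeError, cv2.error):
            counts["SURF"] = None  # not available in this OpenCV build

        return counts

    if __name__ == "__main__":
        # Hypothetical frame exported from the recorded video and hypothetical ROI
        frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
        print(detect_in_roi(frame, roi=(800, 400, 120, 120)))

In the experiments described above, such detection counts would be evaluated separately for each target type, illuminance level, and measurement distance.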

Keywords

Acknowledgement

This research was supported by the Korea Institute of Civil Engineering and Building Technology research program funded by the Ministry of Science and ICT (20240143-001, Research on smart construction technology for leading the future construction industry and creating new markets).

References

  1. Spencer Jr, B. F., Hoskere, V., and Narazaki, Y. (2019), Advances in computer vision-based civil infrastructure inspection and monitoring, Engineering, 5(2), 199-222. https://doi.org/10.1016/j.eng.2018.11.030
  2. Celik, O., Dong, C. Z., and Catbas, F. N. (2018), A computer vision approach for the load time history estimation of lively individuals and crowds, Computers & Structures, 200, 32-52. https://doi.org/10.1016/j.compstruc.2018.02.001
  3. Kim, S. W., Jeon, B. G., Kim, N. S., and Park, J. C. (2013), Vision-based monitoring system for evaluating cable tensile forces on a cable-stayed bridge, Structural Health Monitoring, 12(5-6), 440-456. https://doi.org/10.1177/1475921713500513
  4. Cha, Y. J., Chen, J. G., and Buyukozturk, O. (2017), Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters, Engineering Structures, 132, 300-313. https://doi.org/10.1016/j.engstruct.2016.11.038
  5. Feng, D., and Feng, M. Q. (2015), Model updating of railway bridge using in situ dynamic displacement measurement under trainloads, Journal of Bridge Engineering, 20(12), 04015019.
  6. Poozesh, P., Sarrafi, A., Mao, Z., and Niezrecki, C. (2017), Modal parameter estimation from optically-measured data using a hybrid output-only system identification method, Measurement, 110, 134-145. https://doi.org/10.1016/j.measurement.2017.06.030
  7. Lee, J. J., and Shinozuka, M. (2006), A vision-based system for remote sensing of bridge displacement, NDT & E International, 39(5), 425-431. https://doi.org/10.1016/j.ndteint.2005.12.003
  8. Feng, D., Feng, M. Q., Ozer, E., and Fukuda, Y. (2015), A vision-based sensor for noncontact structural displacement measurement, Sensors, 15(7), 16557-16575. https://doi.org/10.3390/s150716557
  9. Yoon, H., Elanwar, H., Choi, H., Golparvar-Fard, M., and Spencer Jr, B. F. (2016), Target-free approach for vision-based structural system identification using consumer-grade cameras, Structural Control and Health Monitoring, 23(12), 1405-1416. https://doi.org/10.1002/stc.1850
  10. Xu, Y., and Brownjohn, J. M. (2018). Review of machine-vision based methodologies for displacement measurement in civil structures. Journal of Civil Structural Health Monitoring, 8, 91-110. https://doi.org/10.1007/s13349-017-0261-4
  11. Kohut, P., Holak, K., Uhl, T., Ortyl, L., Owerko, T., Kuras, P., and Kocierz, R. (2013), Monitoring of a civil structure's state based on noncontact measurements, Structural Health Monitoring, 12(5-6), 411-429. https://doi.org/10.1177/1475921713487397
  12. Luo, L., Feng, M. Q., and Wu, J. (2020), A comprehensive alleviation technique for optical-turbulence-induced errors in vision-based displacement measurement, Structural Control and Health Monitoring, 27(3), e2496.
  13. Fukuda, Y., Feng, M. Q., and Shinozuka, M. (2010), Cost-effective vision-based system for monitoring dynamic response of civil engineering structures, Structural Control and Health Monitoring, 17(8), 918-936. https://doi.org/10.1002/stc.360
  14. Lucas, B. D., and Kanade, T. (1981, August), An iterative image registration technique with an application to stereo vision, Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, Canada, Vol. 2, 674-679.
  15. Tomasi, C., and Kanade, T. (1991), Detection and tracking of point features, International Journal of Computer Vision, 9, 137-154. https://doi.org/10.1007/BF00129684
  16. Shi, J., and Tomasi, C. (1994, June), Good features to track, Proceedings of 1994 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Seattle, WA, USA, 593-600.
  17. Lowe, D. G. (2004), Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  18. Bay, H., Tuytelaars, T., and Van Gool, L. (2006), SURF: Speeded up robust features, Computer Vision-ECCV 2006: 9th European Conference on Computer Vision, Springer Berlin Heidelberg, Graz, Austria, 404-417.
  19. Alcantarilla, P. F., Bartoli, A., and Davison, A. J. (2012), KAZE features, Computer Vision-ECCV 2012: 12th European Conference on Computer Vision, Springer Berlin Heidelberg, 214-227.
  20. Choi, Y., Farkoushi, M. G., Hong, S., and Shon, H. G. (2019), Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 37(6), 453-464 (in Korean).
  21. Hong, S. and Shin, H. S. (2020), Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images, Journal of the Korean Society of Civil Engineers, 40(4), 437-444 (in Korean). https://doi.org/10.12652/KSCE.2020.40.4.0437
  22. Lee, T. H., Park, J. T., Lee, S. H., and Park, S. Z. (2022), Performance of Feature-based Stitching Algorithms for Multiple Images Captured by Tunnel Scanning System, Journal of the Korea Institute for Structural Maintenance and Inspection, 26(5), 30-42 (in Korean).
  23. Harris, C., and Stephens, M. (1988, August), A combined corner and edge detector, In Alvey Vision Conference, 15(50), 147-151.
  24. Moravec, H. P. (1980), Obstacle avoidance and navigation in the real world by a seeing robot rover, Doctoral dissertation, Stanford University.
  25. Leutenegger, S., Chli, M., and Siegwart, R. Y. (2011, November), BRISK: Binary robust invariant scalable keypoints, In 2011 International Conference on Computer Vision, IEEE, 2548-2555.
  26. Rosten, E., and Drummond, T. (2006), Machine learning for high-speed corner detection. In Computer Vision-ECCV 2006: 9th European Conference on Computer Vision, Graz Austria, Springer Berlin Heidelberg, 430-443.
  27. Torr, P. H., and Zisserman, A. (2000), MLESAC: A new robust estimator with application to estimating image geometry, Computer Vision and Image Understanding, 78(1), 138-156. https://doi.org/10.1006/cviu.1999.0832
  28. Badali, A. P., Zhang, Y., Carr, P., Thomas, P. J., and Hornsey, R. I. (2005, October), Scale factor in digital cameras, In Photonic Applications in Biosensing and Imaging, SPIE, 5969, 556-565.
  29. Hijazi, A., Friedl, A., and Kahler, C. J. (2011), Influence of camera's optical axis non-perpendicularity on measurement accuracy of two-dimensional digital image correlation, Jordan Journal of Mechanical and Industrial Engineering, 5(4), 1-10.