Parameter Analysis for Super-Resolution Network Model Optimization of LiDAR Intensity Image

  • Seungbo Shim (Korea Institute of Civil Engineering and Building Technology, Department of Geotechnical Engineering Research)
  • Received : 2023.09.04
  • Accepted : 2023.09.16
  • Published : 2023.10.31

Abstract

LiDAR is used in autonomous driving and various industrial fields to measure the size of and distance to objects. In addition, the sensor provides intensity images based on the amount of reflected light, which supply information on the shape of the measured object and thus benefit sensor data processing. LiDAR delivers higher performance as its resolution increases, but at an increased cost, and the same applies to LiDAR intensity images: expensive equipment is essential to acquire high-resolution intensity images. This study therefore developed an artificial intelligence model that improves low-resolution LiDAR intensity images into high-resolution ones. To this end, a parameter analysis was performed to find the optimal super-resolution neural network model, and the super-resolution algorithm was trained and verified using 2,500 LiDAR intensity images. As a result, the resolution of the intensity images was improved. These results can be applied to the autonomous driving field and help improve driving environment recognition and obstacle detection performance.
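The page does not include source code; as a rough, non-authoritative illustration of the kind of model the abstract describes, the sketch below implements a minimal sub-pixel-convolution super-resolution network (in the style of ESPCN) for single-channel LiDAR intensity images in PyTorch. The layer widths, kernel sizes, the ×4 upscale factor, and the name IntensitySRNet are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): an ESPCN-style sub-pixel-convolution
# super-resolution network for single-channel LiDAR intensity images.
# Layer widths, kernel sizes, and the x4 upscale factor are illustrative assumptions.
import torch
import torch.nn as nn


class IntensitySRNet(nn.Module):
    """Maps a low-resolution 1-channel intensity image to a higher-resolution one."""

    def __init__(self, upscale: int = 4, channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Sub-pixel convolution: predict upscale**2 values per low-resolution pixel,
        # then rearrange them into an upscaled single-channel image.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, upscale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.features(x))


if __name__ == "__main__":
    # Example: a 64x64 low-resolution intensity patch upscaled to 256x256.
    model = IntensitySRNet(upscale=4)
    low_res = torch.rand(1, 1, 64, 64)   # batch, channel, height, width
    high_res = model(low_res)
    print(high_res.shape)                # torch.Size([1, 1, 256, 256])
```

Such a network would be trained on pairs of low- and high-resolution intensity patches with a pixel-wise reconstruction loss; the parameter analysis described in the abstract concerns choices of this kind (e.g., network depth, channel width, and upscale factor), though the exact configuration studied is not stated here.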

Keywords

Acknowledgement

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science and ICT) (No. 2022R1F1A1074663). The authors are grateful for this support.
