The Obstacle Size Prediction Method Based on YOLO and IR Sensor for Avoiding Obstacle Collision of Small UAVs

  • Uicheon Lee (Department of Aerospace and Software Eng., Gyeongsang National University) ;
  • Jongwon Lee (Department of Aerospace and Software Eng., Gyeongsang National University) ;
  • Euijin Choi (Department of Aerospace and Software Eng., Gyeongsang National University) ;
  • Seonah Lee (Department of Aerospace and Software Eng., Gyeongsang National University)
  • Received : 2023.09.02
  • Accepted : 2023.11.13
  • Published : 2023.12.31

Abstract

With the growing demand for unmanned aerial vehicles (UAVs), various collision avoidance methods have been proposed, mainly based on LiDAR and stereo cameras. However, these sensors are difficult to mount on small UAVs because of their weight and the limited space available. Recently proposed methods combine object recognition models with distance sensors, but they do not provide information on the size of the recognized obstacle, which makes it difficult to determine an appropriate avoidance distance and to localize the obstacle in coordinates at the early stage of collision avoidance. We propose a method for estimating obstacle sizes using a monocular camera with YOLO and an infrared (IR) sensor. Our experimental results confirmed an accuracy of 86.39% within a distance of 40 cm. In addition, we applied the proposed method to a small UAV and confirmed that it was able to avoid obstacle collisions.
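The estimation step described above can be sketched with a pinhole-camera relation: given the YOLO bounding box in pixels and the IR-measured distance, the obstacle's physical size scales linearly with distance and inversely with the focal length. The sketch below is a minimal illustration of that geometry, not the paper's exact implementation; the function and parameter names are assumptions, and `focal_length_px` is assumed to come from a prior camera calibration.

```python
def estimate_obstacle_size(bbox_px, distance_m, focal_length_px):
    """Estimate the real-world size of an obstacle from its bounding box.

    bbox_px         : (width, height) of the YOLO bounding box in pixels.
    distance_m      : distance to the obstacle from the IR sensor, in meters.
    focal_length_px : camera focal length in pixels (from calibration).

    Returns (width_m, height_m): the obstacle size in meters, using the
    pinhole model  real_size = pixel_size * distance / focal_length.
    """
    w_px, h_px = bbox_px
    width_m = w_px * distance_m / focal_length_px
    height_m = h_px * distance_m / focal_length_px
    return width_m, height_m


# Illustrative values: a 200x100 px detection at 0.40 m with an
# 800 px focal length yields roughly a 0.10 m x 0.05 m obstacle.
size = estimate_obstacle_size((200, 100), 0.40, 800.0)
```

From the estimated size, the remaining step of deriving an avoidance distance and obstacle coordinates becomes straightforward, since the box center and the same pinhole relation give the obstacle's lateral offset as well.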

Keywords

Acknowledgements

This work was supported by the 3rd-stage Leaders in INdustry-university Cooperation (LINC 3.0) program and a Mid-Career Researcher Program grant (No. NRF-2021R1A2C1094167) funded by the Ministry of Education and the National Research Foundation of Korea (NRF).
