
Educational Indoor Autonomous Mobile Robot System Using a LiDAR and an RGB-D Camera


  • Lee, Soo-Young (School of Mechanical and ICT Convergence Engineering, Sun Moon University) ;
  • Kim, Jae-Young (School of Mechanical and ICT Convergence Engineering, Sun Moon University) ;
  • Cho, Se-Hyoung (School of Mechanical and ICT Convergence Engineering, Sun Moon University) ;
  • Shin, Chang-yong (School of Mechanical and ICT Convergence Engineering, Sun Moon University)
  • Received : 2019.02.26
  • Accepted : 2019.03.14
  • Published : 2019.03.31

Abstract

We implement an educational indoor autonomous mobile robot system that fuses LiDAR sensing information with RGB-D camera image information. The system acquires LiDAR sensing information with the conventional method, using a LiDAR with a small number of scan channels. To remedy the weakness of this low-channel LiDAR sensing, we propose a 3D structure recognition technique that combines depth images from an RGB-D camera with a deep-learning-based object recognition algorithm, and we apply the proposed technique to the system.
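As an illustration of the fusion idea, the sketch below is minimal and hypothetical, not the authors' implementation: the vertical band, the intrinsics fx and cx, and the helper names depth_to_scan and fuse_scans are all assumptions. It collapses an RGB-D depth image into a planar pseudo-scan and merges it with the LiDAR scan by taking the per-angle minimum, so that 3D structures invisible to a low-channel LiDAR still register as obstacles.

    import numpy as np

    def depth_to_scan(depth_m, fx, cx, v_band=(100, 380), max_range=4.0):
        """Collapse an RGB-D depth image (H x W array of metres) into a
        planar pseudo-scan: one minimum range per image column, taken over
        a vertical band of rows, so structure above or below the LiDAR
        scan plane still produces a return."""
        band = depth_m[v_band[0]:v_band[1], :]             # rows likely to contain obstacles
        band = np.where((band > 0.0) & (band < max_range), band, np.inf)
        ranges = band.min(axis=0)                          # nearest valid return per column
        cols = np.arange(depth_m.shape[1])
        angles = np.arctan2(cols - cx, fx)                 # pinhole model: column -> bearing
        return angles, ranges

    def fuse_scans(lidar_ranges, camera_ranges):
        """Per-angle minimum of the two range arrays (assumed already
        resampled to a common angular grid): an obstacle reported by
        either sensor survives, patching the low-channel LiDAR's blind
        spots with the camera's depth coverage."""
        return np.minimum(lidar_ranges, camera_ranges)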



Fig. 1. Indoor autonomous navigation robot system.

Fig. 2. Implemented mobile robot.

Fig. 3. Navigation using LiDAR-based obstacle recognition.

Fig. 4. The process of extracting 3D structure information.

Fig. 5. Darknet-19 YOLO architecture.

Fig. 6. An example showing the 3D object recognition procedure.
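The procedure in Figs. 4-6 pairs the detector output with the registered depth image. The fragment below is a hedged sketch of that pairing, not the authors' code: the function name locate_detections, the box format, and the intrinsics are assumptions, and the detector itself (e.g. a Darknet-19 YOLO network) is treated as a black box that has already returned pixel boxes. It reads the depth values inside each detected box, takes their median as a robust range, and back-projects the box centre to a 3D position the navigation stack can avoid.

    import numpy as np

    def locate_detections(depth_m, boxes, fx, fy, cx, cy):
        """Estimate a 3D position for each detection.  boxes holds integer
        pixel rectangles (x1, y1, x2, y2) produced by an object detector
        run on the RGB frame registered to depth_m (metres)."""
        points = []
        for (x1, y1, x2, y2) in boxes:
            patch = depth_m[y1:y2, x1:x2]
            valid = patch[np.isfinite(patch) & (patch > 0.0)]
            if valid.size == 0:
                points.append(None)            # no usable depth inside this box
                continue
            z = float(np.median(valid))        # robust range to the object
            u = (x1 + x2) / 2.0                # box centre in pixels
            v = (y1 + y2) / 2.0
            x = (u - cx) * z / fx              # pinhole back-projection
            y = (v - cy) * z / fy
            points.append((x, y, z))
        return points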

Fig. 7. Comparison of scan information.

Fig. 8. Experiments for indoor autonomous navigation avoiding 3D structures.
