Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View

  • Choi, Jaehoon (Department of Computer Engineering, Keimyung University)
  • Lee, Deokwoo (Department of Computer Engineering, Keimyung University)
  • Received : 2020.03.12
  • Accepted : 2020.06.05
  • Published : 2020.06.30

Abstract

This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. The fusion result is obtained by registering the data captured by the two sensors, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides the distances between the sensor and nearby objects in the scene, while the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; for instance, advanced driver assistance systems, robotics, and other systems that require visual information processing might find this work useful. Since the LIDAR provides only depth values, the depth data must be processed to generate a depthmap that corresponds to the RGB image. Experimental results are provided to validate the proposed approach.
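The abstract leaves the registration step at a high level. As a rough illustration, a single-plane scanner such as the RPLIDAR-A3 returns (angle, distance) pairs, which can be converted to Cartesian points and projected into the camera image with a pinhole model once the two sensors have been calibrated. The Python sketch below is a minimal illustration of that idea; the intrinsic matrix K and the extrinsic rotation R and translation t are assumed to come from a separate calibration step and are not values from the paper.

```python
import numpy as np

def lidar_polar_to_cartesian(angles_deg, distances_m):
    """Convert RPLIDAR (angle, distance) returns into 3D points in the
    LIDAR frame. The RPLIDAR-A3 scans in a single plane, so z = 0."""
    theta = np.deg2rad(angles_deg)
    x = distances_m * np.cos(theta)
    y = distances_m * np.sin(theta)
    z = np.zeros_like(x)
    return np.stack([x, y, z], axis=1)           # shape (N, 3)

def project_to_image(points_lidar, R, t, K):
    """Map LIDAR-frame points into the camera frame with assumed
    extrinsics (R, t), then project them with the pinhole model K."""
    pts_cam = points_lidar @ R.T + t             # (N, 3) in the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]         # keep points in front of the camera
    uv_h = (K @ pts_cam.T).T                     # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    return uv, pts_cam[:, 2]                     # pixel positions and their depths
```

Because the camera's FOV is narrower than the LIDAR's 360° sweep, only the projected points that fall inside the image bounds are kept; this cropping is what the differing fields of view amount to in practice.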

This paper presents a method for registering the data acquired by a LIDAR sensor and a general camera (RGB sensor), and for generating a depthmap corresponding to the color image acquired by the camera. The study uses a Slamtec RPLIDAR-A3 and a general digital camera; the two sensors acquire and provide information of different kinds and in different forms. The LIDAR sensor provides the distances from the LIDAR to objects and surrounding obstacles, while the digital camera provides the red, green, and blue values of a 2-dimensional image. Registering the information from two heterogeneous sensors has the potential to improve object detection and tracking performance, and is expected to be highly applicable in areas that require visual information processing, such as autonomous vehicles and robotics. To register the information provided by the two sensors, the data acquired by each sensor must be processed and prepared so that it is suitable for registration. This paper presents a preprocessing method that produces the registration of the two sensors' data, together with experimental results.
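Once the scan points have been projected, the sparse depths must be spread over the full image grid to form a depthmap aligned with the RGB image. The abstract describes this preprocessing and generation step without fixing a method; the sketch below simply rasterizes the projected depths and fills the remaining pixels by nearest-neighbor interpolation, which is an assumption rather than the paper's actual scheme.

```python
import numpy as np
from scipy.interpolate import griddata

def sparse_to_depthmap(uv, depths, image_shape):
    """Build a dense depthmap the same size as the RGB image from the
    sparse projected LIDAR depths. Nearest-neighbor interpolation is
    used because a single-plane scan yields nearly collinear pixel
    positions, for which linear interpolation can be degenerate."""
    h, w = image_shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, depths = uv[inside], depths[inside]
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    depthmap = griddata(uv, depths, (grid_u, grid_v), method='nearest')
    return depthmap.astype(np.float32)
```

The resulting array can then be stacked with the RGB channels, giving per-pixel RGB-D data for downstream detection and tracking.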
