Semantic Object Detection based on LiDAR Distance-based Clustering Techniques for Lightweight Embedded Processors

  • Jung, Dongkyu (School of Electronic Engineering, Kyungpook National University)
  • Park, Daejin (School of Electronic Engineering, Kyungpook National University)
  • Received : 2022.08.03
  • Accepted : 2022.09.05
  • Published : 2022.10.31

Abstract

The accuracy of surrounding-object recognition algorithms that use 3D sensors such as LiDAR in autonomous vehicles has improved through many studies, but at the cost of high-performance hardware and complex structures. Such object recognition algorithms place a heavy load on the main processor of an autonomous vehicle, which must run and manage many processes while driving. To reduce this load while still exploiting the advantages of 3D sensor data, we propose 2D data-based recognition that uses ROIs generated from physical properties extracted from the 3D sensor data. When the brightness of the base image was reduced by 50%, the proposed method showed 5.3% higher accuracy and 28.57% shorter execution time than the existing 2D-based model. On the base image, it trades 2.46% lower accuracy relative to the 3D-based model for a 6.25% reduction in execution time.

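The approach summarized in the abstract, clustering LiDAR returns by distance and converting each cluster into an image-space ROI for a lightweight 2D recognizer, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, thresholds, and the 3x4 camera projection matrix P are assumptions introduced only to show the idea.

import numpy as np

def cluster_points(points, distance_threshold=0.5, min_cluster_size=10):
    """Greedy Euclidean clustering of an (N, 3) LiDAR point array:
    points closer than the threshold to any member of a growing cluster
    are merged into that cluster (O(N^2), for illustration only)."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            dists = np.linalg.norm(points[remaining] - points[idx], axis=1)
            close = [remaining[i] for i in np.where(dists < distance_threshold)[0]]
            for c in close:
                remaining.remove(c)
            cluster.extend(close)
            frontier.extend(close)
        if len(cluster) >= min_cluster_size:
            clusters.append(points[cluster])
    return clusters

def cluster_to_roi(cluster, P):
    """Project the 3D axis-aligned bounding box of one cluster through a
    3x4 camera projection matrix P and return the enclosing image ROI
    (u_min, v_min, u_max, v_max). Assumes the cluster lies in front of
    the camera (positive depth)."""
    mins, maxs = cluster.min(axis=0), cluster.max(axis=0)
    corners = np.array([[x, y, z, 1.0]
                        for x in (mins[0], maxs[0])
                        for y in (mins[1], maxs[1])
                        for z in (mins[2], maxs[2])])
    uvw = corners @ P.T                  # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective divide
    return uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()

Each resulting ROI could then be cropped from the camera image and passed to a 2D recognition model, so that the expensive 3D detection pipeline is replaced by a cheap geometric preprocessing step.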

Acknowledgement

This study was supported by the BK21 FOUR project funded by the Ministry of Education, Korea (4199990113966, 10%), by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1A6A1A03025109, 10%; NRF-2022R1I1A3069260, 10%), and by the Ministry of Science and ICT (2020M3H2A1078119). This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korean government (MSIT) (No. 2021-0-00944, Metamorphic approach of unstructured validation/verification for analyzing binary code, 40%; No. 2022-0-00816, OpenAPI-based hw/sw platform for edge devices and cloud server, integrated with the on-demand code streaming engine powered by AI, 20%; No. 2022-0-01170, PIM Semiconductor Design Research Center, 10%).
