• Title/Summary/Keyword: camera LIDAR fusion


Camera and LIDAR Combined System for On-Road Vehicle Detection

  • Hwang, Jae-Pil;Park, Seong-Keun;Kim, Eun-Tai;Kang, Hyung-Jin
• Journal of Institute of Control, Robotics and Systems, v.15 no.4, pp.390-395, 2009
  • In this paper, we design an on-road vehicle detection system based on the combination of a camera and a LIDAR system. In the proposed system, candidate areas are selected from the LIDAR data using a grouping algorithm, and each selected candidate area is then scanned by an SVM to find an actual vehicle. Morphological edge images are used as the camera features, and their principal components, called eigencars, are employed to train the SVM. Our experiments show that the on-road vehicle detection system developed in this paper achieves about 80% accuracy and runs at 20 scans per second on the LIDAR and 10 frames per second on the camera.
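
As an illustration of the grouping step described in the abstract, the sketch below clusters ordered LIDAR scan points wherever the gap between neighbors exceeds a threshold. The 0.5 m threshold and the toy scan are assumptions for illustration, not values from the paper, and the SVM/eigencar stage is omitted.

```python
import numpy as np

def group_scan_points(points: np.ndarray, gap_threshold: float = 0.5):
    """Split an ordered LIDAR scan (N x 2, meters) into clusters of
    candidate objects wherever the neighbor distance exceeds gap_threshold."""
    clusters, current = [], [points[0]]
    for prev, curr in zip(points[:-1], points[1:]):
        if np.linalg.norm(curr - prev) <= gap_threshold:
            current.append(curr)
        else:
            clusters.append(np.array(current))
            current = [curr]
    clusters.append(np.array(current))
    return clusters

# Three points close together, then a jump to a second object.
scan = np.array([[5.0, 0.0], [5.1, 0.1], [5.2, 0.2], [9.0, 1.0]])
print([len(c) for c in group_scan_points(scan)])  # [3, 1]
```

Each cluster would then be mapped to an image region and handed to the SVM stage.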

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View

  • Choi, Jaehoon;Lee, Deokwoo
• Journal of the Korea Academia-Industrial cooperation Society, v.21 no.6, pp.28-34, 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. The fusion is achieved by registering the data captured by the two sensors, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. An RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively: the LIDAR sensor provides the distance between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional color image. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking, so driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
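
As a rough illustration of the registration idea, the sketch below projects LIDAR points through a pinhole camera model to build a sparse depthmap aligned with the RGB image. The intrinsics K and the identity extrinsics are placeholders, not the calibration of the RPLIDAR-A3 setup in the paper.

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])   # assumed pinhole intrinsics
R = np.eye(3)                     # assumed LIDAR-to-camera rotation
t = np.zeros(3)                   # assumed LIDAR-to-camera translation

def lidar_to_depthmap(points, h=480, w=640):
    """Project N x 3 LIDAR points (meters) into an h x w depthmap;
    zero means no LIDAR return at that pixel."""
    depth = np.zeros((h, w))
    cam = (R @ points.T).T + t        # points in the camera frame
    cam = cam[cam[:, 2] > 0]          # keep points in front of the camera
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[ok], u[ok]] = cam[ok, 2]  # metric depth per pixel
    return depth

depth = lidar_to_depthmap(np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 5.0]]))
print(depth[240, 320], depth[240, 460])  # 4.0 5.0
```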

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model

  • Yi-ji Im;Dae-seon Choi
• Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.1099-1110, 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance, and research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently very active. Deep learning models, however, are vulnerable to adversarial attacks that perturb the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on preventing obstacle detection by lowering the confidence score of the object recognition model, but they work only against the targeted model. An attack on the sensor fusion stage, by contrast, can cascade errors into every vision task downstream of the fusion, a risk that needs to be considered; moreover, an attack on LiDAR point cloud data, which is hard to judge visually, makes it difficult even to recognize that an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In attack-performance experiments with scaling of various sizes, the attack induced fusion errors in more than 77% of cases on average.
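
The core perturbation is simple to picture: multiplying the coordinates of the input point cloud by a factor distorts the geometry that a calibration model such as LCCNet relies on, so the regressed extrinsics drift. The sketch below shows only that scaling step with an illustrative factor; the paper's full attack pipeline is more involved.

```python
import numpy as np

def scaling_attack(points: np.ndarray, factor: float = 1.1) -> np.ndarray:
    """Scale an N x 3 LIDAR point cloud about the sensor origin."""
    return points * factor

clean = np.array([[10.0, 2.0, 0.5], [20.0, -1.0, 0.3]])
attacked = scaling_attack(clean, factor=1.1)
# Per-point displacement: the farther a point is, the larger the shift,
# which a model assuming rigid sensor geometry cannot explain away.
print(np.linalg.norm(attacked - clean, axis=1))
```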

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

• G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security, v.23 no.11, pp.67-72, 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, and they have had a significant impact on society, road safety, and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is concatenated with the RGB data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment from the integrated vision and LiDAR data, and is designed to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
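
A minimal sketch of the input construction described above: a sparse LIDAR depthmap is densified by repeatedly filling empty pixels from valid neighbors, then stacked with the RGB channels into the 4-channel tensor a CNN would consume. The dilation-style upsampling and the toy sizes are assumptions; the paper's upsampling method may differ.

```python
import numpy as np

def upsample_sparse_depth(depth, iterations=3):
    """Fill zero (missing) pixels from nonzero neighbors, repeatedly.
    np.roll wraps at the borders, which is acceptable for a sketch."""
    d = depth.copy()
    for _ in range(iterations):
        filled = d.copy()
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            shifted = np.roll(d, (dy, dx), axis=(0, 1))
            mask = (filled == 0) & (shifted > 0)
            filled[mask] = shifted[mask]
        d = filled
    return d

rgb = np.random.rand(8, 8, 3)   # stand-in camera image
sparse = np.zeros((8, 8))
sparse[4, 4] = 12.0             # a single LIDAR return (meters)
dense = upsample_sparse_depth(sparse)
rgbd = np.dstack([rgb, dense])  # 4-channel RGB-D input for a CNN
print(rgbd.shape)               # (8, 8, 4)
```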

Parking Space Detection based on Camera and LIDAR Sensor Fusion

  • Park, Kyujin;Im, Gyubeom;Kim, Minsung;Park, Jaeheung
• The Journal of Korea Robotics Society, v.14 no.3, pp.170-178, 2019
  • This paper proposes a parking space detection method for autonomous parking that fuses the Around View Monitor (AVM) image with a Light Detection and Ranging (LIDAR) sensor. The method consists of removing obstacles other than the parking lines, detecting the parking lines, and applying template matching to obtain the location of parking spaces in the parking lot. To remove the obstacles, we correct and fuse the LIDAR information while accounting for the distortion in the AVM image. With the obstacles removed, a line filter that reflects the thickness of the parking line and an improved Radon transform are applied to detect the parking lines clearly. The parking space location is then detected by template matching with a modified parking space template, and the detected parking lines are used to return the location information of the parking space. Finally, we propose a novel parking space detection system that returns the relative distance and relative angle from the current vehicle to the parking space.
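
The "line filter that reflects the thickness of the parking line" can be pictured as a 1-D matched filter: positive over the expected stripe width and negative on either side, so the response peaks at line centers. The 5-pixel width and the synthetic image row below are illustrative assumptions.

```python
import numpy as np

def line_filter_response(row: np.ndarray, width: int = 5) -> np.ndarray:
    """Zero-mean matched filter for a bright stripe of `width` pixels."""
    kernel = np.concatenate([-np.ones(width), 2 * np.ones(width), -np.ones(width)])
    return np.convolve(row, kernel / kernel.size, mode="same")

row = np.zeros(50)
row[20:25] = 1.0                  # one painted line in a synthetic image row
resp = line_filter_response(row)
print(int(np.argmax(resp)))       # 22, the stripe center
```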

Automatic Building Extraction Using LIDAR and Aerial Image

  • Jeong, Jae-Wook;Jang, Hwi-Jeong;Kim, Yu-Seok;Cho, Woo-Sug
• Journal of Korean Society for Geospatial Information Science, v.13 no.3 s.33, pp.59-67, 2005
  • Building information is a primary source in many applications such as mapping, telecommunication, car navigation, and virtual city modeling. While aerial CCD images, captured by a passive sensor (a digital camera), provide highly accurate horizontal positioning, they are far more difficult to process automatically because of inherent properties such as perspective projection and occlusion. A LIDAR system, on the other hand, offers 3D information about each surface rapidly and accurately in the form of irregularly distributed point clouds, but, in contrast to optical images, it is much harder to obtain semantic information such as building boundaries and object segmentation from them. Photogrammetry and LIDAR thus each have major advantages and drawbacks for reconstructing earth surfaces. The purpose of this investigation is to obtain spatial information about 3D buildings automatically by fusing LIDAR data with aerial CCD images. The experimental results show that most of the complex buildings are extracted efficiently by the proposed method, and they indicate that fusing LIDAR data with aerial CCD images improves the feasibility of automatic building detection and extraction.

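One common ingredient behind this kind of LIDAR/image building extraction is a normalized surface model: subtracting the ground elevation from the LIDAR surface elevation gives object heights, and cells above a height threshold become building candidates that image evidence can then confirm. The sketch below shows only that thresholding step with made-up elevations; it is not the paper's full method.

```python
import numpy as np

def building_candidates(dsm, dtm, min_height=2.5):
    """Mask of grid cells whose normalized height (DSM - DTM, meters)
    exceeds a typical single-storey threshold."""
    return (dsm - dtm) > min_height

dsm = np.array([[3.0, 12.0], [2.9, 9.5]])  # surface elevations (m)
dtm = np.array([[2.8, 2.8], [2.8, 2.8]])   # ground elevations (m)
print(building_candidates(dsm, dtm))        # True where a building is likely
```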

Object detection and distance measurement system with sensor fusion

  • Lee, Tae-Min;Kim, Jung-Hwan;Lim, Joonhong
• Journal of IKEEE, v.24 no.1, pp.232-237, 2020
  • In this paper, we propose an efficient sensor fusion method for object recognition and distance measurement in autonomous vehicles. The sensors typically used in autonomous vehicles are radar, LIDAR, and camera. Among these, the LIDAR sensor is used to create a map around the vehicle, but it performs poorly in bad weather and is expensive. To compensate for these shortcomings, distance is measured with a radar sensor, which is relatively inexpensive and unaffected by snow, rain, and fog, and a camera sensor with an excellent object recognition rate is fused with it to recognize objects and measure their distance. The fused video is transmitted to a smartphone in real time through an IP server and can be used by a driving assistance system that assesses the current vehicle situation from inside and outside.
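
The fusion described here needs an association step between camera detections and radar returns. A minimal sketch, assuming a pinhole camera with a placeholder focal length: each bounding-box center is converted to a bearing angle and matched to the radar return with the nearest bearing, which supplies the distance.

```python
import math

FOCAL_PX = 700.0  # assumed horizontal focal length in pixels
CX = 320.0        # assumed principal point (image center column)

def pixel_to_bearing(u: float) -> float:
    """Horizontal bearing (radians) of an image column, pinhole model."""
    return math.atan2(u - CX, FOCAL_PX)

def fuse(detections, radar_returns):
    """Attach the range of the nearest-bearing radar return to each detection."""
    fused = []
    for u, label in detections:
        bearing = pixel_to_bearing(u)
        rng, _ = min(radar_returns, key=lambda r: abs(r[1] - bearing))
        fused.append((label, rng, bearing))
    return fused

radar = [(14.2, 0.05), (32.7, -0.20)]      # (range m, bearing rad)
dets = [(360.0, "car"), (180.0, "truck")]  # (pixel column, class)
print(fuse(dets, radar))
```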

A Framework for Building Reconstruction Based on Data Fusion of Terrestrial Sensory Data

  • Lee, Impyeong;Choi, Yunsoo
• Korean Journal of Geomatics, v.4 no.2, pp.39-45, 2004
  • Building reconstruction attempts to generate geometric and radiometric models of existing buildings, usually from sensory data: traditionally aerial or satellite images, more recently airborne LIDAR data, or a combination of these. Extensive studies on building reconstruction from such data have produced competitive algorithms with reasonable performance and some degree of automation. Nevertheless, the level of detail and completeness of the reconstructed building models often cannot reach the high standards that are now or will be required by various applications. Hence, the use of terrestrial sensory data, which can provide higher resolution and more complete coverage, has been strongly emphasized. We developed a fusion framework for building reconstruction from terrestrial sensory data: points from a laser scanner, images from a digital camera, and absolute coordinates from a total station. The proposed approach was applied to reconstructing a building model from real data sets acquired from a large, complex existing building. Based on the experimental results, we confirmed that the proposed approach can achieve high resolution and accuracy in building reconstruction, and that it can contribute effectively to an operational system producing large urban models for 3D GIS with reasonable resources.

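One concrete step such a framework needs is registering scanner points into the absolute frame defined by the total-station control points. The sketch below estimates that rigid transform with the standard Kabsch algorithm on synthetic matched points; it illustrates the registration idea, not the paper's actual procedure.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t
    (Kabsch algorithm on matched N x 3 point sets)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Synthetic control points: a known rotation plus translation.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
dst = src @ R_true.T + np.array([10.0, 20.0, 0.0])
R, t = rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```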

Attention based Feature-Fusion Network for 3D Object Detection

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
• Journal of Advanced Navigation Technology, v.27 no.2, pp.190-196, 2023
  • Recently, following the development of LIDAR technology, which can measure the distance to objects, interest in LIDAR-based 3D object detection networks has been growing. Previous networks produce inaccurate localization results because spatial information is lost during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LIDAR fusion system to acquire high-level features and high positional accuracy. First, by introducing an attention mechanism into Voxel-RCNN, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are fused effectively to improve 3D detection performance. Additionally, we propose a late-fusion mechanism that combines the outputs of the 3D and 2D object detection networks to remove false positives. Comparative experiments with existing algorithms are performed on the KITTI dataset, which is widely used in the field of autonomous driving. The proposed method improved performance in both 2D object detection on BEV and 3D object detection; in particular, precision improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
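
The late-fusion step can be sketched simply: a 3D detection, projected into the image, survives only if it overlaps some 2D camera detection, which suppresses LIDAR-only false positives. The boxes and the IoU threshold below are illustrative, not the paper's configuration.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def late_fusion(projected_3d, boxes_2d, thresh=0.5):
    """Keep 3D detections confirmed by at least one 2D detection."""
    return [b3 for b3 in projected_3d
            if any(iou(b3, b2) >= thresh for b2 in boxes_2d)]

cam = [(100, 100, 200, 200)]
lidar_proj = [(110, 105, 205, 195), (400, 50, 450, 120)]  # second is spurious
print(late_fusion(lidar_proj, cam))  # only the confirmed box survives
```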

Longitudinal Motion Planning of Autonomous Vehicle for Pedestrian Collision Avoidance

  • Kim, Yujin;Moon, Jongsik;Jeong, Yonghwan;Yi, Kyongsu
• Journal of Auto-vehicle Safety Association, v.11 no.3, pp.37-42, 2019
  • This paper presents an acceleration planning algorithm for pedestrian collision avoidance in urban autonomous driving. Various scenarios between pedestrians and a vehicle are designed to exercise the planning algorithm; to simulate them, we analyze pedestrian behavior and identify the limitations of the fused sensors, a LIDAR and a vision camera. Acceleration is determined optimally by considering the time to collision (TTC) and the pedestrian's intention. The pedestrian's crossing intention is estimated from velocity and position changes, enabling quick control decisions that minimize full-braking situations. The feasibility of the proposed algorithm is verified by simulations using CarSim and Simulink and by comparisons with actual driving data.
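
The TTC quantity driving such a planner is simply the gap divided by the closing speed. The sketch below builds a toy acceleration rule on top of it; the braking threshold, the intention weighting, and the deceleration limit are loud assumptions, not the paper's tuned logic.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until collision; infinite when the gap is not closing."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def plan_acceleration(gap_m, closing_speed_mps, crossing_prob,
                      ttc_brake=3.0, max_decel=-6.0):
    """Return a longitudinal acceleration command in m/s^2 (toy rule)."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    # Brake earlier the more confident we are that the pedestrian will cross.
    if ttc >= ttc_brake / max(crossing_prob, 1e-3):
        return 0.0
    # Constant deceleration needed to stop within the gap, capped at the limit.
    return max(max_decel, -closing_speed_mps ** 2 / (2.0 * gap_m))

print(plan_acceleration(gap_m=15.0, closing_speed_mps=10.0, crossing_prob=0.9))
# about -3.33 m/s^2
```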