• Title/Summary/Keyword: radar-vision fusion

A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems / v.22 no.8 / pp.633-642 / 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of the states of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of the preceding vehicle. In particular, we study compensating for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters whose models correspond to a lateral-compensation mode and a radar-only mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as the data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently fuses both the radar and vision measurements into a single state estimate. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
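The mode-mixing step at the heart of an IMM estimator can be sketched as follows (a minimal illustration, not the paper's implementation; the two modes, states, and transition probabilities below are assumed values standing in for the lateral-compensation and radar-only modes):

```python
import numpy as np

def imm_mix(x, P, mu, Pi):
    """IMM mixing: blend per-mode estimates using mode probabilities mu
    and the Markov mode-transition matrix Pi before each filter cycle."""
    c = Pi.T @ mu                      # predicted mode probabilities c[j]
    mix = (Pi * mu[:, None]) / c       # mixing weights mix[i, j] = P(mode i | mode j)
    x_mix, P_mix = [], []
    for j in range(len(x)):
        xj = sum(mix[i, j] * x[i] for i in range(len(x)))
        Pj = sum(mix[i, j] * (P[i] + np.outer(x[i] - xj, x[i] - xj))
                 for i in range(len(x)))
        x_mix.append(xj)
        P_mix.append(Pj)
    return x_mix, P_mix, c

# illustrative two-mode setup: lateral-compensation mode vs radar-only mode
x = [np.array([0.2, 1.0]), np.array([0.4, 1.1])]   # per-mode [lat. pos, lat. vel]
P = [np.eye(2) * 0.1, np.eye(2) * 0.3]
mu = np.array([0.7, 0.3])                          # current mode probabilities
Pi = np.array([[0.95, 0.05], [0.05, 0.95]])        # mode transition matrix
x_mix, P_mix, c = imm_mix(x, P, mu, Pi)
```

In the full IMM-PDAF, each mode-conditioned Kalman filter would additionally weight its measurement update by association probabilities computed over all gated radar returns.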

Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras (AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발)

  • Jin, Youngseok;Jeon, Hyeongcheol;Shin, Young-Nam;Hyun, Eugin
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.4 / pp.169-178 / 2018
  • Various sensors are currently used in advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision-radar sensor fusion has become a popular approach. A typical fusion scheme uses a vision sensor to recognize targets within ROIs (Regions Of Interest) generated by radar sensors. Because the wide-angle lenses of AVM (Around View Monitor) cameras limit detection performance at near distances and around the edges of the field of view, extracting exact ROIs from the radar sensor is essential for high-performance fusion of AVM cameras and radar sensors. To address this problem, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a multiple-radar data logging system together with AVM cameras. We also designed radar post-processing algorithms to extract exact ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
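The radar-to-camera ROI hand-off described above can be illustrated with a simple pinhole projection (a rough sketch only: an AVM camera is a wide-angle/fisheye device, and the intrinsics `fx`, `fy`, `cx`, `cy`, the camera height, and the target dimensions are assumed values, not the paper's platform):

```python
import math

def radar_to_roi(r, azimuth_deg, fx=800.0, fy=800.0, cx=640.0, cy=360.0,
                 cam_height=1.0, target_w=1.8, target_h=1.5):
    """Map a radar detection (range r in metres, azimuth in degrees) to an
    image-plane ROI (x, y, w, h) using a pinhole camera model."""
    x = r * math.sin(math.radians(azimuth_deg))   # lateral offset (m)
    z = r * math.cos(math.radians(azimuth_deg))   # forward distance (m)
    u = cx + fx * x / z                           # image column of target centre
    v = cy + fy * cam_height / z                  # image row of ground contact point
    w = fx * target_w / z                         # ROI width in pixels
    h = fy * target_h / z                         # ROI height in pixels
    return (int(u - w / 2), int(v - h), int(w), int(h))

roi = radar_to_roi(20.0, 0.0)   # target 20 m straight ahead
```

For a real AVM camera the projection would go through the fisheye distortion model and the surround-view warp rather than a plain pinhole, which is exactly why precise radar-side ROI extraction matters near the image edges.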

Radar and Vision Sensor Fusion for Primary Vehicle Detection (레이더와 비전센서 융합을 통한 전방 차량 인식 알고리즘 개발)

  • Yang, Seung-Han;Song, Bong-Sob;Um, Jae-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.639-645 / 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or travels alongside other preceding vehicles in the adjacent lane at a similar velocity and range. To mitigate this performance degradation of the radar, vehicle detection information from the vision sensor and a path prediction based on ego-vehicle sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data collected on a highway.
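The in-path ("primary vehicle") classification against a predicted ego path can be sketched as below (illustrative only; the constant yaw-rate arc model and the lane half-width are assumptions, not the authors' exact formulation):

```python
def is_primary(target_x, target_y, ego_speed, ego_yaw_rate,
               lane_half_width=1.75):
    """Classify a tracked target as in-path by comparing its lateral offset
    against the ego path predicted from on-board sensors.  Uses the constant
    yaw-rate arc approximation y = x**2 / (2 * R) with R = v / yaw_rate."""
    if abs(ego_yaw_rate) < 1e-6:
        path_y = 0.0                           # straight-line path
    else:
        R = ego_speed / ego_yaw_rate           # turn radius from ego sensors
        path_y = target_x ** 2 / (2 * R)       # predicted path offset at target range
    return abs(target_y - path_y) < lane_half_width

# target 30 m ahead, on the centreline, ego driving straight at 20 m/s
in_path = is_primary(30.0, 0.0, 20.0, 0.0)
```

In the paper's pipeline such a classification would then feed the probabilistic association filters together with the vision detections.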

Development of a Monitoring and Verification Tool for Sensor Fusion (센서융합 검증을 위한 실시간 모니터링 및 검증 도구 개발)

  • Kim, Hyunwoo;Shin, Seunghwan;Bae, Sangjin
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.3 / pp.123-129 / 2014
  • SCC (Smart Cruise Control) and AEBS (Autonomous Emergency Braking System) use various types of sensor data, so the reliability of that data is an important consideration. In this paper, data from radar and vision sensors are fused using a Bayesian sensor fusion technique to improve the reliability of the sensor data. The paper then presents a sensor fusion verification tool developed to monitor the acquired sensor data and to efficiently verify the sensor fusion results. A parallel computing method was applied to reduce verification time, and a series of simulation results for this method are discussed in detail.
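A minimal form of such Bayesian fusion is the inverse-covariance weighting of two independent Gaussian measurements of the same state; the sketch below is illustrative, with assumed noise covariances (radar accurate longitudinally, vision accurate laterally):

```python
import numpy as np

def fuse(z1, R1, z2, R2):
    """Inverse-covariance (Bayesian) fusion of two independent Gaussian
    measurements z1, z2 of the same quantity with noise covariances R1, R2."""
    I1, I2 = np.linalg.inv(R1), np.linalg.inv(R2)
    P = np.linalg.inv(I1 + I2)       # fused covariance (tighter than either input)
    z = P @ (I1 @ z1 + I2 @ z2)      # precision-weighted mean
    return z, P

# assumed values: [longitudinal, lateral] position in metres
z_radar, R_radar = np.array([20.0, 1.0]), np.diag([0.1, 1.0])
z_vision, R_vision = np.array([20.5, 0.8]), np.diag([1.0, 0.05])
z, P = fuse(z_radar, R_radar, z_vision, R_vision)
```

The fused estimate naturally leans toward whichever sensor is more confident in each axis, which is the property the verification tool would be checking in the logged data.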

A Study of Sensor Fusion using Radar Sensor and Vision Sensor in Moving Object Detection (레이더 센서와 비전 센서를 활용한 다중 센서 융합 기반 움직임 검지에 관한 연구)

  • Kim, Se Jin;Byun, Ki Hun;Won, In Su;Kwon, Jang Woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.2 / pp.140-152 / 2017
  • This paper presents a study of sensor fusion using a radar sensor and a vision sensor for moving-object detection. A radar sensor has some problems in detecting objects: when it is shaken by wind or similar disturbances, it can falsely detect static objects such as buildings or trees. A vision sensor covers a wide area and is widely used, but it is easily affected by the lighting conditions of the scene, shaking of the sensor device, weather, and so on. This paper therefore proposes fusing the two sensors for object detection: each sensor compensates for the other's weaknesses, so this kind of sensor fusion makes object detection considerably more robust.

Asynchronous Sensor Fusion using Multi-rate Kalman Filter (다중주기 칼만 필터를 이용한 비동기 센서 융합)

  • Son, Young Seop;Kim, Wonhee;Lee, Seung-Hi;Chung, Chung Choo
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.11 / pp.1551-1558 / 2014
  • We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous, multi-rate sampling periods in object vehicle tracking. Model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the position prediction performance, different weights are applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method can provide estimated positions of the object vehicles at every sampling time of the ECU. The Mahalanobis distance is used to establish correspondence between the measured and predicted objects. The experimental results validate that the post-processed fusion data provide improved tracking performance. The proposed method achieved a twofold improvement in object tracking performance over a single-sensor method (camera or radar) in terms of root-mean-square error.
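The Mahalanobis-distance correspondence step can be sketched as a gated nearest-neighbour association (an illustration under assumed values; 9.21 is the 99% chi-square quantile for 2 degrees of freedom, and the innovation covariance `S` is a placeholder):

```python
import numpy as np

def mahalanobis2(z, z_pred, S):
    """Squared Mahalanobis distance between a measurement and a prediction."""
    d = z - z_pred
    return float(d @ np.linalg.inv(S) @ d)

def associate(measurements, predictions, S, gate=9.21):
    """Assign each measurement to its nearest predicted object,
    discarding pairs whose distance exceeds the chi-square gate."""
    pairs = []
    for i, z in enumerate(measurements):
        d2 = [mahalanobis2(z, p, S) for p in predictions]
        j = int(np.argmin(d2))
        if d2[j] < gate:
            pairs.append((i, j))
    return pairs

S = np.diag([0.5, 0.5])                                   # assumed innovation covariance
preds = [np.array([10.0, 0.0]), np.array([30.0, 2.0])]    # predicted object positions
meas = [np.array([10.3, 0.1]), np.array([29.5, 2.2]),
        np.array([50.0, -5.0])]                           # last one is clutter
pairs = associate(meas, preds, S)
```

Using the Mahalanobis distance rather than the Euclidean one lets the gate adapt to the per-axis measurement uncertainty, which matters when radar and vision have very different lateral and longitudinal accuracies.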

Radar, Vision, Lidar Fusion-based Environment Sensor Fault Detection Algorithm for Automated Vehicles (레이더, 비전, 라이더 융합 기반 자율주행 환경 인지 센서 고장 진단)

  • Choi, Seungrhi;Jeong, Yonghwan;Lee, Myungsu;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.9 no.4 / pp.32-37 / 2017
  • For automated vehicles, the integrity and fault tolerance of environment perception sensors have been an important issue. This paper presents a radar, vision, and lidar (laser radar) fusion-based fault detection algorithm for autonomous vehicles. The characteristics of each sensor are described, and the error in the states of moving targets estimated by each sensor is analyzed to derive a method for detecting faults of environment sensors from the characteristics of this error. Each moving-target estimation is performed by an EKF/IMM method. To guarantee the reliability of the environment sensor fault detection algorithm, various driving data from several types of roads are analyzed.
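One simple way to exploit this kind of three-sensor redundancy is a majority-vote residual check over the per-sensor track estimates of the same target (a sketch with assumed values and threshold, not the paper's EKF/IMM-based method):

```python
import numpy as np

def detect_faulty(estimates, tol=1.0):
    """Flag any sensor whose track estimate disagrees with both other
    sensors while those two agree with each other (majority vote).
    estimates: dict of sensor name -> state vector for one target."""
    names = list(estimates)
    faulty = []
    for a in names:
        others = [b for b in names if b != a]
        disagrees = all(np.linalg.norm(estimates[a] - estimates[b]) > tol
                        for b in others)
        others_agree = np.linalg.norm(
            estimates[others[0]] - estimates[others[1]]) <= tol
        if disagrees and others_agree:
            faulty.append(a)
    return faulty

# assumed per-sensor [longitudinal, lateral] estimates of the same target
est = {"radar": np.array([20.1, 1.0]),
       "vision": np.array([20.0, 1.1]),
       "lidar": np.array([25.0, 3.0])}   # simulated lidar fault
```

The paper instead characterizes each sensor's estimation-error statistics from EKF/IMM tracks, but the voting intuition is the same: a fault shows up as one sensor's error drifting away from the consensus.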

Preceding Vehicle Detection and Tracking with Motion Estimation by Radar-vision Sensor Fusion (레이더와 비전센서 융합기반의 움직임추정을 이용한 전방차량 검출 및 추적)

  • Jang, Jaehwan;Kim, Gyeonghwan
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.265-274 / 2012
  • In this paper, we propose a method for preceding vehicle detection and tracking with motion estimation by radar-vision sensor fusion. The proposed motion estimation not only corrects the inaccurate lateral position observed on a radar target, but also enables adaptive detection and tracking of a preceding vehicle by compensating for the changes in the geometric relation between the ego vehicle and the ground caused by driving. Furthermore, the feature-based motion estimation employed to lessen the computational burden reduces the number of invocations of the vehicle validation procedure. Experimental results prove that the correction by the proposed motion estimation improves vehicle detection performance and keeps the tracking accurate with high temporal consistency under various road conditions.

Multiple Vehicle Recognition based on Radar and Vision Sensor Fusion for Lane Change Assistance (차선 변경 지원을 위한 레이더 및 비전센서 융합기반 다중 차량 인식)

  • Kim, Heong-Tae;Song, Bongsob;Lee, Hoon;Jang, Hyungsun
    • Journal of Institute of Control, Robotics and Systems / v.21 no.2 / pp.121-129 / 2015
  • This paper presents a multiple vehicle recognition algorithm based on radar and vision sensor fusion for lane change assistance. To determine whether a lane change is possible, it is necessary to recognize not only the primary vehicle located in-lane, but also other adjacent vehicles in the left and/or right lanes. With the given sensor configuration, two challenging problems are considered. One is that a guardrail detected by the front radar might be recognized as a left or right vehicle due to its geometric characteristics; this problem is solved by a guardrail recognition algorithm based on motion and shape attributes. The other is that the recognition of rear vehicles in the left or right lanes might be wrong, especially on curved roads, due to the low accuracy of the lateral position measured by the rear radars and a lack of knowledge of the road curvature in the backward direction. To solve this problem, it is proposed that the road curvature measured by the front vision sensor be used to derive the road curvature toward the rear direction. Finally, the proposed algorithm for multiple vehicle recognition is validated via field test data on real roads.
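The rear curvature correction can be sketched with a constant-curvature arc model (illustrative only; `kappa` would come from the front vision sensor's lane measurement, and the parabolic approximation y ≈ κx²/2 is an assumption):

```python
def lateral_in_lane(x_rear, y_rear, kappa):
    """Lateral offset of a rear-radar target relative to the lane centreline,
    extrapolating the front-measured road curvature kappa backwards with the
    arc approximation y = 0.5 * kappa * x**2 (x negative behind the ego car)."""
    y_lane = 0.5 * kappa * x_rear ** 2   # centreline offset at longitudinal x
    return y_rear - y_lane

# a target 30 m behind, lying exactly on the curved lane centreline
offset = lateral_in_lane(-30.0, 0.9, 0.002)
```

Without this correction, the 0.9 m centreline offset at 30 m back on a gentle curve would be misread as the target sitting in an adjacent lane.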

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires more capable and robust perception systems than the conventional perception systems of intelligent vehicles. Single-sensor-based perception systems have been widely studied using cameras and laser radar sensors, the most representative perception sensors, which provide object information such as distance measurements and object features. The distance information of the laser radar sensor is used to perceive road structures, vehicles, and pedestrians in the road environment, while the image information of the camera is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor-based perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion systems are essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments. In particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated on various roads and under various environmental conditions with an autonomous vehicle.