

Asynchronous Sensor Fusion using Multi-rate Kalman Filter

  • Son, Young Seop (Dept. of Electrical Engineering, Hanyang University, Korea and Global R&D Center, MANDO Corp.) ;
  • Kim, Wonhee (Dept. of Electrical Engineering, Dong-A University) ;
  • Lee, Seung-Hi (Div. of Electrical and Biomedical Engineering, Hanyang University) ;
  • Chung, Chung Choo (Div. of Electrical and Biomedical Engineering, Hanyang University)
  • Received : 2014.07.02
  • Accepted : 2014.10.24
  • Published : 2014.11.01

Abstract

We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous and multi-rate sampling periods in object vehicle tracking. A model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the position prediction performance, a different weighting is applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method provides estimated positions of the object vehicles at every sampling time of the ECU. The Mahalanobis distance is used to establish correspondence between the measured and predicted objects. Through experimental results, we validate that the post-processed fusion data give improved tracking performance. The proposed method achieved a twofold improvement in object tracking performance over the single-sensor methods (camera or radar) in terms of root mean square error.

Keywords

1. Introduction

A robust and reliable object tracking solution is crucial to the performance of collision avoidance, path generation, and adaptive cruise control. Object tracking with the use of radar and vision processing systems has been researched [1]-[5]. Radar and vision have complementary characteristics in terms of directional measurement accuracy: the vision system provides accurate lateral information, whilst the radar system gives accurate range information of objects. Vehicle detection and tracking methods using only a vision sensor were developed in [6], [7]. The methods were designed based on an edge-based constraint filter. However, the vision system cannot guarantee satisfactory performance under various environmental conditions such as light changes between day and night, poor weather, and lack of a light source. On the other hand, the radar provides unnecessary object information resulting from reflections on crash barriers and other cars [1]. The radar and vision sensor systems commonly used in industrial applications have uncertain and slow update rates compared to the vehicle electronic control unit (ECU). Thus it may be difficult for the radar and vision sensor systems to be synchronized with the lateral and longitudinal control systems. Furthermore, the slow update rate may limit the performance of the vehicle control system.

The fusion of the two sensors can complement each sensor's properties. The conventional approach to sensor fusion is not synchronized and updates the fused data at the rate of the slower sensor. Thus this fusion structure may result in slow object detection, and an unavoidable collision may arise. In [2], the vision data was used only to compensate the object vehicle information from the radar data. They used a radar to detect relevant object vehicles, then used a vision sensor to validate the object vehicles' lateral positions, dimensions, and target boundaries. Amditis et al. presented a multi-sensor collision avoidance system [8]. They solved the sensors' asynchronization problem through time-based generation, but they fused raw data from the sensors only at commonly updated instants. Richter et al. presented a tracking algorithm that combines asynchronous observation data using a movement model [4]. They integrated the longitudinal and lateral velocity measurements to obtain object positions at the required instants. The major problem with this integration method is that bias errors and noises in the sensors result in serious drift in the position estimates.

This paper investigates multi-object vehicle tracking using a fusion of asynchronous sensors, namely radar and vision. We propose a multi-rate sensor fusion using a Kalman filter for object vehicle tracking. The multi-rate Kalman filter is developed to approximately synchronize the sensors to the higher sampling rate of the ECU. A model-based prediction of the object vehicles' future behavior is performed by designing a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the estimated object position, a different weighting is applied to each sensor's predicted object position from the multi-rate Kalman filter: a large weighting on the lateral position from vision and on the longitudinal position from radar. The proposed method allows us to implement the prediction and correction steps at the ECU sampling rate [9], [10]. It is experimentally validated that the proposed method provides improved estimates of object positions at every sampling time of the ECU. For object tracking, it is necessary for each sensor to link measured objects to predicted ones every time sensor information is updated. In this paper, we use the Mahalanobis distance to determine the correspondence between the measured and estimated objects [11]. Through the experimental results, we validate that the post-processed fusion data has improved object tracking performance. The proposed post-processing method can help provide collision warning and/or take collision avoidance action by predicting the position and motion of an object vehicle even with intermittent sensor information. We obtained a twofold improvement in object tracking performance compared to the single-sensor methods (camera or radar) in terms of root mean square error.

 

2. System Structure

In the proposed method, vision and radar sensors are used. Fig. 1 shows the schematic block diagram of the proposed multi-rate sensor fusion system for object tracking. The structure is composed of six functional modules: (a) two sensor modules, (b) two validity determination modules for filtering object vehicles within specified ranges, (c) two correspondence determination modules for matching measured data to the object data recorded in the previous frame, (d) two multi-rate Kalman filters for predicting the object vehicles' positions, (e) a module implementing the interaction correspondence algorithm for finding the objects commonly measured by vision and radar, and (f) a sensor fusion module that improves the position estimation by fusing the weighted position information from vision and radar.

Fig. 1 Structure of the multi-rate sensor fusion system for object tracking

 

3. Object Modeling

The input data set of the post-processor consists of the identity (ID) and information of each object from each sensor. In this paper, a variable with superscript [lv] denotes a variable from vision corresponding to the object with ID lv, while a variable with superscript [lr] denotes a variable from radar, as follows

The relative distance, angle, and velocity from the ego vehicle to the measured object are obtained from the radar. From a polar-to-Cartesian conversion of the (distance, angle) information, we can obtain the longitudinal distance, x, lateral distance, y, longitudinal relative velocity, ẋ, and lateral relative velocity, ẏ. The vision processor gives the relative position (x, y) of the object in Cartesian coordinates. Fig. 2 shows the coordinates of the object positions with respect to the ego vehicle and both sensors' range and field-of-view configurations.

Fig. 2 Configuration of the vision and radar sensors
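As a concrete illustration of this conversion, the following is a minimal Python sketch; the angle convention (bearing measured from the ego vehicle's longitudinal axis) and the availability of an angle-rate term are assumptions made here for illustration, not details taken from the paper.

```python
# A minimal sketch of the radar polar-to-Cartesian conversion.
# Assumptions: theta is measured from the ego vehicle's longitudinal x-axis,
# and an angle rate theta_dot is available (otherwise set it to 0).
import numpy as np

def radar_polar_to_cartesian(r, theta, r_dot, theta_dot=0.0):
    """Convert (range, bearing, range rate) to (x, y, x_dot, y_dot)."""
    x = r * np.cos(theta)              # longitudinal distance
    y = r * np.sin(theta)              # lateral distance
    # Differentiating x = r*cos(theta) and y = r*sin(theta) with respect to time:
    x_dot = r_dot * np.cos(theta) - r * theta_dot * np.sin(theta)
    y_dot = r_dot * np.sin(theta) + r * theta_dot * np.cos(theta)
    return x, y, x_dot, y_dot

# Example: an object 50 m ahead, 2 deg to the side, closing at 3 m/s.
print(radar_polar_to_cartesian(50.0, np.deg2rad(2.0), -3.0))
```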

Generally, the relative velocity of each object with respect to the ego vehicle does not vary significantly within a short time. Thus, we can describe the motion of the object vehicles from the vision processing system using the nonmaneuver target dynamic model [12] in terms of the state vector as follows

and its system matrix Al is given by

In regard to the measurement matrix, we have the output equation in which wv denotes the measurement noise of the vision sensor, which is assumed to be zero-mean Gaussian white noise with covariance Qv, i.e., N(0, Qv).
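Since the equations themselves are not reproduced in this text, the following is only a sketch of a standard constant-velocity (nonmaneuver) model; the state ordering is an assumption chosen to be consistent with the radar output matrix Cr = [I2×2 02×2] quoted below, which selects the two position components.

```latex
% Sketch only: assumed state ordering and matrices for the nonmaneuver model.
\[
  x^{[l_v]} = \begin{bmatrix} x & y & \dot{x} & \dot{y} \end{bmatrix}^{\top}, \qquad
  \dot{x}^{[l_v]}(t) = A^{l} x^{[l_v]}(t), \qquad
  A^{l} = \begin{bmatrix} 0_{2\times 2} & I_{2\times 2} \\ 0_{2\times 2} & 0_{2\times 2} \end{bmatrix},
\]
\[
  y_v^{[l_v]} = C_v\, x^{[l_v]} + w_v, \qquad
  C_v = \begin{bmatrix} I_{2\times 2} & 0_{2\times 2} \end{bmatrix}, \qquad
  w_v \sim N(0, Q_v).
\]
```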

We can describe the object vehicles from radar in terms of the state vector as follows

where the superscript [lr] denotes the object with ID lr and Al is the same as in (2). For the radar, we have the output matrix Cr = [I4×4] or Cr = [I2×2 02×2]. ωr denotes the measurement noise of the radar, which is assumed to be zero-mean Gaussian white noise with covariance Qr, i.e., N(0, Qr). Then, for the sampling time Tc of the car ECU, one can obtain the zero-order-hold equivalent discrete-time model matrices (Φl, Cv, Cr) from (Al, Cv, Cr).
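A minimal numerical sketch of this discretization step is given below, assuming the constant-velocity model sketched above and the 10 msec ECU period reported in Section 5; the variable names are illustrative only.

```python
# Zero-order-hold discretization of the assumed constant-velocity model:
# Phi = expm(A*Tc), which for this model equals [[I, Tc*I], [0, I]].
import numpy as np
from scipy.linalg import expm

Tc = 0.01  # ECU sampling period of 10 msec (Section 5)

A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])
Phi = expm(A * Tc)

C_v = np.hstack([np.eye(2), np.zeros((2, 2))])   # vision: positions only
C_r = np.eye(4)                                  # radar: positions and velocities
print(Phi)
```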

 

4. Multi-rate Sensing and Processing

In this section, multi-rate Kalman filters for both the radar and vision systems are proposed to achieve the prediction of the objects' motion at the fast sampling rate of the ECU.

4.1 Design of Multi-rate State Estimator

Fig. 3 (a) shows the update periods of the ECU, radar, and vision. The radar provides object information at a sampling period of Trad, while the vision processing system measures it at a sampling period of Tcam. As shown in Fig. 3 (a), the sampling times are asynchronous. The vision processing system provides approximately one-sample-delayed information from the viewpoint of the ECU sampling rate. We thus need to design a multi-rate state estimator for each object dynamics (2) and (3) to estimate the objects' position information at the same sampling period Tc as the car ECU.

Fig. 3 Update periods of the ECU, vision, and radar sensors

Assumption 1: The update periods of radar and camera are fixed integer multiples of Tc such as

Tcam = Rmv Tc, Trad = Rmr Tc, where Rmv, Rmr ∈ ℕ and Rmv, Rmr ≥ 1. ◇

Then, under Assumption 1, we can represent a time instant, t, as

t = (kv + i/Rmv) Tcam = (kr + j/Rmr) Trad

for

kv, kr = 0, 1, ⋯, i = 0, 1, ⋯, Rmv − 1, j = 0, 1, ⋯, Rmr − 1,

where kv and kr denote the vision and radar update instants, respectively. i and j indicate the control update instants for the vision and radar, respectively. We design two multi-rate state estimators: one for the vision sensor and the other for the radar sensor. The multi-rate state estimator for the vision consists of two procedures. One is a prediction based on model (2) as (4)-(a):

The prediction error is corrected as in (4)-(b) by a multi-rate state estimator gain, Lv, based on the measurement data given at the (kv, 0) instant. The prediction error is computed based on the predicted states at (kv, 0). This computation keeps the corrected states from oscillating. Similarly, the prediction and correction for the radar system are given in (5).
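To make the prediction/correction timing concrete, the following is a simplified runnable Python sketch, not the paper's equations (4)-(5): it predicts at every ECU period Tc and corrects only when a sensor sample arrives, uses a fixed placeholder gain, and ignores the one-sample measurement delay handled in the text.

```python
# Simplified illustration of a multi-rate estimator: fast-rate prediction,
# slow-rate correction. Gain and numeric values are placeholders.
import numpy as np

class MultiRateEstimator:
    def __init__(self, Phi, C, L, x0):
        self.Phi, self.C, self.L = Phi, C, L
        self.x = np.asarray(x0, dtype=float)

    def predict(self):
        # runs at every ECU sample of period Tc
        self.x = self.Phi @ self.x

    def correct(self, y):
        # runs only at the instants when a sensor measurement arrives
        self.x = self.x + self.L @ (y - self.C @ self.x)

# Example: a vision estimator with an illustrative constant gain.
Tc = 0.01
Phi = np.block([[np.eye(2), Tc * np.eye(2)],
                [np.zeros((2, 2)), np.eye(2)]])
C_v = np.hstack([np.eye(2), np.zeros((2, 2))])
L_v = np.vstack([0.5 * np.eye(2), 5.0 * np.eye(2)])  # placeholder gain, not a designed value

est = MultiRateEstimator(Phi, C_v, L_v, x0=[50.0, 0.0, -3.0, 0.0])
R_mv = 7  # vision period of ~70 msec expressed in ECU steps of 10 msec
for k in range(3 * R_mv):
    est.predict()                       # fast-rate prediction at every Tc
    if (k + 1) % R_mv == 0:             # a vision frame arrives
        est.correct(np.array([50.0 - 3.0 * Tc * (k + 1), 0.1]))
print(est.x)
```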

We apply a discrete-time lifting procedure, then (4) and (5) lead to a lifted model

where

Here, we define

Then, we can reduce the multi-rate problem to a single-rate one and determine Lv and Lr from

that guarantees the estimation errors to be convergent [9], [13]. From the designed decentralized multi-rate Kalman filters, we can obtain the synchronized information of the objects' positions at every sampling time Tc, as shown in Fig. 3 (b). The update periods of a radar and a camera are generally uncertain and vary at every update instant. The same authors showed that the multi-rate state estimator is robust against violations of Assumption 1, that is, against uncertain update periods [14].
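The lifted matrices are not reproduced in this text; as a rough sketch, for a sensor that reports once every Rm fast steps the standard lifting collapses to the slow-rate pair below, from which the estimator gain can be designed as a single-rate Kalman gain. This is an assumed form, not the paper's exact construction.

```latex
% Sketch only: assumed slow-rate (lifted) pair for one measurement per R_m steps.
\[
  \bar{x}[k+1] = \Phi^{R_m}\, \bar{x}[k], \qquad \bar{y}[k] = C\, \bar{x}[k],
\]
% so that L_v and L_r would follow from the single-rate pairs
% (Phi^{R_mv}, C_v) and (Phi^{R_mr}, C_r), respectively.
```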

4.2 Correspondence between Measurements and object Vehicle

For object detection and tracking, determining the correspondence between measured objects and the object list is a necessary procedure. The vision processor and radar systems each give a list of measured objects at every update instant. In the validity determination step, only the data corresponding to objects within the pre-defined range are passed. In this paper, a variable with superscript [mv] denotes a variable from vision corresponding to the measured object with ID mv, while a variable with superscript [mr] denotes a variable from radar.

Here,

and, in general, nv′ ≠ nr′. In regard to the measurement correspondence that links a measurement to an object vehicle, we use the Mahalanobis distance [11] in the vision's correspondence algorithm in Fig. 1 as follows:

and in the radar’s module such as

where Qv and Qr are the measurement noise covariances of the vision and radar systems, respectively. Given an object with ID lv, the measurement ID mv that minimizes the Mahalanobis distance,

arg min_{mv ∈ Ωv′} dM,v(mv, lv),

corresponds to a candidate measurement for the object vehicle with ID lv. In the case of the radar, the same principle is adopted to determine the measurement ID mr corresponding to the vehicle with ID lr:

arg min_{mr ∈ Ωr′} dM,r(mr, lr).

If the Mahalanobis distance exceeds a threshold value for all existing object vehicles, then either a new object corresponding to the measurement has appeared or an existing object has disappeared [11]. If there are remaining measurements that could not be assigned to any existing predicted object list, they are candidates for new objects. Among the prediction list, if there is a vehicle that is not linked to any measurement, it may have moved out of the range or field of view.
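A minimal runnable sketch of this gating and assignment step is given below; the distance is evaluated between a measurement and the predicted output with an innovation covariance C P Cᵀ + Q, which is a common choice and an assumption here, as is the threshold value.

```python
# Mahalanobis-distance gating and nearest-track assignment (illustrative only).
import numpy as np

def mahalanobis(z, x_hat, C, P, Q):
    innov = z - C @ x_hat
    S = C @ P @ C.T + Q              # assumed innovation covariance
    return float(np.sqrt(innov.T @ np.linalg.solve(S, innov)))

def associate(measurements, tracks, C, P, Q, gate=3.0):
    """Return {measurement index: track index} links; None past the gate."""
    links = {}
    for m, z in enumerate(measurements):
        d = [mahalanobis(z, x_hat, C, P, Q) for x_hat in tracks]
        best = int(np.argmin(d))
        links[m] = best if d[best] < gate else None  # None -> new-object candidate
    return links

# Example with two tracked objects and two position measurements.
C = np.hstack([np.eye(2), np.zeros((2, 2))])
P = np.diag([1.0, 1.0, 2.0, 2.0])
Q = 0.25 * np.eye(2)
tracks = [np.array([50.0, 0.0, -3.0, 0.0]), np.array([30.0, 3.5, -1.0, 0.0])]
meas = [np.array([49.2, 0.1]), np.array([30.5, 3.4])]
print(associate(meas, tracks, C, P, Q))
```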

4.3 Vision and Radar Sensor Fusion

A procedure prior to sensor fusion is determining the interaction correspondence between objects from the radar and vision. Due to the different ranges and fields of view of the radar and camera, the numbers of detected object vehicles may differ. The sensor fusion should be applied only to the object vehicles measured by both sensors. The radar treats crash barriers and other reflecting objects as objects, while the vision system can produce a list containing only object vehicles by using, for example, the edge-based constraint filter [6]. In general, n′v < n′r in (7). Through Mahalanobis distance decision analysis, we can obtain a list of objects that are commonly included in both sensors' lists. We define the Mahalanobis distance for fusion as

where C′v = Cr. Then, given an object ID lr in the radar list, the ID that minimizes (8) is designated as the common ID, l, for fusion as follows:

The radar and vision have complementary characteristics: the radar has a long range of view, but its angular resolution is relatively low compared to the vision system. Thus the radar sensor has large measurement uncertainty in the lateral direction due to the radar beam-forming problem, while it provides good estimates of the distances and relative velocities of objects in the longitudinal direction of the ego vehicle. The vision sensor, when applied to object detection, yields large longitudinal position errors due to digitization error and its short range of view. On the other hand, the camera provides the object vehicles' lateral positions with respect to the ego vehicle with high accuracy compared to the radar.

With consideration of these complementary characteristics, we propose a sensor fusion scheme to combine the estimated states: a higher weighting on the longitudinal (range) data from the radar sensor and a higher weighting on the lateral data from the vision sensor, as follows:

where is a diagonal weighting matrix and
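The weighting matrix itself is not reproduced here; one plausible form of the weighted combination, written only as a sketch consistent with the description above, is

```latex
% Sketch only: assumed convex combination with a diagonal per-axis weighting.
\[
  \hat{x}^{[l]}_{\mathrm{fused}} = W\, \hat{x}^{[l]}_{\mathrm{vis}} + (I - W)\, \hat{x}^{[l]}_{\mathrm{rad}}, \qquad
  W = \mathrm{diag}(w_x,\; w_y,\; w_{\dot{x}},\; w_{\dot{y}}),
\]
% with w_y chosen close to 1 (trust vision laterally) and w_x close to 0
% (trust radar longitudinally).
```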

 

5. Experimental Results

The proposed multi-rate estimators and sensor fusion method were validated via experiments. The vision processing was updated every 60-80 msec, while the radar signal was updated every 50 msec. The vision had a range of view of 120 m and a field of view of 38 deg, while the radar had a range of view of 200 m and a field of view of 26 deg, as shown in Fig. 2. In order to evaluate the estimation performance of each method, we adopted a laser scanner as the reference, since the laser scanner has relatively high object tracking accuracy, within 0.04 m.

The vehicle ECU operated at a time period of 10 msec. Fig. 4 shows the objects' position data expressed in a local coordinate system where (0,0,0) coincides with the ego vehicle at 3 [sec]. At that moment, there were a total of six vehicles, including the ego vehicle, on a straight road with three lanes. Fig. 4 (a) shows the raw data from vision and radar. Fig. 4 (b) shows the measured data and the estimated data. The validity determination procedure passed only the valid data within ±5 m along the x-axis and 120 m along the y-axis, respectively. The estimated object positions were obtained through multi-rate Kalman filtering. Fig. 4 (c) shows the resulting data after the interaction correspondence and sensor fusion procedures. In the interaction correspondence determination procedure, the objects common to both sensors were retained and were given the common ID l, the same as lv, by the ID determination rule (9). It was observed that the fused data lay closer to the radar data than to the vision data along the y-axis. This fusion result is expected because we put a larger weighting on the radar's longitudinal data in consideration of its directional accuracy.

Fig. 4 Experimental result at 3.0 [sec]

Fig. 5 (a) and (b) show the lateral and longitudinal information of l = 1, respectively. In Fig. 5, the green solid line shows the fusion data, while the blue, red, and black solid lines show the vision, radar, and laser scanner data, respectively. The object information was updated every 10 msec, as shown in Fig. 5. The root mean square errors (RMSEs) of the three methods are shown in Table 1. It was observed that the proposed fusion method had the smallest RMSE among the three methods.

Table 1 Comparison of three methods

Fig. 5 Experimental result of l = 1

 

6. Conclusions

We proposed a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous and multi-rate sampling periods in object vehicle tracking. A model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). We applied different weightings to each sensor's predicted object position from the multi-rate Kalman filter. The Mahalanobis distance was used to establish correspondence between the measured and predicted objects.

Through the experimental results, we validated that the post-processed fusion data gives improved tracking performance. We expect that the proposed multi-rate object tracking method, together with the optimal active steering controller [15] and the road lane estimation [16], will contribute significantly to realizing autonomous intelligent vehicles.

References

  1. A. Gern, U. Franke, and P. Levi, "Advanced lane recognition-fusing vision and radar," in Proceedings of IEEE Intelligent Vehicles Symposium, 2000, pp. 45-51.
  2. U. Hofmann, A. Rieder, and E. Dickmanns, "Radar and vision data fusion for hybrid adaptive cruise control on highways," Machine Vision and Applications, vol. 14, no. 1, pp. 42-49, 2003. https://doi.org/10.1007/s00138-002-0093-y
  3. T. Kato, Y. Ninomiya, and I. Masaki, "An obstacle detection method by fusion of radar and motion stereo," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 3, pp. 182-188, 2002. https://doi.org/10.1109/TITS.2002.802932
  4. E. Richter, R. Schubert, and G. Wanielik, "Radar and vision based data fusion-advanced filtering techniques for a multi object vehicle tracking system," in Proceedings of IEEE Intelligent Vehicles Symposium, 2008, pp. 120-125.
  5. B. Steux, C. Laurgeau, L. Salesse, and D. Wautier, "Fade : A vehicle detection and tracking system featuring monocular color vision and radar data fusion," in Proceedings of IEEE Intelligent Vehicle Symposium, 2002, pp. 632-639.
  6. N. Srinivasa, "Vision-based vehicle detection and tracking method for forward collision warning in automobiles," in Proceedings of IEEE Intelligent Vehicle Symposium, 2002, pp. 626-631.
  7. B. Coifman, D. Beymer, P. McLauchlan, and J. Malik, "A real-time computer vision system for vehicle tracking and traffic surveillance," Transportation Research Part C: Emerging Technologies, vol. 6, no. 4, pp. 271-288, 1998. https://doi.org/10.1016/S0968-090X(98)00019-9
  8. A. Amditis, A. Polychronopoulos, I. Karaseitanidis, G. Katsoulis, and E. Bekiaris, "Multiple sensor collision avoidance system for automotive applications using an IMM approach for obstacle tracking," in Proceedings of the International Conference on Information Fusion, vol. 2, 2002, pp. 812-817.
  9. S.-H. Lee, Y. O. Lee, and C. C. Chung, "Multi-rate active steering control for autonomous vehicle lane changes," in Proceedings of IEEE Intelligent Vehicles Symposium, 2012, pp. 772-777.
  10. D. Lee and M. Tomizuka, "Multi-rate optimal state estimation with sensor fusion," in American Control Conference, vol. 4, 2003, pp. 2887-2892.
  11. S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. The MIT Press, 2006.
  12. X. R. Li and V. P. Jilkov, "A survey of maneuvering target tracking. Part 1: Dynamic models," IEEE Transactions on Aerospace and Electronics Systems, vol. 39, no. 4, pp. 1333-1364, 2003. https://doi.org/10.1109/TAES.2003.1261132
  13. S. Lee, C. Chung, and S. Suh, "Multi-rate digital control for high track density magnetic disk drives," IEEE Transactions on Magnetics, vol. 39, no. 2, pp. 832-837, 2003. https://doi.org/10.1109/TMAG.2003.808935
  14. S.-H. Lee, Y. O. Lee, Y. Son, and C. C. Chung, "Robust active steering control of autonomous vehicles: A state space disturbance observer approach," in Proceedings of International Conference on Control, Automation and Systems, 2011, pp. 596-598.
  15. S.-H. Lee, Y. O. Lee, B-A. Kim, and C. C. Chung, "Proximate model predictive control strategy for autonomous vehicle lateral control," in Proceedings of American Control Conference, 2012, pp. 3605-3610.
  16. S.-H. Lee, Y. O. Lee, Y. Son, and C. C. Chung, "Road lane estimation using vehicle states and optical lane recognition," in Proceedings of International Conference on Control, Automation and Systems, 2011, pp. 501-506.
