Requirements Analysis of Image-Based Positioning Algorithm for Vehicles  

Lee, Yong (Dept. of Geoinformatics, University of Seoul)
Kwon, Jay Hyoun (Dept. of Geoinformatics, University of Seoul)
Publication Information
Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 37, No. 5, 2019, pp. 397-402
Recently, with the emergence of autonomous vehicles and the increasing interest in safety, a variety of research has been actively conducted to precisely estimate the position of a vehicle by fusing sensors. Previously, studies determined the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit). More recently, however, precise positioning of a moving vehicle has been performed by fusing data obtained from various sensors, such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study aims to enhance kinematic vehicle positioning performance by using feature-based recognition; accordingly, the precision required of the observations obtained from images was analyzed. Velocity and attitude observations, assumed to be obtainable from images, were generated by simulation, and errors of various magnitudes were added to them. By applying these observations to a Kalman-filter-based positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed. The results show that yaw information with a precision better than 0.5 degrees is needed to improve the existing positioning algorithm by more than 10%.
Keywords: Vehicle Positioning System; Multi Sensor Integration; GNSS Signal Blockages; Kalman Filter
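The effect the abstract describes can be illustrated with a minimal sketch (not the authors' implementation): a 1-D constant-velocity Kalman filter in which GNSS position updates drop out during a blockage window, and an image-derived velocity observation optionally substitutes for them. All noise levels, the trajectory, and the blockage window are assumed for illustration only.

```python
# Minimal sketch, assuming a 1-D constant-velocity model; all noise
# levels and the simulated trajectory are illustrative assumptions,
# not values from the paper.
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update for scalar observation z."""
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def run_filter(aid_with_velocity, seed=0):
    """Return the worst position error during a 20-step GNSS blockage."""
    rng = np.random.default_rng(seed)
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position, velocity]
    Q = np.diag([0.01, 0.04])                   # process noise (assumed)
    H_pos = np.array([[1.0, 0.0]])              # GNSS position observation
    H_vel = np.array([[0.0, 1.0]])              # image-derived velocity observation
    R_pos = np.array([[4.0]])                   # GNSS noise, std 2 m (assumed)
    R_vel = np.array([[0.01]])                  # velocity noise, std 0.1 m/s (assumed)

    x = np.array([0.0, 10.0])
    P = np.diag([4.0, 1.0])
    true_pos, worst_err = 0.0, 0.0

    for t in range(100):
        true_vel = 10.0 + np.sin(0.1 * t)       # slowly varying truth velocity
        true_pos += true_vel * dt
        x = F @ x                               # prediction step
        P = F @ P @ F.T + Q
        blocked = 50 <= t < 70                  # GNSS signal blockage window
        if not blocked:
            z = true_pos + rng.normal(0.0, 2.0)
            x, P = kf_update(x, P, z, H_pos, R_pos)
        elif aid_with_velocity:                 # image-based aiding during blockage
            z = true_vel + rng.normal(0.0, 0.1)
            x, P = kf_update(x, P, z, H_vel, R_vel)
        if blocked:
            worst_err = max(worst_err, abs(x[0] - true_pos))
    return worst_err

err_unaided = run_filter(aid_with_velocity=False)
err_aided = run_filter(aid_with_velocity=True)
```

Without aiding, the filter dead-reckons on its last velocity estimate while the true velocity drifts, so position error accumulates through the blockage; with velocity aiding the drift is bounded. The paper's full algorithm additionally uses yaw (attitude) observations, which a 2-D version of this sketch would incorporate as a heading state.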