Abstract
Recently, with the emergence of autonomous vehicles and growing interest in safety, a variety of research has been actively conducted to precisely estimate a vehicle's position by fusing sensors. Previously, studies determined the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit). More recently, however, precise positioning of a moving vehicle has been performed by fusing data obtained from various sensors, such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study aims to enhance kinematic vehicle positioning performance by using feature-based recognition. To this end, an analysis of the required precision of the observations obtained from images was carried out. Velocity and attitude observations, assumed to be obtained from images, were generated by simulation, and errors of various magnitudes were added to them. By applying these observations to the positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed using a Kalman filter. The results show that yaw information with a precision better than 0.5 degrees is required to improve the existing positioning algorithm by more than 10%.
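The fusion scheme described above can be illustrated with a minimal Kalman filter sketch. This is not the paper's actual algorithm: it assumes a simple linear constant-velocity model with state [x, y, vx, vy], and during a simulated GNSS outage it fuses only image-derived velocity measurements; the process and measurement noise values are placeholders.

```python
import numpy as np

# Illustrative linear Kalman filter (assumed model, not the paper's):
# state [x, y, vx, vy]; during a GNSS outage, only image-derived
# velocity measurements (vx, vy) are available for the update step.

dt = 0.1  # filter time step in seconds (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
Q = 0.01 * np.eye(4)                          # process noise (assumed)
H_vel = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float) # velocity-only observation
R_vel = 0.25 * np.eye(2)                      # image-velocity noise (assumed)

def predict(x, P):
    """Propagate state and covariance one step through the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Standard Kalman measurement update with observation z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Simulated outage: true velocity (1.0, 0.5) m/s, noisy image measurements.
rng = np.random.default_rng(0)
x = np.zeros(4)
P = np.eye(4)
for _ in range(50):
    x, P = predict(x, P)
    z = np.array([1.0, 0.5]) + 0.1 * rng.standard_normal(2)
    x, P = update(x, P, z, H_vel, R_vel)

print(np.round(x[2:], 2))  # estimated velocity should converge near (1.0, 0.5)
```

In the study itself the added observations also include attitude (yaw), which makes the measurement model nonlinear and calls for an extended Kalman filter; the sketch only conveys how auxiliary measurements constrain the state while position fixes are unavailable.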