
Development of a Vehicle Positioning Algorithm Using In-vehicle Sensors and Single Photo Resection and its Performance Evaluation

  • Kim, Ho Jun (Dept. of Geoinformatics, University of Seoul)
  • Lee, Im Pyeong (Dept. of Geoinformatics, University of Seoul)
  • Received : 2017.03.23
  • Accepted : 2017.06.19
  • Published : 2017.06.30

Abstract

For the efficient and stable operation of autonomous vehicles and advanced driver assistance systems, which are being actively studied nowadays, it is important to determine the position of the vehicle accurately and economically. Satellite-based navigation systems are mainly used for positioning, but their accuracy degrades severely in signal-blockage areas. To overcome this limitation, sensor fusion with additional sensors such as an inertial navigation system has been widely proposed, but the high cost of such sensors remains a problem. In this work, we develop a vehicle position estimation algorithm that uses in-vehicle sensors and a low-cost imaging sensor, without any expensive additional sensors. We determine the vehicle position from the velocity and yaw-rate provided by the in-vehicle sensors together with the position and attitude of the camera obtained by single photo resection. For the evaluation, we built a prototype system, acquired test data with it, and estimated the vehicle trajectory. The proposed algorithm achieves about 40% higher accuracy than a method using the in-vehicle sensors alone.
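The abstract outlines the approach but not its equations. As a rough illustration only, the Python sketch below shows one plausible reading of the pipeline: planar dead reckoning that integrates the in-vehicle speed and yaw-rate, with the dead-reckoned position periodically corrected by a camera position obtained from single photo resection (SPR). The function names and the simple weighted correction are assumptions made for this sketch, not the paper's actual fusion scheme.

import math

def dead_reckon_step(x, y, heading, speed, yaw_rate, dt):
    """One planar dead-reckoning step from in-vehicle sensor readings.

    speed    -- vehicle speed [m/s] from the in-vehicle speed sensor
    yaw_rate -- heading rate [rad/s] from the in-vehicle yaw-rate sensor
    dt       -- sampling interval [s]
    """
    heading += yaw_rate * dt             # integrate heading
    x += speed * dt * math.cos(heading)  # advance along the new heading
    y += speed * dt * math.sin(heading)
    return x, y, heading

def correct_with_spr(x, y, spr_x, spr_y, weight=0.5):
    """Blend the dead-reckoned position with the camera position from
    single photo resection. The fixed weight is a placeholder; the paper
    does not state the actual fusion rule."""
    return ((1.0 - weight) * x + weight * spr_x,
            (1.0 - weight) * y + weight * spr_y)

# Example: propagate between images, then correct when an SPR fix arrives.
x, y, heading = 0.0, 0.0, 0.0
for speed, yaw_rate in [(10.0, 0.01), (10.2, 0.02), (9.8, 0.00)]:
    x, y, heading = dead_reckon_step(x, y, heading, speed, yaw_rate, dt=0.1)
x, y = correct_with_spr(x, y, spr_x=2.95, spr_y=0.05)

In the paper's setting, the dead-reckoning loop would run at the in-vehicle sensor rate, while the SPR correction would be applied only at frames where the camera pose can be resolved.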

Acknowledgement

Supported by: National Research Foundation of Korea (NRF)
