Lane Model Extraction Based on Combination of Color and Edge Information from Car Black-box Images

  • Liang, Han (Dept. of Civil Engineering, Kyungpook National University)
  • Seo, Suyoung (Dept. of Civil Engineering, Kyungpook National University)
  • Received : 2021.01.04
  • Accepted : 2021.02.08
  • Published : 2021.02.28

Abstract

This paper presents a procedure for extracting lane line models using a set of proposed methods. Firstly, an image warping method based on homography is proposed to transform a target image into a view in which lane pixels can be found efficiently within a limited region. Secondly, a method that combines the results of edge detection and an HSL (Hue, Saturation, and Lightness) transform is proposed to detect lane candidate pixels reliably. Thirdly, erroneous candidate lane pixels are eliminated using a selection area method. Fourthly, a method for fitting the lane pixels to quadratic polynomials is proposed. To test the validity of the proposed procedure, a set of black-box images captured under varying illumination and noise conditions was used. The experimental results show that the proposed procedure overcomes the problems of color-only and edge-only methods and can extract lane pixels and model the lane line geometry in less than 0.6 seconds per frame on a low-cost computing environment.
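
Although the paper's implementation details are not reproduced here, the pipeline outlined in the abstract maps naturally onto common OpenCV operations. The following is a minimal illustrative sketch, not the authors' code: the warp corner points, the HLS thresholds, the Canny parameters, and the choice to OR the color and edge masks are all assumptions, and the selection-area filtering step is omitted.

```python
# Illustrative sketch of a homography warp + color/edge lane-candidate mask
# + quadratic fit. Thresholds and corner points are assumed values, not the
# parameters reported in the paper.
import cv2
import numpy as np

def warp_to_top_view(frame, src_pts, dst_pts):
    """Warp the road region to a bird's-eye view via a homography."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h)), H

def lane_candidate_mask(warped):
    """Combine color evidence (HLS lightness/saturation) with edge evidence."""
    hls = cv2.cvtColor(warped, cv2.COLOR_BGR2HLS)
    l_ch, s_ch = hls[:, :, 1], hls[:, :, 2]
    color_mask = ((l_ch > 200) | (s_ch > 120)).astype(np.uint8)  # assumed thresholds
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    edge_mask = (cv2.Canny(gray, 50, 150) > 0).astype(np.uint8)  # assumed thresholds
    # Combination operator is an assumption; the paper only states that the
    # two results are combined. Selection-area filtering would follow here.
    return cv2.bitwise_or(color_mask, edge_mask)

def fit_quadratic(mask):
    """Fit x = a*y^2 + b*y + c to the candidate lane pixels (warped coords)."""
    ys, xs = np.nonzero(mask)
    if len(ys) < 3:
        return None
    return np.polyfit(ys, xs, 2)  # coefficients (a, b, c)
```

A per-frame driver along these lines would warp the frame, build the candidate mask, remove erroneous pixels with the selection-area step, fit one quadratic per lane line in the warped coordinates, and map the fitted curve back to the original image with the inverse homography.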
