
Research on the Efficiency of Classification of Traffic Signs Using Transfer Learning

  • Kim, June Seok (Department of GIS Engineering, Namseoul University)
  • Hong, Il Young (Department of Spatial Information Engineering, Namseoul University)
  • Received : 2019.05.23
  • Accepted : 2019.06.18
  • Published : 2019.06.30

Abstract

In this study, we investigated how deep learning can be applied to the production of the traffic safety signs and road signs that constitute the road layer in 1/1,000 digital topographic maps and high-definition road maps. Automated classification of road traffic sign images was performed by building training data from the acquired images and applying transfer learning, a technique widely used in deep-learning image classification. The analysis showed that accuracy for warning, regulatory, indication, and supplementary signs was irregular, affected by factors such as the quality of the photographed images and the shape of the signs, whereas guide signs were classified with an accuracy above 97%. In digital map production, automatic image classification based on transfer learning is expected to see growing use in the acquisition and classification of data for various layers, including traffic safety signs.

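The transfer-learning approach described above freezes the convolutional layers of a network pre-trained on a large image corpus and trains only a small classifier head on the new sign categories. The following framework-free sketch illustrates that idea on synthetic data; the fixed random projection is a stand-in (an assumption for illustration) for the frozen pre-trained layers of a real model such as Inception-v3, and all names and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_extractor(images, W_frozen):
    """Stand-in for frozen pre-trained layers: a fixed projection plus ReLU.
    W_frozen is never updated during training, mirroring frozen weights."""
    return np.maximum(images @ W_frozen, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(feats, labels, n_classes, lr=0.5, epochs=200):
    """Train only the classifier head (multinomial logistic regression)
    on the frozen features, via batch gradient descent."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        p = softmax(feats @ W)
        W -= lr * feats.T @ (p - onehot) / len(feats)  # cross-entropy gradient
    return W

# Synthetic "sign images": two classes with different mean pixel patterns.
n, d, n_classes = 200, 64, 2
labels = rng.integers(0, n_classes, n)
images = rng.normal(0.0, 1.0, (n, d)) + labels[:, None] * 1.5

W_frozen = rng.normal(0.0, 1.0 / np.sqrt(d), (d, 32))  # frozen, never updated
feats = frozen_extractor(images, W_frozen)
W_head = train_head(feats, labels, n_classes)

pred = softmax(feats @ W_head).argmax(axis=1)
accuracy = (pred == labels).mean()
```

Because only the small head matrix is optimized, training needs far fewer labeled sign images than training a full network from scratch, which is the practical appeal of transfer learning in mapping workflows.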



Fig. 1. Research flow chart


Fig. 2. Types of traffic signs


Fig. 3. Types of road signs


Fig. 4. Ground photography


Fig. 5. Data preprocessing


Fig. 6. Model accuracy


Fig. 7. Predicted images for dataset 1


Fig. 8. Predicted images for dataset 2

Table 1. Parameter values for ImageDataGenerator

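Table 1 lists the augmentation parameters the study supplied to Keras's `ImageDataGenerator`. As a hedged illustration of what such a configuration looks like, the fragment below uses plausible placeholder values; the actual values are those in Table 1, not these. Note that horizontal flips are deliberately avoided, since mirroring can change a sign's meaning.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation config; parameter values are assumptions,
# not the ones reported in Table 1.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    rotation_range=15,       # small random rotations (signs stay upright)
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    zoom_range=0.1,          # random zoom in/out
    fill_mode="nearest",     # fill pixels exposed by the transforms
)

# Typical usage: stream augmented batches from a directory of sign images,
# one subdirectory per class (hypothetical path and sizes).
# train_iter = datagen.flow_from_directory(
#     "signs/train", target_size=(299, 299), batch_size=32,
#     class_mode="categorical",
# )
```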

Table 2. Training dataset quantity


Table 3. Predicted result for dataset 1


Table 4. Predicted result for dataset 2

