Automatic Generation of Land Cover Map Using Residual U-Net

Residual U-Net을 이용한 토지피복지도 자동 제작 연구

  • 유수홍 (Department of Civil and Environmental Engineering, Graduate School, Yonsei University) ;
  • 이지상 (Integrated M.S./Ph.D. course, Department of Civil and Environmental Engineering, Graduate School, Yonsei University) ;
  • 배준수 (Integrated M.S./Ph.D. course, Department of Civil and Environmental Engineering, Graduate School, Yonsei University) ;
  • 손홍규 (Department of Civil and Environmental Engineering, Graduate School, Yonsei University)
  • Received : 2020.04.08
  • Accepted : 2020.06.19
  • Published : 2020.10.01

Abstract

Land cover maps for all of Korea have been produced from satellite and aerial images by the Ministry of Environment since 1998. Despite their wide application in many sectors, their use in the research community is limited, mainly because the map compilation cycle varies widely across regions. This calls for a new, faster methodology for generating land cover maps. This study was therefore conducted to generate land cover maps automatically from aerial ortho-images and Landsat 8 satellite images. The input aerial and Landsat 8 images were used to train Residual U-Net, a deep learning-based segmentation technique. The experiments were divided into three groups: the first used all seven level-I (major) classes defined in the land cover map, while the second and third additionally included some level-II (medium) classes. The first group, using all seven classes, achieved a classification accuracy of 86.6 %, and the two groups that included level-II classes achieved 71 %. Based on these results, the feasibility of deep learning-based automatic classification of the level-I categories is presented.

Since 1998, the Ministry of Environment has produced and distributed land cover maps based on satellite and aerial images, but their usability is reduced because the production cycle differs by region. This study therefore investigated the automatic generation of land cover maps from aerial ortho-images and Landsat 8 satellite images. To produce the maps automatically, Residual U-Net, a deep learning-based segmentation method, was employed. Aerial and satellite images acquired as close as possible to the production date of the land cover map were used to train the network, and the results were divided into three experimental groups and compared against the land cover map for accuracy assessment. The first group, which used all seven level-I (major) classes, achieved a classification accuracy of 86.6 %, an improvement even over a previous study that covered only four major classes. The two experimental groups that included some level-II (medium) classes achieved an accuracy of 71 %. These results demonstrate the feasibility of automatic classification of the level-I categories with a neural network, and the study can serve as a basis for further research on level-II and level-III classification.
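The paper does not publish its evaluation code, but the accuracy assessment it describes is a pixel-wise comparison between the predicted segmentation and the reference land cover map. The sketch below shows one minimal way such an assessment could be computed, assuming both maps are arrays of class indices; the seven-class list and the toy label maps are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical stand-ins for the seven level-I (major) land cover
# classes; the actual class scheme is defined by the Ministry of
# Environment land cover map.
CLASSES = ["urban", "agriculture", "forest", "grassland",
           "wetland", "barren", "water"]

def confusion_matrix(reference, predicted, n_classes):
    """Pixel-wise confusion matrix between a reference land cover map
    and a predicted segmentation (both arrays of class indices)."""
    ref = np.asarray(reference).ravel()
    pred = np.asarray(predicted).ravel()
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (ref, pred), 1)  # count each (reference, predicted) pair
    return cm

def overall_accuracy(cm):
    """Fraction of pixels whose predicted class matches the reference."""
    return np.trace(cm) / cm.sum()

# Toy 4x4 label maps standing in for one reference map tile and the
# corresponding network prediction.
ref = np.array([[0, 0, 2, 2],
                [0, 0, 2, 2],
                [6, 6, 3, 3],
                [6, 6, 3, 3]])
pred = ref.copy()
pred[0, 1] = 1  # one misclassified pixel out of 16

cm = confusion_matrix(ref, pred, len(CLASSES))
print(f"overall accuracy: {overall_accuracy(cm):.4f}")  # prints 0.9375
```

The confusion matrix also yields per-class (producer's/user's) accuracies from its rows and columns, which is how class-wise performance differences between the level-I and level-II experiments could be inspected.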


References

  1. Audebert, N., Le Saux, B. and Lefevre, S. (2016). "Semantic segmentation of earth observation data using multimodal and multi-scale deep networks." Asian Conference on Computer Vision, Springer, Taipei, Taiwan, pp. 180-196.
  2. Buslaev, A., Seferbekov, S., Iglovikov, V. and Shvets, A. (2018). "Fully convolutional network for automatic road extraction from satellite imagery." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, Utah, pp. 207-210.
  3. Filin, O. and Zapara, A. (2018). "Road detection with EOSResUNet and post vectorizing algorithm." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, Utah, pp. 201-205.
  4. Ghosh, A., Ehrlich, M., Shah, S., Davis, L. and Chellappa, R. (2018). "Stacked U-Nets for ground material segmentation in remote sensing imagery." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, Utah, pp. 257-261.
  5. Jo, W. H., Lim, Y. H. and Park, K. H. (2019). "Deep learning based land cover classification using convolutional neural network: a case study of Korea." The Korean Geographical Society, Vol. 54, No. 1, pp. 1-16 (in Korean).
  6. Kampffmeyer, M., Salberg, A. B. and Jenssen, R. (2016). "Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, pp. 1-9.
  7. Lee, S. H. and Kim, J. S. (2019). "Land cover classification using sematic image segmentation with deep learning." Korean Journal of Remote Sensing, Vol. 35, No. 2, pp. 279-288 (in Korean). https://doi.org/10.7780/KJRS.2019.35.2.7
  8. National Environment Information Network System (2019). Land cover map, Available at: http://www.neins.go.kr/gis/mnu01/doc03a.asp (Accessed: February 03, 2020) (in Korean).
  9. Ronneberger, O., Fischer, P. and Brox, T. (2015). "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Germany, pp. 234-241.
  10. Seo, K. H., Oh, C. W., Kim, D., Lee, M. Y. and Yang, Y. J. (2019). "An empirical study on automatic building extraction from aerial images using a deep learning algorithm." Proceedings of Korean Society for Geospatial Information Science, Republic of Korea, pp. 243-252 (in Korean).
  11. Szegedy, C., Ioffe, S., Vanhoucke, V. and Alemi, A. A. (2017). "Inception-v4, inception-resnet and the impact of residual connections on learning." Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, pp. 4278-4284.
  12. Xu, Y., Wu, L., Xie, Z. and Chen, Z. (2018). "Building extraction in very high resolution remote sensing imagery using deep learning and guided filters." Remote Sensing, Vol. 10, No. 1, 144. https://doi.org/10.3390/rs10010144
  13. Zhang, Z., Liu, Q. and Wang, Y. (2018). "Road extraction by deep residual U-Net." IEEE Geoscience and Remote Sensing Letters, Vol. 15, No. 5, pp. 749-753. https://doi.org/10.1109/lgrs.2018.2802944