A Study on Residual U-Net for Semantic Segmentation based on Deep Learning

  • Shin, Seokyong (Department of Plasma Bio Display, Kwangwoon University) ;
  • Lee, SangHun (Ingenium College of Liberal Arts, Kwangwoon University) ;
  • Han, HyunHo (College of General Education, University of Ulsan)
  • Submitted : 2021.04.21
  • Reviewed : 2021.06.20
  • Published : 2021.06.28

Abstract

In this paper, we propose an encoder-decoder model that uses residual learning to improve the accuracy of U-Net-based semantic segmentation. U-Net is a deep learning-based semantic segmentation method widely used in applications such as autonomous driving and medical image analysis. Because of its shallow encoder, the conventional U-Net loses features during compression; this loss deprives the network of the context information needed to classify objects and lowers segmentation accuracy. To address this, the proposed method extracts context information efficiently through an encoder built on residual learning, which is effective in preventing the feature loss and vanishing-gradient problems of the conventional U-Net. In addition, the number of down-sampling operations in the encoder is reduced to limit the loss of spatial information in the feature maps. In experiments on the Cityscapes dataset, the proposed method improved segmentation results by about 12% over the conventional U-Net.
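To make the described encoder-decoder structure concrete, the following is a minimal PyTorch sketch of a U-Net-style network whose encoder uses residual blocks and fewer down-sampling stages. The layer counts, channel widths, and two-stage down-sampling are illustrative assumptions for demonstration, not the exact configuration reported in the paper.

```python
# Illustrative sketch only: a residual U-Net-style encoder-decoder in PyTorch.
# Channel widths, block counts, and the number of poolings are assumptions.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity (or 1x1-projected) shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection keeps the shortcut valid when the channel width changes.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))


class ResidualUNet(nn.Module):
    """U-Net-style model with residual encoder blocks and fewer down-samplings."""
    def __init__(self, in_ch=3, num_classes=19, widths=(64, 128, 256)):
        super().__init__()
        # Encoder: residual blocks with only two 2x2 poolings (assumption),
        # instead of the four poolings of the original U-Net.
        self.enc1 = ResidualBlock(in_ch, widths[0])
        self.enc2 = ResidualBlock(widths[0], widths[1])
        self.enc3 = ResidualBlock(widths[1], widths[2])
        self.pool = nn.MaxPool2d(2)
        # Decoder: up-sample, concatenate the encoder skip feature, refine.
        self.up2 = nn.ConvTranspose2d(widths[2], widths[1], 2, stride=2)
        self.dec2 = ResidualBlock(widths[1] * 2, widths[1])
        self.up1 = nn.ConvTranspose2d(widths[1], widths[0], 2, stride=2)
        self.dec1 = ResidualBlock(widths[0] * 2, widths[0])
        self.head = nn.Conv2d(widths[0], num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        s1 = self.enc1(x)                # full resolution
        s2 = self.enc2(self.pool(s1))    # 1/2 resolution
        b = self.enc3(self.pool(s2))     # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(b), s2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    # Cityscapes-style setting: 19 classes; small crop for a quick shape check.
    model = ResidualUNet(num_classes=19)
    out = model(torch.randn(1, 3, 256, 512))
    print(out.shape)  # torch.Size([1, 19, 256, 512])
```

The residual shortcut in each encoder block carries features and gradients around the convolutions, which is the mechanism the abstract credits for reducing feature loss and vanishing gradients, while the concatenation-based skip connections in the decoder mirror those of the original U-Net.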
