Restoration of Missing Data in Satellite-Observed Sea Surface Temperature using Deep Learning Techniques

  • Received : 2023.09.08
  • Accepted : 2023.10.27
  • Published : 2023.10.31

Abstract

Satellites represent cutting-edge technology, offering significant advantages in spatial and temporal observations. National agencies worldwide harness satellite data to respond to marine accidents and to analyze ocean variability effectively. However, challenges arise with high-resolution satellite-based sea surface temperature data (Operational Sea Surface Temperature and Sea Ice Analysis, OSTIA), in which gaps or empty areas occur due to satellite instrumentation errors, geolocation errors, and cloud cover, and these gaps can take several hours to rectify. This study addressed the problem of missing OSTIA data by employing LaMa, a recent deep learning-based inpainting algorithm, and evaluated its performance against three existing image processing techniques. Judged by the coefficient of determination (R2) and the mean absolute error (MAE), the LaMa algorithm showed superior performance, consistently achieving R2 values of 0.9 or higher and MAE values of 0.5 ℃ or less, outperforming the traditional bilinear interpolation, bicubic interpolation, and DeepFill v1 techniques. We plan to evaluate the feasibility of integrating the LaMa technique into an operational satellite data provision system.
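As an illustration of the evaluation described above, the following minimal Python sketch fills a synthetic, cloud-masked sea surface temperature field with bilinear- and bicubic-style interpolation baselines and scores the restoration with R2 and MAE computed over the missing pixels only. The synthetic field, the rectangular cloud mask, and the use of SciPy's griddata are illustrative assumptions rather than the study's actual data or code, and the LaMa and DeepFill v1 models themselves are not reproduced here.

import numpy as np
from scipy.interpolate import griddata

def fill_missing(sst, mask, method="linear"):
    # Interpolate missing SST pixels from the valid neighbours.
    # method="linear" approximates bilinear filling; "cubic" approximates bicubic.
    rows, cols = np.indices(sst.shape)
    valid = ~mask
    points = np.column_stack([rows[valid], cols[valid]])
    filled = griddata(points, sst[valid], (rows, cols), method=method)
    # Fall back to nearest-neighbour values where the interpolant is undefined.
    nearest = griddata(points, sst[valid], (rows, cols), method="nearest")
    return np.where(np.isnan(filled), nearest, filled)

def evaluate(truth, restored, mask):
    # R2 and MAE computed only over the originally missing pixels.
    y, y_hat = truth[mask], restored[mask]
    mae = np.mean(np.abs(y - y_hat))
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, mae

# Smooth synthetic SST field in degrees Celsius standing in for an OSTIA tile (assumption).
yy, xx = np.mgrid[0:128, 0:128] / 128.0
truth = 15.0 + 5.0 * np.sin(2.0 * np.pi * xx) * np.cos(np.pi * yy)

# Rectangular "cloud" gap used as the missing-data mask (assumption).
mask = np.zeros_like(truth, dtype=bool)
mask[40:80, 50:90] = True
observed = truth.copy()
observed[mask] = np.nan

for method in ("linear", "cubic"):
    restored = fill_missing(observed, mask, method=method)
    r2, mae = evaluate(truth, restored, mask)
    print(f"{method}: R2 = {r2:.3f}, MAE = {mae:.3f} degC")

A deep learning inpainting model such as LaMa would take the place of fill_missing in this comparison; the study reports that, on real cloud-contaminated OSTIA scenes, LaMa yields higher R2 and lower MAE than these interpolation baselines and DeepFill v1.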

Acknowledgement

This work was supported by the research project 'Development of Ocean Satellite Imagery Analysis and Utilization Technology (20210046)' funded by the Korea Institute of Marine Science & Technology Promotion (KIMST).

References

  1. Chi, L., B. Jiang, and Y. Mu(2020), Fast Fourier convolution. Advances in Neural Information Processing Systems, 33, pp. 4479-4488.
  2. Cipolina-Kun, L., S. Caenazzo, and G. Mazzei(2022), Comparison of CoModGans, LaMa and GLIDE for Art Inpainting Completing MC Escher's Print Gallery. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 716-724.
  3. Drucker, H. and Y. Le Cun(1992), Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6), pp. 991-997. https://doi.org/10.1109/72.165600
  4. Gao, Y., F. Zou, W. Yang, and J. Chen(2022), The Reconstruction Method of SAR Image Ambiguous Area based on Deep Learning. 2022 3rd China International SAR Symposium (CISS), pp. 1-6.
  5. Goodfellow, I. J., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio(2014), Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680.
  6. He, K., X. Zhang, S. Ren, and J. Sun(2016), Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778.
  7. Huang, W., Y. Deng, S. Hui, and J. Wang(2023), Adaptive-Attention Completing Network for Remote Sensing Image. Remote Sensing, 15(5), 1321.
  8. Isola, P., J. Y. Zhu, T. Zhou, and A. A. Efros(2017), Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134.
  9. Johnson, J., A. Alahi, and L. Fei-Fei(2016), Perceptual losses for real-time style transfer and super-resolution. Computer Vision-ECCV 2016, pp. 694-711.
  10. Kim, E. J.(2018), Global Earth Observation Satellite Industry Prospects. Current Industrial and Technological Trends in Aerospace, 16(1), pp. 22-29.
  11. Kim, J., S. Kim, and I. Jang(2022), Shadow Removal via Cascade Large Mask Inpainting. Pacific Conference on Computer Graphics and Applications, pp. 49-50.
  12. Mescheder, L., A. Geiger, and S. Nowozin(2018), Which training methods for GANs do actually converge?. International conference on machine learning, pp. 3481-3490.
  13. Moriasi, D. N., M. W. Gitau, N. Pai, and P. Daggupati(2015), Hydrologic and water quality models: Performance measures and evaluation criteria. Transactions of the ASABE, 58(6), pp. 1763-1785. https://doi.org/10.13031/trans.58.10715
  14. Park, K., F. Sakaida, and H. Kawamura(2008), Oceanic Skin-Bulk Temperature Difference through the Comparison of Satellite-Observed Sea Surface Temperature and In-Situ Measurements. Korean Journal of Remote Sensing, 24(4), pp. 273-287. https://doi.org/10.7780/KJRS.2008.24.4.273
  15. Ross, A. and F. Doshi-Velez(2018), Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. Proceedings of the AAAI conference on artificial intelligence 32(1), pp. 1660-1669.
  16. Suvorov, R., E. Logacheva, A. Mashikhin, A. Remizova, A. Ashukha, A. Silvestrov, N. Kong, H. Goka, K. Park, and V. Lempitsky(2022), Resolution-robust large mask inpainting with Fourier convolutions. Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 2149-2159.
  17. Tahmasebi, P., S. Kamrava, T. Bai, and M. Sahimi(2020), Machine learning in geo- and environmental sciences: From small to large scale. Advances in Water Resources, 142, 103619.
  18. Wang, T. C., M. Y. Liu, J. Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro(2018), High-resolution image synthesis and semantic manipulation with conditional GANs. Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8798-8807.
  19. Yu, J., Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2018), Generative image inpainting with contextual attention. Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5505-5514.