http://dx.doi.org/10.7780/kjrs.2018.34.6.3.8

The Method for Colorizing SAR Images of Kompsat-5 Using Cycle GAN with Multi-scale Discriminators  

Ku, Wonhoe (Department of Aerospace System Engineering, Korea University of Science and Technology)
Chun, Daewon (Department of Aerospace System Engineering, Korea University of Science and Technology)
Publication Information
Korean Journal of Remote Sensing / v.34, no.6_3, 2018, pp. 1415-1425
Abstract
Kompsat-5 is the first Korean Earth observation satellite equipped with a SAR. SAR images are generated by receiving the signals that microwaves emitted from the SAR antenna reflect off objects. Because the wavelengths of microwaves are longer than the size of particles in the atmosphere, SAR can penetrate clouds and fog, and high-resolution images can be obtained day and night alike. However, SAR images contain no color information. To overcome this limitation, colorization of SAR images was conducted using Cycle GAN, a deep learning model developed for domain translation. Training of Cycle GAN is unstable because it performs unsupervised learning on an unpaired dataset. Therefore, in this paper we propose MS Cycle GAN, which applies a multi-scale discriminator to mitigate the training instability of Cycle GAN and to improve colorization performance. To compare the colorization performance of MS Cycle GAN and Cycle GAN, images generated by both models were evaluated qualitatively and quantitatively. Training Cycle GAN with the multi-scale discriminator significantly reduces the generator and discriminator losses compared to the conventional Cycle GAN, and we found that the images generated by MS Cycle GAN match well the characteristics of regions such as leaves, rivers, and land.
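The abstract describes attaching a discriminator at several image scales so that each one judges the colorized output at a different resolution. The paper's exact architecture is not reproduced here; the sketch below is a minimal numpy illustration of the idea, assuming 2x2 average pooling between scales and a toy per-pixel logistic score in place of a real convolutional (PatchGAN-style) discriminator. The names `avg_pool2`, `patch_scores`, and `multiscale_d_loss`, and the scalar `(w, b)` parameters per scale, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def avg_pool2(img):
    """Downsample an H x W array by 2 using 2x2 average pooling."""
    h, w = img.shape
    return img[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def patch_scores(img, w, b):
    """Toy stand-in for a patch discriminator: a per-pixel logistic score.
    (A real discriminator would use strided convolutions over patches.)"""
    return 1.0 / (1.0 + np.exp(-(w * img + b)))

def multiscale_d_loss(real, fake, params):
    """Binary cross-entropy discriminator loss summed over one
    discriminator per scale; each successive discriminator sees the
    images downsampled one step further."""
    loss = 0.0
    for w, b in params:                      # one (w, b) pair per scale
        loss += -np.mean(np.log(patch_scores(real, w, b) + 1e-8))        # real -> 1
        loss += -np.mean(np.log(1.0 - patch_scores(fake, w, b) + 1e-8))  # fake -> 0
        real, fake = avg_pool2(real), avg_pool2(fake)
    return loss

rng = np.random.default_rng(0)
real, fake = rng.random((64, 64)), rng.random((64, 64))
three_scale = multiscale_d_loss(real, fake, [(1.0, 0.0)] * 3)
```

In this toy form the multi-scale loss is simply the sum of per-scale losses, so every scale contributes a gradient signal; in the paper this is what stabilizes training relative to a single full-resolution discriminator.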
Keywords
SAR; KOMPSAT-5; Colorization; Deep Learning; Satellite Image Processing;
Citations & Related Records
  • Reference
1 Argenti, F., A. Lapini, T. Bianchi, and L. Alparone, 2013. A tutorial on speckle reduction in synthetic aperture radar images, IEEE Geoscience and Remote Sensing Magazine, 1(3): 6-35.
2 Deng, Q., Y. Chen, W. Zhang, and J. Yang, 2008. Colorization for Polarimetric SAR image based on scattering mechanisms, Proc. of 2008 Congress on Image and Signal Processing, Las Vegas, NV, Mar. 30-Apr. 4, vol. 1, pp. 697-701.
3 Durugkar, I., I. Gemp, and S. Mahadevan, 2016. Generative multi-adversarial networks, arXiv preprint arXiv:1611.01673.
4 Goodfellow, I., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014. Generative Adversarial Nets, Proc. of 2014 Advances in Neural Information Processing Systems, Montreal, Quebec, Dec. 8-13, vol. 1, pp. 2672-2680.
5 Goodfellow, I., 2016. NIPS 2016 tutorial: Generative adversarial networks, arXiv preprint arXiv:1701.00160.
6 Google, 2018. https://lp.google-mkto.com/Google-imagery.html, Accessed on Dec. 12, 2018.
7 Kalal, Z., K. Mikolajczyk, and J. Matas, 2010. Forward-backward error: Automatic detection of tracking failures, Proc. of 2010 International Conference on Pattern Recognition, Istanbul, Turkey, Aug. 23-Aug. 26, vol. 1, pp. 2756-2759.
8 LeCun, Y., 1998. The MNIST database of handwritten digits, http://yann.lecun.com/exdb/mnist/, Accessed on Nov. 29, 2018.
9 Mirza, M. and S. Osindero, 2014. Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784.
10 NASA, 2013. https://eospso.nasa.gov/missions/seasat-1, Accessed on Oct. 30, 2018.
11 Mao, X., Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, 2017. Least Squares Generative Adversarial Networks, Proc. of 2017 International Conference on Computer Vision, Venice, Italy, Oct. 22-Oct. 29, vol. 1, pp. 2794-2802.
12 Isola, P., J. Zhu, T. Zhou, and A. A. Efros, 2017. Image-to-Image Translation with Conditional Adversarial Networks, Proc. of 2017 International Conference on Computer Vision and Pattern Recognition, Honolulu, HI, Jul. 22-Jul. 25, vol. 1, pp. 5967-5976.
13 Radford, A., L. Metz, and S. Chintala, 2015. Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv preprint arXiv:1511.06434.
14 Song, Q., F. Xu, and Y. Jin, 2018. Radar Image Colorization: Converting Single-Polarization to Fully Polarimetric Using Deep Neural Networks, IEEE Access, 6: 1647-1661.
15 He, K., X. Zhang, S. Ren, and J. Sun, 2016. Deep residual learning for image recognition, Proc. of 2016 International Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, Jun. 26-Jul. 1, vol. 1, pp. 770-778.
16 Space-Track, 2013. https://www.space-track.org/#catalog, Accessed on Nov. 29, 2018.
17 Sundaram, N., T. Brox, and K. Keutzer, 2010. Dense point trajectories by GPU-accelerated large displacement optical flow, Proc. of 2010 European Conference on Computer Vision, Heraklion, Greece, Sep. 5-Sep. 11, vol. 1, pp. 438-451.
18 Wang, T. C., M. Y. Liu, J. Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, 2017. High-resolution image synthesis and semantic manipulation with conditional GANs, arXiv preprint arXiv:1711.11585.
19 Zhu, J. Y., T. Park, P. Isola, and A. A. Efros, 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, Proc. of 2017 International Conference on Computer Vision, Venice, Italy, Oct. 22-Oct. 29, vol. 1, pp. 2242-2251.