A Study for Generation of Artificial Lunar Topography Image Dataset Using a Deep Learning Based Style Transfer Technique


  • Na, Jong-Ho (Department of Future & Smart Construction Research, Korea Institute of Civil Engineering and Building Technology) ;
  • Lee, Su-Deuk (Department of Future & Smart Construction Research, Korea Institute of Civil Engineering and Building Technology) ;
  • Shin, Hyu-Soung (Department of Future & Smart Construction Research, Korea Institute of Civil Engineering and Building Technology)
  • Received : 2022.02.14
  • Accepted : 2022.03.17
  • Published : 2022.04.30

Abstract

Autonomous vehicles for lunar exploration operate on lunar topography information obtained from real-time image characterization. Highly accurate topography characterization requires a large number of training images covering diverse background conditions. Because real lunar topography images are difficult to obtain, it is useful to be able to generate artificial lunar image data from the planetary analog site images and real lunar images that are available. In this study, we artificially create lunar topography images using the location information-based style transfer algorithm known as Wavelet Corrected Transfer (WCT2). We conducted comparative experiments using lunar analog site images and real lunar topography images taken during the Chinese and American lunar exploration missions (Chang'e and Apollo) to assess the efficacy of the proposed approach. The results show that the proposed technique can create realistic images that preserve the topographic information of the analog site image while appearing as if they were taken on the lunar surface. The proposed algorithm also outperforms a conventional algorithm, Deep Photo Style Transfer (DPST), in both processing time and visual quality. In future work, we intend to combine the generated style-transferred images with real image data to train models for detecting and segmenting lunar topographic objects. This approach is expected to significantly improve the performance of detection and segmentation models on real lunar topography images.

Unmanned rovers are being used for in-situ lunar exploration, and the terrain characteristics of lunar regions of interest must be accurately identified and converted into information in real time. However, accurate recognition and segmentation of topographic objects require training images taken under diverse background conditions, and building such a dataset demands considerable labor and time. In particular, because the target is the Moon, which is difficult to access, acquisition of real in-situ images is also limited, so there is a clear need to artificially generate image data that are grounded in reality yet highly realistic. In this study, a location information-based style transfer model was applied to available images taken by China's Yutu rover and the crewed Apollo landers of the United States to artificially generate synthetic images closely resembling the real lunar surface. Two publicly available algorithms suitable for this purpose (DPST and WCT2) were implemented and applied, and their performance was evaluated by comparing the results in terms of processing time and visual quality. The evaluation confirmed that visually highly realistic images can be generated while preserving the shape information of the experimental images. If the image data generated on the basis of these results are further used as training data for automatic classification and recognition of topographic objects, robust object recognition models should be achievable even on real lunar surface images.
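To make the style-transfer step concrete, the sketch below shows the whitening-and-coloring transform (WCT) that WCT2-style photorealistic style transfer builds on: encoder features of a content image (an analog test-site photo) are whitened and then re-colored with the covariance statistics of a style image (a real lunar surface photo) before being decoded back into an image. This is a minimal illustration, not the authors' implementation; it assumes PyTorch, single-image feature maps taken from the same encoder layer (e.g., a VGG-19 intermediate layer), and illustrative function names. The full WCT2 pipeline additionally replaces pooling with Haar wavelet pooling/unpooling to keep the output photorealistic.

```python
import torch

def whitening_coloring_transform(content_feat: torch.Tensor,
                                 style_feat: torch.Tensor,
                                 eps: float = 1e-5) -> torch.Tensor:
    """Apply WCT: content_feat, style_feat are (C, H, W) maps from the same encoder layer."""
    c, h, w = content_feat.shape
    fc = content_feat.reshape(c, -1)                   # (C, H*W) content features
    fs = style_feat.reshape(style_feat.shape[0], -1)   # (C, H'*W') style features

    # Center both feature sets.
    fc_mean = fc.mean(dim=1, keepdim=True)
    fs_mean = fs.mean(dim=1, keepdim=True)
    fc = fc - fc_mean
    fs = fs - fs_mean

    # Whitening: remove the content features' covariance structure.
    cov_c = fc @ fc.t() / (fc.shape[1] - 1) + eps * torch.eye(c)
    e_c, v_c = torch.linalg.eigh(cov_c)
    whiten = v_c @ torch.diag(e_c.clamp(min=eps).rsqrt()) @ v_c.t()
    fc_white = whiten @ fc

    # Coloring: impose the lunar (style) image's feature covariance and mean.
    cov_s = fs @ fs.t() / (fs.shape[1] - 1) + eps * torch.eye(fs.shape[0])
    e_s, v_s = torch.linalg.eigh(cov_s)
    color = v_s @ torch.diag(e_s.clamp(min=eps).sqrt()) @ v_s.t()
    fcs = color @ fc_white + fs_mean

    return fcs.reshape(c, h, w)


if __name__ == "__main__":
    # Toy usage with random 512-channel feature maps standing in for encoder outputs.
    content = torch.randn(512, 32, 32)   # analog-site image features (hypothetical)
    style = torch.randn(512, 40, 40)     # real lunar image features (hypothetical)
    stylized = whitening_coloring_transform(content, style)
    print(stylized.shape)                # torch.Size([512, 32, 32])
```

The transferred feature map keeps the spatial layout of the content features (and hence the terrain shapes of the analog site) while matching the second-order statistics of the lunar style features, which is the mechanism the abstract refers to when it says topographic information is preserved while the appearance matches the lunar surface.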

Keywords

Acknowledgement

This study was carried out with research operating funds (major project) from the Korea Institute of Civil Engineering and Building Technology, funded by the Ministry of Science and ICT (Project No. 20220124-001, (22주요-대2-BIG) Development of infrastructure for simulating extreme construction environments and core technologies for extreme construction above TRL 6 (7/9)).

References

  1. Basilevsky, A.T., Abdrakhimov, A.M., Head, J.W., Pieters, C.M., Wu, Y. and Xiao, L., 2015, "Geologic characteristics of the Luna 17/Lunokhod 1 and Chang'E-3/Yutu landing sites, Northwest Mare Imbrium of the Moon," Planetary and Space Science, 117: 385-400. https://doi.org/10.1016/j.pss.2015.08.006
  2. Bengio, Y., 2009, "Learning deep architectures for AI," Now Publishers Inc.
  3. Deng, J., Dong, W., Socher, R., Li, L. and Fei-Fei, L., 2009, "ImageNet: A large-scale hierarchical image database," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255.
  4. Di, K., Liu, Z., Wan, W., Peng, M., Liu, B., Wang, Y., Gou, S. and Yue, Z., 2020, "Geospatial technologies for Chang'e-3 and Chang'e-4 lunar rover missions," Geo-spatial Information Science, 23(1): 87-97. https://doi.org/10.1080/10095020.2020.1718002
  5. Gatys, L.A., Ecker, A.S. and Bethge, M., 2016, "Image style transfer using convolutional neural networks," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423.
  6. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B. and Hochreiter, S., 2017, "GANs trained by a two time-scale update rule converge to a local Nash equilibrium," In Proceedings of the NIPS, vol. 30.
  7. Hong, S. and Shin, H., 2018, "Trend analysis of lunar exploration missions for lunar base construction," Journal of the Korea Academia-Industrial Cooperation Society, 19(7): 144-152. https://doi.org/10.5762/KAIS.2018.19.7.144
  8. Hong, S., Bangunharcana, A., Park, J.M., Choi, M. and Shin, H.S., 2021, "Visual SLAM-Based Robotic Mapping Method for Planetary Construction," Sensors, 21(22): 7715-7732. https://doi.org/10.3390/s21227715
  9. Hong, S., Chung, T., Park, J. and Shin, H., 2019, "Research on development of construction spatial information technology, using rover's camera system," Journal of the Korea Academia-Industrial cooperation Society, 20(7): 630-637. https://doi.org/10.5762/KAIS.2019.20.7.630
  10. Ju, G., 2016, "Development status of domestic & overseas space exploration & associated technology," Journal of the Korean Society for Aeronautical & Space Sciences, 44(8): 741-757. https://doi.org/10.5139/JKSAS.2016.44.8.741
  11. Labelme, 2021, https://github.com/wkentaro/labelme.
  12. Lee, H., Park, J., Hong, S. and Shin, H., 2020, "Low Light Image Enhancement to Construct Terrain Information from Permanently Shadowed Region on the Moon," Journal of Korean Society for Geospatial Information Science, 28(4): 41-48.
  13. Lee, K., Hong, S., Park, J. and Shin, H., 2019, "Deep-learning based automatic recognition and digitization of targeted objects/regions on analogue lunar surface", KSCE Conference of Civil and Environmental Engineering Research, pp. 1297-1298.
  14. Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X. and Yang, M.H., 2017, "Universal style transfer via feature transforms," In Proceedings of the NIPS, pp. 386-396.
  15. Luan, F., Paris, S., Shechtman, E. and Bala, K., 2017, "Deep photo style transfer," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4990-4998.
  16. NASA, 2021, Apollo 11 Mission Overview, https://www.nasa.gov/mission_pages/apollo/missions/apollo11.html.
  17. Park, J., Hong, S., Choi, K., Kim, C. and Shin, H., 2020, "Experiment on calibration of multi-camera system mounted on rover for extreme region exploration." Journal of Korean Society for Geospatial Information Science, 28(2): 21-28.
  18. Simonyan, K. and Zisserman, A., 2014, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556.
  19. Stankovic, R.S. and Falkowski, B.J., 2003, "The Haar wavelet transform: its status and achievements," Computers & Electrical Engineering, 29(1):25-44. https://doi.org/10.1016/S0045-7906(01)00011-8
  20. Williams, T. and Li, R., 2018, "Wavelet pooling for convolutional neural networks," In International Conference on Learning Representations.
  21. Yijun, L., Chen, F., Jimei, Y., Zhaowen, W., Xin, L. and Ming-Hsuan, Y., 2017, "Universal style transfer via feature transforms," In Proceedings of the Neural Information Processing Systems.
  22. Yoo, J., Uh, Y., Chun, S., Kang, B. and Ha, J.W., 2019, "Photorealistic style transfer via wavelet transforms," In Proceedings of the IEEE International Conference on Computer Vision, pp. 9036-9045.