Machine Classification in Ship Engine Rooms Using Transfer Learning

  • Park, Kyung-Min (Division of Coast guard, Mokpo National Maritime University)
  • Received : 2020.12.16
  • Accepted : 2021.04.27
  • Published : 2021.04.30

Abstract

Ship engine rooms have become increasingly automated owing to advances in technology. At sea, however, many variables such as wind, waves, vibration, and equipment aging cause loosening, cutting, and leakage that automated systems do not measure, so engineers patrol the engine room periodically. In some cases only one engineer is available for patrolling, which entails many risk factors in an engine room where rotating equipment operates at high temperature and high pressure. When patrolling, the engineer uses his five senses, relying particularly on vision. We hereby present a preliminary study toward an engine-room patrol robot that detects and reports anomalies in equipment while patrolling the engine room. Images of ship engine-room equipment were classified using a convolutional neural network (CNN). After constructing an image dataset of the ship engine room, the network was trained from a pre-trained CNN model. The trained model showed high recall in classification, and the images were visualized with a class activation map. Although the results cannot be generalized because the amount of data was limited, we expect that if the data of each ship were learned through transfer learning, a model suited to the characteristics of each ship could be built with little expenditure of time and cost.
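The transfer-learning setup described in the abstract, freezing a pre-trained CNN and training only a new classifier head on the ship's own images, can be sketched as below. This is a minimal illustration, not the paper's actual configuration: the backbone choice (MobileNetV2), input size, and class count are assumptions for the example.

```python
import tensorflow as tf

# Assumed setup: a small number of engine-room equipment classes
# (e.g. pump, purifier, compressor); names and count are illustrative.
NUM_CLASSES = 4
IMG_SIZE = (224, 224)

# Load a CNN pre-trained on ImageNet, without its original classifier head
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor

# Attach a new head: global average pooling + softmax over the ship's classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Only the small dense head is trained on the ship-specific dataset (`model.fit(...)`), which is what keeps the time and cost low when adapting the model to a new vessel.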

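The class activation map visualization mentioned in the abstract can be sketched with the classic CAM formulation (Zhou et al., 2016): each feature map of the last convolutional layer is weighted by the dense-layer weight of the target class and summed into a coarse localization heat map. The array shapes below are toy values for illustration only.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Classic CAM: weighted sum of last-conv feature maps.

    feature_maps:  (H, W, C) activations of the last convolutional layer
    class_weights: (C,) dense weights connecting the global-average-pooled
                   features to the target class logit
    Returns a (H, W) heat map normalized to [0, 1].
    """
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for display as a heat map
    return cam

# Toy example: a 7x7 feature map with 3 channels
fmap = np.random.rand(7, 7, 3)
weights = np.array([0.5, -0.2, 0.8])
heat = class_activation_map(fmap, weights)
```

In practice the heat map is upsampled to the input image size and overlaid on the photograph, showing which regions of the equipment drove the classification.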
Acknowledgement

This paper reports research supported by a 2020 research grant from Mokpo National Maritime University.
