http://dx.doi.org/10.7837/kosomes.2021.27.2.363

Machine Classification in Ship Engine Rooms Using Transfer Learning  

Park, Kyung-Min (Division of Coast guard, Mokpo National Maritime University)
Publication Information
Journal of the Korean Society of Marine Environment & Safety, Vol. 27, No. 2, 2021, pp. 363-368
Abstract
Ship engine rooms have become increasingly automated owing to advances in technology. However, many variables at sea, such as wind, waves, vibration, and equipment aging, cause loosening, cutting, and leakage that automated systems do not measure. In some cases, only one engineer is available for patrolling, which entails many risk factors in an engine room where rotating equipment operates at high temperature and high pressure. When patrolling, the engineer relies on the five senses, with a particularly high dependence on vision. We present a preliminary study toward an engine-room patrol robot that detects and reports on machinery while patrolling the engine room. Images of ship engine-room equipment were classified using a convolutional neural network (CNN). After constructing an image dataset of the ship engine room, the network was trained starting from a pre-trained CNN model. The classification performance of the trained model showed high reproducibility, and the images were visualized with a class activation map. Although the results cannot be generalized because the amount of data was limited, we expect that if the data of each ship were learned through transfer learning, a model suited to the characteristics of each ship could be constructed with little expenditure of time and cost.
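
The abstract does not specify the framework, backbone, or hyperparameters used. As an illustration only, the following is a minimal sketch of the kind of transfer-learning setup it describes, assuming a Keras/TensorFlow workflow with an ImageNet-pretrained VGG16 base; the dataset directory layout, class count, and training settings are hypothetical.

    # A minimal sketch (not the authors' code): transfer learning with an
    # ImageNet-pretrained VGG16 base in Keras/TensorFlow. The backbone choice,
    # directory layout, class count, and hyperparameters are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input

    NUM_CLASSES = 5          # hypothetical number of engine-room equipment classes
    IMG_SIZE = (224, 224)    # VGG16's default input resolution

    # Load the pre-trained convolutional base and freeze its weights so that
    # only the new classification head is learned from the engine-room images.
    base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
    base.trainable = False

    # Global average pooling followed directly by a softmax layer keeps the
    # architecture compatible with class activation mapping.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Hypothetical dataset layout: one sub-folder per equipment class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "engine_room_images/train", image_size=IMG_SIZE, label_mode="categorical")
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "engine_room_images/val", image_size=IMG_SIZE, label_mode="categorical")

    # Apply the VGG16 preprocessing expected by the pre-trained weights.
    train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
    val_ds = val_ds.map(lambda x, y: (preprocess_input(x), y))

    model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the base and training only the small head is the low-cost transfer-learning step the abstract alludes to; with this head, a class activation map can be formed by weighting the last convolutional feature maps with the softmax layer's class weights.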
Keywords
Patrol robot; Ship engine room equipment; Convolutional neural network; Classification; Transfer learning