A Study on the Detection of Fallen Workers in Shipyard Using Deep Learning

  • Park, Kyung-Min (Division of Coast guard, Mokpo National Maritime University) ;
  • Kim, Seon-Deok (Division of Coast guard, Mokpo National Maritime University) ;
  • Bae, Cherl-O (Division of Coast guard, Mokpo National Maritime University)
  • Received : 2020.03.18
  • Accepted : 2020.10.28
  • Published : 2020.10.31

Abstract

In large ships with complex structures, it is difficult to locate workers. In particular, a fallen worker is not easy to detect, which delays a rapid response. Research is therefore being conducted to detect fallen workers using cameras or devices attached to the body. Existing image-based fall detection systems are designed to detect a person's body parts, so they struggle with the varied clothing and working postures found in shipyards. In this study, the entire fall region was extracted and deep learning was used to detect fallen shipyard workers from images. The training data were obtained by staging falls aboard a ship under construction at a shipyard, and the dataset was augmented by flipping, resizing, and rotating the images. Performance was evaluated with precision, recall, accuracy, and error rate; precision improved as the amount of training data increased. Reinforcing the dataset with more varied data is expected to improve the effectiveness of camera-based fall detection models and thereby contribute to safety.
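The augmentation step described in the abstract (flipping, resizing, and rotating each training image) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scale factors and rotation angles are hypothetical choices, and the resize uses simple nearest-neighbour indexing.

```python
import numpy as np

def augment(image):
    """Generate augmented copies of one training image (H x W x C array)
    by flipping, rotating, and resizing, as described in the abstract.
    Scale factors and angles here are illustrative assumptions."""
    augmented = []
    augmented.append(np.fliplr(image))      # horizontal (left-right) flip
    for k in (1, 2, 3):                     # 90-, 180-, 270-degree rotations
        augmented.append(np.rot90(image, k))
    for scale in (0.5, 2.0):                # nearest-neighbour resize
        h, w = image.shape[:2]
        rows = (np.arange(int(h * scale)) / scale).astype(int)
        cols = (np.arange(int(w * scale)) / scale).astype(int)
        augmented.append(image[rows][:, cols])
    return augmented
```

Each original image thus yields six additional samples, which is one plausible way the paper's training set could have been enlarged without collecting new footage.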

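The four evaluation measures named in the abstract (precision, recall, accuracy, and error rate) are standard confusion-matrix quantities. A minimal sketch, with hypothetical detection counts rather than the paper's actual results:

```python
def fall_detection_metrics(tp, fp, fn, tn):
    """Compute the abstract's four metrics from confusion-matrix counts:
    tp = falls correctly detected, fp = false alarms,
    fn = missed falls, tn = correctly ignored non-falls."""
    precision = tp / (tp + fp)              # detections that were real falls
    recall = tp / (tp + fn)                 # real falls that were detected
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    error_rate = 1.0 - accuracy
    return precision, recall, accuracy, error_rate

# Hypothetical counts for illustration only (not from the paper)
p, r, a, e = fall_detection_metrics(tp=40, fp=10, fn=5, tn=45)
```

With these illustrative counts, precision is 0.8 and accuracy 0.85; the paper's observation is that precision rises as the training set grows.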
