Automatic Recognition of Symbol Objects in P&IDs using Artificial Intelligence

  • Shin, Ho-Jin (School of Chemical Engineering and Materials Science, Chung-Ang University) ;
  • Jeon, Eun-Mi (School of Chemical Engineering and Materials Science, Chung-Ang University) ;
  • Kwon, Do-kyung (School of Computer Science and Engineering, Chung-Ang University) ;
  • Kwon, Jun-Seok (School of Computer Science and Engineering, Chung-Ang University) ;
  • Lee, Chul-Jin (School of Chemical Engineering and Materials Science, Chung-Ang University)
  • Published : 2021.10.30

Abstract

P&ID (Piping and Instrument Diagram) is a key drawing in the engineering industry because it contains information about the units and instrumentation of a plant. Until now, simple repetitive tasks such as listing the symbols in P&ID drawings have been done manually, consuming a great deal of time and manpower. Deep learning models based on CNNs (Convolutional Neural Networks) have been studied for object detection in drawings, but with a detection time of about 30 minutes per drawing and an accuracy of about 90%, their performance is not sufficient for deployment in the real world. In this study, symbols in a drawing are detected using a 1-stage object detection algorithm that handles region proposal and detection in a single pass. Specifically, we build the training data using an image labeling tool and present the results of recognizing symbols in drawings with the trained deep learning model.
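For illustration only, the sketch below shows how a 1-stage detector of this kind could be run over a rasterized P&ID to localize symbols in a single forward pass. It uses torchvision's SSD implementation rather than the authors' model, and the checkpoint file pid_symbols_ssd.pth, the drawing file name, and the symbol class list are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): applying a 1-stage detector (SSD from
# torchvision) to a P&ID raster image. Region proposal and classification are
# handled jointly in one forward pass, unlike 2-stage detectors.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Hypothetical symbol classes; index 0 is the background class expected by SSD.
SYMBOL_CLASSES = ["__background__", "valve", "pump", "instrument", "reducer"]

# Build the single-stage detector with a head sized for the symbol classes.
model = torchvision.models.detection.ssd300_vgg16(
    weights=None, weights_backbone=None, num_classes=len(SYMBOL_CLASSES)
)
model.load_state_dict(torch.load("pid_symbols_ssd.pth", map_location="cpu"))  # hypothetical checkpoint
model.eval()

image = to_tensor(Image.open("drawing_001.png").convert("RGB"))  # hypothetical drawing

with torch.no_grad():
    # The model returns boxes, labels, and confidence scores for one image.
    prediction = model([image])[0]

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score.item() >= 0.5:  # confidence threshold, tunable
        x1, y1, x2, y2 = box.tolist()
        print(f"{SYMBOL_CLASSES[int(label)]}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), score={score:.2f}")
```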

A P&ID (Piping and Instrument Diagram) is a core engineering drawing that condenses information about a plant's equipment and instrumentation. A single P&ID contains hundreds of pieces of information expressed as symbols, and their digitization is currently carried out by hand, which requires substantial manpower and time. Previous studies succeeded in detecting drawing objects with CNN models, but at roughly 30 minutes per drawing and a recognition rate of about 90%, the performance falls short of what field deployment requires. This study therefore proposes a 1-stage object detection algorithm that handles region detection and object recognition simultaneously. We build the training data with an open-source image labeling tool and, by training a deep learning model, propose a method for recognizing symbol images within drawings.
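Both abstracts mention building the training set with an open-source image labeling tool. The sketch below illustrates that step under the assumption that the tool writes LabelImg-style Pascal VOC XML files; the labels/ directory and the helper load_voc_annotations are illustrative names, not the authors' actual pipeline.

```python
# Minimal sketch (assumed workflow): parsing Pascal VOC XML annotations, as
# produced by open-source labeling tools such as labelImg, into (box, class)
# pairs that a detection model can train on. File and folder names are hypothetical.
import xml.etree.ElementTree as ET
from pathlib import Path

def load_voc_annotations(xml_dir):
    """Return {image_filename: [(xmin, ymin, xmax, ymax, class_name), ...]}."""
    samples = {}
    for xml_path in Path(xml_dir).glob("*.xml"):
        root = ET.parse(xml_path).getroot()
        filename = root.findtext("filename")
        boxes = []
        for obj in root.iter("object"):
            name = obj.findtext("name")  # symbol class assigned by the annotator
            bb = obj.find("bndbox")
            boxes.append((
                int(float(bb.findtext("xmin"))),
                int(float(bb.findtext("ymin"))),
                int(float(bb.findtext("xmax"))),
                int(float(bb.findtext("ymax"))),
                name,
            ))
        samples[filename] = boxes
    return samples

if __name__ == "__main__":
    annotations = load_voc_annotations("labels/")  # hypothetical annotation folder
    for image_name, boxes in annotations.items():
        print(image_name, len(boxes), "labeled symbols")
```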

Acknowledgement

This research was supported by the Seoul Industry-Academia-Research Cooperation Program (20191471).
