Augmented Reality Framework to Visualize Information about Construction Resources Based on Object Detection

  • Lee, Yong-Ju (Department of Civil and Environmental Engineering, Myongji University)
  • Park, Man-Woo (Department of Civil and Environmental Engineering, Myongji University)
  • Song, Eun-Seok (Smart Construction Project Division, Korea Expressway Corporation)
  • Received : 2021.08.24
  • Accepted : 2021.09.14
  • Published : 2021.09.30

Abstract

Augmented reality (AR) has recently become an attractive technology in the construction industry and can play a critical role in realizing smart construction concepts. AR has great potential to help construction workers access digitized information about design and construction more flexibly and efficiently. Although several AR applications have been introduced to enhance on-site and off-site tasks, few are utilized on actual construction sites. This paper proposes a new AR framework that provides on-site managers with an opportunity to easily access information about construction resources such as workers and equipment. The framework records video with the camera installed on a wearable AR device and streams the video to a server equipped with high-performance processors, which runs an object detection algorithm on the streamed video in real time. The detection results are sent back to the AR device so that menu buttons are visualized on the detected objects in the user's view. A user can access the information about a worker or a piece of equipment appearing in his or her view by touching the menu button visualized on that resource. This paper details the implementation of the framework, which requires data transmission between the AR device and the server. It also discusses the accompanying issues and the feasibility of the proposed framework in detail.
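
As a rough sketch of the server side described above, the snippet below illustrates how individual frames received from the AR device could be passed through an object detector and the resulting bounding boxes returned as JSON for the client to anchor its menu buttons on. It is only an illustrative sketch, not the implementation reported in the paper: it assumes per-frame HTTP posting rather than continuous video streaming, and the /detect route, the YOLOv4 model files, and the label set are placeholders.

```python
# Minimal sketch of a server-side detection endpoint (assumptions: the AR
# device posts individual JPEG frames over HTTP; model files, route name,
# and label set are placeholders, not the authors' actual configuration).
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load a YOLOv4-style model through OpenCV's DNN module (paths are placeholders).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

# Hypothetical label set; it must match the classes the model was trained on.
CLASS_NAMES = ["worker", "excavator", "dump_truck"]


@app.route("/detect", methods=["POST"])
def detect():
    # Decode the JPEG frame sent by the wearable AR device.
    frame = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)

    # Run object detection with confidence and NMS thresholds.
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

    # Package the detections so the AR client can place menu buttons on them.
    results = []
    for class_id, score, box in zip(np.array(class_ids).flatten(),
                                    np.array(scores).flatten(), boxes):
        results.append({
            "label": CLASS_NAMES[int(class_id)],
            "score": float(score),
            "box": [int(v) for v in box],  # [x, y, width, height] in pixels
        })
    return jsonify(results)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

On the wearable AR device, the returned [x, y, width, height] boxes would then be projected into the user's view so that a touchable menu button can be placed over each detected worker or piece of equipment.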

Keywords

Acknowledgement

This research was supported by the Ministry of Land, Infrastructure and Transport (MOLIT) and the Korea Agency for Infrastructure Technology Advancement (KAIA) (Smart Construction Technology Development Project, Grant No. 20SMIP-A157351-02).
