3D Visualization and Work Status Analysis of Construction Site Objects

  • Junghoon Kim (Department of Civil and Environmental Engineering, Seoul National University) ;
  • Insoo Jeong (Department of Civil and Environmental Engineering, Seoul National University) ;
  • Seungmo Lim (Department of Civil and Environmental Engineering, Seoul National University) ;
  • Jeongbin Hwang (Institute of Construction and Environmental Engineering (ICEE)) ;
  • Seokho Chi (Department of Civil and Environmental Engineering, Seoul National University; Institute of Construction and Environmental Engineering (ICEE))
  • Published : 2024.07.29

Abstract

Construction site monitoring is pivotal for overseeing project progress, ensuring that projects are completed as planned, within budget, and in compliance with applicable laws and safety standards; it also seeks to improve operational efficiency for better project execution. To this end, many researchers have applied computer vision technologies to automate site monitoring and analyze the operational status of equipment. However, most existing studies estimate real-world 3D information (e.g., object tracking, work status analysis) from only the 2D pixel information of images. This approach poses a substantial challenge in the dynamic environments of construction sites, as analytical rules and thresholds must be manually recalibrated whenever the placement or field of view of a camera changes. To address these challenges, this study introduces a novel method for 3D visualization and status analysis of construction site objects using 3D reconstruction technology. The method analyzes equipment's operational status by acquiring 3D spatial information of equipment from single-camera images, using the SAM-Track model for object segmentation and the One-2-3-45 model for 3D reconstruction. The framework consists of three main processes: (i) single-image-based 3D reconstruction, (ii) 3D visualization, and (iii) work status analysis. Experimental results on a construction site video demonstrated the method's feasibility and satisfactory performance, achieving high status-analysis accuracy for excavators (93.33%) and dump trucks (98.33%). This research provides a more consistent method for analyzing working status, making it suitable for practical field applications and opening new directions for vision-based 3D information analysis. Future studies will apply the method to longer videos and diverse construction sites and compare its performance with existing 2D pixel-based methods.
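The advantage of recovering 3D spatial information is that work-status rules can be expressed in world units (metres) instead of pixels, so they need not be retuned per camera. The paper does not publish its rule set, but the idea can be sketched as follows; `classify_status` and the 0.05 m displacement threshold are hypothetical names chosen here for illustration, assuming each piece of equipment is reduced to a per-frame 3D centroid after reconstruction:

```python
import math

def classify_status(centroids, threshold_m=0.05):
    """Label each frame transition 'working' or 'idle' from the 3D
    displacement of an object's centroid (in metres).

    centroids   -- list of (x, y, z) positions, one per frame
    threshold_m -- illustrative minimum displacement counted as motion
    """
    statuses = []
    for p0, p1 in zip(centroids, centroids[1:]):
        # Euclidean distance in world coordinates, not pixels, so the
        # same threshold applies regardless of camera placement.
        displacement = math.dist(p0, p1)
        statuses.append("working" if displacement > threshold_m else "idle")
    return statuses

# Example: an excavator that swings, then pauses.
track = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (0.31, 0.1, 0.0)]
print(classify_status(track))  # ['working', 'idle']
```

Because the threshold is metric, it stays valid when the camera is moved or zoomed, which is exactly the recalibration burden the 2D pixel-based approaches carry.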

Acknowledgement

This work was supported by the National R&D Project for Smart Construction Technology (23SMIPA158708-04) funded by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure, and Transport. This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00241758 and No. 2021R1A2C2003696).
