Developing an Occupants Count Methodology in Buildings Using Virtual Lines of Interest in a Multi-Camera Network

다중 카메라 네트워크 가상의 관심선(Line of Interest)을 활용한 건물 내 재실자 인원 계수 방법론 개발

  • 천휘경 (University of Nevada) ;
  • 박찬혁 (ITM Architects R&D Institute) ;
  • 지석호 (Department of Civil and Environmental Engineering, Seoul National University) ;
  • 노명일 (Department of Naval Architecture and Ocean Engineering, Seoul National University) ;
  • Received : 2023.07.26
  • Accepted : 2023.08.28
  • Published : 2023.10.01

Abstract

In the event of a disaster within a building, promptly and efficiently evacuating and rescuing occupants becomes the foremost priority for minimizing casualties. Such rescue operations require knowing how occupants are distributed throughout the building. In practice, however, responders depend primarily on accounts from people involved, such as building owners or security staff, and on basic data such as floor dimensions and maximum capacity. Accurately determining the number of occupants within the building is therefore crucial for reducing on-site uncertainty and supporting effective rescue activities during the golden hour. This research introduces a methodology that employs computer vision algorithms to count the occupants at distinct building locations from images captured by multiple installed CCTV cameras. The counting methodology consists of three stages: (1) establishing virtual Lines of Interest (LOIs) for each camera to construct a multi-camera network environment, (2) detecting and tracking people within the monitored area using deep learning, and (3) aggregating counts across the multi-camera network. The proposed methodology was validated through experiments conducted in a five-story building, achieving an average accuracy of 89.9%, an average MAE of 0.178, and an RMSE of 0.339; the advantages of using multiple cameras for occupant counting are also explained. By providing prompt occupancy information through common surveillance systems, this paper shows the potential of the proposed methodology for more effective and timely disaster management.
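The crossing test underlying stages (1) and (3) can be illustrated with a minimal sketch: a tracked person's centroid is counted when it changes sides of a virtual LOI. The function names, coordinates, and the sign convention (crossing toward the negative side counts as an entry) are illustrative assumptions, not the paper's actual implementation.

```python
def side_of_line(p, a, b):
    """Which side of the directed line a->b the point p lies on.

    Returns +1, -1, or 0 (exactly on the line), via the sign of the
    2-D cross product of (b - a) and (p - a).
    """
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (cross > 0) - (cross < 0)

def count_crossings(track, a, b):
    """Net LOI crossings for one tracked person.

    track: list of (x, y) centroid positions over consecutive frames.
    Convention (assumed here): a crossing onto the negative side of the
    LOI is an entry (+1); onto the positive side, an exit (-1).
    """
    net = 0
    prev = side_of_line(track[0], a, b)
    for p in track[1:]:
        cur = side_of_line(p, a, b)
        if cur != 0 and prev != 0 and cur != prev:
            net += 1 if cur < 0 else -1
        if cur != 0:
            prev = cur  # ignore frames that land exactly on the line
    return net

# Hypothetical LOI across a doorway: the vertical segment from (5, 0) to (5, 10).
a, b = (5, 0), (5, 10)
entering_track = [(2, 5), (4, 5), (6, 5), (8, 5)]  # crosses left to right once
print(count_crossings(entering_track, a, b))       # 1 (one net entry)
```

Summing these per-track nets over every camera's LOIs gives the per-zone occupancy in stage (3); a track that crosses and returns contributes zero, so brief lingering at a doorway does not inflate the count.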

When a disaster occurs in a building, rescuing the people inside quickly to minimize casualties is unquestionably the top priority. Such rescue activities require knowing where and how many people are in the building, but because this is difficult to determine in real time, responders mainly rely on statements from people involved, such as the building owner or security guards, or on basic data such as floor area and maximum capacity. It is therefore essential to obtain occupancy information quickly and accurately, reducing uncertainty about the scene and supporting efficient rescue activities within the golden hour. This study presents a methodology that uses computer vision algorithms to count occupants by building location from images captured by the multiple CCTV cameras already installed in the building. The counting methodology consists of three stages: (1) constructing a multi-camera network environment by setting a Line of Interest (LOI) for each camera, (2) detecting and tracking people within the monitored zone using deep learning, and (3) aggregating counts with consideration of the multi-camera network environment. The proposed methodology was validated through field experiments conducted in a five-story building across three time periods. The final results recognized occupants with 89.9% accuracy, and the aggregated results per floor and per zone were also strong, at 93.1% and 93.3% accuracy, respectively. The average per-floor MAE and RMSE were 0.178 and 0.339. Occupancy information provided in real time in this way can support prompt and accurate rescue activities in the initial disaster-response phase.
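The reported error metrics (MAE 0.178, RMSE 0.339 per floor) follow the standard definitions, which can be sketched as below; the ground-truth and predicted counts here are made-up numbers for illustration only, not the paper's data.

```python
import math

def mae(true_vals, pred_vals):
    """Mean absolute error between ground-truth and predicted counts."""
    return sum(abs(t - p) for t, p in zip(true_vals, pred_vals)) / len(true_vals)

def rmse(true_vals, pred_vals):
    """Root mean squared error between ground-truth and predicted counts."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true_vals, pred_vals)) / len(true_vals))

# Hypothetical per-floor ground-truth vs. estimated occupant counts.
true_counts = [4, 7, 3, 5, 6]
pred_counts = [4, 6, 3, 5, 7]
print(mae(true_counts, pred_counts))   # 0.4
print(rmse(true_counts, pred_counts))  # ~0.632
```

RMSE penalizes large per-floor miscounts more heavily than MAE, which is why both are commonly reported together for counting tasks.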

Keywords

Acknowledgement

This work was supported by the 2021 Seoul National University Interdisciplinary Research Project and by the National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Science and ICT) (No. RS-2023-20241758).

References

  1. Bochkovskiy, A., Wang, C. Y. and Liao, H. Y. M. (2020). "YOLOv4: Optimal speed and accuracy of object detection." arXiv preprint, https://doi.org/10.48550/arXiv.2004.10934. 
  2. Chae, S. U., Kwon, H. S., Park, S. R., Cho, W. H., Kwon, O. S. and Lee, J. S. (2020). "CCTV high-speed analysis algorithm for real-time monitoring of building access." Journal of the Korean Society of Hazard Mitigation, KOSHAM, Vol. 20, No. 2, pp. 113-118, https://doi.org/10.9798/KOSHAM.2020.20.2.113 (in Korean). 
  3. Denman, S., Fookes, C., Ryan, D. and Sridharan, S. (2015). "Large scale monitoring of crowds and building utilisation: A new database and distributed approach." Proceedings of 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, Karlsruhe, Germany, pp. 1-6, https://doi.org/10.1109/AVSS.2015.7301796. 
  4. Dollar, P., Wojek, C., Schiele, B. and Perona, P. (2011). "Pedestrian detection: An evaluation of the state of the art." IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, Vol. 34, No. 4, pp. 743-761, https://doi.org/10.1109/TPAMI.2011.155. 
  5. Javed, O., Shafique, K., Rasheed, Z. and Shah, M. (2008). "Modeling inter-camera space-time and appearance relationships for tracking across non-overlapping views." Computer Vision and Image Understanding, Elsevier, Vol. 109, No. 2, pp. 146-162, https://doi.org/10.1016/j.cviu.2007.01.003. 
  6. Jung, S. P., Lee, H. Y. and Kim, J. W. (2018). "A study on the development of occupant density and walking pattern measurement techniques for emergency evacuation and safety in the railroad station: Focusing on information about pedestrians' use behaviors." Journal of the Korean Society of Hazard Mitigation, KOSHAM, Vol. 18, No. 1, pp. 125-135, https://doi.org/10.9798/KOSHAM.2018.18.1.125 (in Korean). 
  7. Kalake, L., Wan, W. and Hou, L. (2021). "Analysis based on recent deep learning approaches applied in real-time multi-object tracking: a review." IEEE Access, IEEE, Vol. 9, pp. 32650-32671, https://doi.org/10.1109/ACCESS.2021.3060821. 
  8. Liu, A. S., Hsu, T. W., Hsiao, P. H., Liu, Y. C. and Fu, L. C. (2016). "The manhunt network: People tracking in hybrid-overlapping under the vertical top-view depth camera networks." Proceedings of 2016 International Conference on Advanced Robotics and Intelligent Systems (ARIS), IEEE, Taipei, Taiwan, pp. 1-6, https://doi.org/10.1109/ARIS.2016.7886632. 
  9. Maddalena, L., Petrosino, A. and Russo, F. (2014). "People counting by learning their appearance in a multi-view camera environment." Pattern Recognition Letters, Elsevier, Vol. 36, pp. 125-134, https://doi.org/10.1016/j.patrec.2013.10.006. 
  10. Melfi, R., Rosenblum, B., Nordman, B. and Christensen, K. (2011). "Measuring building occupancy using existing network infrastructure." Proceedings of 2011 International Green Computing Conference and Workshops, IEEE, Orlando, FL, USA, pp. 1-8, https://doi.org/10.1109/IGCC.2011.6008560. 
  11. Park, C. and Chi, S. (2020). "Developing a zone-level people counting methodology using surveillance cameras for search and rescue efforts during building disasters." Journal of the Spring Annual Conference of AIK, AIK, Vol. 40, No. 1, pp. 421-424 (in Korean). 
  12. Perng, J. W., Wang, T. Y., Hsu, Y. W. and Wu, B. F. (2016). "The design and implementation of a vision-based people counting system in buses." Proceedings of 2016 International Conference on System Science and Engineering (ICSSE), IEEE, Puli, Taiwan, pp. 1-3, https://doi.org/10.1109/ICSSE.2016.7551620. 
  13. Sharma, D., Bhondekar, A. P., Shukla, A. K. and Ghanshyam, C. (2018). "A review on technological advancements in crowd management." Journal of Ambient Intelligence and Humanized Computing, Springer, Vol. 9, No. 3, pp. 485-495, https://doi.org/10.1007/s12652-016-0432-x. 
  14. Sun, K., Zhao, Q. and Zou, J. (2020). "A review of building occupancy measurement systems." Energy and Buildings, Elsevier, Vol. 216, 109965, https://doi.org/10.1016/j.enbuild.2020.109965. 
  15. Wang, X. (2013). "Intelligent multi-camera video surveillance: A review." Pattern Recognition Letters, Elsevier, Vol. 34, No. 1, pp. 3-19, https://doi.org/10.1016/j.patrec.2012.07.005. 
  16. Wen, L., Lei, Z., Chang, M. C., Qi, H. and Lyu, S. (2017). "Multi-camera multi-target tracking with space-time-view hyper-graph." International Journal of Computer Vision, Springer, Vol. 122, No. 2, pp. 313-333, https://doi.org/10.1007/s11263-016-0943-0. 
  17. Wojke, N., Bewley, A. and Paulus, D. (2017). "Simple online and realtime tracking with a deep association metric." Proceedings of 2017 IEEE International Conference on Image Processing (ICIP), IEEE, Beijing, China, pp. 3645-3649, https://doi.org/10.1109/ICIP.2017.8296962. 
  18. Yu, S. I., Yang, Y. and Hauptmann, A. (2013). "Harry potter's marauder's map: Localizing and tracking multiple persons-of-interest by nonnegative discretization." Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Portland, OR, USA, pp. 3714-3720, https://doi.org/10.1109/CVPR.2013.476. 
  19. Zhang, S., Zhu, Y. and Roy-Chowdhury, A. (2015). "Tracking multiple interacting targets in a camera network." Computer Vision and Image Understanding, Elsevier, Vol. 134, pp. 64-73, https://doi.org/10.1016/j.cviu.2015.01.002.