Learning Spatio-Temporal Topology of a Multiple Cameras Network by Tracking Human Movement


  • 남윤영 (Ubiquitous System Research Center, Ajou University);
  • 류정훈 (Department of Electronics Engineering, Ajou University);
  • 최유주 (Department of Computer Engineering, Seoul University of Venture and Information);
  • 조위덕 (Ubiquitous System Research Center and School of Electronics Engineering, Ajou University)
  • Published: 2007.12.15

Abstract

This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs) in a Ubiquitous Smart Space (USS). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people reliably across multiple camera views, we use the Merge-Split (MS) approach to handle object occlusion within a single camera and a grid-based approach to extract accurate object features. In addition, we consider the appearance of people and the transition time between entry and exit zones to track objects across the blind regions between cameras with non-overlapping FOVs. The main contributions of this paper are the estimation of transition times between the various entry and exit zones and the graphical representation of the camera topology as an undirected weighted graph whose edge weights are derived from the transition probabilities.
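To make the topology-learning step concrete, the sketch below shows one minimal way such an undirected weighted graph could be built from observed transitions between exit and entry zones. It is an illustrative assumption of ours, not the authors' implementation: the class name TopologyLearner, the zone identifiers such as "cam1/exit_east", and the pooled-count estimate of transition probability are all hypothetical choices made for the example.

```python
# Minimal sketch (not the paper's implementation) of learning a camera-network
# topology from observed transitions between exit and entry zones.
# Assumptions: zone identifiers are plain strings, and the transition
# probability of an edge is estimated as its share of all observed transitions.
from collections import defaultdict
from statistics import mean


class TopologyLearner:
    def __init__(self):
        # (exit_zone, entry_zone) -> list of observed transit times in seconds
        self.transits = defaultdict(list)

    def observe(self, exit_zone, entry_zone, transit_time):
        """Record one object that left exit_zone and reappeared at entry_zone."""
        self.transits[(exit_zone, entry_zone)].append(transit_time)

    def graph(self):
        """Undirected weighted graph: (zone_a, zone_b) -> (probability, mean transit time)."""
        pooled = defaultdict(list)
        for (src, dst), times in self.transits.items():
            pooled[frozenset((src, dst))].extend(times)  # merge both directions
        total = sum(len(times) for times in pooled.values())
        return {
            tuple(sorted(edge)): (len(times) / total, mean(times))
            for edge, times in pooled.items()
        }


if __name__ == "__main__":
    learner = TopologyLearner()
    learner.observe("cam1/exit_east", "cam2/entry_west", 4.2)
    learner.observe("cam1/exit_east", "cam2/entry_west", 5.1)
    learner.observe("cam2/exit_south", "cam3/entry_north", 7.8)
    for (a, b), (p, t) in learner.graph().items():
        print(f"{a} -- {b}: p={p:.2f}, mean transit {t:.1f}s")
```

In practice the observations fed to observe() would come from cross-camera correspondences established with the appearance and transition-time cues described in the abstract; the example above only illustrates how those correspondences translate into edge weights of the topology graph.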


