• Title/Summary/Keyword: Surveillance camera


Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun;Jung, Il-Kyun;Park, Chang-Woo;Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-light system and an infrared system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from two different images, based on a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, skin-color model, and Haar model. The pose of the human body is also estimated from the body detection result in the IR image, using the PCA algorithm along with the AdaBoost algorithm. Then the results from each detection algorithm are fused to extract the best detection result. To verify the heterogeneous sensor fusion system, a few experiments were conducted in various environments. The experimental results show good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems, but extends to surveillance and military systems.
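The ROI-combination step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `(x, y, w, h)` box format, the overlap threshold, and the enclosing-box fusion rule are all assumptions made for the example.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse_rois(ccd_roi, ir_roi, thresh=0.3):
    """Combine ROIs from the CCD and IR detectors: if they overlap
    enough, return the enclosing box as the fused detection.
    The 0.3 threshold is an arbitrary illustrative value."""
    if iou(ccd_roi, ir_roi) < thresh:
        return None  # detectors disagree; no fused object reported
    x1 = min(ccd_roi[0], ir_roi[0])
    y1 = min(ccd_roi[1], ir_roi[1])
    x2 = max(ccd_roi[0] + ccd_roi[2], ir_roi[0] + ir_roi[2])
    y2 = max(ccd_roi[1] + ccd_roi[3], ir_roi[1] + ir_roi[3])
    return (x1, y1, x2 - x1, y2 - y1)
```

In the paper, each modality's ROI would come from its own detector (e.g., skin-color model on the CCD image, body detection on the IR image) before this kind of agreement check.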

Human Detection in Images Using Optical Flow and Learning (광 흐름과 학습에 의한 영상 내 사람의 검지)

  • Do, Yongtae
    • Journal of Sensor Science and Technology / v.29 no.3 / pp.194-200 / 2020
  • Human detection is an important component of many video-based sensing and monitoring systems. Studies have been actively conducted on the automatic detection of humans in camera images, and various methods have been proposed, but problems remain in terms of performance and computational cost. In this paper, we describe a method for efficient human detection in the field of view of a camera, which may be static or moving, through multiple processing steps. A detection line is designated at the position where a human first appears in the sensing area, and only the one-dimensional gray pixel values along the line are monitored. If any noticeable change occurs on the detection line, corner detection and optical flow computation are performed in its vicinity to confirm the change. When significant changes are observed in the corner counts and optical flow vectors, the final determination of human presence in the monitoring area is made using the Histograms of Oriented Gradients (HOG) method and a Support Vector Machine (SVM). The proposed method requires processing only small, specific areas of two consecutive gray images. Furthermore, it operates not only in a static condition with a fixed camera but also in a dynamic condition, such as with a camera attached to a moving vehicle.
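The cheap first stage of this cascade, monitoring only the 1-D gray values along the detection line, could look like the sketch below. The mean-absolute-difference trigger and its threshold are assumptions for illustration; the paper does not specify the change measure.

```python
def line_changed(prev_line, curr_line, thresh=10.0):
    """Stage 1 of the cascade: compare the gray values sampled along
    the detection line in two consecutive frames and report whether
    the mean absolute difference exceeds a (hypothetical) threshold.
    Only on a trigger would the costlier corner/optical-flow and
    HOG+SVM stages run."""
    diff = sum(abs(p - c) for p, c in zip(prev_line, curr_line))
    return diff / len(curr_line) > thresh
```

The point of the design is that this check touches a handful of pixels per frame, so the expensive detectors run only when something actually crosses the line.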

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • This paper addresses the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which can be useful for video surveillance, automatic search, and video indexing. It can also help assist elderly and frail persons, potentially improving their lives. Human activity recognition remains difficult because of the large variations in how actions are executed; it is typically realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. The main purpose of this paper is an efficient and simple recognition method, robust against the various factors of variability in action execution, that uses only egocentric camera data with a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
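The abstract names a convolutional neural network as the recognizer but gives no architecture. As a conceptual sketch only, the core convolve-activate-pool-score loop of CNN classification can be shown on a 1-D signal; the filters, class names, and signal here are invented for the example and have nothing to do with the paper's trained model.

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation), the core CNN operation."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * k for j, k in enumerate(kernel)) for i in range(n)]

def relu(xs):
    """Rectified linear activation."""
    return [max(0.0, x) for x in xs]

def classify(signal, class_filters):
    """Score each activity class by its filter's pooled response
    (global max pooling) and return the best-scoring class."""
    scores = {c: max(relu(conv1d(signal, f)))
              for c, f in class_filters.items()}
    return max(scores, key=scores.get)
```

A real egocentric-activity model would stack many learned 2-D/3-D filters over video frames, but the score-and-argmax structure is the same.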

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. Most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views in advance. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translations and rotations, by using the epipolar constraint on the matched feature points. After choosing the points of interest adjacent to two or more contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.

Real-time object tracking in Multi-Camera environments (다중 카메라 환경에서의 실시간 객체 추적)

  • 조상현;강행봉
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.691-693 / 2004
  • Object tracking in video sequences is widely used in computer vision applications such as security and surveillance systems and video teleconferencing, and its importance is steadily increasing. When the visibility of an object changes across camera views for various reasons, it is difficult to obtain good results using only a single view, so this paper proposes a method that tracks an object by selecting the view in which the object appears most clearly. To track the object in each camera view, we use a mean-shift algorithm combined with multiple candidates. In complex environments where the visibility of the object changes, the proposed system achieved better performance than a single-view approach.
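The best-view selection above needs a similarity score between the tracked object's appearance model and each view's candidate; mean-shift tracking conventionally uses the Bhattacharyya coefficient between color histograms. The sketch below assumes that convention and invents the histograms and view names; it is not the authors' multi-candidate variant.

```python
import math

def bhattacharyya(p, q):
    """Similarity between two normalized histograms: the score that
    mean-shift tracking climbs toward."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def best_view(target_hist, candidate_hists):
    """Pick the camera view whose candidate region matches the
    target appearance model most closely."""
    return max(candidate_hists,
               key=lambda v: bhattacharyya(target_hist, candidate_hists[v]))
```

The tracker would then run mean-shift iterations only in the selected view, switching views when occlusion drives its score below another view's.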


Military surveillance System design using Digital video Recording Camera (디지털 녹화 감시 카메라 시스템에 의한 군사 방위 시스템 설계)

  • 조혜진;홍충효;최연성;김선우
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.175-178 / 2003
  • In this paper, the proposed system uses real-time MPEG-2 compression and retrieves video from storage using an efficient indexing algorithm. The system surveys a wide military area, disseminates the situation to adjacent units, and transmits images over long distances.
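The abstract does not describe its indexing algorithm; a generic way indexed retrieval over recorded video works is a sorted key-frame index searched by timestamp, sketched below under that assumption.

```python
import bisect

def find_key_frame(index_timestamps, query_t):
    """Given the sorted timestamps of indexed key frames in the
    recording, return the position of the latest key frame at or
    before the query time (decoding would then start there)."""
    i = bisect.bisect_right(index_timestamps, query_t) - 1
    return max(i, 0)  # clamp queries before the first key frame
```

With MPEG-2, seeking to an I-frame boundary like this avoids decoding from the start of the file.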


Rear-side square surveillance system and rear camera communication in ECU (ECU 내부에서의 후측면 사각 감시 시스템과 후방 카메라 통신)

  • Lee, Seung-Jin;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.124-125 / 2017
  • By using ultrasonic sensors, which are cheaper than radar sensors, the cost of sensor operation is reduced. In situations where the rear-side blind-spot surveillance system might miss an alarm or raise a false one, object recognition and detection by the rear camera cover the regions that the ultrasonic sensors cannot reach. In addition, the rear camera and the rear-side blind-spot surveillance system communicate inside the ECU, so that information with reduced errors can be delivered to the driver.
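The complementary roles described above (camera covering regions the ultrasonic sensors miss, and the two sources cross-checking each other to reduce errors) can be sketched as a simple decision rule. The specific policy below, requiring agreement where both sensors cover a region, is an assumption for illustration; the paper does not spell out its fusion logic.

```python
def blind_spot_alert(ultrasonic_hit, camera_detect, in_ultrasonic_range):
    """Hypothetical fusion rule: outside the ultrasonic sensors'
    range, rely on the rear camera alone; inside it, require both
    sources to agree before alerting, suppressing false alarms."""
    if not in_ultrasonic_range:
        return camera_detect
    return ultrasonic_hit and camera_detect
```

In the described system this logic would run inside the ECU, where both sensor streams are available over the internal bus.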

Image Analysis for Surveillance Camera Based on 3D Depth Map (3차원 깊이 정보 기반의 감시카메라 영상 분석)

  • Lee, Subin;Seo, Yongduek
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.286-289 / 2012
  • This paper proposes a method for detecting and tracking a moving person in surveillance camera footage using 3D depth information. The proposed method separates the background from the moving person using a GMM (Gaussian mixture model), divides the separated regions into blobs via CCL (connected-component labeling), and tracks each blob. When two blobs have merged during this step, we propose a method that separates them again using the 3D depth information. Experimental results demonstrate the proposed method.
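The CCL step in this pipeline groups foreground pixels into blobs. A minimal 4-connectivity labeling over a binary mask (as GMM background subtraction would produce) can be sketched as below; the depth-based splitting of merged blobs is the paper's contribution and is not reproduced here.

```python
def label_blobs(mask):
    """Connected-component labeling (4-connectivity) of a binary
    foreground mask given as nested lists of 0/1. Returns the label
    grid and the number of blobs found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    blob = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                blob += 1
                stack = [(y, x)]  # flood-fill this new component
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = blob
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, blob
```

Each resulting blob would then be tracked frame to frame, with the depth map consulted whenever two people's blobs touch.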


Panoramic Video Generation Method Based on Foreground Extraction (전경 추출에 기반한 파노라마 비디오 생성 기법)

  • Kim, Sang-Hwan;Kim, Chang-Su
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.441-445 / 2011
  • In this paper, we propose an algorithm for generating panoramic videos using multiple fixed cameras. We estimate a background image from each camera and then calculate the perspective relationships between images using extracted feature points. To eliminate stitching errors due to different image depths, we process background and foreground images separately in the overlap regions between adjacent cameras, projecting regions of the foreground images selectively. The proposed algorithm can enhance the efficiency and convenience of wide-area surveillance systems.
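The "perspective relationships between images" here are 3x3 homographies estimated from matched feature points. Applying one to a point, the operation used to warp one camera's pixels into its neighbor's frame, can be sketched as follows; the matrices in the test are invented examples, not calibration results from the paper.

```python
def apply_homography(H, pt):
    """Map an image point through a 3x3 perspective transform H
    (row-major nested lists), as used to align overlapping views
    from adjacent cameras. Note the division by the third
    homogeneous coordinate."""
    x, y = pt
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)
```

Warping every pixel of the background image this way builds the panorama canvas; the paper then composites foreground regions on top separately to avoid depth-induced ghosting.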