• Title/Summary/Keyword: Video monitoring system


Web-based Video Monitoring System on Real Time using Object Extraction (객체 추출을 이용한 실시간 웹기반 영상감시 시스템)

  • Lee, Keun-Wang; Oh, Taek-Hwan
    • Proceedings of the KAIS Fall Conference / 2006.05a / pp.426-429 / 2006
  • Object tracking in real-time video has been one of the topics of interest in computer vision and many practical applications for years. However, noise in the background image may be mistaken for an object, so the target object sometimes cannot be extracted. This paper proposes a method for extracting objects from real-time video using an adaptive background image. To remove noise in the background region of the input video and to extract objects robustly under changing illumination, the background region (excluding object regions) is updated in real time to generate an adaptive background image. Objects are then extracted from the difference between the background image and the input image from the camera.
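The adaptive background update and differencing described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the blending factor `alpha` and the difference threshold are assumed values.

```python
import numpy as np

def extract_foreground(frame, background, thresh=25.0):
    """Pixels whose difference from the background exceeds the
    threshold are candidate object pixels."""
    return np.abs(frame - background) > thresh

def update_background(background, frame, fg_mask, alpha=0.05):
    """Blend the new frame into the background only OUTSIDE detected
    object regions, so the model adapts to gradual illumination
    change without absorbing moving objects."""
    bg = background.copy()
    bg[~fg_mask] = (1.0 - alpha) * bg[~fg_mask] + alpha * frame[~fg_mask]
    return bg
```

Running both steps per frame keeps the background current while the differencing step yields the object mask.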

Dividing Occluded Humans Based on an Artificial Neural Network for the Vision of a Surveillance Robot (감시용 로봇의 시각을 위한 인공 신경망 기반 겹친 사람의 구분)

  • Do, Yong-Tae
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.505-510 / 2009
  • In recent years the space where a robot works has been expanding into human space, unlike traditional industrial robots that work only at fixed positions apart from humans. A human in this situation may be the owner of a robot or the target of a robotic application. This paper deals with the latter case: when a robot vision system is employed to monitor humans for a surveillance application, each person in a scene needs to be identified. Humans, however, often move together, and occlusions between them occur frequently. Although this problem has not been seriously tackled in the relevant literature, it brings difficulty into later image-analysis steps such as tracking and scene understanding. In this paper, a probabilistic neural network is employed to learn the patterns of the best dividing position along the top pixels of an image region of partly occluded people. As this method uses only shape information from an image, it is simple and can be implemented in real time.
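A probabilistic neural network is essentially a Parzen-window classifier: each class score is a sum of Gaussian kernels centred on that class's training samples. A generic sketch follows; the feature vectors and `sigma` here are placeholders, not the paper's top-pixel shape features.

```python
import math

def pnn_classify(sample, train_by_class, sigma=1.0):
    """Probabilistic neural network: score each class by the average
    of Gaussian kernels over its training samples, then pick the
    class with the highest score."""
    def score(examples):
        total = 0.0
        for x in examples:
            d2 = sum((a - b) ** 2 for a, b in zip(sample, x))
            total += math.exp(-d2 / (2.0 * sigma ** 2))
        return total / len(examples)
    return max(train_by_class, key=lambda c: score(train_by_class[c]))
```

In the paper's setting the classes would encode candidate dividing positions learned from occluded-pair silhouettes.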

Real-Time Motion Detection and Storage Method on a Compressed Domain for Multi-channel Video Surveillance Monitoring System (서베일런스 환경을 위한 압축 도메인에서 다채널 실시간 움직임 검출 및 저장 시스템)

  • Wu, Xiangjian; Kim, Youngwoong; Ahn, Yong-Jo; Kim, Yong-sung; Kim, Seung-Hwan; Cho, Hyung-Jun; Sim, Donggyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.11a / pp.56-58 / 2014
  • This paper proposes an algorithm that detects motion at high speed in the compressed domain and stores the corresponding intervals. The proposed algorithm detects frames containing motion from an H.264/AVC compressed bitstream using motion vectors and reference frames, and stores video in GOP units according to the presence of motion. By performing motion detection and interval storage in the compressed domain, it lowers complexity and saves video storage space, providing performance optimized for real-time multi-channel video processing. The proposed motion detection and storage system can process an average of 2,957 frames per second in a single-thread environment, and in a multi-thread environment it can process 98 channels of 30 fps video in real time.
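The compressed-domain decision can be illustrated as below, assuming the per-macroblock motion vectors have already been parsed from the H.264/AVC bitstream. The magnitude and count thresholds are illustrative, not the paper's.

```python
def frame_has_motion(motion_vectors, mv_thresh=2.0, count_thresh=5):
    """A frame is 'moving' if enough macroblock motion vectors exceed
    a magnitude threshold; no pixel decoding is needed."""
    big = sum(1 for (dx, dy) in motion_vectors
              if (dx * dx + dy * dy) ** 0.5 >= mv_thresh)
    return big >= count_thresh

def gops_to_store(gops, **kw):
    """Keep only the indices of GOPs in which at least one frame
    shows motion, so still intervals are never written to disk."""
    return [i for i, gop in enumerate(gops)
            if any(frame_has_motion(f, **kw) for f in gop)]
```

Because the decision uses only bitstream metadata, it scales to many channels per core, which is the paper's stated motivation.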

A Design of Web-based Video Monitoring System on Real Time (실시간 웹기반 영상감시 시스템의 설계)

  • Jang, Jung-Hwa
    • Proceedings of the KAIS Fall Conference / 2010.05a / pp.479-482 / 2010
  • Object tracking in real-time video has been one of the topics of interest in computer vision and many practical applications for years. However, noise in the background image may be mistaken for an object, so the target object sometimes cannot be extracted. This paper proposes a method for extracting and tracking objects in real-time video using an adaptive background image. To remove noise in the background region of the input video and to extract objects robustly under changing illumination, the background region (excluding object regions) is updated in real time to generate an adaptive background image. Objects are then extracted from the difference between the background image and the input image from the camera. A minimum bounding rectangle is set using the interior points of the extracted object, and the object is tracked through it. Finally, experimental results on the performance of the proposed method are evaluated by comparison with existing tracking algorithms.
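The minimum bounding rectangle and a nearest-centre matching step might look like this. It is a simplified sketch; the paper's exact tracking criterion is not specified in the abstract.

```python
def bounding_rect(points):
    """Minimum axis-aligned rectangle enclosing the interior points
    of an extracted object: (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def rect_center(rect):
    x0, y0, x1, y1 = rect
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def track(prev_center, rects):
    """Associate the object with the rectangle whose centre is
    nearest its previous position (one simple tracking rule)."""
    def d2(r):
        cx, cy = rect_center(r)
        return (cx - prev_center[0]) ** 2 + (cy - prev_center[1]) ** 2
    return min(rects, key=d2)
```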

Computer Vision-based Method to Detect Fire Using Color Variation in Temporal Domain

  • Hwang, Ung; Jeong, Jechang; Kim, Jiyeon; Cho, JunSang; Kim, SungHwan
    • Quantitative Bio-Science / v.37 no.2 / pp.81-89 / 2018
  • High false-detection rates commonly interfere with immediate vision-based fire monitoring. To circumvent this challenge, we propose a fire detection algorithm that accommodates RGB color variations in the temporal domain, aiming to reduce false detections. Despite interfering images (e.g., background noise and sudden intervention), the proposed method proves robust in capturing distinguishable temporal features of fire. In numerical studies, we carried out extensive real-data experiments on fire detection using 24 video sequences, indicating that the proposed algorithm serves as an effective decision rule for fire detection (e.g., false detection rate < 10%).
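The idea of combining a static colour rule with temporal variation can be sketched as follows. The colour rule, window handling, and thresholds here are assumptions for illustration, not the paper's exact decision rule.

```python
import statistics

def is_fire_colored(r, g, b, r_min=180):
    """Static colour cue: flame pixels tend to satisfy R > G > B
    with a high red value."""
    return r >= r_min and r > g > b

def detect_fire_pixel(rgb_series, r_min=180, var_thresh=50.0):
    """Flag a pixel as fire if it is fire-coloured in most frames of
    the temporal window AND its red channel fluctuates over time
    (flames flicker; lamps and painted surfaces do not)."""
    colored = sum(1 for (r, g, b) in rgb_series
                  if is_fire_colored(r, g, b, r_min))
    reds = [r for (r, g, b) in rgb_series]
    return (colored >= len(rgb_series) // 2
            and statistics.pvariance(reds) >= var_thresh)
```

The temporal-variance term is what suppresses false alarms from static fire-coloured objects.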

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun; Yang, Seong-Hun; Oh, Seung-Jin; Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack the skilled manpower to analyze video, machine learning and artificial intelligence are actively used to assist. In this situation, demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also increased rapidly. However, object detection and tracking suffers from difficulties that degrade performance, such as re-appearance after an object leaves the recording location, and occlusion. Accordingly, action and emotion detection models built on top of detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures consisting of various models suffer from performance degradation due to bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed system uses single-linkage hierarchical clustering for Re-ID together with processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near real-time processing performance, and prevents tracking failure due to object departure and re-emergence, occlusion, etc. By continuously linking the action and facial-emotion detection results of each object to the same object, videos can be analyzed efficiently.
  The re-identification model extracts a feature vector from the bounding box of each object detected by the tracking model in every frame, and applies single-linkage hierarchical clustering against feature vectors from past frames to identify an object whose track was lost. Through this process, the same object can be re-tracked after re-appearance or occlusion, so the action and facial-emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object bounding-box queue and a feature-queue method that reduce RAM requirements while maximizing GPU throughput, as well as the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition to object tracking information. The academic significance of this study is that the two-stage re-identification model achieves real-time performance, without sacrificing accuracy by falling back on simple metrics, even in a high-cost environment that also performs action and facial-emotion detection. The practical implication is that industries which require action and facial-emotion detection but struggle with tracking failures can analyze videos effectively through the proposed system. With its high re-tracking accuracy and processing performance, it can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value.
  In the future, to measure object tracking performance more precisely, experiments should be conducted on the MOT Challenge dataset used by many international conferences. We will also investigate the problems the IoF algorithm cannot solve in order to develop a complementary algorithm, and we plan further research applying this model to datasets from various fields related to intelligent video analysis.
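The single-linkage re-identification step can be sketched roughly as follows. Euclidean distance and the threshold are placeholders; the paper uses learned Torchreid feature vectors and clusters across past frames.

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reidentify(feature, gallery, max_dist=0.5):
    """Single-linkage assignment: compare the new feature vector
    against every stored feature of each identity and use the
    MINIMUM pairwise distance (the single-linkage criterion).
    Returns an identity id, creating a new one if nothing in the
    gallery is close enough."""
    best_id, best_d = None, float("inf")
    for ident, feats in gallery.items():
        d = min(euclid(feature, f) for f in feats)  # single linkage
        if d < best_d:
            best_id, best_d = ident, d
    if best_id is not None and best_d <= max_dist:
        gallery[best_id].append(feature)
        return best_id
    new_id = len(gallery)
    gallery[new_id] = [feature]
    return new_id
```

Keeping every past feature per identity (rather than one averaged template) is what lets a re-appearing object match any of its earlier appearances.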

Implementation of a unified live streaming based on HTML5 for an IP camera (IP 카메라를 위한 HTML5 기반 통합형 Live Streaming 구현)

  • Ryu, Hong-Nam; Yang, Gil-Jin; Kim, Jong-Hun; Choi, Byoung-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.28 no.9 / pp.99-104 / 2014
  • This paper presents a unified live-streaming method for an IP camera based on Hypertext Markup Language 5 (HTML5) that is independent of the client's browser and is implemented with open-source libraries. Conventional security systems based on analog CCTV cameras are being upgraded to newer surveillance systems using IP cameras, which offer remote surveillance and monitoring regardless of device, time, or location. However, this approach requires live-streaming protocols to deliver real-time video, and surveillance is often possible only after installing separate plug-ins or special software. Recently, live streaming over HTML5 has used two standard protocols, HLS and DASH, which work with Apple and Android products respectively. This paper proposes a live-streaming approach that links to either of the two protocols, making the system independent of the browser or OS. The client can monitor real-time video streams without any additional plug-ins. Moreover, by using open-source libraries, development cost and time were reduced. We verified the usefulness of the proposed approach on mobile devices and its extensibility to various other applications.
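Serving one URL over both protocol families requires choosing HLS for Apple clients and MPEG-DASH elsewhere. A server-side sketch is below; the user-agent heuristic is an assumption for illustration, since the paper's detection logic is not given in the abstract.

```python
def pick_manifest(user_agent, base="stream"):
    """Return the HLS playlist (.m3u8) for Apple clients and the
    MPEG-DASH manifest (.mpd) for everything else. Note Chrome's
    user agent also contains 'Safari', so Chrome is excluded
    explicitly before treating 'Safari' as an Apple client."""
    ua = user_agent.lower()
    apple = (any(k in ua for k in ("iphone", "ipad", "safari"))
             and "chrome" not in ua)
    return f"{base}.m3u8" if apple else f"{base}.mpd"
```

A real deployment would hang this off the camera's HTTP endpoint so every HTML5 player receives a manifest it can play natively.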

Design of Upper Body Detection System Using RBFNN Based on HOG Algorithm (HOG기반 RBFNN을 이용한 상반신 검출 시스템의 설계)

  • Kim, Sun-Hwan; Oh, Sung-Kwun; Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.4 / pp.259-266 / 2016
  • Recently, CCTV cameras have been deployed actively to reinforce security, and intelligent surveillance systems are under development for detecting and monitoring objects in video. In this study, we propose a method for upper-body detection in an intelligent surveillance system using an FCM-based RBFNN classifier realized with the aid of HOG features. First, HOG features, originally proposed for pedestrian detection, are adopted to learn the characteristic gradient features of the upper body. However, HOG features typically have a very high dimensionality, proportional to the size of the input image, so the dimensionality of the RBFNN classifier's inputs must be reduced. Thus the well-known PCA algorithm is applied prior to the RBFNN classification step. In computer simulation experiments, the RBFNN classifier was trained using pre-classified upper-body images and non-person images, and the performance of the proposed classifier for upper-body detection was evaluated on test images and video sequences.
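The PCA reduction applied before the classifier can be sketched with plain SVD-based PCA; the HOG extraction and the RBFNN itself are omitted, and the toy data below stands in for real HOG vectors.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on mean-centred data. Returns (mean, components)
    so high-dimensional feature vectors (e.g. HOG) can be projected
    to a few principal directions before classification."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project mean-centred samples onto the principal components."""
    return (X - mean) @ components.T
```

Fitting on the training set and reusing the same `mean`/`components` at test time keeps the classifier's input dimension fixed regardless of the HOG window size.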

Auto ABLB Audiometry System Supporting One-to-many Model (일 대 다 모델을 지원하는 자동 ABLB 청력 검사 시스템)

  • Song, Bok-Deuk; Kang, Deok-Hun; Shin, Bum-Joo; Kim, Jin-Dong; Jeon, Gye-Rok; Wang, Soo-Geun
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.24 no.6 / pp.519-524 / 2011
  • The ABLB (alternate binaural loudness balance) test is a medical assessment for diagnosing the detailed lesion of sensorineural hearing loss based on the recruitment phenomenon. However, current ABLB audiometry follows a face-to-face operational model in which one audiometrist can assess only one subject at a time. This model leads to high audiometrist labor costs and lengthy waits when there are many subjects. As a solution, this paper suggests an ABLB audiometry system supporting a one-to-many model in which an audiometrist can assess several subjects concurrently. By providing capabilities such as real-time transfer of assessment results, video monitoring of subjects, and video chat, the solution achieves the same effect as the face-to-face model while overcoming its weaknesses.

RGB-Depth Camera for Dynamic Measurement of Liquid Sloshing (RGB-Depth 카메라를 활용한 유체 표면의 거동 계측분석)

  • Kim, Junhee; Yoo, Sae-Woung; Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.1 / pp.29-35 / 2019
  • In this paper, a low-cost dynamic measurement system using the RGB-depth camera Microsoft Kinect® v2 is proposed for measuring the time-varying free-surface motion of liquid dampers used in building vibration mitigation. Various experimental studies are conducted in sequence: performance evaluation and validation of the Kinect® v2, real-time monitoring using the Kinect® v2 SDK (software development kit), point-cloud acquisition of the liquid free surface in 3D space, and comparison with existing video sensing technology. Using the proposed Kinect® v2-based measurement system, the dynamic behavior of liquid in a laboratory-scale tank under a wide frequency range of input excitation is experimentally analyzed.
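Point-cloud acquisition from a depth image uses the standard pinhole back-projection, X = (u - cx)·Z/fx and Y = (v - cy)·Z/fy. A minimal sketch follows; the intrinsics `fx, fy, cx, cy` are placeholders for the camera's calibration values.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of metric depth values Z)
    into 3-D points using the pinhole camera model; zero-depth
    pixels (no return) are skipped."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

Applied per frame, this yields the time series of free-surface point clouds that the study analyzes.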