• Title/Summary/Keyword: video object tracking


Detection of Abnormal Behavior by Scene Analysis in Surveillance Video (감시 영상에서의 장면 분석을 통한 이상행위 검출)

  • Bae, Gun-Tae;Uh, Young-Jung;Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.12C / pp.744-752 / 2011
  • In intelligent surveillance systems, various methods for detecting abnormal behavior have been proposed recently. However, most of this research assumes that individual objects can be tracked, and is therefore not robust enough for real environments, which often contain occlusions. This paper presents a novel method to detect abnormal behavior by analyzing the major motion of the scene in complex environments where object tracking cannot work. First, we generate visual words and visual documents from motion information extracted from the input video and process them with LDA (Latent Dirichlet Allocation), a document analysis technique, to obtain the major motion information (location, magnitude, direction, distribution) of the scene. Using this information, we compare the similarity between the motion appearing in the input video and the analyzed major motions, and detect motions that do not match the major motions as abnormal behavior.
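
A minimal sketch of the visual-word/LDA step described above, assuming dense optical flow from OpenCV and gensim's LdaModel; the grid size, quantization bins, motion threshold, and clip length are illustrative choices, not the authors' parameters.

```python
# Sketch: quantize per-cell optical flow into "visual words", group frames into
# "visual documents" per short clip, and fit LDA to recover major motion topics.
import cv2
import numpy as np
from gensim import corpora, models

def flow_words(prev_gray, gray, cell=16, dir_bins=8):
    """One word per moving grid cell, encoding location + direction + magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    h, w = gray.shape
    words = []
    for y in range(0, h - cell, cell):
        for x in range(0, w - cell, cell):
            m = mag[y:y + cell, x:x + cell].mean()
            if m < 0.5:                                   # skip near-static cells
                continue
            d = int(ang[y:y + cell, x:x + cell].mean() / (2 * np.pi) * dir_bins) % dir_bins
            cell_id = (y // cell) * (w // cell) + (x // cell)
            words.append(f"c{cell_id}_d{d}_m{min(int(m), 3)}")
    return words

def fit_major_motion(docs, num_topics=10):
    """docs: list of word lists (one visual document per clip) -> fitted LDA model."""
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    return lda, dictionary
# A new clip whose words fit the learned topics poorly can be flagged as abnormal.
```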

Moving Object Detection using Clausius Entropy and Adaptive Gaussian Mixture Model (클라우지우스 엔트로피와 적응적 가우시안 혼합 모델을 이용한 움직임 객체 검출)

  • Park, Jong-Hyun;Lee, Gee-Sang;Toan, Nguyen Dinh;Cho, Wan-Hyun;Park, Soon-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.22-29 / 2010
  • Real-time detection and tracking of moving objects in video sequences is very important for smart surveillance systems. In this paper, we propose a novel algorithm for detecting moving objects: an entropy-based adaptive Gaussian mixture model (AGMM). First, an increase in entropy generally means an increase in complexity, and objects in unstable conditions cause higher entropy variations. Hence, if we apply these properties to motion segmentation, pixels with large momentary changes in entropy have a higher chance of belonging to moving objects. Therefore, we apply Clausius entropy theory to convert pixel values in the image domain into the amount of energy change in the entropy domain. Second, we use an adaptive background subtraction method, which models entropy variations of the background as a mixture of Gaussians, to detect moving objects. Experimental results demonstrate that our method can detect moving objects effectively and reliably.
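
The overall pipeline can be approximated with off-the-shelf parts. The sketch below uses a crude local-entropy map and OpenCV's MOG2 subtractor as stand-ins for the paper's Clausius-entropy formulation and its specific AGMM update rules, which are not reproduced here; the window size, thresholds, and input file name are illustrative.

```python
# Sketch: feed a per-pixel entropy-like map (instead of raw intensity) into an
# adaptive Gaussian mixture background model; MOG2 stands in for the paper's AGMM.
import cv2
import numpy as np

def local_entropy(gray, ksize=9):
    """Rough local-entropy proxy: Shannon entropy over box-filtered 8-bin histograms."""
    bins = (gray // 32).astype(np.uint8)      # quantize to 8 gray levels
    probs = [cv2.boxFilter((bins == b).astype(np.float32), -1, (ksize, ksize))
             for b in range(8)]               # per-pixel bin probabilities
    ent = np.zeros_like(probs[0])
    for p in probs:
        ent -= p * np.log2(p + 1e-7)
    return cv2.normalize(ent, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cap = cv2.VideoCapture("surveillance.mp4")    # illustrative input path
mog = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                         detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = mog.apply(local_entropy(gray))       # model entropy variations, not raw pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # fg marks pixels whose entropy behaviour deviates from the background model
cap.release()
```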

Detection of Smoking Behavior in Images Using Deep Learning Technology (딥러닝 기술을 이용한 영상에서 흡연행위 검출)

  • Dong Jun Kim;Yu Jin Choi;Kyung Min Park;Ji Hyun Park;Jae-Moon Lee;Kitae Hwang;In Hwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.107-113 / 2023
  • This paper proposes a method for detecting smoking behavior in images using artificial intelligence technology. Since smoking is not a static phenomenon but an action, object detection technology was combined with posture estimation technology that can detect the action. A smoker detection model was trained to detect smokers in images, and the characteristics of smoking behavior were applied to the posture estimation results to detect smoking behavior in images. YOLOv8 was used for object detection, and OpenPose was used for posture estimation. In addition, when both smokers and non-smokers appear in the image, a method of separating only the people was applied. The proposed method was implemented in Python on Google Colab with an NVIDIA Tesla T4 GPU, and testing showed that smoking behavior was detected perfectly in the given video.
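
For reference, a minimal sketch of the detect-then-pose idea follows. It uses the ultralytics package's YOLOv8 detector and a YOLOv8-pose model in place of OpenPose, and the wrist-to-nose distance heuristic is an illustrative stand-in for the paper's smoking-behavior rules, not the authors' actual criteria.

```python
# Sketch: detect people with YOLOv8, estimate keypoints per person, and flag a
# hand held near the face as a (very rough) proxy for a smoking gesture.
import numpy as np
from ultralytics import YOLO

det = YOLO("yolov8n.pt")        # person detector (class 0 = person in COCO)
pose = YOLO("yolov8n-pose.pt")  # keypoint model, used here instead of OpenPose

NOSE, L_WRIST, R_WRIST = 0, 9, 10   # COCO keypoint indices

def suspicious_poses(frame):
    flagged = []
    people = det(frame, classes=[0], verbose=False)[0]       # keep persons only
    for box in people.boxes.xyxy.cpu().numpy():
        x1, y1, x2, y2 = box.astype(int)
        crop = frame[y1:y2, x1:x2]
        r = pose(crop, verbose=False)[0]
        if r.keypoints is None or len(r.keypoints.xy) == 0:
            continue
        k = r.keypoints.xy[0].cpu().numpy()                   # (17, 2) keypoints
        face, lw, rw = k[NOSE], k[L_WRIST], k[R_WRIST]
        h = max(y2 - y1, 1)
        # Heuristic: a wrist within ~15% of the person's height from the nose
        if min(np.linalg.norm(lw - face), np.linalg.norm(rw - face)) < 0.15 * h:
            flagged.append((x1, y1, x2, y2))
    return flagged
```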

Digital Library Interface Research Based on EEG, Eye-Tracking, and Artificial Intelligence Technologies: Focusing on the Utilization of Implicit Relevance Feedback (뇌파, 시선추적 및 인공지능 기술에 기반한 디지털 도서관 인터페이스 연구: 암묵적 적합성 피드백 활용을 중심으로)

  • Hyun-Hee Kim;Yong-Ho Kim
    • Journal of the Korean Society for information Management / v.41 no.1 / pp.261-282 / 2024
  • This study proposed and evaluated electroencephalography (EEG)-based and eye-tracking-based methods to determine relevance by utilizing users' implicit relevance feedback while navigating content in a digital library. For this, EEG/eye-tracking experiments were conducted on 32 participants using video, image, and text data. To assess the usefulness of the proposed methods, deep learning-based artificial intelligence (AI) techniques were used as a competitive benchmark. The evaluation results showed that EEG component-based methods (av_P600 and f_P3b components) demonstrated high classification accuracy in selecting relevant videos and images (faces/emotions). In contrast, AI-based methods, specifically object recognition and natural language processing, showed high classification accuracy for selecting images (objects) and texts (newspaper articles). Finally, guidelines for implementing a digital library interface based on EEG, eye-tracking, and artificial intelligence technologies have been proposed. Specifically, a system model based on implicit relevance feedback has been presented. Moreover, to enhance classification accuracy, methods suitable for each media type have been suggested, including EEG-based, eye-tracking-based, and AI-based approaches.

Illumination Environment Adaptive Real-time Video Surveillance System for Security of Important Area (중요지역 보안을 위한 조명환경 적응형 실시간 영상 감시 시스템)

  • An, Sung-Jin;Lee, Kwan-Hee;Kwon, Goo-Rak;Kim, Nam-Hyung;Ko, Sung-Jea
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.2 s.314 / pp.116-125 / 2007
  • In this paper, we propose an illumination-environment-adaptive real-time surveillance system for the security of important areas such as military bases, prisons, and strategic infrastructure. The proposed system recognizes the movement of objects in bright environments as well as under dark illumination. The procedure of the proposed system is summarized as follows. First, the system discriminates between bright and dark scenes based on the input image distribution. If the input image is dark, the system performs pre-processing: Multi-Scale Retinex with Color Restoration (MSRCR) is applied to enhance the contrast of images captured in dark environments. Second, the enhanced input image is subtracted from the updated background image, and morphological image processing is applied to obtain the objects correctly. Finally, the bounding box enclosing each object is tracked. The center point of each bounding box obtained by the proposed algorithm provides more accurate tracking information. Experimental results show that the proposed system performs well even when an object moves very fast and the background is quite dark.
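
A rough sketch of the dark-scene branch is given below. The enhancement step is a textbook multi-scale retinex with a simple color-restoration term rather than the paper's exact MSRCR implementation, and the brightness threshold, Gaussian scales, and area filter are illustrative values.

```python
# Sketch: if a frame is dark, enhance it with multi-scale retinex (plus a simple
# color restoration term) before background subtraction and morphology.
import cv2
import numpy as np

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    img = img.astype(np.float32) + 1.0
    log_img = np.log(img)
    msr = np.zeros_like(img)
    for s in sigmas:                                   # multi-scale retinex
        blur = cv2.GaussianBlur(img, (0, 0), s)
        msr += (log_img - np.log(blur + 1.0)) / len(sigmas)
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = msr * crf                                    # color restoration
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def detect_objects(frame, background, dark_thresh=60):
    if frame.mean() < dark_thresh:                     # dark-scene branch
        frame = msrcr(frame)
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each bounding box's centre can then be tracked between frames
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
```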

Development of CCTV Cooperation Tracking System for Real-Time Crime Monitoring (실시간 범죄 모니터링을 위한 CCTV 협업 추적시스템 개발 연구)

  • Choi, Woo-Chul;Na, Joon-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.12 / pp.546-554 / 2019
  • Typically, closed-circuit television (CCTV) monitoring is mainly used for post-processing (i.e., to provide evidence after an incident has occurred), but by using a streaming video feed, machine learning, and advanced image recognition techniques, current technology can be extended to respond to crimes or reports of missing persons in real time. The multi-CCTV cooperation technique developed in this study is a program model that extracts similarity information about a suspect (or moving object) from CCTV at one location and delivers it to a monitoring agent so that the selected suspect or object can continue to be tracked when it moves out of range and into the view of another CCTV camera. To improve the operating efficiency of local government CCTV control centers, we describe the partial automation of a CCTV control system that currently relies on monitoring by human agents. We envisage an integrated crime prevention service that incorporates the cooperative CCTV network suggested in this study and that citizens can readily experience, for example through precise real-time individual location and crime prevention and safety information linked to smartphones.

Vision-based Real-time Vehicle Detection and Tracking Algorithm for Forward Collision Warning (전방 추돌 경보를 위한 영상 기반 실시간 차량 검출 및 추적 알고리즘)

  • Hong, Sunghoon;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.962-970 / 2021
  • The majority of vehicle accidents are caused by driver inattention, such as drowsy driving. A forward collision warning system (FCWS) can significantly reduce the number and severity of accidents by detecting the risk of collision with the vehicle ahead and providing an early warning signal to the driver. This paper describes a low-power embedded-system-based FCWS for safety. The algorithm computes the time to collision (TTC) from detection, tracking, and distance calculation for the vehicle ahead, combined with the current vehicle speed, using a single camera. Additionally, in order to operate in real time even on a low-performance embedded system, high-level and low-level program optimization techniques are introduced. The system was tested on driving video running on the embedded system. With the optimization techniques applied, the execution time was about 170 times faster than with the previous non-optimized process.
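
To make the TTC step concrete, here is a minimal sketch under common assumptions: distance to the lead vehicle is estimated from the pinhole model using an assumed real vehicle width, relative speed from the change in that distance between frames, and TTC as distance over closing speed. The focal length and vehicle width below are illustrative, not the paper's calibration.

```python
# Sketch: monocular distance from bounding-box width, then time-to-collision
# from the closing speed between consecutive frames.
FOCAL_PX = 1000.0      # assumed camera focal length in pixels (from calibration)
VEHICLE_WIDTH_M = 1.8  # assumed real width of the lead vehicle

def distance_m(bbox_width_px):
    """Pinhole model: distance = focal_length * real_width / width_in_pixels."""
    return FOCAL_PX * VEHICLE_WIDTH_M / max(bbox_width_px, 1e-6)

def time_to_collision(prev_dist, curr_dist, dt):
    """TTC = current distance / closing speed; None if the gap is not shrinking."""
    closing_speed = (prev_dist - curr_dist) / dt
    if closing_speed <= 0:
        return None
    return curr_dist / closing_speed

# Example: the lead vehicle's box grows from 90 px to 100 px over 0.1 s,
# i.e. the distance drops from 20.0 m to 18.0 m -> closing speed 20 m/s -> TTC 0.9 s.
d0, d1 = distance_m(90), distance_m(100)
print(time_to_collision(d0, d1, 0.1))
```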

Background Generation using Temporal and Spatial Information of Pixels (시간축과 공간축 화소 정보를 이용한 배경 생성)

  • Cho, Sang-Hyun;Kang, Hang-Bong
    • The KIPS Transactions:PartB / v.17B no.1 / pp.15-22 / 2010
  • Background generation is very important for accurate object tracking in video surveillance systems. Traditional background generation techniques have problems with objects that remain stationary for long periods. To overcome this problem, we propose a new background generation method that uses mean-shift and the Fast Marching Method (FMM) to exploit pixel information along the temporal and spatial dimensions. The mode of the pixel-value density along the time axis is estimated by the mean-shift algorithm, spatial information is evaluated by the FMM, and the two are combined to generate a desirable background even in the presence of objects that remain stationary for a long period. Experimental results show that our proposed method is more efficient than the traditional method.
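
As an illustration of the temporal part only, the sketch below runs a 1-D mean-shift over each pixel's intensity history to find its dominant mode; the spatial FMM refinement the authors use for long-stationary objects is not reproduced, and the bandwidth is an illustrative value.

```python
# Sketch: per-pixel temporal mode via 1-D mean-shift over a stack of frames.
import numpy as np

def mean_shift_mode(samples, bandwidth=10.0, iters=20):
    """Return the dominant mode of a 1-D sample set using a Gaussian kernel."""
    x = np.median(samples)                      # start near the bulk of the data
    for _ in range(iters):
        w = np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)
        x_new = (w * samples).sum() / (w.sum() + 1e-9)
        if abs(x_new - x) < 1e-3:
            break
        x = x_new
    return x

def generate_background(frames):
    """frames: (T, H, W) grayscale stack -> (H, W) background estimate."""
    T, H, W = frames.shape
    bg = np.zeros((H, W), np.float32)
    for i in range(H):
        for j in range(W):
            bg[i, j] = mean_shift_mode(frames[:, i, j].astype(np.float32))
    return bg.astype(np.uint8)
```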

Efficient Multimodal Background Modeling and Motion Detection (효과적인 다봉 배경 모델링 및 물체 검출)

  • Park, Dae-Yong;Byun, Hae-Ran
    • Journal of KIISE:Computing Practices and Letters / v.15 no.6 / pp.459-463 / 2009
  • Background modeling and motion detection is one of the most significant real-time video processing techniques. Much research has been conducted on this topic, but achieving robustness still requires considerable processing time. This matters even more when other algorithms, such as object tracking, classification, or behavior understanding, are used together with it. In this paper, we propose an efficient multimodal background modeling method, which can be understood as a simplified learning method for the Gaussian mixture model. We present its validity using numerical methods and experimentally demonstrate its detection performance.
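
The abstract does not spell out the simplified update rule, so the sketch below shows the standard per-pixel Gaussian-mixture update it simplifies (Stauffer-Grimson style), for a single grayscale pixel; the number of components, learning rate, and match threshold are illustrative, not the authors' settings.

```python
# Sketch: classic per-pixel mixture-of-Gaussians update for one grayscale pixel;
# the paper proposes a simplified learning scheme of this kind of model.
import numpy as np

K, ALPHA, MATCH_SIGMA = 3, 0.01, 2.5

class PixelGMM:
    def __init__(self):
        self.w = np.full(K, 1.0 / K)           # mixture weights
        self.mu = np.array([0.0, 128.0, 255.0])
        self.var = np.full(K, 225.0)           # variance per component

    def update(self, x):
        """Update the mixture with value x; return True if x looks like foreground."""
        d = np.abs(x - self.mu)
        matched = d < MATCH_SIGMA * np.sqrt(self.var)
        self.w *= (1 - ALPHA)
        if matched.any():
            k = int(np.argmin(np.where(matched, d, np.inf)))
            self.w[k] += ALPHA
            self.mu[k] += ALPHA * (x - self.mu[k])
            self.var[k] += ALPHA * ((x - self.mu[k]) ** 2 - self.var[k])
        else:
            k = int(np.argmin(self.w))          # replace the weakest component
            self.mu[k], self.var[k], self.w[k] = x, 225.0, ALPHA
        self.w /= self.w.sum()
        # Components with high weight and low variance are treated as background
        bg = set(np.argsort(-self.w / np.sqrt(self.var))[:2])
        return (not matched.any()) or (k not in bg)
```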

Research for object auto tracking technology using video analysis and BLE device (근거리 무선통신 기기와 영상분석을 이용한 객체추적 기법에 관한 연구)

  • Choung, Kyung-Ho;Park, Jae-Yong;Kim, Jung-Gon
    • Proceedings of the Korean Society of Disaster Information Conference / 2015.11a / pp.96-99 / 2015
  • This paper introduces a technique for determining and tracking the same object across the video of different, non-overlapping cameras. In video analysis, color information is the most fundamental and important cue. In particular, color histograms are widely used for tracking and recognition, but their performance varies with object movement and illumination changes. To address this problem, this study determines object identity by combining two representative histogram matching algorithms (histogram matching in HSV space and the MCSHR algorithm in RGB space): a split histogram divides the object into three parts and computes a histogram for the whole object and for each part, and MCSHR is modified to use Hue-space rather than RGB-space histograms to derive the similarity. To make the model robust to illumination changes, a controlled-equalization technique is used to fuse the probability of the original image's histogram with that of the equalized histogram. In comparative experiments, the matching rate improved by 12.9% over similarity comparison using conventional histogram matching in HSV space. In addition, by fusing video information with recognition via smart devices, we propose a methodology for providing additional information for identifying the same object within the video.
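
A rough sketch of the split-histogram comparison follows: the object crop is divided into three horizontal parts, a Hue histogram is computed for the whole crop and for each part, and histogram correlation scores are averaged. The exact MCSHR modification and the controlled-equalization fusion weights are not reproduced here, and the bin count and averaging scheme are illustrative.

```python
# Sketch: compare two object crops using whole-object + 3-part Hue histograms.
import cv2
import numpy as np

def hue_hists(bgr_crop, bins=32):
    """Hue histograms for the full crop and its three horizontal parts."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    h = hsv.shape[0]
    parts = [hsv, hsv[: h // 3], hsv[h // 3: 2 * h // 3], hsv[2 * h // 3:]]
    hists = []
    for p in parts:
        hist = cv2.calcHist([p], [0], None, [bins], [0, 180])   # Hue channel only
        hists.append(cv2.normalize(hist, None).flatten())
    return hists

def similarity(crop_a, crop_b):
    """Average correlation over the whole-object and per-part Hue histograms."""
    scores = [cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)
              for ha, hb in zip(hue_hists(crop_a), hue_hists(crop_b))]
    return float(np.mean(scores))   # close to 1.0 -> likely the same object
```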
