• Title/Summary/Keyword: detection and tracking


Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.89-106
    • /
    • 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly. As video data grows, so do the requirements for analyzing and utilizing it. Because many industries lack the skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist. In this situation, demand for various computer vision technologies such as object detection and tracking, action detection, emotion detection, and Re-ID has also increased rapidly. However, object detection and tracking technology faces many difficulties that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Accordingly, action and emotion detection models built on object detection and tracking models also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer from performance degradation due to bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed model uses single-linkage hierarchical clustering based Re-ID and processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near real-time processing performance, and prevents tracking failures due to object departure and re-appearance, occlusion, etc. By continuously linking the action and facial emotion detection results of each object to the same object, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of the object image detected by the object tracking model in each frame, and applies single-linkage hierarchical clustering to the feature vectors from past frames to identify the same object whose track was lost. Through this process, an object that failed to be tracked because it left the scene and re-appeared, or was occluded, can be re-tracked. As a result, the action and facial emotion detection results of an object newly recognized after a tracking failure can be linked to those of the object that appeared in the past. As a way to improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue method that reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which allows facial emotions recognized through AWS Rekognition to be linked with object tracking information. The academic significance of this study is that, through the proposed processing techniques, the two-stage re-identification model can achieve real-time performance even in a high-cost environment that also performs action and facial emotion detection, without sacrificing accuracy by resorting to simple metrics. The practical implication is that industrial fields which require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively through the proposed model. The proposed model, with its high re-tracking accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where the integration of tracking information and extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments on the MOT Challenge dataset, which is used by many international conferences, are needed. We will also investigate the problems that the IoF algorithm cannot solve in order to develop a complementary algorithm. In addition, we plan to conduct further research applying this model to datasets from various fields related to intelligent video analysis.
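A minimal sketch of the single-linkage-clustering re-identification step described in the abstract above, using SciPy's agglomerative clustering on appearance feature vectors; the function name, cosine distance metric, and threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def reassign_track_ids(features, track_ids, distance_threshold=0.3):
    """Merge tracks whose appearance features fall in the same
    single-linkage cluster (illustrative re-ID step).

    features  : (N, D) array of appearance feature vectors, one per track.
    track_ids : list of N tracker-assigned IDs (new IDs appear after
                tracking failures such as occlusion or re-entry).
    Returns a dict mapping each tracker ID to a cluster-level identity.
    """
    # Pairwise cosine distances between track feature vectors.
    dists = pdist(features, metric="cosine")
    # Single-linkage agglomerative clustering.
    Z = linkage(dists, method="single")
    # Cut the dendrogram at the distance threshold.
    clusters = fcluster(Z, t=distance_threshold, criterion="distance")
    return {tid: int(c) for tid, c in zip(track_ids, clusters)}

# Example: track 7 (a re-appeared object) is merged with track 2,
# because their appearance features are nearly identical.
feats = np.random.rand(4, 512)
feats[3] = feats[1] + 0.01 * np.random.rand(512)
print(reassign_track_ids(feats, track_ids=[1, 2, 5, 7]))
```

Tracks whose features land in the same cluster are treated as one object, so detections produced after a tracking failure inherit the earlier identity and its accumulated action and emotion results.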

Person Tracking by Detection of Mobile Robot using RGB-D Cameras

  • Kim, Young-Ju
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.17-25
    • /
    • 2017
  • In this paper, we have implemented a low-cost mobile robot supporting person tracking by detection using RGB-D cameras and the ROS (Robot Operating System) framework. The mobile robot was developed on the Kobuki mobile base equipped with two Kinect devices and a high-performance controller. One Kinect device was used to detect and track a single person among people in the constrained working area by successively combining point-cloud data filtering and clustering, a HOG classifier, and Kalman-filter-based estimation; the other was used for the SLAM-based navigation supported in the ROS framework. In the performance evaluation, person tracking by detection was shown to run robustly in real time, and the navigation function showed a mean distance error below 50 mm. The implemented mobile robot is significant in that it takes an open-source-based, general-purpose, and low-cost approach.
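A hedged sketch of the HOG-plus-Kalman stage of such a person tracker, using OpenCV's built-in people detector on an ordinary RGB stream; the Kinect point-cloud filtering and clustering described in the abstract and the ROS integration are omitted, and all noise parameters are illustrative:

```python
import cv2
import numpy as np

# HOG person detector with OpenCV's default people SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Constant-velocity Kalman filter over the person's image-plane centre.
kf = cv2.KalmanFilter(4, 2)                      # state: [x, y, vx, vy]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

cap = cv2.VideoCapture(0)                        # RGB stream (camera index 0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                    # predicted centre this frame
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        x, y, w, h = boxes[0]                    # track the first detection only
        centre = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(centre)                       # fuse detection into the filter
    cx, cy = int(prediction[0, 0]), int(prediction[1, 0])
    cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)
    cv2.imshow("person tracking", frame)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
```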

Face Detection and Tracking using Skin Color Information and Haar-Like Features in Real-Time Video (실시간 영상에서 피부색상 정보와 Haar-Like Feature를 이용한 얼굴 검출 및 추적)

  • Kim, Dong-Hyeon;Im, Jae-Hyun;Kim, Dae-Hee;Kim, Tae-Kyung;Paik, Joon-Ki
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.146-149
    • /
    • 2009
  • Face detection and recognition in real-time video is one of the recent topics in the field of computer vision. In this paper, we propose a face detection and tracking algorithm using skin color and Haar-like features in real-time video sequences. The proposed algorithm additionally uses color-space information to enhance the result of combining Haar-like features with skin color. Experimental results show real-time processing speed and an improvement in the tracking rate.
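A small sketch of how skin-color information and Haar-like features can be combined as the abstract describes, assuming OpenCV's stock frontal-face cascade and a commonly used YCrCb skin range; the skin-ratio threshold is an illustrative choice, not the paper's value:

```python
import cv2

# Haar cascade shipped with OpenCV (stock installation path).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Combine a YCrCb skin-color mask with Haar-like feature detection."""
    # 1. Skin-color segmentation in YCrCb (typical Cr/Cb skin range).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # 2. Haar-cascade face detection on the grayscale frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    candidates = face_cascade.detectMultiScale(gray, 1.1, 5)

    # 3. Keep only candidates whose region is mostly skin-colored.
    faces = []
    for (x, y, w, h) in candidates:
        skin_ratio = cv2.countNonZero(skin_mask[y:y + h, x:x + w]) / float(w * h)
        if skin_ratio > 0.3:          # illustrative threshold
            faces.append((x, y, w, h))
    return faces
```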


Tracking Detection using Information Granulation-based Fuzzy Radial Basis Function Neural Networks (정보입자기반 퍼지 RBF 뉴럴 네트워크를 이용한 트랙킹 검출)

  • Choi, Jeoung-Nae;Kim, Young-Il;Oh, Sung-Kwun;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.12
    • /
    • pp.2520-2528
    • /
    • 2009
  • In this paper, we propose a tracking detection methodology using information granulation-based fuzzy radial basis function neural networks (IG-FRBFNN). A tracking test device was manufactured and used for the experiments in accordance with IEC 60112. We consider 12 features that can be used to decide whether the tracking phenomenon has occurred. These features are obtained by signal processing methods such as filtering, the Fast Fourier Transform (FFT), and wavelets. The effective features are used as inputs to the IG-FRBFNN, which confirms whether the tracking phenomenon has occurred. The learning of the premise and consequent parts of the rules in the IG-FRBFNN is carried out by the Fuzzy C-Means (FCM) clustering algorithm and the weighted least squares method (WLSE), respectively. Also, a Hierarchical Fair Competition-based Parallel Genetic Algorithm (HFC-PGA) is exploited to optimize the IG-FRBFNN. The features to be selected, the number of fuzzy rules, the polynomial order of the fuzzy rules, and the fuzzification coefficient used in FCM are optimized by the HFC-PGA. The tracking inference engine is implemented using LabVIEW and loaded onto an embedded system. We show the performance and feasibility of the tracking detection system through experiments.
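For readers unfamiliar with the premise-part learning mentioned above, the following is a minimal NumPy sketch of Fuzzy C-Means clustering; the IG-FRBFNN itself, the WLSE consequent learning, and the HFC-PGA optimization are outside this sketch, and all defaults are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and the membership
    matrix U (n_samples x c). m is the fuzzification coefficient."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # each row sums to 1
    for _ in range(max_iter):
        Um = U ** m
        # Centers: fuzzy-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                    # avoid division by zero
        # Standard FCM membership update: u_ik = d_ik^(-p) / sum_j d_jk^(-p).
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```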

Maneuvering detection and tracking in uncertain systems (불확정 시스템에서의 기동검출 및 추적)

  • Yoo, K. S.;Hong, I. S.;Kwon, O. K.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1991.10a
    • /
    • pp.120-124
    • /
    • 1991
  • In this paper, we consider the maneuver detection and target tracking problem in uncertain linear discrete-time systems. Maneuver detection is based on the $\chi^2$ test [2,7], for which Kalman filters have been utilized so far. Target tracking is performed by compensating the maneuvering input based on a maximum likelihood estimator. The Kalman filter is known to diverge when modelling errors exist, and to fail to detect maneuvers and track the target in uncertain systems. Thus this paper adopts the FIR filter [1], which is known to be robust to modelling errors, for the maneuver detection and target tracking problem. Various computer simulations show the superior performance of the FIR filter on this problem.
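A hedged illustration of a chi-square maneuver test on filter innovations, in the spirit of the detection scheme described above; the window length, significance level, and function interface are assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.stats import chi2

def maneuver_detected(innovations, S_matrices, alpha=0.01):
    """Chi-square test on normalized filter innovations over a sliding window.

    innovations : list of k innovation vectors (z - H x_pred), each length m.
    S_matrices  : list of k innovation covariance matrices (m x m).
    Returns True when the normalized innovation sum exceeds the chi-square
    threshold, i.e. the filter's no-maneuver model no longer explains the data.
    """
    m = len(innovations[0])
    stat = sum(float(v @ np.linalg.solve(S, v))
               for v, S in zip(innovations, S_matrices))
    threshold = chi2.ppf(1.0 - alpha, df=m * len(innovations))
    return stat > threshold
```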


Animal Tracking in Infrared Video based on Adaptive GMOF and Kalman Filter

  • Pham, Van Khien;Lee, Guee Sang
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.78-87
    • /
    • 2016
  • The major problems of recent object tracking methods are related to inefficient detection of moving objects due to occlusions, noisy backgrounds, and inconsistent body motion. This paper presents a robust method for the detection and tracking of a moving animal in infrared videos. The tracking system is based on adaptive optical flow generation, Gaussian mixture modeling, and Kalman filtering. The adaptive Gaussian model of optical flow (GMOF) is used to extract the foreground, and noise is removed based on the object motion. The Kalman filter enables prediction of the object position in the presence of partial occlusions, and the size of the detected animal is adapted automatically along the image sequence. The presented method is evaluated in various environments with unstable backgrounds caused by wind and illumination changes. The results show that our approach is more robust to background noise and performs better than previous methods.
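A simplified sketch of a per-pixel Gaussian model over optical-flow magnitude, approximating the adaptive GMOF idea in the abstract with a single Gaussian per pixel; the Farneback flow, learning rate, and threshold are illustrative, and the Kalman prediction step is omitted:

```python
import cv2
import numpy as np

def flow_foreground_mask(prev_gray, gray, bg_mean, bg_var, lr=0.05, k=2.5):
    """Adaptive Gaussian model over optical-flow magnitude (GMOF-style sketch).

    Pixels whose flow magnitude deviates from the per-pixel Gaussian
    background model by more than k standard deviations are foreground;
    background pixels update the running mean/variance with rate lr.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    fg = np.abs(mag - bg_mean) > k * np.sqrt(bg_var + 1e-6)
    # Update the background model only where the pixel looks like background.
    bg_mean[~fg] = (1 - lr) * bg_mean[~fg] + lr * mag[~fg]
    bg_var[~fg] = (1 - lr) * bg_var[~fg] + lr * (mag[~fg] - bg_mean[~fg]) ** 2
    return fg.astype(np.uint8) * 255, bg_mean, bg_var
```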

Fundamental research of the target tracking system using a CMOS vision chip for edge detection (윤곽 검출용 CMOS 시각칩을 이용한 물체 추적 시스템 요소 기술 연구)

  • Hyun, Hyo-Young;Kong, Jae-Sung;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.18 no.3
    • /
    • pp.190-196
    • /
    • 2009
  • In a conventional camera system, a target tracking system consists of a camera part and an image processing part. In the field of real-time image processing, however, a vision chip for edge detection designed by imitating the human retina is superior to conventional digital image processing systems, because the retina processes information in parallel. In this paper, we present a high-speed target tracking system using the edge detection function of a CMOS vision chip.
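On the software side, tracking from an edge map reduces to locating the target in each readout; the sketch below finds the centroid of strong edge responses with OpenCV image moments. The Canny step and the file name stand in for the chip's analog edge output and are purely illustrative:

```python
import cv2

def edge_target_center(edge_map, threshold=40):
    """Locate the target as the centroid of strong edge responses in an
    edge image (here computed in software; the vision chip would deliver
    the edge map directly from its retina-like analog circuit)."""
    _, binary = cv2.threshold(edge_map, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(binary, binaryImage=True)
    if m["m00"] == 0:
        return None                               # no edges above threshold
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# Illustrative usage with a software edge detector on a grayscale frame.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(frame, 50, 150)
print(edge_target_center(edges))
```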

Maneuvering Target Tracking in Uncertain Parameter Systems Using Robust $H_\infty$ FIR Filters (견실한 $H_\infty$ FIR 필터를 이용한 불확실성 기동표적의 추적)

  • Yoo, Kyung-Sang;Kim, Dae-Woo;Kwon, Oh-Kyu
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.3
    • /
    • pp.270-277
    • /
    • 1999
  • This paper deals with the maneuver detection and target tracking problem in uncertain parameter systems, using a robust $H_\infty$ FIR filter to improve the unacceptable tracking performance caused by parameter uncertainty. The tracking filter used in the current paper is based on the robust $H_\infty$ FIR filter proposed by Kwon et al. [1,2] for estimating the state signal in uncertain systems with parameter uncertainty, and the basic scheme of the proposed method is the input estimation approach. The tracking performance of the proposed maneuver detection and target tracking method is compared with other techniques, the Bogler algorithm [4] and the FIR tracking filter [2], via simulations to exemplify the good tracking performance of the proposed method over the other techniques.
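To make the finite-memory idea concrete, the following is an illustrative plain FIR (batch least-squares) estimator for a constant-velocity target; it shows why a finite measurement window limits the influence of modelling errors, but it is not the robust $H_\infty$ FIR filter of Kwon et al. nor the input-estimation scheme used in the paper:

```python
import numpy as np

def fir_state_estimate(z_window, dt):
    """Finite-memory (FIR) estimate of position and velocity of a
    constant-velocity target from the last N position measurements.

    Unlike a recursive Kalman filter, this batch least-squares estimate
    uses only the recent window, so old modelling errors cannot accumulate.
    """
    N = len(z_window)
    t = np.arange(N) * dt                     # measurement times in the window
    H = np.column_stack([np.ones(N), t])      # model: z_k = x0 + v * t_k + noise
    x0, v = np.linalg.lstsq(H, np.asarray(z_window), rcond=None)[0]
    # State (position, velocity) referred to the newest measurement time.
    return x0 + v * t[-1], v
```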


Moving Object Detection and Tracking in Image Sequence with complex background (복잡한 배경을 가진 영상 시퀀스에서의 이동 물체 검지 및 추적)

  • Jung, Young-Kee;Ho, Yo-Sung
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.615-618
    • /
    • 1999
  • In this paper, an object detection and tracking algorithm is presented that exhibits robust behavior for image sequences with complex backgrounds. The proposed algorithm is composed of three parts: moving object detection, object tracking, and motion analysis. The moving object detection algorithm is implemented using a temporal median background method, which is suitable for real-time applications. In the motion analysis, we propose a new technique for removing temporal clutter, such as a swaying plant or light reflected from a background object. In addition, we design a multiple-vehicle tracking system based on Kalman filtering. Computer simulation of the proposed scheme shows its robustness on MPEG-7 test image sequences.
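A minimal sketch of the temporal-median background method mentioned in the abstract; the window length and difference threshold are illustrative, and the clutter removal and Kalman-based vehicle tracking stages are not shown:

```python
import numpy as np
from collections import deque

class TemporalMedianBackground:
    """Temporal-median background model: the background is the per-pixel
    median of the last `window` grayscale frames."""

    def __init__(self, window=25, threshold=30):
        self.frames = deque(maxlen=window)
        self.threshold = threshold

    def apply(self, gray):
        """Return a binary moving-object mask for the current grayscale frame."""
        self.frames.append(gray.astype(np.uint8))
        background = np.median(np.stack(self.frames), axis=0).astype(np.uint8)
        # Foreground: pixels that differ strongly from the median background.
        diff = np.abs(gray.astype(np.int16) - background.astype(np.int16))
        return (diff > self.threshold).astype(np.uint8) * 255
```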


Deep-learning Sliding Window Based Object Detection and Tracking for Generating Trigger Signal of the LPR System (LPR 시스템 트리거 신호 생성을 위한 딥러닝 슬라이딩 윈도우 방식의 객체 탐지 및 추적)

  • Kim, Jinho
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.17 no.4
    • /
    • pp.85-94
    • /
    • 2021
  • The LPR system's trigger sensor occasionally malfunctions due to the heavy weight of vehicles or obsolescent equipment. If the hardware sensor is replaced with a deep-learning-based software sensor for generating the trigger signal, LPR system maintenance becomes much easier. In this paper we propose a deep-learning sliding-window-based object detection and tracking algorithm for generating the LPR system's trigger signal. License plate recognition results for gate-passing vehicles are combined with the normal tracking algorithm to catch the position of the vehicle on the trigger line. The experimental results show that the deep-learning sliding-window-based trigger signal generation performance was 100% for gate-passing vehicles, with 5.5% trigger signal position errors caused by minimum bounding box location errors in the vehicle detection process.
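A hedged sketch of how a tracked bounding box crossing a virtual trigger line can be turned into a trigger signal; the box format, line position, and example coordinates are invented for illustration and are not the paper's implementation:

```python
def crossed_trigger_line(prev_box, curr_box, trigger_y):
    """Return True when a tracked vehicle's bounding-box bottom edge passes
    the horizontal trigger line between two consecutive frames.

    Boxes are (x1, y1, x2, y2) in image coordinates with y growing downward.
    """
    prev_bottom = prev_box[3]
    curr_bottom = curr_box[3]
    return prev_bottom < trigger_y <= curr_bottom

# Example: the box bottom moves from y=590 to y=612 across a trigger line
# at y=600, so a trigger signal is fired for this frame.
print(crossed_trigger_line((100, 400, 300, 590), (102, 420, 305, 612), 600))
```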