• Title/Summary/Keyword: feature-based tracking


LSTM Network with Tracking Association for Multi-Object Tracking

  • Farhodov, Xurshedjon;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
• Journal of Korea Multimedia Society / v.23 no.10 / pp.1236-1249 / 2020
  • In recent object tracking research, strategies based on Convolutional and Recurrent Neural Networks have become the standard approach to the field's notable challenges: occlusion; object, camera-viewpoint, and lighting variations; and changes in the number of targets. This paper proposes an LSTM-network-based tracking association method capable of real-time multi-object tracking, in which an LSTM network coupled to the tracker supports long-term tracking while addressing these challenges. The LSTM network is defined in Keras as a sequence of layers, with the Sequential class serving as a container for them, and the proposed structure integrates tracking association on top of the Keras neural-network library. Associating the tracking process with the LSTM network's learned features yielded outstanding real-time detection and tracking performance. The main focus of this work was learning the locations, appearance, and motion details of trackable objects, and then predicting the locations of object bounding boxes from their initial positions. The performance of the joint object tracking system shows that the LSTM network is powerful enough for real-time multi-object tracking.
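
As a library-free illustration of the recurrence such a network computes, here is a single LSTM step in NumPy; the gate layout, sizes, and the random "box feature" inputs are hypothetical stand-ins, not the paper's actual Keras model:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4n, d) input weights, U: (4n, n) recurrent weights,
    b: (4n,) bias, stacked as [input, forget, cell, output] gates."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f = sig(z[:n]), sig(z[n:2 * n])     # input and forget gates
    g = np.tanh(z[2 * n:3 * n])            # candidate cell state
    o = sig(z[3 * n:])                     # output gate
    c = f * c_prev + i * g                 # long-term memory update
    h = o * np.tanh(c)                     # hidden state (the learned feature)
    return h, c

rng = np.random.default_rng(0)
d, n = 4, 8                                # feature dim, hidden size
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for _ in range(5):                         # feed a short sequence of "box features"
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)
```

The cell state `c` is what carries appearance and motion history across frames; in Keras, stacking such cells is what `Sequential([LSTM(...), ...])` expresses.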

Invariant-Feature Based Object Tracking Using Discrete Dynamic Swarm Optimization

  • Kang, Kyuchang;Bae, Changseok;Moon, Jinyoung;Park, Jongyoul;Chung, Yuk Ying;Sha, Feng;Zhao, Ximeng
• ETRI Journal / v.39 no.2 / pp.151-162 / 2017
  • With the remarkable growth of rich media in recent years, people are increasingly exposed to visual information from their environment. Visual information continues to play a vital role in rich media because people's real interest lies in dynamic information. This paper proposes a novel discrete dynamic swarm optimization (DDSO) algorithm for video object tracking using invariant features. The proposed approach is designed to track objects more robustly than traditional algorithms with respect to illumination changes, background noise, and occlusion. DDSO is integrated with a matching procedure that geometrically eliminates inappropriate feature points, and the proposed novel fitness function helps exclude the influence of noisy, mismatched feature points. Test results showed that our approach overcomes changes in illumination, background noise, and occlusion more effectively than traditional methods, including color-tracking and invariant-feature-tracking methods.
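
As a rough, self-contained sketch of the idea, the snippet below runs a discrete swarm over a pixel grid with a fitness that caps the influence of far-away (likely mismatched) points; the grid size, point layout, fitness shape, and update rule are illustrative assumptions, not the paper's actual DDSO formulation:

```python
import random
random.seed(1)

# toy data: 20 matched feature points clustered on the object,
# plus 5 noisy mismatches scattered over the frame
TRUE = (40, 25)
pts = [(TRUE[0] + random.randint(-3, 3), TRUE[1] + random.randint(-3, 3)) for _ in range(20)]
pts += [(random.randint(0, 100), random.randint(0, 100)) for _ in range(5)]

def fitness(x, y):
    # graded score: each point contributes at most R, and distant
    # (likely mismatched) points contribute nothing at all
    R = 20
    return sum(max(0, R - abs(px - x) - abs(py - y)) for px, py in pts)

def ddso(iters=60, n_particles=30):
    parts = [(random.randint(0, 100), random.randint(0, 100)) for _ in range(n_particles)]
    best = max(parts, key=lambda p: fitness(*p))
    for _ in range(iters):
        moved = []
        for x, y in parts:
            # discrete move: one step toward the global best plus random jitter
            x += (best[0] > x) - (best[0] < x) + random.randint(-2, 2)
            y += (best[1] > y) - (best[1] < y) + random.randint(-2, 2)
            moved.append((x, y))
        parts = moved
        cand = max(parts, key=lambda p: fitness(*p))
        if fitness(*cand) > fitness(*best):
            best = cand
    return best

est = ddso()
print(est, fitness(*est))
```

The capped per-point contribution is what lets the swarm settle on the feature cluster even though a quarter of the "matches" here are pure noise.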

Implementation of Drowsiness Driving Warning System based on Improved Eyes Detection and Pupil Tracking Using Facial Feature Information (얼굴 특징 정보를 이용한 향상된 눈동자 추적을 통한 졸음운전 경보 시스템 구현)

  • Jeong, Do Yeong;Hong, KiCheon
• Journal of Korea Society of Digital Industry and Information Management / v.5 no.2 / pp.167-176 / 2009
  • In this paper, a system that detects driver drowsiness is implemented based on the automatic extraction and tracking of the pupils. The research also addresses compensation for the illumination changes and background noise that naturally occur while driving. Based on Haar-like features, the system automatically locates the driver's face and eyes within a complex background, and then decides whether the driver is drowsy by recognizing the characteristics of the pupil region and detecting the pupils and their movements. The implemented system has been evaluated and verified as practical for preventing drowsy driving.
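
The drowsiness decision itself can be illustrated with a PERCLOS-style criterion on the per-frame eye-state stream; the window length and threshold below are illustrative guesses, not the paper's actual parameters:

```python
from collections import deque

def drowsiness_alarms(closed_flags, window=30, ratio=0.7):
    """Raise an alarm whenever the fraction of eyes-closed frames within the
    last `window` frames exceeds `ratio` (a PERCLOS-style criterion)."""
    recent = deque(maxlen=window)
    alarms = []
    for closed in closed_flags:
        recent.append(closed)
        alarms.append(len(recent) == window and sum(recent) / window > ratio)
    return alarms

# 40 alert frames followed by a sustained eye closure
frames = [0] * 40 + [1] * 40
alarms = drowsiness_alarms(frames)
print(alarms.index(True))
```

A sliding window rather than a single-frame test is what keeps normal blinks from triggering the warning.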

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
• KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications, such as robotics, navigation, tracking, and Augmented Reality. This paper proposes a visual-inertial integration system suited to dynamically moving sensors. The orientation estimated by an Inertial Measurement Unit (IMU) is used to calculate the essential matrix from the intrinsic parameters of the camera. Using epipolar geometry, outlier feature-point matches are eliminated from the image sequence; the IMU thus helps remove erroneous point matches in images of a dynamic scene at the outset. After the outliers are removed, the remaining feature-point matches are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented and tested against existing methods, and the experimental results show the effectiveness of the proposed technique.
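
The outlier-rejection step can be sketched as follows, assuming a known rotation (as the IMU would supply) and synthetic matches; the motion, point cloud, and threshold are hypothetical choices for illustration:

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# hypothetical motion between two frames: rotation R (as the IMU would
# report) and translation t; the essential matrix is E = [t]x R
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R

# correct matches: project random 3D points into both (normalized) views
rng = np.random.default_rng(0)
X1 = rng.uniform(-1, 1, size=(10, 3)) + np.array([0, 0, 6.0])
x1 = X1 / X1[:, 2:]                 # homogeneous normalized coords, view 1
X2 = X1 @ R.T + t                   # same points in the second camera frame
x2 = X2 / X2[:, 2:]

x2[[0, 1]] = x2[[1, 0]]             # corrupt two matches (a "dynamic scene")

# epipolar residual |x2^T E x1| per match; true matches satisfy it exactly
res = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
inliers = res < 1e-8
print(inliers.sum())
```

Only the surviving inlier matches would then be fed into the fundamental-matrix and pose estimation stages.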

A robust Correlation Filter based tracker with rich representation and a relocation component

  • Jin, Menglei;Liu, Weibin;Xing, Weiwei
• KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.10 / pp.5161-5178 / 2019
  • Correlation Filters were recently demonstrated to have good characteristics for video object tracking: Correlation Filter based trackers offer high accuracy and robustness while maintaining high speed. However, some improvements are still needed. First, most such trackers cannot handle multi-scale problems; to solve this, our algorithm combines position estimation with scale estimation and, unlike the traditional method, tracks the scale of the object more quickly and effectively. Second, the feature representation in traditional algorithms is relatively simple, so tracking performance is easily degraded in complex scenarios; in this paper, we design a novel and powerful feature that significantly improves tracking performance. Finally, traditional trackers often suffer from model drift caused by occlusion and other complex scenarios; we introduce a relocation component that detects the object at other locations, such as the secondary peak of the response map, which partly alleviates the model drift problem.
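
The core of any Correlation Filter tracker is locating the peak of a frequency-domain correlation response; a minimal NumPy sketch, with random texture standing in for a real feature map and the raw patch standing in for a learned filter:

```python
import numpy as np

rng = np.random.default_rng(2)
frame = rng.normal(size=(64, 64))          # random texture standing in for features
ty, tx = 20, 35                            # true object position (top-left corner)
template = frame[ty:ty + 8, tx:tx + 8]     # 8x8 patch acting as the filter

F = np.fft.fft2(frame)
T = np.fft.fft2(template, s=frame.shape)   # zero-pad the template to frame size
response = np.real(np.fft.ifft2(F * np.conj(T)))   # circular cross-correlation

peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)
```

A relocation component like the paper's would additionally inspect secondary peaks of `response` when the primary peak's confidence drops, e.g. during occlusion.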

Tracking of eyes based on the iterated spatial moment using weighted gray level (명암 가중치를 이용한 반복 수렴 공간 모멘트기반 눈동자의 시선 추적)

  • Choi, Woo-Sung;Lee, Kyu-Won
• Journal of the Korea Institute of Information and Communication Engineering / v.14 no.5 / pp.1240-1250 / 2010
  • In this paper, an eye-tracking method is presented that uses an iterated spatial moment with weighted gray levels to accurately detect and track a user's eyes against a complicated background. The face region is detected using Haar-like features before extracting the eye region, to minimize the region of interest in the input image from a CCD camera. The eye region is then detected using an eigeneye based on the eigenface representation of principal component analysis, and the feature points of the eyes are located in the darkest part of the eye region. The eyes are then tracked accurately using the iterated spatial moment with weighted gray levels.
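
The iterated weighted-moment idea can be sketched on a synthetic eye patch as below; the image, window size, and weighting are illustrative assumptions rather than the paper's exact procedure:

```python
# synthetic 40x40 eye region: bright background (gray 230) with a dark
# pupil disk (gray 5) centred at row 22, col 17
H, W = 40, 40
CY, CX = 22, 17
img = [[5 if (r - CY) ** 2 + (c - CX) ** 2 <= 16 else 230 for c in range(W)]
       for r in range(H)]

def iterated_moment_center(img, y, x, half=8, iters=10):
    """Repeatedly move (y, x) to the darkness-weighted centroid of a window
    around it, weighting each pixel by 255 - gray so dark pixels dominate."""
    for _ in range(iters):
        cy, cx = round(y), round(x)
        m00 = m10 = m01 = 0.0
        for r in range(max(0, cy - half), min(len(img), cy + half + 1)):
            for c in range(max(0, cx - half), min(len(img[0]), cx + half + 1)):
                w = 255 - img[r][c]
                m00 += w
                m10 += w * r
                m01 += w * c
        y, x = m10 / m00, m01 / m00
    return y, x

y, x = iterated_moment_center(img, 15.0, 12.0)   # deliberately offset start
print(round(y), round(x))
```

Each iteration pulls the window toward the dark pupil, so the centroid converges even from a start several pixels off.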

Tracking of eyes based on the spatial moment using weighted gray level (명암 가중치를 이용한 공간 모멘트기반 눈동자 추적)

  • Choi, Woo-Sung;Lee, Kyu-Won;Kim, Kwan-Seop
• Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.198-201 / 2009
  • In this paper, an eye-tracking method is presented that uses an iterated spatial moment with weighted gray levels to accurately detect and track a user's eyes against a complicated background. The face region is detected using Haar-like features before extracting the eye region, to minimize the region of interest in the input image from a CCD camera. The eye region is then detected using an eigeneye based on the eigenface representation of principal component analysis, and the feature points of the eyes are then located in the darkest part of the eye region. The eyes are tracked accurately using the iterated spatial moment with weighted gray levels.


Enhanced Representation for Object Tracking (물체 추적을 위한 강화된 부분공간 표현)

  • Yun, Frank;Yoo, Haan-Ju;Choi, Jin-Young
• Proceedings of the IEEK Conference / 2009.05a / pp.408-410 / 2009
  • We present an efficient and robust measurement model for visual tracking. This approach builds on and extends work on subspace representations for measurement models. Subspace-based tracking algorithms were introduced to the visual tracking literature a decade ago and show considerable tracking performance owing to their robust matching; however, the measures used in their measurement models are often restricted to a few approaches. We propose a novel object-matching measure, the Angle In Feature Space, which aims to improve the discriminability of matching in the subspace. Our tracking algorithm can therefore distinguish the target from similar background clutter that often causes erroneous drift under the conventional Distance From Feature Space measure. Experiments demonstrate the effectiveness of the proposed tracking algorithm under severely cluttered backgrounds.
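
The distinction between the two measures can be sketched like this; the subspace, vectors, and noise levels are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
B = np.linalg.qr(rng.normal(size=(5, 2)))[0]   # hypothetical 2D appearance subspace in R^5

def dffs(v):
    """Distance From Feature Space: norm of the residual after projecting v."""
    return np.linalg.norm(v - B @ (B.T @ v))

def aifs(v):
    """Angle In Feature Space: angle between v and its projection onto the
    subspace; unlike DFFS it is invariant to the overall magnitude of v."""
    p = B @ (B.T @ v)
    c = np.dot(v, p) / (np.linalg.norm(v) * np.linalg.norm(p))
    return np.arccos(np.clip(c, -1.0, 1.0))

target = B @ np.array([1.0, 2.0])              # a view lying in the subspace
bright_target = 5.0 * target + 0.3 * rng.normal(size=5)   # scaled up, slightly noisy
dim_clutter = 0.1 * rng.normal(size=5)         # low-energy background patch

# DFFS can prefer the dim clutter simply because its residual is small,
# while the angle still cleanly separates target from clutter
print(aifs(bright_target) < aifs(dim_clutter))
```

Because the angle normalizes out vector length, a strongly lit target stays close to the subspace while low-energy clutter does not.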


Vehicle Tracking using Sequential Monte Carlo Filter (순차적인 몬테카를로 필터를 사용한 차량 추적)

  • Lee, Won-Ju;Yun, Chang-Yong;Kim, Eun-Tae;Park, Min-Yong
• Proceedings of the KIEE Conference / 2006.10c / pp.434-436 / 2006
  • In a visual driver-assistance system, separating moving objects from fixed objects is an important problem for maintaining multiple hypotheses about the state. Color- and edge-based trackers can often be "distracted", causing them to track the wrong object, and many researchers have addressed this by using multiple features, since it is unlikely that all of them will be distracted at the same time. In this paper, we improve the accuracy and robustness of real-time tracking by combining a color-histogram feature with a brightness feature based on optical flow within a Sequential Monte Carlo framework. A fixed object is also excluded from tracking over time by adaptively reducing its particle count. This framework makes two main contributions: a prediction framework that separates moving objects from fixed objects, and a measurement framework that extracts information from the visual data under partial occlusion.
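
A minimal one-dimensional sketch of such a Sequential Monte Carlo step with fused cues; the Gaussian cue models, motion noise, and particle count are illustrative assumptions, not the paper's actual likelihoods:

```python
import math
import random
random.seed(4)

TRUE_POS = 10.0

def color_like(p):    # stand-in for the color-histogram cue
    return math.exp(-0.5 * ((p - TRUE_POS) / 2.0) ** 2)

def motion_like(p):   # stand-in for the optical-flow brightness cue
    return math.exp(-0.5 * ((p - TRUE_POS) / 3.0) ** 2)

def combined(p):      # fuse both cues so a single distracted cue cannot dominate
    return color_like(p) * motion_like(p)

def smc_step(particles, likelihood, motion_std=1.0):
    """One Sequential Monte Carlo step: diffuse each particle with the motion
    model, weight it by the combined likelihood, then resample by weight."""
    moved = [p + random.gauss(0.0, motion_std) for p in particles]
    weights = [likelihood(p) for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0.0, 20.0) for _ in range(200)]
for _ in range(15):
    particles = smc_step(particles, combined)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))
```

Multiplying the cue likelihoods means a particle survives resampling only if both features agree, which is the fusion idea the abstract describes.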


Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
• The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.53-60 / 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue within it. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varied lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect the pupils under variable lighting conditions; the detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth, so each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is performed in the Gabor space in the vicinity of the predicted location, and local graphs of the identified features are extracted to capture the spatial relationships among the detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach for real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
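
The per-feature Kalman filtering step can be sketched with a one-dimensional constant-velocity filter; the noise parameters and measurements below are illustrative, and the pupil constraint is omitted:

```python
# minimal 1D constant-velocity Kalman filter for one feature coordinate;
# state is [position, velocity], measurement is the detected position
def kalman_1d(measurements, q=0.01, r=0.5):
    x, v = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    out = []
    for z in measurements[1:]:
        # predict (dt = 1): position advances by velocity, P -> F P F^T + Q
        x, v = x + v, v
        P = [[P[0][0] + 2 * P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # update with the measured position z (H = [1, 0])
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S
        innov = z - x
        x, v = x + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out

# a feature moving right at 1 px/frame, observed with small detection noise
noise = [0.3, -0.2, 0.1, -0.3, 0.2, 0.1, -0.1, 0.2, -0.2, 0.1, 0.0, -0.1]
zs = [i + n for i, n in zip(range(12), noise)]
est = kalman_1d(zs)
print(round(est[-1], 1))
```

The predicted position is what narrows the Gabor-space search to the vicinity of each feature's expected location in the next frame.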
