• Title/Summary/Keyword: Template Tracking

Search Results: 107

Vision-based recognition of a simple non-verbal intent representation by head movements (고개운동에 의한 단순 비언어 의사표현의 비전인식)

  • Yu, Gi-Ho;No, Deok-Su;Lee, Seong-Cheol
    • Journal of the Ergonomics Society of Korea / v.19 no.1 / pp.91-100 / 2000
  • In this paper, an intent recognition system that recognizes a human's head movements as simple non-verbal intent representations is presented. The system recognizes five basic intent representations, i.e., strong/weak affirmation, strong/weak negation, and ambiguity, by image processing of nodding or shaking movements of the head. The vision system for tracking the head movements is composed of a CCD camera, an image processing board, and a personal computer. A modified template matching method, which replaces the reference image with the target image found in the previous step, is used for robust tracking of the head movements. To improve the processing speed, the search is performed on a pyramid representation of the original image. By inspecting the variance of the head movement trajectories, we can recognize the two basic intent representations, affirmation and negation. Also, by examining the speed of the head movements, we show the possibility of recognizing the strength of the intent representation.

  • PDF
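The "modified template matching" described above, re-seeding the reference image with the patch matched in the previous frame, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed toy data, not the paper's implementation, and the pyramid speed-up is omitted:

```python
import numpy as np

def ncc(patch, tmpl):
    # Normalized cross-correlation between two equal-sized patches.
    p = patch - patch.mean()
    t = tmpl - tmpl.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def best_match(frame, tmpl):
    # Exhaustive NCC search; returns the best top-left corner.
    th, tw = tmpl.shape
    best, pos = -2.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            s = ncc(frame[y:y + th, x:x + tw], tmpl)
            if s > best:
                best, pos = s, (y, x)
    return pos

def track(frames, tmpl):
    # After each frame, the matched patch becomes the new template,
    # which keeps tracking robust as the target's appearance drifts.
    th, tw = tmpl.shape
    positions = []
    for frame in frames:
        y, x = best_match(frame, tmpl)
        positions.append((y, x))
        tmpl = frame[y:y + th, x:x + tw].astype(float)
    return positions

# Toy sequence: a fixed 5x5 texture moving over fresh noise each frame.
rng = np.random.default_rng(0)
target = rng.random((5, 5)) + 2.0          # brighter than the background
frames = []
for y, x in [(2, 2), (3, 4), (5, 6)]:
    f = rng.random((20, 20))
    f[y:y + 5, x:x + 5] = target
    frames.append(f)
positions = track(frames, frames[0][2:7, 2:7].copy())
```

The template update is what distinguishes this from fixed-template matching: each frame's matched patch, not the original reference, is correlated against the next frame.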

Object Tracking Using Template Based on Adaptive 3-Frame Difference (Adaptive 3-Frame Difference 기반 템플릿을 이용한 객체 추적)

  • Kim, Heon-Gi;Lee, Jin-Hyeong;Gang, Ji-Un;Jo, Seong-Won;Kim, Jae-Min;Jeong, Seon-Tae;Jang, Yong-Seok
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.357-360 / 2007
  • In object tracking, two important problems are building a template by detecting the object to be tracked, and distinguishing and tracking the object when two objects overlap or the object is partially hidden by the background. When plain frame difference is used to detect an object and build its template, slowly moving objects cannot be separated well. To solve this, this paper proposes an algorithm that generates an accurate object template using an adaptive 3-frame difference.

  • PDF
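The core idea, three-frame differencing with an AND that suppresses the "ghost" left at the object's previous position, can be sketched as follows. The adaptive threshold shown here (a multiple of the mean inter-frame difference) is an assumption for illustration, not the paper's exact rule:

```python
import numpy as np

def adaptive_threshold(f1, f2, k=2.0):
    # Assumed adaptivity: scale the mean absolute inter-frame difference.
    return k * np.abs(f2.astype(int) - f1.astype(int)).mean()

def three_frame_difference(f1, f2, f3, thresh):
    # A pixel is foreground only if it changed between frames 1-2 AND
    # frames 2-3; plain two-frame differencing would also flag the
    # object's old position (the "ghost").
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d23 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d12 & d23

# A 3x3 object stepping across three frames.
f1 = np.zeros((10, 10), dtype=np.uint8); f1[1:4, 1:4] = 200
f2 = np.zeros((10, 10), dtype=np.uint8); f2[3:6, 3:6] = 200
f3 = np.zeros((10, 10), dtype=np.uint8); f3[5:8, 5:8] = 200
mask = three_frame_difference(f1, f2, f3, adaptive_threshold(f1, f2))
```

The resulting mask covers only the object's position in the middle frame, which is what makes it usable as a template.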

Multi-Object Tracking Algorithm for Vehicle Detection (차량 검출을 위한 다중객체추적 알고리즘)

  • Lee, Geun-Hoo;Kim, Gyu-Yeong;Park, Hong-Min;Park, Jang-Sik;Kim, Hyun-Tae;Yu, Yun-Sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.816-819 / 2011
  • Image recognition systems using CCTV cameras have been introduced to minimize loss of life and property as well as traffic jams in tunnels. In this paper, a multi-object tracking algorithm is proposed to track multiple vehicles. The proposed algorithm detects multiple vehicles based on AdaBoost and tracks them using template matching. Simulation results show that the proposed algorithm is useful for tracking multiple vehicles.

  • PDF
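The detect-then-track pattern can be sketched as below. The detections stand in for the AdaBoost stage (which is not reimplemented here), and each vehicle is then followed independently by SSD template matching in a small search window, a simplified stand-in on toy data:

```python
import numpy as np

def ssd_match(frame, tmpl, start, radius):
    # Search for tmpl near `start` (top-left corner) within +/- radius.
    th, tw = tmpl.shape
    best, pos = None, start
    for y in range(max(0, start[0] - radius),
                   min(frame.shape[0] - th, start[0] + radius) + 1):
        for x in range(max(0, start[1] - radius),
                       min(frame.shape[1] - tw, start[1] + radius) + 1):
            s = ((frame[y:y + th, x:x + tw] - tmpl) ** 2).sum()
            if best is None or s < best:
                best, pos = s, (y, x)
    return pos

def track_multi(frames, detections, size, radius=3):
    # `detections`: top-left corners from the detection stage
    # (AdaBoost in the paper; taken as given here).
    tmpls = [frames[0][y:y + size, x:x + size].astype(float)
             for y, x in detections]
    tracks = [[d] for d in detections]
    for frame in frames[1:]:
        for i, tmpl in enumerate(tmpls):
            tracks[i].append(ssd_match(frame.astype(float), tmpl,
                                       tracks[i][-1], radius))
    return tracks

# Two distinct 4x4 "vehicles" moving over fresh noise.
rng = np.random.default_rng(1)
a, b = rng.random((4, 4)) + 2, rng.random((4, 4)) + 4
def make_frame(pa, pb):
    f = rng.random((30, 30))
    f[pa[0]:pa[0] + 4, pa[1]:pa[1] + 4] = a
    f[pb[0]:pb[0] + 4, pb[1]:pb[1] + 4] = b
    return f
frames = [make_frame((2, 2), (20, 20)), make_frame((3, 4), (21, 19))]
tracks = track_multi(frames, [(2, 2), (20, 20)], size=4)
```

Restricting each match to a window around the previous position is what keeps per-vehicle tracking cheap compared with re-detecting every frame.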

A Robust Correlation-based Video Tracking (강인한 상관방식 추적기를 이용한 움직이는 물체 추적)

  • Park Dong-Jo;Cho Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.11 no.7 / pp.587-594 / 2005
  • In this paper, a robust correlation-based video tracker is proposed to track a moving object in correlated image sequences. A correlation-based video tracking algorithm seeks to align the incoming target image with the reference target block image, but it has two critical problems, the so-called false-peak problem and the drift phenomenon (correlator walk-off). The false-peak problem is generally caused by background pixels whose intensity is highly correlated with that of the moving target, and the drift phenomenon occurs when tracking errors accumulate from frame to frame because of the nature of the correlation process. First, the false-peak problem of ordinary correlation-based video tracking is investigated using a simple mathematical analysis. Then, we suggest a robust selective-attention correlation measure with a gradient preprocessor, combined with a drift-removal compensator, to overcome the walk-off problem. The drift compensator adaptively controls the template block size according to the size of the target of interest. The robustness of the proposed method for practical applications is demonstrated on two real image sequences.
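The gradient-preprocessing idea, correlating edge-strength images instead of raw intensities so that smooth background regions with target-like brightness cannot produce false peaks, can be sketched as below. This is a simplified stand-in for the paper's selective-attention measure; the drift compensator is omitted:

```python
import numpy as np

def grad_mag(img):
    # Forward-difference gradient magnitude (last row/column replicated).
    gy = np.diff(img, axis=0, append=img[-1:, :])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    return np.hypot(gx, gy)

def ncc_search(search, tmpl):
    # Exhaustive normalized cross-correlation; best top-left corner.
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tn = np.sqrt((t * t).sum())
    best, pos = -2.0, (0, 0)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            p = search[y:y + th, x:x + tw]
            p = p - p.mean()
            denom = tn * np.sqrt((p * p).sum())
            if denom > 0:
                s = (p * t).sum() / denom
                if s > best:
                    best, pos = s, (y, x)
    return pos

# Track in the gradient domain: the template is cropped from the
# gradient image, exactly as a tracker would crop the previous frame.
rng = np.random.default_rng(2)
scene = rng.random((20, 20))
scene[5:11, 7:13] += rng.random((6, 6)) * 5   # textured target
grad = grad_mag(scene)
pos = ncc_search(grad, grad[5:11, 7:13].copy())
```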

Robust Eye Region Discrimination and Eye Tracking to the Environmental Changes (환경변화에 강인한 눈 영역 분리 및 안구 추적에 관한 연구)

  • Kim, Byoung-Kyun;Lee, Wang-Heon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1171-1176 / 2014
  • Eye tracking (ET) is used in human-computer interaction (HCI) to analyze movement status and find the gaze direction by tracking the pupil's movement on a human face. Nowadays, ET is widely used not only in market analysis, taking advantage of pupil tracking, but also in grasping intention, and there has been much research on it. Although vision-based ET is convenient from an application point of view, it is not robust to environmental changes such as illumination, geometrical rotation, occlusion, and scale changes. This paper proposes a two-step ET: first, the face and eye regions are discriminated by a Haar classifier on the face, and then the pupils in the discriminated eye regions are tracked by CAMShift as well as template matching. We prove the usefulness of the proposed algorithm by extensive real experiments under environmental changes such as illumination, rotation, and scale changes.
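CAMShift builds on the mean-shift iteration sketched below, which repeatedly moves a window to the centroid of a per-pixel target-probability map (e.g., a hue back-projection). This toy version uses a fixed window size, whereas CAMShift additionally adapts the window:

```python
import numpy as np

def mean_shift(prob, window, iters=10):
    # prob: per-pixel target probability; window: (y, x, h, w).
    # Shift the window to the probability-weighted centroid until stable.
    y, x, h, w = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        m = roi.sum()
        if m == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cy = int(round(float((ys * roi).sum() / m)))
        cx = int(round(float((xs * roi).sum() / m)))
        ny = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
        nx = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        if (ny, nx) == (y, x):
            break                      # converged
        y, x = ny, nx
    return y, x

# A 5x5 blob of probability 1; start the window off-target.
prob = np.zeros((20, 20))
prob[10:15, 12:17] = 1.0
win_y, win_x = mean_shift(prob, (5, 7, 7, 7))
```

After a few shifts the 7x7 window ends up covering the whole blob, which is the behavior the tracker relies on from frame to frame.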

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • As displays become large and varied in form, previous gaze-tracking methods no longer apply; placing the gaze-tracking camera above the display can solve the problem of the display's size or height. However, this setup cannot use the infrared corneal-reflection information exploited by previous methods. In this paper, we propose a pupil detection method that is robust to eye occlusion, using the inner eye corner point, the pupil center, and face pose information, together with a simple method for calculating the gaze position. In the proposed method, frames for gaze tracking are captured by switching the camera between wide-angle and narrow-angle modes according to the person's position. If a face is detected in the field of view (FOV) in wide mode, the camera switches to narrow mode after calculating the face position; a frame captured in narrow mode contains the gaze direction information of a person at long distance. The gaze direction calculation consists of a face pose estimation step and a gaze calculation step. Face pose is estimated by mapping the feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted to edge segments of the iris, and if the pupil is occluded, its position is estimated with a deformable template. Then, using the pupil center, the inner eye corner point, and the face pose information, the gaze position on the display is calculated. In experiments, the proposed gaze-tracking algorithm removes the constraints on display form and effectively calculates the gaze direction of a person at long distance using a single camera, as demonstrated at several distances.
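The final gaze-position step can be illustrated with a simplified stand-in: assume the screen point is an affine function of the pupil-center-minus-eye-corner vector and fit it by least squares from a few calibration fixations. The paper additionally uses face pose, which this sketch ignores, and all numbers below are hypothetical:

```python
import numpy as np

def calibrate(vectors, screen_pts):
    # Fit screen = [v, 1] @ M by least squares.
    v = np.hstack([vectors, np.ones((len(vectors), 1))])
    m, *_ = np.linalg.lstsq(v, screen_pts, rcond=None)
    return m

def gaze_point(m, vector):
    # Map a new pupil-corner vector to a screen coordinate.
    return np.hstack([vector, 1.0]) @ m

# Synthetic calibration: four pupil-corner vectors and the screen
# points the user fixated (hypothetical numbers).
vecs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_a = np.array([[300.0, 0.0], [0.0, 200.0]])
true_b = np.array([640.0, 360.0])
screen = vecs @ true_a + true_b
m = calibrate(vecs, screen)
pred = gaze_point(m, np.array([0.5, 0.5]))
```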

Soccer Ball Tracking Robust Against Occlusion (가려짐에 강인한 축구공 추적)

  • Lee, Kwon;Lee, Chulhee
    • Journal of Broadcast Engineering / v.17 no.6 / pp.1040-1047 / 2012
  • In this paper, we propose a ball tracking algorithm robust against occlusion in broadcast soccer video sequences. Soccer ball tracking is a challenging task due to occlusion, fast motion, and fast direction changes. Many methods have been proposed based on the ball trajectory; however, this approach requires heavy computation. We propose a ball tracking algorithm with occlusion-handling capability. The initial ball location is calculated using the circular Hough transform; the ball is then tracked using template matching, and occlusion is detected from the matching score. In occlusion cases, we generate a set of ball candidates, remove the candidates that existed in the previous frame, and determine the newly appearing candidate to be the ball. Experiments with several broadcast soccer video sequences show that the proposed method efficiently handles occlusion.
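The occlusion-handling rule, trusting the template match only while its score stays high and holding the last position otherwise, can be sketched as below. The circular-Hough initialization and candidate re-selection are not reproduced, and the 0.8 threshold is an arbitrary choice for this toy example:

```python
import numpy as np

def ncc(patch, tmpl):
    # Normalized cross-correlation between equal-sized patches.
    p, t = patch - patch.mean(), tmpl - tmpl.mean()
    d = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / d if d > 0 else 0.0

def best_match(frame, tmpl):
    th, tw = tmpl.shape
    best, pos = -2.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            s = ncc(frame[y:y + th, x:x + tw], tmpl)
            if s > best:
                best, pos = s, (y, x)
    return best, pos

def track_with_occlusion(frames, tmpl, init_pos, score_thresh=0.8):
    positions, occluded = [init_pos], [False]
    for frame in frames[1:]:
        score, pos = best_match(frame, tmpl)
        if score >= score_thresh:
            positions.append(pos); occluded.append(False)
        else:                       # occluded: hold the last position
            positions.append(positions[-1]); occluded.append(True)
    return positions, occluded

# Toy sequence: ball visible, fully occluded, visible again.
rng = np.random.default_rng(3)
ball = rng.random((8, 8)) + 3.0
def frame_with(pos=None):
    f = rng.random((20, 20))
    if pos is not None:
        f[pos[0]:pos[0] + 8, pos[1]:pos[1] + 8] = ball
    return f
frames = [frame_with((4, 4)), frame_with(None), frame_with((6, 8))]
positions, occluded = track_with_occlusion(frames, ball.copy(), (4, 4))
```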

Multi-level Cross-attention Siamese Network For Visual Object Tracking

  • Zhang, Jianwei;Wang, Jingchao;Zhang, Huanlong;Miao, Mengen;Cai, Zengyu;Chen, Fuguo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3976-3990 / 2022
  • Currently, cross-attention is widely used in Siamese trackers to replace traditional correlation operations for feature fusion between the template and the search region; it can establish the similarity relationship between the target and the search region better than correlation, enabling robust visual object tracking. But existing trackers using cross-attention focus only on the rich semantic information of high-level features while ignoring the appearance information contained in low-level features, which makes them vulnerable to interference from similar objects. In this paper, we propose a Multi-level Cross-attention Siamese network (MCSiam) to aggregate semantic and appearance information at the same time. Specifically, a multi-level cross-attention module is designed to fuse the multi-layer features extracted from the backbone, integrating different levels of template and search-region features so that rich appearance and semantic information can be used to carry out the tracking task simultaneously. In addition, before cross-attention, a target-aware module is introduced to enhance the target feature and alleviate interference, which lets the multi-level cross-attention module fuse the information of the target and the search region more efficiently. We test MCSiam on four tracking benchmarks, and the results show that the proposed tracker achieves performance comparable to state-of-the-art trackers.
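The core fusion step, cross-attention with queries from the search region and keys/values from the template, reduces to scaled dot-product attention. A minimal NumPy sketch, with learned projections and the multi-level aggregation omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(template, search):
    # template: [M, d] features, search: [N, d] features.  Each search
    # location aggregates template features weighted by similarity.
    d = template.shape[1]
    attn = softmax(search @ template.T / np.sqrt(d))   # [N, M]
    return attn @ template, attn                       # fused: [N, d]

rng = np.random.default_rng(4)
tmpl_feats = rng.standard_normal((4, 8))    # 4 template locations
srch_feats = rng.standard_normal((6, 8))    # 6 search locations
fused, attn = cross_attention(tmpl_feats, srch_feats)
```

Each row of `attn` is a distribution over template locations, so every fused search feature is a convex combination of template features, which is the "similarity relationship" the abstract contrasts with plain correlation.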

Sensibility Classification Algorithm of EEGs using Multi-template Method (다중 템플릿 방법을 이용한 뇌파의 감성 분류 알고리즘)

  • Kim Dong-Jun
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.12 / pp.834-838 / 2004
  • This paper proposes an algorithm for EEG pattern classification using the multi-template method, a kind of speaker adaptation method from speech signal processing. 10-channel EEG signals are collected in various environments. The linear prediction coefficients of the EEGs are extracted as the feature parameters of human sensibility, and the sensibility classification algorithm is built using neural networks. Using EEGs recorded on comfortable or uncomfortable seats, the proposed algorithm showed about 75% classification accuracy in subject-independent tests. In tests using EEG signals recorded under room temperature and humidity variations, the proposed algorithm tracked pleasantness changes well, and the subject-independent tests produced performance similar to the subject-dependent ones.
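The multi-template idea, keeping several reference templates per class (e.g., one per recording condition) and labeling a feature vector by its nearest template, can be sketched as below. The paper extracts linear-prediction coefficients and classifies with neural networks; a nearest-template classifier on hypothetical 2-D features is substituted here for brevity:

```python
import numpy as np

def classify(feature, templates):
    # templates: {label: list of template vectors}.  The label of the
    # single nearest template wins, so each class may hold multiple
    # templates covering different conditions.
    best_label, best_dist = None, np.inf
    for label, tmpls in templates.items():
        for t in tmpls:
            dist = np.linalg.norm(np.asarray(feature) - np.asarray(t))
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

# Hypothetical 2-D feature vectors, two templates per sensibility class.
templates = {
    "comfortable":   [[0.0, 0.0], [1.0, 0.0]],
    "uncomfortable": [[5.0, 5.0], [6.0, 4.0]],
}
label = classify([0.8, 0.2], templates)
```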

Feature based Object Tracking from an Active Camera (능동카메라 환경에서의 특징기반의 이동물체 추적)

  • 오종안;정영기
    • Proceedings of the IEEK Conference / 2002.06d / pp.141-144 / 2002
  • This paper describes a new feature-based tracking system that can track moving objects with a pan-tilt camera. We extract corner features of the scene and track them using filtering. The global motion energy caused by camera movement is eliminated by finding the maximal matching position between consecutive frames using pyramidal template matching. The region of the moving object is segmented by clustering the motion trajectories, and the pan-tilt controller is commanded to follow the object so that it always lies at the center of the camera view. The proposed system has demonstrated good performance on several video sequences.

  • PDF
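The corner-feature stage can be illustrated with a Harris response; the abstract does not name its corner detector, so Harris is an assumption here. The response is high where gradients vary in two directions, near zero on flat regions, and negative on straight edges:

```python
import numpy as np

def box3(a):
    # 3x3 box filter via shifted sums (zero padding at the borders).
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.05):
    # Structure tensor over a 3x3 window, then det(M) - k * trace(M)^2.
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = sxx * syy - sxy * sxy
    tr = sxx + syy
    return det - k * tr * tr

# A white square: corners respond positively, edges negatively.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
r = harris_response(img)
```

Thresholding this response (and suppressing non-maxima) yields the corner features that the tracker then follows from frame to frame.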