• Title/Summary/Keyword: visual tracking


A new visual tracking approach based on salp swarm algorithm for abrupt motion tracking

  • Zhang, Huanlong;Liu, JunFeng;Nie, Zhicheng;Zhang, Jie;Zhang, Jianwei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1142-1166
    • /
    • 2020
  • Salp Swarm Algorithm (SSA) is a new nature-inspired swarm optimization algorithm that mimics the swarming behavior of salps navigating and foraging in the ocean. SSA has been shown to avoid local optima and to improve convergence speed, benefiting from its adaptive nonlinear mechanism and salp chains. In this paper, visual tracking is treated as a process of locating the optimal target position through the interaction between leaders and followers over successive images. A novel SSA-based tracking framework is proposed, and the analysis and adjustment of its parameters are discussed experimentally. In addition, qualitative and quantitative analyses are performed to demonstrate the tracking effect of the proposed approach by comparison with ten classical tracking algorithms. Extensive comparative experimental results show that the algorithm performs well in visual tracking, especially for abrupt motion tracking.
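The leader–follower interaction the abstract describes follows the standard SSA update rules; the sketch below applies them to a generic fitness function. In tracking, the fitness would be a patch-similarity score between a candidate window and the target template; that fitness, and all parameter values here, are illustrative assumptions, not the paper's settings.

```python
import math
import random

def ssa_track(fitness, lb, ub, n_salps=30, n_iter=60, seed=0):
    """Locate the position maximizing `fitness` inside the box [lb, ub]
    using the standard Salp Swarm Algorithm leader/follower updates."""
    rng = random.Random(seed)
    dim = len(lb)
    # Initialize the salp chain uniformly inside the search box.
    salps = [[rng.uniform(lb[d], ub[d]) for d in range(dim)]
             for _ in range(n_salps)]
    food = max(salps, key=fitness)[:]  # best position found so far
    for l in range(1, n_iter + 1):
        # Adaptive nonlinear coefficient: large (exploration) early,
        # near zero (exploitation) late.
        c1 = 2 * math.exp(-((4 * l / n_iter) ** 2))
        for i in range(n_salps):
            if i == 0:  # leader moves around the food source
                for d in range(dim):
                    c2, c3 = rng.random(), rng.random()
                    step = c1 * ((ub[d] - lb[d]) * c2 + lb[d])
                    salps[i][d] = food[d] + step if c3 >= 0.5 else food[d] - step
            else:       # followers move toward the salp ahead of them
                for d in range(dim):
                    salps[i][d] = (salps[i][d] + salps[i - 1][d]) / 2
            # Clamp back into the search box.
            salps[i] = [min(max(x, lb[d]), ub[d])
                        for d, x in enumerate(salps[i])]
        best = max(salps, key=fitness)
        if fitness(best) > fitness(food):
            food = best[:]
    return food
```

For tracking, `lb`/`ub` would bound the search window around the previous target position, and a fresh run would be started for each frame.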

Evaluation of Tracking Performance: Focusing on Improvement of Aiming Ability for Individual Weapon (개인화기 조준 능력 향상 관점에서의 추적 기법의 성능평가)

  • Kim, Sang Hoon;Yun, Il Dong
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.481-490
    • /
    • 2013
  • In this paper, weapon tracking performance is investigated with a view to improving the aiming ability of individual weapons. On the battlefield, a battle can last only a few hours, but sometimes it lasts several days. In such long-lasting combat, a wide variety of factors gradually degrade the visual ability of soldiers. The experiments focused on compensating for this degraded aiming performance by applying visual tracking technology to roof-mounted sights so that the movement of troops is tracked automatically. To select the optimal algorithm among the latest visual tracking techniques, the performance of each algorithm was evaluated on real combat images exhibiting occlusion, camera motion, size changes, low contrast, and illumination changes. The results show that VTD (Visual Tracking Decomposition) [2], IVT (Incremental learning for robust Visual Tracking) [7], and MIL (Multiple Instance Learning) [1] perform best in accuracy, response speed, and total performance, respectively. The evaluation suggests that roof-mounted sights equipped with visual tracking technology are likely to improve the reduced aiming ability of forces.
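The abstract does not spell out its accuracy measure; a common choice for tracker accuracy is bounding-box overlap (IoU) against ground truth, sketched here with hypothetical helper names:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of frames where the tracker's box overlaps the ground
    truth by at least `threshold` IoU."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)
```

Response speed would be measured separately as processed frames per second.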

Measuring Visual Attention Processing of Virtual Environment Using Eye-Fixation Information

  • Kim, Jong Ha;Kim, Ju Yeon
    • Architectural Research
    • /
    • v.22 no.4
    • /
    • pp.155-162
    • /
    • 2020
  • Numerous scholars have explored the modeling, control, and optimization of energy systems in buildings, offering new insights about technology and environments that can advance industry innovation. Eye trackers deliver objective eye-gaze data about visual and attentional processes. Due to its flexibility, accuracy, and efficiency in research, eye tracking provides a control scheme that makes it possible to measure rapid eye movement in three-dimensional space (e.g., virtual reality, augmented reality). Because eye movement is an effective modality for digital interaction with a virtual environment, tracking how users scan a visual field and fixate on various digital objects can help designers optimize building environments and materials. Although several scholars have conducted virtual reality studies in three-dimensional space, they have not agreed on a consistent way to analyze eye tracking data. We conducted eye tracking experiments using objects in three-dimensional space to find an objective way to process quantitative visual data. By applying a 12 × 12 grid framework for eye tracking analysis, we investigated how people gazed at objects in a virtual space while wearing a head-mounted display. The findings provide an empirical base for a standardized protocol for analyzing eye tracking data in the context of virtual environments.
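A minimal version of a 12 × 12 grid analysis maps each fixation point to a grid cell and accumulates counts per cell; the helper names and the screen-pixel convention below are assumptions, not the study's exact protocol:

```python
def grid_cell(gaze_x, gaze_y, width, height, n=12):
    """Map a gaze point in screen pixels to a cell of an n-by-n grid."""
    col = min(int(gaze_x / width * n), n - 1)   # clamp the right/bottom edge
    row = min(int(gaze_y / height * n), n - 1)
    return row, col

def fixation_heatmap(fixations, width, height, n=12):
    """Count eye fixations per cell of an n-by-n grid over the view."""
    counts = [[0] * n for _ in range(n)]
    for x, y in fixations:
        r, c = grid_cell(x, y, width, height, n)
        counts[r][c] += 1
    return counts
```

The resulting count matrix can then be compared across participants or scenes as a common analysis unit.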

Voting based Cue Integration for Visual Servoing

  • Cho, Che-Seung;Chung, Byeong-Mook
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.798-802
    • /
    • 2003
  • The robustness and reliability of vision algorithms is a key issue in robotics research and industrial applications. In this paper, robust real-time visual tracking in complex scenes is considered. A common approach to increasing the robustness of a tracking system is to use different models (CAD models, etc.) known a priori. Fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Because voting is very simple and requires little or no model for fusion, voting-based fusion of cues is applied. The approach is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of a moving object with unknown dynamics. Experimental results show that fusing cues and estimating motion gives the tracking system robust performance.

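The voting scheme described above can be sketched as a per-pixel vote count over binary cue maps; the cue set and threshold here are illustrative, not the paper's exact configuration:

```python
def vote_fuse(cue_maps, min_votes=2):
    """Fuse binary cue maps (e.g., color, edges, motion) by voting:
    a pixel is accepted when at least `min_votes` cues agree."""
    h, w = len(cue_maps[0]), len(cue_maps[0][0])
    fused = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            votes = sum(m[r][c] for m in cue_maps)
            fused[r][c] = int(votes >= min_votes)
    return fused
```

Because each cue only casts a vote, a cue that fails in a given scene is simply outvoted rather than corrupting the fused result.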

Real-time Target Tracking System by Extended Kalman Filter (확장칼만필터를 이용한 실시간 표적추적)

  • 임양남;이성철
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.7
    • /
    • pp.175-181
    • /
    • 1998
  • This paper describes a real-time visual tracking system for a three-dimensional moving target using an EKF (Extended Kalman Filter). We present a new real-time visual tracking method using an EKF algorithm together with an image prediction algorithm, and demonstrate the performance of these tracking algorithms through real experiments. The experimental results show the effectiveness of the EKF and image prediction algorithms for real-time tracking: the estimated state of the filter predicts the position of the moving object, which minimizes the image processing area and reduces the effect of image quantization noise.

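As a rough illustration of the EKF machinery the paper relies on, the sketch below runs predict/update cycles for a constant-velocity state observed through a nonlinear perspective-projection measurement; the state layout, focal length, and noise values are assumptions, not the paper's model:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

FOCAL = 500.0  # assumed focal length in pixels

def h(x):
    """Nonlinear measurement: perspective projection of lateral position X
    at depth Z onto the image, u = f * X / Z."""
    X, Z = x[0][0], x[1][0]
    return FOCAL * X / Z

def jacobian_h(x):
    """Jacobian of h at the current state estimate (1x4 row)."""
    X, Z = x[0][0], x[1][0]
    return [[FOCAL / Z, -FOCAL * X / (Z * Z), 0.0, 0.0]]

def ekf_step(x, P, u_meas, dt=1.0, q=1e-4, r=1.0):
    """One EKF predict/update cycle for the state [X, Z, vX, vZ]^T."""
    # Predict with a constant-velocity model.
    F = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]]
    x = mat_mul(F, x)
    Q = [[q * float(i == j) for j in range(4)] for i in range(4)]
    P = mat_add(mat_mul(mat_mul(F, P), transpose(F)), Q)
    # Update with the scalar image measurement.
    H = jacobian_h(x)
    S = mat_mul(mat_mul(H, P), transpose(H))[0][0] + r     # innovation variance
    K = [[row[0] / S] for row in mat_mul(P, transpose(H))]  # 4x1 Kalman gain
    innov = u_meas - h(x)
    x = mat_add(x, [[k[0] * innov] for k in K])
    KH = mat_mul(K, H)
    P = mat_mul(mat_add(identity(4), [[-v for v in row] for row in KH]), P)
    return x, P
```

The predicted measurement `h(x)` is what lets the tracker shrink its image-processing area to a small window around the expected target position.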

Robust 3D visual tracking for moving object using pan/tilt stereo cameras (Pan/Tilt스테레오 카메라를 이용한 이동 물체의 강건한 시각추적)

  • Cho, Che-Seung;Chung, Byeong-Mook;Choi, In-Su;Nho, Sang-Hyun;Lim, Yoon-Kyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.9 s.174
    • /
    • pp.77-84
    • /
    • 2005
  • In most vision applications, we are frequently confronted with determining the position of an object continuously. Generally, target tracking requires intertwined processes, composed of a tracking process and a control process. Each of these processes can be studied independently, but in an actual implementation we must consider the interaction between them to achieve robust performance. In this paper, robust real-time visual tracking against a complex background is considered. A common approach to increasing the robustness of a tracking system is to use known geometric models (CAD models, etc.) or to attach a marker. For cases where an object has an arbitrary shape or it is difficult to attach a marker to it, we present a method to track the target easily by specifying the color and shape of a part of the object in advance. Robust detection is achieved by integrating voting-based visual cues. A Kalman filter is used to estimate the motion of the moving object in 3D space, and the algorithm is tested on a pan/tilt robot system. Experimental results show that fusing cues and estimating motion gives the tracking system robust performance.
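The idea of presetting a color for part of the object can be sketched as a simple per-channel range test that produces a binary cue map, plus a centroid for the pan/tilt controller to aim at; the thresholds and helper names are illustrative, not the paper's exact values:

```python
def color_cue(image, lower, upper):
    """Binary cue map: 1 where a pixel's channels all fall inside the
    color range preset for a part of the object."""
    return [[int(all(lo <= ch <= hi for ch, lo, hi in zip(px, lower, upper)))
             for px in row] for row in image]

def cue_centroid(mask):
    """Center of mass of a binary cue map; None if the cue fired nowhere."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

The centroid would feed the Kalman filter as a measurement, and the cue map would be one of the inputs to the voting-based integration.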

Tip Position Control of a Robot Manipulator using Visual Markers (영상표식 기반의 로봇 매니퓰레이터 끝점 위치 제어)

  • Lim, Sei-Jun;Lim, Hyun;Lee, Young-Sam
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.9
    • /
    • pp.883-890
    • /
    • 2010
  • This paper proposes a tip position control system that uses a visual marker to determine the tip position of a robot manipulator. The main idea of this paper is to introduce visual markers for the tracking control of a robot manipulator. Existing research utilizes stationary markers to obtain pattern information from them. Unlike existing research, we introduce visual markers to obtain their coordinates in addition to their pattern information. The markers need not be stationary, and the extracted marker coordinates are used as a reference trajectory for the tracking control of the robot manipulator. To build the proposed control scheme, we first obtain intrinsic parameters through camera calibration and evaluate their validity. Secondly, we present a procedure to obtain the relative coordinate of a visual marker with respect to a camera. Thirdly, we derive the kinematic equations of the SCORBOT-ER 4pc manipulator, which we use for manipulator control. We also provide a flow diagram of the entire visual marker tracking system. The feasibility of the proposed scheme is demonstrated through real experiments.
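Obtaining the relative coordinate of a marker with respect to a camera, given calibrated intrinsics and a known depth, reduces to pinhole back-projection; the sketch below is a generic illustration under that assumption, not the paper's exact procedure:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a marker's pixel position (u, v), at a known depth,
    into camera-frame coordinates using pinhole intrinsics."""
    X = (u - cx) / fx * depth
    Y = (v - cy) / fy * depth
    return X, Y, depth

def camera_to_pixel(X, Y, Z, fx, fy, cx, cy):
    """Forward pinhole projection, useful as a consistency check."""
    return fx * X / Z + cx, fy * Y / Z + cy
```

In practice the intrinsics `fx, fy, cx, cy` come from the calibration step, and the depth comes from the known marker size or a second view.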

Weighted Parameter Analysis of L1 Minimization for Occlusion Problem in Visual Tracking (영상 추적의 Occlusion 문제 해결을 위한 L1 Minimization의 Weighted Parameter 분석)

  • Wibowo, Suryo Adhi;Jang, Eunseok;Lee, Hansoo;Kim, Sungshin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.101-103
    • /
    • 2016
  • Recently, the target object has come to be represented as a sparse coefficient vector in visual tracking. For this reason, the compressibility in the transform domain is exploited using L1 minimization. Further, L1 minimization has been proposed to handle the occlusion problem in visual tracking, since tracking failures are mostly caused by occlusion. There is a weighted parameter in L1 minimization that influences the result of the minimization. In this paper, this parameter is analyzed for the occlusion problem in visual tracking. Several coefficients derived from the median value of the target object, the mean value of the target object, and the standard deviation of the target object, as well as the fixed values 0, 0.1, and 0.01, are used as the weighted parameter of L1 minimization. Based on the experimental results, the value 0.1 is suggested as the weighted parameter of L1 minimization, since it achieved the best success rate and precision. Both of these performance measures are based on one-pass evaluation (OPE).

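The role of the weighted parameter can be seen in the soft-thresholding (proximal) operator that L1-regularized least squares reduces to in iterative solvers: a larger weight shrinks more coefficients exactly to zero, making the representation sparser. The example values below are illustrative, not the paper's data:

```python
def soft_threshold(v, lam):
    """Proximal operator of lam * |x|: shrink each coefficient toward
    zero by lam, setting small coefficients exactly to zero."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in v]
```

With a weight of 0.1, small occlusion-induced coefficients are suppressed while dominant target coefficients survive; with a weight of 0, no sparsity is enforced at all.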

DRIVING CONTROL OF A VISUAL SYSTEM

  • Sugisaka, Masanori;Hara, Masayoshi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1995.10a
    • /
    • pp.131-134
    • /
    • 1995
  • We developed a visual system that is able to track moving objects within a certain range of errors. The visual system is driven by two DC servo motors that are controlled by a computer based on the visual data obtained from a CCD video camera. The software to track the moving objects was developed based on PWM control of the DC motors. The problems of how to implement a fuzzy logic control method and a neural network in this system are also considered in order to check the tracking control performance. The fuzzy logic algorithm is a powerful control technique for nonlinear dynamical systems, and a neural network could also be implemented in this system. In this paper, we present the configuration of the tracking system developed in our laboratory and the control methods of the visual system, and show experimental results.

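The paper's fuzzy controller is not specified in the abstract; as a baseline illustration of computer-controlled PWM drive, a proportional law mapping the target's pixel error to a saturated duty cycle might look like this (the gain and limits are assumptions):

```python
def pwm_from_error(err_px, k_p=0.4, max_duty=100.0):
    """Map a pixel tracking error to a signed PWM duty cycle (percent)
    with a simple proportional law, saturated at the motor's limit."""
    duty = k_p * err_px
    return max(-max_duty, min(max_duty, duty))
```

A fuzzy controller would replace the single gain `k_p` with membership functions over the error (and error rate), which can smooth the response near zero error where DC motors behave nonlinearly.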

Reinforced Feature of Dynamic Search Area for the Discriminative Model Prediction Tracker based on Multi-domain Dataset (다중 도메인 데이터 기반 구별적 모델 예측 트레커를 위한 동적 탐색 영역 특징 강화 기법)

  • Lee, Jun Ha;Won, Hong-In;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.6
    • /
    • pp.323-330
    • /
    • 2021
  • Visual object tracking is a challenging area of study in the field of computer vision due to many difficult problems, including fast variation of the target shape, occlusion, and arbitrary ground-truth object designation. In this paper, we focus on reinforcing the features of the dynamic search area to obtain better performance than conventional discriminative model prediction trackers under conditions where accuracy deteriorates due to low feature discrimination. We propose a reinforced input feature method that acts like a spotlight effect on the dynamic search area of the target tracking. This method can be used to improve the performance of deep-learning-based discriminative model prediction trackers, as well as various other trackers that infer the center of the target in visual object tracking. The proposed method shows improved tracking performance over the baseline trackers, achieving a relative gain of 38% (from 0.433 to 0.601 F-score) in the visual object tracking evaluation.
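The spotlight effect on the search-area features can be sketched as an element-wise Gaussian weighting centered on the expected target position, emphasizing responses near the target and attenuating the periphery; the mask shape and parameters here are my illustration, not the paper's exact method:

```python
import math

def spotlight_mask(h, w, cy, cx, sigma):
    """Gaussian 'spotlight' weights centered on (cy, cx)."""
    return [[math.exp(-((r - cy) ** 2 + (c - cx) ** 2) / (2 * sigma ** 2))
             for c in range(w)] for r in range(h)]

def reinforce(feature_map, cy, cx, sigma=2.0):
    """Scale a search-area feature map element-wise so that features
    near the expected target location dominate the response."""
    h, w = len(feature_map), len(feature_map[0])
    mask = spotlight_mask(h, w, cy, cx, sigma)
    return [[f * m for f, m in zip(fr, mr)]
            for fr, mr in zip(feature_map, mask)]
```

In a real tracker the center (cy, cx) would come from the previous frame's estimate, and the reinforced map would feed the model prediction head in place of the raw features.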