• Title/Summary/Keyword: vision-based tracking

Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles

  • Quan, Nguyen Van; Eum, Hyuk-Min; Lee, Jeisung; Hyun, Chang-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems, v.13 no.2, pp.140-146, 2013
  • In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates path tracking using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and to detect markers in front of the vehicle. The other camera is attached so that its view is perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of the marker are obtained from these two cameras. We propose five movement patterns for AGVs to guarantee smooth path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives AGVs greater flexibility, including easy layout changes, autonomy, and low cost. The algorithm was validated in an experiment with a two-wheeled mobile robot. A brief illustrative sketch of the marker geometry and pattern selection follows this entry.
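
A minimal sketch of the marker geometry and the five movement patterns named in the abstract. The thresholds, function names, and selection rules are illustrative assumptions, not the paper's implementation; only the five pattern names come from the abstract.

```python
import math

# Hypothetical thresholds (not from the paper); tune for the actual AGV.
TURN_ANGLE_THRESHOLD = math.radians(30)   # bearing at which a turn is triggered
STOP_DISTANCE = 0.05                      # [m] distance to the final marker

def marker_offset(marker_xy, axle_center_xy):
    """Distance and bearing from the axle (wheel) center to the marker center,
    both expressed in the floor-camera frame (metres)."""
    dx = marker_xy[0] - axle_center_xy[0]
    dy = marker_xy[1] - axle_center_xy[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def select_pattern(distance, angle, at_final_marker, started):
    """Pick one of the five movement patterns listed in the abstract."""
    if not started:
        return "starting"
    if at_final_marker and distance < STOP_DISTANCE:
        return "stopping"
    if abs(angle) > TURN_ANGLE_THRESHOLD:
        return "left_turning" if angle > 0 else "right_turning"
    if abs(angle) > TURN_ANGLE_THRESHOLD / 2:
        return "pre_turning"          # slow down before the actual turn
    return "moving_straight"
```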

A Study on Adaptive Control to Fill Weld Groove by Using Multi-Torches in SAW (SAW 용접시 다중 토치를 이용한 용접부 적응제어에 관한 연구)

  • 문형순; 정문영; 배강열
    • Journal of Welding and Joining, v.17 no.6, pp.90-99, 1999
  • A significant portion of the total manufacturing time in pipe fabrication is spent on welding, following the primary machining and fit-up processes. To achieve a reliable weld bead appearance, automatic seam tracking and adaptive control to fill the groove are urgently needed. Vision sensors have been successfully applied to seam tracking in welding processes. However, adaptive filling control of a multi-torch system for the appropriate weld area has not yet been implemented for SAW (submerged arc welding). The term adaptive control is often used to describe recent advances in welding process control, but strictly it applies only to a system able to cope with dynamic changes in system performance. In welding applications, the term may not carry its conventional control-theory meaning and is instead used in a more descriptive sense to express the need for the process to adapt to changing welding conditions. This paper proposes several methodologies for obtaining a good bead appearance with a multi-torch welding system and a vision system in SAW. The adaptive filling control methodologies adjust the welding current/voltage, the arc voltage/welding current/wire feed speed combination, and the welding speed using the vision sensor. The algorithm based on the welding current/voltage combination and welding speed produced a sound weld bead appearance compared with the voltage/current combination. A simple volume-balance sketch of speed-based filling control follows this entry.
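
Speed-based filling control can be pictured with a simple volume balance: the metal deposited per unit of travel must match the groove cross-section measured by the vision sensor. The function below is a hedged sketch under that assumption; it ignores dilution and spatter losses and is not the paper's control law.

```python
import math

def travel_speed_for_fill(groove_area_mm2, wire_diameters_mm, wire_feed_speeds_mm_s,
                          fill_fraction=1.0):
    """Travel speed [mm/s] that deposits enough metal to fill the measured groove
    cross-section, from a simple volume balance:
        deposited area * travel speed = sum(wire cross-section * wire feed speed).
    One (diameter, feed speed) pair per torch; losses are ignored in this sketch."""
    deposition_rate = sum(
        math.pi * (d / 2.0) ** 2 * v
        for d, v in zip(wire_diameters_mm, wire_feed_speeds_mm_s)
    )  # [mm^3/s] total metal delivered by all torches
    target_area = groove_area_mm2 * fill_fraction  # [mm^2] area to fill in this pass
    return deposition_rate / target_area

# Example: two torches, 4 mm wire, feeding 20 mm/s each, filling an 80 mm^2 groove.
print(travel_speed_for_fill(80.0, [4.0, 4.0], [20.0, 20.0]))
```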

Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors (속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종)

  • Cho, Namsub; Kwon, Ji-Wook; Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers, v.62 no.6, pp.852-862, 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints using a backstepping-like feedback linearization. For target tracking, a vision system is applied to the mobile robot to obtain the relative posture between the robot and the target. The robot uses no velocity sensors, so the velocities of both the mobile robot and the target are assumed unknown; the proposed method uses only the maximum velocity information of the mobile robot and the target. First, pseudo commands for the forward linear velocity and the heading direction angle are designed from the kinematics using the obtained image information. Then, the actual control inputs are designed to make the actual forward linear velocity and heading direction angle follow these pseudo commands. Simulations and experiments with the mobile robot confirm that the proposed method tracks the target even though velocity sensors are not used at all. A kinematic sketch of such pseudo commands follows this entry.
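
A generic kinematic tracking law gives a feel for what pseudo commands of this kind look like. The gains, the saturation, and the specific form below are illustrative assumptions, not the paper's backstepping-like design.

```python
import math

def pseudo_commands(rho, alpha, v_max, k_v=0.5, k_w=1.5):
    """Kinematic pseudo-commands for a unicycle-type robot tracking a target.
    rho   : range to the target from the image-based relative posture [m]
    alpha : bearing of the target in the robot frame [rad]
    v_max : known maximum velocity (the only velocity information used here)
    Returns (v_ref, w_ref); the gains k_v and k_w are illustrative."""
    v_ref = min(k_v * rho * math.cos(alpha), v_max)   # slow down when off-heading
    w_ref = k_w * alpha                               # steer toward the target bearing
    return v_ref, w_ref

# Example: target 2 m ahead, 20 degrees to the left, robot limited to 0.5 m/s.
print(pseudo_commands(2.0, math.radians(20), 0.5))
```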

Vision-based Target Tracking for UAV and Relative Depth Estimation using Optical Flow (무인 항공기의 영상기반 목표물 추적과 광류를 이용한 상대깊이 추정)

  • Jo, Seon-Yeong; Kim, Jong-Hun; Kim, Jung-Ho; Lee, Dae-Woo; Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.37 no.3, pp.267-274, 2009
  • Recently, UAVs (Unmanned Aerial Vehicles) have attracted much attention as unmanned systems for various missions, many of which rely on a vision system. In particular, missions such as surveillance and pursuit are carried out through vision data transmitted from the UAV. Small UAVs often use monocular vision to limit weight and cost. Research on performing missions with monocular vision continues, but because the ground and the target differ in distance from the UAV, 3D distance measurement remains inaccurate. In this study, the Mean-Shift algorithm, optical flow, and a subspace method are employed to estimate relative depth. The Mean-Shift algorithm is used for target tracking and for determining the region of interest (ROI). Optical flow captures image motion from pixel intensities. The subspace method then computes the translation and rotation of the image and estimates the relative depth. Finally, results are presented using images obtained from UAV experiments. A short OpenCV-style sketch of the tracking and image-motion stages follows this entry.
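
The tracking and image-motion stages map naturally onto standard OpenCV calls. The sketch below is a hedged illustration: the video path, the initial ROI, and the histogram/feature parameters are placeholders, and the subspace-method depth computation itself is only indicated by a comment.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("uav_sequence.mp4")          # placeholder video path
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 60                       # initial target ROI (hypothetical)
roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Lucas-Kanade optical flow inside the ROI: the image motion that a
    # subspace method would use to recover translation/rotation and depth.
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y+h, x:x+w], 50, 0.01, 5)
    if pts is not None:
        pts = pts + np.float32([[x, y]])            # ROI -> full-image coordinates
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        flow = (nxt - pts)[status.flatten() == 1]   # flow vectors for depth estimation

    # Mean-shift keeps the ROI locked onto the target for the next frame.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(back_proj, (x, y, w, h), criteria)
    prev_gray = gray
```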

Multiple Person Tracking based on Spatial-temporal Information by Global Graph Clustering

  • Su, Yu-ting; Zhu, Xiao-rong; Nie, Wei-Zhi
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.6, pp.2217-2229, 2015
  • Owing to variations in illumination, irregular changes in human shape, and partial occlusions, multiple person tracking is a challenging task in computer vision. In this paper, we propose a graph clustering method based on the spatio-temporal information of moving objects for multiple person tracking. First, a part-based model is used to localize individual foreground regions in each frame. Then, spatio-temporal constraints are heuristically leveraged to generate a set of reliable tracklets. Finally, the graph shift method is applied to the tracklet association problem, producing the complete trajectory for each object. Extensive comparison experiments demonstrate the superiority of the proposed method. A sketch of the tracklet-generation step follows this entry.
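
The tracklet-generation step, linking per-frame detections under spatio-temporal constraints, can be sketched with a greedy overlap-and-frame-gap rule. The IoU threshold, the allowed gap, and the greedy linking are illustrative assumptions; the graph shift association stage is not reproduced here.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def build_tracklets(detections_per_frame, iou_min=0.5, max_gap=1):
    """Greedy spatio-temporal linking of per-frame detections into tracklets.
    detections_per_frame: list over frames of lists of boxes (x1, y1, x2, y2).
    A detection extends a tracklet when it overlaps the tracklet's last box
    and the frame gap is small; thresholds are illustrative."""
    tracklets = []  # each tracklet is a list of (frame_index, box)
    for t, boxes in enumerate(detections_per_frame):
        for box in boxes:
            best, best_iou = None, iou_min
            for trk in tracklets:
                last_t, last_box = trk[-1]
                if 0 < t - last_t <= max_gap and iou(last_box, box) >= best_iou:
                    best, best_iou = trk, iou(last_box, box)
            if best is not None:
                best.append((t, box))      # extend an existing tracklet
            else:
                tracklets.append([(t, box)])  # start a new tracklet
    return tracklets
```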

Simple Online Multiple Human Tracking based on LK Feature Tracker and Detection for Embedded Surveillance

  • Vu, Quang Dao; Nguyen, Thanh Binh; Chung, Sun-Tae
    • Journal of Korea Multimedia Society, v.20 no.6, pp.893-910, 2017
  • In this paper, we propose a simple online multiple object (human) tracking method, LKDeep (Lucas-Kanade feature and Detection based Simple Online Multiple Object Tracker), which runs fast enough online on a CPU core alone with acceptable tracking performance for embedded surveillance. LKDeep is a pragmatic hybrid approach that tracks multiple objects (humans) mainly with LK features, compensated by detection at periodic intervals or when necessary. Compared with other state-of-the-art multiple object tracking methods based on the 'Tracking-By-Detection (TBD)' approach, LKDeep is faster because it does not detect objects in every frame and uses a simple association rule, yet it shows good tracking performance. Experiments against other online state-of-the-art multiple object tracking (MOT) methods that use the public DPM detector, as reported in the MOT challenge [1], show that LKDeep runs faster while maintaining good tracking performance for surveillance. A single object tracking (SOT) visual tracker benchmark experiment [2] further shows that LKDeep with an optimized deep learning detector can run fast online with tracking performance comparable to other state-of-the-art SOT methods. A sketch of the LK-plus-periodic-detection loop follows this entry.
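
A hedged sketch of the LK-plus-periodic-detection idea, using OpenCV's HOG person detector as a stand-in for the DPM/deep detectors mentioned above. The video path, the re-detection period, and the (omitted) re-association step are placeholders, not LKDeep's actual components.

```python
import cv2
import numpy as np

REDETECT_EVERY = 10   # run the expensive detector only every N frames (illustrative)

def detect_people(frame):
    """Placeholder detector (HOG); the paper pairs LK tracking with DPM or a deep detector."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(frame)
    return [tuple(int(v) for v in b) for b in boxes]   # (x, y, w, h) per person

def seed_points(gray, box):
    """Seed LK feature points inside a detection box (x, y, w, h)."""
    x, y, w, h = box
    pts = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], 30, 0.01, 3)
    return None if pts is None else pts + np.float32([[x, y]])

cap = cv2.VideoCapture("surveillance.mp4")             # placeholder video path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
tracks = {}
for tid, box in enumerate(detect_people(frame)):
    pts = seed_points(prev_gray, box)
    if pts is not None:
        tracks[tid] = pts

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for tid in list(tracks):                           # propagate each track with LK flow
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, tracks[tid], None)
        good = nxt[status.flatten() == 1]
        if len(good) == 0:
            del tracks[tid]                            # track lost; recovered at next detection
        else:
            tracks[tid] = good.reshape(-1, 1, 2)
    if frame_idx % REDETECT_EVERY == 0:
        detect_people(frame)                           # re-detect and re-associate here (omitted)
    prev_gray = gray
```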

Specified Object Tracking Problem in an Environment of Multiple Moving Objects

  • Park, Seung-Min; Park, Jun-Heong; Kim, Hyung-Bok; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.11 no.2, pp.118-123, 2011
  • Video-based object tracking normally deals with non-stationary image streams that change over time. Robust, real-time moving object tracking is considered a difficult problem in computer vision, and multiple object tracking has many practical applications in scene analysis for automated surveillance. In this paper, we introduce a particle filter for tracking a specified object in an environment of multiple moving objects. A differential-image, region-based tracking method is used to detect the multiple moving objects, and a background image update method ensures accurate detection in an unconstrained environment. In addition, tracking a particular object through a video sequence cannot rely on image processing techniques alone, so a probabilistic framework is used. The proposed particle filter proves robust against nonlinear and non-Gaussian problems; it provides a robust object tracking framework under ambiguous conditions and greatly improves estimation accuracy for complicated tracking problems. A minimal particle-filter step is sketched after this entry.
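
A minimal particle-filter cycle in the spirit described above, assuming a random-walk motion model and a differential (frame-difference) image as the likelihood map. The noise level and likelihood choice are illustrative, not the paper's exact design.

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood_map, motion_std=5.0):
    """One predict-weight-resample cycle of a basic particle filter.
    particles      : (N, 2) array of (x, y) position hypotheses
    weights        : (N,) normalized weights
    likelihood_map : 2D array, e.g. a differential image that is high where
                     motion/appearance matches the target."""
    n, (h, w) = len(particles), likelihood_map.shape

    # Predict: random-walk motion model, clipped to the image bounds.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)

    # Update: weight each particle by the likelihood at its position.
    xs = particles[:, 0].astype(int)
    ys = particles[:, 1].astype(int)
    weights = weights * (likelihood_map[ys, xs] + 1e-9)
    weights = weights / weights.sum()

    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)

    estimate = particles.mean(axis=0)   # tracked position estimate
    return particles, weights, estimate
```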

Experimental Studies of Vision Based Position Tracking Control of Mobile Robot Using Neural Network (신경회로망을 이용한 비전 기반 이동 로봇의 위치제어에 대한 실험적 연구)

  • Jung, Seul; Jang, Pyung-Soo; Won, Moon-Chul; Hong, Sub
    • Journal of Institute of Control, Robotics and Systems, v.9 no.7, pp.515-526, 2003
  • Tutorial content on the kinematics and dynamics of a wheeled mobile robot is presented. Based on the dynamic model, simulation studies of position tracking of a mobile robot are performed. Control structures for several position control algorithms using visual feedback are proposed and their performances compared. To compensate for uncertainties from unknown dynamics and neglected effects such as slip, neural network based position control schemes are proposed. Experiments show that the vision-based neural network control scheme performed best among the proposed schemes. A sketch of neural-network compensation on top of a nominal controller follows this entry.
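
The idea of augmenting a nominal controller with a neural-network compensator can be sketched as below. The network size, gains, and delta-rule update are illustrative assumptions and not the paper's exact scheme.

```python
import numpy as np

class NNCompensator:
    """Tiny one-hidden-layer network that adds a correction to the nominal velocity
    commands; adapted online from the tracking error to absorb unmodeled effects
    such as slip (a generic sketch, not the paper's scheme)."""
    def __init__(self, n_in=3, n_hidden=8, n_out=2, lr=1e-3):
        rng = np.random.default_rng(0)
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.1, (n_out, n_hidden))
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.h = np.tanh(self.W1 @ x)
        return self.W2 @ self.h

    def adapt(self, error):
        """Delta-rule style update driven by the position-tracking error."""
        self.W2 += self.lr * np.outer(error, self.h)
        dh = (self.W2.T @ error) * (1.0 - self.h ** 2)
        self.W1 += self.lr * np.outer(dh, self.x)

def control(pose_error, nn):
    """Nominal kinematic control plus the NN correction.
    pose_error = (e_x, e_y, e_theta) in the robot frame; gains are illustrative."""
    e_x, e_y, e_th = pose_error
    v = 0.8 * e_x                       # nominal forward velocity command
    w = 1.5 * e_th + 0.5 * e_y          # nominal turning-rate command
    dv, dw = nn.forward(np.array(pose_error))
    return v + dv, w + dw
```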