• Title/Summary/Keyword: vision-based tracking

Search results: 405

A object tracking based robot manipulator built on fast stereo vision

  • Huang, Hua;Won, Sangchul
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICROS ICCAS 2002 / pp.99.5-99 / 2002
  • 3-D object tracking framework
  • A fast stereo vision system is used to obtain range images
  • The CONDENSATION algorithm is used to track the object
  • A superquadric model is used to recognize the object
  • The target objects resemble the coils handled in steel works
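
A minimal sketch of the CONDENSATION (particle filter) step described in the entry above, with a placeholder likelihood standing in for the range-image and superquadric matching:

```python
import numpy as np

def condensation_step(particles, weights, likelihood, motion_std=2.0):
    """One factored-sampling cycle over 2-D particle states."""
    n = len(particles)
    # Resample particles in proportion to their weights (factored sampling).
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Predict: diffuse the particles with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Measure: score each particle against the observation.
    weights = np.array([likelihood(p) for p in particles])
    return particles, weights / weights.sum()

# Toy likelihood peaked at a known target position (a stand-in for range-image matching).
target = np.array([60.0, 40.0])
likelihood = lambda p: np.exp(-np.sum((p - target) ** 2) / 50.0)
particles = np.random.uniform(0.0, 100.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    particles, weights = condensation_step(particles, weights, likelihood)
print("estimated position:", np.average(particles, axis=0, weights=weights))
```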


다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용 (Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision)

  • 이용주;박만우
    • Journal of KIBIM (Korea Institute of Building Information Modeling) / Vol. 7, No. 1 / pp.45-53 / 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio-frequency technologies that require attaching tags to every target entity, and implementing them incurs the time and cost of attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking was presented in previous research, in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, the proposed system is still in its infancy and has yet to be implemented in practical applications, for two reasons. First, it does not involve entity matching across the two views, and thus cannot track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity-matching procedure is added to acquire matching pairs of the same entities across the two views, which is necessary for tracking multiple entities. The proposed method also simplifies the calibration process by avoiding the use of a checkerboard, making it better suited to realistic deployment on construction sites.
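
A minimal sketch of the cross-view matching and triangulation that multi-entity stereo tracking relies on; the projection matrices, detections, and the nearest-row matching rule are all illustrative stand-ins for the paper's entity-matching and calibration procedures:

```python
import numpy as np
import cv2

def match_and_triangulate(P1, P2, pts_left, pts_right):
    """Pair detections across two views and recover their 3-D positions."""
    pairs = []
    for pl in pts_left:
        # Hypothetical matching rule: choose the right-view detection on the closest row.
        pr = min(pts_right, key=lambda q: abs(q[1] - pl[1]))
        pairs.append((pl, pr))
    xl = np.float64([p for p, _ in pairs]).T      # 2 x N left-image points
    xr = np.float64([q for _, q in pairs]).T      # 2 x N right-image points
    X = cv2.triangulatePoints(P1, P2, xl, xr)     # 4 x N homogeneous coordinates
    return (X[:3] / X[3]).T                       # N x 3 metric points

# Toy projection matrices: two identical cameras with a 0.5 m horizontal baseline.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
workers = match_and_triangulate(P1, P2, [(300.0, 200.0), (400.0, 260.0)],
                                [(280.0, 201.0), (381.0, 259.0)])
print(workers)
```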

능동 스테레오 비젼을 시스템을 이용한 자율이동로봇의 목표물 추적에 관한 연구 (Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System)

  • 이희명;이수희;이병룡;양순용;안경관
    • Korean Society for Precision Engineering (KSPE): Conference Proceedings / Proceedings of the KSPE 2003 Spring Conference / pp.915-919 / 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots, which uses the geometrical information derived from the active stereo-vision system mounted on the robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth of the target object can be calculated. The proposed fuzzy motion controller computes the tracking velocity and angular position of the mobile robot so that it keeps following the object at a constant distance and orientation.
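
A sketch of the geometry behind this kind of target following: depth from stereo disparity, plus a simple proportional control law used here as a stand-in for the paper's fuzzy motion controller (focal length, baseline, and gains are illustrative values):

```python
def target_depth(focal_px, baseline_m, x_left, x_right):
    """Depth of the target centre from its horizontal disparity in a rectified stereo pair."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity

def follow_command(depth_m, bearing_rad, desired_depth_m=1.0, k_v=0.8, k_w=1.5):
    """Forward velocity and turn rate that hold a constant distance and orientation."""
    v = k_v * (depth_m - desired_depth_m)   # move forward if the target is too far away
    w = k_w * bearing_rad                   # rotate toward the target's bearing
    return v, w

depth = target_depth(focal_px=700.0, baseline_m=0.12, x_left=340.0, x_right=290.0)
print("depth [m]:", depth, "command (v, w):", follow_command(depth, bearing_rad=0.05))
```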


평균 이동 알고리즘을 이용한 영상기반 실내 물체 추적 (Vision-Based Indoor Object Tracking Using Mean-Shift Algorithm)

  • 김종훈;조겸래;이대우
    • Journal of Institute of Control, Robotics and Systems / Vol. 12, No. 8 / pp.746-751 / 2006
  • In this paper, we present a tracking algorithm for indoor moving objects, using a passive method based on a camera and image processing. Dynamics-based estimators such as the Kalman filter, the extended Kalman filter, and the particle filter have been studied for tracking moving objects. These algorithms perform well in real-time tracking, but they have a limitation: when the shape of the object changes or the object lies on a complex background, they fail to track it. Compensating for this requires complicated image processing, so the overall tracker becomes a large combination of a dynamics-based estimator and an image processing algorithm. To eliminate this inefficiency, an image-based estimator, the mean-shift algorithm, is suggested. The algorithm is built on a color histogram; that is, it determines the coordinates of the object's center from the probability density obtained from the histogram in the image. Even when the object's shape changes, the method is not disturbed by a complex background and can keep tracking the object. This paper shows results on a real camera system and determines 3D coordinates using the mean-shift output and the relationship between the real-world frame and the camera frame.
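
A minimal OpenCV sketch of colour-histogram mean-shift tracking in the spirit of the entry above; the video path and initial object window are placeholders:

```python
import cv2

cap = cv2.VideoCapture("video.mp4")               # placeholder input
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80                     # assumed initial object window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
# The hue histogram of the initial region serves as the object's colour model.
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the histogram to get a probability image, then shift the window
    # toward the local density maximum, i.e. the object's new centre.
    prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(prob, (x, y, w, h), term)
    print("object centre:", (x + w // 2, y + h // 2))
cap.release()
```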

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / ICROS ICCAS 2004 / pp.1458-1463 / 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of these applications. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the camera's image formation process. Many vision systems therefore use a stereo camera, especially for 3D tracking. 3D feature-based tracking (3DFBT), one of the 3D tracking approaches that use stereo vision, has many advantages compared with other tracking methods. Assuming that the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of the camera calibration. However, existing calibration methods are based on an accurate camera model, so modelling error and sensitivity to lens distortion are embedded in them. Therefore, this paper proposes a 3D feature-based tracking method that uses an SVM to solve the reconstruction problem.
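
A sketch of the idea of learning the stereo reconstruction mapping directly, here with scikit-learn's SVR as a stand-in and synthetic training correspondences: image coordinates from both views go in, 3-D coordinates come out, without an explicit camera model:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
# Synthetic 3-D points and their projections in two pinhole cameras (0.2 m baseline).
points_3d = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 4.0], size=(500, 3))
f = 800.0
left = f * points_3d[:, :2] / points_3d[:, 2:3]
right = f * (points_3d[:, :2] - [0.2, 0.0]) / points_3d[:, 2:3]
features = np.hstack([left, right])               # (u_l, v_l, u_r, v_r) per point

# Train one SVR per output coordinate on the correspondences.
model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0))
model.fit(features, points_3d)

print(model.predict(features[:3]))                # reconstructed 3-D positions
print(points_3d[:3])                              # ground truth for comparison
```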


이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구 (A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets)

  • 홍성문;장완식;김재명
    • Journal of the Korean Society of Manufacturing Technology Engineers / Vol. 23, No. 5 / pp.485-497 / 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and the extended Kalman filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters, which account for the uncertainty of the camera's orientation and focal length as well as the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate the six camera parameters. Based on the six parameters estimated using three cameras, the robot's joint angles are computed with respect to the moving targets, again using both the N-R and EKF methods. The two robot vision control schemes are tested experimentally by tracking a moving target, and the experimental results are used to compare their strengths and weaknesses.
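
A generic sketch of the Newton-Raphson (Gauss-Newton) iteration underlying the parameter estimation above, using a numerical Jacobian and a toy two-parameter residual rather than the paper's six-parameter vision model:

```python
import numpy as np

def numeric_jacobian(f, p, eps=1e-6):
    """Central-difference Jacobian of a residual function f at parameters p."""
    J = np.zeros((len(f(p)), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (f(p + dp) - f(p - dp)) / (2.0 * eps)
    return J

def newton_raphson(residual, p0, iters=20):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = numeric_jacobian(residual, p)
        # Least-squares update: solve J dp = -r in the least-squares sense.
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + dp
    return p

# Toy residual: recover the slope and intercept of a noisy line.
xs = np.linspace(0.0, 1.0, 50)
obs = 2.0 * xs + 0.5 + np.random.normal(0.0, 0.01, xs.size)
residual = lambda p: p[0] * xs + p[1] - obs
print(newton_raphson(residual, [0.0, 0.0]))       # approximately [2.0, 0.5]
```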

파이프 용접에서 다중 시각센서를 이용한 용접선 추적 및 용접결함 측정에 관한 연구 (A Study on Seam Tracking and Weld Defects Detecting for Automated Pipe Welding by Using Double Vision Sensors)

  • 송형진;이승기;강윤희;나석주
    • Journal of Welding and Joining / Vol. 21, No. 1 / pp.60-65 / 2003
  • At present, welding of most large-diameter pipes is carried out manually. Automation of the welding process is necessary for consistent weld quality and improved productivity. In this study, two vision sensors based on optical triangulation were used to obtain the information needed for seam tracking and weld-defect detection. Using the vision sensors, noise was removed, images and 3D information were obtained, and the positions of the feature points were detected. This process provided the seam and leg position data, calculated the gap size, fillet area, and leg length, and judged weld defects according to ISO 5817. Noise in the images was removed using the gradient values of the laser stripe's coordinates, and the feature points were detected with an algorithm based on iterative polygon approximation. Since processing time is critical, all of these steps must be carried out during welding.
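
A sketch of extracting seam feature points from a laser-stripe profile by polygon approximation, using OpenCV's Douglas-Peucker routine as a stand-in for the paper's iterative method; the stripe here is a synthetic V-groove profile:

```python
import numpy as np
import cv2

# Synthetic stripe: flat plate, V-groove, flat plate (one (x, y) sample per image column).
xs = np.arange(0, 200)
ys = np.where(xs < 80, 100,
              np.where(xs < 100, 100 + (xs - 80),
                       np.where(xs < 120, 120 - (xs - 100), 100)))
stripe = np.stack([xs, ys], axis=1).astype(np.float32).reshape(-1, 1, 2)

# Approximate the stripe with a few vertices; the vertices mark the groove edges and bottom.
feature_points = cv2.approxPolyDP(stripe, epsilon=2.0, closed=False)
print(feature_points.reshape(-1, 2))
```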

An Efficient Vision-based Object Detection and Tracking using Online Learning

  • Kim, Byung-Gyu;Hong, Gwang-Soo;Kim, Ji-Hae;Choi, Young-Ju
    • Journal of Multimedia Information System / Vol. 4, No. 4 / pp.285-288 / 2017
  • In this paper, we propose a vision-based object detection and tracking system that uses online learning. The proposed system adopts a feature-point-based method for tracking the inter-frame movement of a newly detected object, so that its motion can be estimated quickly and robustly. At the same time, it trains a detector for the tracked object online. When tracking fails, the detector's result is used temporarily to re-initialize the tracker, enabling robust tracking. In particular, the processing time is reduced by improving the way the objects' appearance models are updated, which increases the tracking performance of the system. Using a data set obtained in a variety of settings, we evaluate the performance of the proposed system in terms of processing time.
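
A sketch of the track-then-redetect loop this kind of system uses: feature points are followed with pyramidal Lucas-Kanade optical flow, and when too many points are lost a detector re-seeds the tracker (the video path is a placeholder, and the corner detector stands in for the online-trained object detector):

```python
import cv2

def detect_object(gray):
    """Placeholder for the online-trained detector: re-seed trackable feature points."""
    return cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01, minDistance=7)

def track_points(prev_gray, gray, points):
    """Follow feature points between frames, keeping only the reliably tracked ones."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    return new_pts[status.ravel() == 1].reshape(-1, 1, 2)

cap = cv2.VideoCapture("video.mp4")               # placeholder input
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = detect_object(prev_gray)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if points is not None and len(points) >= 10:
        points = track_points(prev_gray, gray, points)
    else:                                         # tracking failure: fall back to detection
        points = detect_object(gray)
    prev_gray = gray
cap.release()
```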

Accelerating particle filter-based object tracking algorithms using parallel programming

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Korea Information Processing Society (KIPS): Conference Proceedings / Proceedings of the KIPS 2018 Spring Conference / pp.469-470 / 2018
  • Object tracking is a common task in computer vision and an essential part of various vision-based applications. After several years of development, object tracking in video remains a challenging problem because of the varied visual properties of objects and their surrounding environment. The particle filter is a well-known technique among the common approaches and has proven effective in dealing with the difficulties of object tracking. However, the particle filter is a high-complexity algorithm, which is a severe disadvantage because object tracking algorithms are required to run in real time. In this research, we use parallel programming to accelerate particle filter-based object tracking algorithms. Experimental results show that our approach reduces the execution time significantly.
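
A sketch of the kind of acceleration described above: the per-particle likelihood evaluation, the costliest step of a particle filter, is spread across CPU cores with a process pool; the likelihood itself is a placeholder:

```python
import numpy as np
from multiprocessing import Pool

def likelihood(particle):
    """Placeholder observation model; a real tracker would compare image patches here."""
    target = np.array([50.0, 50.0])
    return float(np.exp(-np.sum((particle - target) ** 2) / 200.0))

def weigh_particles_parallel(particles, workers=4):
    """Evaluate all particle weights in parallel and normalise them."""
    with Pool(workers) as pool:
        weights = np.array(pool.map(likelihood, list(particles)))
    return weights / weights.sum()

if __name__ == "__main__":
    particles = np.random.uniform(0.0, 100.0, size=(10000, 2))
    weights = weigh_particles_parallel(particles)
    print("effective sample size:", 1.0 / np.sum(weights ** 2))
```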

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal / Vol. 33, No. 3 / pp.415-422 / 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected by color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by detected region-based tracking. By analyzing the contour of the tracked fingertip, fingertip parameters, such as position, thickness, and direction, are calculated. Finger actions, such as moving, clicking, and pointing, are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection can correctly detect fingertips, and that the recognized actions can be used for the interface with electronic appliances.
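
A minimal sketch of k-cosine angle detection on a contour: for each contour point the cosine of the angle between the vectors to its k-th neighbours is computed, and sharp convex points (fingertip candidates) are those whose cosine exceeds a threshold; the contour, k, and threshold are illustrative:

```python
import numpy as np

def k_cosine(contour, k=15):
    """Cosine of the angle at each contour point, measured over a span of k points."""
    n = len(contour)
    cosines = np.zeros(n)
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]
        b = contour[(i + k) % n] - contour[i]
        cosines[i] = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return cosines

def fingertip_candidates(contour, k=15, threshold=0.7):
    """Indices of contour points sharp enough to be fingertip candidates."""
    return np.where(k_cosine(contour, k) > threshold)[0]

# Toy closed contour: a smooth curve with one sharp spike (the "fingertip").
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
radius = 100.0 + 150.0 * np.exp(-((theta - np.pi) ** 2) / 0.01)
contour = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)
print(fingertip_candidates(contour))              # indices clustered around the spike
```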