• Title/Summary/Keyword: vision tracking system


A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 용접선 자동추적에 관한 연구)

  • 조택동;양상민;전진환
    • Journal of Welding and Joining
    • /
    • v.16 no.6
    • /
    • pp.68-76
    • /
    • 1998
  • A CCD camera with a laser stripe was applied to realize automatic weld seam tracking. The 3-dimensional information obtained from the vision system made it possible to generate the weld torch path. An adaptive Hough transformation was used to extract the laser stripe and to obtain specific weld points. Image processing for on-line control takes relatively long with the basic Hough transformation, although it tends to be robust against noise such as spatter. For this reason, it was complemented with the adaptive Hough transformation to provide on-line processing capability for scanning specific weld points. The dead zone, where sensing of the weld line is impossible, was eliminated by rotating the camera about an axis centered at the weld torch. When weld lines were detected, the camera angle was controlled to acquire the minimum image data needed for sensing the weld lines, and the image processing time was consequently reduced.
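
For a concrete feel for the stripe-extraction step, the snippet below is a minimal sketch (not the authors' adaptive Hough variant): it uses OpenCV's probabilistic Hough transform to find two laser-stripe segments in a thresholded image and takes their intersection as a candidate weld point. The image path and threshold values are placeholders.

```python
import cv2
import numpy as np

def find_weld_point(gray):
    """Locate a candidate weld point as the intersection of two laser-stripe
    segments found with the (basic, not adaptive) Hough transform."""
    # Keep only the bright laser stripe; 200 is an arbitrary placeholder threshold.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=30, maxLineGap=10)
    if segments is None or len(segments) < 2:
        return None
    # Take the two longest segments as the two sides of the groove stripe.
    segs = sorted(segments[:, 0, :],
                  key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]),
                  reverse=True)[:2]
    return line_intersection(segs[0], segs[1])

def line_intersection(s1, s2):
    """Intersection of two segments treated as infinite lines (None if parallel)."""
    x1, y1, x2, y2 = map(float, s1)
    x3, y3, x4, y4 = map(float, s2)
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

if __name__ == "__main__":
    img = cv2.imread("laser_stripe.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    if img is not None:
        print(find_weld_point(img))
```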


A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1105-1109
    • /
    • 1995
  • A CCD camera incorporated in a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. An adaptive Hough transformation was used to extract the laser stripe and obtain the welding-specific point. Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust against noise such as spatter. For that reason, it was complemented with the adaptive Hough transformation to provide on-line processing capability for scanning the welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, hence the image processing time is reduced. A fuzzy controller is adopted to control the camera angle.
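
The camera-angle loop in this version is described only as a fuzzy controller; the rule base is not given in the abstract. The fragment below is therefore just a generic, hand-written Mamdani-style sketch with made-up membership functions and rules, mapping the weld line's horizontal offset in the image to a camera rotation command.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def camera_angle_command(offset_px, half_width=320):
    """Map the weld-line offset from the image centre (pixels) to a camera
    rotation command (deg/s) using three hand-made fuzzy rules."""
    e = max(-1.0, min(1.0, offset_px / half_width))   # normalised offset error
    mu_left   = tri(e, -2.0, -1.0, 0.0)               # line drifting left
    mu_center = tri(e, -0.5,  0.0, 0.5)               # roughly centred
    mu_right  = tri(e,  0.0,  1.0, 2.0)               # line drifting right
    # Rule consequents (deg/s): rotate toward the line, hold when centred.
    outputs = {-5.0: mu_left, 0.0: mu_center, 5.0: mu_right}
    total = sum(outputs.values())
    return sum(v * w for v, w in outputs.items()) / total if total else 0.0

print(camera_angle_command(120))   # example: line 120 px right of centre
```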


A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.23 no.5
    • /
    • pp.485-497
    • /
    • 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and Extended Kalman Filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters, which account for the uncertainty of the camera's orientation and focal length and for the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate the six camera parameters. Based on these six parameters, estimated using three cameras, the robot's joint angles with respect to the moving targets are computed using both methods. The two robot vision control schemes are tested experimentally by tracking a moving target, and the results are compared to evaluate their strengths and weaknesses.
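
As a rough illustration of how an EKF can estimate fixed camera parameters from image measurements, the sketch below performs a single measurement update with a numerical Jacobian. The vision model h, its dimensions, and the toy data are stand-ins, not the six-parameter model of the paper.

```python
import numpy as np

def ekf_parameter_update(x, P, z, h, R, eps=1e-6):
    """One EKF measurement update for a constant-parameter model (F = I, Q = 0).
    x : current estimate of the camera parameters
    P : parameter covariance
    z : measured image coordinates of the target features
    h : nonlinear vision model mapping parameters -> predicted image coordinates
    R : measurement noise covariance
    """
    z_hat = h(x)
    # Numerical Jacobian of h at x (the paper's analytic model is not reproduced here).
    H = np.zeros((len(z), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        H[:, j] = (h(x + dx) - z_hat) / eps
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - z_hat)             # corrected parameter estimate
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy usage with a stand-in linear "vision model" (shape-checking only).
A = np.random.randn(4, 6)
h = lambda p: A @ p
x, P = np.zeros(6), np.eye(6)
z = h(np.ones(6)) + 0.01 * np.random.randn(4)
x, P = ekf_parameter_update(x, P, z, h, 0.01 * np.eye(4))
print(x)
```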

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.123-125
    • /
    • 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for a human-robot interaction (HRI) system. For carrying out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, both fed by a wireless vision camera, are implemented and fused with the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
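
The object tracking component is described as a modified optical flow algorithm; the modification itself is not detailed in the abstract. The snippet below shows only standard pyramidal Lucas-Kanade tracking in OpenCV as a baseline, with a placeholder video source.

```python
import cv2

def track_features(prev_gray, next_gray, points):
    """Track feature points between two frames with pyramidal Lucas-Kanade
    optical flow (the plain OpenCV routine, not the paper's modified version)."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return points[good], next_pts[good]

if __name__ == "__main__":
    cap = cv2.VideoCapture("robot_view.avi")       # placeholder video source
    ok, frame = cap.read()
    if ok:
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=100,
                                      qualityLevel=0.01, minDistance=7)
        ok, frame = cap.read()
        if ok and pts is not None:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            old, new = track_features(prev, gray, pts)
            motion = (new - old).reshape(-1, 2)
            print("mean feature motion (px):", motion.mean(axis=0))
```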


Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision (다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.7 no.1
    • /
    • pp.45-53
    • /
    • 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio frequency technologies, which require attaching tags to every target entity; implementing them incurs time and cost for attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking has been presented in previous research in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, that system is still in its infancy and has yet to be implemented in practical applications for two reasons. First, it does not involve entity matching across two views, and thus cannot track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity matching procedure is added to acquire matching pairs of the same entities across two views, which is necessary for tracking multiple entities. The proposed method also simplifies the calibration process by avoiding the use of a checkerboard, making it more suitable for realistic deployment on construction sites.
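
To illustrate the two added ingredients, entity matching across views and triangulation, the sketch below matches worker detections greedily by epipolar distance and then triangulates the matched pairs with OpenCV. It assumes the projection matrices and fundamental matrix are already available, which sidesteps the calibration simplification that is the paper's actual contribution.

```python
import cv2
import numpy as np

def match_and_triangulate(pts_left, pts_right, P1, P2, F, max_epi_dist=3.0):
    """Greedy entity matching across two views by epipolar distance, followed by
    linear triangulation. pts_left/pts_right: (N,2)/(M,2) pixel coordinates of
    detected workers; P1, P2: 3x4 projection matrices; F: fundamental matrix."""
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    matches, used = [], set()
    for i, p in enumerate(pts_left):
        # Epipolar line in the right image induced by the left detection p.
        l = F @ np.array([p[0], p[1], 1.0])
        d = np.abs(pts_right @ l[:2] + l[2]) / np.hypot(l[0], l[1])
        j = int(np.argmin(d))
        if d[j] < max_epi_dist and j not in used:
            matches.append((i, j))
            used.add(j)
    if not matches:
        return []
    lp = np.array([pts_left[i] for i, _ in matches], dtype=float).T    # 2xK
    rp = np.array([pts_right[j] for _, j in matches], dtype=float).T   # 2xK
    X = cv2.triangulatePoints(P1, P2, lp, rp)                          # 4xK homogeneous
    return (X[:3] / X[3]).T                                            # Kx3 points
```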

Motion Analysis of a Moving Object using one Camera and Tracking Method (단일 카메라와 Tracking 기법을 이용한 이동 물체의 모션 분석)

  • Shin, Myong-Jun;Son, Young-Ik;Kim, Kab-Il
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2821-2823
    • /
    • 2005
  • When dealing with image data acquired through a camera lens, much work is necessary to remove image distortion and obtain accurate information from the raw data. However, the calibration process is very complicated and requires much trial and error. In this paper, a new approach to image processing is presented by developing a hardware vision system with a tracking camera. Using motor control with encoders, the proposed tracking method gives the exact displacement of a moving object, so the method does not require any calibration for pincushion distortion. Owing to its mobility, one camera covers a wide range and, by lowering its height, also obtains a high-resolution image. We first introduce the structure of the motion analysis system, and then the constructed vision system is investigated through experiments.
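
The core idea, reading object displacement directly from the tracking motors' encoders instead of calibrating the image, amounts to simple unit conversion. The sketch below assumes a translational tracking stage driven by lead screws; the counts-per-revolution and pitch values are placeholders.

```python
def displacement_from_encoders(counts_x, counts_y,
                               counts_per_rev=2000, mm_per_rev=5.0):
    """Convert encoder counts of the two tracking-stage motors into the object's
    planar displacement (mm). Counts/rev and lead-screw pitch are placeholders."""
    mm_per_count = mm_per_rev / counts_per_rev
    return counts_x * mm_per_count, counts_y * mm_per_count

# Example: 4000 and -1000 counts accumulated since the last frame.
dx, dy = displacement_from_encoders(4000, -1000)
print(f"object moved {dx:.2f} mm in x, {dy:.2f} mm in y")
```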


3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.1458-1463
    • /
    • 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of these applications. 3D tracking is a difficult problem to solve because explicit 3D information about objects in the scene is lost during the image formation process of the camera. Recently, many vision systems have used stereo cameras, especially for 3D tracking. 3D feature based tracking (3DFBT), one of the 3D tracking approaches using stereo vision, has many advantages compared to other tracking methods. Assuming the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods are based on an accurate camera model, so modelling error and sensitivity to lens distortion are embedded. Therefore, this paper proposes a 3D feature based tracking method using an SVM to solve the reconstruction problem.
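
The reconstruction step, learning the mapping from stereo image coordinates to 3-D position without an explicit camera model, can be imitated with a support vector regressor. The sketch below trains scikit-learn's SVR on synthetic rectified-stereo data; the focal length, baseline, and sample points are all made up for illustration.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Synthetic training data standing in for calibration measurements:
# stereo pixel coordinates (uL, vL, uR, vR) of points with known 3-D positions.
rng = np.random.default_rng(0)
XYZ = rng.uniform([-200, -200, 500], [200, 200, 1500], size=(300, 3))
f, b = 800.0, 100.0                        # toy focal length (px) and baseline (mm)
uL = f * XYZ[:, 0] / XYZ[:, 2]
vL = f * XYZ[:, 1] / XYZ[:, 2]
uR = f * (XYZ[:, 0] - b) / XYZ[:, 2]
vR = vL                                    # rectified pair: same image row
pix = np.column_stack([uL, vL, uR, vR])

# Learn the pixel -> 3-D mapping directly, with no explicit camera model.
model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0)).fit(pix, XYZ)
print(model.predict(pix[:1]), XYZ[0])      # reconstructed vs. true point
```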


Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors (속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종)

  • Cho, Namsub;Kwon, Ji-Wook;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.6
    • /
    • pp.852-862
    • /
    • 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints using a backstepping-like feedback linearization. For target tracking, a vision system is applied to the mobile robot to obtain the relative posture between the mobile robot and the target. The robot does not use sensors to obtain velocity information; the velocities of both the mobile robot and the target are therefore assumed unknown, and the proposed method uses only their maximum velocity information. First, pseudo commands for the forward linear velocity and the heading direction angle are designed based on the kinematics using the obtained image information. Then, the actual control inputs are designed to make the actual forward linear velocity and heading direction angle follow the pseudo commands. Through simulations and experiments with the mobile robot, we have confirmed that the proposed control method is able to track the target even when no velocity sensors are used.
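
The kinematic pseudo-command step can be illustrated with a much simpler law than the paper's backstepping-like feedback linearization: given the target's relative position from the vision system, point the heading at the target and saturate the forward speed by the known maximum velocity. The gains and limits below are placeholders.

```python
import math

def pseudo_commands(dx, dy, v_max=0.5, k_v=0.8):
    """Compute forward-velocity and heading pseudo commands from the target's
    relative position (dx, dy) in the robot frame, as obtained from the vision
    system. Plain proportional pursuit with saturation, not the paper's
    backstepping-like design."""
    distance = math.hypot(dx, dy)
    heading_cmd = math.atan2(dy, dx)        # point the robot at the target
    v_cmd = min(v_max, k_v * distance)      # saturate by the known maximum speed
    return v_cmd, heading_cmd

print(pseudo_commands(1.2, -0.3))           # target 1.2 m ahead, 0.3 m to the right
```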

3-D Object Tracking using 3-D Information and Optical Correlator in the Stereo Vision System (스테레오 비젼 시스템에서 3차원정보와 광 상관기를 이용한 3차원 물체추적 방법)

  • 서춘원;이승현;김은수
    • Journal of Broadcast Engineering
    • /
    • v.7 no.3
    • /
    • pp.248-261
    • /
    • 2002
  • In this paper, we propose a new 3-dimensional (3-D) object-tracking algorithm that controls a stereo camera using a variable window mask, 3-D information, and an optical BPEJTC. The distance from the stereo camera to the tracked object can be easily acquired through the elements of the stereo vision system, and with this information the area of the tracked object can be extracted by varying window masks. This extracted area of the tracked object is used as the next updated reference image. Furthermore, by carrying out an optical BPEJTC between the reference image and a stereo input image, the coordinates of the tracked object's location can be acquired, and with these values 3-D object tracking can be accomplished by manipulating the convergence angle and the pan/tilt of the stereo camera. The experimental results show that the proposed algorithm can execute 3-D object tracking by extracting the area of the target object from the input image independently of the background noise in the stereo input image. Moreover, a possible implementation of 3-D tele-working or an adaptive 3-D object tracker using the proposed algorithm is suggested.
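
A digital stand-in for the optical BPEJTC step is FFT-based cross-correlation between the reference image and the input image, whose peak gives the tracked object's location. The sketch below is only that digital analogue, not the binary-phase-extraction variant realized optically in the paper; the scene and template are synthetic.

```python
import numpy as np

def correlation_peak(scene, reference):
    """Locate the reference pattern in the scene by FFT-based cross-correlation,
    a digital stand-in for the optical correlator used in the paper."""
    ref = np.zeros_like(scene)
    ref[:reference.shape[0], :reference.shape[1]] = reference   # zero-padded template
    corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))))
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return x, y              # top-left corner of the best match in the scene

# Toy example: a bright 8x8 block hidden at (40, 25) in a noisy 128x128 scene.
rng = np.random.default_rng(1)
scene = 0.1 * rng.random((128, 128))
scene[25:33, 40:48] += 1.0
template = np.ones((8, 8))
print(correlation_peak(scene, template))    # expected near (40, 25)
```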

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김동호;김재웅
    • Journal of Welding and Joining
    • /
    • v.19 no.2
    • /
    • pp.208-214
    • /
    • 2001
  • In this study, we constructed a visual sensor system for real-time weld seam tracking in GMA welding. The sensor part consists of a CCD camera, a band-pass filter, a diode laser system with a cylindrical lens, and a vision board for inter-frame processing. We used a commercialized robot system that includes a GMA welding machine. To extract the weld seam we used inter-frame processing on the vision board, with which we could remove the noise due to spatter and fume in the image. Since the resulting image was quite clean after the inter-frame processing, we could use the simplest methods to extract the weld seam from the image, such as the first differential and central difference methods. We also applied a moving average to the successive weld seam position data to reduce fluctuation. In experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as a fillet joint, a V-groove, and a lap joint, whose weld seams include planar and height-directional variation.
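
Two of the simple processing steps mentioned, inter-frame suppression of spatter and moving-average smoothing of the seam positions, are easy to sketch. The functions below are rough stand-ins for the vision-board operations, not the authors' implementation; window sizes and thresholds are placeholders.

```python
import numpy as np

def suppress_spatter(frame_a, frame_b):
    """Inter-frame processing stand-in: spatter flashes last only one frame, so
    the pixel-wise minimum of two consecutive frames keeps the steady laser
    stripe while suppressing transient bright noise."""
    return np.minimum(frame_a, frame_b)

def seam_x_per_row(stripe_image):
    """Pick the seam/stripe position in each row from the peak of the central
    difference of the intensity profile (rough version of first-differential
    extraction)."""
    grad = np.gradient(stripe_image.astype(float), axis=1)   # central difference
    return np.argmax(np.abs(grad), axis=1)

def moving_average(positions, window=5):
    """Smooth successive seam-position samples to reduce fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="valid")
```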
