• Title/Summary/Keyword: Vision sensor


Development of Laser Vision Sensor with Multi-line for High Speed Lap Joint Welding

  • Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society, v.2 no.2, pp.57-60, 2002
  • Generally, a laser vision sensor makes it possible to design a highly reliable and precise range sensor at low cost. When the laser vision sensor is applied to lap joint welding, however, there are many limitations, so a specially designed hardware system has to be used. If multiple lines are used instead of a single line, multiple sets of range data can be generated from one image. Even at a fixed frame rate of 30 fps, the amount of 2D range data generated increases with the number of lines used. In this study, a laser vision sensor with a multi-line pattern is developed with a conventional CCD camera to carry out high-speed seam tracking in lap joint welding.
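For context on how a laser stripe sensor produces range data, below is a minimal triangulation sketch (not the authors' implementation): a pixel on the imaged laser line defines a viewing ray through the camera centre, and intersecting that ray with the known laser plane gives a 3-D point. The camera intrinsics and plane parameters are hypothetical.

```python
import numpy as np

# Minimal sketch: laser-stripe triangulation with hypothetical parameters.

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Unit viewing ray in camera coordinates for pixel (u, v)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def intersect_laser_plane(ray, plane_n, plane_d):
    """Intersect the ray t*ray (t > 0) with the laser plane n.x + d = 0."""
    t = -plane_d / float(plane_n @ ray)
    return t * ray

if __name__ == "__main__":
    # Hypothetical laser plane tilted 30 degrees, offset 0.10 m from the camera.
    n = np.array([0.0, -np.sin(np.radians(30)), np.cos(np.radians(30))])
    d = -0.10
    ray = pixel_to_ray(400, 300, fx=800, fy=800, cx=320, cy=240)
    print("range point (m):", intersect_laser_plane(ray, n, d))
```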


Vision and force/torque sensor fusion in peg-in-hole using fuzzy logic (삽입 작업에서 퍼지추론에 의한 비젼 및 힘/토오크 센서의 퓨젼)

  • 이승호;이범희;고명삼;김대원
    • ICROS (Institute of Control, Robotics and Systems) Conference Proceedings, 1992.10a, pp.780-785, 1992
  • We present a multi-sensor fusion method for the positioning control of a robot using fuzzy logic. In general, the vision sensor is used in gross motion control and the force/torque sensor is used in fine motion control. We construct a fuzzy logic controller that combines the vision sensor data and the force/torque sensor data, and apply it to the peg-in-hole process. Simulation results support the theoretical results.
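As a rough illustration of the idea (not the authors' controller), the sketch below blends a vision-derived correction and a force/torque-derived correction with a single fuzzy grade of "far from the hole", so vision dominates gross motion and force/torque dominates fine motion. The membership bounds and example numbers are assumptions.

```python
# Minimal sketch: fuzzy blending of vision and force/torque corrections
# for peg-in-hole insertion (illustrative membership function and values).

def mu_far(distance_mm, lo=1.0, hi=10.0):
    """Membership grade of 'peg is far from the hole' (0..1), rising with distance."""
    return min(1.0, max(0.0, (distance_mm - lo) / (hi - lo)))

def fused_correction(vision_dx_mm, force_dx_mm, distance_mm):
    """Weight the two sensor-derived corrections by the fuzzy distance grade."""
    w = mu_far(distance_mm)
    return w * vision_dx_mm + (1.0 - w) * force_dx_mm

if __name__ == "__main__":
    # Far from the hole: the vision correction dominates.
    print(fused_correction(vision_dx_mm=4.0, force_dx_mm=0.2, distance_mm=9.0))
    # Near contact: the force/torque correction dominates.
    print(fused_correction(vision_dx_mm=4.0, force_dx_mm=0.2, distance_mm=1.5))
```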


Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems, v.18 no.8, pp.744-749, 2012
  • In this paper, we suggest how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine vision camera, an encoder, and an IMU sensor. The heading of the IMU sensor is measured with a terrestrial magnetism (magnetic field) sensor, which is constantly disturbed by the surrounding environment. To increase the accuracy, we extract a ceiling template with the vision camera, measure the heading angle with a pattern matching algorithm, and calibrate the IMU sensor by comparing this angle with the IMU heading to obtain an offset value. The encoder, IMU, and camera angle values used to estimate the robot's position are transferred to a host PC over a wireless network, and the host PC estimates the robot's location from all of these values. As a result, we obtained more accurate position estimates than when relying on IMU sensor calibration alone.
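The heading-correction step described above can be sketched as follows; the variable names and the single-sample offset are illustrative assumptions, not the paper's code. A vision-derived ceiling angle supplies an offset for the magnetometer-based IMU heading, and the corrected heading drives the encoder dead reckoning.

```python
import math

# Minimal sketch: correcting an IMU/magnetometer heading with a vision-derived
# ceiling-pattern angle, then advancing a dead-reckoned pose with encoder travel.

def wrap_deg(a):
    """Wrap an angle to (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

def dead_reckoning_step(x, y, heading_deg, encoder_dist):
    """Advance the pose estimate by the encoder travel along the heading."""
    h = math.radians(heading_deg)
    return x + encoder_dist * math.cos(h), y + encoder_dist * math.sin(h)

if __name__ == "__main__":
    imu_heading = 92.0      # magnetometer-based heading, locally disturbed
    vision_heading = 90.0   # heading from ceiling-pattern matching
    offset = wrap_deg(vision_heading - imu_heading)   # calibration offset
    heading = wrap_deg(imu_heading + offset)          # corrected heading
    print(dead_reckoning_step(0.0, 0.0, heading, encoder_dist=0.5))
```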

3-D vision sensor system for arc welding robot with coordinated motion by transputer system

  • Ishida, Hirofumi;Kasagami, Fumio;Ishimatsu, Takakazu
    • ICROS (Institute of Control, Robotics and Systems) Conference Proceedings, 1993.10b, pp.446-450, 1993
  • In this paper we propose an arc welding robot system in which two robots work coordinately and a vision sensor is employed. One robot arm holds the welding target as a positioning device, and the other robot moves the welding torch. The vision sensor consists of two laser slit-ray projectors and one CCD TV camera and is mounted on the top of one robot. The vision sensor detects the 3-dimensional shape of the groove on the target workpiece that needs to be welded, and the two robots are moved coordinately to trace the groove accurately. In order to realize fast image processing, a total of five high-speed parallel processing units (Transputers) are employed. The teaching of the coordinated motions is simplified considerably by this vision sensor. Experimental results demonstrate the applicability of our system.
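As a loose illustration of tracing the measured groove shape (not the authors' method), the sketch below picks the deepest point of one laser-stripe cross-section as the groove centre that the robots would follow; the synthetic V-groove profile is assumed.

```python
import numpy as np

# Minimal sketch: locate a groove in one laser-stripe cross-section profile
# by taking its deepest point as the seam point to trace.

def groove_point(profile_xz: np.ndarray):
    """profile_xz: (N, 2) array of (lateral x, depth z) points from one stripe.
    Returns the (x, z) of the deepest point, taken as the groove centre."""
    idx = int(np.argmin(profile_xz[:, 1]))
    return profile_xz[idx]

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 101)
    z = np.abs(x) * 0.3          # synthetic V-groove cross-section (depth in mm)
    print(groove_point(np.column_stack([x, z])))
```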


High speed seam tracking system using vision sensor with multi-line laser (다중 레이저 선을 이용한 비전 센서를 통한 고속 용접선 추적 시스템)

  • 성기은;이세헌
    • Proceedings of the KWS Conference, 2002.05a, pp.49-52, 2002
  • A vision sensor measures range data using a laser light source. Such a sensor generally uses a patterned laser shaped as a single line, but a single-line sensor cannot satisfy the trend toward faster and more precise processing. The sensor's sampling rate increases as the image processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple sets of range data can be measured in one image. For a camera with the same sampling rate, the number of 2D range data profiles per second is directly proportional to the number of laser lines. For example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.
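The profile-rate argument in the abstract reduces to a one-line calculation; the sketch below reproduces it with the 30 fps camera limit and the 5-line example (numbers taken from the abstract).

```python
# Minimal sketch: effective 2D range-profile rate of a multi-line laser
# vision sensor at a fixed camera frame rate.

def profile_rate(frame_rate_fps: float, num_laser_lines: int) -> float:
    """Each image yields one range profile per projected laser line,
    so the profile rate is frame rate x number of lines."""
    return frame_rate_fps * num_laser_lines

if __name__ == "__main__":
    for lines in (1, 3, 5):
        print(f"{lines} line(s) at 30 fps -> {profile_rate(30, lines):.0f} profiles/s")
```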


Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor (바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining, v.19 no.2, pp.200-207, 2001
  • A three-dimensional bead profile was measured in GMAW using a biprism stereo vision sensor, which consists of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has various advantages over the conventional two-camera stereo vision system, such as finding the corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find the corresponding points along the pool boundary. The iso-intensity contour corresponding to the pool boundary was found at the pixel level, and a filter-based matching algorithm was used to refine the corresponding points to subpixel accuracy. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
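For reference, depth recovery in any stereo arrangement with a known effective baseline, including a biprism one, follows the textbook relation Z = fB/d; the sketch below applies it with hypothetical focal length, baseline, and disparity values (it does not reproduce the paper's linear calibration).

```python
# Minimal sketch: depth from the disparity of corresponding points found
# along the same scanline, using the classic pinhole stereo relation.

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Z = f * B / d, with focal length and disparity in pixels, baseline in mm."""
    return focal_px * baseline_mm / disparity_px

if __name__ == "__main__":
    # Hypothetical effective baseline created by the biprism.
    print(depth_from_disparity(focal_px=1200.0, baseline_mm=20.0, disparity_px=48.0), "mm")
```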


A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems, v.22 no.8, pp.633-642, 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of the preceding vehicles. In particular, we studied how to compensate for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interactive Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models depending on a lateral-compensation mode and a radar-single-sensor mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as the data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently fuses the radar and vision measurements into single state estimates. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
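The two-step correction mentioned above can be illustrated with generic Kalman algebra (this is not the paper's IMM-PDAF filter): the radar measurement, with large lateral noise, and the vision measurement, with small lateral noise, update the same state in sequence. All matrices and numbers are assumptions.

```python
import numpy as np

# Minimal sketch: sequential (two-step) Kalman correction with a radar
# measurement followed by a vision measurement on the same state.

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    # State: [longitudinal pos, lateral pos]; radar is poor laterally,
    # so the vision measurement compensates the lateral component.
    x = np.array([20.0, 0.0]); P = np.diag([4.0, 4.0])
    H = np.eye(2)
    x, P = kalman_update(x, P, z=np.array([21.0, 1.5]), H=H, R=np.diag([0.5, 4.0]))  # radar
    x, P = kalman_update(x, P, z=np.array([20.8, 0.3]), H=H, R=np.diag([2.0, 0.2]))  # vision
    print(x)
```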

A Study on Vision Sensor-based Measurement of Die Location for Its Remodeling (금형 개조 용접시 시각 센서를 이용한 대상물 위치 파악에 관한 연구)

  • Kim, Jitae;Na, Suck-Joo
    • Journal of the Korean Society for Precision Engineering, v.17 no.10, pp.141-146, 2000
  • We introduce algorithms for 3-D position estimation using a laser vision sensor for automatic die remodeling. First, a vision sensor based on optical triangulation was used to collect range data of the die surface. Second, line vector equations were constructed from the measured range data, and an analytic algorithm was proposed for recognizing the die location from these vector equations. This algorithm can construct the transformation matrix without any specific corresponding points. To verify the algorithm, a folded SUS plate was measured by the laser vision sensor attached to a 3-axis Cartesian manipulator, and the transformation matrix was calculated.
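As a loose sketch of recovering orientation from line directions rather than point correspondences (a generic triad-style construction, not necessarily the paper's analytic algorithm), the code below builds a rotation from two measured edge-direction vectors and their model counterparts.

```python
import numpy as np

# Minimal sketch: estimate an object's orientation from two edge-direction
# vectors, with no specific corresponding points required.

def frame_from_directions(v1, v2):
    """Orthonormal frame whose x-axis follows v1 and whose x-y plane contains v2."""
    x = v1 / np.linalg.norm(v1)
    z = np.cross(v1, v2)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def rotation_between(model_dirs, measured_dirs):
    """Rotation taking the model edge directions onto the measured ones."""
    return frame_from_directions(*measured_dirs) @ frame_from_directions(*model_dirs).T

if __name__ == "__main__":
    model = (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
    theta = np.radians(10)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    measured = (Rz @ model[0], Rz @ model[1])
    print(np.round(rotation_between(model, measured), 3))   # recovers Rz
```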


Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor

  • Jeong, Jeong-Woo;Kang, Hee-Jun
    • International Journal of Precision Engineering and Manufacturing, v.4 no.1, pp.43-48, 2003
  • A linear laser vision sensor called the 'Perceptron TriCam Contour' is mounted on an industrial robot and often used for various robot applications such as position correction and part inspection. In this paper, a sensor center position calibration is presented for the most accurate use of the robot-Perceptron system. The obtained algorithm is suitable for on-site calibration in an industrial application environment. The calibration algorithm requires the robot joint sensor readings and the Perceptron sensor measurements on a specially devised jig, which is essential for this calibration process. The algorithm is implemented on the Hyundai 7602 AP robot, and the Perceptron measurement accuracy is improved to within 1.4 mm.
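One simplified way to pose such a sensor-centre calibration (an assumption-laden sketch, not the paper's algorithm) is as a linear least-squares problem in the unknown sensor-centre offset and one fixed jig point, given several robot flange poses and the corresponding sensor measurements; the sketch below also assumes the sensor axes are aligned with the flange axes.

```python
import numpy as np

# Minimal sketch: solve R_i (c + m_i) + t_i = p for the sensor-centre offset c
# (in the flange frame) and the fixed jig point p, from flange poses (R_i, t_i)
# and sensor measurements m_i.  Synthetic, noise-free data for illustration.

def rodrigues(axis, angle):
    """Rotation matrix from an axis-angle pair."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def calibrate_sensor_centre(poses, measurements):
    """Stack the linear equations [R_i  -I][c; p] = -(t_i + R_i m_i) and solve."""
    A, b = [], []
    for (R, t), m in zip(poses, measurements):
        A.append(np.hstack([R, -np.eye(3)]))
        b.append(-(t + R @ m))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x[:3], x[3:]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    c_true, p_true = np.array([0.05, 0.00, 0.12]), np.array([0.80, 0.30, 0.20])
    poses, meas = [], []
    for _ in range(6):
        R = rodrigues(rng.normal(size=3), rng.uniform(-0.6, 0.6))
        t = rng.uniform(-0.2, 0.2, 3)
        poses.append((R, t))
        meas.append(R.T @ (p_true - t) - c_true)   # consistent synthetic data
    c_est, p_est = calibrate_sensor_centre(poses, meas)
    print(np.round(c_est, 3), np.round(p_est, 3))  # recovers c_true, p_true
```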

INS/Multi-Vision Integrated Navigation System Based on Landmark (다수의 비전 센서와 INS를 활용한 랜드마크 기반의 통합 항법시스템)

  • Kim, Jong-Myeong;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.45 no.8, pp.671-677, 2017
  • A new INS/vision integrated navigation system using multiple vision sensors is addressed in this paper. When the number of landmarks measured by a vision sensor is smaller than the required number, the navigation filter may diverge. To prevent this problem, a multi-vision concept is applied to expand the field of view so that a reliable number of landmarks is always guaranteed. In this work, the cameras are installed at 0, 120, and -120 degrees with respect to the body frame to improve observability. Finally, the proposed technique is verified by numerical simulation.
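The field-of-view argument can be illustrated by counting how many landmark bearings fall inside at least one camera's horizontal field of view for the 0/+120/-120 degree arrangement; the half field of view and the bearings below are hypothetical.

```python
# Minimal sketch: count landmarks visible to a set of cameras mounted at
# fixed yaw angles about the body z-axis, to check that enough measurements
# are available for the navigation filter.

def visible_count(landmark_bearings_deg, cam_yaws_deg=(0.0, 120.0, -120.0),
                  half_fov_deg=45.0):
    """Number of landmark bearings seen by at least one camera."""
    count = 0
    for b in landmark_bearings_deg:
        diffs = [abs((b - yaw + 180.0) % 360.0 - 180.0) for yaw in cam_yaws_deg]
        if min(diffs) <= half_fov_deg:
            count += 1
    return count

if __name__ == "__main__":
    bearings = [10.0, 100.0, -130.0, 179.0]   # hypothetical landmark bearings
    print(visible_count(bearings), "landmarks visible with three cameras")
    print(visible_count(bearings, cam_yaws_deg=(0.0,)), "visible with a single camera")
```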