• Title/Summary/Keyword: Robot Vision Control Algorithm


Design of an Intelligent Integrated Control System Using Neural Network (뉴럴 네트워크를 이용한 지능형 통합 제어 시스템 설계)

  • 정동연;김경년;이정호;김원일;한성현
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2002.04a
    • /
    • pp.381-386
    • /
    • 2002
  • In this paper, we propose a new approach to the design of a robot vision system to develop the technology for the automatic testing and assembly of precision mechanical and electronic parts for factory automation. In order to perform real-time implementation of the automatic assembly tasks in complex processes, we developed an intelligent control algorithm based on neural network control theory to enhance precise motion control. The automatic test tasks were implemented with a real-time vision algorithm based on TMS320C31 DSPs, which correctly distinguishes acceptable parts from defective ones through pattern recognition. Finally, the performance of the proposed robot vision system is illustrated by experiments on a model similar to the fifth of the twelve cells for automatic testing and assembly at S company.

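The entry above does not give the network architecture or the DSP vision routine; purely as an illustration of the pattern-recognition step it describes, the sketch below trains a small feed-forward network in NumPy to classify part feature vectors as acceptable or defective. The feature size, layer sizes, and training data are hypothetical placeholders, not taken from the paper.

```python
# Minimal sketch (assumptions: feature vector size, layer sizes, and training data
# are hypothetical; the paper's actual network and DSP vision code are not given).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in for features extracted from part images (e.g. area, edge counts).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)  # 1 = acceptable, 0 = defective

# One hidden layer; the sizes are arbitrary choices for the sketch.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for epoch in range(500):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass for a binary cross-entropy loss.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accept = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
print("training accuracy:", (accept == (y > 0.5)).mean())
```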

Design of an Intelligent Integrated Control System Using Neural Network (뉴럴 네트워크를 이용한 지능형 통합 제어 시스템 설계)

  • 정동연;이우송;안인모;한성현
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2001.04a
    • /
    • pp.217-222
    • /
    • 2001
  • In this paper, we propose a new approach to the design of a robot vision system to develop the technology for the automatic testing and assembly of precision mechanical and electronic parts for factory automation. In order to perform real-time implementation of the automatic assembly tasks in complex processes, we developed an intelligent control algorithm based on neural network control theory to enhance precise motion control. The automatic test tasks were implemented with a real-time vision algorithm based on TMS320C31 DSPs, which correctly distinguishes acceptable parts from defective ones through pattern recognition. Finally, the performance of the proposed robot vision system is illustrated by experiments on a model similar to the fifth of the twelve cells for automatic testing and assembly at S company.


Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.3
    • /
    • pp.210-216
    • /
    • 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles by using a speed command based on the robot model and a curved movement path. Recent research has improved these approaches by optimizing the algorithms for actual robots, but comparatively little work has used omni-directional vision SLAM to acquire the surrounding information at once. A robot running the proposed algorithm avoids obstacles according to the avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm by comparing the position estimated by the proposed algorithm with the real position recorded while avoiding obstacles.
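The obstacle-detection step in the entry above names Lucas-Kanade optical flow on fisheye images. A minimal sketch of that step using OpenCV's pyramidal Lucas-Kanade tracker is shown below; the camera index, the omitted fisheye unwarping, and the flow-magnitude threshold are assumptions, and this is not the paper's implementation.

```python
# Sketch only: track sparse corners between consecutive frames with pyramidal
# Lucas-Kanade optical flow and flag points with large flow as obstacle candidates.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # assumed camera index
ok, prev = cap.read()
assert ok, "no camera frame available"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: track the previous corners into the new frame.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good_new = next_pts[status.flatten() == 1]
    good_old = prev_pts[status.flatten() == 1]
    # Flow magnitude per tracked point; large motion marks obstacle candidates.
    flow = np.linalg.norm(good_new - good_old, axis=-1).flatten()
    moving = good_new.reshape(-1, 2)[flow > 2.0]  # assumed pixel threshold
    for x, y in moving:
        cv2.circle(frame, (int(x), int(y)), 4, (0, 0, 255), -1)
    cv2.imshow("obstacle candidates", frame)
    if cv2.waitKey(1) & 0xFF == 27:               # Esc quits
        break
    prev_gray, prev_pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```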

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.314-319
    • /
    • 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques have been applied in order to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm that actuates the robotic manipulator to cancel the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting-diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line whose images are captured by the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces, and the second concerns joining a cylindrically shaped piece to a flat surface. An implementation of this computer vision system using parallel computer processing is being studied.

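The entry above describes extracting a laser reference line from camera images and computing deviations from the desired welding trajectory. A rough sketch of such stripe extraction and deviation measurement is given below on a synthetic frame; the intensity threshold, the "largest jump marks the seam" heuristic, and the mm-per-pixel scale are assumptions, not the authors' method.

```python
# Sketch: locate a laser stripe in each column of a grayscale frame and
# estimate the seam deviation from the image centre. Thresholds, camera
# geometry, and the mm-per-pixel scale are assumptions, not the paper's values.
import numpy as np

def stripe_profile(gray, intensity_thresh=200):
    """Return, per column, the row index of the brightest pixel above threshold."""
    rows = gray.argmax(axis=0).astype(float)
    peak = gray.max(axis=0)
    rows[peak < intensity_thresh] = np.nan        # no stripe detected in this column
    return rows

def seam_deviation(rows, mm_per_px=0.1):
    """Heuristic: the joint shows up as the largest jump in the stripe profile;
    return its column offset from the image centre, converted to millimetres."""
    valid = np.where(~np.isnan(rows))[0]
    if len(valid) < 2:
        return None
    jumps = np.abs(np.diff(rows[valid]))
    seam_col = valid[jumps.argmax()]
    centre = len(rows) / 2.0
    return (seam_col - centre) * mm_per_px

# Synthetic frame standing in for a camera capture: a bright stripe with a step.
frame = np.zeros((120, 160), dtype=np.uint8)
cols = np.arange(160)
stripe_rows = np.where(cols < 90, 40, 55)         # step at column 90 = the joint
frame[stripe_rows, cols] = 255

profile = stripe_profile(frame)
print("lateral deviation [mm]:", seam_deviation(profile))
```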

Vision Sensor and Ultrasonic Sensor Fusion Using Neural Network

  • Baek, Sang-Hoon;Oh, Se-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.668-671
    • /
    • 2004
  • This paper proposes a new method for fusing an ultrasonic sensor and a vision sensor at the sensor level. In a typical vision system, the vision system finds the edges of objects, while a typical ultrasonic system finds the absolute distance between the robot and an object. The proposed method therefore integrates these two different types of data, and the system ultimately produces a complete output for robot control. Beyond merely integrating different kinds of data, this paper also proposes fusing the information received from the different kinds of sensors. The method has the advantages of a simple algorithm implementation and real-time robot control.

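As a rough illustration of the sensor-level fusion described in the entry above, the sketch below concatenates a crude vision edge-feature vector with an ultrasonic range reading and maps the joint vector through a small feed-forward network to a motion command. The feature layout, network sizes, and weights are placeholders; the paper does not specify its network.

```python
# Sketch of sensor-level fusion: vision edge features and an ultrasonic range
# reading are concatenated into one input vector and mapped by a small
# feed-forward network to a motion command. Everything below is illustrative.
import numpy as np

rng = np.random.default_rng(1)

def edge_features(gray, n_bins=8):
    """Crude vision feature: fraction of strong horizontal-gradient pixels per image band."""
    grad = np.abs(np.diff(gray.astype(float), axis=1))
    bands = np.array_split(grad, n_bins, axis=1)
    return np.array([(b > 30).mean() for b in bands])   # assumed gradient threshold

def fuse(gray, ultrasonic_range_m, W1, b1, W2, b2):
    # Joint feature vector: 8 vision features + 1 range reading.
    x = np.concatenate([edge_features(gray), [ultrasonic_range_m]])
    h = np.tanh(W1 @ x + b1)
    steer, speed = W2 @ h + b2         # outputs: steering [rad], speed [m/s]
    return steer, speed

# Placeholder weights (a trained network would replace these).
W1, b1 = rng.normal(scale=0.3, size=(12, 9)), np.zeros(12)
W2, b2 = rng.normal(scale=0.3, size=(2, 12)), np.zeros(2)

gray = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # stand-in camera frame
print(fuse(gray, ultrasonic_range_m=0.8, W1=W1, b1=b1, W2=W2, b2=b2))
```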

Design of Autonomous Stair Robot System (자율주행 형 계단 승하강용 로봇 시스템 설계)

  • 홍영호;김동환;임충혁
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.1
    • /
    • pp.73-81
    • /
    • 2003
  • An autonomous stair-climbing robot that recognizes stairs and climbs up and down them by utilizing robot vision, photo sensors, and an appropriate climbing algorithm is introduced. Four arms associated with four wheels make the robot climb up and down more safely and faster than a simple track-type robot. The robot can adjust its wheel base according to the stair width; hence it can adapt to stairs of variable width with different algorithms for climbing up and down. The command and image data acquired by the robot are transferred to the main computer through RF wireless modules, and the data are delivered to a remote computer via a network connection with appropriate data compression, so that real-time image monitoring is implemented effectively.
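The entry above mentions relaying acquired image data to a remote computer over a network with data compression for real-time monitoring. A loose sketch of that idea is shown below using JPEG compression and a length-prefixed TCP transfer on the loopback interface; the host, port, JPEG quality, and the use of TCP in place of the robot's RF modules are assumptions for illustration only.

```python
# Sketch: JPEG-compress a frame and send it with a 4-byte length prefix over TCP;
# a receiver thread decodes it for monitoring. Endpoint and quality are assumed.
import socket
import struct
import threading
import time
import cv2
import numpy as np

HOST, PORT = "127.0.0.1", 50007     # assumed loopback endpoint

def receiver():
    # Minimal monitoring side: accept one connection, read a length-prefixed JPEG.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            size = struct.unpack(">I", conn.recv(4))[0]
            buf = b""
            while len(buf) < size:
                buf += conn.recv(size - len(buf))
            frame = cv2.imdecode(np.frombuffer(buf, np.uint8), cv2.IMREAD_COLOR)
            print("received frame of shape", frame.shape)

def send_frame(frame, jpeg_quality=80):
    # Robot side: JPEG-compress the frame and send it with a 4-byte length prefix.
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    payload = jpeg.tobytes()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(struct.pack(">I", len(payload)) + payload)

t = threading.Thread(target=receiver)
t.start()
time.sleep(0.2)                                      # crude wait for the listener
send_frame(np.zeros((240, 320, 3), dtype=np.uint8))  # stand-in for a camera capture
t.join()
```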

Development of Vision Control Scheme of Extended Kalman filtering for Robot's Position Control (실시간 로봇 위치 제어를 위한 확장 칼만 필터링의 비젼 제어 기법 개발)

  • Jang, W.S.;Kim, K.S.;Park, S.I.;Kim, K.Y.
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.23 no.1
    • /
    • pp.21-29
    • /
    • 2003
  • It is very important to reduce the computational time required to estimate the parameters of a vision control algorithm for real-time robot position control. Unfortunately, the commonly used batch estimation requires too much computational time because it is an iterative method, so it is difficult to apply to real-time robot position control. On the other hand, the Extended Kalman Filter (EKF) has many advantages for calculating the parameters of the vision system, since it is a simple and efficient recursive procedure. Thus, this study develops an EKF algorithm for real-time robot vision control. The vision system model used in this study involves six parameters that account for the inner (orientation, focal length, etc.) and outer (the relative location between the robot and the camera) camera parameters. The EKF is first applied to estimate these parameters, and then, with the estimated parameters, to estimate the robot's joint angles used in the robot's operation. Finally, the practicality of the vision control scheme based on the EKF has been verified experimentally by performing robot position control.
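The entry above applies an EKF to estimate six camera parameters recursively instead of using batch estimation. Since the paper's measurement model is not given in the abstract, the sketch below runs a generic EKF recursion on a toy six-parameter camera model with a numerical Jacobian; the measurement model, noise levels, and cue positions are illustrative assumptions.

```python
# Generic EKF recursion for a constant parameter vector observed through a
# nonlinear measurement function h(x, u). Everything model-specific is a toy
# stand-in for the paper's actual six-parameter camera model.
import numpy as np

def numerical_jacobian(hfun, x, eps=1e-6):
    z0 = hfun(x)
    J = np.zeros((len(z0), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (hfun(x + dx) - z0) / eps
    return J

def ekf_update(x, P, z, hfun, R):
    # Parameters are modelled as constant, so the prediction step leaves x and P
    # unchanged (a process-noise term Q could be added to keep the filter adaptive).
    H = numerical_jacobian(hfun, x)
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - hfun(x))                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P          # covariance correction
    return x, P

def h(x, u):
    # Toy camera model: image point as a function of six parameters and a known
    # 3-D cue position u (a stand-in for the paper's actual projection model).
    fx, fy, cx, cy, kx, ky = x
    X, Y, Z = u
    return np.array([fx * X / Z + kx * X * Y / Z + cx,
                     fy * Y / Z + ky * X * Y / Z + cy])

rng = np.random.default_rng(2)
true_x = np.array([520.0, 510.0, 320.0, 240.0, 0.3, -0.2])
x = true_x + rng.normal(scale=20.0, size=6)           # rough initial guess
P = np.eye(6) * 400.0
R = np.eye(2) * 1.0

for _ in range(300):
    u = rng.uniform(low=[-1.0, -1.0, 2.0], high=[1.0, 1.0, 4.0])
    z = h(true_x, u) + rng.normal(scale=1.0, size=2)  # noisy image measurement
    x, P = ekf_update(x, P, z, lambda xx: h(xx, u), R)

print("parameter error after filtering:", np.round(np.abs(x - true_x), 2))
```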

Development of a Vision-based Crack Detection Algorithm for Bridge Inspection (교량점검을 위한 비전 기반의 균열검출 알고리즘 개발)

  • Kim, Jin-Oh;Park, Dong-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.7
    • /
    • pp.642-646
    • /
    • 2008
  • We have developed a vision-based crack detection system and algorithm to inspect the undersides of bridges. After a human operator decides from the captured vision images whether lines on the underside are cracks or dirt, our algorithm automatically finds the length, width, and shape of the cracks. The system has been tested with a robot extender on a truck in a real environment and has proved very useful in reducing both inspection cost and data management effort.
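As a rough sketch of the measurement step described in the entry above (the operator confirms a crack, the algorithm measures it), the code below thresholds a grayscale patch, keeps the largest connected contour, and reports its length, mean width, and orientation. The thresholding choices and the mm-per-pixel scale are assumptions, not the authors' method.

```python
# Sketch: measure a confirmed crack by Otsu thresholding, taking the largest
# contour, approximating length by the longest side of its minimum-area
# rectangle and mean width by area / length. Scale and thresholds are assumed.
import cv2
import numpy as np

def measure_crack(gray, mm_per_px=0.2):
    # Cracks appear darker than concrete: invert-threshold, then remove speckle.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    crack = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(crack)
    length_px = max(w, h)
    width_px = cv2.contourArea(crack) / length_px if length_px > 0 else 0.0
    return {"length_mm": length_px * mm_per_px,
            "mean_width_mm": width_px * mm_per_px,
            "center_px": (round(cx), round(cy)),
            "orientation_deg": angle}

# Synthetic test image: a thin dark diagonal line on a bright background.
img = np.full((200, 200), 200, dtype=np.uint8)
cv2.line(img, (20, 30), (180, 160), color=40, thickness=3)
print(measure_crack(img))
```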

Target Detection of Mobile Robot by Vision (시각 정보에 의한 이동 로봇의 대상 인식)

  • 변정민;김종수;김성주;전홍태
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.29-32
    • /
    • 2002
  • This paper suggests a target detection algorithm for mobile robot control using color and shape recognition. In many cases, an ultrasonic sensor (USS) is used in a mobile robot system to measure the distance to obstacles, but with only the USS the system faces many restrictions. We therefore attached a CCD camera to the mobile robot to overcome these restrictions. If visual information is given to the robot system, it can accomplish more complex missions successfully. With the acquired vision data, the robot looks for the target by color and recognizes its shape.

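A minimal sketch of the color-then-shape detection described in the entry above is given below using OpenCV: pixels in an assumed HSV range are masked, and the largest blob is classified by the vertex count of its approximated contour. The color range and shape rules are illustrative assumptions, not the paper's parameters.

```python
# Sketch: mask pixels in an assumed HSV colour range, take the largest blob,
# and classify its shape from the approximated contour's vertex count.
import cv2
import numpy as np

def find_target(bgr, lower_hsv=(100, 120, 70), upper_hsv=(130, 255, 255)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(blob, 0.02 * cv2.arcLength(blob, True), True)
    shape = {3: "triangle", 4: "rectangle"}.get(len(approx), "circle-like")
    x, y, w, h = cv2.boundingRect(blob)
    return {"shape": shape,
            "center": (x + w // 2, y + h // 2),
            "area": cv2.contourArea(blob)}

# Synthetic frame: a blue rectangle standing in for the coloured target.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.rectangle(frame, (120, 80), (200, 160), color=(255, 0, 0), thickness=-1)  # blue in BGR
print(find_target(frame))
```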

A Clean Mobile Robot for 4th Generation LCD Cassette transfer (4세대 LCD Cassette 자동 반송 이동로봇)

  • 김진기;성학경;김성권
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.249-249
    • /
    • 2000
  • This paper introduces a clean mobile robot for 4th generation LCD cassettes, which is guided by optical sensors and position compensation using a vision module. The mobile robot for LCD cassette transfer can be controlled by an AGV controller with powerful algorithms, which provides optimum routes to the robot's destination by using a dynamic dispatch algorithm and MAP data. The clean mobile robot is equipped with a 4-axis fork-type manipulator providing a repeatability of ±0.05 mm.

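The entry above credits the AGV controller with computing optimum routes from MAP data via a dynamic dispatch algorithm, without giving details. A common building block for such routing is a shortest-path search over the map graph; the sketch below shows one, with invented station names and travel costs.

```python
# Sketch: Dijkstra shortest-path search over a dict-of-dicts map graph.
# Node names and travel costs are invented for illustration only.
import heapq

def shortest_route(graph, start, goal):
    """Return (total_cost, node list) for the cheapest route from start to goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical clean-room map: nodes are stations, weights are travel times [s].
MAP = {
    "charger":   {"aisle_1": 12},
    "aisle_1":   {"charger": 12, "stocker_A": 20, "aisle_2": 8},
    "aisle_2":   {"aisle_1": 8, "stocker_B": 15},
    "stocker_A": {"aisle_1": 20},
    "stocker_B": {"aisle_2": 15},
}

print(shortest_route(MAP, "charger", "stocker_B"))
```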