• Title/Summary/Keyword: robot vision system

Vision Based Mobile Robot Control (이동 로봇의 비젼 기반 제어)

  • Kim, Jin-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.60 no.2
    • /
    • pp.63-67
    • /
    • 2011
  • This paper presents mobile robot control based on a vision system. The proposed vision-based controller consists of a camera tracking controller and a formation controller. The camera controller has an adaptive gain based on image-based visual servoing (IBVS). The formation controller, designed in the sense of Lyapunov stability, follows the leader. Simulation results show that the proposed vision-based mobile robot control is valid for indoor mobile robot applications.
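
The abstract names an adaptive-gain IBVS camera controller but gives no equations. As a generic illustration only, the sketch below shows a classic (Chaumette-style) IBVS step for point features with a gain scaled by the error norm; the feature coordinates, depths, and gain schedule are placeholder assumptions, not the paper's design.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard IBVS interaction (image Jacobian) matrix for one image point
    (x, y) in normalized coordinates at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_adaptive(features, targets, depths, lam_min=0.1, lam_max=1.0):
    """One IBVS step: stack per-point Jacobians, scale the gain with the
    error norm (larger error -> larger gain), and return a 6-DOF camera
    velocity command (vx, vy, vz, wx, wy, wz)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    lam = lam_min + (lam_max - lam_min) * np.tanh(np.linalg.norm(e))
    return -lam * np.linalg.pinv(L) @ e

# Example: two tracked points, their desired positions, and rough depths.
v = ibvs_adaptive(features=[(0.10, 0.05), (-0.08, 0.02)],
                  targets=[(0.0, 0.0), (-0.1, 0.0)],
                  depths=[1.2, 1.2])
print(v)
```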

The Stereoscopic Vision Robot System Design with DSP Processor (DSP를 이용한 스테레오 비젼 로봇의 설계에 관한 연구)

  • 노석환;강희조;류광렬
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.264-267
    • /
    • 2003
  • The design of a stereoscopic vision robot system with a DSP processor is presented. The system consists of a control system, a vision system, and a host computer, with the vision system based on a 32-bit DSP processor. The stereoscopic image-processing software applies the correlation coefficient method. Experimental results show an image recognition rate of 95% for the stereoscopic vision robot system.
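
The paper applies the correlation coefficient method on a 32-bit DSP; the sketch below is only a plain NumPy illustration of correlation-based block matching for one pixel's disparity, with the window size and search range chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def disparity_at(left, right, row, col, win=5, max_disp=32):
    """Disparity of one left-image pixel, found by maximizing the correlation
    coefficient over a horizontal search range in the right image."""
    h = win // 2
    patch = left[row - h:row + h + 1, col - h:col + h + 1]
    best_d, best_score = 0, -1.0
    for d in range(0, min(max_disp, col - h) + 1):
        cand = right[row - h:row + h + 1, col - d - h:col - d + h + 1]
        score = ncc(patch, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Toy usage with a synthetic pair (replace with a rectified stereo pair).
rng = np.random.default_rng(0)
L = rng.random((120, 160))
R = np.roll(L, -4, axis=1)          # right image shifted by ~4 pixels
print(disparity_at(L, R, row=60, col=80))
```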

Target Detection of Mobile Robot by Vision (시각 정보에 의한 이동 로봇의 대상 인식)

  • 변정민;김종수;김성주;전홍태
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.29-32
    • /
    • 2002
  • This paper suggests a target detection algorithm for mobile robot control using color and shape recognition. In many cases, an ultrasonic sensor (USS) is used in a mobile robot system to measure the distance to obstacles, but a USS alone is subject to many restrictions, so a CCD camera is attached to the mobile robot to overcome them. Given visual information, the robot system can accomplish more complex missions successfully. With the acquired vision data, the robot looks for the target by color and recognizes its shape.
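
The abstract names color and shape recognition but not the thresholds or classifier used. The following OpenCV sketch shows the general idea only: HSV color segmentation followed by a polygon-approximation shape check, with the color range, blob-size cutoff, and "round enough" rule all placeholder assumptions (OpenCV 4.x contour API assumed).

```python
import cv2
import numpy as np

def find_red_circular_target(frame_bgr):
    """Segment roughly red pixels in HSV, then keep the largest blob whose
    outline is close to circular (many approximation vertices, high fill)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))   # placeholder red range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        if area < 100:                                       # ignore small noise blobs
            continue
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        circularity = 4.0 * np.pi * area / (peri * peri + 1e-9)
        if len(approx) > 6 and circularity > 0.7:            # crude circular-shape test
            if best is None or area > cv2.contourArea(best):
                best = c
    if best is None:
        return None
    (x, y), r = cv2.minEnclosingCircle(best)
    return int(x), int(y), int(r)                            # target center and radius
```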

Tracking Control of a Moving Target Using a Robot Vision System

  • Kim, Dong-Hwan;Cheon, Gyung-Il
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.77.5-77
    • /
    • 2001
  • A robot vision system with the visual skill to acquire information about an arbitrary target or object has been applied to automatic inspection and assembly systems. The manipulator catches the moving target using information from the vision system, so the robot needs to know where the moving object will be after a certain time. The camera is mounted on the robot manipulator rather than on a fixed support outside the robot; this secures a wider working area than a fixed camera and supports automatic scanning of the object. The system computes the object's center, angle, and speed from the vision data and can estimate the grasping spot from the arrival time. When the location ...
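
The abstract mentions estimating the object's center, angle, and speed and predicting the grasping spot, without detailing the estimator. As a loose illustration (not the authors' method), the sketch below fits a constant-velocity model to a short history of centroid measurements and extrapolates the position at an assumed grasp time.

```python
import numpy as np

def predict_grasp_point(times, centroids, t_grasp):
    """Least-squares fit of a constant-velocity model x(t) = x0 + v*t to
    centroid observations, then extrapolate to the grasp time."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(centroids, dtype=float)         # shape (N, 2): pixel or world xy
    A = np.column_stack([np.ones_like(t), t])      # [1, t] design matrix
    coeff, *_ = np.linalg.lstsq(A, p, rcond=None)  # rows: [x0, y0], [vx, vy]
    x0, v = coeff[0], coeff[1]
    speed = float(np.linalg.norm(v))
    heading = float(np.arctan2(v[1], v[0]))        # direction of motion (rad)
    return x0 + v * t_grasp, speed, heading

# Example: four frames of a target drifting right and slightly up.
pos, speed, heading = predict_grasp_point(
    times=[0.0, 0.1, 0.2, 0.3],
    centroids=[(100, 200), (104, 199), (108, 198), (112, 197)],
    t_grasp=1.0)
print(pos, speed, heading)
```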

The Study of Mobile Robot Self-displacement Recognition Using Stereo Vision (스테레오 비젼을 이용한 이동로봇의 자기-이동변위인식 시스템에 관한 연구)

  • 심성준;고덕현;김규로;이순걸
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.934-937
    • /
    • 2003
  • In this paper, the authors use a stereo vision system based on the human visual model and establish an inexpensive method that recognizes moving distance using characteristic points around the robot. With the stereo vision, the changes in the coordinate values of characteristic points fixed around the robot are measured, and a self-displacement and self-localization recognition system is proposed from coordinate reconstruction using those changes. To evaluate the proposed system, several characteristic points made with LEDs placed around the robot and two inexpensive USB PC cameras are used. The mobile robot measures the coordinate value of each characteristic point at its initial position; after moving, it measures the coordinate values of the same characteristic points again. The robot compares the changes in these coordinate values and computes a transformation matrix from them. As a matrix of the amount and direction of the robot's motion, the obtained transformation matrix represents the self-displacement and self-localization of the robot with respect to the environment.
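
The abstract states that a transformation matrix is computed from the before/after coordinates of the characteristic points, but not how. One standard way to recover a rigid transform from two sets of 3-D points is the SVD-based (Kabsch/Procrustes) fit sketched below; this is a generic reconstruction of the idea, not the authors' exact procedure.

```python
import numpy as np

def rigid_transform(points_before, points_after):
    """Least-squares rotation R and translation t such that
    points_after ~= R @ points_before + t (Kabsch/Procrustes fit).
    Each argument is an (N, 3) array of the same landmarks."""
    P = np.asarray(points_before, dtype=float)
    Q = np.asarray(points_after, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# If fixed landmarks appear to move by (R, t) in the robot frame, the robot
# itself moved by the inverse transform (R.T, -R.T @ t).
before = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.5],
                   [-1.0, 0.5, 3.0], [0.5, -0.5, 1.0]])
Rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
after = before @ Rz90.T + np.array([0.2, 0.0, 0.0])
R, t = rigid_transform(before, after)
print(np.round(R, 3), np.round(t, 3))
```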

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.07d
    • /
    • pp.1977-1978
    • /
    • 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but to be used in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot as well as for human-robot interaction (HRI). To carry out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented and fused with the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
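
The abstract names enhanced template matching plus a hierarchical SVM for obstacle recognition without giving details. The snippet below shows only the plain OpenCV template-matching stage (normalized cross-correlation and a confidence threshold), i.e., the generic building block rather than the authors' enhanced version; file names and the threshold are placeholders.

```python
import cv2

def match_obstacle(frame_gray, template_gray, threshold=0.8):
    """Slide the template over the frame with normalized cross-correlation
    and report the best match if it clears the confidence threshold."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                      # no confident obstacle match
    h, w = template_gray.shape[:2]
    x, y = max_loc
    return (x, y, w, h), max_val         # bounding box and score

# Typical usage (paths are placeholders):
# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# templ = cv2.imread("obstacle_template.png", cv2.IMREAD_GRAYSCALE)
# print(match_obstacle(frame, templ))
```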

A Study on the Effect of Weighting Matrix of Robot Vision Control Algorithm in Robot Point Placement Task (점 배치 작업 시 제시된 로봇 비젼 제어알고리즘의 가중행렬의 영향에 관한 연구)

  • Son, Jae-Kyung;Jang, Wan-Shik;Sung, Yoon-Gyung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.29 no.9
    • /
    • pp.986-994
    • /
    • 2012
  • This paper concerns the application of a vision control algorithm with a weighting matrix to a robot point placement task. The proposed vision control algorithm involves four models: the robot kinematic model, the vision system model, the parameter estimation scheme, and the robot joint angle estimation scheme. The algorithm allows the robot to move actively even if the relative position between camera and robot and the camera's focal length are unknown. The parameter estimation scheme and the joint angle estimation scheme take the form of nonlinear equations; in particular, the joint angle estimation model includes several restrictive conditions. For this study, a weighting matrix that assigns varying weights near the target was applied to the parameter estimation scheme, and the study investigates how changing the weighting matrix affects the presented vision control algorithm. Finally, the effect of the weighting matrix on the robot vision control algorithm is demonstrated experimentally by performing robot point placement.
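
The paper's estimation schemes are nonlinear and the weighting matrix emphasizes measurements near the target; the exact models are not reproduced in the abstract. The sketch below only shows the general shape of a weighted Gauss-Newton update, one standard way to solve such a weighted nonlinear least-squares problem, applied to a hypothetical residual function unrelated to the paper's models.

```python
import numpy as np

def weighted_gauss_newton(residual, jacobian, x0, W, iters=20, tol=1e-8):
    """Minimize r(x)^T W r(x) with Gauss-Newton steps.
    residual(x) -> (m,) residual vector, jacobian(x) -> (m, n) Jacobian,
    W -> (m, m) weighting matrix (e.g. larger weights near the target)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Normal equations of the weighted problem: (J^T W J) dx = -J^T W r
        dx = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical example: fit y = a * exp(b * t) to weighted samples.
t = np.linspace(0.0, 1.0, 6)
y = 2.0 * np.exp(1.5 * t)
W = np.diag(np.linspace(1.0, 5.0, 6))          # weight later samples more heavily
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(weighted_gauss_newton(res, jac, x0=[1.0, 1.0], W=W))   # -> approx [2.0, 1.5]
```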

A study on map generation of autonomous Mobile Robot using stereo vision system (스테레오 비젼 시스템을 이용한 자율 이동 로봇의 지도 작성에 관한 연구)

  • Son, Young-Seop;Lee, Kwae-Hi
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2200-2202
    • /
    • 1998
  • An autonomous mobile robot provides many functions such as sensing, processing, and driving. For more intelligent jobs, more intelligent functions are added and the existing functions may be updated. To execute a job, an autonomous mobile robot needs information about its surrounding environment, so the robot uses sonar sensors, vision sensors, and so on. The obtained sensor information is used for map generation. This paper focuses on map generation using a stereo vision system.
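
The abstract says only that stereo vision is used for map generation. As a rough sketch of one common pipeline (not necessarily the one in the paper), the code below converts a disparity map into depths and accumulates the resulting points into a top-down occupancy grid; the camera parameters and grid resolution are assumed values.

```python
import numpy as np

def disparity_to_grid(disparity, fx, baseline, cx,
                      cell=0.1, grid_size=(100, 100)):
    """Turn a disparity map (pixels) into a top-down occupancy grid.
    Depth Z = fx * baseline / d; lateral X = (u - cx) * Z / fx.
    Cells hit by at least one 3-D point are marked occupied."""
    grid = np.zeros(grid_size, dtype=np.uint8)
    rows, cols = disparity.shape
    for v in range(rows):
        for u in range(cols):
            d = disparity[v, u]
            if d <= 0:
                continue                      # invalid / unmatched pixel
            Z = fx * baseline / d             # forward distance (m)
            X = (u - cx) * Z / fx             # lateral offset (m)
            gz = int(Z / cell)
            gx = int(X / cell) + grid_size[1] // 2
            if 0 <= gz < grid_size[0] and 0 <= gx < grid_size[1]:
                grid[gz, gx] = 1
    return grid

# Toy usage: a flat 20-pixel disparity map, fx = 500 px, 10 cm baseline.
disp = np.full((48, 64), 20.0)
print(disparity_to_grid(disp, fx=500.0, baseline=0.1, cx=32.0).sum())
```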

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.314-319
    • /
    • 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques were applied in order to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm that actuates the robotic manipulator and cancels the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line whose images are captured by the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing by specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces; the second concerns joining a cylindrical piece to a flat surface. An implementation of this computer vision system using parallel processing is being studied.
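
The abstract describes extracting the luminous laser reference line from the camera images and computing trajectory deviations, but not the specific image-processing algorithms. The sketch below shows the common per-column peak-intensity approach to locating a laser stripe and a simple deviation measurement against a desired image row, purely as an illustration under those assumptions.

```python
import numpy as np

def laser_stripe_centerline(image_gray, min_intensity=100):
    """For each image column, take the row of the brightest pixel as the
    laser stripe position; columns below the intensity floor are ignored."""
    rows = np.argmax(image_gray, axis=0)                 # brightest row per column
    valid = image_gray.max(axis=0) >= min_intensity
    return rows.astype(float), valid

def seam_deviation(image_gray, desired_row, min_intensity=100):
    """Mean vertical offset (pixels) of the detected stripe from the desired
    row; this is the raw error a torch-correction controller would consume."""
    rows, valid = laser_stripe_centerline(image_gray, min_intensity)
    if not valid.any():
        return None
    return float(np.mean(rows[valid] - desired_row))

# Toy usage: a dark image with a bright stripe 3 rows below the desired line.
img = np.zeros((120, 160), dtype=np.uint8)
img[63, :] = 255
print(seam_deviation(img, desired_row=60))               # -> 3.0
```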

Path finding via VRML and VISION overlay for Autonomous Robotic (로봇의 위치보정을 통한 경로계획)

  • Sohn, Eun-Ho;Park, Jong-Ho;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.527-529
    • /
    • 2006
  • In this paper, we find a robot's path using the Virtual Reality Modeling Language (VRML) and an overlaid vision system. To obtain a correct robot path, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlap between the 2-D and VRML scenes. The method successfully defines the robot's path.
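
The abstract refers to a "well-known localization algorithm" based on identified landmarks without naming it. One textbook option is a least-squares fix from range measurements to landmarks at known map (here, VRML) positions; the sketch below illustrates that idea only and makes no claim about the algorithm actually used in the paper.

```python
import numpy as np

def localize_from_ranges(landmarks, ranges, guess=(0.0, 0.0), iters=15):
    """Estimate the robot's 2-D position from ranges to known landmarks by
    Gauss-Newton on r_i(p) = ||p - L_i|| - d_i (trilateration)."""
    L = np.asarray(landmarks, dtype=float)      # (N, 2) known map positions
    d = np.asarray(ranges, dtype=float)         # (N,) measured distances
    p = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diff = p - L                            # (N, 2)
        dist = np.linalg.norm(diff, axis=1)
        r = dist - d                            # range residuals
        J = diff / dist[:, None]                # Jacobian of ||p - L_i|| w.r.t. p
        dp = np.linalg.solve(J.T @ J, -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-9:
            break
    return p

# Toy usage: three landmarks, ranges generated from the true pose (2, 1).
lm = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0)]
true = np.array([2.0, 1.0])
rng = [float(np.linalg.norm(true - np.array(l))) for l in lm]
print(np.round(localize_from_ranges(lm, rng, guess=(1.0, 1.0)), 3))   # -> [2. 1.]
```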
