• Title/Summary/Keyword: Robot vision

Development of a Grinding Robot System for the Engine Cylinder Liner's Oil Groove (실린더 라이너 오일그루브 가공 로봇 시스템 개발)

  • Noh, Tae-Yang;Lee, Yun-Sik;Jung, Chang-Wook;Oh, Yong-Chul
    • Transactions of the Korean Society of Mechanical Engineers A / v.33 no.6 / pp.614-619 / 2009
  • An engine for marine propulsion and power generation consists of several cylinder liner-piston sets, and an oil groove is machined into the cylinder liner's inner wall to lubricate the contact between piston and cylinder. Because the groove shapes vary widely, this machining has so far been carried out manually. Recently, we developed an automatic grinding robot system for oil groove machining of engine cylinder liners. It covers various types of oil grooves and adjusts its position by itself. The system consists of a robot, a machining tool head, sensors, and a control system. After the cylinder liner is placed on its set-up equipment, the robot automatically recognizes the liner's inside configuration using a laser displacement sensor and a vision sensor.
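
The abstract does not give implementation details, but one common way to realize the self-positioning step it mentions is to fit a circle to laser displacement readings taken around the liner's inner wall and use the fitted center as the positional offset. The sketch below illustrates that idea; the function name, sampling scheme, and units are assumptions, not taken from the paper.

```python
# Hypothetical sketch: estimating the liner's center offset from laser
# displacement readings taken at known sweep angles, so the robot could
# re-centre itself before grinding. Not the authors' implementation.
import numpy as np

def fit_liner_center(angles_rad, distances_mm):
    """Least-squares circle fit to points sampled on the liner's inner wall.

    angles_rad   -- sensor sweep angles (robot frame)
    distances_mm -- laser displacement readings at those angles
    Returns (cx, cy, radius) of the fitted inner wall.
    """
    x = distances_mm * np.cos(angles_rad)
    y = distances_mm * np.sin(angles_rad)
    # Circle: x^2 + y^2 + D*x + E*y + F = 0  ->  linear in D, E, F
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Example: a 300 mm bore whose centre is offset roughly 2 mm along x
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
readings = 300.0 + 2.0 * np.cos(angles)
print(fit_liner_center(angles, readings))
```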

Implementation of Transformation Algorithm for a Leg-wheel Hexapod Robot Using Stereo Vision (스테레오 영상처리를 이용한 바퀴달린 6족 로봇의 형태변형 알고리즘 구현)

  • Lee, Sang-Hun;Kim, Jin-Geol
    • Proceedings of the KIEE Conference / 2006.10c / pp.202-204 / 2006
  • In this paper, a stereo-camera-based scheme for detecting spatial coordinates is proposed for the transformation algorithm of a leg-wheel hexapod robot. The robot is designed to combine the advantages of fast wheeled locomotion on flat terrain with legged walking on uneven terrain. In the proposed system, depth information is recovered from the disparity between the left and right images captured by the stereo camera and from the perspective transformation between the 3-D scene and the image plane. Using the interpreted environment data and the transformation algorithm, the robot decides between wheel driving and leg walking, and can measure the width of a passage and adjust its own width accordingly.
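
As a concrete illustration of the depth-from-disparity step described above, the following sketch uses OpenCV's standard block-matching stereo and the pinhole relation Z = f·B/d. It is a generic example, not the authors' pipeline; the focal length, baseline, and file names in the usage comment are placeholders.

```python
# Minimal depth-from-disparity sketch for a rectified stereo pair.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Return a dense depth map (metres) from a rectified grayscale pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    # Pinhole stereo geometry: Z = f * B / d
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Usage (file names and calibration values are placeholders):
# left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# depth = depth_from_stereo(left, right, focal_px=700.0, baseline_m=0.12)
```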

A study on localization and compensation of mobile robot using fusion of vision and ultrasound (영상 및 거리정보 융합을 이용한 이동로봇의 위치 인식 및 오차 보정에 관한 연구)

  • Jang, Cheol-Woong;Jung, Ki-Ho;Jung, Dae-Sub;Ryu, Je-Goon;Shim, Jae-Hong;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2006.10c / pp.554-556 / 2006
  • A key capability of an autonomous mobile robot is localizing itself. In this paper, we propose vision-based localization combined with ultrasound-based compensation of the robot's position. The mobile robot travels along a wall, detects features of the indoor environment, transforms them into the absolute coordinates of the real environment, and builds a map; travelling along the wall also provides information about the surroundings. For localization, ultrasound ranging produces candidate positions, and the final position is decided among the candidates by feature matching.
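
One plausible way to implement the final "decide among candidates by feature matching" step is sketched below: each ultrasound-derived candidate position has a stored reference image, and the candidate whose reference best matches the current camera view is selected. The ORB-based matching and all names here are illustrative assumptions, not the paper's method.

```python
# Illustrative candidate selection by feature matching (assumed pipeline).
import cv2

def select_candidate(current_img, candidate_imgs):
    """Return the index of the candidate reference image with the most
    ORB feature matches to the current camera image, or -1 if none match."""
    orb = cv2.ORB_create(nfeatures=500)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, cur_des = orb.detectAndCompute(current_img, None)
    best_idx, best_score = -1, -1
    for i, ref in enumerate(candidate_imgs):
        _, ref_des = orb.detectAndCompute(ref, None)
        if cur_des is None or ref_des is None:
            continue
        score = len(bf.match(cur_des, ref_des))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```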

Development of a grinding robot system for the oil groove of the engine cylinder liner (실린더 라이너 오일그루브 가공 로봇 시스템 개발)

  • Noh, Tae-Yang;Lee, Yun-Sik;Jung, Chang-Wook;Lee, Ji-Hyung;Oh, Yong-Chul
    • Proceedings of the KSME Conference / 2008.11a / pp.1075-1080 / 2008
  • An engine for marine propulsion and power generation consists of several cylinder liner-piston sets, and an oil groove is machined into the cylinder liner's inner wall to lubricate the contact between piston and cylinder. Because the groove shapes vary widely, this machining has so far been carried out manually. Recently, we developed an automatic grinding robot system for oil groove machining of engine cylinder liners. It covers various types of oil grooves and adjusts its position by itself. The system consists of a robot, a machining tool head, sensors, and a control system. After the cylinder liner is placed on its set-up equipment, the robot automatically recognizes the liner's inside configuration using a laser displacement sensor and a vision sensor.

Design of Navigation Algorithm for Mobile Robot using Sensor fusion (센서 합성을 이용한 자율이동로봇의 주행 알고리즘 설계)

  • Kim, Jung-Hoon;Kim, Young-Joong;Lim, Myo-Teag
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.10 / pp.703-713 / 2004
  • This paper presents a new obstacle avoidance method that combines vision and sonar sensors, together with a navigation algorithm. Sonar sensors alone provide poor directional information because their angular resolution is coarse, so they are not suitable for detecting the relative direction of obstacles; vision sensors, in turn, struggle with obstacle detection under image disturbance. We therefore propose an obstacle direction measurement method that combines sonar sensors for accurate distance information with vision sensors for rich scene information. A modified splitting/merging algorithm is proposed that is more robust to image disturbance than edge detection and is efficient for grouping obstacle points; we verify this by comparing it with an edge detection algorithm in experiments. The obstacle direction and relative distance serve as inputs to a fuzzy controller, and angular velocity controllers are designed for obstacle avoidance and for centering the robot in a corridor, respectively. To verify the stability and effectiveness of the proposed method, it is applied to a vision- and sonar-based mobile robot navigation system.
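
For reference, the classic (unmodified) split step of the split-and-merge idea that the paper builds on can be sketched as follows: an ordered chain of range points is recursively split into straight segments wherever a point lies too far from the chord joining the segment's endpoints. The threshold value and function names are illustrative, not the paper's modified algorithm.

```python
# Generic split step of split-and-merge segmentation on ordered range points.
import numpy as np

def split_points(points, threshold=0.05):
    """points: (N, 2) array ordered along the scan. Returns a list of
    (start_index, end_index) pairs, one per straight segment."""
    points = np.asarray(points, dtype=float)
    if len(points) < 2:
        return []

    def dist_to_chord(pts, a, b):
        # Perpendicular distance of each point from the line through a and b
        dx, dy = b[0] - a[0], b[1] - a[1]
        norm = np.hypot(dx, dy) + 1e-9
        return np.abs(dx * (pts[:, 1] - a[1]) - dy * (pts[:, 0] - a[0])) / norm

    def recurse(lo, hi, out):
        d = dist_to_chord(points[lo:hi + 1], points[lo], points[hi])
        k = int(np.argmax(d))
        if d[k] > threshold and 0 < k < hi - lo:
            recurse(lo, lo + k, out)   # split at the farthest point
            recurse(lo + k, hi, out)
        else:
            out.append((lo, hi))       # segment is straight enough

    segments = []
    recurse(0, len(points) - 1, segments)
    return segments
```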

A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong;Jang, Kyoungjae;Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.2 / pp.223-229 / 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention because robotic assembly requires flexibility. Stereo camera systems are widely used for bin-picking, but they have two limitations: the computational burden of solving the correspondence problem between stereo images increases calculation time, and errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot's kinematic parameters directly affect gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system consisting of two stereo cameras and one hand-eye camera. First, the two wide-angle stereo cameras measure the object's pose roughly; then the third, hand-eye camera approaches the object and corrects the stereo system's measurement. Experimental results show the usefulness of the proposed method.
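
A hedged sketch of the second, corrective stage: if the rough pose from the stereo pair is available as an initial guess, the hand-eye camera's close-up observation can refine it with an iterative PnP solve. This uses OpenCV's generic solvePnP, not the authors' algorithm; the model points, intrinsics, distortion coefficients, and detected image points are assumed inputs.

```python
# Refining a rough object pose with the hand-eye camera via iterative PnP.
import cv2
import numpy as np

def refine_pose(model_pts_3d, image_pts_2d, K, dist, rvec_rough, tvec_rough):
    """model_pts_3d: (N,3) object points; image_pts_2d: (N,2) detected pixels;
    rvec_rough/tvec_rough: 3x1 float64 arrays from the stereo measurement."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_pts_3d, dtype=np.float64),
        np.asarray(image_pts_2d, dtype=np.float64),
        K, dist,
        rvec=rvec_rough.copy(), tvec=tvec_rough.copy(),
        useExtrinsicGuess=True,        # start from the rough stereo estimate
        flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else (rvec_rough, tvec_rough)
```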

An implementation of the automatic labeling rolling-coil using robot vision system (로봇 시각 장치를 이용한 압연코일의 라벨링 자동화 구현)

  • Lee, Yong-Joong;Lee, Yang-Bum
    • Journal of Institute of Control, Robotics and Systems / v.3 no.5 / pp.497-502 / 1997
  • In this study, an automatic rolling-coil labeling system using a robot vision system and peripheral mechanisms is proposed and implemented, replacing the manual labor of attaching labels to rolling coils in a steel mill. Image processing starts with binary thresholding, and the contour line is obtained from the binary gradient, which detects discontinuous variations in the brightness of the rolling coils. Hu's moment invariant algorithm is used so that a coil can still be recognized even when its center position differs from the trained data. A position error compensation algorithm for a six-degrees-of-freedom industrial robot manipulator is also developed, and the coil center position obtained by a floor-mounted camera is transferred to the robot by asynchronous communication. Thus, even if the center position changes, the robot moves to it and performs the labeling successfully, improving both safety and efficiency.
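
Hu's moment invariants, which the abstract cites for position-independent recognition, are available directly in OpenCV. The sketch below shows the general recipe (threshold, image moments, seven invariants, log scaling); the tolerance-based matching test is an illustrative assumption rather than the 1997 implementation.

```python
# Hu moment signature of a thresholded image, for shift/rotation-tolerant matching.
import cv2
import numpy as np

def hu_signature(gray_img, thresh=128):
    """Binary-threshold the image and return its 7 log-scaled Hu moments."""
    _, binary = cv2.threshold(gray_img, thresh, 255, cv2.THRESH_BINARY)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log scale keeps the widely varying invariants numerically comparable
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def matches_trained(sig, trained_sig, tol=0.5):
    """Simple tolerance test against a trained reference signature."""
    return bool(np.max(np.abs(sig - trained_sig)) < tol)
```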

Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors (속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종)

  • Cho, Namsub;Kwon, Ji-Wook;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.6 / pp.852-862 / 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints using a backstepping-like feedback linearization. For target tracking, a vision system mounted on the mobile robot provides the relative posture between the robot and the target. The robot does not use velocity sensors, so the velocities of both the mobile robot and the target are treated as unknown; only their maximum velocities are assumed to be known. First, pseudo commands for the forward linear velocity and the heading direction angle are designed from the kinematics using the obtained image information. Then, the actual control inputs are designed so that the actual forward linear velocity and heading direction angle follow these pseudo commands. Simulations and experiments with the mobile robot confirm that the proposed method can track the target even though no velocity sensors are used.
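
The following is only a kinematic-level illustration of the "pseudo command" idea, not the paper's backstepping-like design: the vision-derived range and bearing to the target are mapped to saturated forward-velocity and turn-rate commands, with the saturation limits standing in for the known maximum velocities. All gains and names are assumptions.

```python
# Illustrative pseudo commands for a unicycle-type robot tracking a target.
import math

def pseudo_commands(range_m, bearing_rad, v_max=0.5, w_max=1.5,
                    k_v=0.8, k_w=2.0, standoff_m=0.3):
    """Return (v, w): forward speed and turn rate that close the range to a
    standoff distance while turning the heading toward the target."""
    v = k_v * (range_m - standoff_m)
    w = k_w * bearing_rad
    v = max(-v_max, min(v_max, v))   # saturate with the known maximum speed
    w = max(-w_max, min(w_max, w))   # saturate the turn rate likewise
    return v, w

# Example: target 2 m ahead, 20 degrees to the left
print(pseudo_commands(2.0, math.radians(20)))
```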

A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.8 no.6 / pp.26-34 / 1999
  • This study presents an alternative, computer-vision-based method for determining an object's position. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated from the relationship between camera-space locations and the real coordinates of known positions. With the parameters estimated independently for each camera, the position of an unknown object is obtained through a sequential estimation scheme that uses the unknown point's data in each camera's 2-D image plane. This vision control method is robust and reliable, and it avoids the difficulties of conventional approaches, such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and accurate knowledge of the relative position and orientation of the robot and CCD camera. Finally, the method is tested experimentally by determining object positions in space with the computer vision system; the results show that it is precise and practical.
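
The paper's bilinear six-view-parameter model is not reproduced here; as a simplified stand-in for the same idea, the sketch below fits a linear 3-D to 2-D map per camera from known calibration points and then recovers an unknown point by stacking the maps of several cameras in a least-squares solve. The affine model and all names are assumptions for illustration only.

```python
# Simplified per-camera model fit plus multi-camera point recovery.
import numpy as np

def fit_camera_map(world_pts, image_pts):
    """world_pts: (N,3), image_pts: (N,2). Returns a (2,4) affine map M with
    [u, v]^T ~= M @ [X, Y, Z, 1]^T, estimated by least squares."""
    A = np.hstack([world_pts, np.ones((len(world_pts), 1))])   # (N,4)
    M, *_ = np.linalg.lstsq(A, image_pts, rcond=None)          # (4,2)
    return M.T                                                  # (2,4)

def locate_point(maps, observations):
    """maps: list of (2,4) camera maps; observations: list of (u,v) pixels of
    the same unknown point. Solves the stacked linear system for (X,Y,Z)."""
    A_rows, b_rows = [], []
    for M, (u, v) in zip(maps, observations):
        A_rows.append(M[:, :3])
        b_rows.append(np.array([u, v]) - M[:, 3])
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    xyz, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xyz
```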

Development and Performance Evaluation of Hull Blasting Robot for Surface Pre-Preparation for Painting Process (도장전처리 작업을 위한 블라스팅 로봇 시스템 개발 및 성능평가)

  • Lee, JunHo;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.5 / pp.383-389 / 2016
  • In this paper, we present a hull blasting machine with a vision-based weld bead recognition device for cleaning a ship's exterior hull. The purpose of this study is to introduce the mechanism design of a high-efficiency hull blasting machine that uses the vision system to recognize weld beads. We developed the robot mechanism and drive controller system of the hull blasting robot, and its characteristics, such as the climbing mechanism, vision system, remote controller, and CAN communication, are discussed and compared with experimental data. Because the hull blasting robot can remove rust or paint while the ship is at anchor, re-docking is unnecessary; this saves the time and cost of the re-docking process and frees dock capacity to build more vessels. The robot uses sensors to navigate safely around the hull and has a filter system to collect the removed fouling. We completed a pilot test of the robot and demonstrated its drive control and CAN communication performance.
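
As a hedged illustration of the vision step, a weld bead that appears as a dominant straight feature in the camera image could be picked out with Canny edges and a Hough transform, as sketched below. The thresholds and the approach itself are assumptions, not the authors' published recognition algorithm.

```python
# Illustrative weld bead line detection via Canny edges and a Hough transform.
import cv2
import numpy as np

def detect_weld_bead(gray_img):
    """Return (rho, theta) of the strongest line candidate, or None."""
    edges = cv2.Canny(gray_img, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
    if lines is None:
        return None
    rho, theta = lines[0][0]          # strongest accumulator peak
    return float(rho), float(theta)
```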