• Title/Summary/Keyword: 3D Robot Vision


3-D Positioning Using Stereo Vision and Guide-Mark Pattern For A Quadruped Walking Robot (스테레오 시각 정보를 이용한 4각보행 로보트의 3차원 위치 및 자세 검출)

  • Zeungnam Bien
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.8 / pp.1188-1200 / 1990
  • In this paper, the 3-D positioning problem for a quadruped walking robot is investigated. In order to determine the robot's exterior position and orientation in a world coordinate system, a stereo 3-D positioning algorithm is proposed. The proposed algorithm uses a Guide-Mark Pattern (GMP) specially designed for fast and reliable extraction of 3-D robot position information from the uncontrolled working environment. Some experimental results, along with error analysis and several means of reducing the effects of vision processing error in the proposed algorithm, are discussed.
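
The GMP itself is not reproduced here, but the underlying stereo recovery step can be illustrated. Below is a minimal Python sketch of triangulating one mark point from a rectified stereo pair; the focal length, baseline, and pixel values are illustrative placeholders, not values from the paper.

```python
import numpy as np

def triangulate(xl, xr, y, f, baseline):
    """Recover a 3-D point from a rectified stereo pair.

    xl, xr   : horizontal image coordinates of the same mark point in the
               left/right image, measured from the principal point (pixels)
    y        : common vertical image coordinate (pixels)
    f        : focal length (pixels)
    baseline : distance between the two camera centres (metres)
    """
    disparity = xl - xr
    Z = f * baseline / disparity   # depth from disparity
    X = xl * Z / f                 # back-project into the left camera frame
    Y = y * Z / f
    return np.array([X, Y, Z])

# Illustrative mark point seen at 320 px (left) and 300 px (right)
print(triangulate(320.0, 300.0, 24.0, f=800.0, baseline=0.12))
```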


Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKF Methods for Slender Bar Placement (얇은막대 배치작업에 대한 N-R 과 EKF 방법을 이용하여 개발한 로봇 비젼 제어알고리즘의 평가)

  • Son, Jae Kyung;Jang, Wan Shik;Hong, Sung Mun
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.4 / pp.447-459 / 2013
  • Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model in robot control algorithms based on visual information, active compensation of the camera's focal length and orientation during robot movement, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKF methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms, based on the EKF and N-R methods, are compared experimentally by making the robot perform a slender-bar placement task.
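
As a rough illustration of the N-R side, here is a generic Newton-Raphson (Gauss-Newton) fit of a six-parameter vision model in Python. The stand-in model g() and the synthetic data are assumptions, not the authors' formulation; the EKF variant would instead fold each observation into a recursive update.

```python
import numpy as np

def g(c, p):
    """Stand-in 6-parameter map from a 3-D point p to 2-D camera coordinates."""
    return np.array([c[0] * p[0] + c[1] * p[1] + c[2],
                     c[3] * p[0] + c[4] * p[2] + c[5]])

def newton_raphson_fit(points3d, obs2d, c0, iters=10, eps=1e-6):
    """Iteratively refine the parameter vector c by Gauss-Newton steps."""
    c = c0.astype(float).copy()
    for _ in range(iters):
        # Stack the reprojection residuals over all observations
        r = np.concatenate([g(c, p) - z for p, z in zip(points3d, obs2d)])
        # Numerically differentiated Jacobian, one column per parameter
        cols = []
        for j in range(len(c)):
            dc = np.zeros_like(c)
            dc[j] = eps
            rj = np.concatenate([g(c + dc, p) - z
                                 for p, z in zip(points3d, obs2d)])
            cols.append((rj - r) / eps)
        J = np.array(cols).T
        c -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton update
    return c

# Synthetic check: recover known parameters from noiseless observations
rng = np.random.default_rng(1)
true_c = np.array([0.9, 0.1, 5.0, -0.2, 1.1, 3.0])
pts = [rng.random(3) for _ in range(10)]
obs = [g(true_c, p) for p in pts]
print(newton_raphson_fit(pts, obs, c0=np.zeros(6)))  # ~ true_c
```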

Hand/Eye calibration of Robot arms with a 3D visual sensing system (3차원 시각 센서를 탑재한로봇의 Hand/Eye 캘리브레이션)

  • 김민영;노영준;조형석;김재훈
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.76-76 / 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end-effector of a robot manipulator in an eye-on-hand configuration. When the 3D coordinates of the feature points at each robot movement and the relative robot motion between two movements are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, it is necessary to make two robot arm movements and form a system of two equations: A₁X = XB₁ and A₂X = XB₂. A closed-form solution to this system of equations is developed, and the constraints for the existence of a solution are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.
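
A minimal Python sketch of the two-motion AX = XB solve follows, using the standard axis-alignment construction (in the spirit of Tsai and Lenz) rather than the paper's own derivation; with noisy data the recovered rotation would additionally need re-orthonormalization.

```python
import numpy as np

def rot_axis(R):
    """Unit rotation axis of a rotation matrix (valid away from 0 and 180 deg)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def solve_ax_xb(A1, B1, A2, B2):
    """Closed-form X from two motion pairs satisfying A_i X = X B_i (4x4)."""
    a1, a2 = rot_axis(A1[:3, :3]), rot_axis(A2[:3, :3])
    b1, b2 = rot_axis(B1[:3, :3]), rot_axis(B2[:3, :3])
    # Rx maps each B rotation axis onto the matching A axis; two independent
    # axes plus their cross product pin the rotation down uniquely.
    Rx = np.column_stack([a1, a2, np.cross(a1, a2)]) @ \
         np.linalg.inv(np.column_stack([b1, b2, np.cross(b1, b2)]))
    # Translation: stack (R_Ai - I) tx = Rx tBi - tAi and solve in least squares
    C = np.vstack([A1[:3, :3] - np.eye(3), A2[:3, :3] - np.eye(3)])
    d = np.concatenate([Rx @ B1[:3, 3] - A1[:3, 3],
                        Rx @ B2[:3, 3] - A2[:3, 3]])
    X = np.eye(4)
    X[:3, :3] = Rx
    X[:3, 3] = np.linalg.lstsq(C, d, rcond=None)[0]
    return X

def rand_T(rng):
    """Random rigid transform for the synthetic check below."""
    ax = rng.normal(size=3)
    ax /= np.linalg.norm(ax)
    ang = rng.uniform(0.3, 1.2)
    K = np.array([[0, -ax[2], ax[1]], [ax[2], 0, -ax[0]], [-ax[1], ax[0], 0]])
    T = np.eye(4)
    T[:3, :3] = np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * K @ K
    T[:3, 3] = rng.normal(size=3)
    return T

# Synthetic check: build A_i = X B_i X^-1 from a known X and recover it
rng = np.random.default_rng(0)
X_true, B1, B2 = rand_T(rng), rand_T(rng), rand_T(rng)
A1 = X_true @ B1 @ np.linalg.inv(X_true)
A2 = X_true @ B2 @ np.linalg.inv(X_true)
print(np.allclose(solve_ax_xb(A1, B1, A2, B2), X_true))  # True
```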


3-D position estimation for eye-in-hand robot vision

  • Jang, Won;Kim, Kyung-Jin;Chung, Myung-Jin;Bien, Zeungnam
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1988.10b / pp.832-836 / 1988
  • "Motion Stereo" is quite useful for visual guidance of the robot, but most range finding algorithms of motion stereo have suffered from poor accuracy due to the quantization noise and measurement error. In this paper, 3-D position estimation and refinement scheme is proposed, and its performance is discussed. The main concept of the approach is to consider the entire frame sequence at the same time rather than to consider the sequence as a pair of images. The experiments using real images have been performed under following conditions : hand-held camera, static object. The result demonstrate that the proposed nonlinear least-square estimation scheme provides reliable and fairly accurate 3-D position information for vision-based position control of robot. of robot.


A Development of The Remote Robot Control System with Virtual Reality Interface System (가상현실과 결합된 로봇제어 시스템의 구현방법)

  • 김우경;김훈표;현웅근
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.320-324 / 2003
  • Recently, virtual reality has been applied in various fields of industry. In this paper, the motion of a real robot is controlled through interface manipulation in a virtual world. A virtual robot was created with a 3-D graphics tool, and an image closely resembling the real robot was reproduced by texture mapping using Direct3D components. Both the real robot and the virtual robot are controlled by a joystick. The developed system consists of a robot controller with a vision system and a host PC program. The robot and its camera can each move with two degrees of freedom under independent remote control from a user-friendly joystick. The environment is recognized by the vision system and ultrasonic sensors. The visual image and the command data are transmitted over 900 MHz and 447 MHz RF links, respectively. When the user sends robot control commands from the simulator, the transmitter/receiver pair operates at up to 500 m outdoors, at 4800 bps in half-duplex mode, over the 447 MHz RF module.
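
For flavour, a sketch of pushing one joystick command over such a low-rate serial RF link is shown below; the 4-byte frame layout, port name, and pyserial usage are hypothetical, as the paper does not specify its protocol.

```python
import serial  # pyserial

def send_command(port, left_speed, right_speed):
    """Send one drive command; speeds are signed bytes (-128..127)."""
    # Hypothetical 4-byte frame: header, two speed bytes, simple checksum
    frame = bytes([0xAA,
                   left_speed & 0xFF,
                   right_speed & 0xFF,
                   (left_speed + right_speed) & 0xFF])
    port.write(frame)

if __name__ == "__main__":
    # 4800 bps matches the RF link rate quoted above; port name is a placeholder
    rf = serial.Serial("/dev/ttyUSB0", baudrate=4800, timeout=1)
    send_command(rf, 50, -50)  # pivot turn
    rf.close()
```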

  • PDF

An Experimental Study on the Optimal Number of Cameras used for Vision Control System (비젼 제어시스템에 사용된 카메라의 최적개수에 대한 실험적 연구)

  • 장완식;김경석;김기영;안힘찬
    • Transactions of the Korean Society of Machine Tool Engineers / v.13 no.2 / pp.94-103 / 2004
  • The vision system model used in this study involves six parameters and permits a kind of adaptability, in that the relationship between the camera-space location of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. This vision control method requires a number of cameras to map the 3-D physical space onto 2-D camera planes, and it can be used irrespective of camera locations as long as the visual cues are visible in each camera plane. Thus, this study investigates the optimal number of cameras for the developed vision control system by varying the number of cameras. The study proceeds in two stages: (a) the effectiveness of the vision system model and (b) the optimal number of cameras. The results give evidence of the adaptability of the developed vision control method when the optimal number of cameras is used.

A 3-D Position Compensation Method of Industrial Robot Using Block Interpolation (블록 보간법을 이용한 산업용 로봇의 3차원 위치 보정기법)

  • Ryu, Hang-Ki;Woo, Kyung-Hang;Choi, Won-Ho;Lee, Jae-Kook
    • Journal of Institute of Control, Robotics and Systems / v.13 no.3 / pp.235-241 / 2007
  • This paper proposes a self-calibration method for robots used in industrial assembly lines. The proposed method performs position compensation using a laser sensor and a vision camera. Because the laser sensor is a cross-type sensor that scans a horizontal and a vertical line, it is an efficient way to detect features of a vehicle and the winding shape of the vehicle body. For 3-D position compensation, the block interpolation method is applied. For selecting the feature point, pattern matching is used, and the 3-D position is selected by Euclidean distance mapping between the 462 stored feature values and the evaluated feature point. To evaluate the proposed algorithm, experiments were performed on a real industrial vehicle assembly line. As a result, the robot's working points can be displayed as 3-D points; these points are used to diagnose position errors and to reselect working points.
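
Block interpolation over a grid of pre-measured error vectors can be sketched as trilinear blending in Python; the grid layout and values below are assumptions for illustration, not the paper's calibration data.

```python
import numpy as np

def compensate(point, grid_origin, spacing, error_grid):
    """Correct a commanded 3-D point using trilinear interpolation of the
    pre-measured error vectors stored at the grid nodes.

    error_grid[i, j, k] : measured 3-D error vector at grid node (i, j, k)
    """
    u = (np.asarray(point, float) - grid_origin) / spacing
    i0 = np.floor(u).astype(int)      # lower corner of the enclosing block
    f = u - i0                        # fractional position inside the block
    corr = np.zeros(3)
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((f[0] if di else 1 - f[0]) *
                     (f[1] if dj else 1 - f[1]) *
                     (f[2] if dk else 1 - f[2]))
                corr += w * error_grid[i0[0] + di, i0[1] + dj, i0[2] + dk]
    return np.asarray(point, float) - corr   # corrected robot target

# Illustrative grid with a constant +2 mm error along x
grid = np.zeros((4, 4, 4, 3))
grid[..., 0] = 0.002
print(compensate([0.35, 0.20, 0.10], np.zeros(3), 0.25, grid))  # x shifts -2 mm
```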

A Stereo-Vision System for 3D Position Recognition of Cow Teats on Robot Milking System (로봇 착유시스템의 3차원 유두위치인식을 위한 스테레오비젼 시스템)

  • Kim, Woong;Min, Byeong-Ro;Lee, Dea-Weon
    • Journal of Biosystems Engineering / v.32 no.1 s.120 / pp.44-49 / 2007
  • A stereo vision system using two monochromatic cameras was developed for a robot milking system (RMS). An algorithm based on inverse perspective transformation was developed to acquire 3-D information on all teats. To verify the performance of the algorithm in the stereo vision system, indoor tests were carried out using a test board and model teats. A real cow and a model cow were used to measure distance errors. The maximum distance errors for the test board, model teats, and real teats were 0.5 mm, 4.9 mm, and 6 mm, respectively; the average distance errors for model teats and real teats were 2.9 mm and 4.43 mm, respectively. It was therefore concluded that the algorithm is sufficiently accurate for application in the RMS.
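
One common way to realize the inverse perspective step is to back-project each camera's teat pixel into a 3-D ray and intersect the rays. The Python sketch below uses the midpoint of closest approach with placeholder calibration, which may differ from the authors' exact transformation.

```python
import numpy as np

def ray(K, R, t, pixel):
    """Camera centre and unit world-frame direction of the viewing ray
    through an image point, for a camera with x = K (R X + t)."""
    d = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return -R.T @ t, d / np.linalg.norm(d)

def midpoint_of_rays(c1, d1, c2, d2):
    """Midpoint of the closest approach between two 3-D rays."""
    A = np.column_stack([d1, -d2])
    s, u = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return ((c1 + s * d1) + (c2 + u * d2)) / 2.0

# Placeholder calibration: two parallel cameras 0.1 m apart
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.1, 0.0, 0.0])

def project(K, R, t, X):
    x = K @ (R @ X + t)
    return x[:2] / x[2]

teat = np.array([0.05, 0.02, 1.5])   # ground-truth position for the check
c1, d1 = ray(K, R1, t1, project(K, R1, t1, teat))
c2, d2 = ray(K, R2, t2, project(K, R2, t2, teat))
print(midpoint_of_rays(c1, d1, c2, d2))  # ~ [0.05, 0.02, 1.5]
```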

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.12 no.4 / pp.389-394 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could make the robot move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, across different sensing modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has sought to reduce this noise, but accuracy remains limited because most approaches are statistical. The goal of this research is to measure the robot location more exactly by matching a pre-built VRML 3-D model against the real vision image. To determine the position of the mobile robot, a landmark localization technique is applied. Landmarks are any detectable structures in the physical environment; some methods use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image processing and neural-network pattern matching are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2-D vision scene is overlaid with the VRML scene.
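
The angular-separation computation mentioned above is straightforward once the focal length is known; a small Python sketch follows, with an assumed principal point and illustrative pixel coordinates.

```python
import numpy as np

def line_of_sight(pixel, f, cx, cy):
    """Unit viewing ray through an image point, focal length f in pixels."""
    v = np.array([pixel[0] - cx, pixel[1] - cy, f])
    return v / np.linalg.norm(v)

def angular_separation(p1, p2, f, cx=320.0, cy=240.0):
    """Angle (degrees) between the lines of sight to two landmarks."""
    v1 = line_of_sight(p1, f, cx, cy)
    v2 = line_of_sight(p2, f, cx, cy)
    return np.degrees(np.arccos(np.clip(v1 @ v2, -1.0, 1.0)))

# Three landmarks give three pairwise angles, which together constrain the
# robot's pose relative to the known landmark positions (illustrative pixels)
print(angular_separation((350, 260), (120, 255), f=800.0))
```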

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.04a / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could make the robot move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, across different sensing modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has sought to reduce this noise, but accuracy remains limited because most approaches are statistical. The goal of this research is to measure the robot location more exactly by matching a pre-built VRML 3-D model against the real vision image. To determine the position of the mobile robot, a landmark localization technique is applied. Landmarks are any detectable structures in the physical environment; some methods use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image processing and neural-network pattern matching are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2-D vision scene is overlaid with the VRML scene.
