• Title/Summary/Keyword: Robot Vision


A study on the rigid body placement task of robot system based on the computer vision system (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1114-1119
    • /
    • 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method uses a sequential estimation scheme that permits placement of a rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed, based on a model that generalizes known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iteration method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.
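The sequential scheme above (estimate a camera-space model, then iterate the joint angles until a visual cue lands at the desired image location) can be sketched as follows. This is a minimal one-joint illustration, not the paper's six-parameter model; the forward model `image_x` and its constants are assumptions.

```python
import math

SCALE, OFFSET = 200.0, 320.0   # assumed model constants (pixels)

def image_x(theta):
    """Hypothetical forward model: predicted image x-coordinate of the
    end-effector cue as a function of one joint angle (a stand-in for
    the paper's six-parameter camera-space model)."""
    return OFFSET + SCALE * math.sin(theta)

def solve_joint_angle(target_x, theta0=0.0, tol=1e-9, max_iter=50):
    """Newton iteration: adjust the joint angle until the predicted
    image coordinate matches the target pixel."""
    theta = theta0
    for _ in range(max_iter):
        err = image_x(theta) - target_x
        if abs(err) < tol:
            break
        theta -= err / (SCALE * math.cos(theta))  # d(image_x)/d(theta)
    return theta
```

With these constants, `solve_joint_angle(420.0)` converges in a few iterations to the angle whose predicted image coordinate is pixel 420.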


Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor

  • Jeong, Jeong-Woo;Kang, Hee-Jun
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.4 no.1
    • /
    • pp.43-48
    • /
    • 2003
  • A linear laser-vision sensor called the 'Perceptron TriCam Contour' is mounted on an industrial robot and is often used for various robot applications such as position correction and part inspection. In this paper, a sensor center position calibration is presented for the most accurate use of the robot-Perceptron system. The resulting algorithm is suitable for on-site calibration in an industrial application environment. The calibration algorithm requires the joint sensor readings and the Perceptron sensor measurements taken on a specially devised jig, which is essential for the calibration process. The algorithm is implemented on the Hyundai 7602 AP robot, and the Perceptron's measurement error is reduced to less than 1.4 mm.
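A toy version of the idea, solving for a constant sensor-center offset from paired robot-frame predictions and sensor measurements gathered on a jig, might look like this. The data layout is an assumption; the paper's actual algorithm also involves the robot's joint readings and the jig geometry.

```python
def estimate_offset(robot_points, sensed_points):
    """Least-squares estimate of a constant 3-D sensor-center offset:
    for a pure translation, the solution is simply the mean residual
    between robot-frame predictions and sensor measurements."""
    n = len(robot_points)
    return tuple(
        sum(r[i] - s[i] for r, s in zip(robot_points, sensed_points)) / n
        for i in range(3)
    )
```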

Robust Landmark Matching for Self-localization of Robots from the Multiple Candidates

  • Kang, Hyun-Deok;Jo, Kang-Hyun
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.41.1-41
    • /
    • 2002
  • This paper describes a robust landmark matching method that reduces the ambiguity among landmark candidates. A typical robot system acquires landmark candidates through a vision sensor in an outdoor environment. Our robot uses an omnidirectional vision system to obtain the entire surrounding view; it therefore obtains more landmark candidates than a conventional vision system would. To obtain the candidates, the robot uses two types of features: vertical edges and merged regions of vertical edges. The former extracts the vertical lines of buildings, street lamps, etc.; the latter reduces the ambiguity of vertical edges in similar regions. It is difficult to match the candidates of landmark...
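The second feature type, merging nearby vertical edges into one region to suppress duplicate candidates, can be illustrated as a simple 1-D grouping over edge column positions. The `gap` threshold is an assumption.

```python
def merge_vertical_edges(columns, gap=5):
    """Group vertical-edge x-positions that lie within `gap` pixels of
    each other into merged regions, returned as (start, end) column
    spans -- one candidate region per group instead of one per edge."""
    regions = []
    for x in sorted(columns):
        if regions and x - regions[-1][1] <= gap:
            regions[-1][1] = x          # extend the current region
        else:
            regions.append([x, x])      # start a new region
    return [tuple(r) for r in regions]
```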


A MNN (Modular Neural Network) for Robot Endeffector Recognition (로봇 Endeffector 인식을 위한 모듈라 신경회로망)

  • 김영부;박동선
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.496-499
    • /
    • 1999
  • This paper describes a modular neural network (MNN) for a vision system that tracks a given object using a sequence of images from a camera unit. The MNN is used to precisely recognize the given robot endeffector and to minimize the processing time. Since the robot endeffector can be viewed in many different shapes in 3-D space, an MNN structure, which contains a set of feedforward neural networks, can be more attractive for recognizing the given object. Each single neural network learns the endeffector with one cluster of training patterns. The training patterns for a given network share similar characteristics, so they can be trained easily. The trained MNN is less sensitive to noise and shows better performance in recognizing the endeffector: its recognition rate is enhanced by 14% over a single neural network. A vision system with the MNN can precisely recognize the endeffector and place it at the center of a display for a remote operator.
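Structurally, an MNN of this kind is a gating step plus a set of expert networks: each expert is trained on one cluster of endeffector views, and at run time the input is routed to the expert whose cluster it most resembles. A schematic sketch; the distance-based gate and the module interfaces are assumptions, not the paper's exact design.

```python
def mnn_predict(x, modules, centers):
    """Route the feature vector x to the module whose training-cluster
    center is nearest (squared Euclidean distance), then let that
    module produce the recognition result."""
    best = min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, centers[i])))
    return modules[best](x)
```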


A Study on Obstacle Detection for Mobile Robot Navigation (이동형 로보트 주행을 위한 장애물 검출에 관한 연구)

  • Yun, Ji-Ho;Woo, Dong-Min
    • Proceedings of the KIEE Conference
    • /
    • 1995.11a
    • /
    • pp.587-589
    • /
    • 1995
  • The safe navigation of a mobile robot requires recognition of the environment through vision processing. To be guided along the given path, the robot should acquire information about where walls and corridors are located. Unexpected obstacles should also be detected as rapidly as possible for safe obstacle avoidance. In this paper, we assume that the mobile robot navigates on a flat surface. Under this assumption, we simplify the correspondence problem by working in the free navigation surface and matching features in that coordinate system. The vision processing system adopts edge line segments as its features. The line segments extracted from both images are matched in the free navigation surface. According to the matching result, each line segment is labeled with attributes regarding obstacle and free surface, and the 3-D shape of the obstacle is interpreted. The proposed vision processing method is verified through various simulations and experiments using real images.
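The flat-surface assumption yields a simple obstacle test: a matched edge segment lying on the floor has a predictable stereo disparity at its image row, so a segment whose measured disparity exceeds the ground-plane prediction must rise above the floor. A sketch of that labeling rule; the tolerance value is an assumption.

```python
def classify_segment(disparity, ground_disparity, tol=1.5):
    """Flat-floor test: a matched segment whose stereo disparity (px)
    exceeds the disparity predicted for the ground plane at that image
    row is closer than the floor, i.e. it sticks up -> obstacle."""
    return "obstacle" if disparity > ground_disparity + tol else "free"
```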


Development of multi-object image processing algorithm in a image plane (한 이미지 평면에 있는 다물체 화상처리 기법 개발)

  • 장완식;윤현권;김재확
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.555-555
    • /
    • 2000
  • This study concentrates on the development of a high-speed multi-object image processing algorithm; based on this algorithm, a vision control scheme is developed for real-time robot position control. Recently, the use of vision systems in robot position control has been increasing rapidly. To apply a vision system to robot position control, it is necessary to transform the physical coordinates of an object into the image information acquired by a CCD camera, which is called image processing. Thus, to control the robot's position in real time, we have to know the center point of the object in the image plane. In particular, in the case of a rigid body, the center points of multiple objects must be calculated in one image plane at the same time. To solve these problems, a multi-object image processing algorithm for rigid-body control is developed.
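Finding the center points of several objects in one image plane at the same time is, in the simplest binary-image case, connected-component labeling followed by a centroid per component. A sketch under that assumption; the paper's actual high-speed algorithm may differ.

```python
def object_centers(binary):
    """Label 4-connected foreground components of a binary image
    (list of rows of 0/1) and return the centroid (row, col) of each
    -- the per-object center points a vision controller needs from a
    single image plane."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                stack, pts = [(r, c)], []       # flood fill one blob
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                centers.append((sum(p[0] for p in pts) / len(pts),
                                sum(p[1] for p in pts) / len(pts)))
    return centers
```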


A Path tracking algorithm and a VRML image overlay method (VRML과 영상오버레이를 이용한 로봇의 경로추적)

  • Sohn, Eun-Ho;Zhang, Yuanliang;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.907-908
    • /
    • 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how the self-positioning is realized and shows the overlap between the 2-D and VRML scenes. The method successfully defines the robot's path.
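One well-known self-positioning scheme of the kind referenced above fixes the robot's 2-D position by intersecting bearing rays to two identified landmarks with known map coordinates. A minimal sketch; absolute bearings and exact landmark positions are assumed, and the paper does not specify which localization algorithm it uses.

```python
import math

def localize(lm1, lm2, b1, b2):
    """Intersect the two rays that start at the (unknown) robot
    position P and pass through landmarks lm1, lm2 at absolute
    bearings b1, b2 (radians): lm_i = P + t_i * (cos b_i, sin b_i)."""
    (x1, y1), (x2, y2) = lm1, lm2
    c1, s1 = math.cos(b1), math.sin(b1)
    c2, s2 = math.cos(b2), math.sin(b2)
    det = c2 * s1 - c1 * s2              # zero when bearings are parallel
    t1 = (c2 * (y1 - y2) - s2 * (x1 - x2)) / det
    return (x1 - t1 * c1, y1 - t1 * s1)
```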


Visual Servoing of a Mobile Manipulator Based on Stereo Vision

  • Lee, H.J.;Park, M.G.;Lee, M.C.
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.767-771
    • /
    • 2003
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute its position using the stereo vision system. While a monocular vision system needs properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without additional information. Many algorithms have been studied and developed for object recognition; however, most of these approaches suffer from computational complexity and are inadequate for real-time visual servoing. Color information, on the other hand, is useful for simple recognition in real-time visual servoing. In this paper, we discuss object recognition using colors, the stereo matching method, recovery of 3-D space, and visual servoing.
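The "recovery of 3-D space" step rests on the standard rectified pinhole-stereo relation: depth equals focal length times baseline divided by disparity. A sketch with image coordinates measured from the principal point; the camera parameters in the usage below are illustrative, not the paper's.

```python
def stereo_position(xl, xr, y, f, baseline):
    """Rectified pinhole stereo: Z = f * B / (xl - xr); X and Y then
    scale the image coordinates by Z / f.  Image coordinates are
    relative to the principal point, f in pixels, baseline in metres."""
    disparity = xl - xr
    z = f * baseline / disparity
    return (xl * z / f, y * z / f, z)
```

For example, matched columns xl=100 and xr=90 with f=500 px and a 0.1 m baseline give a depth of 0.5 m per pixel of disparity times 10 px, i.e. Z = 5 m.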


Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2379-2384
    • /
    • 2005
  • We propose a novel method for object recognition using a smart tag system with a stereo vision system on a pan-tilt mechanism. We developed a smart tag that includes an IRED (infrared-emitting diode) device; the smart tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object is centered in each camera's whole image view. The stereo vision system on the pan-tilt mechanism can map the position of the IRED to the robot coordinate system by using the pan-tilt angles. Then, to map the size and pose of the object into the robot coordinate system, we used a simple model-based vision algorithm. To increase the feasibility of tag-based object recognition, we implemented our approach using techniques that are as easy and simple as possible.
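Once the pan-tilt head has centered the IRED in the image, the tag's direction in the robot frame follows directly from the two angles. A sketch of that mapping; the axis conventions here are assumptions.

```python
import math

def pan_tilt_to_ray(pan, tilt):
    """Unit direction vector toward the centered tag: pan rotates
    about the vertical axis, tilt elevates out of the horizontal
    plane (both angles in radians)."""
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))
```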


An Experimental Study on the Optimal Number of Cameras used for Vision Control System (비젼 제어시스템에 사용된 카메라의 최적개수에 대한 실험적 연구)

  • 장완식;김경석;김기영;안힘찬
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.13 no.2
    • /
    • pp.94-103
    • /
    • 2004
  • The vision system model used in this study involves six parameters that permit a kind of adaptability, in that the relationship between the camera-space locations of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. This vision control method requires a number of cameras to map 3-D physical space onto 2-D camera planes, and it can be used irrespective of the cameras' locations as long as the visual cues are displayed in the same camera plane. Thus, this study investigates the optimal number of cameras for the developed vision control system as the number of cameras is varied. The study proceeds in two ways: a) effectiveness of the vision system model, and b) the optimal number of cameras. The results show evidence of the adaptability of the developed vision control method using the optimal number of cameras.