• Title/Summary/Keyword: Vision Mobile Robot


Control of a mobile robot supporting a task robot on the top

  • Lee, Jang M.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1996.10a / pp.1-7 / 1996
  • This paper addresses the control problem of a mobile robot supporting a task robot that needs to be positioned precisely. The main difficulty in the precise control of such a mobile robot is providing an accurate and stable base for the task robot: the end-plate of the mobile robot, which serves as the base of the task robot, cannot be positioned accurately without external position sensors. This difficulty is resolved in this paper through vision information obtained from a camera attached at the end of the task robot. First, the camera parameters were measured using images of a fixed object captured by the camera. The measured parameters include the rotation, position, scale factor, and focal length of the camera. These parameters could be measured by using the features of each vertex point of a hexagonal object and the pin-hole model of a camera. Using the measured pose (position and orientation) of the camera and the given kinematics of the task robot, the pose of the end-plate of the mobile robot is calculated and used for the precise control of the mobile robot. Experimental results for the pose estimation are shown.

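The pin-hole relation behind the camera-parameter measurement above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the hexagon radius, object distance, and focal length below are assumed values, and only focal-length recovery from the projection relation x = f·X/Z is shown (the paper also recovers rotation, position, and scale factor).

```python
import math

def project(point_3d, focal_length):
    """Pin-hole projection of a camera-frame point (X, Y, Z) onto the image plane."""
    X, Y, Z = point_3d
    return (focal_length * X / Z, focal_length * Y / Z)

def hexagon_vertices(radius, depth):
    """Vertices of a regular hexagon of known size facing the camera at distance `depth`
    (a stand-in for the paper's hexagonal calibration object)."""
    return [(radius * math.cos(math.radians(60 * k)),
             radius * math.sin(math.radians(60 * k)),
             depth) for k in range(6)]

def estimate_focal(image_points, object_points):
    """Recover the focal length from matched image/object points:
    x = f * X / Z  =>  f = x * Z / X (averaged over all usable coordinates)."""
    estimates = []
    for (x, y), (X, Y, Z) in zip(image_points, object_points):
        if abs(X) > 1e-9:
            estimates.append(x * Z / X)
        if abs(Y) > 1e-9:
            estimates.append(y * Z / Y)
    return sum(estimates) / len(estimates)
```

With the object pose known, the same vertex correspondences constrain the rotation and translation as well; here only the scale-type parameter is recovered to keep the sketch short.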

Vision-based Self Localization Using Ceiling Artificial Landmark for Ubiquitous Mobile Robot (유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법)

  • Lee Ju-Sang;Lim Young-Cheol;Ryoo Young-Jae
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.560-566 / 2005
  • In this paper, a practical technique for correcting a distorted image is presented for vision-based localization of a ubiquitous mobile robot. Localization of a mobile robot is essential and is realized using a camera vision system. To widen the view angle of the camera, the vision system includes a fish-eye lens, which distorts the image. Because a mobile robot moves rapidly, the image processing should be fast enough for localization. Thus, we propose a practical correction technique for the distorted image and verify its performance by experimental tests.
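The fish-eye correction idea can be sketched as a radial remapping. The paper does not specify its lens model, so an equidistant fish-eye model (r_d = f·θ) is assumed here purely for illustration; the focal length and test point are also assumed values.

```python
import math

def distort_point(xu, yu, f):
    """Forward fish-eye mapping (equidistant model): a pin-hole radius
    r_u = f*tan(theta) becomes a fish-eye radius r_d = f*theta."""
    ru = math.hypot(xu, yu)
    if ru < 1e-12:
        return (xu, yu)
    theta = math.atan(ru / f)
    s = (f * theta) / ru
    return (xu * s, yu * s)

def undistort_point(xd, yd, f):
    """Inverse mapping used for correction: recover theta = r_d / f, then the
    pin-hole radius r_u = f*tan(theta). Coordinates are relative to the
    principal point."""
    rd = math.hypot(xd, yd)
    if rd < 1e-12:
        return (xd, yd)
    theta = rd / f
    s = (f * math.tan(theta)) / rd
    return (xd * s, yd * s)
```

For the fast per-frame processing the abstract emphasizes, such a correction is typically precomputed once into a pixel lookup table, so the run-time cost is a single table read per pixel.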

Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors (속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종)

  • Cho, Namsub;Kwon, Ji-Wook;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.6 / pp.852-862 / 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints, using a backstepping-like feedback linearization. For target tracking, we apply a vision system to the mobile robot to obtain the relative posture information between the mobile robot and the target. The robots do not use sensors to obtain velocity information; the velocities of both the mobile robot and the target are therefore assumed to be unknown. Instead, the proposed method uses only the maximum-velocity information of the mobile robot and the target. First, pseudo commands for the forward linear velocity and the heading direction angle are designed based on the kinematics, using the obtained image information. Then, the actual control inputs are designed to make the actual forward linear velocity and heading direction angle follow the pseudo commands. Through simulations and experiments with the mobile robot, we have confirmed that the proposed control method is able to track the target even when velocity sensors are not used at all.
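The pseudo-command step described in the abstract can be illustrated with a minimal kinematic sketch. This is not the paper's controller (which uses backstepping-like feedback linearization); the gain `k_v` and the saturation form are hypothetical, but the sketch shows the idea of steering toward the relative target position while using only a maximum-velocity bound.

```python
import math

def pseudo_commands(dx, dy, v_max, k_v=1.0):
    """Kinematic pseudo-commands from the relative target position (dx, dy)
    in the robot frame: steer toward the target, with the forward speed
    proportional to distance but saturated at v_max, since only the
    maximum-velocity bound is assumed known (not the actual velocities)."""
    heading_cmd = math.atan2(dy, dx)               # desired heading angle
    v_cmd = min(k_v * math.hypot(dx, dy), v_max)   # saturated forward speed
    return v_cmd, heading_cmd
```

The actual control inputs would then be designed so that the robot's true forward velocity and heading track these pseudo commands.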

Moving Target Tracking using Vision System for an Omni-directional Wheel Robot (전방향 구동 로봇에서의 비젼을 이용한 이동 물체의 추적)

  • Kim, San;Kim, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems / v.14 no.10 / pp.1053-1061 / 2008
  • In this paper, moving target tracking using binocular vision for an omni-directional mobile robot is addressed. In the binocular vision system, three-dimensional information on the target is extracted by vision processes including calibration, image correspondence, and 3D reconstruction. The robot controller uses SPI (serial peripheral interface) for effective communication between the robot master controller and the wheel controllers.
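The 3D-reconstruction step in a binocular setup reduces, after rectification, to triangulation from horizontal disparity. The sketch below assumes a rectified stereo pair with known focal length (in pixels) and baseline; the numbers in the usage are illustrative, not from the paper.

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from horizontal disparity in a rectified stereo pair: Z = f * B / d."""
    d = x_left - x_right                 # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / d

def reconstruct_point(x_left, y_left, x_right, focal_px, baseline_m):
    """Back-project the left-image pixel to a 3-D camera-frame point (X, Y, Z)."""
    Z = triangulate_depth(x_left, x_right, focal_px, baseline_m)
    return (x_left * Z / focal_px, y_left * Z / focal_px, Z)
```

For example, with f = 700 px and a 0.1 m baseline, a 35-pixel disparity corresponds to a depth of 2 m; calibration and image correspondence are what supply the focal length, baseline, and matched pixel pairs.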

Implementation of Visual Data Compressor for Vision Sensor of Mobile Robot (이동로봇의 시각센서를 위한 동영상 압축기 구현)

  • Kim Hyung O;Cho Kyoung Su;Baek Moon Yeal;Kee Chang Doo
    • Journal of the Korean Society for Precision Engineering / v.22 no.9 s.174 / pp.99-106 / 2005
  • In recent years, vision sensors have been widely used in mobile robots for navigation and exploration. The analog transmission of visual data commonly used in this area, however, has some disadvantages, including susceptibility to noise and difficulties with data storage. The large amount of data also makes this method difficult to use on a mobile robot. In this paper, a digital data compression technique based on MPEG-4, substituting for the analog technology, is proposed to overcome these disadvantages by using the DWT (Discrete Wavelet Transform) instead of the DCT (Discrete Cosine Transform). TI's DSP chip, the TMS320C6711, is used for the image encoder, and the performance of the proposed method is evaluated in terms of PSNR (Peak Signal-to-Noise Ratio), QP (Quantization Parameter), and bitrate.
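The DWT at the core of the proposed encoder can be illustrated with its simplest instance. The paper does not state which wavelet filter bank it uses, so the one-level Haar transform below is only an assumed, minimal example of the averaging/detail decomposition that makes wavelet coefficients compressible.

```python
def haar_dwt_1d(signal):
    """One level of the Haar discrete wavelet transform: pairwise averages
    (approximation band) and pairwise differences (detail band).
    Even-length input is assumed."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Exact inverse transform: perfect reconstruction of the original signal."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

In an image codec the transform is applied along rows and columns, the small detail coefficients are quantized away (this is where the QP enters), and the remainder is entropy-coded to reach the target bitrate.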

Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, a robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could make the robot move in a wrong direction and suffer damage from collisions with surrounding obstacles. We propose a method for obtaining an accurate robot position. Localization of the mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment. Image processing and neural-network pattern-matching techniques are applied to find the location of the robot. After the self-positioning procedure, the 2-D scene from the vision system is overlaid onto a VRML scene. This paper describes how the self-positioning is realized and shows the overlay between the 2-D and VRML scenes. The suggested method defines a robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.
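Once the robot's position is known, path finding over the mapped environment can be done with a standard graph search. The abstract does not name the search algorithm, so the breadth-first search over an occupancy grid below is only a generic illustration of the path-finding step.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle);
    returns the shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                 # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []
            node = goal
            while node is not None:      # walk the parent links back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

The quality of the resulting path depends directly on how accurately the self-positioning step places the robot in the grid, which is the point the abstract emphasizes.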

Obstacle Avoidance and Path Planning for a Mobile Robot Using Single Vision System and Fuzzy Rule (모노비전과 퍼지규칙을 이용한 이동로봇의 경로계획과 장애물회피)

  • 배봉규;이원창;강근택
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2000.11a / pp.274-277 / 2000
  • In this paper, we propose new algorithms for path planning and obstacle avoidance for an autonomous mobile robot with a vision system. Distance variation is included in the path planning so that the robot approaches the target point while avoiding obstacles. Fuzzy rules are also applied to both trajectory planning and obstacle avoidance to improve the autonomy of the mobile robot. Computer simulation shows that the proposed algorithm works well.

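A fuzzy obstacle-avoidance rule of the kind the abstract mentions can be sketched very compactly. The two rules, the membership shapes, and the 40-degree output below are all hypothetical; the paper's actual rule base is not given in the abstract.

```python
def fuzzy_steer(obstacle_dist, near_range=2.0, hard_turn_deg=40.0):
    """Minimal two-rule fuzzy controller:
       IF obstacle is NEAR THEN turn hard; IF obstacle is FAR THEN go straight.
    NEAR has a shoulder membership that is 1 at zero distance and fades to 0
    at `near_range`; the output is a weighted-average defuzzification."""
    near = max(0.0, min(1.0, (near_range - obstacle_dist) / near_range))
    far = 1.0 - near
    return (near * hard_turn_deg + far * 0.0) / (near + far)
```

The appeal of the fuzzy formulation is exactly this smooth blending: the steering command degrades gracefully between "turn hard" and "go straight" instead of switching abruptly at a fixed threshold.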

Navigation and Localization of Mobile Robot Based on Vision and Sensor Network Using Fuzzy Rules (퍼지 규칙을 이용한 비전 및 무선 센서 네트워크 기반의 이동로봇의 자율 주행 및 위치 인식)

  • Heo, Jun-Young;Kang, Geun-Tack;Lee, Won-Chang
    • Proceedings of the IEEK Conference / 2008.06a / pp.673-674 / 2008
  • This paper presents a new navigation algorithm for an autonomous mobile robot that uses vision, IR sensors, and a ZigBee sensor network with fuzzy rules. We also show that the developed mobile robot with the proposed algorithm navigates well in complex unknown environments.


Self-Localization of Mobile Robot Using Single Camera (단일 카메라를 이용한 이동로봇의 자기 위치 추정)

  • 김명호;이쾌희
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.404-404 / 2000
  • This paper presents a single-vision-based self-localization method for a corridor environment. We use the Hough transform to find parallel lines and vertical lines, use their cross points as feature points, and calculate the relative distance from the mobile robot to these points. To match the environment map to the feature points, a search window is defined, and self-localization is performed by the matching procedure. Experimental results show the suitability of this method.

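The Hough transform used above for line finding votes each edge point into a (θ, ρ) parameter space, where a line is ρ = x·cos θ + y·sin θ. The sketch below is a minimal, unoptimized accumulator over a point list (the bin sizes and image extent are assumed values); a vertical image line then shows up as a sharp peak.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, diag=200.0):
    """Accumulate edge points into a (theta, rho) Hough space and return the
    strongest cell as (theta_rad, rho, votes), using rho = x*cos(t) + y*sin(t).
    `diag` bounds |rho| and must cover the image extent."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + diag) / rho_res))   # shift rho to a non-negative bin
            acc[(t, r)] = acc.get((t, r), 0) + 1
    (t_best, r_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, r_best * rho_res - diag, votes
```

In the corridor setting, the peaks for the parallel floor lines and the vertical lines are intersected to produce the feature points used for map matching.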

Self-Localization of Autonomous Mobile Robot using Multiple Landmarks (다중 표식을 이용한 자율이동로봇의 자기위치측정)

  • 강현덕;조강현
    • Journal of Institute of Control, Robotics and Systems / v.10 no.1 / pp.81-86 / 2004
  • This paper describes self-localization of a mobile robot from multiple candidate landmarks in an outdoor environment. Our robot uses an omnidirectional vision system for efficient self-localization; this vision system acquires visual information from all viewing directions. The robot uses as landmarks features whose size in the image is bigger than that of others, such as buildings, sculptures, and placards, and uses vertical edges and their merged regions as the features. In our previous work, we found that landmark matching is difficult when the selected landmark candidates belong to a region of repeated vertical edges in the image. To overcome this problem, the robot uses merged regions of vertical edges: if the interval between vertical edges is short, the robot bundles them into the same region, and these features are selected as landmark candidates. The extracted merged regions of vertical edges therefore reduce the ambiguity of landmark matching. The robot compares landmark candidates between the previous and current images and can thus find the same landmark across the image sequence using the proposed feature and method. We achieved efficient self-localization using this robust landmark-matching method in experiments conducted on our campus.
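The edge-bundling step described above is essentially interval merging along the image's horizontal axis. The sketch below is an assumed, simplified version: it works on the x-coordinates of detected vertical edges only, and the 5-pixel gap threshold is a hypothetical parameter.

```python
def merge_vertical_edges(edge_columns, max_gap=5):
    """Bundle vertical edges whose image x-coordinates are closer than `max_gap`
    pixels into merged regions, returned as (left, right) column intervals.
    Closely repeated edges (e.g. building windows) collapse into one region,
    reducing landmark-matching ambiguity."""
    if not edge_columns:
        return []
    cols = sorted(edge_columns)
    regions = [[cols[0], cols[0]]]
    for x in cols[1:]:
        if x - regions[-1][1] <= max_gap:
            regions[-1][1] = x          # extend the current merged region
        else:
            regions.append([x, x])      # start a new candidate landmark region
    return [tuple(r) for r in regions]
```

Matching then compares these merged regions, rather than individual edges, between consecutive omnidirectional images.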