• Title/Summary/Keyword: Robot Vision


Point Number Algorithm for Position Identification of Mobile Robots (로봇의 위치계산을 위한 포인트 개수 알고리즘)

  • Liu, Jiang;Son, Young-Ik;Kim, Kab-Il
    • Proceedings of the KIEE Conference / 2005.10b / pp.427-429 / 2005
  • This paper presents a Point Number Algorithm (PNA) for real-time image processing in the position identification of a mobile robot. PNA counts the points visible in the image captured by the robot's vision system and computes the distance between the robot and the wall from that count. The algorithm enables a robot vision system to identify where the robot is within its workspace. The workspace walls are white with black points distributed evenly across them, and the camera's viewing angle is fixed, so the more black points appear in the image, the farther the robot is from the wall. When the robot does not face the wall directly, the point count differs; the minimum count is obtained when the robot faces the wall head-on. Simulation results are presented at the end of the paper.

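The counting-based ranging idea in the abstract above can be sketched as follows. The blob-counting routine and the quadratic count-to-distance model (visible wall area grows as distance squared for a fixed field of view) are illustrative assumptions; the calibration constants are hypothetical, not from the paper:

```python
import numpy as np

def count_black_points(img, thresh=128):
    """Count connected dark blobs ('points') in a grayscale image
    via 4-connected flood fill; each blob is one wall point."""
    dark = img < thresh
    seen = np.zeros_like(dark, dtype=bool)
    h, w = dark.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if dark[r, c] and not seen[r, c]:
                count += 1
                stack = [(r, c)]
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and dark[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

def distance_from_count(n, points_per_m2=100.0, fov_area_coeff=1.0):
    """For a fixed field of view, the visible wall area grows as d**2,
    so the count n ~ points_per_m2 * fov_area_coeff * d**2.
    Invert: d = sqrt(n / (points_per_m2 * fov_area_coeff)).
    Both constants are hypothetical calibration values."""
    return (n / (points_per_m2 * fov_area_coeff)) ** 0.5
```

The minimum-count property the abstract mentions follows from the same model: any oblique view covers more wall area than the head-on view at the same distance, hence contains more points.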

Vision Based Mobile Robot Control (이동 로봇의 비젼 기반 제어)

  • Kim, Jin-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers P / v.60 no.2 / pp.63-67 / 2011
  • This paper presents mobile robot control based on a vision system. The proposed vision-based controller consists of a camera tracking controller and a formation controller. The camera controller uses an adaptive gain based on image-based visual servoing (IBVS). The formation controller, designed in the sense of Lyapunov stability, follows the leader. Simulation results show that the proposed vision-based mobile robot control is valid for indoor mobile robot applications.
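A minimal sketch of a leader-follower formation step of the kind the abstract describes. This is a generic kinematic formation law, not the paper's Lyapunov controller; the gains, pose layout, and desired-offset parameters are assumptions:

```python
import math

def follower_control(leader_pose, follower_pose, d_des, phi_des,
                     k_rho=1.0, k_alpha=2.0):
    """One control step: drive the follower toward the point at desired
    distance d_des and bearing phi_des relative to the leader's heading.
    Poses are (x, y, theta); gains k_rho/k_alpha are illustrative."""
    lx, ly, lth = leader_pose
    fx, fy, fth = follower_pose
    # desired follower position in the world frame
    tx = lx + d_des * math.cos(lth + phi_des)
    ty = ly + d_des * math.sin(lth + phi_des)
    rho = math.hypot(tx - fx, ty - fy)                    # distance error
    alpha = math.atan2(ty - fy, tx - fx) - fth            # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = k_rho * rho * math.cos(alpha)                     # forward velocity
    w = k_alpha * alpha                                   # angular velocity
    return v, w
```

With the follower directly behind the leader and facing it, the commanded turn rate is zero and the forward speed is proportional to the remaining gap.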

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.04a / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but for use in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for human-robot interaction (HRI). To carry out tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine, fed by a wireless vision camera, are implemented and fused with the other sensors installed on the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

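The basic template matching that the abstract's "enhanced template matching" builds on can be sketched with normalized cross-correlation. This is the textbook NCC search, not the paper's enhanced variant:

```python
import numpy as np

def ncc_match(image, template):
    """Locate a template in a grayscale image by normalized
    cross-correlation; returns the (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(float)
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            # NCC score in [-1, 1]; flat patches score 0
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

An "enhanced" version would typically add a coarse-to-fine pyramid or early termination; the brute-force scan here is kept for clarity.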

Development of Vision based Autonomous Obstacle Avoidance System for a Humanoid Robot (휴머노이드 로봇을 위한 비전기반 장애물 회피 시스템 개발)

  • Kang, Tae-Koo;Kim, Dong-Won;Park, Gwi-Tae
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.1 / pp.161-166 / 2011
  • This paper addresses a vision-based autonomous walking control system. To handle obstacles that lie beyond the field of view (FOV), we use a 3D panoramic depth image. Moreover, to let a humanoid robot decide the avoidance direction and walking motion for obstacle avoidance by itself, we propose vision-based path planning using the 3D panoramic depth image. In this path planning, the path and walking motion are decided according to environmental conditions such as the size of the obstacle and the available avoidance space. The method is applied to the humanoid robot URIA. The evaluation results show that the proposed method can effectively decide the avoidance direction and walking motion of a practical humanoid robot.
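Choosing an avoidance direction from a panoramic depth image can be sketched as a gap search over bearings. The 1D scan representation, clearance threshold, and angular-footprint check are simplifying assumptions, not the paper's method:

```python
import math

def choose_direction(depths, robot_width_rad, min_clearance):
    """Pick an avoidance heading from a panoramic depth scan
    (one depth per bearing, spanning -pi..pi). Return the bearing at
    the center of the widest run of bearings whose depth exceeds
    min_clearance, provided the run is wider than the robot's angular
    footprint; None if no safe gap exists."""
    n = len(depths)
    step = 2 * math.pi / n
    best_len, best_start, run_start = 0, None, None
    for i, d in enumerate(list(depths) + [0.0]):   # sentinel closes last run
        if d > min_clearance:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None:
                if i - run_start > best_len:
                    best_len, best_start = i - run_start, run_start
                run_start = None
    if best_start is None or best_len * step < robot_width_rad:
        return None                                # no gap wide enough
    center = best_start + best_len / 2.0
    return -math.pi + center * step                # bearing of gap center
```

A walking-motion decision (side-step vs. turn-and-walk) would then be selected from the returned bearing and the gap width, which is the part the paper ties to the humanoid's gait.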

A Study on the Obstacle Avoidance of a Multi-Link Robot System using Vision System (Vision System을 이용한 다관절 로봇팔의 장애물 우회에 관한 연구)

  • 송경수;이병룡
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.691-694 / 2000
  • In this paper, a motion control algorithm using a neural network is proposed that lets a robot arm avoid an unexpected obstacle while moving from the start to the goal position. During the motion, the vision system recognizes any obstacle that appears, and at every step an optimization algorithm quickly chooses one motion from among the robot's possible motions. The proposed algorithm shows good avoidance characteristics in simulation.

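The per-step "choose a motion among the possible motions" idea can be sketched as a cost-based selection. The 2D point model, cost weights, and clearance cap are toy assumptions standing in for the paper's neural-network/optimization scheme:

```python
def choose_motion(current, goal, obstacles, candidates,
                  w_goal=1.0, w_obs=2.0):
    """Score every candidate motion and pick the cheapest:
    cost = (remaining distance to goal) - (obstacle-clearance bonus).
    current/goal/obstacles are 2D points; candidates are (dx, dy) moves."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best, best_cost = None, float("inf")
    for m in candidates:
        nxt = (current[0] + m[0], current[1] + m[1])
        clearance = min((dist(nxt, o) for o in obstacles), default=float("inf"))
        # cap the clearance bonus so far-away obstacles stop mattering
        cost = w_goal * dist(nxt, goal) - w_obs * min(clearance, 1.0)
        if cost < best_cost:
            best, best_cost = m, cost
    return best
```

With an obstacle directly on the straight-line path, the weighting makes a sideways move cheaper than pushing straight through.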

Evaluation of Robot Vision Control Scheme Based on EKF Method for Slender Bar Placement in the Appearance of Obstacles (장애물 출현 시 얇은 막대 배치작업에 대한 EKF 방법을 이용한 로봇 비젼제어기법 평가)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society for Precision Engineering / v.32 no.5 / pp.471-481 / 2015
  • This paper presents robot vision control schemes using the Extended Kalman Filter (EKF) for slender-bar placement when obstacles appear during robot movement. The vision system model used in this study involves six camera parameters ($C_1{\sim}C_6$). To develop the robot vision control scheme, the six parameters are estimated first; then, based on the estimated parameters, the robot's joint angles for the slender-bar placement are estimated. In particular, the robot trajectory induced by obstacles is divided into three regions: the beginning region, the middle region, and the near-target region. Finally, the effect of the number of obstacles on the proposed vision control schemes is investigated in each region through slender-bar placement experiments.
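One EKF step for estimating static camera parameters, as in the first stage the abstract describes, can be sketched generically. The random-walk process model, the function names, and the measurement interface are assumptions; the paper's specific six-parameter camera model is not reproduced:

```python
import numpy as np

def ekf_param_step(c, P, z, h, H_jac, Q, R):
    """One EKF predict/update for a static parameter vector c
    (process model c_k = c_{k-1}, so prediction only inflates the
    covariance P by Q). h(c) predicts the image measurement z;
    H_jac(c) is the measurement Jacobian; R is measurement noise."""
    # predict: parameters constant, uncertainty grows
    P = P + Q
    # update with the new vision measurement
    H = H_jac(c)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    c = c + K @ (z - h(c))               # correct the estimate
    P = (np.eye(len(c)) - K @ H) @ P
    return c, P
```

Repeated over the incoming images, the estimate converges toward the parameters that explain the measurements; the second stage (joint-angle estimation) would reuse the same machinery with the estimated camera model inside h.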

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong;Park, Young-Jin;Park, Youn-Sik;Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems / v.17 no.6 / pp.546-551 / 2011
  • A tilt sensor is required to control the attitude of a biped robot walking on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt sensor by comparing the current image with a reference image. However, a vision sensor alone has serious technological limitations for biped robot control, such as a low sampling frequency and estimation time delay. To verify these limitations, an inverted-pendulum setup representing the pitch motion of a walking or running robot is used, and it is shown that the vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome these limitations, a Kalman filter for multi-rate sensor fusion is applied together with a low-cost gyro sensor; it compensates for the vision sensor's limitations and also eliminates the gyro's drift. Inverted-pendulum control experiments show that the tilt estimation of the fused sensors improves enough to control the pendulum's attitude.
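The complementary roles of the two sensors can be sketched with a fixed-gain multi-rate fusion loop: integrate the fast gyro every step, and pull the estimate toward the slow vision measurement whenever one arrives. This is a fixed-gain stand-in for the paper's multi-rate Kalman filter; the gain and data layout are assumptions:

```python
def fuse_tilt(gyro_rates, vision_meas, dt, k=0.2):
    """Multi-rate tilt estimation.

    gyro_rates: angular rate at every fast step (rad/s)
    vision_meas: dict {step_index: measured tilt} at the slow vision rate
    Gyro integration drifts with any rate bias; the sparse vision
    corrections bound that drift, while the gyro fills the gaps
    between the slow, delayed vision samples.
    """
    theta, history = 0.0, []
    for i, w in enumerate(gyro_rates):
        theta += w * dt                      # gyro integration (fast)
        if i in vision_meas:                 # vision correction (slow)
            theta += k * (vision_meas[i] - theta)
        history.append(theta)
    return history
```

With a biased gyro and a true tilt of zero, pure integration drifts without bound, while the fused estimate settles at a small bounded error, which is the qualitative result the paper demonstrates on the pendulum.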

Vision steered micro robot for MIROSOT (화상처리에 의한 등곡률반경 방식의 로봇 제어)

  • 차승엽;김병수;김경태
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1997.10a / pp.825-827 / 1997
  • This paper presents a robot steered by a vision system. The proposed robot has an AM188ES CPU (5.3 MIPS) and two DC motors with encoders; it turns accurately at any speed and moves like a car controlled by a human with a steering wheel. Only the steering-angle value is sent to the robot, without regard to speed. We present how this robot is controlled using our real-time vision system.

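Mapping a car-like steering command onto a two-motor MIROSOT robot can be sketched with constant-curvature kinematics (the Korean title refers to a constant-radius-of-curvature method): the steering angle fixes a turning radius, and each wheel runs at the speed matching its own arc. The robot dimensions are hypothetical:

```python
import math

def wheel_speeds(v, steering_angle, wheelbase=0.1, track=0.08):
    """Convert forward speed v and a car-like steering angle into
    left/right wheel speeds of a differential-drive robot driving a
    constant-curvature arc. wheelbase/track are illustrative sizes."""
    if abs(steering_angle) < 1e-9:
        return v, v                              # straight: equal speeds
    R = wheelbase / math.tan(steering_angle)     # turning radius of the arc
    w = v / R                                    # yaw rate along the arc
    v_left = w * (R - track / 2.0)               # inner/outer wheel arcs
    v_right = w * (R + track / 2.0)
    return v_left, v_right
```

Because both wheels follow arcs about the same center, the turn geometry stays the same at any speed, matching the abstract's claim that only the steering angle needs to be transmitted.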

Servo control of mobile robot using vision system (비젼시스템을 이용한 이동로봇의 서보제어)

  • 백승민;국태용
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1997.10a / pp.540-543 / 1997
  • In this paper, a precise trajectory-tracking method for a mobile robot using a vision system is presented. To achieve precise trajectory tracking, a hierarchical control structure is used, composed of a path planner, a vision system, and a dynamic controller. The dynamic controller is designed under non-ideal conditions such as parameter variation, friction, and external disturbance. The proposed controller can learn a bounded control input that compensates repetitive or periodic dynamics, providing robust and adaptive learning capability. Moreover, the vision system lets the mobile robot compensate the cumulative localization error that arises when a relative sensor such as an encoder is used to determine the robot's position. The effectiveness of the proposed control scheme is shown through computer simulation.

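Why the vision system is needed on top of encoder odometry can be sketched in a few lines: dead reckoning accumulates error without bound, while an occasional absolute vision fix resets it. The data layout (deltas as (dx, dy), fixes as a step-indexed dict) is an assumption for illustration:

```python
def track_position(odom_deltas, vision_fixes):
    """Dead-reckon from encoder deltas; whenever an absolute vision
    fix arrives, replace the estimate and discard accumulated drift."""
    x, y = 0.0, 0.0
    for i, (dx, dy) in enumerate(odom_deltas):
        x, y = x + dx, y + dy            # encoder-only update (drifts)
        if i in vision_fixes:            # absolute vision correction
            x, y = vision_fixes[i]
    return x, y
```

After a fix, only the drift accumulated since that fix remains, so the localization error stays bounded by the vision sampling interval rather than growing with total distance traveled.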

3-D vision sensor system for arc welding robot with coordinated motion by transputer system

  • Ishida, Hirofumi;Kasagami, Fumio;Ishimatsu, Takakazu
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1993.10b / pp.446-450 / 1993
  • In this paper we propose an arc welding robot system in which two robots work in coordination and employ a vision sensor. One robot arm holds the welding target as a positioning device, while the other robot moves the welding torch. The vision sensor consists of two laser slit-ray projectors and one CCD TV camera, and is mounted on top of one robot. The sensor detects the three-dimensional shape of the groove on the target workpiece that needs to be welded, and the two robots move in coordination to trace the groove accurately. To realize fast image processing, five high-speed parallel processing units (transputers) are employed. This vision sensor considerably simplifies the teaching of the coordinated motions. Experimental results demonstrate the applicability of the system.

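The depth recovery behind a slit-ray sensor like the one above is classic triangulation: the projector and camera are separated by a known baseline, and the two viewing angles of the lit stripe point locate it. This is the standard geometry, not the paper's specific calibration; parameter names are illustrative:

```python
import math

def slit_ray_depth(pixel_angle, baseline, laser_angle):
    """Perpendicular distance of a laser-lit surface point from the
    projector-camera baseline. laser_angle: angle of the slit ray at
    the projector; pixel_angle: angle at which the camera sees the
    stripe; both measured from the baseline. From the law of sines:
        Z = baseline * sin(laser) * sin(pixel) / sin(laser + pixel)
    """
    return (baseline * math.sin(laser_angle) * math.sin(pixel_angle)
            / math.sin(laser_angle + pixel_angle))
```

Sweeping the slit across the groove yields one depth profile per image row, from which the 3D groove shape the two robots trace is reconstructed.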