• Title/Summary/Keyword: Robot vision

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4103-4117 / 2014
  • Image processing and computer vision algorithms are receiving growing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines need to be sped up; therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In that algorithm, the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the ability to resolve vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
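
The per-camera decomposition described above maps naturally onto one task per camera. The following is a minimal sketch, not the authors' implementation: names such as predict_measurements and the toy projection model are illustrative assumptions, and only the per-camera measurement predictions of an EKF-style estimator are parallelized.

```python
# Sketch of medium-level parallelism for multi-camera pose estimation:
# the expensive per-camera feature projections run concurrently; the
# EKF update itself stays sequential.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

NUM_CAMERAS = 4  # two back-to-back pairs, as in the paper

def predict_measurements(cam_idx, state, features):
    """Predict the measurements of the features seen by one camera.
    A toy 2D rigid transform stands in for the real projection model."""
    x, y, theta = state
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Transform world features into the robot frame (illustrative only).
    return (R.T @ (features - np.array([x, y])).T).T

def parallel_measurement_step(state, features_per_camera):
    # Independent cameras -> independent tasks.
    with ThreadPoolExecutor(max_workers=NUM_CAMERAS) as pool:
        predicted = list(pool.map(
            lambda i: predict_measurements(i, state, features_per_camera[i]),
            range(NUM_CAMERAS)))
    return np.vstack(predicted)  # stacked innovations feed the EKF update

state = np.array([1.0, 2.0, 0.1])  # x, y, heading
features = [np.random.rand(10, 2) for _ in range(NUM_CAMERAS)]
print(parallel_measurement_step(state, features).shape)  # (40, 2)
```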

Vision-based Self Localization Using Ceiling Artificial Landmark for Ubiquitous Mobile Robot (유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법)

  • Lee Ju-Sang;Lim Young-Cheol;Ryoo Young-Jae
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.560-566 / 2005
  • In this paper, a practical technique for correcting a distorted image for the vision-based localization of a ubiquitous mobile robot is proposed. The localization of a mobile robot is essential and is realized using a camera vision system. In order to widen the view angle of the camera, the vision system includes a fish-eye lens, which distorts the image. Because a mobile robot moves rapidly, the image processing should be fast enough to keep the localization current. Thus, we propose a practical correction technique for the distorted image and verify its performance by experimental tests.
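
The abstract does not state the correction formula, so the following is a minimal sketch of one common choice, the one-parameter division model, where the undistorted radius is recovered as r_u = r_d / (1 + k·r_d²); the coefficient k and the image center are illustrative values that would normally come from calibration.

```python
# Sketch of radial (fish-eye/barrel) distortion correction with a
# one-parameter division model. k and center are assumed, not calibrated.
import numpy as np

def undistort_points(pts, center, k=-1e-6):
    """Map distorted pixel coordinates to corrected ones."""
    d = pts - center                         # offsets from the optical center
    r2 = np.sum(d**2, axis=1, keepdims=True) # squared distorted radius
    return center + d / (1.0 + k * r2)       # r_u = r_d / (1 + k * r_d^2)

center = np.array([320.0, 240.0])
pts = np.array([[100.0, 80.0], [540.0, 400.0]])
print(undistort_points(pts, center))
```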

Design of Autonomous Stair Robot System (자율주행 형 계단 승하강용 로봇 시스템 설계)

  • 홍영호;김동환;임충혁
    • Journal of Institute of Control, Robotics and Systems / v.9 no.1 / pp.73-81 / 2003
  • An autonomous stair robot that recognizes a stair and climbs up and down it by utilizing robot vision, photo sensors, and an appropriate climbing algorithm is introduced. Four arms associated with four wheels make the robot climb up and down more safely and faster than a simple track-type robot. The robot can adjust its wheelbase according to the stair width; hence, it can adapt to stairs of variable width, using different algorithms for climbing up and down. The command and image data acquired from the robot are transferred to the main computer through RF wireless modules, and the data are delivered to a remote computer over a network connection after suitable data compression; thus, real-time image monitoring is implemented effectively.

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.12 no.4 / pp.389-394 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, across different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing that noise, but the achievable accuracy is limited because most of this work is statistical in nature. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique has been applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, while others use specially designed markers, and in this paper specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks, as sketched below. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D scene of the vision is overlaid with the VRML scene.
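
A minimal sketch of that geometric step: with a known focal length f (in pixels), each landmark's image coordinate defines a line of sight, and the angle between two sights follows from the dot product of the corresponding viewing rays. All numeric values are illustrative.

```python
# Angular separation between the lines of sight of two landmarks,
# given their image columns, the focal length f, and principal point cx.
import numpy as np

def angular_separation(u1, u2, f, cx):
    """Angle (radians) between the viewing rays of two landmarks."""
    r1 = np.array([u1 - cx, f])  # ray toward landmark 1
    r2 = np.array([u2 - cx, f])  # ray toward landmark 2
    cosang = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Landmarks seen at columns 220 and 450 with f = 800 px, cx = 320 px:
print(np.degrees(angular_separation(220.0, 450.0, 800.0, 320.0)))
```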

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.5 / pp.434-443 / 2007
  • Presently, the exploration of an unknown environment is an important task for the new generation of mobile service robots, and mobile robots are navigated by a number of methods, using navigation systems such as sonar sensing or visual sensing. To fully utilize the strengths of both the sonar and the visual sensing system, this paper presents a technique for localizing a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. In the case of ultrasonic sensors, these features carry range information in the form of a circular arc, generally named an RCD (Region of Constant Depth). Localization is the continual provision of knowledge of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot, as sketched below. The performance and simplicity of the approach are demonstrated with the results of sets of experiments using a mobile robot.
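
A minimal sketch of such an EKF correction step, under assumptions of my own (a range-to-known-feature measurement and a 3-state pose), not the authors' exact filter:

```python
# One EKF measurement update: a sonar RCD yields a range z to a known
# feature, and the predicted pose is corrected via the linearized model.
import numpy as np

def ekf_range_update(x, P, z, landmark, R_noise):
    """EKF update for pose x = (x, y, theta) with a range observation."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)                     # predicted range h(x)
    H = np.array([[-dx / r, -dy / r, 0.0]])  # Jacobian of h wrt the pose
    S = H @ P @ H.T + R_noise                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K * (z - r)).ravel()            # state correction
    P = (np.eye(3) - K @ H) @ P              # covariance correction
    return x, P

x = np.array([0.0, 0.0, 0.0])  # pose: x, y, heading
P = np.eye(3) * 0.5
x, P = ekf_range_update(x, P, z=4.9, landmark=(3.0, 4.0),
                        R_noise=np.array([[0.04]]))
print(x)
```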

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.04a / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, across different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing that noise, but the achievable accuracy is limited because most of this work is statistical in nature. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique has been applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, while others use specially designed markers, and in this paper specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D scene of the vision is overlaid with the VRML scene, as sketched below.
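
A minimal sketch of the overlay step: once the pose is known, the VRML model's 3D points can be projected into the camera image with a pinhole model and drawn over the live frame. The intrinsics and model points below are illustrative assumptions.

```python
# Project 3D model points into the image using the estimated camera pose,
# so the rendered model can be overlaid on the live video frame.
import numpy as np

def project_points(X_world, R, t, f, cx, cy):
    """Project Nx3 world points to pixel coordinates (pinhole model)."""
    X_cam = (R @ X_world.T).T + t            # world -> camera frame
    u = f * X_cam[:, 0] / X_cam[:, 2] + cx   # perspective division
    v = f * X_cam[:, 1] / X_cam[:, 2] + cy
    return np.column_stack([u, v])

R = np.eye(3)                 # camera orientation from self-localization
t = np.array([0.0, 0.0, 2.0]) # camera translation from self-localization
model = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])
print(project_points(model, R, t, f=800.0, cx=320.0, cy=240.0))
```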

The compensation of kinematic differences of a robot using image information (화상정보를 이용한 로봇기구학의 오차 보정)

  • Lee, Young-Jin;Lee, Min-Chul;Ahn, Chul-Ki;Son, Kwon;Lee, Jang-Myung
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1997.10a / pp.1840-1843 / 1997
  • The task environment of a robot is changing rapidly, and tasks themselves are becoming complicated, due to the current industrial trends of multi-product, small-lot-size production. An off-line programming (OLP) system with a convenient user interface is being developed in order to overcome the difficulty of teaching a robot task. Using the OLP system, operators can easily teach robot tasks off-line and verify the feasibility of a task through simulation of the robot prior to on-line execution. However, some task errors are inevitable because of kinematic differences between the robot model in the OLP system and the actual robot. Three calibration methods using image information are proposed to compensate for these kinematic differences: a relative position vector method, a three-point compensation method, and a baseline compensation method. To compensate for the kinematic differences, a vision system with one monochrome camera is used in the calibration experiments.
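
One way to realize such a compensation, sketched here under my own assumptions (the abstract does not detail the three methods): estimate the rigid transform between the OLP model frame and the actual robot from three corresponding points measured by the camera, using the SVD-based Kabsch/Procrustes solution.

```python
# Estimate R, t with actual ≈ R @ model + t from matched 3D points,
# then apply that transform to correct OLP-planned positions.
import numpy as np

def rigid_transform(model_pts, actual_pts):
    """SVD-based (Kabsch) fit of a proper rigid transform."""
    cm, ca = model_pts.mean(0), actual_pts.mean(0)
    H = (model_pts - cm).T @ (actual_pts - ca)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # rotation, not reflection
    return R, ca - R @ cm

model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
actual = model @ Rz90.T + 0.01                   # simulated measured points
R, t = rigid_transform(model, actual)
print(np.round(R, 3), np.round(t, 3))
```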

Mobile Robot Destination Generation by Tracking a Remote Controller Using a Vision-aided Inertial Navigation Algorithm

  • Dang, Quoc Khanh;Suh, Young-Soo
    • Journal of Electrical Engineering and Technology / v.8 no.3 / pp.613-620 / 2013
  • A new remote control algorithm for a mobile robot is proposed, where the remote controller consists of a camera and inertial sensors. Initially, the relative position and orientation of the robot are estimated by capturing four circle landmarks on the plate of the robot. When the remote controller moves to point at the destination, the camera pointing trajectory is estimated using an inertial navigation algorithm. The destination is transmitted wirelessly to the robot, and the robot is then controlled to move to that destination. Quick movement of the remote controller is possible since the destination is estimated using inertial sensors. Also, unlike vision-only control, the robot may move outside the camera's field of view.
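
A minimal sketch of the initialization step: the relative pose between the controller's camera and the robot plate can be recovered from the four circle landmarks with a standard PnP solve; the landmark layout, pixel detections, and camera intrinsics below are illustrative assumptions, not the paper's values.

```python
# Relative pose from four coplanar circle-landmark centers via OpenCV PnP.
import numpy as np
import cv2

# Assumed square layout of the circle centers on the robot plate (meters).
object_pts = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0],
                       [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])
# Assumed detected image centers (pixels) and pinhole intrinsics.
image_pts = np.array([[300., 260.], [340., 262.], [338., 300.], [298., 298.]])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())
```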

A development of remote measurement robot with vision system (원격 화상 계측 로봇 개발)

  • 양광용;최현석;현웅근
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2001.10a / pp.375-379 / 2001
  • This paper describes the development of a remote measurement robot with a vision system. The developed system consists of a robot controller and a host PC program. The robot and its camera can each be moved with two degrees of freedom by remote control using a user-friendly joystick. The visual image and the command data are transmitted through 900 MHz and 447 MHz RF modules, respectively. To show the validity of the developed system, operations of the robot in a field area are illustrated.

A development of remote controlled mobile robot working in a hazard environment (위해환경에서 구동가능한 원격제어 이동 로봇 개발)

  • 박제용;최현석;현웅근
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.11a / pp.457-461 / 2002
  • This paper describes the development of a robot that works in a hazardous environment. The developed system consists of a robot controller with a vision system and a host PC program. The robot and its camera can each be moved with two degrees of freedom by remote control using a user-friendly joystick. The environment is recognized by the vision system and ultrasonic sensors. The visual image and the command data are transmitted through 900 MHz and 447 MHz RF modules, respectively. To show the validity of the developed system, operations of the robot in a field area are illustrated.
