• Title/Summary/Keyword: Omnidirectional mobile robots


Active omni-directional range sensor for mobile robot navigation

  • 정인수;조형석
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.824-827 / 1996
  • Most autonomous mobile robots can only see what is in front of them. As a result, they may collide with objects approaching from the side or from behind. To overcome this problem, we have built an active omnidirectional range sensor that obtains omnidirectional depth data using a laser conic plane and a conic mirror. During navigation, the proposed sensor system forms the laser conic plane by rotating a laser point source at high speed and acquires a two-dimensional depth map in real time from a single image capture. The experimental results show that the proposed sensor system has strong potential for mobile robot navigation in uncertain environments.
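
To illustrate the sensing principle, here is a minimal sketch of how a depth map might be recovered from one such image: the laser ring appears at a pixel radius that depends on object range, so the brightest radial pixel at each azimuth can be mapped to a distance. The calibration table, image-center convention, and the inverse radius-to-range mapping below are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np

# Hypothetical calibration: laser-ring pixel radius -> range in meters.
# In the real sensor this mapping follows from the laser-cone and
# conic-mirror geometry; here a monotonic lookup table stands in for it.
CAL_RADII = np.linspace(40, 200, 161)        # laser-spot radius in pixels
CAL_RANGES = 5.0 * CAL_RADII[0] / CAL_RADII  # assumed inverse mapping

def omni_depth_map(image, center, n_azimuths=360):
    """Return one range per azimuth from a single omnidirectional image."""
    cx, cy = center
    radii = np.arange(int(CAL_RADII[0]), int(CAL_RADII[-1]))
    ranges = np.zeros(n_azimuths)
    for i in range(n_azimuths):
        phi = 2.0 * np.pi * i / n_azimuths
        # Sample pixel intensities along the radial line at this azimuth.
        xs = (cx + radii * np.cos(phi)).astype(int)
        ys = (cy + radii * np.sin(phi)).astype(int)
        profile = image[ys, xs]
        # The brightest pixel is the laser spot; its radius encodes range.
        r_pix = radii[np.argmax(profile)]
        ranges[i] = np.interp(r_pix, CAL_RADII, CAL_RANGES)
    return ranges
```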


Development of High Precision Docking Sensor for Mobile Robot

  • Yoon, Nam-Il;Choi, Jong-Kap;Byun, Kyung-Seok
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.348-354 / 2011
  • Mobile robots perform various missions in various environments. To move to a target precisely, a mobile robot needs a precise position sensing system. In this paper, a new high-precision docking sensor is proposed. The proposed docking sensor consists of a linear CCD (charge-coupled device) sensor and ultrasonic sensors. The sensor system can measure the lateral position (X), longitudinal position (Y), and angle (θ) between the sensor and a flat target carrying a simple mark. Two ultrasonic sensors measure two distances, which are converted to the longitudinal position and angle, while the linear CCD sensor measures the lateral position using the center mark of the target. To verify its performance, the sensor was applied to an omnidirectional mobile robot. Several experimental results show the highly precise performance of the sensor: the repeatability of the docking sensor is less than 1 mm and 0.2°. The proposed docking sensor can be applied to the precise docking of mobile robots.
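
The geometry described above reduces to a few lines of trigonometry. Below is a minimal sketch, assuming a known spacing between the two ultrasonic sensors and a fixed pixel pitch for the linear CCD; the parameter values are placeholders, not the paper's calibration.

```python
import math

def docking_pose(d_left, d_right, mark_px,
                 baseline=0.20, m_per_px=0.0005, ccd_mid_px=1024):
    """Estimate (X, Y, theta) relative to a flat target with a center mark.

    d_left, d_right -- ranges from the two ultrasonic sensors [m]
    mark_px         -- pixel index of the center mark on the linear CCD
    baseline, m_per_px, ccd_mid_px -- assumed mounting/calibration values
    """
    # Two ranges to a flat surface give the heading error directly.
    theta = math.atan2(d_right - d_left, baseline)
    # Longitudinal distance: mean range projected onto the target normal.
    y = 0.5 * (d_left + d_right) * math.cos(theta)
    # Lateral offset from the mark's displacement on the linear CCD.
    x = (mark_px - ccd_mid_px) * m_per_px
    return x, y, theta
```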

2D Map Generation Using Omnidirectional Image Sensor and Stereo Vision for Mobile Robot MAIRO

  • Kim, Kyung-Ho;Lee, Hyung-Kyu;Son, Young-Jun;Song, Jae-Keun
    • Proceedings of the KIEE Conference / 2002.11c / pp.495-500 / 2002
  • Recently, the service robot industry has stood out as an up-and-coming industry of the next generation. In particular, there is much research on self-steering movement (SSM). To implement SSM, a robot must effectively perceive its surroundings, detect objects, and build a map of the environment with its sensors. Many robots therefore carry sonar, infrared sensors, and the like, but these sensors only give the distance between the robot and an object, and their resolution is poor. In this paper, we introduce a new algorithm that recognizes objects around the robot and builds a two-dimensional map of the surroundings using an omnidirectional vision camera and two stereo vision cameras.
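
A common way to fuse such detections into a 2D map is a simple occupancy grid that accumulates evidence from the vision system. The sketch below is a generic illustration of that step, not MAIRO's actual mapping code; the grid resolution and the robot-relative hit coordinates are assumptions.

```python
import numpy as np

def update_grid(grid, robot_xy, hits, resolution=0.05):
    """Accumulate vision-detected object points into a 2-D occupancy grid.

    grid       -- 2-D array of occupancy counts (rows = y, cols = x)
    robot_xy   -- robot position in world coordinates [m]
    hits       -- (x, y) object points relative to the robot [m]
    resolution -- cell size [m] (assumed)
    """
    for hx, hy in hits:
        wx, wy = robot_xy[0] + hx, robot_xy[1] + hy  # robot frame -> world
        i, j = int(wy / resolution), int(wx / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] += 1  # evidence that this cell holds an object
    return grid
```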


Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omnidirectional vision simultaneous localization and mapping (SLAM). The method estimates a robot's avoidance path and speed from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robot. Conventional methods generate avoidance paths by constructing an artificial force field around obstacles found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on the robot model and a curved movement path. Recent research has improved such algorithms for real robots, but robots that use omnidirectional vision SLAM to acquire information about the entire surroundings at once have been comparatively little studied. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map built by omnidirectional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the positions estimated by the proposed algorithm and the real positions recorded while avoiding obstacles.
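
The obstacle-detection step relies on pyramidal Lucas-Kanade optical flow between consecutive frames. A generic OpenCV sketch of that step is shown below; the feature-detector parameters are illustrative choices, and the paper's fisheye-specific handling is not reproduced.

```python
import cv2
import numpy as np

def track_obstacle_motion(prev_gray, cur_gray):
    """Pyramidal Lucas-Kanade flow between consecutive grayscale frames.

    Returns matched (previous, current) point arrays whose displacement
    vectors indicate obstacle motion in the fisheye image.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:                      # no trackable features found
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                 pts, None)
    good = status.ravel() == 1           # keep successfully tracked points
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
```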

Biologically inspired modular neural control for a leg-wheel hybrid robot

  • Manoonpong, Poramate;Wörgötter, Florentin;Laksanacharoen, Pudit
    • Advances in Robotics Research / v.1 no.1 / pp.101-126 / 2014
  • In this article we present modular neural control for a leg-wheel hybrid robot consisting of three legs with omnidirectional wheels. The control is built from four main modules whose functional origins lie in biological neural systems. A minimal recurrent control (MRC) module performs sensory signal processing and state memorization; its outputs drive the two front wheels, while the rear wheel is controlled through a velocity regulating network (VRN) module. In parallel, a neural oscillator network module serves as a central pattern generator (CPG) that controls leg movements for sidestepping, and stepping directions are set by a phase switching network (PSN) module. The combination of these modules generates various locomotion patterns and a reactive obstacle avoidance behavior. The behavior is driven by sensor inputs, to which additional neural preprocessing networks are applied. The complete neural circuitry is developed and tested in a physics simulation environment. This study verifies that the neural modules can serve a general purpose regardless of the robot's specific embodiment. We also believe that our neural modules can be important components for locomotion generation in other complex robotic systems, or serve as useful modules for other module-based neural control applications.
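
CPGs of this kind are often realized as a tiny recurrent network of two neurons whose weight matrix is parameterized by a rotation angle (the so-called SO(2) oscillator). The sketch below shows that common formulation; the gain and phase values are illustrative and this is not the paper's exact network.

```python
import math

def so2_cpg(steps=200, phi=0.2, alpha=1.01):
    """Two-neuron recurrent oscillator; phi sets the frequency, and
    alpha > 1 keeps the oscillation alive despite tanh saturation."""
    w11 = w22 = alpha * math.cos(phi)    # self-connections
    w12 = alpha * math.sin(phi)          # cross-connections form a
    w21 = -w12                           # rotation-like weight matrix
    o1, o2 = 0.1, 0.0                    # small kick to start oscillating
    outputs = []
    for _ in range(steps):
        o1, o2 = (math.tanh(w11 * o1 + w12 * o2),
                  math.tanh(w21 * o1 + w22 * o2))
        outputs.append((o1, o2))         # periodic signals for leg motors
    return outputs
```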

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omnidirectional vision SLAM based on extracting an obstacle's features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fisheye lenses mounted on robots. Omnidirectional image sensors suffer from distortion because they use a fisheye lens or mirror, but they allow real-time image processing on mobile robots because all information around the robot is captured at once. Previous omnidirectional vision SLAM research used feature points from fully corrected fisheye images; the proposed algorithm corrects only the obstacle's feature points, which makes processing faster than in previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fisheye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, we estimate the robot's position using an extended Kalman filter based on the obstacle positions obtained by LKOF, and build a map. We confirm the reliability of the motion-estimation-based mapping algorithm through a comparison between maps obtained with the proposed algorithm and real maps.
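
The final step is a standard EKF cycle over the robot pose, with LKOF-derived obstacle positions as measurements. Below is a textbook predict/update sketch of that step; the motion and measurement models are left as caller-supplied functions because the paper's specific models are not reproduced here.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF predict/update cycle.

    x, P     -- state mean and covariance
    u, z     -- control (odometry) input and LKOF obstacle observation
    f, F_jac -- motion model and its Jacobian (caller-supplied)
    h, H_jac -- measurement model and its Jacobian (caller-supplied)
    Q, R     -- process and measurement noise covariances
    """
    # Predict: propagate the state through the motion model.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the obstacle observation.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```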

Driving Control System applying Position Recognition Method of Ball Robot using Image Processing

  • Heo, Nam-Gyu;Lee, Kwang-Min;Park, Seong-Hyun;Kim, Min-Ji;Park, Sung-Gu;Chung, Myung-Jin
    • Journal of IKEEE / v.25 no.1 / pp.148-155 / 2021
  • As robot technology advances, research on driving systems for mobile robots is being actively conducted. Driving systems based on two or four wheels have an advantage in unidirectional driving, such as following a straight line, but are at a disadvantage when changing direction or rotating in place. A ball robot, which uses a ball as its wheel, has an advantage in omnidirectional movement, but because it is structurally unstable it requires balancing control to maintain its attitude and driving control to move. Conventional ball robots estimate their position from encoders attached to the motors, which causes errors to accumulate during driving control. In this study, a driving control system is proposed that estimates the position coordinates of the ball robot through image processing and uses them for driving control. A driving control system comprising an image processing unit, a communication unit, a display unit, and a control unit for estimating the ball robot's position was designed and built. Driving experiments with this system confirmed that the ball robot was controlled within an error range of ±50.3 mm along the x-axis and ±53.9 mm along the y-axis, without accumulating errors.
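
The image-processing unit's job, estimating planar robot coordinates from a camera view, can be sketched with a simple color-marker centroid method. The HSV thresholds, overhead-camera setup, and millimeter-per-pixel scale below are illustrative assumptions; the paper's actual processing pipeline is not reproduced.

```python
import cv2
import numpy as np

def ball_robot_position(frame_bgr, lower_hsv, upper_hsv, mm_per_px=1.0):
    """Estimate the robot's (x, y) in millimeters from an overhead camera
    by thresholding a colored marker and taking the mask's centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                      # marker not visible in this frame
    cx = m["m10"] / m["m00"]             # centroid in pixel coordinates
    cy = m["m01"] / m["m00"]
    return cx * mm_per_px, cy * mm_per_px
```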