• Title/Summary/Keyword: Mobile robot localization


Simultaneous Estimation of Landmark Location and Robot Pose Using Particle Filter Method (파티클 필터 방법을 이용한 특징점과 로봇 위치의 동시 추정)

  • Kim, Tae-Gyun;Ko, Nak-Yong;Noh, Sung-Woo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.3
    • /
    • pp.353-360
    • /
    • 2012
  • This paper describes a SLAM method which estimates landmark locations and the robot pose simultaneously. The particle filter can deal with the nonlinearity of robot motion as well as the non-Gaussian nature of the motion uncertainty and sensor error. The state to be estimated includes the locations of the landmarks in addition to the robot pose. In the experiment, four beacons that transmit ultrasonic signals are used as landmarks. The robot receives the ultrasonic signals from the beacons and measures the distance to each of them. The method uses a range scanning sensor to build geometric features of the environment. Since the robot location and heading are estimated by the particle filter, the scanned range data can be converted into a geometric map. The performance of the method is compared with that of dead reckoning and trilateration.

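The estimation loop described above is the standard predict–weight–resample particle filter. A minimal localization-only sketch, assuming four beacons at known positions and hypothetical noise parameters (the paper additionally estimates the beacon positions in the state vector):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: four ultrasonic beacons at the corners of a 4 m x 3 m
# room (the paper also estimates the beacon positions; fixed here for brevity).
beacons = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate each (x, y, heading) particle through a noisy unicycle model."""
    n = len(particles)
    v_n = v + rng.normal(0.0, noise[0], n)
    w_n = w + rng.normal(0.0, noise[1], n)
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def update(particles, ranges, sigma=0.1):
    """Weight particles by the likelihood of the measured beacon ranges, resample."""
    d = np.linalg.norm(particles[:, None, :2] - beacons[None], axis=2)
    w = np.exp(-0.5 * np.sum((d - ranges) ** 2, axis=1) / sigma ** 2) + 1e-300
    return particles[rng.choice(len(particles), len(particles), p=w / w.sum())]

# Stationary robot at (2.0, 1.5); noiseless simulated ranges to the beacons.
true_xy = np.array([2.0, 1.5])
ranges = np.linalg.norm(beacons - true_xy, axis=1)
particles = np.column_stack([rng.uniform(0, 4, 500),
                             rng.uniform(0, 3, 500),
                             rng.uniform(-np.pi, np.pi, 500)])
for _ in range(5):
    particles = predict(particles, 0.0, 0.0, 1.0)  # jitter only; robot is still
    particles = update(particles, ranges)
estimate = particles[:, :2].mean(axis=0)           # near (2.0, 1.5)
```

With real data, `predict` would be driven by odometry, and extending the state with the beacon coordinates, as the paper does, turns the same loop into SLAM.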
Development of a CAN-based Controller for Mobile Robots using a DSP TMS320C32 (DSP를 이용한 CAN 기반 이동로봇 제어기 개발)

  • Kim, Dong-Hun;You, Bum-Jae;Hwang-Bo, Myung;Lim, Myo-Taeg;Oh, Sang-Rok;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.2784-2786
    • /
    • 2000
  • Mobile robots include control modules for autonomous obstacle avoidance and navigation: range modules to detect and avoid obstacles, motor control modules to operate the two wheels, and encoder modules for localization. Each module requires an appropriate controller. In this paper, a control system including 18 channels for sonar sensors, 4 channels for PWM modules, and 4 channels for encoder modules is proposed using a TMS320C32 DSP with CAN. The board communicates with the other modules over CAN, so that mobile robots can perform several tasks in real time. The developed controller thus realizes an autonomous mobile robot with basic functions such as obstacle avoidance. In particular, the controller achieves a 100 ms scan time for 16 sonar sensors and can detect closer objects compared with standard sonar sensors.

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.3
    • /
    • pp.210-216
    • /
    • 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fish-eye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on the robot model and a curved movement path. Recent research has improved these algorithms for actual robots, but comparatively little work has used omni-directional vision SLAM to acquire the surrounding information at once. A robot running the proposed algorithm avoids obstacles according to an avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through comparison between the positions produced by the proposed algorithm and the real positions collected while avoiding the obstacles.

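The obstacle-detection step above relies on Lucas-Kanade optical flow, which solves a small least-squares system over an image window. A from-scratch, single-window sketch on synthetic images (window size and pattern are illustrative; a real pipeline would use an image pyramid and a library implementation such as OpenCV's):

```python
import numpy as np

def lk_flow(im1, im2, center, win=9):
    """Estimate the (dx, dy) motion of the patch around `center` between two
    frames by solving the Lucas-Kanade normal equations over the window."""
    y, x = center
    h = win // 2
    p1 = im1[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    p2 = im2[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    ix = (p1[1:-1, 2:] - p1[1:-1, :-2]) / 2.0   # spatial gradient along x
    iy = (p1[2:, 1:-1] - p1[:-2, 1:-1]) / 2.0   # spatial gradient along y
    it = p2[1:-1, 1:-1] - p1[1:-1, 1:-1]        # temporal difference
    a = np.column_stack([ix.ravel(), iy.ravel()])
    return np.linalg.lstsq(a, -it.ravel(), rcond=None)[0]

# Synthetic pattern shifted one pixel to the right between the two frames.
yy, xx = np.mgrid[0:40, 0:40]
im1 = np.sin(xx / 5.0) + np.cos(yy / 7.0)
im2 = np.sin((xx - 1) / 5.0) + np.cos(yy / 7.0)
flow = lk_flow(im1, im2, (20, 20))   # close to (1.0, 0.0)
```

In the paper's pipeline, flow vectors like this around a detected obstacle feed the avoidance-path and speed estimation.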
The Path Planning of a Navigation Algorithm using Dynamic Window Approach and Dijkstra (동적창과 Dijkstra 알고리즘을 이용한 항법 알고리즘에서 경로 설정)

  • Kim, Jae Joon;Jee, Gui-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.94-96
    • /
    • 2021
  • In this paper, we develop a new navigation algorithm for industrial mobile robots to reach a destination in an unknown environment. To achieve this, we suggest a navigation algorithm that combines the Dynamic Window Approach (DWA) with the Dijkstra path planning algorithm, and we compare it with the Local Dynamic Window Approach (LDWA), the Global Dynamic Window Approach (GDWA), and the Rapidly-exploring Random Tree (RRT) algorithm. The navigation algorithm that combines Dijkstra with the LDWA and GDWA enables mobile robots to reach the destination while avoiding the obstacles faced during the path planning process of the LDWA and GDWA. We then compare the time taken to arrive at the destination, the obstacle avoidance behavior, and the computational complexity of each algorithm. To overcome the remaining limitations, we seek ways to use the optimized navigation algorithm for industrial applications.

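In the combined planner above, Dijkstra is the global component: it searches a grid map for the shortest obstacle-free route, which the DWA then tracks locally. A minimal sketch on a hypothetical occupancy grid:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).
    Assumes the goal is reachable from the start."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        r, c = u
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = u
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                 # walk the predecessor chain back
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy map: a wall across row 1 with a gap at column 3.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = dijkstra_grid(grid, (0, 0), (2, 0))   # detours through the gap
```

The DWA half would then generate velocity commands that follow `path` while reacting to obstacles sensed along the way.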
Passive RFID system for Efficient Area Coverage Algorithm (Passive RFID 시스템을 이용한 효율적인 영역 탐색 기법)

  • Lee, Sangyup;Lee, Choong-Yong;Jo, Wonse;Nam, Sang Yep;Kim, Dong-Han
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.220-226
    • /
    • 2014
  • This paper proposes an enhanced fast scanning method for a multi-agent robot system. A passive RFID tag can store information and be read within the recognition range of an RF tag reader. Based on the information in the passive RFID tags, the position of a mobile robot can be estimated and, at the same time, the efficiency of the scanning process can be improved, because the tags provide a scanning trace for the other mobile robots. This paper proposes an efficient motion planning algorithm for mobile robots in a smart-floor environment.

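The core idea above, tags that remember they have been scanned so robots extend each other's traces instead of re-scanning, can be sketched with a dictionary standing in for the tag memory. The grid size, the idealized robot motion, and the nearest-unclaimed strategy are illustrative assumptions, not the paper's algorithm:

```python
from collections import deque

def nearest_unclaimed(start, claimed, w, h):
    """Breadth-first search for the closest cell whose tag has no owner yet."""
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) not in claimed:
            return (r, c)
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < h and 0 <= nxt[1] < w and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None   # every tag on the floor is already claimed

def cover_floor(w, h, starts):
    """Each robot writes its id to the tag under it, then heads for the
    nearest unclaimed tag (movement idealized as a direct jump)."""
    claimed, pos = {}, list(starts)       # `claimed` models the tag memories
    while len(claimed) < w * h:
        for rid, p in enumerate(pos):
            claimed.setdefault(p, rid)    # scan + write this robot's trace
            target = nearest_unclaimed(p, claimed, w, h)
            if target is not None:
                pos[rid] = target
    return claimed

coverage = cover_floor(4, 3, [(0, 0), (2, 3)])   # two robots, 4 x 3 floor
```

Because both robots read the shared tag state, no cell is scanned twice on purpose, which is the efficiency gain the paper targets.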
3D Range Measurement using Infrared Light and a Camera (적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발)

  • Kim, In-Cheol;Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.10
    • /
    • pp.1005-1013
    • /
    • 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but too expensive. Those sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of getting the depth information of a 3D environment. However, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Identification of the cells of the pattern is the key issue in the proposed method. Several methods of correctly identifying the cells are discussed and verified with experiments.

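The sensor above recovers depth by triangulating between the projected pattern and the camera; the paper's contribution is reliably identifying which pattern cell is which, after which the geometry is the standard pinhole relation. A sketch with hypothetical intrinsics and baseline:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth along the optical axis from projector-camera triangulation:
    z = f * b / d, with focal length f in pixels and baseline b in meters."""
    return f_px * baseline_m / disparity_px

def pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project a pixel with known depth into a 3D camera-frame point."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Hypothetical numbers: 500 px focal length, 10 cm baseline, 25 px disparity.
z = depth_from_disparity(500.0, 0.10, 25.0)                     # 2.0 m
point = pixel_to_point(420.0, 240.0, z, 500.0, 500.0, 320.0, 240.0)
```

Applying this to every identified cell of the projected pattern yields the 3D depth map the paper uses for obstacle detection and map building.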
3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but too expensive. Those sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of getting the depth information of a 3D environment. However, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and then right infrared light projection, provide several benefits, including a wider area of depth measurement, higher spatial resolution, and visibility perception.

A new Observation Model to Improve the Consistency of EKF-SLAM Algorithm in Large-scale Environments (광범위 환경에서 EKF-SLAM의 일관성 향상을 위한 새로운 관찰모델)

  • Nam, Chang-Joo;Kang, Jae-Hyeon;Doh, Nak-Ju Lett
    • The Journal of Korea Robotics Society
    • /
    • v.7 no.1
    • /
    • pp.29-34
    • /
    • 2012
  • This paper suggests a new observation model for Extended Kalman Filter based Simultaneous Localization and Mapping (EKF-SLAM). Since the EKF framework linearizes non-linear functions around the current estimate, the conventional line model has large linearization errors when a mobile robot is located far away from its initial position. On the other hand, the model that we propose yields less linearization error with respect to the landmark position and is thus suitable for a large-scale environment. To achieve this, we build a three-dimensional space by adding a virtual axis to the robot's two-dimensional coordinate system and extract a plane using a line detected in the two-dimensional space and the virtual axis. Since the Jacobian matrix with respect to the landmark position has small values, the positions of landmarks can be estimated better than with the conventional line model. The simulation results verify that the new model yields smaller linearization errors than the conventional line model.

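The consistency issue stems from how the EKF linearizes: the observation Jacobian is evaluated at the current estimate, and the further that estimate drifts, the worse the approximation. A minimal sketch of that step for a simple range-to-landmark observation (not the paper's plane-based line model; all numbers are toy values):

```python
import numpy as np

def ekf_range_update(mu, P, landmark, z, R=0.01):
    """One EKF correction with a range observation h(mu) = ||mu - landmark||.
    H is the Jacobian of h at the current estimate -- the linearization
    whose error the paper's new observation model is designed to reduce."""
    d = mu - landmark
    h = np.linalg.norm(d)                   # predicted range
    H = np.array([[d[0] / h, d[1] / h]])    # dh/d(x, y), evaluated at mu
    S = (H @ P @ H.T + R).item()            # innovation variance
    K = P @ H.T / S                         # Kalman gain, shape (2, 1)
    mu = mu + (K * (z - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return mu, P

landmark = np.array([4.0, 5.0])
mu, P = np.array([1.0, 1.0]), 0.5 * np.eye(2)   # pose estimate and covariance
z = 4.472                                        # measured range (toy value)
mu2, P2 = ekf_range_update(mu, P, landmark, z)
```

The update pulls the estimate toward poses consistent with the measurement and shrinks the covariance; when `H` is evaluated far from the true pose, that shrinkage can be over-confident, which is the inconsistency the paper addresses.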
Arc/Line Segments-based SLAM by Updating Accumulated Sensor Data (누적 센서 데이터 갱신을 이용한 아크/라인 세그먼트 기반 SLAM)

  • Yan, Rui-Jun;Choi, Youn-sung;Wu, Jing;Han, Chang-soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.10
    • /
    • pp.936-943
    • /
    • 2015
  • This paper presents arc/line segments-based Simultaneous Localization and Mapping (SLAM) by updating accumulated laser sensor data with a mobile robot moving in an unknown environment. For each scan, the sensor data in a set are stored as a small, constant number of parameters from which the necessary information contained in the raw data of the group can be recovered. Arc and line segments are then extracted according to different limit values, but based on the same parameters. If two segments from two scans are matched successfully, whether they are homogeneous features or not, the new segment is extracted from the union set, with the combined data information obtained by summing the equivalent parameters of the two sets rather than combining the features directly. The covariance matrices of the segments are also updated and calculated synchronously using the same parameters. Experimental results obtained in an irregular indoor environment show the good performance of the proposed method.

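The accumulated-parameter idea means each segment is summarized by a constant-size vector of sums, and merging two matched segments is just adding the vectors before refitting. A sketch for the line case (the arc case accumulates analogous sums; the data are illustrative):

```python
import numpy as np

def moments(pts):
    """Constant-size summary (n, Σx, Σy, Σx², Σxy, Σy²) of a point set --
    enough to refit a line later without keeping the raw points."""
    x, y = pts[:, 0], pts[:, 1]
    return np.array([len(pts), x.sum(), y.sum(),
                     (x * x).sum(), (x * y).sum(), (y * y).sum()])

def fit_line(m):
    """Total-least-squares line (centroid and direction angle) from moments."""
    n, sx, sy, sxx, sxy, syy = m
    cx, cy = sx / n, sy / n
    # central second moments, recovered from the raw sums
    uxx, uxy, uyy = sxx - n * cx * cx, sxy - n * cx * cy, syy - n * cy * cy
    theta = 0.5 * np.arctan2(2.0 * uxy, uxx - uyy)
    return (cx, cy), theta

# Two scans observing the same wall (y = 2x): merge by summing the parameters.
scan_a = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
scan_b = np.array([[3.0, 6.0], [4.0, 8.0]])
merged = moments(scan_a) + moments(scan_b)
(cx, cy), theta = fit_line(merged)   # centroid (2, 4), slope tan(theta) = 2
```

Summing the parameter vectors gives exactly the moments of the pooled raw data, which is why the merged fit matches fitting all points at once.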
The Implementation of Graph-based SLAM Using General Graph Optimization (일반 그래프 최적화를 활용한 그래프 기반 SLAM 구현)

  • Ko, Nak-Yong;Chung, Jun-Hyuk;Jeong, Da-Bin
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.4
    • /
    • pp.637-644
    • /
    • 2019
  • This paper describes an implementation of a graph-based simultaneous localization and mapping (SLAM) method using General Graph Optimization. General Graph Optimization formulates the SLAM problem using nodes and edges: the nodes represent the location and attitude of a robot in time sequence, and the edges between nodes represent the constraints between them, imposed by sensor measurements. General Graph Optimization solves the problem by optimizing a performance index determined by the constraints. The implementation is verified using measurement data sets that are openly available for testing various SLAM methods.
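
In its simplest form, the node/edge formulation reduces to least squares over relative measurements. A toy one-dimensional pose graph (hypothetical numbers; the General Graph Optimization library handles full 2D/3D poses and iterates a nonlinear version of this solve):

```python
import numpy as np

# Toy 1-D pose graph: three pose nodes, two odometry edges, one loop closure.
# Each edge (i, j, z) constrains x_j - x_i ≈ z; the numbers are illustrative.
edges = [(0, 1, 1.0),   # odometry
         (1, 2, 1.1),   # odometry (slightly drifted)
         (0, 2, 2.0)]   # loop closure

n_nodes = 3
A = np.zeros((len(edges) + 1, n_nodes))
b = np.zeros(len(edges) + 1)
for row, (i, j, z) in enumerate(edges):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0] = 1.0   # gauge constraint: anchor the first pose at x_0 = 0

# Minimize the summed squared edge errors (linear here, so one solve suffices;
# for 2D/3D poses the same minimization is iterated with Gauss-Newton).
x = np.linalg.lstsq(A, b, rcond=None)[0]
# The optimized poses spread the 0.1 inconsistency between odometry and the
# loop closure across all edges instead of leaving it at the last node.
```

This is the performance index the abstract refers to: the sum of squared constraint errors over all edges, minimized over the node poses.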