• Title/Summary/Keyword: Dead Reckoning Method

The Posture Estimation of Mobile Robots Using Sensor Data Fusion Algorithm (센서 데이터 융합을 이용한 이동 로보트의 자세 추정)

  • 이상룡;배준영
    • Transactions of the Korean Society of Mechanical Engineers, v.16 no.11, pp.2021-2032, 1992
  • A redundant sensor system, consisting of two incremental encoders and a gyro sensor, is proposed for estimating the posture of mobile robots. A hardware system was built to estimate the heading angle change of the mobile robot from the outputs of the gyro sensor, and it produced an accurate estimate of that change. A sensor data fusion algorithm was developed to find the optimal estimate of the heading angle change based on the stochastic measurement equations of the redundant sensor system. The maximum likelihood estimation method is applied to combine the noisy measurement data from both the encoders and the gyro sensor. In various navigation experiments, the proposed fusion algorithm demonstrated satisfactory performance, showing significantly reduced estimation error compared to the conventional method.
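
For independent, unbiased Gaussian measurements of the same heading change, the maximum likelihood combination reduces to inverse-variance weighting. A minimal sketch of that fusion step (function name and noise values are illustrative, not taken from the paper):

```python
def fuse_heading_change(z_enc, var_enc, z_gyro, var_gyro):
    """ML fusion of two independent, unbiased measurements of the same
    heading change under Gaussian noise: inverse-variance weighting,
    which also minimizes the variance of the fused estimate."""
    w_enc = 1.0 / var_enc
    w_gyro = 1.0 / var_gyro
    fused = (w_enc * z_enc + w_gyro * z_gyro) / (w_enc + w_gyro)
    fused_var = 1.0 / (w_enc + w_gyro)  # always <= min(var_enc, var_gyro)
    return fused, fused_var
```

Note that the fused variance is never larger than either sensor's own variance, which is why the redundant encoder/gyro pair outperforms either sensor alone.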

Research of MEMS INS Based 3D Positioning Technologies for Workers in Construction Field (MEMS INS 기반 건설현장작업자의 3D 위치결정기법에 관한 연구)

  • Jang, Yonggu;Kim, Hyunsoo;Do, Seungbok;Jeon, Heungsoo
    • Journal of the Korean GEO-environmental Society, v.14 no.3, pp.51-60, 2013
  • A new method is proposed for calculating the absolute altitude and horizontal position of a worker in a construction field. A pressure sensor and a MEMS INS sensor were used to acquire the 3D position of the worker. The simplest way to demonstrate the results is with a smartphone, whose hardware includes various digital sensors, so two pieces of software were developed: a data acquisition application for Android smartphones and a data monitoring application for the PC. Several kinds of problems encountered during the research had to be overcome; this paper describes those processes and the results of the newly suggested 3D positioning techniques.
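
Absolute altitude from a pressure sensor is typically computed with the international barometric formula. A minimal sketch under standard-atmosphere assumptions (the paper's exact calibration is not given, so the constants and reference pressure below are assumptions):

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Altitude in metres from static pressure (hPa), using the
    international barometric formula with a sea-level reference p0.
    In practice p0 is updated from a local weather report or a
    known-altitude calibration point."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

Calibrating `p0_hpa` on site matters: a 1 hPa error in the reference shifts the computed altitude by roughly 8 m near sea level.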

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems, v.19 no.8, pp.667-675, 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique to be applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
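
A weighted correlation coefficient for template matching can be sketched as a per-pixel-weighted version of the normalized correlation score. The weighting scheme and names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_corr(template, patch, weights):
    """Weighted correlation coefficient between a template and an
    image patch of the same shape.  Weights emphasize pixels judged
    reliable despite low underwater visibility; uniform weights
    recover the ordinary correlation coefficient."""
    w = weights / weights.sum()                    # normalize weights
    mt = (w * template).sum()                      # weighted means
    mp = (w * patch).sum()
    cov = (w * (template - mt) * (patch - mp)).sum()
    st = np.sqrt((w * (template - mt) ** 2).sum())
    sp = np.sqrt((w * (patch - mp) ** 2).sum())
    return cov / (st * sp)                         # in [-1, 1]
```

In a matcher, this score is evaluated over candidate patch locations and the maximum taken as the landmark position, exactly as with unweighted normalized cross-correlation.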

Position Control of Mobile Robot for Human-Following in Intelligent Space with Distributed Sensors

  • Jin Tae-Seok;Lee Jang-Myung;Hashimoto Hideki
    • International Journal of Control, Automation, and Systems, v.4 no.2, pp.204-216, 2006
  • Recent advances in hardware technology and the state of the art in mobile robot and artificial intelligence research can be employed to develop autonomous, distributed monitoring systems. A mobile service robot requires perception of its current position to coexist with humans and support them effectively in populated environments; to realize these abilities, the robot needs to keep track of relevant changes in its environment. This paper proposes a localization method for a mobile robot that uses images from distributed intelligent networked devices (DINDs) in an intelligent space (ISpace). The scheme combines the position observed by dead-reckoning sensors with the position estimated from images of a moving object, such as a walking human, to determine the location of the mobile robot. The moving object is modeled as a point object and projected onto the image plane to form a geometric constraint equation that provides position data of the object based on the kinematics of the intelligent space. Using the a priori known path of the moving object and a perspective camera model, geometric constraint equations are derived that relate the image frame coordinates of the moving object to the estimated position of the robot. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot, and a Kalman filtering scheme is used to estimate its location. The approach is applied to a mobile robot in ISpace to show the reduction of uncertainty in determining the robot's location, and its performance is verified by computer simulation and experiment.
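
Correcting the pose with the error between observed and estimated image coordinates is, in filter terms, a standard Kalman measurement update where that error is the innovation. A generic sketch (matrix shapes and names are illustrative, not the paper's exact model):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Kalman filter measurement update.
    x, P : prior state estimate and covariance (e.g. from dead reckoning)
    z    : measurement (e.g. observed image coordinates)
    H, R : measurement matrix and measurement noise covariance
    """
    y = z - H @ x                          # innovation: observed - predicted
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y                      # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # reduced covariance
    return x_new, P_new
```

The covariance shrink in `P_new` is the "reduction of uncertainty" the abstract refers to: each image observation tightens the pose estimate that dead reckoning alone would let drift.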

Experiments of Unmanned Underwater Vehicle's 3 Degrees of Freedom Motion Applied the SLAM based on the Unscented Kalman Filter (무인 잠수정 3자유도 운동 실험에 대한 무향 칼만 필터 기반 SLAM기법 적용)

  • Hwang, A-Rom;Seong, Woo-Jae;Jun, Bong-Huan;Lee, Pan-Mook
    • Journal of Ocean Engineering and Technology, v.23 no.2, pp.58-68, 2009
  • The increased use of unmanned underwater vehicles (UUVs) has led to the development of alternative navigation methods that do not employ acoustic beacons and dead-reckoning sensors. This paper describes a simultaneous localization and mapping (SLAM) scheme that uses range sonars mounted on a small UUV. SLAM is an alternative navigation method that measures the environment through which the vehicle is passing and provides the relative position of the UUV. A SLAM algorithm using several ranging sonars is presented; it utilizes an unscented Kalman filter to estimate the locations of the UUV and the surrounding objects. For efficient operation, the nearest neighbor standard filter is introduced as the data association algorithm, associating the sonar returns at each time step with the stored targets. The proposed SLAM algorithm was tested in experiments under various three-degrees-of-freedom motion conditions. The results showed that the algorithm is capable of estimating the position of the UUV and the surrounding objects, and they suggest that it will perform well in various environments.
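
The nearest neighbor standard filter can be sketched as a gated closest-match search: each sonar return is assigned to the closest stored landmark, unless even the closest one lies outside a validation gate. The gate value and names below are illustrative assumptions:

```python
def nearest_neighbor_associate(measurement, landmarks, gate=2.0):
    """Nearest neighbor standard filter for data association.
    measurement : (x, y) position implied by a sonar return
    landmarks   : list of (x, y) stored landmark estimates
    gate        : maximum accepted distance (a full implementation
                  would gate on Mahalanobis distance instead)
    Returns the index of the matched landmark, or None when the
    return should spawn a new landmark."""
    best_idx, best_d = None, gate
    for i, (lx, ly) in enumerate(landmarks):
        d = ((measurement[0] - lx) ** 2 + (measurement[1] - ly) ** 2) ** 0.5
        if d < best_d:
            best_idx, best_d = i, d
    return best_idx
```

The matched index selects which landmark's entry in the UKF state is updated by that return; unmatched returns extend the state with a new landmark.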

Development of an Autonomous Guide Robot for Campus Tour (캠퍼스 자율 안내로봇 개발)

  • Lim, Jong Hwan;Kim, Hee Jung
    • Transactions of the Korean Society of Mechanical Engineers A, v.41 no.6, pp.543-551, 2017
  • A campus guide robot was developed that can autonomously guide people through a university campus. The robot evaluates its location using a Differential Global Positioning System (DGPS) and dead reckoning based on the encoders mounted on its wheels, and it can navigate autonomously along a guide route that is set in advance. A new position-based guidance approach is suggested: unlike the conventional method of setting the guide sequence in advance, the robot decides what to present by judging whether there is guide information corresponding to its current position. The robot searches for guide information in the guide database while it moves along the guide path autonomously; if any guide information is available around its location, it performs the corresponding guide functions. An effective guide scenario that maximizes people's interest is also suggested. The performance of the robot was tested through sets of experiments in a real campus environment.
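
The dead-reckoning part from wheel encoders follows the usual differential-drive odometry model: incremental wheel distances give a forward displacement and a heading change per step. A minimal sketch (midpoint integration; function and parameter names are illustrative):

```python
import math

def dead_reckon_step(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot.
    d_left, d_right : incremental distances from the wheel encoders
    wheel_base      : distance between the two wheels
    Uses the midpoint heading for integration, which is more accurate
    than using the start-of-step heading."""
    d_center = 0.5 * (d_left + d_right)            # forward displacement
    d_theta = (d_right - d_left) / wheel_base      # heading change
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    theta += d_theta
    return x, y, theta
```

Because encoder errors accumulate without bound, this estimate drifts over a long guide route, which is why the robot also fuses DGPS fixes.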

Position Estimation of Autonomous Mobile Robot Using Geometric Information of a Moving Object (이동물체의 기하학적 위치정보를 이용한 자율이동로봇의 위치추정)

  • Jin, Tae-Seok;Lee, Jang-Myung
    • Journal of the Korean Institute of Intelligent Systems, v.14 no.4, pp.438-444, 2004
  • The intelligent robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, robots need to recognize their position and posture in both known and unknown environments, and their localization should occur naturally. Estimating a robot's position while resolving uncertainty is one of the most important problems in mobile robot navigation. In this paper, we describe a method for the localization of a mobile robot using image information of a moving object. The method combines the position observed from dead-reckoning sensors with the position estimated from images captured by a fixed camera to localize the mobile robot. Using the a priori known path of a moving object in world coordinates and a perspective camera model, we derive the geometric constraint equations that relate the image frame coordinates of the moving object to the estimated position of the robot. Since the equations are based on the estimated position, measurement error may exist at all times. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot, and the Kalman filter scheme is applied. Its performance is verified by computer simulation and experiment.
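
The perspective camera model underlying these geometric constraints is the pinhole projection. A minimal sketch relating a point in camera coordinates to its image coordinates (names are illustrative; a real setup would also apply the intrinsic offsets and lens distortion):

```python
def project_point(px, py, pz, f):
    """Pinhole projection: a point (px, py, pz) in camera coordinates,
    with pz > 0 in front of the camera, maps to image coordinates
    (u, v) for focal length f.  Comparing observed and predicted
    (u, v) for a moving object on a known path yields the constraint
    used to correct the robot's pose estimate."""
    u = f * px / pz
    v = f * py / pz
    return u, v
```

The innovation fed to the Kalman filter is then `(u_obs - u_pred, v_obs - v_pred)`, computed with the predicted object position expressed in the camera frame implied by the current pose estimate.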