• Title/Summary/Keyword: fused navigation

Search results: 44

3D LIDAR Based Vehicle Localization Using Synthetic Reflectivity Map for Road and Wall in Tunnel

  • Im, Jun-Hyuck;Im, Sung-Hyuck;Song, Jong-Hwa;Jee, Gyu-In
    • Journal of Positioning, Navigation, and Timing, v.6 no.4, pp.159-166, 2017
  • The position of an autonomous driving vehicle is basically acquired through the global positioning system (GPS). However, GPS signals cannot be received in tunnels. Due to this limitation, localization of autonomous driving vehicles must be carried out with sensors mounted on them. In particular, a 3D Light Detection and Ranging (LIDAR) system is used for longitudinal position error correction. Few feature points and structures usable for vehicle localization are available in tunnels. Since lanes in tunnels are normally marked by solid lines, they cannot be used to recognize a longitudinal position. In addition, only a small number of structures separated from the tunnel walls, such as sign boards or jet fans, are available. Thus, it is necessary to extract usable information from tunnels to recognize a longitudinal position. In this paper, fire hydrants and evacuation guide lights attached to both sides of the tunnel walls were used to recognize a longitudinal position. These structures have reflectivity highly distinct from the surrounding walls, so they can be distinguished using LIDAR reflectivity data. Furthermore, the reflectivity information of the tunnel walls was fused with the road surface reflectivity map to generate a synthetic reflectivity map. With the synthetic reflectivity map, vehicle localization was possible through correlation matching with local maps generated from the current LIDAR data. The experiments were conducted on an expressway including Maseong Tunnel (approximately 1.5 km long). The results showed root mean square (RMS) position errors of 0.19 m laterally and 0.35 m longitudinally, exhibiting precise localization accuracy.
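The correlation-matching step described in this abstract can be sketched roughly as follows. This is a 1-D toy version along the driving direction only; the map values, marker positions, and normalization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def match_offset(prior_map, local_map):
    """Slide the local reflectivity patch over the prior map (1-D here,
    along the driving direction) and return the offset with the highest
    normalized cross-correlation score."""
    n, m = len(prior_map), len(local_map)
    best_offset, best_score = 0, -np.inf
    local = (local_map - local_map.mean()) / (local_map.std() + 1e-9)
    for off in range(n - m + 1):
        window = prior_map[off:off + m]
        w = (window - window.mean()) / (window.std() + 1e-9)
        score = float(np.dot(w, local)) / m
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset, best_score

# Synthetic prior map: mostly dark wall with two bright markers
# (e.g. a fire hydrant and a guide light) at known positions.
prior = np.full(200, 0.1)
prior[[50, 120]] = 0.9
local = prior[45:75].copy()          # current LIDAR "local map"
offset, score = match_offset(prior, local)
print(offset)  # → 45
```

In practice the matching would be 2-D (lateral and longitudinal) over the fused road-and-wall reflectivity map; the bright markers here stand in for the high-reflectivity structures the paper exploits.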

A Study on Mobile Robot Navigation Using a New Sensor Fusion

  • Tack, Han-Ho;Jin, Tae-Seok;Lee, Sang-Bae
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2003.09a, pp.471-475, 2003
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurements, such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve measurement accuracy. In this approach, however, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in both structured and unstructured environments.
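The core idea of the temporal fusion above — transforming past readings into the current frame instead of adding sensors — can be sketched in a minimal 1-D form. The straight-line motion model, reading values, and simple averaging are assumptions for illustration, not the paper's actual STSF formulation:

```python
import numpy as np

def stsf_range(prev_ranges, robot_moves, current_range):
    """Space-and-time fusion sketch: each past range reading (taken k steps
    ago, while the robot was further from the obstacle) is shifted into the
    current frame using the odometry in robot_moves, then all readings are
    averaged. 1-D case: robot drives straight toward a static obstacle."""
    transformed = []
    travelled = 0.0
    # prev_ranges[0] is the most recent past reading, etc.
    for r, d in zip(prev_ranges, robot_moves):
        travelled += d                  # distance covered since that reading
        transformed.append(r - travelled)
    samples = transformed + [current_range]
    return float(np.mean(samples))

# Obstacle truly 5.0 m away now; robot moved 1.0 m between readings.
# Past noisy readings were taken at 6.0 m and 7.0 m true range.
fused = stsf_range(prev_ranges=[6.1, 6.9], robot_moves=[1.0, 1.0],
                   current_range=5.05)
print(round(fused, 2))  # → 5.02
```

Averaging the transformed past readings with the current one reduces the measurement variance without any extra sensors, which is the point the abstract makes against conventional current-data-only schemes.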

Control of the Mobile Robot Navigation Using a New Time Sensor Fusion

  • Tack, Han-Ho;Kim, Chang-Geun;Kim, Myeong-Kyu
    • International Journal of Fuzzy Logic and Intelligent Systems, v.4 no.1, pp.23-28, 2004
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurements, such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve measurement accuracy. In this approach, however, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in both structured and unstructured environments.

A Tilt and Heading Estimation System for ROVs using Kalman Filters

  • Ha, Yun-Su;Ngo, Thanh-Hoan
    • Journal of Advanced Marine Engineering and Technology, v.32 no.7, pp.1068-1079, 2008
  • Tilt and heading angle information of a remotely operated vehicle (ROV) is very important in underwater navigation. This paper presents a low-cost tilt and heading estimation system. Three single-axis rate gyros, a tri-axis accelerometer, and a tri-axis magnetometer are used. The output signals from these sensors are fused by two Kalman filters: the first estimates the roll and pitch angles, and the second estimates the heading angle. With this method, we obtained tilt (roll and pitch) and heading information that remains reliable over long periods of time. Experimental results demonstrate the performance of the presented system.
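The gyro/accelerometer fusion for one tilt angle can be sketched as a scalar Kalman filter: the gyro rate drives the prediction and the accelerometer-derived angle corrects the drift. The noise variances `q` and `r` and the test signal are assumed tuning/illustration values, not the paper's:

```python
def tilt_kalman(gyro_rates, accel_angles, dt=0.01,
                q=0.01, r=0.5, angle0=0.0, p0=1.0):
    """1-D Kalman filter for a single tilt angle: the gyro rate is
    integrated in the prediction step, the accelerometer-derived angle is
    the noisy measurement in the update step."""
    angle, p = angle0, p0
    out = []
    for rate, z in zip(gyro_rates, accel_angles):
        # predict: integrate the gyro rate
        angle += rate * dt
        p += q
        # update: correct with the accelerometer angle
        k = p / (p + r)
        angle += k * (z - angle)
        p *= (1 - k)
        out.append(angle)
    return out

# Constant true angle of 10 deg, stationary gyro, noisy accelerometer.
est = tilt_kalman(gyro_rates=[0.0] * 50,
                  accel_angles=[10.0 + (-1) ** i * 0.5 for i in range(50)])
print(round(est[-1], 1))
```

The paper's first filter estimates roll and pitch jointly (and the second handles heading from the magnetometer); this scalar version only shows why the cascade suppresses both gyro drift and accelerometer noise.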

Multi-Filter Fusion Technique for INS/GPS

  • 조성윤;최완식;김병두;조영수
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.34 no.10, pp.48-55, 2006
  • A multi-filter fusion technique is proposed and applied to an INS/GPS integrated system. An IIR-type EKF and a FIR-type RHKF filter are fused to combine the advantages of both filters, based on an adaptive mixing probability calculated from the residuals and residual covariance matrices of the filters. In the INS/GPS system, this fusion filter can provide more robust navigation information than a conventional stand-alone filter.
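The adaptive mixing probability mentioned above can be illustrated with scalar residuals: each filter's weight is the Gaussian likelihood of its residual under its own residual variance, normalized across the filters. The numbers and the scalar simplification are assumptions for illustration:

```python
import math

def mixing_probabilities(residuals, variances):
    """Per-filter mixing weights: Gaussian likelihood of each filter's
    scalar residual under its own residual variance, normalized to sum
    to 1 (a scalar sketch of an adaptive mixing probability)."""
    likes = [math.exp(-r * r / (2 * s)) / math.sqrt(2 * math.pi * s)
             for r, s in zip(residuals, variances)]
    total = sum(likes)
    return [l / total for l in likes]

def fuse(estimates, weights):
    """Probability-weighted combination of the filters' estimates."""
    return sum(x * w for x, w in zip(estimates, weights))

# EKF (IIR) residual small -> trusted more; RHKF (FIR) residual larger.
w_ekf, w_rhkf = mixing_probabilities(residuals=[0.1, 0.8],
                                     variances=[0.2, 0.2])
fused = fuse(estimates=[10.0, 10.5], weights=[w_ekf, w_rhkf])
print(w_ekf > w_rhkf, round(fused, 2))
```

In the actual system the residuals and covariances are vectors/matrices and the likelihood is multivariate, but the mixing logic — trust the filter whose residual is more consistent with its own covariance — is the same.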

A Study on Indoor Mobile Robot Navigation Used Space and Time Sensor Fusion

  • Jin, Tae-Seok;Ko, Jae-Pyung;Lee, Jang-Myung
    • Institute of Control, Robotics and Systems Conference Proceedings, 2002.10a, pp.104.2-104, 2002
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurements, such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve measurement accuracy. In this approach, however, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples and...

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems, v.16 no.4, pp.381-390, 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor-fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability is advantageous because the sensors complement and cooperate with each other to obtain better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor-fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from camera images, which serve as natural landmark points. With the laser structured light sensor, it instead utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the results are discussed in detail.
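One simple way to realize a Bayesian fusion with per-sensor reliability, in the spirit of the abstract above, is to temper each sensor's likelihood by a reliability exponent before multiplying into the posterior. The discrete pose grid, likelihood values, and tempering scheme are illustrative assumptions, not the paper's exact formulation:

```python
def bayes_fuse(prior, lik_vision, lik_laser, rel_vision, rel_laser):
    """Fuse two sensors' pose likelihoods over a discrete set of
    candidate poses. Each likelihood is tempered by a predefined
    reliability in [0, 1]: reliability 0 flattens the sensor to
    uninformative, 1 uses it at full weight."""
    posterior = []
    for p, lv, ll in zip(prior, lik_vision, lik_laser):
        posterior.append(p * (lv ** rel_vision) * (ll ** rel_laser))
    z = sum(posterior)
    return [x / z for x in posterior]          # normalized posterior

# Three candidate poses; both sensors prefer pose 1, but the laser is
# deemed less reliable in this (hypothetical) environment.
post = bayes_fuse(prior=[1 / 3, 1 / 3, 1 / 3],
                  lik_vision=[0.1, 0.8, 0.1],
                  lik_laser=[0.3, 0.5, 0.2],
                  rel_vision=1.0, rel_laser=0.5)
print(max(range(3), key=lambda i: post[i]))  # → 1
```

The reliability exponents play the role of the experimentally predefined reliability functions: a sensor known to degrade in the current conditions contributes less sharpness to the fused pose estimate.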

A Method of Obstacle Detection in the Dust Environment for Unmanned Ground Vehicle

  • Choe, Tok-Son;Ahn, Seong-Yong;Park, Yong-Woon
    • Journal of the Korea Institute of Military Science and Technology, v.13 no.6, pp.1006-1012, 2010
  • For the autonomous navigation of an unmanned ground vehicle over rough terrain and in combat, the dust environment must be overcome. We therefore propose a robust obstacle-detection methodology using a laser range sensor and radar. The laser range sensor has good angle and distance accuracy but is weak in dusty environments. Radar, on the other hand, does not match the angle and distance accuracy of the laser range sensor, but it is robust to dust. Exploiting these characteristics, we use the laser range sensor as the main sensor in normal conditions and radar as an assisting sensor in dusty environments. To fuse the laser range sensor and radar information, the angle and distance data of each sensor are separately transformed into the angle and distance data of a virtual range sensor located at the center of the vehicle. The two sensors' distances at the same angle are then compared: if they are similar, the fused virtual range sensor takes the laser range sensor's distance; otherwise, it takes the radar's distance. The suggested methodology is verified by real experiments.
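The per-angle selection rule described above is concrete enough to sketch directly. The readings are assumed to be already transformed into the virtual range sensor's frame, and the agreement `tolerance` is an assumed tuning value:

```python
def fuse_virtual_ranges(lidar, radar, tolerance=0.5):
    """Per-angle fusion rule: if the lidar and radar distances at the
    same virtual-sensor angle agree within `tolerance` metres, keep the
    (more accurate) lidar range; otherwise (e.g. dust blinding the
    lidar) fall back to the radar range."""
    fused = []
    for dl, dr in zip(lidar, radar):
        fused.append(dl if abs(dl - dr) <= tolerance else dr)
    return fused

# Angles 0..4: the sensors agree except at angle 2, where dust makes
# the lidar return a spurious short range.
lidar = [10.2, 9.8, 1.3, 9.5, 10.0]
radar = [10.0, 10.0, 9.7, 9.6, 10.1]
print(fuse_virtual_ranges(lidar, radar))
# → [10.2, 9.8, 9.7, 9.5, 10.0]
```

This keeps the lidar's accuracy wherever it is trustworthy while letting the radar override it exactly where the two sensors disagree, which is the behavior the abstract describes for dusty conditions.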

Experimental result of Real-time Sonar-based SLAM for underwater robot

  • Lee, Yeongjun;Choi, Jinwoo;Ko, Nak Yong;Kim, Taejin;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.3, pp.108-118, 2017
  • This paper presents experimental results of real-time sonar-based SLAM (simultaneous localization and mapping) using probability-based landmark recognition. The sonar-based SLAM is used for navigation of an underwater robot. Data from inertial sensors, an IMU (Inertial Measurement Unit) and a DVL (Doppler Velocity Log), and external information from sonar image processing are fused by an Extended Kalman Filter (EKF) to obtain the navigation information. The vehicle location is estimated from the inertial sensor data and corrected by sonar data, which provide the relative position between the vehicle and landmarks on the bottom of the basin. For verification of the proposed method, experiments were performed in a basin environment using an underwater robot, yShark.

Network Modeling and Analysis of Multi Radar Data Fusion for Efficient Detection of Aircraft Position

  • Kim, Jin-Wook;Cho, Tae-Hwan;Choi, Sang-Bang;Park, Hyo-Dal
    • Journal of Advanced Navigation Technology, v.18 no.1, pp.29-34, 2014
  • Data-fusion techniques combine data from multiple radars and related information to achieve more accurate estimations than a single, independent radar could. In this paper, we analyze the delay and loss of packets processed by multiple radars and minimize the data-processing interval of the centralized fusion operation. To this end, we model a radar network with central data fusion and analyze packet delay and loss inside the queues, modeling each queue as M/M/1/K, using NS-2. From the analysis data we obtained the average delay for processing fused multi-radar data, and this delay can be used as a reference time for radar data latency in the fusion center.
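The M/M/1/K model named in the abstract has a closed-form stationary distribution, from which the two quantities of interest — packet loss and mean delay — follow directly. The arrival/service rates and buffer size below are illustrative assumptions, not values from the paper:

```python
def mm1k_metrics(lam, mu, K):
    """Stationary distribution of an M/M/1/K queue and the quantities a
    delay/loss analysis needs: blocking (loss) probability and the mean
    delay of accepted packets via Little's law.
    lam = arrival rate, mu = service rate, K = system capacity."""
    rho = lam / mu
    probs = [rho ** n for n in range(K + 1)]
    z = sum(probs)
    p = [x / z for x in probs]          # p[n] = P(n packets in system)
    p_block = p[K]                      # arrival lost when system is full
    mean_n = sum(n * pn for n, pn in enumerate(p))
    throughput = lam * (1 - p_block)    # rate of accepted packets
    mean_delay = mean_n / throughput    # Little's law: W = N / lambda_eff
    return p_block, mean_delay

# Example: 80 packets/s offered to a 100 packets/s radar-data queue, K=10.
p_block, delay = mm1k_metrics(lam=80.0, mu=100.0, K=10)
print(round(p_block, 4), round(delay * 1000, 2), "ms")
```

Analytic numbers like these give a sanity check for the NS-2 simulation results: the simulated per-queue delay and loss should converge to the closed-form M/M/1/K values as the run length grows.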