• Title/Summary/Keyword: Landmark navigation

Development of Image-based Assistant Algorithm for Vehicle Positioning by Detecting Road Facilities

  • Jung, Jinwoo;Kwon, Jay Hyoun;Lee, Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.5 / pp.339-348 / 2017
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine information from a camera with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, a mathematical model based on SPR (Single Photo Resection) is derived as an image-based assistant algorithm for vehicle positioning. A simulation test is performed to analyze the factors affecting SPR. In addition, a GNSS / on-board vehicle sensor / image-based positioning algorithm is developed by combining the image-based positioning algorithm with the existing positioning algorithm. The performance of the integrated algorithm is evaluated through an actual driving test, with the landmark position data required for SPR generated by simulation. The precision of the horizontal position error is 1.79 m for the existing positioning algorithm, versus 0.12 m for the integrated algorithm at the points where SPR is performed. In future research, an optimized algorithm should be developed based on actual landmark position data.
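The core of SPR is recovering the camera (vehicle) position from image measurements of landmarks with known coordinates. A minimal position-only sketch, assuming a pinhole camera with known orientation and focal length (the paper's full model also treats attitude; all coordinates and values below are illustrative, not the paper's data):

```python
import numpy as np

f = 1000.0                      # focal length in pixels (assumed)
# Known landmark coordinates (e.g. surveyed road facilities)
landmarks = np.array([[ 5.0,  2.0, 10.0],
                      [-4.0,  1.0, 12.0],
                      [ 3.0, -3.0,  8.0],
                      [-2.0, -4.0, 15.0]])

def project(C):
    """Pinhole projection of the landmarks as seen from camera position C."""
    d = landmarks - C
    return (f * d[:, :2] / d[:, 2:3]).ravel()

C_true = np.array([1.0, 2.0, -5.0])
obs = project(C_true)           # simulated image measurements

# Gauss-Newton iteration with a central-difference Jacobian
C = np.zeros(3)
for _ in range(20):
    r = obs - project(C)
    J = np.empty((len(r), 3))
    for k in range(3):
        e = np.zeros(3); e[k] = 1e-6
        J[:, k] = (project(C + e) - project(C - e)) / 2e-6
    C = C + np.linalg.lstsq(J, r, rcond=None)[0]
```

With exact measurements the iteration converges to the true position in a few steps; real imagery adds detection noise, which is why the paper integrates SPR with GNSS and on-board sensors.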

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.04a / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it can cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing that noise, but accuracy remains limited because most of it relies on statistical approaches. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some researchers use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D scene of the vision image is overlaid with the VRML scene.
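The angular-separation computation this abstract relies on can be sketched directly: with a known focal length, each landmark's pixel coordinates define a line of sight, and the angle between two such rays follows from their dot product (the focal length and pixel values here are illustrative):

```python
import numpy as np

f = 800.0  # focal length in pixels (assumed known)

def ray(u, v):
    """Unit line-of-sight direction for pixel (u, v) in a pinhole camera."""
    d = np.array([u, v, f])
    return d / np.linalg.norm(d)

def angular_separation(p1, p2):
    """Angle (radians) between the lines of sight of two landmarks."""
    return np.arccos(np.clip(ray(*p1) @ ray(*p2), -1.0, 1.0))
```

Three such pairwise angles to three known landmarks constrain the camera position, which is the classical three-point resection setting the abstract describes.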

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok;Lim, Mee-Seub;Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a method for localization of a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows. First, we utilize an omni-directional camera that can capture an instantaneous 360° panoramic image around the robot. Second, nodes around the robot are extracted using the correlation coefficients of the circular horizontal line between the landmark image and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are assigned using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
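FFT-based matching of a circular horizontal line can be sketched as follows: the best circular shift between two scan lines is found by cross-correlation in the frequency domain, costing O(N log N) rather than O(N²). The signal below is synthetic, not an actual panoramic scan line:

```python
import numpy as np

def circular_shift_match(reference, current):
    """Circular shift that best aligns `current` with `reference`,
    found via FFT-based cross-correlation."""
    R = np.fft.fft(reference)
    C = np.fft.fft(current)
    corr = np.fft.ifft(C * np.conj(R)).real
    return int(np.argmax(corr))

# 360-sample circular horizontal line (one sample per degree, synthetic)
rng = np.random.default_rng(1)
line = rng.standard_normal(360)
rotated = np.roll(line, 37)     # the robot has rotated by 37 samples
```

Here `circular_shift_match(line, rotated)` recovers the 37-sample rotation; normalizing both lines to zero mean and unit variance first turns the correlation values into correlation coefficients as used in the paper.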

Mobile Robot Localization and Mapping using a Gaussian Sum Filter

  • Kwok, Ngai Ming;Ha, Quang Phuc;Huang, Shoudong;Dissanayake, Gamini;Fang, Gu
    • International Journal of Control, Automation, and Systems / v.5 no.3 / pp.251-268 / 2007
  • A Gaussian sum filter (GSF) is proposed in this paper for simultaneous localization and mapping (SLAM) in mobile robot navigation. In particular, the SLAM problem is tackled for cases when only bearing measurements are available. Within the stochastic mapping framework using an extended Kalman filter (EKF), a Gaussian probability density function (pdf) is assumed to describe the range-and-bearing sensor noise. In the case of a bearing-only sensor, a sum of weighted Gaussians is used to represent the non-Gaussian robot-landmark range uncertainty, resulting in a bank of EKFs for estimation of the robot and landmark locations. In our approach, the Gaussian parameters are designed on the basis of minimizing the representation error. The computational complexity of the GSF is reduced by applying the sequential probability ratio test (SPRT) to remove under-performing EKFs. Extensive experimental results are included to demonstrate the effectiveness and efficiency of the proposed techniques.
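The key idea, representing the unknown range along a bearing ray by a weighted sum of Gaussians and pruning under-performing hypotheses, can be sketched in one dimension. The geometric spacing, spreads, and pruning threshold below are assumptions, and the simple weight-ratio cut stands in for the paper's SPRT:

```python
import numpy as np

# Hypothesised landmark ranges along the bearing ray, as a Gaussian sum
means = np.geomspace(1.0, 20.0, 8)      # component means [m] (assumed)
sigmas = 0.25 * means                   # spread grows with range (assumed)
weights = np.full(8, 1.0 / 8)           # initially uniform

def update(w, z, noise=0.5):
    """Reweight components by measurement likelihood, then zero out
    under-performing hypotheses (stand-in for the SPRT pruning)."""
    lik = np.exp(-0.5 * ((z - means) / np.hypot(sigmas, noise)) ** 2)
    w = w * lik
    w /= w.sum()
    keep = w > 1e-3 * w.max()           # pruning threshold (assumed)
    return np.where(keep, w, 0.0) / w[keep].sum()

w = weights
for z in (5.0, 5.0, 5.0):               # repeated range evidence near 5 m
    w = update(w, z)
```

After a few updates the mass concentrates on the component nearest the true range; in the full filter each surviving component drives one EKF in the bank.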

Extraction of Landmarks Using Building Attribute Data for Pedestrian Navigation Service (보행자 내비게이션 서비스를 위한 건물 속성정보를 이용한 랜드마크 추출)

  • Kim, Jinhyeong;Kim, Jiyoung
    • KSCE Journal of Civil and Environmental Engineering Research / v.37 no.1 / pp.203-215 / 2017
  • Recently, interest in Pedestrian Navigation Service (PNS) has increased due to the spread of smartphones and improvements in location determination technology, and it is efficient to use landmarks in route guidance for pedestrians because of the characteristics of pedestrian movement and the success rate of path finding. Accordingly, research on extracting landmarks has progressed. However, preceding studies are limited in that they only considered differences between buildings and did not consider the visual attention of the map displayed by the PNS. This study addresses that problem by defining building attributes as local variables and global variables: local variables reflect the saliency of buildings by representing differences between buildings, and global variables reflect visual attention by representing the inherent characteristics of buildings. This study also considers the connectivity of the network and solves the overlapping problem of landmark candidate groups with a network Voronoi diagram. To extract landmarks, we defined building attribute data based on preceding research, selected choice points for pedestrians in the pedestrian network data, and determined landmark candidate groups at each choice point. Building attribute data were calculated for the extracted landmark candidate groups, and finally landmarks were extracted by principal component analysis. We applied the proposed method to a part of Gwanak-gu, Seoul, and evaluated the extracted landmarks by comparison with the labels and landmarks used by portal sites such as NAVER and DAUM. In conclusion, 132 landmarks (60.3%) among the 219 landmarks of NAVER and DAUM were extracted by the proposed method, and we confirmed that 228 further landmarks, which have no labels or landmarks in NAVER or DAUM, were helpful for determining changes of direction in local-level path finding.
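The final principal-component step can be sketched as follows: the candidates' standardized attributes are projected onto the first principal component, and the candidate with the most extreme score is taken as the landmark. The attribute matrix below is invented for illustration and is not the paper's data:

```python
import numpy as np

# One row per landmark candidate at a choice point; columns are building
# attributes (e.g. height, floor area, facade salience) -- illustrative only
A = np.array([[12.0, 300.0, 0.2],
              [14.0, 320.0, 0.3],
              [13.0, 310.0, 0.1],
              [45.0, 900.0, 0.9],   # visually salient building
              [11.0, 290.0, 0.2]])

# Standardise attributes, then project onto the first principal component
Z = (A - A.mean(0)) / A.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[0]
landmark = int(np.argmax(np.abs(scores)))   # most distinctive candidate
```

The absolute score is used because the sign of a principal component is arbitrary; what matters is how far a candidate lies from the group along the dominant direction of attribute variance.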

Position Estimation Using Neural Network for Navigation of Wheeled Mobile Robot (WMR) in a Corridor

  • Choi, Kyung-Jin;Lee, Young-Hyun;Park, Chong-Kug
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1259-1263 / 2004
  • This paper describes a position estimation algorithm using a neural network for the navigation of a vision-based wheeled mobile robot (WMR) in a corridor, taking ceiling lamps as landmarks. In images of a corridor, the line of lamps on the ceiling has a slope specific to the lateral position of the WMR, and the vanishing point produced by the lamp line has a position specific to the orientation of the WMR. The ceiling lamps appear in the image as circles of limited size, so simple image processing algorithms suffice to extract them from the corridor image. The lamp line and the vanishing point position are then defined and calculated at known positions of the WMR in the corridor. To estimate the lateral position and orientation of the WMR from an image, the relationship between the position of the WMR and the features of the ceiling lamps has to be defined, but this is difficult because of nonlinearity. Therefore, a data set relating WMR positions to lamp features is constructed, and a neural network is trained on it using the backpropagation algorithm (BPN). The trained network is then applied to navigation of the WMR in a corridor.
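The learning step can be sketched with a small multilayer perceptron trained by plain backpropagation, mapping lamp-line features (slope, vanishing-point position) to pose (lateral offset, heading). The synthetic mapping, network size, and training settings below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features = (lamp-line slope, vanishing-point x),
# targets = (lateral offset, heading) -- an assumed smooth mapping
X = rng.uniform(-1, 1, (200, 2))
Y = np.column_stack([0.5 * X[:, 0] + 0.1 * X[:, 1] ** 2,
                     0.8 * X[:, 1]])

# 2-16-2 MLP trained with full-batch backpropagation
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                     # linear output layer
    E = P - Y                           # output error
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)      # chain rule, output layer
    dH = (E @ W2.T) * (1 - H ** 2)               # back-propagated error
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)    # chain rule, hidden layer
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
```

The network serves as a learned inverse of the nonlinear feature-to-pose relationship, which is exactly why the paper resorts to a trained model rather than an analytic formula.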

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin, Tae-Seok;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.11 no.4 / pp.304-313 / 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and orientation in an unknown environment for intelligent performance. Mobile robots may navigate by means of a number of sensing systems, such as sonar or vision. Note that in conventional fusion schemes the measurement depends only on the current data sets; therefore, more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, instead of adding more sensors to the system, the temporal sequence of the data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in both unstructured and structured environments, and its performance is verified by computer simulation and experiment.
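The Unscented Transformation at the heart of the UTSF scheme propagates a mean and covariance through a nonlinear function using deterministic sigma points rather than linearization. A minimal sketch with the standard sigma-point set (the parameter defaults are conventional choices, not the paper's):

```python
import numpy as np

def unscented_transform(m, P, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate mean m and covariance P through the nonlinearity f."""
    n = len(m)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root
    sigma = np.vstack([m, m + S.T, m - S.T])     # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))     # mean weights
    Wc = Wm.copy()                               # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + 1 - alpha ** 2 + beta
    Y = np.array([f(x) for x in sigma])          # transformed points
    ym = Wm @ Y
    d = Y - ym
    return ym, (Wc * d.T) @ d

m = np.array([1.0, 2.0])
P = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 2.0], [0.0, 1.0]]); b = np.array([0.5, -1.0])
ym, Pc = unscented_transform(m, P, lambda x: A @ x + b)
```

For a linear function the UT reproduces the exact mean and covariance, which is a useful sanity check; its advantage appears for nonlinear measurement models such as landmark bearings.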

Selecting a Landmark for Repositioning Automated Driving Vehicles in a Tunnel (자율주행 차량의 터널내 측위오차 보정 지원시설 선정)

  • Kim, Hyoungsoo;Kim, Youngmin;Park, Bumjin
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.5 / pp.200-209 / 2018
  • This study proposed a method to select existing facilities as landmarks in order to reset the accumulated dead-reckoning error of automated vehicles in tunnels, where GNSS signals are difficult to receive. First, related standards and regulations were reviewed to survey the 'variety' of facility shapes and installation locations. Second, the 'recognizability' of facilities was examined using image and LiDAR sensors. Last, the 'regularity' of installation locations and intervals was surveyed through related references. As a result, this study selected fire-fighting boxes / lamps (50 m), evacuation corridor lamps (300 m), lane control systems (500 m), maximum / minimum speed limit signs, and jet fans as candidate landmarks for resetting positioning errors, and determined that error correction based on these facilities is possible. The results are expected to be used for repositioning automated driving vehicles in tunnels.

Target Detection Based on Moment Invariants

  • Wang, Jiwu;Sugisaka, Masanori
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.677-680 / 2003
  • Perceptual landmarks are an effective aid for a mobile robot performing steady and reliable long-distance navigation, but the prerequisite is that those landmarks be detected and recognized robustly, at high speed, under various lighting conditions. This makes image processing more complicated, so that speed and reliability cannot both be satisfied at the same time. Color-based target detection can quickly separate target-colored regions from the rest of an image, but good results are obtained only under good lighting conditions. Moreover, when other objects share the target color, additional features must be considered to distinguish the target from them; this always happens when a target is detected from a single characteristic. Furthermore, only one target can generally be searched for at a time, so landmarks cannot be used efficiently, especially when several landmarks must work together. In this paper, by making use of the moment invariants of each landmark, we can not only search for a specified target within the separated color regions but also find multiple targets at the same time if necessary, allowing a finite set of landmarks to carry out more functions. Because moment invariants are easily combined with low-level image processing techniques, such as color-based and gradient-runs-based target detection, and because they are more reliable features of each target, the target detection rate is improved. Experiments were carried out to verify the robustness and efficiency of this method.
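Moment invariants can be sketched with the first two Hu invariants, which are unchanged under translation, scaling, and rotation of the target region (a minimal NumPy version; the abstract does not specify which invariants the authors use):

```python
import numpy as np

def hu12(img):
    """First two Hu moment invariants of a 2-D intensity image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                       # central moments
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):                      # scale-normalized moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

img = np.zeros((9, 9))
img[2:7, 3] = 1.0
img[6, 3:6] = 1.0                       # an asymmetric "L" target
```

Because the invariants are scalar signatures, candidate regions from the color-separation step can be classified against several landmark templates in a single pass, which is how multiple targets can be found at once.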

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: we combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points, together with an inertial estimate of the camera's 6-DOF (degrees of freedom) pose, into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to solve the resulting optimization problem neatly. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
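The SVD step is standard: once the constraints are stacked into one linear inhomogeneous system A x = b, the least-squares solution follows from the pseudo-inverse built from the singular value decomposition. A minimal sketch with an invented overdetermined system (not the paper's constraint matrix):

```python
import numpy as np

def solve_svd(A, b):
    """Least-squares solution of the inhomogeneous system A x = b via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((U.T @ b) / s)       # x = V S^-1 U^T b

# Example: overdetermined 6x4 system with a known solution
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true
x = solve_svd(A, b)
```

Stacking all feature-point and inertial constraints into one such system is what lets the pose be recovered in a single linear solve rather than by iterative nonlinear optimization.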