• Title/Summary/Keyword: Vision-based Localization


Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment (실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템)

  • Kang, Jung-Won;Bang, Seok-Won;Atkeson, Christopher G.;Hong, Young-Jin;Suh, Jin-Ho;Lee, Jung-Woo;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.197-209 / 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. To keep cost and complexity low, we propose a single-camera system that utilizes ceiling images acquired from a camera installed to point upwards. For reliable operation, we propose a method using hybrid features, which include natural landmarks from the visible scene and artificial landmarks observable in the infrared domain. Compared with previous works utilizing only infrared-based features, our method reduces the required number of artificial features, as we exploit both natural and artificial features. In addition, compared with previous works using only the natural scene, our method has an advantage in convergence speed and robustness, since an observation of an artificial feature provides a crucial clue for robot pose estimation. In experiments with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, this is the first ceiling-vision-based localization method using features from both the visible and infrared domains. Our system can be easily applied to a variety of service robot applications in large indoor environments.

Laser Scanner based Static Obstacle Detection Algorithm for Vehicle Localization on Lane Lost Section (차선 유실구간 측위를 위한 레이저 스캐너 기반 고정 장애물 탐지 알고리즘 개발)

  • Seo, Hotae;Park, Sungyoul;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.9 no.3 / pp.24-30 / 2017
  • This paper presents the development of a laser-scanner-based static obstacle detection algorithm for vehicle localization on sections where lane markings are lost. For urban autonomous driving, vehicle localization based on lane information, GPS, and a digital map is required to ensure safety. However, on actual urban roads, lane data may be unavailable due to traffic jams, intersections, weather conditions, faint markings, and so on. On such lane-lost sections, lane-based localization is limited or impossible. The proposed algorithm is designed to determine lane existence by using the reliability of front vision data, and can be utilized on lane-lost sections. For localization, the laser scanner is used to distinguish static objects through an estimation and fusion process based on speed information from radar data. The laser scanner data are then clustered to determine whether an object is a static obstacle such as a fence, pole, curb, or traffic light. The road boundary is extracted, and localization is performed by comparing the detected boundary with the digital map to determine the location of the ego vehicle. It is shown that localization using the proposed algorithm can contribute effectively to safe autonomous driving.
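The clustering step described in the abstract can be illustrated with a simple gap-based Euclidean grouping of consecutive 2D scan points. This is only an illustrative sketch; the paper does not specify its clustering method, and the `max_gap` threshold is an assumed value.

```python
def cluster_points(points, max_gap=0.5):
    """Group consecutive 2D scan points whose spacing stays below max_gap (metres).

    points: list of (x, y) tuples in scan order.
    Returns a list of clusters, each a list of points.
    """
    clusters = []
    current = []
    for p in points:
        if current:
            dx = p[0] - current[-1][0]
            dy = p[1] - current[-1][1]
            # start a new cluster when the gap to the previous return is too large
            if (dx * dx + dy * dy) ** 0.5 > max_gap:
                clusters.append(current)
                current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters
```

Each resulting cluster would then be classified (fence, pole, curb, ...) by its shape and extent before road-boundary extraction.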

Point Pattern Matching Based Global Localization using Ceiling Vision (천장 조명을 이용한 점 패턴 매칭 기반의 광역적인 위치 추정)

  • Kang, Min-Tae;Sung, Chang-Hun;Roh, Hyun-Chul;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2011.07a / pp.1934-1935 / 2011
  • In order for a service robot to perform several tasks, autonomous navigation techniques such as localization, mapping, and path planning are required. Localization (estimating the robot's pose) is a fundamental ability for a service robot to navigate autonomously. In this paper, we propose a new system for point-pattern-matching-based visual global localization using spot lightings in the ceiling. The proposed algorithm is suitable for systems that demand high accuracy and a fast update rate, such as a guide robot in an exhibition. A single camera looking upwards (called a ceiling vision system) is mounted on the head of the mobile robot, and image features such as lightings are detected and tracked through the image sequence. To detect more spot lightings, we choose a wide-FOV lens, which inevitably introduces serious image distortion; however, by applying distortion correction only to the positions of the spot lightings rather than to all image pixels, we decrease the processing time. Then, using point pattern matching and least-squares estimation, we obtain the precise position and orientation of the mobile robot. Experimental results demonstrate the accuracy and update rate of the proposed algorithm in real environments.
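The least-squares pose estimation step from matched point patterns can be sketched with the standard 2D Kabsch/Procrustes solution, which recovers the rigid rotation and translation between matched landmark sets. This is a generic textbook sketch, not the authors' implementation.

```python
import numpy as np

def estimate_pose_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (2D Kabsch).

    src, dst: (N, 2) arrays of matched points (e.g. map lightings vs. observed lightings).
    Returns rotation matrix R (2x2) and translation vector t (2,) with dst ~ R @ src + t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det = +1), not a reflection
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

The recovered R and t directly give the robot's orientation and position relative to the ceiling-light map.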


A Range-Based Monte Carlo Box Algorithm for Mobile Nodes Localization in WSNs

  • Li, Dan;Wen, Xianbin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.3889-3903 / 2017
  • Fast and accurate localization of randomly deployed nodes is required by many applications in wireless sensor networks (WSNs). However, mobile node localization in WSNs is more difficult than static node localization, since node mobility brings more data to process. In this paper, we propose a Range-based Monte Carlo Box (RMCB) algorithm, which builds upon the Monte Carlo Localization Boxed (MCB) algorithm to improve localization accuracy. The algorithm utilizes the Received Signal Strength Indication (RSSI) ranging technique to build a sample box, and adds a preset error coefficient in the sampling and filtering phases to increase the success rate of sampling and the accuracy of valid samples. Moreover, a simplified Particle Swarm Optimization (sPSO) algorithm is introduced to generate new samples and avoid a constantly repeated sampling and filtering process. Simulation results show that the proposed RMCB algorithm reduces the location error by 24%, 14%, and 14% on average compared to the MCB, Range-based Monte Carlo Localization (RMCL), and RSSI Motion Prediction MCB (RMMCB) algorithms, respectively, and is suitable for positioning scenes requiring high precision.
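RSSI ranging of the kind used to build the sample box is commonly based on the log-distance path-loss model. Below is a minimal sketch of that inversion; the `tx_power_dbm` (RSSI at 1 m) and `path_loss_exp` values are illustrative assumptions, not parameters from the paper.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model RSSI = P_1m - 10 * n * log10(d).

    rssi_dbm: measured signal strength in dBm.
    tx_power_dbm: expected RSSI at 1 m from the anchor (assumed calibration value).
    path_loss_exp: environment-dependent path-loss exponent n (2.0 = free space).
    Returns the estimated anchor-to-node distance in metres.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

Distances estimated this way from several anchors bound the sample box in which Monte Carlo particles are drawn.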

A Real-time Vehicle Localization Algorithm for Autonomous Parking System (자율 주차 시스템을 위한 실시간 차량 추출 알고리즘)

  • Hahn, Jong-Woo;Choi, Young-Kyu
    • Journal of the Semiconductor & Display Technology / v.10 no.2 / pp.31-38 / 2011
  • This paper introduces a video-based traffic monitoring system for detecting vehicles and obstacles on the road. To segment moving objects from an image sequence, we adopt a background subtraction algorithm based on local binary patterns (LBP). Recently, LBP-based texture analysis techniques have become popular tools for various machine vision applications such as face recognition and object classification. In this paper, we adopt an extension of LBP, called the Diagonal LBP (DLBP), to handle the background subtraction problem arising in vision-based autonomous parking systems. It halves the code length of LBP and reduces the computational complexity drastically. An edge-based shadow removal and a blob merging procedure are also applied to the foreground blobs, and a pose estimation technique is utilized to calculate the position and heading angle of the moving object precisely. Experimental results reveal that our system works well for real-time vehicle localization and tracking applications.
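The DLBP variant is specific to this paper, but the standard 8-neighbour LBP it extends can be sketched as follows: each neighbour is compared with the centre pixel and the comparison bits are packed into an 8-bit code (the abstract states DLBP halves this code length). A minimal illustration on a 3x3 patch:

```python
def lbp_code(patch):
    """Standard 8-neighbour LBP code of the centre pixel of a 3x3 patch.

    patch: 3x3 nested list of intensities.
    Returns an integer in [0, 255]; bit i is set when neighbour i >= centre.
    """
    c = patch[1][1]
    # neighbours in clockwise order starting at the top-left corner
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << i
    return code
```

For background subtraction, a histogram of such codes per block is compared against a background model; blocks whose histograms diverge are marked foreground.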

Error Correction Algorithm of Position-Coded Pattern for Hybrid Indoor Localization (위치패턴 기반 하이브리드 실내 측위를 위한 위치 인식 오류 보정 알고리즘)

  • Kim, Sanghoon;Lee, Seunggol;Kim, Yoo-Sung;Park, Jaehyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.2 / pp.119-124 / 2013
  • The recent increase in demand for indoor localization requires more advanced hybrid technology. This paper proposes an application of a hybrid indoor localization method based on a position-coded pattern that can be used with other existing indoor localization techniques such as vision, beacon, or landmark techniques. To reduce the pattern-recognition error rate, an error detection and correction algorithm based on Hamming code was applied. Indoor localization experiments based on the proposed algorithm were performed using a QCIF-grade CMOS sensor and a position-coded pattern with an area of 1.7 × 1.7 mm². The experiments showed that the position recognition error ratio was less than 0.9% with 0.4 mm localization accuracy. The results suggest that the proposed method can feasibly be applied to the localization of indoor mobile service robots.
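The Hamming-code correction mentioned above can be illustrated with the classic Hamming(7,4) code, which detects and corrects any single flipped bit. This is a generic textbook sketch, not the paper's exact pattern layout or code parameters.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over bit positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over bit positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over bit positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit in a 7-bit codeword and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

A misread dot in the position pattern corresponds to a flipped bit, which the syndrome locates and repairs before the position is decoded.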

Ceiling-Based Localization of Indoor Robots Using Ceiling-Looking 2D-LiDAR Rotation Module (천장지향 2D-LiDAR 회전 모듈을 이용한 실내 주행 로봇의 천장 기반 위치 추정)

  • An, Jae Won;Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.22 no.7 / pp.780-789 / 2019
  • In this paper, we propose a new indoor localization method for indoor mobile robots using LiDAR. Indoor mobile robots operating in limited areas usually require high-precision localization to provide high-level services. The performance of the widely used localization methods based on radio waves or computer vision is highly dependent on the usage environment, so the reproducibility of the localization is insufficient to provide high-level services. To overcome this problem, we propose a new localization method based on a comparison between ceiling shape information obtained from LiDAR measurements and the building blueprint. Specifically, the method includes a reliable segmentation method that classifies point clouds into connected planes, and an effective comparison method that estimates position by matching the 3D point clouds against the 2D blueprint information. Since ceiling shape information rarely changes, the proposed localization method is robust to its usage environment. Simulation results show that the position error of the proposed localization method is less than 10 cm.

A study on localization and compensation of mobile robot using fusion of vision and ultrasound (영상 및 거리정보 융합을 이용한 이동로봇의 위치 인식 및 오차 보정에 관한 연구)

  • Jang, Cheol-Woong;Jung, Ki-Ho;Jung, Dae-Sub;Ryu, Je-Goon;Shim, Jae-Hong;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2006.10c / pp.554-556 / 2006
  • A key capability of an autonomous mobile robot is to localize itself. In this paper, we propose vision-based localization with compensation of the robot's position using ultrasound. The mobile robot travels along a wall, detects features in the indoor environment, transforms them into absolute coordinates of the real environment, and builds a map from these points. Because the robot travels along the wall, it also acquires information about its surroundings. For localization, candidate positions of the robot are found by ultrasound, and the final position is decided among the candidates by feature matching.


A Bimodal Approach for Land Vehicle Localization

  • Kim, Seong-Baek;Choi, Kyung-Ho;Lee, Seung-Yong;Choi, Ji-Hoon;Hwang, Tae-Hyun;Jang, Byung-Tae;Lee, Jong-Hun
    • ETRI Journal / v.26 no.5 / pp.497-500 / 2004
  • In this paper, we present a novel idea for integrating a low-cost inertial measurement unit (IMU) and the Global Positioning System (GPS) for land vehicle localization. By taking advantage of positioning data calculated from images using photogrammetry and stereo-vision techniques, errors caused by GPS outages in land vehicle localization are significantly reduced in the proposed bimodal approach. More specifically, positioning data from the photogrammetric approach are fed back into the Kalman filter to reduce and compensate for IMU errors and improve performance. Experimental results show the robustness of the proposed method, which can reduce positioning errors caused by a low-cost IMU when a GPS signal is not available in urban areas.
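The feedback of photogrammetric positions into a Kalman filter follows the standard predict/update cycle. A minimal scalar illustration of that cycle (not the paper's full multi-state filter; variable names and noise values are illustrative):

```python
def kalman_predict(x, P, u, Q):
    """Prediction step: propagate the state with a motion increment u
    (e.g. integrated IMU displacement) and add process noise variance Q."""
    return x + u, P + Q

def kalman_update(x, P, z, R):
    """Measurement update: fuse the predicted state (x, P) with an
    observation z (e.g. a photogrammetric or GPS position) of variance R."""
    K = P / (P + R)                 # Kalman gain: trust in the measurement
    return x + K * (z - x), (1 - K) * P
```

During a GPS outage, the update step would simply use the photogrammetric position as z, keeping the IMU drift bounded.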


Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) show successful performance in many computer vision applications such as image classification and object detection. For implementation in embedded systems with limited processing power and memory, a deep learning network may need to be simplified. However, a simplified deep learning network cannot learn every possible scene. One realistic strategy for an embedded deep learning network is to construct a simplified network model optimized for the scene images of the installation place; automatic training then becomes necessary for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach and, after a reassurance process by a GoogLeNet-based CNN, finally refines them more precisely (tightly) by applying a saliency object detection technique. The improvement of the proposed method with respect to accuracy and tightness is shown through several experiments.