• Title/Summary/Keyword: Image-based localization


Visibility Sensor with Stereo Infrared Light Sources for Mobile Robot Motion Estimation (주행 로봇 움직임 추정용 스테레오 적외선 조명 기반 Visibility 센서)

  • Lee, Min-Young;Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems, v.17 no.2, pp.108-115, 2011
  • This paper describes a new sensor system for mobile robot motion estimation using stereo infrared light sources and a camera. Visibility has been applied to robotic obstacle-avoidance path planning and localization. Using a simple visibility computation, the environment is partitioned into many visibility sectors. Based on the recognized edges, the sector the robot belongs to is identified, which greatly reduces the search area for localization. Geometric modeling of the vision system enables estimation of the characteristic pixel position with respect to the robot movement. Finite-difference analysis is used for incremental movement, and the error sources are investigated. With two characteristic points in the image, such as vertices, the robot position and orientation are successfully estimated.
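As a rough illustration of the final step in the abstract, pose from two characteristic points has a simple closed form: the heading follows from the angle of the landmark baseline in each frame, and the translation follows from rotating one landmark's body-frame offset. The function name, frames, and numbers below are our own illustration, not the paper's notation:

```python
import math

def pose_from_two_points(world, body):
    """Estimate robot pose (x, y, theta) from two landmarks whose positions
    are known in the world frame and measured in the robot (body) frame."""
    (wx1, wy1), (wx2, wy2) = world
    (bx1, by1), (bx2, by2) = body
    # Heading: difference between the landmark baseline angles in each frame
    theta = math.atan2(wy2 - wy1, wx2 - wx1) - math.atan2(by2 - by1, bx2 - bx1)
    # Translation: world position of landmark 1 minus its rotated body offset
    x = wx1 - (bx1 * math.cos(theta) - by1 * math.sin(theta))
    y = wy1 - (bx1 * math.sin(theta) + by1 * math.cos(theta))
    return x, y, theta

# A robot at (1, 2) facing +y (theta = pi/2) should be recovered exactly.
x, y, theta = pose_from_two_points([(2.0, 2.0), (1.0, 4.0)],
                                   [(0.0, -1.0), (2.0, 0.0)])
print(round(x, 3), round(y, 3), round(theta, 3))
```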

Mobile Robot Localization Based on Hexagon Distributed Repeated Color Patches in Large Indoor Area (넓은 실내 공간에서 반복적인 칼라패치의 6각형 배열에 의한 이동로봇의 위치계산)

  • Chen, Hong-Xin;Wang, Shi;Han, Hoo-Sek;Kim, Hyong-Suk
    • Journal of Institute of Control, Robotics and Systems, v.15 no.4, pp.445-450, 2009
  • This paper presents a new mobile robot localization method for indoor robot navigation. The method uses hexagonally distributed color-coded patches on the ceiling, recognized by a camera installed on the robot facing the ceiling. The proposed "cell-coded map", which uses only seven kinds of color-coded landmarks distributed in a hexagonal pattern, helps reduce the complexity of the landmark structure and the error of landmark recognition. The technique is applicable to navigation in an indoor space of unlimited size. The structure of the landmarks and the recognition method are introduced, and two rigid rules are applied to ensure the correctness of the recognition. Experimental results confirm that the method is useful.

Indoor Positioning System Based on Camera Sensor Network for Mobile Robot Localization in Indoor Environments (실내 환경에서의 이동로봇의 위치추정을 위한 카메라 센서 네트워크 기반의 실내 위치 확인 시스템)

  • Ji, Yonghoon;Yamashita, Atsushi;Asama, Hajime
    • Journal of Institute of Control, Robotics and Systems, v.22 no.11, pp.952-959, 2016
  • This paper proposes a novel indoor positioning system (IPS) that uses a calibrated camera sensor network and dense 3D map information. The proposed IPS information is obtained by generating a bird's-eye image from multiple camera images; thus, our proposed IPS can provide accurate position information when objects (e.g., the mobile robot or pedestrians) are detected from multiple camera views. We evaluate the proposed IPS in a real environment with moving objects in a wireless camera sensor network. The results demonstrate that the proposed IPS can provide accurate position information for moving objects. This can improve the localization performance for mobile robot operation.
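The bird's-eye construction the abstract relies on can be illustrated with a planar homography that maps a ground-plane pixel to floor coordinates. This is a generic sketch: the matrix H below is invented (roughly "1 pixel = 1 cm" with a shifted origin), while in the actual system it would come from the camera network calibration:

```python
def pixel_to_ground(H, u, v):
    """Apply a 3x3 homography H to pixel (u, v); return ground-plane (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w        # homogeneous normalization

# Hypothetical calibration result, for illustration only
H = [[0.01, 0.0, -1.0],
     [0.0, 0.01, -2.0],
     [0.0, 0.0, 1.0]]
gx, gy = pixel_to_ground(H, 300, 400)
print(gx, gy)
```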

A Bimodal Approach for Land Vehicle Localization

  • Kim, Seong-Baek;Choi, Kyung-Ho;Lee, Seung-Yong;Choi, Ji-Hoon;Hwang, Tae-Hyun;Jang, Byung-Tae;Lee, Jong-Hun
    • ETRI Journal, v.26 no.5, pp.497-500, 2004
  • In this paper, we present a novel idea for integrating a low-cost inertial measurement unit (IMU) and the Global Positioning System (GPS) for land vehicle localization. By taking advantage of positioning data calculated from images using photogrammetry and stereo-vision techniques, errors caused by a GPS outage are significantly reduced in the proposed bimodal approach. More specifically, positioning data from the photogrammetric approach are fed back into the Kalman filter to reduce and compensate for IMU errors and improve performance. Experimental results show the robustness of the proposed method, which can reduce positioning errors caused by a low-cost IMU when a GPS signal is not available in urban areas.
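The feedback loop the abstract describes can be sketched in one dimension: biased IMU dead reckoning drifts, and an occasional vision-derived position fix is fused in a scalar Kalman update that pulls the estimate back. All numbers (bias, noise variances, fix rate) are illustrative, not from the paper:

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update with fix z of noise variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

def dead_reckon(x, p, v, dt, q, bias):
    """Propagate position with a biased velocity; variance grows by q."""
    return x + (v + bias) * dt, p + q

x, p = 0.0, 1.0                          # position estimate and its variance
truth = 0.0
for step in range(1, 101):
    truth += 1.0 * 0.1                   # true motion: 1 m/s, dt = 0.1 s
    x, p = dead_reckon(x, p, v=1.0, dt=0.1, q=0.01, bias=0.2)
    if step % 10 == 0:                   # vision fix arrives every second
        x, p = kalman_update(x, p, z=truth, r=0.05)

print(abs(x - truth), p)                 # drift stays bounded despite the bias
```

Without the periodic update, the 0.2 m/s bias would accumulate to 2 m of error over the run; with it, the residual stays at the centimeter level.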

Autonomous swimming technology for an AUV operating in the underwater jacket structure environment

  • Li, Ji-Hong;Park, Daegil;Ki, Geonhui
    • International Journal of Naval Architecture and Ocean Engineering, v.11 no.2, pp.679-687, 2019
  • This paper presents the autonomous swimming technology developed for an Autonomous Underwater Vehicle (AUV) operating in an underwater jacket structure environment. To prevent position divergence of the inertial navigation system used as the primary navigation solution for the vehicle, we developed marker-recognition-based underwater localization methods using both optical and acoustic cameras. However, both methods require the artificial markers to be located close to the cameras mounted on the vehicle. Therefore, when the vehicle is far away from the structure on which the markers are usually mounted, an alternative position-aiding solution may be needed to guarantee navigation accuracy. For this purpose, we develop a sonar-image-processing-based underwater localization method using a Forward Looking Sonar (FLS) mounted on the front of the vehicle. The primary purpose of this FLS is to detect obstacles in front of the vehicle. According to the detected obstacle(s), we apply an Occupancy Grid Map (OGM) based path-planning algorithm to derive an obstacle-collision-free reference path. Experimental studies were carried out in a water tank and in the Pohang Yeongilman port sea environment to demonstrate the effectiveness of the proposed autonomous swimming technology.
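The OGM-based planning step can be illustrated with a toy grid: breadth-first search over occupied/free cells yields a shortest collision-free path in 4-connected moves. The grid and endpoints are invented for illustration; the paper does not specify its planner at this level of detail:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest list of (row, col) cells from start to goal
    through free cells (0), avoiding occupied cells (1), or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                          # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],                    # an "obstacle wall" forces a detour
        [0, 0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
print(path)
```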

Recognition of Colors of Image Code Using Hue and Saturation Values (색상 및 채도 값에 의한 이미지 코드의 칼라 인식)

  • Kim Tae-Woo;Park Hung-Kook;Yoo Hyeon-Joong
    • The Journal of the Korea Contents Association, v.5 no.4, pp.150-159, 2005
  • With the increase of interest in ubiquitous computing, image code is attracting attention in various areas. Image code is important in ubiquitous computing in that it can complement or replace RFID (radio frequency identification) in quite a few areas, and it is more economical. However, because precise colors are difficult to read under severe color distortion, its application has so far been quite restricted. In this paper, we present an efficient method of image code recognition, including automatically locating the image code, using hue and saturation values. In our experiments, we use an image code whose design seems the most practical among currently commercialized ones. This image code uses six safe colors: R, G, B, C, M, and Y. We tested 72 true-color field images with a size of 2464×1632 pixels. With histogram-based color calibration, the localization accuracy was about 96%, and the color-classification accuracy for localized codes was about 91.28%. It took approximately 5 seconds to locate and recognize the image code on a PC with a 2 GHz P4 CPU.
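The core classification idea, assigning a sufficiently saturated pixel to the nearest of the six safe hues (R, Y, G, C, B, M sit 60° apart on the hue circle), can be sketched as follows. The saturation threshold and the colorsys-based conversion are our illustrative choices, not the paper's calibration:

```python
import colorsys

SAFE_HUES = {"R": 0, "Y": 60, "G": 120, "C": 180, "B": 240, "M": 300}

def classify_color(r, g, b, min_saturation=0.2):
    """Map an RGB triple (0-1 floats) to one of R,G,B,C,M,Y, or None."""
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    if s < min_saturation:               # too gray to be a code color
        return None
    hue_deg = h * 360.0

    def dist(ref):                       # circular distance on the hue wheel
        d = abs(hue_deg - ref) % 360.0
        return min(d, 360.0 - d)

    return min(SAFE_HUES, key=lambda name: dist(SAFE_HUES[name]))

print(classify_color(0.9, 0.1, 0.1))     # reddish patch
print(classify_color(0.1, 0.8, 0.8))     # cyan-ish patch
```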

Bridge Inspection and condition assessment using Unmanned Aerial Vehicles (UAVs): Major challenges and solutions from a practical perspective

  • Jung, Hyung-Jo;Lee, Jin-Hwan;Yoon, Sungsik;Kim, In-Ho
    • Smart Structures and Systems, v.24 no.5, pp.669-681, 2019
  • Bridge collapses can have a severely negative impact on society. Among the many reasons bridges collapse, poor maintenance has become a main contributing factor in many recent collapses, and the aging of bridges can make the situation much worse. To prevent such unwanted events, continuous bridge monitoring and timely maintenance are indispensable. Visual inspection is the most widely used method, but it is heavily dependent on the experience of the inspectors; it is also time-consuming, labor-intensive, costly, disruptive, and even unsafe for the inspectors. To address these limitations, increasing interest has recently been paid to the use of unmanned aerial vehicles (UAVs), which are expected to make the inspection process safer, faster, and more cost-effective. In addition, a UAV can cover areas that are too hard for inspectors to reach. However, this strategy is still at a primitive stage, because many issues must be addressed before real implementation. In this paper, a typical procedure for bridge inspection using UAVs, consisting of three phases (i.e., pre-inspection, inspection, and post-inspection), and the detailed tasks in each phase are described. Three major challenges, related to the UAV's flight, image data acquisition, and damage identification, are identified from a practical perspective (e.g., localization of a UAV under the bridge, high-quality image capture), and their possible solutions are discussed by examining recently developed or currently developing techniques such as the graph-based localization algorithm and the image quality assessment and enhancement strategy. In particular, deep-learning-based algorithms such as R-CNN and Mask R-CNN for classifying, localizing, and quantifying several damage types (e.g., cracks, corrosion, spalling, and efflorescence) in an automatic manner are discussed. This strategy relies on a huge amount of image data obtained from unmanned inspection equipment consisting of the UAV and imaging devices (vision and IR cameras).

Omnidirectional Distance Measurement based on Active Structured Light Image (능동 구조광 영상기반 전방향 거리측정)

  • Shin, Jin;Yi, Soo-Yeong;Hong, Young-Jin;Suh, Jin-Ho
    • Journal of Institute of Control, Robotics and Systems, v.16 no.8, pp.751-755, 2010
  • In this paper, we propose an omnidirectional ranging system that can effectively obtain distances in all 360° directions based on structured-light imaging. The system consists of a laser structured-light source and a catadioptric omnidirectional camera with a curved mirror. The proposed integro-differential structured-light image processing algorithm makes the ranging system robust against environmental illumination conditions. The omnidirectional ranging system is useful for map building and self-localization of a mobile robot.

A Vehicular License Plate Recognition Framework For Skewed Images

  • Arafat, M.Y.;Khairuddin, A.S.M.;Paramesran, R.
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.11, pp.5522-5540, 2018
  • Vehicular license plate (LP) recognition has recently risen as a significant field of research, as researchers explore ways to cope with the challenges LPs pose under different illumination and angular conditions. This research focuses on restricted conditions: images containing only one vehicle, a stationary background, and no prior angular adjustment of the skewed images. A real-time vehicular LP recognition scheme for skewed images is proposed, covering detection, segmentation, and recognition of the LP. A polar coordinate transformation procedure is implemented to adjust the skewed vehicular images. A window-scanning procedure based on the texture characteristics of the image is then used for candidate localization. Next, connected component analysis (CCA) is applied to the binary image for character segmentation, with pixels connected in an eight-point neighbourhood process. Finally, optical character recognition is applied to recognize the characters. To measure performance, 300 skewed images under different illumination conditions and with various tilt angles were tested. The results show that the proposed method achieves an accuracy of 96.3% in localizing, 95.4% in segmenting, and 94.2% in recognizing the LPs, with an average localization time of 0.52 s.
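The CCA step can be illustrated with a minimal 8-connected labeling pass over a made-up binary image; each resulting component would correspond to one character candidate:

```python
def connected_components(img):
    """Label 8-connected foreground (1) regions; return a list of pixel sets."""
    rows, cols = len(img), len(img[0])
    seen, components = set(), []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1 and (r, c) not in seen:
                stack, comp = [(r, c)], set()  # flood fill from this seed
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.add((y, x))
                    for dy in (-1, 0, 1):      # eight-point neighbourhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < rows and 0 <= nx < cols \
                                    and img[ny][nx] == 1 \
                                    and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                components.append(comp)
    return components

binary = [[1, 1, 0, 0, 1],
          [0, 1, 0, 0, 1],
          [0, 0, 0, 1, 0]]   # (2,3) touches (1,4) diagonally: 8-connected
labels = connected_components(binary)
print(len(labels))           # two separate "character" blobs
```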

The Performance Analysis of Integrated Navigation System Based on the Tactical Communication and VISION for the Accurate Localization of Unmanned Robot (무인로봇 정밀위치추정을 위한 전술통신 및 영상 기반의 통합항법 성능 분석)

  • Choi, Ji-Hoon;Park, Yong-Woon;Song, Jae-Bok;Kweon, In-So
    • Journal of the Korea Institute of Military Science and Technology, v.14 no.2, pp.271-280, 2011
  • This paper presents a navigation system based on tactical communication and a vision system in outdoor environments, applied to an unmanned robot for perimeter surveillance operations. GPS errors of the robot are compensated by the reference station of the C2 (command and control) vehicle, and WiBro (Wireless Broadband) is used for communication between the two systems. In outdoor environments, GPS signals can easily be blocked by trees and buildings. In such environments, however, a vision system is very effective because many features are available. With a feature map of the operation environment, the robot can estimate its position by image matching and pose estimation. In the navigation system, operation modes are therefore switched by a navigation manager according to environmental conditions. The experimental results show that the unmanned robot can estimate its position very accurately in outdoor environments.
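The mode-switching behaviour described above can be caricatured as a small selection rule; the mode names and conditions are our simplified reading of the abstract, not the paper's actual navigation manager:

```python
def select_mode(gps_fix_ok, map_features_visible):
    """Pick a navigation mode from current sensing conditions."""
    if gps_fix_ok:
        return "GPS_WIBRO_CORRECTED"    # reference-station-corrected GPS
    if map_features_visible:
        return "VISION_MAP_MATCHING"    # pose from matching against feature map
    return "DEAD_RECKONING"             # fall back to odometry only

print(select_mode(True, True))
print(select_mode(False, True))
print(select_mode(False, False))
```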