• Title/Abstract/Keyword: Image based localization

Search results: 258 items (processing time: 0.026 s)

주행 로봇 움직임 추정용 스테레오 적외선 조명 기반 Visibility 센서 (Visibility Sensor with Stereo Infrared Light Sources for Mobile Robot Motion Estimation)

  • 이민영;이수용
    • 제어로봇시스템학회논문지 / Vol. 17, No. 2 / pp.108-115 / 2011
  • This paper describes a new sensor system for mobile robot motion estimation that uses stereo infrared light sources and a camera. Visibility analysis is applied to robotic obstacle-avoidance path planning and localization. Using a simple visibility computation, the environment is partitioned into visibility sectors; based on the recognized edges, the sector a robot belongs to is identified, which greatly reduces the search area for localization. Geometric modeling of the vision system enables estimation of the characteristic pixel position with respect to the robot movement. Finite-difference analysis is used for incremental movements, and the error sources are investigated. With two characteristic points in the image, such as vertices, the robot position and orientation are successfully estimated.
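
The pose-from-two-points idea in the abstract above can be illustrated with a small 2D alignment computation. The sketch below is a generic formulation, not the paper's finite-difference derivation: given two landmarks whose coordinates are known in the world frame and measured in the robot frame, the robot pose follows from a two-point rigid alignment (the numbers in the check are made up).

```python
import numpy as np

def pose_from_two_landmarks(world_pts, robot_pts):
    """Estimate 2D robot pose (x, y, theta) from two landmarks whose
    positions are known in the world frame and measured in the robot frame.
    Solves world = R(theta) @ robot + t as a two-point rigid alignment."""
    w0, w1 = map(np.asarray, world_pts)
    r0, r1 = map(np.asarray, robot_pts)
    # Heading: angle between the landmark baseline as seen in both frames.
    dw, dr = w1 - w0, r1 - r0
    theta = np.arctan2(dw[1], dw[0]) - np.arctan2(dr[1], dr[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Translation that maps the robot-frame measurements onto the world points.
    t = 0.5 * ((w0 - R @ r0) + (w1 - R @ r1))
    return t[0], t[1], theta

# Hypothetical check: measurements generated for a robot at (2, 1), heading 30 deg.
x, y, th = pose_from_two_landmarks([(5.0, 4.0), (1.0, 6.0)],
                                   [(4.098, 1.098), (1.634, 4.830)])
print(x, y, np.degrees(th))   # approximately 2.0 1.0 30.0
```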

넓은 실내 공간에서 반복적인 칼라패치의 6각형 배열에 의한 이동로봇의 위치계산 (Mobile Robot Localization Based on Hexagon Distributed Repeated Color Patches in Large Indoor Area)

  • 진홍신;왕실;한후석;김형석
    • 제어로봇시스템학회논문지 / Vol. 15, No. 4 / pp.445-450 / 2009
  • This paper presents a new mobile robot localization method for indoor robot navigation. The method uses hexagonally distributed color-coded patches on the ceiling, observed by a camera installed on the robot facing the ceiling. The proposed "cell-coded map", which uses only seven kinds of color-coded landmarks distributed in a hexagonal pattern, reduces both the complexity of the landmark structure and the landmark recognition error, and the technique is applicable to indoor spaces of unlimited size. The structure of the landmarks and the recognition method are introduced, and two strict rules are used to ensure the correctness of the recognition. Experimental results show that the method is effective.
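
As a rough illustration of how a cell-coded ceiling map might be queried, the sketch below assumes that each cell is identified by an ordered triple of neighboring patch colors and that a pre-surveyed lookup table maps that code to hexagonal grid coordinates; the paper's actual code structure and its two verification rules are not reproduced.

```python
# Illustrative sketch only (assumed structure, not the paper's exact scheme).
COLORS = ["red", "green", "blue", "cyan", "magenta", "yellow", "white"]  # 7 codes

def build_cell_map(cells):
    """cells: list of ((c1, c2, c3), (row, col)) entries surveyed in advance."""
    return {code: rc for code, rc in cells}

def localize(observed_code, cell_map, patch_pitch_m=1.0):
    """Return an approximate (x, y) in meters for an observed color-code triple."""
    if observed_code not in cell_map:
        return None  # unrecognized code; a real system would apply rejection rules
    row, col = cell_map[observed_code]
    # Hexagonal packing: odd rows are offset by half a pitch horizontally.
    x = col * patch_pitch_m + (0.5 * patch_pitch_m if row % 2 else 0.0)
    y = row * patch_pitch_m * 0.866  # sqrt(3)/2 vertical spacing
    return x, y

cell_map = build_cell_map([(("red", "green", "blue"), (0, 0)),
                           (("green", "blue", "cyan"), (0, 1))])
print(localize(("green", "blue", "cyan"), cell_map))
```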

실내 환경에서의 이동로봇의 위치추정을 위한 카메라 센서 네트워크 기반의 실내 위치 확인 시스템 (Indoor Positioning System Based on Camera Sensor Network for Mobile Robot Localization in Indoor Environments)

  • 지용훈
    • 제어로봇시스템학회논문지 / Vol. 22, No. 11 / pp.952-959 / 2016
  • This paper proposes a novel indoor positioning system (IPS) that uses a calibrated camera sensor network and dense 3D map information. Position information in the proposed IPS is obtained by generating a bird's-eye image from multiple camera images; thus, the IPS can provide accurate position information when objects (e.g., the mobile robot or pedestrians) are detected from multiple camera views. The proposed IPS is evaluated in a real environment with moving objects in a wireless camera sensor network. The results demonstrate that it provides accurate position information for moving objects, which can improve the localization performance of mobile robot operation.
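
A minimal sketch of the bird's-eye projection step follows, under the common assumption that each calibrated camera provides a homography between the ground plane and its image; the matrix values and the simple averaging across views are placeholders for illustration, not the paper's calibration or fusion procedure.

```python
import numpy as np

# Assumed 3x3 homography mapping ground-plane coordinates (meters) to pixels
# for one calibrated camera; the values below are made up for illustration.
H = np.array([[400.0,  20.0, 320.0],
              [  5.0, 300.0, 240.0],
              [  0.0,   0.1,   1.0]])

def pixel_to_ground(u, v, H):
    """Back-project an image pixel onto the z = 0 ground plane."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # (x, y) in world units

def fuse_views(estimates):
    """Average position estimates of the same object seen from multiple cameras."""
    return tuple(np.mean(np.asarray(estimates), axis=0))

# One estimate computed from a detection at pixel (350, 260), one literal
# placeholder estimate standing in for a second camera's result.
print(fuse_views([pixel_to_ground(350, 260, H), (0.07, 0.06)]))
```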

A Bimodal Approach for Land Vehicle Localization

  • Kim, Seong-Baek;Choi, Kyung-Ho;Lee, Seung-Yong;Choi, Ji-Hoon;Hwang, Tae-Hyun;Jang, Byung-Tae;Lee, Jong-Hun
    • ETRI Journal / Vol. 26, No. 5 / pp.497-500 / 2004
  • In this paper, we present a novel idea for integrating a low-cost inertial measurement unit (IMU) and the Global Positioning System (GPS) for land vehicle localization. By taking advantage of positioning data computed from images using photogrammetry and stereo-vision techniques, errors caused by GPS outages are significantly reduced in the proposed bimodal approach. More specifically, positioning data from the photogrammetric approach are fed back into the Kalman filter to reduce and compensate for IMU errors and improve performance. Experimental results show the robustness of the proposed method, which can be used to reduce the positioning errors of a low-cost IMU when a GPS signal is not available in urban areas.
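
The feedback of photogrammetric position fixes into the filter can be illustrated with a generic linear Kalman filter, sketched below under assumed noise values: the prediction step stands in for IMU dead reckoning, and the update step ingests whatever absolute position fix is available (GPS, or the vision-based estimate during an outage). This is a textbook constant-velocity filter, not the paper's actual formulation.

```python
import numpy as np

dt = 0.1
# State: [x, y, vx, vy]; constant-velocity model.
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
Hm = np.hstack([np.eye(2), np.zeros((2, 2))])         # we observe position only
Q = 0.01 * np.eye(4)                                   # process (drift) noise, assumed
R_gps, R_vision = 2.0 * np.eye(2), 0.5 * np.eye(2)     # measurement noises, assumed

x, P = np.zeros(4), np.eye(4)

def predict(x, P):
    """Propagate the state; stands in for IMU dead reckoning."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct the state with an absolute position fix z = [x_meas, y_meas]."""
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    x = x + K @ (z - Hm @ x)
    return x, (np.eye(4) - K @ Hm) @ P

x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 0.5]), R_vision)    # vision fix during a GPS outage
print(x[:2])
```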


Autonomous swimming technology for an AUV operating in the underwater jacket structure environment

  • Li, Ji-Hong;Park, Daegil;Ki, Geonhui
    • International Journal of Naval Architecture and Ocean Engineering / Vol. 11, No. 2 / pp.679-687 / 2019
  • This paper presents autonomous swimming technology developed for an Autonomous Underwater Vehicle (AUV) operating in an underwater jacket-structure environment. To prevent position divergence of the inertial navigation system used as the primary navigation solution for the vehicle, marker-recognition-based underwater localization methods were developed using both optical and acoustic cameras. However, both methods require the artificial markers to be close to the cameras mounted on the vehicle; when the vehicle is far from the structure on which the markers are mounted, an alternative position-aiding solution is needed to guarantee navigation accuracy. For this purpose, a sonar-image-processing-based underwater localization method is developed using a Forward Looking Sonar (FLS) mounted on the front of the vehicle. The primary purpose of this FLS is to detect obstacles in front of the vehicle. Based on the detected obstacles, an Occupancy Grid Map (OGM) based path-planning algorithm is applied to derive a collision-free reference path. Experimental studies carried out in a water tank and in the sea at Pohang Yeongilman port demonstrate the effectiveness of the proposed autonomous swimming technology.
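
The OGM-based planning step can be illustrated with a minimal grid search. In the sketch below a hand-written occupancy grid stands in for one built from FLS detections, and a plain breadth-first search finds a collision-free path; the paper's grid construction and path-smoothing details are not reproduced.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid.
    grid[r][c] == 1 marks an occupied cell; returns a list of (r, c) cells."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            break
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                      # no collision-free path found
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

# Toy grid: 1s mark cells occupied by detected obstacles.
occupancy = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 0]]
print(plan_path(occupancy, (0, 0), (3, 3)))
```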

색상 및 채도 값에 의한 이미지 코드의 칼라 인식 (Recognition of Colors of Image Code Using Hue and Saturation Values)

  • 김태우;박흥국;유현중
    • 한국콘텐츠학회논문지 / Vol. 5, No. 4 / pp.150-159 / 2005
  • With growing interest in ubiquitous computing, image codes are attracting attention in a variety of fields. Image codes matter for ubiquitous computing because, in addition to their cost advantage, they can complement or replace RFID (radio frequency identification) in many areas. However, their applications are still very limited because severe color distortion makes it difficult to read the colors accurately. This paper presents an efficient method for image code recognition, including automatic detection of image codes using the hue and saturation values of their colors. The experiments used the design judged to be the most practical among those currently in commercial use. This image code uses six safe colors: R, G, B, C, M, and Y. Seventy-two true-color field images of size 2464×1632 were used as test images. With histogram-based color correction, the code detection accuracy was 96% and the color classification accuracy for the detected codes was 91.28%. Detecting and recognizing an image code took about 5 seconds on a 2 GHz P4 PC.
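
The hue/saturation classification idea can be sketched as follows. The hue centers, saturation threshold, and example pixels are assumed values for illustration only; the paper's own thresholds and its histogram-based color correction are not reproduced.

```python
import colorsys

# Assumed hue centers (on a 0-1 hue circle) for the six "safe" code colors.
HUE_CENTERS = {"R": 0.0, "Y": 1 / 6, "G": 2 / 6, "C": 3 / 6, "B": 4 / 6, "M": 5 / 6}

def classify_pixel(r, g, b, min_saturation=0.2):
    """Classify an RGB pixel (0-255 per channel) as one of R, G, B, C, M, Y, or None."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < min_saturation:          # too gray to be a code color
        return None
    # Pick the nearest hue center, accounting for hue wrap-around.
    return min(HUE_CENTERS, key=lambda k: min(abs(h - HUE_CENTERS[k]),
                                              1.0 - abs(h - HUE_CENTERS[k])))

print(classify_pixel(200, 40, 30))    # -> 'R'
print(classify_pixel(30, 180, 170))   # -> 'C'
```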


Bridge Inspection and condition assessment using Unmanned Aerial Vehicles (UAVs): Major challenges and solutions from a practical perspective

  • Jung, Hyung-Jo;Lee, Jin-Hwan;Yoon, Sungsik;Kim, In-Ho
    • Smart Structures and Systems / Vol. 24, No. 5 / pp.669-681 / 2019
  • Bridge collapses can have a severely negative impact on society. Among the many causes, poor maintenance has become a main contributing factor in many recent collapses, and the aging of bridges makes the situation worse. To prevent such events, continuous bridge monitoring and timely maintenance are indispensable. Visual inspection is the most widely used method, but it depends heavily on the experience of the inspectors and is time-consuming, labor-intensive, costly, disruptive, and even unsafe for the inspectors. To address these limitations, increasing attention has been paid in recent years to the use of unmanned aerial vehicles (UAVs), which are expected to make the inspection process safer, faster, and more cost-effective; they can also cover areas that are too difficult for inspectors to reach. However, this strategy is still at an early stage, and many issues must be addressed before real-world implementation. In this paper, a typical procedure for bridge inspection using UAVs, consisting of three phases (pre-inspection, inspection, and post-inspection), is described along with the detailed tasks in each phase. Three major challenges, related to UAV flight, image data acquisition, and damage identification, are identified from a practical perspective (e.g., localization of a UAV under the bridge, high-quality image capture), and possible solutions are discussed by examining recently developed or currently developing techniques such as a graph-based localization algorithm and an image quality assessment and enhancement strategy. In particular, deep-learning-based algorithms such as R-CNN and Mask R-CNN for automatically classifying, localizing, and quantifying several damage types (e.g., cracks, corrosion, spalling, efflorescence) are discussed. This strategy relies on a large amount of image data obtained from unmanned inspection equipment consisting of the UAV and imaging devices (vision and IR cameras).
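
For the damage-identification phase, the abstract mentions Mask R-CNN. A minimal inference sketch using the COCO-pretrained torchvision model is shown below; detecting bridge damage classes (cracks, corrosion, spalling, efflorescence) would require fine-tuning on a labeled damage dataset, which is not shown, and "bridge_photo.jpg" is a placeholder file name.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Mask R-CNN from torchvision; a damage detector would need
# this backbone fine-tuned on labeled crack/corrosion/spalling images.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("bridge_photo.jpg").convert("RGB"))
with torch.no_grad():
    output = model([image])[0]          # dict with boxes, labels, scores, masks

for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
    if score > 0.5:                     # keep confident detections only
        print(f"class {int(label)} at {box.tolist()} (score {score:.2f})")
```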

능동 구조광 영상기반 전방향 거리측정 (Omnidirectional Distance Measurement based on Active Structured Light Image)

  • 신진;이수영;홍영진;서진호
    • 제어로봇시스템학회논문지 / Vol. 16, No. 8 / pp.751-755 / 2010
  • In this paper, we propose an omnidirectional ranging system that effectively obtains distances in all directions (360°) from structured light images. The system consists of a laser structured light source and a catadioptric omnidirectional camera with a curved mirror. The proposed integro-differential structured light image processing algorithm makes the ranging system robust against environmental illumination conditions. The omnidirectional ranging system is useful for map building and self-localization of a mobile robot.
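
As a rough illustration of structured-light ranging, the sketch below uses an idealized single-viewpoint model in which a known vertical baseline separates the laser plane from the mirror's effective viewpoint and a linear calibration maps a pixel's radial offset to a ray elevation angle; the paper's calibrated catadioptric geometry and integro-differential image processing are not reproduced, and all constants are assumed.

```python
import math

def range_from_pixel(radius_px, baseline_m=0.20, k_rad_per_px=0.005):
    """Horizontal distance to the laser stripe, from its radial pixel offset.
    Assumes a linear pixel-to-elevation calibration and a known baseline."""
    elevation = k_rad_per_px * radius_px
    if elevation <= 0:
        return float("inf")
    return baseline_m / math.tan(elevation)

def scan(radii_px, angle_step_deg=1.0):
    """Turn per-bearing stripe radii into a list of (bearing_deg, range_m) pairs."""
    return [(i * angle_step_deg, range_from_pixel(r)) for i, r in enumerate(radii_px)]

print(scan([100, 120, 150]))
```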

A Vehicular License Plate Recognition Framework For Skewed Images

  • Arafat, M.Y.;Khairuddin, A.S.M.;Paramesran, R.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 11 / pp.5522-5540 / 2018
  • Vehicular license plate (LP) recognition has recently become a significant field of research, with many studies addressing the challenges posed by varying illumination and viewing angles. This research focuses on restricted conditions such as images containing only one vehicle, a stationary background, and no prior angular adjustment of the skewed images. A real-time vehicular LP recognition scheme is proposed for skewed images, covering LP detection, segmentation, and recognition. A polar coordinate transformation procedure is used to rectify the skewed vehicle images, and a window-scanning procedure based on the texture characteristics of the image is used for candidate localization. Connected component analysis (CCA) with eight-connected neighborhoods is then applied to the binary image for character segmentation, and optical character recognition is finally applied to recognize the characters. To measure performance, 300 skewed images with various illumination conditions and tilt angles were tested. The results show that the proposed method achieves 96.3% localization, 95.4% segmentation, and 94.2% recognition accuracy, with an average localization time of 0.52 s.
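
The character-segmentation step based on connected component analysis can be sketched with OpenCV as below. The size thresholds are assumed values and "plate.png" is a placeholder for an already-deskewed, cropped plate image; the detection, deskewing, and OCR stages are not shown.

```python
import cv2

# Binarize a cropped plate image and segment character blobs with
# 8-connected component analysis, keeping blobs of plausible character size.
plate = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

chars = []
for i in range(1, n_labels):                      # label 0 is the background
    x, y, w, h, area = stats[i]
    if 0.2 * binary.shape[0] < h < 0.95 * binary.shape[0] and area > 50:
        chars.append((x, binary[y:y + h, x:x + w]))

chars.sort(key=lambda c: c[0])                    # left-to-right reading order
print(f"segmented {len(chars)} candidate characters")
```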

무인로봇 정밀위치추정을 위한 전술통신 및 영상 기반의 통합항법 성능 분석 (The Performance Analysis of Integrated Navigation System Based on the Tactical Communication and VISION for the Accurate Localization of Unmanned Robot)

  • 최지훈;박용운;송재복;권인소
    • 한국군사과학기술학회지 / Vol. 14, No. 2 / pp.271-280 / 2011
  • This paper presents an outdoor navigation system based on tactical communication and vision, applied to an unmanned robot for perimeter surveillance operations. The robot's GPS errors are compensated using the reference station of the C2 (command and control) vehicle, and WiBro (Wireless Broadband) is used for communication between the two systems. In outdoor environments, GPS signals can easily be blocked by trees and buildings; in such environments, however, a vision system is very effective because many visual features are available. With a feature map of the operating environment, the robot can estimate its position by image matching and pose estimation. In the navigation system, the operating mode is therefore switched by a navigation manager according to the environmental conditions. The experimental results show that the unmanned robot can estimate its position accurately in outdoor environments.
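
The mode-switching behavior of the navigation manager can be illustrated with a small decision function. The structure, thresholds, and mode names below are assumptions for illustration, not the paper's actual design.

```python
from dataclasses import dataclass

@dataclass
class GpsStatus:
    num_satellites: int
    hdop: float                         # horizontal dilution of precision

@dataclass
class VisionStatus:
    matched_features: int               # features matched against the map

def select_mode(gps: GpsStatus, vision: VisionStatus) -> str:
    """Choose a localization source based on signal quality (assumed thresholds)."""
    gps_ok = gps.num_satellites >= 6 and gps.hdop < 2.0
    vision_ok = vision.matched_features >= 30
    if gps_ok:
        return "GPS_CORRECTED"          # use reference-station-corrected GPS
    if vision_ok:
        return "VISION"                 # fall back to image matching
    return "DEAD_RECKONING"             # odometry/IMU only until either recovers

print(select_mode(GpsStatus(3, 4.5), VisionStatus(120)))   # -> VISION
```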