• Title/Summary/Keyword: Upward-looking camera

Artificial Landmark based Pose-Graph SLAM for AGVs in Factory Environments (공장환경에서 AGV를 위한 인공표식 기반의 포즈그래프 SLAM)

  • Heo, Hwan; Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.10 no.2 / pp.112-118 / 2015
  • This paper proposes a pose-graph-based SLAM method using an upward-looking camera and artificial landmarks for AGVs in factory environments. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation with a low-cost camera. SLAM is performed by optimizing the AGV's explored path using artificial landmarks installed at various locations on the ceiling. As the AGV explores, pose nodes are added at fixed odometry-based distance intervals, and landmark nodes are registered whenever the AGV recognizes a fiducial marker. The resulting graph network is optimized with the g2o tool so that the error accumulated from wheel slip is minimized (see the sketch after this entry). Experiments show that the proposed method is robust for SLAM in real factory environments.
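
The pose graph described here has two node types (AGV poses from odometry, ceiling landmarks from fiducial detections) joined by relative-measurement edges, and g2o minimizes the total constraint error. Below is a minimal 2-D sketch of that structure, assuming scipy.optimize stands in for g2o; the odometry and landmark measurements are made-up illustration values, not data from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Keep angle residuals in (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

n_poses, n_lms = 3, 2          # 3 AGV pose nodes, 2 ceiling landmark nodes

# Odometry edges: measured motion (dx, dy, dtheta) between consecutive
# poses, expressed in the earlier pose's frame (illustrative values).
odom = [(0, 1, (1.0, 0.0, 0.0)),
        (1, 2, (1.0, 0.0, 0.1))]

# Landmark edges: fiducial position observed in the robot frame, as an
# upward-looking camera would report it (illustrative values).
obs = [(0, 0, (0.5, 1.0)), (1, 0, (-0.5, 1.0)),
       (1, 1, (0.6, -1.0)), (2, 1, (-0.45, -1.05))]

def unpack(x):
    poses = x[:3 * n_poses].reshape(n_poses, 3)   # (x, y, theta) per pose
    lms = x[3 * n_poses:].reshape(n_lms, 2)       # (x, y) per landmark
    return poses, lms

def to_frame(pose, point):
    """Express a world point in the given pose's local frame."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    dx, dy = point[0] - x, point[1] - y
    return np.array([c * dx + s * dy, -s * dx + c * dy])

def residuals(x):
    poses, lms = unpack(x)
    r = [poses[0]]                                # anchor pose 0 at the origin
    for i, j, (dx, dy, dth) in odom:              # odometry constraints
        p = to_frame(poses[i], poses[j, :2])
        r.append([p[0] - dx, p[1] - dy,
                  wrap(poses[j, 2] - poses[i, 2] - dth)])
    for i, k, (lx, ly) in obs:                    # landmark constraints
        p = to_frame(poses[i], lms[k])
        r.append([p[0] - lx, p[1] - ly])
    return np.concatenate([np.atleast_1d(v) for v in r])

sol = least_squares(residuals, np.zeros(3 * n_poses + 2 * n_lms))
poses_opt, lms_opt = unpack(sol.x)
print("optimized poses:\n", poses_opt)
print("optimized landmarks:\n", lms_opt)
```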

Global Localization Based on Ceiling Image Map (천장 영상지도 기반의 전역 위치추정)

  • Heo, Hwan; Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.9 no.3 / pp.170-177 / 2014
  • This paper proposes a novel global localization method based on an upward-looking camera and a ceiling image map. The ceiling images obtained through the SLAM process are integrated into a ceiling image map using a particle filter. Global localization is performed by matching the current ceiling image against the ceiling image map using SURF keypoint correspondences (a sketch of this matching step follows this entry). The robot pose is then estimated by the coordinate transformation from the ceiling image map to the global coordinate system. A series of experiments shows that the proposed method is robust in real environments.
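
A minimal sketch of that matching step, assuming OpenCV: keypoints in the current ceiling image are matched against the map image, and a RANSAC rigid fit recovers the robot pose in map coordinates. The paper uses SURF (available via opencv-contrib as cv2.xfeatures2d.SURF_create); patent-free ORB stands in here, and the file names and map scale are illustrative assumptions.

```python
import cv2
import numpy as np

# Assumed inputs: a stitched ceiling image map and the current upward view.
ceiling_map = cv2.imread("ceiling_map.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("current_ceiling.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)              # stand-in for SURF
kp_map, des_map = orb.detectAndCompute(ceiling_map, None)
kp_cur, des_cur = orb.detectAndCompute(current, None)

# Brute-force Hamming matching with Lowe's ratio test to prune ambiguity.
pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_cur, des_map, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp_cur[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_map[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC fit of a similarity transform (rotation, translation, scale)
# from the current image into map coordinates.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# The image centre of the current view maps to the robot position on the
# map; the rotation angle gives the heading (up to map-to-world calibration).
h, w = current.shape
px, py = M @ np.array([w / 2.0, h / 2.0, 1.0])
theta = np.arctan2(M[1, 0], M[0, 0])
PIXELS_PER_M = 100.0                              # assumed map scale
print(f"pose: ({px / PIXELS_PER_M:.2f} m, {py / PIXELS_PER_M:.2f} m, "
      f"{np.degrees(theta):.1f} deg)")
```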

Robust Global Localization based on Environment map through Sensor Fusion (센서 융합을 통한 환경지도 기반의 강인한 전역 위치추정)

  • Jung, Min-Kuk; Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.9 no.2 / pp.96-103 / 2014
  • Global localization is one of the essential issues in mobile robot navigation. This study proposes an indoor global localization method that uses a Kinect sensor and a monocular upward-looking camera. The method builds an environment map consisting of a grid map, a ceiling feature map from the upward-looking camera, and a spatial feature map obtained from the Kinect sensor. Robot pose candidates are selected using the spatial feature map, and the sample poses are updated with a particle filter based on the grid map. Localization success is determined by computing the matching error against the ceiling feature map (the skeleton after this entry outlines this loop). In various experiments, the proposed method achieved a position accuracy of 0.12 m and an update time of 10.4 s, which is robust enough for real-world applications.
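
A skeleton of that loop, with every sensor model stubbed out: candidates come from the spatial feature map, the particle filter weights them against the grid map, and the ceiling feature map supplies the success test. All helper functions, the noise levels, and the 0.2 error threshold are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_candidates(n):
    """Placeholder: pose candidates (x, y, theta) from Kinect spatial features."""
    return rng.uniform([-5.0, -5.0, -np.pi], [5.0, 5.0, np.pi], size=(n, 3))

def grid_map_likelihood(poses, scan):
    """Placeholder: how well the scan fits the occupancy grid at each pose."""
    return rng.uniform(0.0, 1.0, size=len(poses))

def ceiling_match_error(pose, image):
    """Placeholder: matching error against the ceiling feature map."""
    return rng.uniform(0.0, 0.5)

def global_localize(scan, ceiling_image, n_particles=500, max_iters=20):
    particles = spatial_candidates(n_particles)   # seeded, not uniform over the map
    for _ in range(max_iters):
        w = grid_map_likelihood(particles, scan)
        w = w / w.sum()
        best = particles[np.argmax(w)]
        # Declare success only when the ceiling feature map also agrees.
        if ceiling_match_error(best, ceiling_image) < 0.2:
            return best
        # Resample in proportion to grid-map fit, then jitter the survivors.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(
            0.0, [0.05, 0.05, 0.02], size=(n_particles, 3))
    return None                                   # localization not confirmed

print(global_localize(scan=None, ceiling_image=None))
```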

Point Pattern Matching Based Global Localization using Ceiling Vision (천장 조명을 이용한 점 패턴 매칭 기반의 광역적인 위치 추정)

  • Kang, Min-Tae; Sung, Chang-Hun; Roh, Hyun-Chul; Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2011.07a / pp.1934-1935 / 2011
  • For a service robot to perform its tasks, autonomous navigation techniques such as localization, mapping, and path planning are required. Localization (estimating the robot's pose) is a fundamental capability for autonomous navigation. This paper proposes a new point-pattern-matching-based visual global localization system that uses spot lights on the ceiling. The proposed algorithm is suitable for systems that demand high accuracy and a fast update rate, such as a guide robot in an exhibition hall. A single camera looking upward (a ceiling vision system) is mounted on the head of the mobile robot, and image features such as lights are detected and tracked through the image sequence. To detect more spot lights, a wide-FOV lens is used, which inevitably introduces severe image distortion; however, applying the distortion correction only to the positions of the spot lights, rather than to every image pixel, reduces the processing time. Point pattern matching and least-squares estimation then yield the precise position and orientation of the mobile robot (a sketch of this pipeline follows this entry). Experimental results demonstrate the accuracy and update rate of the proposed algorithm in real environments.
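
A sketch of that pipeline in OpenCV/NumPy: spot lights are detected as bright blobs, only their coordinates are undistorted with cv2.undistortPoints, and a least-squares (Kabsch) fit aligns them with a known light map. The intrinsics, distortion coefficients, light map, threshold, and ceiling height are all illustrative assumptions, and the point correspondences are taken as given, standing in for the paper's pattern-matching step.

```python
import cv2
import numpy as np

K = np.array([[400.0, 0.0, 320.0],               # assumed camera intrinsics
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])    # assumed wide-FOV distortion

def detect_spots(gray):
    """Bright ceiling lights as blob centroids."""
    _, bw = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    _, _, _, centroids = cv2.connectedComponentsWithStats(bw)
    return centroids[1:]                          # drop the background component

def rigid_fit(src, dst):
    """Least-squares (Kabsch) rotation + translation mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

gray = cv2.imread("ceiling_frame.png", cv2.IMREAD_GRAYSCALE)
spots = detect_spots(gray).astype(np.float32).reshape(-1, 1, 2)

# Undistort only the spot coordinates (cheap) instead of the whole image.
norm = cv2.undistortPoints(spots, K, dist).reshape(-1, 2)

# Known light positions on the ceiling, in metres (illustrative map);
# assume the detections correspond to these lights in the same order,
# standing in for the paper's point-pattern-matching step.
light_map = np.array([[0.0, 0.0], [1.2, 0.0], [0.0, 1.2]])
CEILING_HEIGHT = 2.5                              # assumed camera-to-ceiling (m)

R, t = rigid_fit(norm * CEILING_HEIGHT, light_map)
print("position (m):", t,
      "heading (deg):", np.degrees(np.arctan2(R[1, 0], R[0, 0])))
```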
