• Title/Abstract/Keyword: Vision Based Navigation

Search results: 193 (processing time 0.026 s)

야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정 (UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments)

  • 최지훈;박용운;주상현;심성대;민지홍
    • 한국군사과학기술학회지, Vol. 15, No. 5, pp. 557-564, 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. A conventional GPS/INS integrated system does not guarantee robust localization because GPS is vulnerable to external disturbances. In many environments, however, a vision system is very effective because there are many features compared to open space, and these features provide rich information for UGV localization. This paper therefore uses scene-matching and pose-estimation based vision navigation, a magnetic compass, and an odometer to cope with GPS-denied environments. An NR-mode (no-reset) federated filter is used for system safety. Experimental results on a predefined path demonstrate improved robustness and accuracy of localization in outdoor environments.
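The core of a federated filter is the master-filter fusion step that combines the local filters' estimates by information-weighted averaging. A minimal 1-D sketch of that step (illustrative only; the paper's NR-mode filter, its state vector, and its local-filter design are not reproduced here, and the numbers in the example are made up):

```python
def fuse(estimates):
    """Fuse local-filter (mean, variance) pairs by information averaging.

    x_f = P_f * sum(x_i / P_i),   1 / P_f = sum(1 / P_i)
    """
    info = sum(1.0 / p for _, p in estimates)      # total information
    p_f = 1.0 / info                               # fused variance
    x_f = p_f * sum(x / p for x, p in estimates)   # fused mean
    return x_f, p_f

# Three hypothetical local filters, e.g. GPS/INS, vision, odometry:
x_fused, p_fused = fuse([(10.2, 4.0), (9.8, 1.0), (10.5, 2.0)])
```

The fused estimate is pulled toward the local filters with the smallest variance; in no-reset (NR) mode the local filters keep their own covariances instead of being reset from the master, which is what gives the scheme its fault tolerance.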

센서 융합 기반 정밀 측위를 위한 노면 표시 검출 (Road Surface Marking Detection for Sensor Fusion-based Positioning System)

  • 김동석;정호기
    • 한국자동차공학회논문집, Vol. 22, No. 7, pp. 107-116, 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. Lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM; RSM detection generates candidates in those regions and classifies their types. The system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection is varied and the detection method is switched according to an FSM (Finite State Machine) describing the driving situation. A single template matching step extracts features for both lane marking detection and RSM detection, implemented efficiently with a horizontal integral image. Finally, multiple verification steps minimize false detections.
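The horizontal integral image mentioned above can be sketched as follows: each row stores cumulative sums, so the sum of any horizontal pixel run is obtained in O(1), which is what makes repeated template matching along a scan line cheap (a generic sketch; the paper's actual template and feature definition are not given in the abstract):

```python
def horizontal_integral(image):
    """H[y][x] = sum of image[y][0:x]; each row gets a leading zero column."""
    return [[0] + [sum(row[:x + 1]) for x in range(len(row))] for row in image]

def row_sum(H, y, x1, x2):
    """Sum of pixels image[y][x1:x2] in O(1) via the integral image."""
    return H[y][x2] - H[y][x1]

# Example: sum of the middle run of a row without re-scanning the pixels.
H = horizontal_integral([[1, 2, 3], [4, 5, 6]])
```

(The list-comprehension construction above is O(width²) per row for brevity; a running-sum loop builds the same table in O(width).)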

이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합 (Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot)

  • 김민영;안상태;조형석
    • 제어로봇시스템학회논문지, Vol. 16, No. 4, pp. 381-390, 2010
  • This paper describes a map-based localization procedure for mobile robots that uses sensor fusion in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous, as the sensors complement and cooperate with each other to obtain better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is fused by a Bayesian technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization with the monocular camera, the robot extracts vertical edge lines from input images and uses them as natural landmark points. With the laser structured light sensor, it uses geometric features, corners and planes, extracted from range data at a constant height above the floor as natural landmark shapes. Although each feature group alone is sometimes sufficient to localize the robot, the features from both sensors are fused simultaneously in terms of information for reliable localization under various environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the results are discussed in detail.
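A minimal sketch of Bayesian fusion over a discrete set of pose hypotheses, with per-sensor reliability exponents standing in for the paper's experimentally predefined reliability functions (the function name, the exponent form, and all numbers are illustrative assumptions, not the paper's implementation):

```python
def fuse_posteriors(prior, lik_cam, lik_laser, r_cam=1.0, r_laser=1.0):
    """Posterior over pose hypotheses, proportional to
    prior * L_cam^r_cam * L_laser^r_laser, then normalized.

    r_cam / r_laser down-weight a sensor when its reliability is low
    (r -> 0 makes that sensor's likelihood uninformative)."""
    post = [p * (lc ** r_cam) * (ll ** r_laser)
            for p, lc, ll in zip(prior, lik_cam, lik_laser)]
    s = sum(post)
    return [v / s for v in post]

# Two pose hypotheses; both sensors favor the first one.
post = fuse_posteriors([0.5, 0.5], [0.8, 0.2], [0.9, 0.1])
```

When both sensors agree, the fused posterior is sharper than either likelihood alone, which is the practical benefit the abstract claims for fusing the camera and structured-light features.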

실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템 (Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment)

  • 강정원;방석원;크리스토퍼 쥐 애키슨;홍영진;서진호;이정우;정명진
    • 로봇학회논문지, Vol. 6, No. 3, pp. 197-209, 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. For low cost and complexity, we propose a single-camera system that uses ceiling images acquired from a camera installed pointing upwards. For reliable operation, we propose a method using hybrid features: natural landmarks in the visible scene and artificial landmarks observable in the infrared domain. Compared with previous work using only infrared-based features, our method reduces the required number of artificial features by exploiting both natural and artificial ones. Compared with previous work using only the natural scene, our method converges faster and is more robust, since observing an artificial feature provides a crucial clue for robot pose estimation. In experiments with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, this is the first ceiling-vision localization method using features from both the visible and infrared domains. The system can easily be used in a variety of service robot applications in large indoor environments.

GPS를 활용한 Vision/IMU/OBD 시각동기화 기법 (A Time Synchronization Scheme for Vision/IMU/OBD by GPS)

  • 임준후;최광호;유원재;김라우;이유담;이형근
    • 한국항행학회논문지, Vol. 21, No. 3, pp. 251-257, 2017
  • To estimate vehicle position accurately, integrated positioning that combines GPS (global positioning system) with vision and inertial sensors has been studied actively. This paper proposes a time synchronization scheme between the individual sensors, one of the key elements of such integrated positioning. The proposed scheme acquires vision, inertial, and OBD (on-board diagnostics) measurements time-synchronized to GPS time. Time and position information is obtained from GPS, vehicle attitude measurements from the inertial sensor, and vehicle speed through OBD. We propose converting the GPS time and position and the inertial and OBD measurements into color values and embedding them into pixels of the image acquired from the vision sensor. The time-synchronized sensor measurements embedded in the image can then be recovered through the inverse conversion. An embedded Linux board was used to integrate the sensors, and the proposed scheme was evaluated through experiments with a real vehicle.
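The idea of embedding a measurement into pixel color values and recovering it by the inverse conversion can be sketched with plain byte packing: a double (e.g. a GPS time tag) becomes 8 byte-sized channel values that can be written into image pixels and later read back losslessly. This is a generic illustration; the paper's actual pixel layout and encoding are not specified in the abstract.

```python
import struct

def value_to_pixels(value):
    """Pack a double (e.g. GPS time of week) into 8 byte-sized
    channel values that can be stored in image pixels."""
    return list(struct.pack("<d", value))

def pixels_to_value(pixels):
    """Recover the double from the 8 stored channel bytes."""
    return struct.unpack("<d", bytes(pixels))[0]

# Round trip: embed a time tag, then extract it from the "pixels".
recovered = pixels_to_value(value_to_pixels(123.456))
```

Because the bytes are stored exactly, the round trip is lossless, provided the image path is lossless too (lossy compression such as JPEG would corrupt the embedded bytes).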

3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정 (Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image)

  • 정태기;송종화;임준혁;이병현;지규인
    • 제어로봇시스템학회논문지, Vol. 22, No. 12, pp. 1061-1067, 2016
  • The vehicle's position is essential for autonomous navigation. In downtown areas, however, GPS suffers position errors due to multipath caused by tall buildings. In this paper, the GPS position error is corrected using a camera sensor and a highly accurate map built with a 3D Lidar. The input image is converted into a top-view image through inverse perspective mapping and matched against the map, which stores 3D-Lidar reflectivity (intensity). Performance was compared with the traditional approach that converts the map into a pinhole-camera image and matches it against the input image. As a result, the longitudinal error declined by 49% and the computational complexity by 90%.
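Inverse perspective mapping projects each image pixel onto the flat ground plane. A minimal sketch for a pinhole camera at height h pitched down toward the road (a textbook flat-ground model under assumed parameters; the paper's calibration and warping details are not given in the abstract):

```python
import math

def ipm_point(u, v, f, cu, cv, h, pitch):
    """Project pixel (u, v) onto the ground plane.

    f: focal length in pixels; (cu, cv): principal point;
    h: camera height; pitch: downward tilt in radians.
    Returns (X lateral, Z forward) in the same units as h."""
    x = (u - cu) / f
    y = (v - cv) / f
    # Rotate the viewing ray down by the camera pitch.
    y_w = math.cos(pitch) * y + math.sin(pitch)    # downward component
    z_w = -math.sin(pitch) * y + math.cos(pitch)   # forward component
    t = h / y_w                                    # scale so the ray hits the ground
    return t * x, t * z_w

# Pixel at the principal point of a camera 1.5 m up, pitched 45 degrees down,
# lands 1.5 m ahead of the camera on the ground.
X, Z = ipm_point(320, 240, 500, 320, 240, 1.5, math.pi / 4)
```

Applying this to every pixel (or, in practice, the inverse mapping from ground grid cells back to pixels) yields the top-view image that is matched against the Lidar reflectivity map.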

Global Map Building and Navigation of Mobile Robot Based on Ultrasonic Sensor Data Fusion

  • Kang, Shin-Chul;Jin, Tae-Seok
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 7, No. 3, pp. 198-204, 2007
  • In mobile robotics, ultrasonic sensors have become standard devices for collision avoidance, and their applicability to map building and navigation has been exploited in recent years. This work is a preliminary step toward a multi-purpose autonomous carrier mobile robot that transports trolleys or heavy goods and serves as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining ultrasonic and IR sensors, for mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously in both indoor and outdoor environments. Global map building based on multi-sensor data fusion is applied to recognize an obstacle-free path from a starting position to a known goal region while simultaneously building a map of straight line-segment geometric primitives, obtained by applying the Hough transform to the actual, noisy sonar data. We describe the robot system architecture designed and implemented in this study and give only a short review of the Hough transform, since several recent thorough books and review papers cover the topic. Experimental results with a real Pioneer DX2 mobile robot demonstrate the effectiveness of the discussed methods.
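The line extraction step can be sketched with a standard (rho, theta) Hough transform over sonar hit points: each point votes for all lines passing through it, and accumulator cells with enough votes correspond to wall segments (a generic sketch; resolutions and the vote threshold are illustrative, not the paper's settings):

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, threshold=3):
    """Vote each (x, y) point into a (rho, theta) accumulator using
    rho = x*cos(theta) + y*sin(theta); return cells with enough votes."""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(rho, i)] = acc.get((rho, i), 0) + 1
    return [(rho * rho_res, math.pi * i / n_theta)
            for (rho, i), votes in acc.items() if votes >= threshold]

# Four sonar hits along the wall x = 2 vote the line (rho=2, theta=0) past
# the threshold.
lines = hough_lines([(2, 0), (2, 1), (2, 2), (2, 3)])
```

The voting makes the method robust to the noisy and spurious returns typical of sonar, since isolated outliers never accumulate enough votes to form a line.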

Position Estimation Using Neural Network for Navigation of Wheeled Mobile Robot (WMR) in a Corridor

  • Choi, Kyung-Jin;Lee, Young-Hyun;Park, Chong-Kug
    • 제어로봇시스템학회 학술대회논문집 (ICCAS 2004), pp. 1259-1263, 2004
  • This paper describes a position estimation algorithm using a neural network for the navigation of a vision-based wheeled mobile robot (WMR) in a corridor, taking the ceiling lamps as landmarks. In corridor images, the line of lamps on the ceiling has a slope specific to the lateral position of the WMR, and the vanishing point produced by the lamp line has a position specific to the WMR's orientation. The ceiling lamps have a limited size and appear as roughly circular shapes in the image, so simple image processing extracts them from the corridor image. The lamp line and the vanishing point position are then computed at known positions of the WMR in the corridor. To estimate the lateral position and orientation of the WMR from an image, the relationship between the WMR's pose and the ceiling-lamp features must be defined, but this is difficult because of its nonlinearity. Therefore, a data set relating WMR positions to lamp features is constructed, and a neural network is trained on it using the backpropagation (BP) algorithm. The trained network is then applied to WMR navigation in a corridor.
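The feature extraction step can be sketched by fitting a line through the detected lamp centroids: its slope and its intersection with a chosen horizon row serve as the slope and vanishing-point features fed to the network (an illustrative least-squares construction under assumed image conventions; the paper's exact feature definitions are not given in the abstract):

```python
def lamp_line_features(centroids, v_horizon=0.0):
    """Fit u = a*v + b through lamp centroids (u: column, v: row) by least
    squares; return the slope a and the column where the line crosses the
    horizon row v_horizon (a vanishing-point proxy)."""
    n = len(centroids)
    sv = sum(v for _, v in centroids)
    su = sum(u for u, _ in centroids)
    svv = sum(v * v for _, v in centroids)
    svu = sum(v * u for u, v in centroids)
    a = (n * svu - sv * su) / (n * svv - sv * sv)   # slope of the lamp line
    b = (su - a * sv) / n                           # intercept
    return a, a * v_horizon + b

# Three lamp centroids lying on u = v + 10: slope 1, horizon crossing at 10.
slope, vp_u = lamp_line_features([(10, 0), (20, 10), (30, 20)])
```

The nonlinear mapping from these two features to lateral position and orientation is what the backpropagation-trained network approximates.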


손 동작 인식을 이용한 이동로봇의 주행 (Navigation of a Mobile Robot Using Hand Gesture Recognition)

  • 김일명;김완철;윤경식;이장명
    • 제어로봇시스템학회논문지, Vol. 8, No. 7, pp. 599-606, 2002
  • A new method to govern the navigation of a mobile robot using hand gesture recognition is proposed, based on two procedures: acquiring vision information through a 2-DOF camera as the communication medium between a person and the mobile robot, and analyzing the recognized hand gesture commands to control the robot. In previous research, mobile robots moved passively along landmarks, beacons, etc. In this paper, to accommodate various changes of situation, a new control system that manages the dynamic navigation of the mobile robot is proposed. Moreover, without the expensive equipment or complex algorithms generally used for hand gesture recognition, a reliable hand gesture recognition system is implemented efficiently to convey human commands to the mobile robot under a few constraints.

무인 쿼드로터 로봇 횡 방향 제어를 위한 Fuzzy-PI 제어기 설계 (Design of Lateral Fuzzy-PI Controller for Unmanned Quadrotor Robot)

  • 백승준;이덕진;박종호;정길도
    • 제어로봇시스템학회논문지, Vol. 19, No. 2, pp. 164-170, 2013
  • The quadrotor UAV (Unmanned Aerial Vehicle) is a flying robotic platform that has drawn much attention in recent years. The attraction comes from its ability to perform agile VTOL (Vertical Take-Off and Landing) and hovering maneuvers. In addition, the efficient modular structure composed of four electric rotors makes its design easier than that of single-rotor helicopters. A quadrotor often uses vision systems to obtain altitude control and navigation solutions in hostile environments where GPS receivers do not work or are denied. To carry out missions successfully, the flight control system must produce fast and stable responses of the heading-angle output. This paper presents a fuzzy-logic-based lateral PI controller to stabilize and control a quadrotor equipped with a vision system. The advantage of the fuzzy-based PI controller is that it achieves the desired heading-angle response even in the presence of disturbances and uncertainties. The performance of the proposed Fuzzy-PI controller and a conventional PI controller is compared through various simulation results.
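One common way to build a Fuzzy-PI controller is to blend between two PI gain sets using a fuzzy membership over the error magnitude, so the controller reacts gently near the setpoint and aggressively far from it. The sketch below follows that generic pattern; the membership shape, gain values, and sample time are illustrative assumptions, not the paper's design:

```python
def membership_small(e, width=1.0):
    """Triangular membership of |e| in the 'small error' set
    (1 at e = 0, falling linearly to 0 at |e| = width)."""
    return max(0.0, 1.0 - abs(e) / width)

class FuzzyPI:
    """PI controller whose gains blend between 'small error' and
    'large error' gain sets via the membership above."""

    def __init__(self, kp=(2.0, 4.0), ki=(0.5, 1.0), dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt   # (small, large) gain pairs
        self.integral = 0.0

    def update(self, error):
        w = membership_small(error)              # degree of 'small error'
        kp = w * self.kp[0] + (1 - w) * self.kp[1]
        ki = w * self.ki[0] + (1 - w) * self.ki[1]
        self.integral += error * self.dt
        return kp * error + ki * self.integral

# Heading-angle error of 2 rad is outside the 'small' set, so the large
# gain pair applies in full.
u = FuzzyPI().update(2.0)
```

Scheduling the gains this way is what lets the controller keep the desired heading-angle response under disturbances, where a fixed-gain PI must trade responsiveness against overshoot.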