• Title/Abstract/Keywords: Vision Based Navigation

이동로봇의 물체인식 기반 전역적 자기위치 추정 (Object Recognition-based Global Localization for Mobile Robots)

  • 박순용;박민용;박성기
    • 로봇학회논문지 / Vol. 3, No. 1 / pp.33-41 / 2008
  • We present a new global localization method for robot navigation based on object recognition technology. We model an indoor environment using the following visual cues from a stereo camera: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes, which is similar to the data of a 2D laser range finder. From these cues we build a hybrid local node for a topological map, composed of a metric map of the indoor environment and an object location map. Based on this modeling, we propose a coarse-to-fine strategy for estimating the global localization of a mobile robot: a coarse pose is obtained by object recognition and SVD-based least-squares fitting, and the pose is then refined with a particle filtering algorithm. Real experiments show that the proposed method is an effective vision-based global localization algorithm.
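
The coarse-pose step described above, SVD-based least-squares fitting of recognized 3D object points, can be sketched with the standard Kabsch/Umeyama procedure; the function name and the use of NumPy are illustrative, not the authors' implementation:

```python
import numpy as np

def coarse_pose_svd(model_pts, observed_pts):
    """Estimate the rigid transform (R, t) mapping model points onto
    observed points by SVD-based least-squares fitting (Kabsch method)."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

The refined pose would then come from the particle filter, with this result used as the initial distribution center.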

Planar Region Extraction for Visual Navigation using Stereo Cameras

  • Lee, Se-Na;You, Bum-Jae;Ko, Sung-Jea
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2003 / pp.681-686 / 2003
  • In this paper, we propose an algorithm that extracts valid planar regions from stereo images for the visual navigation of mobile robots. The algorithm is based on the difference image between the stereo images, obtained by applying the homography matrix between the stereo cameras. Invalid planar regions are filtered out by labeling the difference image and discarding blobs that are too small. Large invalid planar regions such as walls are removed by weighted low-pass filtering of the difference image using past difference images. The algorithms were verified successfully using a stereo camera system mounted on a mobile robot and a PC-based real-time vision system.
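
The two core steps, plane-induced warping plus differencing, and size-based blob filtering, can be sketched as follows. This is a simplified model in which the ground-plane homography reduces to one disparity shift per image row; the function names and thresholds are illustrative assumptions:

```python
import numpy as np

def plane_difference_mask(left, right, disparity_per_row, thresh=10):
    """Warp the right image by the plane-induced disparity (one horizontal
    shift per row, a simplified ground-plane homography) and threshold the
    difference image: pixels that agree are candidate planar-region pixels."""
    h, w = left.shape
    warped = np.zeros_like(right)
    for v in range(h):
        d = int(round(disparity_per_row[v]))
        if d > 0:
            warped[v, d:] = right[v, :w - d]
        elif d == 0:
            warped[v] = right[v]
    diff = np.abs(left.astype(int) - warped.astype(int))
    return diff < thresh

def filter_small_blobs(mask, min_size):
    """4-connected labeling of the mask; drop blobs smaller than min_size
    (the invalid-blob filtering described in the abstract)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, blob = [(i, j)], []
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(blob) >= min_size:
                    for y, x in blob:
                        out[y, x] = True
    return out
```

The paper's temporal low-pass filtering of past difference images would sit between these two steps.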

수중 구조물 형상의 영상 정보를 이용한 수중로봇 위치인식 기법 (Localization of AUV Using Visual Shape Information of Underwater Structures)

  • 정종대;최수영;최현택;명현
    • 한국해양공학회지 / Vol. 29, No. 5 / pp.392-397 / 2015
  • An autonomous underwater vehicle (AUV) can perform flexible operations even in complex underwater environments because of its autonomy. Localization is one of the key components of this autonomous navigation. Because the inertial navigation system of an AUV suffers from drift, observing fixed objects in an inertial reference system can enhance the localization performance. In this paper, we propose a method of AUV localization using visual measurements of underwater structures. A camera measurement model that emulates the camera’s observations of underwater structures is designed in a particle filtering framework. Then, the particle weight is updated based on the extracted visual information of the underwater structures. The proposed method is validated based on the results of experiments performed in a structured basin environment.
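
The particle-weight update described above can be sketched generically; the Gaussian likelihood, the scalar measurement, and the function names are illustrative assumptions, not the paper's actual camera measurement model:

```python
import numpy as np

def update_weights(particles, weights, measurement, expected_meas_fn, sigma=0.5):
    """One particle-filter measurement update: for each candidate AUV pose,
    predict the visual measurement of the structure, then reweight by a
    Gaussian likelihood of the actual measurement and normalize."""
    predicted = np.array([expected_meas_fn(p) for p in particles])
    likelihood = np.exp(-0.5 * ((predicted - measurement) / sigma) ** 2)
    w = weights * likelihood
    return w / w.sum()
```

In practice this step alternates with a motion-model prediction driven by the (drifting) inertial navigation, and with resampling when the effective particle count drops.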

곡선모델 차선검출 기반의 GPS 횡방향 오차보정 성능향상 기법 (Curve-Modeled Lane Detection based GPS Lateral Error Correction Enhancement)

  • 이병현;임성혁;허문범;지규인
    • 제어로봇시스템학회논문지 / Vol. 21, No. 2 / pp.81-86 / 2015
  • GPS position errors are corrected for the guidance of autonomous vehicles. From vision, we can obtain the lateral distance from the center of the lane and the angle difference between the detected left and right lines. A lane-following system can be implemented simply with a controller that drives these two measurements to zero. However, if there is no lane, as at a crossroad, such a guidance system does not work. In addition, lane detection is problematic on curved roads: the lateral distance measurement is biased because of a model mismatch. For these reasons, we propose a GPS error-correction filter based on curve-modeled lane detection and evaluate its performance by applying it to an autonomous vehicle at a test site.
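
The controller that drives the two vision measurements to zero can be sketched as a simple proportional law on a kinematic lane model; the gains and the simulation are illustrative assumptions, not the paper's controller:

```python
import math

def steering_command(lateral_offset, heading_error, k_y=0.5, k_psi=1.2):
    """Steer so that both vision measurements go to zero: the lateral
    distance from the lane center and the angle between the detected lines."""
    return -(k_y * lateral_offset + k_psi * heading_error)

def simulate(y0, psi0, v=1.0, dt=0.05, steps=400):
    """Toy kinematic lane-following simulation: y is lateral offset,
    psi is heading error relative to the lane direction."""
    y, psi = y0, psi0
    for _ in range(steps):
        u = steering_command(y, psi)
        psi += u * dt
        y += v * math.sin(psi) * dt
    return y, psi
```

The failure modes the abstract names, no visible lane at a crossroad and a biased lateral measurement on curves, are exactly where the proposed GPS correction filter takes over from this kind of controller.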

교통 표지판의 3차원 추적 경로를 이용한 자동차의 주행 차로 추정 (Lane-Level Positioning based on 3D Tracking Path of Traffic Signs)

  • 박순용;김성주
    • 로봇학회논문지 / Vol. 11, No. 3 / pp.172-182 / 2016
  • Lane-level vehicle positioning is important for enhancing the accuracy of in-vehicle navigation systems and the safety of autonomous vehicles. GPS (Global Positioning System) and DGPS (Differential GPS) are generally used in navigation service systems but only provide an accuracy of 2~3 m. In this paper, we propose a 3D-vision-based lane-level positioning technique that can provide an accurate vehicle position. The proposed method determines the current driving lane of a vehicle by tracking the 3D positions of traffic signs standing at the side of the road. Using a stereo camera, the 3D tracking paths of traffic signs are computed, and their projections onto the 2D road plane are used to determine the distance from the vehicle to the signs. Several experiments on real roads analyze the feasibility of the proposed method. According to the experimental results, the proposed method achieves 90.9% accuracy in lane-level positioning.
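
The projection-and-binning idea can be sketched as follows. Everything here is a hypothetical simplification: the coordinate convention (first component lateral, second height, third forward), the 3.5 m lane width, and the 1.0 m shoulder offset are assumptions for illustration, not values from the paper:

```python
import numpy as np

def lateral_distance(track_3d):
    """Project the sign's 3D tracking path onto the 2D road plane by
    dropping the height coordinate, then average the lateral offset."""
    pts = np.asarray(track_3d)
    return float(np.mean(pts[:, 0]))

def lane_index(lateral_dist_to_sign, lane_width=3.5, shoulder=1.0):
    """Map the lateral distance from the vehicle to a roadside sign to a
    driving-lane index, counting from the lane nearest the sign."""
    return int((lateral_dist_to_sign - shoulder) // lane_width) + 1
```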

머신비젼 기반의 자율주행 차량을 위한 카메라 교정 (Camera Calibration for Machine Vision Based Autonomous Vehicles)

  • 이문규;안택진
    • 제어로봇시스템학회논문지 / Vol. 8, No. 9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of the camera in the machine vision system. Camera calibration is required to find accurate values of these parameters. This paper presents a new camera-calibration algorithm using known traffic-lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from these points. The least-squares method is used to estimate the Group I parameters. Finally, the Group II parameters are determined using point correspondences between the image and the corresponding real-world coordinates. Experimental results prove the feasibility of the proposed algorithm.
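
Solving eight nonlinear control-point equations for a handful of parameters by least squares is a classic Gauss-Newton problem. The sketch below shows the generic solver shape, with a toy residual in the test; the paper's actual eight equations in the Group I parameters are not reproduced here:

```python
import numpy as np

def gauss_newton(residual_fn, x0, iters=20, eps=1e-6):
    """Generic nonlinear least-squares solver (Gauss-Newton with a
    forward-difference numeric Jacobian), of the kind used to fit the
    Group I parameters to the control-point equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x
```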

비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어 (Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation)

  • 권지욱;홍석교;좌동경
    • 제어로봇시스템학회논문지 / Vol. 13, No. 9 / pp.896-902 / 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on controlling mobile robots with camera vision information, the forward velocity is set to a constant and only the rotational velocity of the robot is controlled. More efficient motion, however, requires controlling the forward velocity as well, depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, so that the mobile robot moves faster when the corner of the corridor is far away and slows down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor is possible. To this end, visual information from the camera is used to obtain the perspective lines and the distance from the current robot position to the dead end. Then the vanishing point and the pseudo desired position are obtained, and the forward and rotational velocities are controlled by the LOS (Line Of Sight) guidance law. Both numerical and experimental results demonstrate the validity of the proposed method.
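
The velocity-scheduling idea, forward speed scaled by the distance to the dead end, rotation proportional to the heading toward the pseudo desired position, can be sketched as below. The gains, the slow-down distance, and the function name are illustrative assumptions, not the paper's LOS law:

```python
def los_velocities(dist_to_end, heading_to_target, v_max=0.5, k_w=1.5, d_slow=2.0):
    """Simplified LOS-style guidance: forward velocity is capped at v_max
    far from the corridor's dead end and ramps down linearly inside d_slow;
    rotational velocity steers toward the pseudo desired position
    (derived from the vanishing point)."""
    v = v_max * min(1.0, dist_to_end / d_slow)
    w = k_w * heading_to_target
    return v, w
```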

LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행 (A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot)

  • 김현우;황요섭;김윤기;이동혁;이장명
    • 제어로봇시스템학회논문지 / Vol. 19, No. 11 / pp.1029-1035 / 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for the autonomous navigation of a mobile robot. Many technologies improve vehicle safety, such as airbags, ABS, and EPS. Real-time lane detection is a fundamental requirement for an automotive system that uses information from outside the vehicle. Representative lane-recognition methods are vision-based and LRF-based. A vision-based system recognizes the three-dimensional environment well only under good image-capture conditions; unexpected disturbances such as poor illumination, occlusions, and vibrations prevent vision alone from satisfying this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane markings, which depends on color and distance, is used to extract feature points. A stable tracking algorithm is also introduced empirically. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
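
The feature-point extraction step exploits the fact that lane paint reflects the laser more strongly than asphalt. A minimal sketch, with an assumed fixed intensity threshold (the paper's threshold varies with distance):

```python
import numpy as np

def lane_feature_points(angles, ranges, intensities, intensity_thresh=180):
    """Select LRF returns whose reflection intensity exceeds the asphalt
    level (lane paint is more reflective) and convert the selected polar
    measurements to (x, y) feature points for lane fitting/tracking."""
    sel = intensities > intensity_thresh
    x = ranges[sel] * np.cos(angles[sel])
    y = ranges[sel] * np.sin(angles[sel])
    return np.stack([x, y], axis=1)
```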

레이저 센서에서 두 개의 특징점을 이용한 이동로봇의 항법 (Two Feature Points Based Laser Scanner for Mobile Robot Navigation)

  • 김주완;심덕선
    • 한국항행학회논문지 / Vol. 18, No. 2 / pp.134-141 / 2014
  • Wheel encoders, vision, ultrasonic, and laser sensors are commonly used for mobile robot navigation. Because wheel-encoder dead reckoning accumulates error over time, it cannot by itself compute an accurate robot position. Vision sensors provide rich information but require much time to extract it, and ultrasonic sensors provide range information of low accuracy, so both are difficult to use for navigation. Laser sensors, in contrast, provide comparatively accurate range information and are well suited as navigation sensors. This paper proposes a method for extracting angles from a laser range finder and uses a Kalman filter to fuse the range and angle extracted from the laser range finder with the range and angle obtained from the wheel encoders. In general, when a single feature point is used with a laser range finder, the error can grow when that feature point changes or the robot moves to a new feature point. To compensate for this, we use two feature points from the laser scanner during navigation and show that the navigation performance of the mobile robot is improved.
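
The fusion idea, encoder dead reckoning as the prediction, laser-derived measurements as the correction, can be sketched with a scalar Kalman filter on one state (e.g., heading). The noise values and function name are illustrative assumptions; the paper's filter is multivariate and uses two feature points:

```python
def kalman_fuse(x, P, z, Q=0.01, R=0.05, u=0.0):
    """One scalar Kalman predict/update cycle: u is the encoder-derived
    increment (dead reckoning, which drifts), z is the laser-derived
    measurement that corrects the accumulated error."""
    x_pred = x + u          # predict with encoder odometry
    P_pred = P + Q          # drift grows the uncertainty
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)   # laser correction
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

Using two feature points, as the paper proposes, keeps a valid correction available when one feature changes or is replaced.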

Korean Wide Area Differential Global Positioning System Development Status and Preliminary Test Results

  • Yun, Ho;Kee, Chang-Don;Kim, Do-Yoon
    • International Journal of Aeronautical and Space Sciences / Vol. 12, No. 3 / pp.274-282 / 2011
  • This paper is focused on dynamic modeling and control system design as well as vision based collision avoidance for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-winged UAVs with multiple rotors. These multi-rotor UAVs can be utilized in various military situations such as surveillance and reconnaissance. They can also be used for obtaining visual information from steep terrains or disaster sites. In this paper, a quad-rotor model is introduced as well as its control system, which is designed based on a proportional-integral-derivative controller and vision-based collision avoidance control system. Additionally, in order for a UAV to navigate safely in areas such as buildings and offices with a number of obstacles, there must be a collision avoidance algorithm installed in the UAV's hardware, which should include the detection of obstacles, avoidance maneuvering, etc. In this paper, the optical flow method, one of the vision-based collision avoidance techniques, is introduced, and multi-rotor UAV's collision avoidance simulations are described in various virtual environments in order to demonstrate its avoidance performance.
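
A common optical-flow avoidance scheme of the kind described is the balance strategy: nearby obstacles produce larger flow, so the UAV yaws away from the half of the image with more flow. The sketch below is that generic strategy under an assumed sign convention (positive command = yaw right, away from a left-side obstacle), not the paper's specific controller:

```python
import numpy as np

def balance_steer(flow_left, flow_right, k=1.0):
    """Optical-flow balance strategy: compare total flow magnitude in the
    left and right image halves and yaw away from the side with more flow
    (i.e., the side with the closer obstacle)."""
    fl = np.sum(np.abs(flow_left))
    fr = np.sum(np.abs(flow_right))
    # Positive = yaw right (obstacle closer on the left), by assumption.
    return k * (fl - fr) / max(fl + fr, 1e-9)
```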