• Title/Abstract/Keyword: vision based navigation

Search results: 194 (processing time: 0.028 s)

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps

  • 황서연;송재복
    • Journal of Institute of Control, Robotics and Systems, Vol. 17, No. 2, pp. 164-170, 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method that uses both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but they were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamps are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. Both the position and orientation of a lamp feature are used to estimate the robot pose accurately; the orientation is obtained by calculating the principal axis of the pixel distribution of the lamp area. Corner and lamp features are used together as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
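As an aside, the orientation computation named in the abstract, the principal axis of the lamp's pixel distribution, can be sketched in a few lines of Python; the binary lamp mask and NumPy-based eigen-decomposition here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def lamp_orientation(mask):
    """Centroid and principal-axis angle of a binary ceiling-lamp mask."""
    ys, xs = np.nonzero(mask)                   # pixel coordinates of the lamp area
    pts = np.stack([xs, ys]).astype(float)      # 2 x N point set
    centroid = pts.mean(axis=1)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts))  # 2x2 covariance, ascending order
    major = eigvecs[:, -1]                          # eigenvector of largest eigenvalue
    return centroid, np.arctan2(major[1], major[0])  # orientation in radians
```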

The Mobile Robot for Vision-Based Navigation in a Corridor

  • 배성훈;최경진;이용현;박종국
    • Proceedings of the KIEE 2002 Joint Fall Conference, Information and Control Division, pp. 154-158, 2002
  • This paper describes a path-tracking method for a vision-based autonomous mobile robot in a corridor. First, the ceiling lamps of the corridor are extracted through simple preprocessing (grayscale conversion, thresholding, labeling, etc.) to estimate the robot's position and orientation. A controller for path tracking is then designed. Simulations were conducted, and acceptable vehicle localization results were obtained, demonstrating the feasibility of the proposed approach.
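The preprocessing chain listed in the abstract (gray, thresholding, labeling) maps directly onto standard OpenCV calls; the following is a minimal sketch under assumed threshold and area values, not the authors' implementation:

```python
import cv2

def extract_ceiling_lamps(bgr_frame, min_area=50):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Lamps are the brightest ceiling regions; a fixed threshold is assumed.
    _, binary = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    # Connected-component labeling; each sufficiently large blob is a lamp.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    lamps = [tuple(centroids[i]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return lamps  # lamp centroids give the lateral-offset and heading cues
```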


Design of Safe Autonomous Navigation System for Deployable Bio-inspired Robot

  • 최근하;한상권;이진이;이진우;안정도;김경수;김수현
    • Journal of Institute of Control, Robotics and Systems, Vol. 20, No. 4, pp. 456-462, 2014
  • In this paper, we present a deployable bio-inspired robot called the Pillbot-light, which uses a safe autonomous navigation system. The Pillbot-light is mounted on a station robot and can be operated in disaster-relief or military operations. However, autonomous navigation is challenging because the Pillbot-light cannot carry a wide array of sensors. We therefore propose a new robot system for autonomous navigation in which the station robot, equipped with a vision camera and a high-performance CPU, controls the Pillbot-light. The system detects obstacles based on edge extraction from the vision camera; it performs not only path planning using a hazard cost function but also localization using a particle filter. The system is verified by simulation and experiment.
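For reference, a generic particle-filter localization cycle of the kind the abstract mentions looks like the sketch below; the unicycle motion model, noise levels, and Gaussian measurement likelihood are all assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, control, measurement, meas_fn, meas_std):
    """One predict-update-resample cycle over (x, y, theta) particles."""
    v, w, dt = control                                   # assumed unicycle model
    particles[:, 2] += w * dt + rng.normal(0, 0.02, len(particles))
    particles[:, 0] += v * dt * np.cos(particles[:, 2])
    particles[:, 1] += v * dt * np.sin(particles[:, 2])
    # Weight by an assumed Gaussian measurement likelihood.
    predicted = meas_fn(particles)
    weights *= np.exp(-0.5 * ((measurement - predicted) / meas_std) ** 2)
    weights += 1e-300
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```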

A Study on the Implementation of RFID-based Autonomous Navigation System for Robotic Cellular Phone (RCP)

  • Choe, Jae-Il;Choi, Jung-Wook;Oh, Dong-Ik;Kim, Seung-Woo
    • Proceedings of ICCAS 2005 (ICROS), pp. 457-462, 2005
  • The industrial and economic importance of the CP (Cellular Phone) is growing rapidly. Combined with IT technology, the CP is currently one of the most attractive technologies of all. However, unless a technological breakthrough is found, its growth may soon slow down. RT (Robot Technology) is considered one of the most promising next-generation technologies. Unlike the industrial robots of the past, today's robots require advanced technologies such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition. In this study, we present a new technological concept named the RCP (Robotic Cellular Phone), which combines RT and CP, with the vision of opening a new direction for the advancement of CP, IT, and RT together. The RCP consists of three sub-modules, among them RCP^Mobility and RCP^Interaction. RCP^Mobility, the main focus of this paper, is an autonomous navigation system that combines RT mobility with the CP. Through RCP^Mobility, the CP can be provided with robotic functionalities such as auto-charging and real-world robotic entertainment; eventually, the CP may become a robotic pet for its owner. RCP^Mobility consists of various controllers, the two main ones being the trajectory controller and the self-localization controller. While the trajectory controller is responsible for the wheel-based navigation of the RCP, the self-localization controller provides localization information for the moving RCP. With the coordinate information acquired from the RFID-based self-localization controller, the trajectory controller refines the RCP's movement to achieve better navigation. In this paper, a prototype system developed for RCP^Mobility is presented: we describe the overall structure of the system and provide experimental results of RCP navigation.
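Although the abstract does not detail the estimator, a common minimal form of RFID-based self-localization is a centroid over the known coordinates of currently readable tags; the tag map and function below are hypothetical:

```python
from typing import Dict, List, Tuple

# Hypothetical tag map: tag ID -> (x, y) coordinate of the tag on the floor.
TAG_MAP: Dict[str, Tuple[float, float]] = {
    "tag01": (0.0, 0.0),
    "tag02": (0.5, 0.0),
    "tag03": (0.0, 0.5),
}

def localize(detected: List[str]) -> Tuple[float, float]:
    """Estimate position as the centroid of the currently readable tags."""
    coords = [TAG_MAP[t] for t in detected if t in TAG_MAP]
    if not coords:
        raise ValueError("no known tags in read range")
    xs, ys = zip(*coords)
    return sum(xs) / len(coords), sum(ys) / len(coords)
```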


Human Detection in the Images of a Single Camera for a Corridor Navigation Robot

  • 김정대;도용태
    • The Journal of Korea Robotics Society, Vol. 8, No. 4, pp. 238-246, 2013
  • In this paper, a robot vision technique is presented to detect obstacles, particularly approaching humans, in the images acquired by a mobile robot autonomously navigating a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal region of interest (ROI) is set in front of the robot in the camera image. The lower parts of a human, such as the feet and legs, are first detected in the ROI in real time from their appearance as the distance between the robot and the human decreases. The detection is then confirmed by finding the person's face within a small search region above the part detected in the trapezoidal ROI. To increase reliability, a final decision about human detection is made only when a face is detected in two consecutive image frames. We tested the proposed method on images of various people in corridor scenes and obtained promising results. The method allows a vision-guided mobile robot to make a detour and avoid colliding with a human during indoor navigation.
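The two-consecutive-frame face confirmation can be sketched with a stock OpenCV Haar cascade standing in for the paper's detector; the ROI handling and parameters are assumptions:

```python
import cv2

# Stock frontal-face cascade shipped with OpenCV (an assumed stand-in).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

prev_hit = False

def human_confirmed(gray_frame, search_roi):
    """search_roi = (x, y, w, h) above the legs found in the trapezoidal ROI."""
    global prev_hit
    x, y, w, h = search_roi
    faces = face_cascade.detectMultiScale(gray_frame[y:y+h, x:x+w],
                                          scaleFactor=1.1, minNeighbors=5)
    hit = len(faces) > 0
    confirmed = prev_hit and hit        # two consecutive detections required
    prev_hit = hit
    return confirmed
```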

Simultaneous Localization and Mobile Robot Navigation using a Sensor Network

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 6, No. 2, pp. 161-166, 2006
  • Localization of a mobile agent within a sensor network is a fundamental requirement for many applications using networked navigation systems such as sonar- or vision-based sensing. To fully exploit the strengths of both sonar and visual sensing, this paper describes a networked sensor-based navigation method for an autonomous mobile robot that can navigate and avoid obstacles in an indoor environment. In this method, self-localization of the robot is performed with a model-based vision system using networked sensors, and nonstop navigation is realized by a Kalman-filter-based STSF (Space and Time Sensor Fusion) method. Stationary and moving obstacles are avoided using networked sensor data from a CCD camera and a sonar ring. We report on experiments in a hallway using a Pioneer-DX robot. Because localization involves inevitable uncertainties in the features and in the robot position estimate, a Kalman filter scheme is used to estimate the mobile robot's location. Extensive experiments with a robot and a sensor network confirm the validity of the approach.
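The per-measurement fusion step underlying such a Kalman-filter scheme can be illustrated as two sequential updates, one per sensor; the linear models and noise values below are assumptions, and the temporal (STSF) part is omitted:

```python
import numpy as np

def kf_update(x, P, z, R, H):
    """Standard Kalman measurement update: x is the state, z a sensor fix."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Fuse both sensors by applying two sequential updates to a 2-D position state.
x, P = np.zeros(2), np.eye(2)
H = np.eye(2)
x, P = kf_update(x, P, np.array([1.0, 2.1]), 0.10 * np.eye(2), H)  # sonar fix
x, P = kf_update(x, P, np.array([1.1, 2.0]), 0.05 * np.eye(2), H)  # vision fix
```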

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 11, pp. 4103-4117, 2014
  • Image processing and computer vision algorithms are attracting growing interest in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines need to be accelerated. We therefore present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1], in which the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the ability to resolve vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
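The low-level parallelization idea, processing each camera's measurements concurrently before a single EKF update, might be sketched with a thread pool as follows (the function names and structure are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def process_camera(camera_id, frame):
    """Placeholder for per-camera feature extraction and matching."""
    # ... detect and match features in this camera's frame ...
    return camera_id, []            # would return matched features

def gather_measurements(frames):
    """Process the four cameras' frames concurrently."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(lambda cf: process_camera(*cf), enumerate(frames))
    return dict(results)            # camera_id -> features, fed to the EKF
```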

Lateral Control of Vision-Based Autonomous Vehicle Using Neural Network

  • 김영주;이경백;김영배
    • Proceedings of the KSPE 2000 Fall Conference, pp. 687-690, 2000
  • Recently, many studies have been conducted to protect human lives and property by preventing accidents caused by human carelessness or mistakes. One part of this effort is the development of autonomous vehicles. The general control method of a vision-based autonomous vehicle system is to determine the navigation direction by analyzing lane images from a camera and to navigate using a proper control algorithm. In this paper, characteristic points are extracted from lane images using a lane recognition algorithm based on the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first uses the geometric relations of the camera: after transforming from image coordinates to vehicle coordinates, the steering angle is calculated using the Ackermann angle. The second uses a neural network; it does not require the camera's geometric relations, is easy to apply as a steering algorithm, and most closely resembles the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation algorithm, which performed considerably better than other methods such as conjugate gradient or gradient descent.
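The Sobel-based lane feature extraction step can be sketched as follows; the threshold, the one-point-per-row reduction, and the absence of ROI handling are assumptions:

```python
import cv2
import numpy as np

def lane_feature_points(bgr_frame, thresh=100):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # The horizontal gradient responds strongly to near-vertical lane edges.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    edges = (np.abs(gx) > thresh).astype(np.uint8)
    # One characteristic point per row: the mean column of the edge pixels.
    rows, cols = np.nonzero(edges)
    points = [(int(cols[rows == r].mean()), int(r)) for r in np.unique(rows)]
    return points   # (u, v) image points fed to the steering algorithms
```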


Development of Real-Time Vision Aided Navigation Using EO/IR Image Information of Tactical Unmanned Aerial System in GPS Denied Environment

  • 최승기;조신제;강승모;이길태;이원근;정길순
    • Journal of the Korean Society for Aeronautical and Space Sciences, Vol. 48, No. 6, pp. 401-410, 2020
  • This paper describes a real-time vision-based aircraft position correction system developed to compensate for the vulnerability of the position and navigation information of a tactical unmanned aerial system (UAS) under GPS signal interference and jamming/spoofing attacks. When GPS is lost, a tactical UAS can continue automatic flight by switching its navigation unit from GPS/INS integrated navigation to DR/AHRS mode; for position, however, dead reckoning (DR) based on airspeed and heading accumulates error over time, which limits the ability to determine the aircraft's position and to automatically steer the data-link tracking antenna. To minimize this accumulation of position error, we developed a system that computes the aircraft position from position-correction points in specific areas observed by the image sensor, using the aircraft attitude, the image sensor's azimuth and elevation angles, and digital terrain elevation data (DTED), and feeds the correction to the navigation unit in real time. The function and performance of the system were verified through ground tests using a GPS simulator and flight tests in dead-reckoning mode.
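A much-simplified, flat-terrain version of the position-correction geometry is sketched below: the aircraft position is recovered by inverting the line of sight to a known correction point. Attitude coupling and the DTED lookup are omitted, and all names are assumptions:

```python
import math

def aircraft_position_fix(landmark_ne, landmark_alt, aircraft_alt,
                          los_azimuth, los_depression):
    """Angles in radians; azimuth from north, clockwise; depression below horizon."""
    height = aircraft_alt - landmark_alt              # height above the point
    ground_range = height / math.tan(los_depression)  # horizontal distance to it
    # The aircraft is displaced opposite to the line-of-sight direction.
    north = landmark_ne[0] - ground_range * math.cos(los_azimuth)
    east = landmark_ne[1] - ground_range * math.sin(los_azimuth)
    return north, east   # fed back to the navigation unit as a position fix
```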

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation

  • 진태석;이장명
    • Journal of Institute of Control, Robotics and Systems, Vol. 11, No. 4, pp. 304-313, 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with and effectively support humans. To realize this, a robot needs to recognize its position and orientation for intelligent performance in an unknown environment, and mobile robots may navigate by means of a number of sensing systems such as sonar or vision. Note that in conventional fusion schemes the measurement depends on the current data sets only, so more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the unscented transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The newly proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
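The core unscented transformation on which such a UTSF scheme builds can be sketched compactly; this is the basic UT with a kappa scaling parameter, without the paper's temporal-fusion extensions:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f via sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)        # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))        # sigma-point weights
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])             # transformed sigma points
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff             # transformed covariance
    return y_mean, y_cov
```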