• Title/Abstract/Keywords: Obstacle Recognition System

86 search results (processing time: 0.028 s)

실외 자율 로봇 주행을 위한 센서 퓨전 시스템 구현 (Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments)

  • 이승환;이헌철;이범희
    • 센서학회지 / Vol. 19, No. 3 / pp.246-257 / 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental technique for unmanned robots in outdoor environments. Following an unknown path requires techniques such as path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides are used to recognize unknown paths and estimate the robot's relative pose on these paths through a Bayesian sensor fusion method, and the camera mounted at the front of the robot is used to recognize abrupt curves and unknown obstacles. The infrared range sensor array improves the robustness of obstacle avoidance. The forward camera and the infrared range sensor array are fused through a rule-based method for obstacle avoidance. Experiments in outdoor environments show that a mobile robot equipped with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
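
The entry above fuses the two side cameras through a Bayesian method and combines the forward camera with the infrared array through rules. The following is a minimal sketch of those two ideas, not the authors' implementation: the Gaussian fusion model, the IR threshold, and all numbers are assumptions for illustration.

```python
import numpy as np

def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Bayesian fusion of two independent Gaussian estimates
    (inverse-variance weighting)."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

def obstacle_ahead(camera_flag, ir_ranges_m, ir_threshold_m=0.5):
    """Rule-based fusion: declare an obstacle if the forward camera
    flags one OR any IR sensor reads below the threshold."""
    return bool(camera_flag) or float(np.min(ir_ranges_m)) < ir_threshold_m

# Example: left/right cameras each estimate the lateral offset from the path centre.
mu, var = fuse_gaussian(0.12, 0.04, 0.20, 0.09)   # metres; variances assumed
blocked = obstacle_ahead(False, np.array([1.8, 0.42, 2.0, 1.5, 0.9, 2.2, 1.1, 1.7, 2.5]))
print(f"fused offset: {mu:.3f} m (var {var:.3f}), obstacle ahead: {blocked}")
```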

신경망을 이용한 차선과 장애물 인식에 관한 연구 (Lane and Obstacle Recognition Using Artificial Neural Network)

  • 김명수;양성훈;이상호;이석
    • 한국정밀공학회지 / Vol. 16, No. 10 / pp.25-34 / 1999
  • In this paper, an algorithm is presented to recognize the lane and obstacles from highway road images. The road image obtained by a video camera undergoes pre-processing that includes filtering, edge detection, and identification of lanes. After this pre-processing, a part of the image is grouped into 27 sub-windows and fed into a three-layer feed-forward neural network. The neural network is trained to indicate the road direction and the presence or absence of an obstacle. The proposed algorithm has been tested with images different from the training images and demonstrated its efficacy for recognizing lanes and obstacles. Based on the test results, it can be said that the algorithm successfully combines traditional image processing and neural network principles toward a simpler and more efficient driver warning or assistance system.

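The entry above feeds 27 sub-window features into a three-layer feed-forward network that reports the road direction and obstacle presence. A hedged sketch of such a forward pass follows; the hidden-layer size, output encoding, and random weights are assumptions, and no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# 27 inputs follow the entry above; the hidden and output sizes are assumed.
# Outputs: 3 road-direction scores plus 1 obstacle-presence score.
W1, b1 = rng.normal(size=(27, 15)) * 0.1, np.zeros(15)
W2, b2 = rng.normal(size=(15, 4)) * 0.1, np.zeros(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(subwindow_features):
    """One forward pass of the three-layer feed-forward network."""
    h = sigmoid(subwindow_features @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    direction = ["left", "straight", "right"][int(np.argmax(out[:3]))]
    obstacle = bool(out[3] > 0.5)
    return direction, obstacle

x = rng.uniform(0, 1, size=27)          # e.g. edge density per sub-window
print(forward(x))
```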

Fusion of Sonar and Laser Sensor for Mobile Robot Environment Recognition

  • Kim, Kyung-Hoon;Cho, Hyung-Suck
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.91.3-91 / 2001
  • A sensor fusion scheme for mobile robot environment recognition that incorporates range data and contour data is proposed. The ultrasonic sensor provides a coarse spatial description but guarantees, with relatively high belief, that the space inside its sonic cone is free of obstacles. The laser structured-light system provides a detailed contour description of the environment but is prone to light noise and is easily affected by surface reflectivity. The overall fusion process is composed of two stages: noise elimination and belief update. Dempster-Shafer evidential reasoning is applied at each stage. Open-space estimation from the sonar range measurements allows noisy lines to be eliminated from the laser data. Comparing actual sonar data to the simulated sonar data enables ...

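The abstract above applies Dempster-Shafer evidential reasoning to fuse sonar and laser evidence. Below is a minimal sketch of Dempster's rule of combination over a two-element frame {occupied, empty}; the mass values are invented for illustration, and this is not the authors' two-stage pipeline.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {'occ', 'emp'},
    with the ignorance mass assigned to 'unknown' (the whole frame)."""
    keys = ("occ", "emp", "unknown")
    combined = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            mass = m1[a] * m2[b]
            if a == b:
                combined[a] += mass
            elif "unknown" in (a, b):
                combined[a if b == "unknown" else b] += mass
            else:                       # occ vs emp: conflicting evidence
                conflict += mass
    k = 1.0 - conflict
    return {key: val / k for key, val in combined.items()}

# Sonar suggests the cell is probably empty; the laser line suggests occupied.
sonar = {"occ": 0.1, "emp": 0.7, "unknown": 0.2}
laser = {"occ": 0.6, "emp": 0.1, "unknown": 0.3}
print(dempster_combine(sonar, laser))
```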

레이저 슬릿빔과 CCD 카메라를 이용한 3차원 영상인식 (3D image processing using laser slit beam and CCD camera)

  • 김동기;윤광의;강이석
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997 / pp.40-43 / 1997
  • This paper presents a 3D object recognition method for generating a 3D environment map or recognizing obstacles for mobile robots. An active light source projects a stripe pattern of light onto the object surface, while the camera observes the projected pattern from an offset viewpoint. The system consists of a laser unit and a camera on a pan/tilt device. A line segment in the 2D camera image corresponds to a plane on the object surface. Scaling, filtering, edge extraction, object extraction, and line thinning are used to enhance the light-stripe image. Faithful depth information about the object surface is obtained by interpreting the line segments. The performance of the proposed method has been demonstrated in detail through experiments on various types of objects. Experimental results show that the method has good position accuracy, effectively eliminates optical noise in the image, greatly reduces the memory requirement, and greatly cuts down the image-processing time for 3D object recognition compared to conventional object recognition.

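The entry above recovers depth by projecting a laser stripe and observing it from an offset camera. A hedged sketch of the underlying triangulation follows, under an assumed geometry (camera at the origin, laser offset along the X axis); the focal length, baseline, and laser angle are illustrative values, not calibration results from the paper.

```python
import math

def slit_depth(x_px, focal_px, baseline_m, laser_angle_rad):
    """Triangulated depth of one laser-stripe pixel.

    Assumed geometry for this sketch: the camera looks along +Z, the laser
    unit sits at (baseline, 0, 0), and its light plane crosses the optical
    axis at `laser_angle_rad` measured from +Z.  Intersecting the camera
    ray X = x*Z/f with the plane X = baseline - Z*tan(angle) gives
        Z = baseline * f / (x + f * tan(angle)).
    """
    return baseline_m * focal_px / (x_px + focal_px * math.tan(laser_angle_rad))

def stripe_profile(stripe_pixels, focal_px=700.0, baseline_m=0.10,
                   laser_angle_rad=math.radians(30)):
    """Convert the thinned stripe (one x-coordinate per image row) into depths."""
    return [slit_depth(x, focal_px, baseline_m, laser_angle_rad)
            for x in stripe_pixels]

# x-coordinates of the extracted light stripe in a few image rows (assumed)
print(stripe_profile([120.0, 95.0, 60.0, 30.0]))
```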

Development of IoT System Based on Context Awareness to Assist the Visually Impaired

  • Song, Mi-Hwa
    • International Journal of Advanced Culture Technology / Vol. 9, No. 4 / pp.320-328 / 2021
  • As the number of visually impaired people steadily increases, interest in independent walking is also increasing. At present, however, the visually impaired face various inconveniences when walking independently, which reduces their quality of life. The white cane, the existing walking aid for the visually impaired, has difficulty detecting upper obstacles and obstacles outside its effective range. In addition, crossing the street is inconvenient because the audible signals that help the visually impaired cross at crosswalks are often lacking or damaged. These factors make it difficult for the visually impaired to walk independently. Therefore, we propose the design of an embedded system that provides traffic-light recognition through object recognition technology, voice guidance using TTS, and upper-obstacle recognition through ultrasonic sensors, so that the visually impaired can achieve safe, high-quality independent walking.
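
The entry above pairs an ultrasonic sensor for upper-obstacle detection with TTS voice guidance. A minimal sketch of that monitoring loop follows; `read_upper_distance_cm` and `speak` are hypothetical placeholders for the device's sensor driver and TTS engine, and the warning threshold is an assumption.

```python
import time

UPPER_OBSTACLE_CM = 120   # warn when something is closer than this (assumed)

def read_upper_distance_cm():
    """Placeholder for the ultrasonic driver (echo time * speed of sound / 2
    on the real device).  Returns a fixed value here for illustration."""
    return 95.0

def speak(text):
    """Placeholder for the TTS voice guidance described in the entry above."""
    print(f"[TTS] {text}")

def monitor_upper_obstacles(poll_s=0.2, cycles=3):
    """Poll the upward-facing sensor and announce close obstacles."""
    for _ in range(cycles):
        d = read_upper_distance_cm()
        if d < UPPER_OBSTACLE_CM:
            speak(f"Upper obstacle about {d / 100:.1f} metres ahead")
        time.sleep(poll_s)

monitor_upper_obstacles()
```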

무인 이동체의 충돌 회피 시스템 설계 (Design of a Collision Avoidance System for Unmanned Vehicles)

  • 김태형;장종욱
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2016년도 추계학술대회 / pp.254-255 / 2016
  • As modern science and technology have advanced, people have come to pursue convenience, and an era has arrived in which machines operate without direct human control. Such unmanned vehicles are being used and studied in many domains, including automobiles, aviation, and ships. However, the defining advantage of an unmanned vehicle is also its weakness: no human is controlling it, which means a high likelihood of colliding with obstacles while driving. The proposed system builds collision avoidance from fuzzy control, vision-based recognition, and sensor-based recognition, and through this paper we expect improved collision avoidance performance.

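The entry above proposes collision avoidance built on fuzzy control plus vision- and sensor-based recognition. Below is a toy fuzzy steering rule as a sketch of the control idea; the membership ranges, the single rule, and the steering limit are assumptions, not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(obstacle_dist_m, obstacle_bearing_deg):
    """Toy Mamdani-style controller: the closer the obstacle and the more
    centred its bearing, the harder we steer away from it."""
    near = tri(obstacle_dist_m, 0.0, 0.5, 2.0)
    centred = tri(obstacle_bearing_deg, -30.0, 0.0, 30.0)
    danger = min(near, centred)            # rule: IF near AND centred THEN turn
    away = -1.0 if obstacle_bearing_deg >= 0 else 1.0   # turn toward the free side
    max_turn_deg = 25.0
    return danger * away * max_turn_deg    # defuzzified steering command

print(fuzzy_steering(0.8, 5.0))    # close, slightly right -> steer left
print(fuzzy_steering(3.0, 5.0))    # far away -> no correction
```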

V.F. 모델을 이용한 주행차량의 전방 영상계측시스템 설계 (Design of a Front Image Measurement System for the Traveling Vehicle Using V.F. Model)

  • 정용배;김태효
    • 융합신호처리학회논문지 / Vol. 7, No. 3 / pp.108-115 / 2006
  • In this paper, we propose an algorithm that accurately measures the position of, and the distance to, a preceding vehicle in the current driving lane by compensating for the pitching error caused by vehicle motion. To measure the distance to the preceding vehicle accurately, a camera calibration algorithm is established that obtains 3D coordinates from 2D image coordinates using projective geometry, and an image model of straight road sections is presented using the View Frustum (V.F.) model. Many previously published studies assume a planar road and therefore do not consider the error characteristics caused by geometric changes between the road and the vehicle. To address this, we extract the projection matrix through a geometric analysis between the world coordinate system and the image coordinate system by applying the camera calibration algorithm, and we compensate for the error characteristics caused by pitching changes through a geometric analysis of the vanishing point using the V.F. model. Experimental results show that the distance error is significantly reduced, confirming robustness to pitching changes.

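The entry above extracts a projection matrix by camera calibration and compensates pitching error through the vanishing point of the V.F. model. A hedged sketch of the vanishing-point part follows: lane lines are intersected in the image, the vertical drift of the vanishing point gives a pitch estimate, and a flat-road formula converts an image row to range. The line parameters, focal length, and camera height are illustrative assumptions, not the paper's calibration.

```python
import math

def vanishing_point(line_a, line_b):
    """Intersection of two lane lines given in image coordinates as
    (slope, intercept) of v = slope * u + intercept."""
    (m1, c1), (m2, c2) = line_a, line_b
    u = (c2 - c1) / (m1 - m2)
    return u, m1 * u + c1

def pitch_from_vp(v_vp, v_vp_calibrated, focal_px):
    """Pitch change estimated from the vertical drift of the vanishing
    point relative to its calibrated (level-road) row."""
    return math.atan2(v_vp - v_vp_calibrated, focal_px)

def range_to_road_point(v_px, v_vp, focal_px, cam_height_m):
    """Flat-road range from an image row, measured from the
    (pitch-compensated) horizon row v_vp:  Z = f * h / (v - v_vp)."""
    return focal_px * cam_height_m / (v_px - v_vp)

# Assumed numbers: 800 px focal length, camera 1.2 m above the road.
u_vp, v_vp = vanishing_point((0.8, -40.0), (-0.7, 410.0))
pitch = pitch_from_vp(v_vp, 240.0, 800.0)
print(f"vanishing point ({u_vp:.1f}, {v_vp:.1f}), pitch {math.degrees(pitch):.2f} deg")
print(f"range to image row 420: {range_to_road_point(420.0, v_vp, 800.0, 1.2):.1f} m")
```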

영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발 (Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition)

  • 송희준;이선구;강태구;김동원;서삼준;박귀태
    • 대한전기학회:학술대회논문집 / 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문 / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but to be used in real life. In this work, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as a human-robot interaction (HRI) system. To carry out given tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented and fused with the other sensors installed on the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

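The entry above recognizes obstacles with enhanced template matching followed by a hierarchical SVM (the SVM stage is omitted here). A plain normalized cross-correlation matcher is sketched below as a baseline for the template-matching step; the image sizes and planted template are synthetic.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation of two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template, stride=2):
    """Slide the template over the image and return the best (score, (y, x))."""
    th, tw = template.shape
    best = (-1.0, (0, 0))
    for y in range(0, image.shape[0] - th + 1, stride):
        for x in range(0, image.shape[1] - tw + 1, stride):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best[0]:
                best = (score, (y, x))
    return best

rng = np.random.default_rng(1)
img = rng.uniform(size=(60, 80))
tmpl = img[20:32, 30:46].copy()              # plant a known "obstacle" patch
print(match_template(img, tmpl, stride=1))   # should peak at (20, 30) with score 1.0
```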

Obstacle Avoidance and Lane Recognition for the Directional Control of Unmanned Vehicle

  • Kim, Chang-Man;Moon, Hee-Chang;Kim, Sang-Gyum;Kim, Jung-Ha
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2002년도 ICCAS / pp.34.6-34 / 2002
  • 1. Introduction 2. System Configuration 2.1 Control System 2.1.1 Longitudinal Control 2.1.2 Lateral Control 2.2 Sensor System 2.2.1 Photo Interrupt 2.2.2 Ultrasonic Sensor 2.3 Vision System 2.4 Communication System 2.4.1 Data Communication 2.4.2 Image Communication 3. Test and Result 3.1 Vision Test 3.2 Ultrasonic Sensor Test 4. Conclusion. Acknowledgment. References.


실시간 처리를 위한 타이어 자동 선별 비젼 시스템 (The Automatic Tire Classifying Vision System for Real-Time Processing)

  • 박귀태;김진헌;정순원;송승철
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1992년도 한국자동제어학술회의논문집(국내학술편); KOEX, Seoul; 19-21 Oct. 1992 / pp.358-363 / 1992
  • The tire manufacturing process demands classification of tire types when tires are transferred between internal processes. Although most of these processes are well automated, the classification still relies largely on human visual inspection, which has been an obstacle to factory automation for tire manufacturers. This paper proposes an effective vision system that can be applied to the tire classification process in real time. The system adopts a parallel architecture using multiple transputers and contains preprocessing algorithms for character recognition. The system can easily be expanded to handle large volumes of data that can be processed separately.

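The entry above describes a parallel (multi-transputer) architecture with preprocessing for character recognition on tires. As a loose, modern stand-in, the sketch below splits an image into strips and thresholds them concurrently; the thresholding step, strip count, and thread pool are assumptions rather than the original transputer design.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def binarize_strip(strip, threshold=0.5):
    """Preprocessing step for character recognition: threshold one image
    strip so raised sidewall characters stand out from the background."""
    return (strip > threshold).astype(np.uint8)

def preprocess_parallel(image, n_workers=4):
    """Split the sidewall image into horizontal strips and preprocess them
    concurrently, loosely mirroring the parallel pipeline in the entry above."""
    strips = np.array_split(image, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        processed = list(pool.map(binarize_strip, strips))
    return np.vstack(processed)

rng = np.random.default_rng(2)
sidewall = rng.uniform(size=(240, 320))     # stand-in for a captured tire image
print(preprocess_parallel(sidewall).shape)  # (240, 320)
```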