• Title/Abstract/Keyword: robot localization

Search results: 591

자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링 (Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots)

  • 김민영;조형석;김재훈
    • 제어로봇시스템학회논문지 / Vol. 8, No. 9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a welding mobile robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for recognizing the working environment and tracking the weld seam, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed in order to provide the robot with a 3D map of the work environment. Using this sensor system, a spatial filter based on a neural network is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm structure is proposed and tested, which is composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy for the welding mobile robot is developed in order to recognize the work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the strategy and tactics for sensing the work environment are described and discussed with a series of experiments in detail.
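The paper extracts the laser-stripe center with a trained neural-network spatial filter; as a rough, hedged stand-in (not the authors' method), the sketch below estimates the stripe center in each image column with a plain intensity-weighted centroid. Function names and the intensity threshold are illustrative assumptions.

```python
import numpy as np

def stripe_centers(image: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Estimate the laser-stripe center row for each image column.

    `image` is a 2D grayscale array in which the projected laser stripe
    appears as a bright, roughly horizontal band. Columns whose peak
    intensity falls below `threshold` are marked invalid (NaN).
    """
    rows = np.arange(image.shape[0], dtype=float)[:, None]
    weights = image.astype(float)
    mass = weights.sum(axis=0)
    centers = np.where(
        mass > 0,
        (rows * weights).sum(axis=0) / np.maximum(mass, 1e-9),
        np.nan,
    )
    centers[image.max(axis=0) < threshold] = np.nan
    return centers
```

A per-column centroid is the usual baseline that a learned spatial filter would refine when the stripe is corrupted by reflections or noise.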

서비스 로봇을 위한 유비쿼터스 센서 네트워크 기반 위치 인식 시스템 (Ubiquitous Sensor Network based Localization System for Public Guide Robot)

  • 최형윤;박진주;문용선
    • 한국정보통신학회논문지 / Vol. 10, No. 10 / pp.1920-1926 / 2006
  • Growing social interest in service robots has driven extensive research on their development, but this work has run into the limits of a single-platform approach. To overcome this limitation, ubiquitous-network-based service robots linked to a ubiquitous network have emerged as an alternative. For functions such as situation awareness of the surrounding environment and localization over a ubiquitous sensor network, systems using RFID and ultrasonic sensors have been introduced and applied to real robots with good results. However, passive RFID tags suffer from recognition rates that degrade with distance, and ultrasonic sensors require high driving voltages, so both are poorly suited to low-power sensor networks. In this paper, we therefore implement a sensor network module for sensor-network-based localization and, on top of it, an RSSI-based localization system. Because this system measures only the RSSI of the signal arriving from each sensor node and converts it to a distance to compute the position, it can use a low-power sensor network as it is, and it should also overcome the distance limitations encountered when designing an ad-hoc network.
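As a hedged illustration of the RSSI pipeline described above (not code from the paper), the sketch below converts RSSI to range with a log-distance path-loss model and then solves a linear least-squares trilateration over three or more sensor nodes; the reference RSSI at 1 m and the path-loss exponent are assumed values.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.0):
    """Range (m) from an RSSI reading (dBm) via the log-distance model:
    RSSI = RSSI_1m - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

def trilaterate(anchors, distances):
    """Linear least-squares position fix from >=3 anchor positions (x, y)
    and the ranges estimated to each of them."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0, d0 = anchors[0, 0], anchors[0, 1], d[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d0**2 - d[1:]**2
         + anchors[1:, 0]**2 - x0**2
         + anchors[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# e.g. trilaterate([(0, 0), (5, 0), (0, 5)],
#                  [rssi_to_distance(r) for r in (-55.0, -60.0, -58.0)])
```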

지능형 감시 시스템 구축을 위한 영상과 음원 추적 기반 임베디드 모바일로봇 개발 (A Design of Mobile Robot based on Camera and Sound Source Localization for Intelligent Surveillance System)

  • 박정현;김형복;오정석;심귀보
    • 한국지능시스템학회논문지 / Vol. 19, No. 4 / pp.532-537 / 2009
  • The need for systems that can track and recognize arbitrary people in security-sensitive spaces is steadily growing. In this paper, an unmanned intelligent system is implemented by developing an embedded mobile robot based on vision and sound source tracking: a block matching algorithm is used for object tracking in images, and the time difference and intensity difference of sound are used for sound source tracking. The intruder-tracking algorithm is verified on three implementations: a system using a pan-tilt camera and a sound source tracking module, a system using a network camera and a mobile robot, and a system using a mobile robot alone. By addressing the problems found in each implementation and building systems that can interoperate, the reliability of the intelligent unmanned surveillance system can be further improved.
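A minimal sketch of the time-difference part of sound source tracking, assuming a two-microphone far-field geometry (the array spacing, sample rate, and function names are illustrative, not from the paper):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

def tdoa_direction(sig_left, sig_right, mic_spacing, sample_rate):
    """Bearing estimate from the inter-microphone time difference,
    taken as the peak of the cross-correlation of the two channels."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)  # samples; sign depends on channel labeling
    tau = lag / float(sample_rate)                # seconds
    # Far-field assumption: path difference = c * tau = d * sin(theta)
    ratio = np.clip(SPEED_OF_SOUND * tau / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```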

세계 AI 로봇 카레이스 대회를 위한 자율 주행 시스템 구현 (Implementation of an Autonomous Driving System for the Segye AI Robot Car Race Competition)

  • 최정현;임예은;박종훈;정현수;변승재;사공의훈;박정현;김창현;이재찬;김도형;황면중
    • 로봇학회논문지 / Vol. 17, No. 2 / pp.198-208 / 2022
  • In this paper, an autonomous driving system is implemented for the Segye AI Robot Race Competition, in which multiple vehicles drive simultaneously. By utilizing the ERP42-racing platform, RTK-GPS, and LiDAR sensors provided in the competition, we propose an autonomous driving system that can drive safely and quickly in a road environment with multiple vehicles. The system consists of recognition, judgement, and control parts. In the recognition stage, vehicle localization and obstacle detection through a waypoint-based LiDAR ROI are performed. In the judgement stage, the target velocity and obstacle-avoidance decisions are determined in consideration of straight/curved sections and the distance between the vehicle and neighboring vehicles. In the control stage, adaptive-cruise longitudinal velocity control based on a safe distance and lateral control based on pure pursuit are performed. To overcome the limited experimental environment, simulations and partial real-vehicle experiments were conducted together to develop and verify the proposed algorithms. After that, we participated in the Segye AI Robot Race Competition and raced autonomously with the verified algorithms.
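The lateral controller is described as pure pursuit; a minimal geometric sketch of that steering law is given below (variable names and frames are assumptions, not the competition code):

```python
import math

def pure_pursuit_steering(pose_x, pose_y, yaw, target_x, target_y, wheelbase):
    """Pure-pursuit steering angle toward a look-ahead waypoint.

    `alpha` is the bearing of the target point in the vehicle frame; the
    classic geometric result is steering = atan(2 * L * sin(alpha) / Ld),
    where L is the wheelbase and Ld the distance to the look-ahead point.
    """
    dx, dy = target_x - pose_x, target_y - pose_y
    lookahead = math.hypot(dx, dy)
    alpha = math.atan2(dy, dx) - yaw
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
```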

두 개의 하이드로폰을 이용한 수중 음원 방향 추정 및 위치 추정 알고리즘 (Direction and Location Estimating Algorithm for Sound Sources with Two Hydrophones in Underwater Environment)

  • 신재욱;송주만;이석영;최현택;박부견
    • 제어로봇시스템학회논문지 / Vol. 19, No. 8 / pp.676-681 / 2013
  • For underwater vehicles, the use of sensors such as cameras and laser scanners is limited because the environment differs greatly from that of robots designed to work on dry land. In underwater environments, valuable information can instead be obtained from sound signals. The most important application is the localization of underwater sound sources. The estimated location of a sound source can be used to control underwater robots or submarines. Thus, the purpose of this research is to estimate the source's direction and location in a noisy underwater environment. The direction of the sound source is obtained using two hydrophones. Furthermore, if the robot or the sound source is moving, the location of the sound source is estimated using more than two estimated directions. The feasibility of the developed algorithm is examined by experiments in a water tank and in the ocean.
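Given two bearings measured from two known positions (for example, as the vehicle moves), the source location can be triangulated by intersecting the two bearing rays; the sketch below is a hedged 2D illustration of that step, not the authors' implementation.

```python
import numpy as np

def locate_from_bearings(p1, theta1, p2, theta2):
    """Intersect two bearing rays (observer positions p1, p2 and absolute
    bearings theta1, theta2 in radians) to estimate the source position.

    Solves p1 + t1*u1 = p2 + t2*u2 for the ray lengths t1, t2 and returns
    None when the bearings are (near-)parallel.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack((u1, -u2))
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * u1
```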

Global Ultrasonic System for Autonomous Navigation of Indoor Mobile Robots

  • Park, Seong-Hoon;Yi, Soo-Yeong;Jin, Sang-Yoon;Kim, Jin-Won
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2004년도 ICCAS / pp.846-851 / 2004
  • In this paper, we propose a global ultrasonic system for the self-localization and autonomous navigation of indoor mobile robots. The ultrasonic sensor is regarded as the most cost-effective ranging system among the possible alternatives and is widely used for general purposes, since it requires simple electronic drivers and has relatively high accuracy. The global ultrasonic system presented in this paper consists of four or more ultrasonic generators fixed at reference positions in the global coordinates of an indoor environment and two receivers mounted on the mobile robot. By using RF (Radio Frequency) modules added to the ultrasonic sensors, the robot is able to control the ultrasonic generation and to obtain the critical distances from the reference positions, which are required in order to localize its position in the global coordinates. A Kalman filter algorithm designed for self-localization using the global ultrasonic system and experimental results of the autonomous navigation are presented in this paper.
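The paper designs a Kalman filter for self-localization from the beacon distances; the sketch below shows one hedged EKF-style measurement update of a 2D position state for a single range reading (state layout and noise values are assumptions, not the paper's filter).

```python
import numpy as np

def ekf_range_update(x, P, beacon, z, r_var):
    """One EKF measurement update for a single beacon range reading.

    x: state [px, py], P: 2x2 covariance, beacon: known beacon (x, y),
    z: measured distance, r_var: range-measurement noise variance.
    """
    diff = x - np.asarray(beacon, dtype=float)
    pred = np.linalg.norm(diff)                 # predicted range
    H = (diff / max(pred, 1e-9)).reshape(1, 2)  # Jacobian of range w.r.t. position
    S = H @ P @ H.T + r_var                     # innovation covariance (1x1)
    K = P @ H.T / S                             # Kalman gain (2x1)
    x_new = x + (K * (z - pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new
```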


적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발 (3D Range Measurement using Infrared Light and a Camera)

  • 김인철;이수용
    • 제어로봇시스템학회논문지 / Vol. 14, No. 10 / pp.1005-1013 / 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate but too expensive. Those sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building and localization. Stereo vision is a very common way of getting the depth information of a 3D environment. However, it requires that the correspondence be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used in order to reduce the effects of ambient light while obtaining the 3D depth map. Modeling of the projected light pattern enabled precise estimation of the range. Identification of the cells from the pattern is the key issue in the proposed method. Several methods of correctly identifying the cells are discussed and verified with experiments.
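Range recovery in active triangulation reduces to a simple geometric relation; as a hedged illustration (the numbers and names are assumptions, not the paper's calibration), a projected feature shifted by 20 px with a 600 px focal length and a 0.1 m camera-projector baseline lies at roughly 600 × 0.1 / 20 = 3 m.

```python
def triangulated_depth(pixel_offset_px, focal_length_px, baseline_m):
    """Depth from active triangulation: a projected feature shifted by
    `pixel_offset_px` in the image (relative to its position at infinity)
    lies at roughly z = f * b / offset, with f in pixels and b in metres."""
    if pixel_offset_px <= 0:
        return float("inf")
    return focal_length_px * baseline_m / pixel_offset_px

# triangulated_depth(20, 600, 0.1) -> 3.0 (metres)
```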

스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지 (3D Environment Perception using Stereo Infrared Light Sources and a Camera)

  • 이수용;송재복
    • 제어로봇시스템학회논문지 / Vol. 15, No. 5 / pp.519-524 / 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate but too expensive. Those sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building and localization. Stereo vision is a very common way of getting the depth information of a 3D environment. However, it requires that the correspondence be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining the 3D depth map. Modeling of the projected light pattern enabled precise estimation of the range. Two successive captures of the image, with the left and then the right infrared light projected, provide several benefits, including a wider depth-measurement area, higher spatial resolution, and visibility perception.
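One way to picture the benefit of the two successive captures is to fuse the two per-projection depth maps, keeping whichever measurement is valid at each pixel; the sketch below is an illustrative assumption about that fusion step, not the paper's method.

```python
import numpy as np

def merge_depth_maps(depth_left, depth_right):
    """Fuse the depth maps from the left and right projections: average
    where both are valid, take the single valid value elsewhere, and keep
    NaN where neither capture produced a measurement."""
    stacked = np.stack([depth_left, depth_right]).astype(float)
    valid = ~np.isnan(stacked)
    count = valid.sum(axis=0)
    total = np.where(valid, stacked, 0.0).sum(axis=0)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```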

이동 로봇을 위한 온라인 동시 지도작성 및 자가 위치 추적 알고리즘 (Online SLAM algorithm for mobile robot)

  • 김병주
    • Journal of the Korean Data and Information Science Society / Vol. 22, No. 6 / pp.1029-1040 / 2011
  • To develop an intelligent autonomous navigation method applicable to real environments, this study proposes an intelligent navigation algorithm that can build a map without using position information. The proposed algorithm operates online, builds the map without position information, and does not require heavy computation, so it remains practical in the real world. This makes it particularly useful when large volumes of images must be processed, as in real-world navigation of a mobile robot. Applying the proposed algorithm to toy data and to a large dataset showed that, compared with existing methods, it uses less memory and does not need to recompute the eigenspace for new inputs, so it is judged to be suitable for real-world robot navigation.
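The abstract points to an appearance-based map whose eigenspace is updated rather than rebuilt for every new input; as a generic, hedged stand-in (not the paper's algorithm), the sketch below grows such an eigenspace online with scikit-learn's IncrementalPCA.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Hypothetical appearance-based map: each place is summarized by the
# low-dimensional projection of its flattened camera image, and the
# eigenspace is updated in place instead of being recomputed from scratch.
ipca = IncrementalPCA(n_components=16)

def add_images(image_batch: np.ndarray) -> None:
    """image_batch: (n_images, n_pixels); each batch must contain at least
    n_components images for partial_fit to update the eigenspace."""
    ipca.partial_fit(image_batch)

def describe(image: np.ndarray) -> np.ndarray:
    """Project one flattened image into the current eigenspace."""
    return ipca.transform(image.reshape(1, -1))[0]
```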

장애인을 위한 스마트 모빌리티 시스템 개발 (Development of Smart Mobility System for Persons with Disabilities)

  • 유영준;박세은;안태준;양지호;이명규;이철희
    • 드라이브 ㆍ 컨트롤 / Vol. 19, No. 4 / pp.97-103 / 2022
  • Low fertility rates and increased life expectancy are accelerating the aging of society, which is also reflected in the gradually increasing proportion of vulnerable groups in the population. The demand for improved mobility among vulnerable groups such as the elderly and the disabled has greatly driven the growth of the electric-assisted mobility device market. However, such mobility devices generally require a certain level of operating skill, which limits the range of vulnerable users who can operate them and increases the learning burden. Therefore, autonomous driving technology needs to be introduced so that a wider range of vulnerable users can move more easily and meet their work and leisure needs in different environments. This study uses an Odyssey mini PC, a Velodyne VLP-16 LiDAR, supporting electronics, and a Linux-based ROS program to realize working-environment recognition, simultaneous localization and map generation, and navigation for an electric-powered mobility device for vulnerable groups. This autonomous mobility device is expected to be of great help to vulnerable users who cannot respond immediately in dangerous situations.
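Assuming the Linux/ROS navigation stack described above is running on the device, a destination could be issued through the standard move_base action interface; the sketch below is illustrative only (the frame name and coordinates are assumptions, not the study's configuration).

```python
#!/usr/bin/env python
# Minimal rospy sketch: send one navigation goal to the standard
# move_base action server, assuming SLAM/localization are already running.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # keep current heading convention

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("mobility_goal_sender")
    send_goal(2.0, 1.0)  # example goal 2 m ahead, 1 m left in the map frame
```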