• Title/Summary/Keyword: robot localization

Search results: 587

A Design of Mobile Robot based on Camera and Sound Source Localization for Intelligent Surveillance System (지능형 감시 시스템 구축을 위한 영상과 음원 추적 기반 임베디드 모바일로봇 개발)

  • Park, Jung-Hyun;Kim, Hyung-Bok;Oh, Jung-Suk;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.4, pp.532-537, 2009
  • The need for intelligent surveillance systems is increasingly recognized in spaces where security is critical. In this paper, we implement an unmanned intelligent surveillance system by developing an embedded mobile robot based on image and sound-source tracking. For object tracking we use a block-matching algorithm, and for sound-source tracking we compute the time differences and magnitude differences of the sound. We demonstrate the superiority of the intruder-tracking algorithm through three implementations: a system using a pan-tilt camera with the sound-source tracking module, a system using a network camera with the mobile robot, and a system using the mobile robot alone. Linked to a security system, the proposed system can provide interfacing functions for the security services of public facilities as well as homes.
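
The tracking approach in this abstract combines image block matching with sound time and magnitude differences. Below is a minimal sketch of the block-matching step using a sum-of-absolute-differences (SAD) cost; the block size, search range, and cost function are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def block_match(prev_frame, curr_frame, top_left, block=16, search=8):
    """Find where the block at `top_left` in prev_frame moved to in curr_frame.

    Uses the sum of absolute differences (SAD) over a +/- `search` pixel window.
    prev_frame, curr_frame: 2D grayscale arrays of equal shape.
    """
    y0, x0 = top_left
    ref = prev_frame[y0:y0 + block, x0:x0 + block].astype(np.int32)
    best_cost, best_offset = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > curr_frame.shape[0] or x + block > curr_frame.shape[1]:
                continue
            cand = curr_frame[y:y + block, x:x + block].astype(np.int32)
            cost = np.abs(ref - cand).sum()          # SAD cost of this candidate shift
            if cost < best_cost:
                best_cost, best_offset = cost, (dy, dx)
    return best_offset, best_cost
```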

Implementation of an Autonomous Driving System for the Segye AI Robot Car Race Competition (세계 AI 로봇 카레이스 대회를 위한 자율 주행 시스템 구현)

  • Choi, Jung Hyun;Lim, Ye Eun;Park, Jong Hoon;Jeong, Hyeon Soo;Byun, Seung Jae;Sagong, Ui Hun;Park, Jeong Hyun;Kim, Chang Hyun;Lee, Jae Chan;Kim, Do Hyeong;Hwang, Myun Joong
    • The Journal of Korea Robotics Society, v.17 no.2, pp.198-208, 2022
  • In this paper, an autonomous driving system is implemented for the Segye AI Robot Race Competition, in which multiple vehicles drive simultaneously. Utilizing the ERP42-racing platform, RTK-GPS, and LiDAR sensors provided in the competition, we propose an autonomous driving system that can drive safely and quickly in a road environment with multiple vehicles. The system consists of recognition, judgement, and control parts. In the recognition stage, vehicle localization and obstacle detection are performed through a waypoint-based LiDAR ROI. In the judgement stage, the target velocity and obstacle-avoidance decisions are determined in consideration of straight/curved sections and the distance to neighboring vehicles. In the control stage, adaptive-cruise longitudinal velocity control based on a safe distance and lateral control based on pure pursuit are performed. To overcome the limited experimental environment, simulations and partial real-world experiments were conducted together to develop and verify the proposed algorithms. We then participated in the Segye AI Robot Race Competition and raced autonomously with the verified algorithms.
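
The control stage pairs adaptive-cruise longitudinal control with pure-pursuit lateral control. A minimal pure-pursuit steering sketch follows; the wheelbase, look-ahead distance, and waypoint format are illustrative assumptions, not values from the ERP42-racing platform.

```python
import math

def pure_pursuit_steering(pose, waypoints, lookahead=3.0, wheelbase=1.04):
    """Return a steering angle [rad] toward the first waypoint at least
    `lookahead` meters ahead of the rear axle.

    pose: (x, y, yaw) of the rear axle in the map frame.
    waypoints: list of (x, y) points ordered along the path.
    """
    x, y, yaw = pose
    # Pick the first waypoint beyond the look-ahead distance (fall back to the last one).
    target = waypoints[-1]
    for wx, wy in waypoints:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    # Transform the target into the vehicle frame.
    dx, dy = target[0] - x, target[1] - y
    local_x = math.cos(yaw) * dx + math.sin(yaw) * dy
    local_y = -math.sin(yaw) * dx + math.cos(yaw) * dy
    ld = math.hypot(local_x, local_y)
    # Pure-pursuit law: delta = atan(2 * L * sin(alpha) / ld), with sin(alpha) = local_y / ld.
    return math.atan2(2.0 * wheelbase * local_y, ld ** 2)
```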

Direction and Location Estimating Algorithm for Sound Sources with Two Hydrophones in Underwater Environment (두 개의 하이드로폰을 이용한 수중 음원 방향 추정 및 위치 추정 알고리즘)

  • Shin, JaeWook;Song, Ju-Man;Lee, SeokYoung;Choi, Hyun-Taek;Park, PooGyeon
    • Journal of Institute of Control, Robotics and Systems, v.19 no.8, pp.676-681, 2013
  • For underwater vehicles, the use of sensors such as cameras and laser scanners is limited by the difference in environment compared to robots designed to work on dry land. In underwater environments, valuable information can be obtained from sound signals. The most important application is the localization of underwater sound sources. The estimated location of a sound source can be used to control underwater robots or submarines. Thus, the purpose of this research is to estimate the source's direction and location in a noisy underwater environment. The direction of the sound source is obtained using two hydrophones. Furthermore, if we assume that the robot or the sound source is moving, the location of the sound source is estimated from more than two estimated directions. The feasibility of the developed algorithm is examined by experiments in a water tank and in the ocean.
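
With a two-hydrophone pair, a common way to obtain the source direction is from the time difference of arrival (TDOA) under a far-field (plane-wave) assumption. The sketch below illustrates that idea; the cross-correlation delay estimate and the sound-speed constant are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

SOUND_SPEED = 1500.0  # approximate speed of sound in water [m/s]

def tdoa_bearing(sig_a, sig_b, fs, baseline):
    """Estimate the bearing of a far-field source relative to a hydrophone pair.

    sig_a, sig_b: sampled signals from hydrophones A and B.
    fs: sampling rate [Hz]; baseline: hydrophone spacing [m].
    Returns the angle [rad] from the broadside direction.
    """
    # Delay of sig_a relative to sig_b, from the peak of the cross-correlation.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tau = lag / fs
    # Plane-wave geometry: tau = baseline * sin(theta) / c.
    s = np.clip(SOUND_SPEED * tau / baseline, -1.0, 1.0)
    return np.arcsin(s)
```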

Global Ultrasonic System for Autonomous Navigation of Indoor Mobile Robots

  • Park, Seong-Hoon;Yi, Soo-Yeong;Jin, Sang-Yoon;Kim, Jin-Won
    • Institute of Control, Robotics and Systems Conference Proceedings, 2004.08a, pp.846-851, 2004
  • In this paper, we propose a global ultrasonic system for the self-localization and autonomous navigation of indoor mobile robots. The ultrasonic sensor is regarded as the most cost-effective ranging system among the possible alternatives, and it is widely used for general purposes since it requires simple electronic drivers and has relatively high accuracy. The global ultrasonic system presented in this paper consists of four or more ultrasonic generators fixed at reference positions in the global coordinates of an indoor environment and two receivers mounted on the mobile robot. By using RF (Radio Frequency) modules added to the ultrasonic sensors, the robot is able to control the ultrasonic generation and to obtain the distances from the reference positions, which are required in order to localize its position in the global coordinates. A Kalman filter algorithm designed for self-localization using the global ultrasonic system and experimental results of the autonomous navigation are presented in this paper.
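
Given the distances from four or more fixed generators, the robot position can be estimated by least-squares trilateration, which a Kalman filter (as in the paper) would then smooth over time. The sketch below is a minimal 2D version under assumed beacon coordinates; it illustrates the idea rather than the paper's implementation.

```python
import numpy as np

def trilaterate_2d(beacons, ranges):
    """Least-squares position from distances to fixed beacons.

    beacons: (N, 2) array of known beacon coordinates (N >= 3).
    ranges:  (N,) array of measured distances to each beacon.
    Returns the estimated (x, y) position.
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = beacons[0, 0], beacons[0, 1], ranges[0]
    # Subtracting the first circle equation from the others linearizes the problem:
    #   2(xi - x0) x + 2(yi - y0) y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (r0 ** 2 - ranges[1:] ** 2
         + beacons[1:, 0] ** 2 - x0 ** 2
         + beacons[1:, 1] ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with four assumed beacon positions at the room corners:
# trilaterate_2d([(0, 0), (5, 0), (5, 4), (0, 4)], [2.24, 3.61, 4.12, 3.0])
```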

3D Range Measurement using Infrared Light and a Camera (적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발)

  • Kim, In-Cheol;Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems, v.14 no.10, pp.1005-1013, 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is a key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate but too expensive. These sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment. However, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Identification of the cells in the pattern is the key issue in the proposed method. Several methods of correctly identifying the cells are discussed and verified with experiments.
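
Range from a projected pattern cell follows from triangulation between the projector and the camera, the same geometry as stereo disparity. The sketch below assumes a pinhole camera and a horizontal projector-camera baseline; the parameter names and disparity definition are illustrative, not the paper's calibration.

```python
def structured_light_range(u_observed, u_reference, focal_px, baseline_m):
    """Triangulated depth of a projected-pattern cell from its image displacement.

    u_observed:  horizontal pixel position of the cell as seen by the camera.
    u_reference: pixel position the same cell would have at a reference (far) depth.
    focal_px:    camera focal length in pixels; baseline_m: projector-camera baseline [m].
    Returns depth in meters (same triangulation as stereo: Z = f * B / disparity).
    """
    disparity = u_observed - u_reference
    if disparity <= 0:
        raise ValueError("cell not displaced; depth is out of measurable range")
    return focal_px * baseline_m / disparity
```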

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems, v.15 no.5, pp.519-524, 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is a key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate but too expensive. These sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment. However, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image with left and right infrared light projection provide several benefits, including a wider depth-measurement area, higher spatial resolution, and visibility perception.
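
The two successive captures (left and right light sources) can be fused into a single, wider depth map. The sketch below shows one simple merge rule, assuming each capture yields a depth image with NaN where no pattern cell was identified; the averaging rule is an illustrative choice, not the paper's method.

```python
import numpy as np

def merge_depth_maps(depth_left, depth_right):
    """Combine depth maps from the left- and right-projected captures.

    Both inputs are 2D float arrays with np.nan where no pattern cell was identified.
    Pixels seen in only one capture keep that value; pixels seen in both are averaged.
    """
    merged = np.where(np.isnan(depth_left), depth_right, depth_left)
    both = ~np.isnan(depth_left) & ~np.isnan(depth_right)
    merged[both] = 0.5 * (depth_left[both] + depth_right[both])
    return merged
```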

Online SLAM algorithm for mobile robot (이동 로봇을 위한 온라인 동시 지도작성 및 자가 위치 추적 알고리즘)

  • Kim, Byung-Joo
    • Journal of the Korean Data and Information Science Society, v.22 no.6, pp.1029-1040, 2011
  • In this paper, we propose an intelligent navigation algorithm for real-world problems that can build a map without localization. The proposed algorithm operates online and, moreover, does not require much memory when applied to real-world problems. When applied to both toy and large data sets, the proposed algorithm does not require computing the whole eigenspace and needs less memory than the existing algorithm. Thus, the proposed algorithm is suitable for real-world mobile navigation.

Development of Smart Mobility System for Persons with Disabilities (장애인을 위한 스마트 모빌리티 시스템 개발)

  • Yu, Yeong Jun;Park, Se Eun;An, Tae Jun;Yang, Ji Ho;Lee, Myeong-Gyu;Lee, Chul-Hee
    • Journal of Drive and Control, v.19 no.4, pp.97-103, 2022
  • Low fertility rates and increased life expectancy are accelerating the aging of society, which is reflected in the gradually increasing proportion of vulnerable groups in the population. The demand for improved mobility among vulnerable groups such as the elderly and the disabled has greatly driven the growth of the electric-assisted mobility device market. However, such mobility devices generally require a certain operating capability, which limits the range of vulnerable users and increases the cost of learning. Therefore, autonomous driving technology needs to be introduced to make mobility easier for a wider range of vulnerable groups and to meet their needs for work and leisure in different environments. This study uses an Odyssey mini PC, a Velodyne VLP-16 LiDAR, electronic devices, and a Linux-based ROS program to realize environment recognition, simultaneous localization and map generation, and navigation for electric-powered mobility devices for vulnerable groups. This autonomous mobility device is expected to be of great help to vulnerable users who cannot respond immediately in dangerous situations.

Single Outlier Removal Technology for TWR based High Precision Localization (TWR 기반 고정밀 측위를 위한 단일 이상측정치 제거 기술)

  • Lee, Chang-Eun;Sung, Tae-Kyung
    • The Journal of Korea Robotics Society, v.12 no.3, pp.350-355, 2017
  • UWB (Ultra Wide Band) refers to a system with a bandwidth of over 500 MHz or a bandwidth of 20% of the center frequency. It is robust against channel fading and has a wide signal bandwidth. Using an IR-UWB based ranging system, it is possible to obtain decimeter-level ranging accuracy. Furthermore, an IR-UWB system enables high-resolution acquisition through glass or cement. In recent years, IR-UWB based ranging chipsets have become cheap and popular, and it has become possible to implement positioning systems with accuracies of several tens of centimeters. The system can be configured as a one-way ranging (OWR) positioning system for fast ranging or a two-way ranging (TWR) positioning system for cheap and robust ranging. On the other hand, a ranging-based positioning system has a limitation on the number of terminals it can localize because the communication procedure required for ranging takes time. To overcome this problem, code multiplexing and channel multiplexing are performed. However, measurement errors occur due to interference between channels and codes, multipath, and so on. Measurement filtering is used to reduce measurement error, but more fundamentally, techniques for removing such measurements should be studied. In this paper, TWR-based positioning is first analyzed from a stochastic point of view and the effects of outlier measurements are summarized. A positioning algorithm that analytically identifies and removes a single outlier is presented and extended to three dimensions. Through simulation, we verify that the algorithm detects and removes single outliers.
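
One way to realize "identify and remove a single outlier" is a leave-one-out search: solve the positioning problem once per excluded range and keep the subset with the smallest residual. The sketch below uses linearized least-squares trilateration for the solve; the residual criterion is an illustrative stand-in for the paper's analytic test, and at least four anchors are assumed so every leave-one-out subset remains solvable in 2D.

```python
import numpy as np

def solve_ls(anchors, ranges):
    """Linearized least-squares 2D position from TWR ranges to known anchors."""
    anchors, ranges = np.asarray(anchors, float), np.asarray(ranges, float)
    # Same linearization as trilateration: subtract the first range equation.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(np.linalg.norm(anchors - pos, axis=1) - ranges)
    return pos, residual

def remove_single_outlier(anchors, ranges):
    """Drop the one range whose removal most reduces the fit residual."""
    anchors, ranges = np.asarray(anchors, float), np.asarray(ranges, float)
    pos_all, res_all = solve_ls(anchors, ranges)
    best = (None, pos_all, res_all)  # (suspected outlier index, position, residual)
    for i in range(len(ranges)):
        keep = np.arange(len(ranges)) != i
        pos, res = solve_ls(anchors[keep], ranges[keep])
        if res < best[2]:
            best = (i, pos, res)
    return best
```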

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.6, pp.149-156, 2008
  • A SLAM (Simultaneous Localization and Mapping) system finds a global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed. One uses distance-measurement sensors such as ultrasonic and laser sensors; the other uses a stereo vision system. Distance-sensor-based SLAM has low computation time and low cost, but its precision can degrade due to measurement error or sensor non-linearity. In contrast, a stereo vision system can accurately measure 3D space, but it needs a high-end system for the complex calculations and is expensive. In this paper, we implement a SLAM system using a single camera image and PSD sensors. The system detects obstacles with the front PSD sensor and then perceives the size and features of the obstacles by image processing. Probabilistic SLAM was implemented using the sensor and image data, and we verify the performance of the system through real experiments.
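
One simple way to fuse the two modalities is to take the obstacle range from the PSD sensor and its bearing from the image column, then place the obstacle in the map frame. The sketch below assumes a pinhole-like camera with a known horizontal field of view; the parameter names and the small-angle bearing model are illustrative assumptions, not the paper's implementation.

```python
import math

def obstacle_world_position(robot_pose, psd_range_m, pixel_u, image_width, fov_rad):
    """Project an obstacle detected by the front PSD sensor into the map frame.

    robot_pose: (x, y, yaw) of the robot in the map frame.
    psd_range_m: distance to the obstacle measured by the PSD sensor.
    pixel_u: horizontal pixel of the obstacle center in the camera image.
    image_width, fov_rad: camera horizontal resolution and field of view.
    """
    x, y, yaw = robot_pose
    # Bearing relative to the optical axis (small-angle approximation of a pinhole model).
    bearing = ((image_width / 2.0 - pixel_u) / (image_width / 2.0)) * (fov_rad / 2.0)
    heading = yaw + bearing
    return (x + psd_range_m * math.cos(heading),
            y + psd_range_m * math.sin(heading))
```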