• Title/Summary/Keyword: robot localization

Search Results: 587

Design and Control of an Omni-directional Cleaning Robot Based on Landmarks (랜드마크 기반의 전방향 청소로봇 설계 및 제어)

  • Kim, Dong Won;Igor, Yugay;Kang, Eun Seok;Jung, Seul
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.2
    • /
    • pp.100-106
    • /
    • 2013
  • This paper presents the design and control of an Omni-directional Cleaning Robot (OdCR) that employs omni-wheels at the three corners of its triangular frame. The omni-wheels enable the OdCR to move in any direction, so that lateral movement is possible. For localization, a StarGazer sensor provides accurate position and heading angle based on landmarks on the ceiling. In addition, ultrasonic sensors are installed to detect obstacles along the OdCR's path. Experimental studies are conducted to test the functionality of the system.
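
The holonomic motion described above follows from the tangential-wheel geometry. A minimal inverse-kinematics sketch, assuming an equilateral layout with hypothetical mounting angles and base radius (illustrative values, not the OdCR's actual parameters):

```python
import math

def omni_wheel_speeds(vx, vy, omega, wheel_angles_deg=(90, 210, 330), radius=0.15):
    """Rim speed required at each omni-wheel so the triangular base moves
    with body velocity (vx, vy) and yaw rate omega. Each wheel's drive
    direction is tangential to a circle of the given radius around the
    robot center (angles and radius are assumed example values)."""
    speeds = []
    for a_deg in wheel_angles_deg:
        a = math.radians(a_deg)
        # tangential projection of the body velocity plus the rotation term
        speeds.append(-math.sin(a) * vx + math.cos(a) * vy + radius * omega)
    return speeds
```

A pure yaw command drives all three wheels at the same speed, while any pure translation engages all three wheels at once, which is what makes lateral movement possible without turning first.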

Improved Localization Algorithm for Ultrasonic Satellite System (초음파위성시스템을 위한 개선된 위치추정 알고리즘)

  • Yoon, Kang-Sup
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.5
    • /
    • pp.775-781
    • /
    • 2011
  • For measuring the absolute position of a mobile robot in indoor environments, ultrasonic positioning systems have been researched for several years. Most of these systems transmit sequentially to avoid interference between ultrasonic pulses. Because of this sequential transmission, however, the position of the receiver changes between pulses while the mobile robot moves, which reduces positioning accuracy. In this paper, a new position-estimation algorithm that weights each measurement according to its time of receipt is proposed. By applying the proposed algorithm to the existing Ultrasonic Satellite System (USAT), an improved USAT is configured. The positioning performance of the improved USAT with the proposed algorithm is verified by experiments.
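
The time-of-receipt weighting idea can be sketched as weighted linearized trilateration, where more recently received pulses carry larger weights (the 2D setup and the weighting law are illustrative assumptions, not the paper's exact formulation):

```python
def weighted_position(beacons, ranges, weights):
    """Weighted linearized trilateration in 2D.

    beacons: (x, y) transmitter positions; ranges: measured distances;
    weights: per-measurement weights, e.g. larger for more recently
    received pulses. Subtracting the first sphere equation yields a
    linear system A p = b, solved here by weighted normal equations.
    Beacons must not all be collinear (det would vanish)."""
    (x0, y0), r0 = beacons[0], ranges[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), ri, w in zip(beacons[1:], ranges[1:], weights[1:]):
        ax, ay = 2.0 * (xi - x0), 2.0 * (yi - y0)
        b = r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2
        a11 += w * ax * ax
        a12 += w * ax * ay
        a22 += w * ay * ay
        b1 += w * ax * b
        b2 += w * ay * b
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With exact ranges the weighted solution is exact regardless of the weights; the weighting only matters once measurements taken at different times disagree.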

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • v.36 no.6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method built on Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by each individual robot. Global mapping normally takes a long time because map data are exchanged among robots while all areas are searched; an omnidirectional image sensor, however, has many advantages for object detection and mapping because it measures all information around a robot simultaneously. The computational cost of the correction algorithm is also reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created for each robot using the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps built with the proposed algorithm against real maps.
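
The Lucas-Kanade step underlying the object extraction solves a 2x2 system built from image gradients. A single-window pure-Python sketch (whole-window aggregation; a real implementation would operate per feature point and per pyramid level):

```python
def lk_flow(I0, I1):
    """Single-window Lucas-Kanade flow estimate between two grayscale
    frames I0 and I1 (2D lists of equal shape). Accumulates the normal
    equations over the interior pixels and solves for the (u, v) motion
    that best explains the brightness change."""
    h, w = len(I0), len(I0[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (I0[y][x + 1] - I0[y][x - 1]) / 2.0   # central differences
            iy = (I0[y + 1][x] - I0[y - 1][x]) / 2.0
            it = I1[y][x] - I0[y][x]                   # temporal difference
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
            sxt += ix * it
            syt += iy * it
    det = sxx * syy - sxy * sxy                        # textured window assumed
    u = (-syy * sxt + sxy * syt) / det
    v = ( sxy * sxt - sxx * syt) / det
    return u, v
```

For a smooth image translated by one pixel, the estimate recovers the shift; in the paper's pipeline the same 2x2 solve is applied only at the object's feature points, which is where the cost saving comes from.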

Development of Evaluation Technique of Mobility and Navigation Performance for Personal Robots (퍼스널 로봇을 위한 운동과 이동 성능평가 기술의 개발)

  • Ahn Chang-hyun;Kim Jin-Oh;Yi Keon Young;Lee Ho Gil;Kim Kyu-ro
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.2
    • /
    • pp.85-92
    • /
    • 2003
  • In this paper, we propose a method to evaluate the performance of mobile personal robots. A set of performance measures is proposed and the corresponding evaluation methods are developed. Unlike industrial manipulators, personal robots must be evaluated for mobility, navigation, task, and intelligence performance in environments where human beings are present. The proposed measures cover mobility, including vibration, repeatability, and path accuracy, as well as navigation performance, including wall following, doorsill traversal, obstacle avoidance, and localization. Task and intelligent-behavior performance, such as cleaning capability and high-level decision-making, are not considered in this paper. To measure the proposed performances through a series of tests, we designed a test environment and developed measurement systems including a 3D laser tracking system, a vision monitoring system, and a vibration measurement system. We measured the proposed performances with a mobile robot to show an example of the results. The developed systems, which are installed at the Korea Agency for Technology and Standards, are to be used by many robot companies in Korea.

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.87-93
    • /
    • 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information. We generate a new input representation that combines RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input. In most cases, however, a mobile robot navigation system receives successive sensor measurements; to improve localization performance, the CNN output is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
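
Using the regressed pose as a particle-filter measurement can be sketched as one predict/update/resample cycle. This is a planar (x, y) simplification with assumed noise parameters; the paper works in full 6-DOF:

```python
import math, random

def particle_filter_step(particles, motion, cnn_pose, sigma_meas=0.5, sigma_motion=0.1):
    """One cycle of a planar particle filter whose measurement is a pose
    regressed by a CNN. All noise parameters are illustrative assumptions.
    Returns the resampled particle set and the weighted-mean estimate."""
    # Predict: apply an odometry-style motion increment with added noise
    moved = [(x + motion[0] + random.gauss(0, sigma_motion),
              y + motion[1] + random.gauss(0, sigma_motion)) for x, y in particles]
    # Update: weight each particle by the Gaussian likelihood of the CNN pose
    weights = [math.exp(-((x - cnn_pose[0])**2 + (y - cnn_pose[1])**2)
                        / (2 * sigma_meas**2)) for x, y in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Estimate before resampling, then multinomial resample
    est = (sum(w * x for (x, _), w in zip(moved, weights)),
           sum(w * y for (_, y), w in zip(moved, weights)))
    resampled = random.choices(moved, weights=weights, k=len(moved))
    return resampled, est
```

Fed a stream of noisy CNN poses, the filter concentrates the particles and yields a trajectory smoother than the raw per-frame regressions.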

Efficient Implementation of IFFT and FFT for PHAT Weighting Speech Source Localization System (PHAT 가중 방식 음성신호방향 추정시스템의 FFT 및 IFFT의 효율적인 구현)

  • Kim, Yong-Eun;Hong, Sun-Ah;Chung, Jin-Gyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.1
    • /
    • pp.71-78
    • /
    • 2009
  • Sound source localization systems in service robot applications estimate the direction of a human voice. Time-delay information obtained from a few separated microphones is widely used to estimate the sound direction, and correlation is computed to calculate the time delay between two signals. In addition, the PHAT weighting function can be applied to significantly improve the accuracy of the estimation. However, the FFT and IFFT operations in the PHAT weighting stage occupy more than half of the area of the sound source localization system, so efficient FFT and IFFT designs are essential for its IP implementation. In this paper, we propose an efficient FFT/IFFT design method based on the characteristics of the human voice.
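
The PHAT weighting that the FFT/IFFT hardware implements amounts to whitening the cross-power spectrum before the inverse transform. A NumPy sketch of GCC-PHAT delay estimation (a generic floating-point formulation, not the paper's fixed-point hardware design):

```python
import numpy as np

def gcc_phat_delay(ref, sig):
    """Estimate the delay of `sig` relative to `ref`, in samples, using
    PHAT-weighted generalized cross-correlation: FFT both signals, keep
    only the phase of the cross-power spectrum, IFFT, and take the peak."""
    n = len(ref)
    cross = np.fft.fft(sig) * np.conj(np.fft.fft(ref))
    cross /= np.maximum(np.abs(cross), 1e-12)   # PHAT: discard magnitude, keep phase
    cc = np.real(np.fft.ifft(cross))
    lag = int(np.argmax(cc))
    return lag if lag <= n // 2 else lag - n    # map circular index to signed delay
```

Because PHAT discards magnitude, the correlation peak is sharp even for broadband speech; the delay, together with the microphone spacing and the speed of sound, gives the arrival angle.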

Research on the Robot Kidnapping Problem in Indoor Environments Using External Image Information and Absolute Spatial Coordinates (실내 공간에서 이동 로봇의 납치 문제 해결을 위한 외부 영상 정보 및 절대 공간 좌표 활용 연구)

  • Jeon, Young-Pil;Park, Jong-Ho;Lim, Shin-Teak;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.3
    • /
    • pp.2123-2130
    • /
    • 2015
  • Indoor mobile robots such as automatic monitoring robots and robot cleaners may be displaced by a person or knocked off their planned path by collisions with unexpected objects, and must then return to that path. This requires robust self-position estimation, which is closely related to the classical kidnapping problem of mobile robots. This study considers mobile robots that operate indoors and aims to keep the robot itself low-cost. Accordingly, in this paper, external image acquisition devices such as the CCTV cameras already installed in a room are used to capture the environment, recognize a marker on the mobile robot, and convert its location into absolute spatial coordinates, thereby addressing both the indoor self-position estimation and the kidnapping problem; a potential-field method is employed for navigation of the robot system. The proposed method was implemented on an actual robot system, and the relevant experiments were carried out to verify the results.

Study of Robust Position Recognition System of a Mobile Robot Using Multiple Cameras and Absolute Space Coordinates (다중 카메라와 절대 공간 좌표를 활용한 이동 로봇의 강인한 실내 위치 인식 시스템 연구)

  • Mo, Se Hyun;Jeon, Young Pil;Park, Jong Ho;Chong, Kil To
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.41 no.7
    • /
    • pp.655-663
    • /
    • 2017
  • With the development of ICT technology, the indoor use of robots is increasing, and research on transportation, cleaning, and guidance robots that can be used now or that will broaden the scope of future use is advancing. To facilitate the use of mobile robots in indoor spaces, self-location recognition is an important problem to be addressed. If an unexpected collision occurs during the motion of a mobile robot, its position deviates from the initially planned navigation path; in this case, the mobile robot needs a robust controller that enables it to accurately navigate toward the goal. This research addresses these self-localization issues. A robust position recognition system was implemented that estimates the position of the mobile robot by combining the robot's encoder information with absolute spatial coordinate transformations obtained from external video sources, such as the many CCTV cameras installed in the room. Furthermore, the vector field histogram method was applied as the path-traveling algorithm of the mobile robot system, and the results of the research were confirmed by experiments.
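
Combining encoder dead reckoning with absolute CCTV-derived fixes can be sketched as a constant-gain correction loop (the gain and the simple linear blend are assumptions for illustration, not the paper's estimator):

```python
def track_pose(start, odom_steps, cctv_fixes, gain=0.5):
    """Dead-reckon a planar position from encoder increments and, whenever
    an absolute CCTV-derived fix is available at that step, pull the
    estimate toward it with a constant-gain correction.

    odom_steps: list of (dx, dy) encoder increments;
    cctv_fixes: dict mapping step index -> absolute (x, y) fix."""
    x, y = start
    for i, (dx, dy) in enumerate(odom_steps):
        x, y = x + dx, y + dy                 # encoder dead reckoning
        fix = cctv_fixes.get(i)
        if fix is not None:                   # absolute correction step
            x += gain * (fix[0] - x)
            y += gain * (fix[1] - y)
    return x, y
```

Encoder-only estimates accumulate drift without bound; each absolute fix bounds the error again, which is the core benefit of fusing the two sources.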

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong;Jo, Hyeong-Seok;Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.776-787
    • /
    • 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To achieve the welding task in the closed space, the robotic welding system needs a sensor system for work environment recognition and weld seam tracking, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system is developed, based on optical triangulation, to provide the robot with a 3D map of the work environment. Using this sensor system, a spatial filter based on neural network technology is designed to extract the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy is developed so that the welding mobile robot can recognize its work environment efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the recognition strategy and tactics are described and discussed in detail with a series of experiments.
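
The optical-triangulation principle behind the laser vision sensor reduces, for a pinhole camera and a tilted laser ray, to a closed-form depth expression (generic geometry with assumed symbols, not the paper's calibration model):

```python
import math

def stripe_depth(u_px, focal_px, baseline_m, laser_angle_rad):
    """Depth along the optical axis of a laser-stripe point imaged at
    horizontal pixel offset u_px.

    Camera ray: x = z * u/f.  Laser ray leaves (baseline, 0) tilted by
    laser_angle toward the axis: x = baseline - z * tan(angle).
    Intersecting the two rays gives z = f*b / (u + f*tan(angle));
    with a laser parallel to the axis this is the classic z = f*b/u."""
    return focal_px * baseline_m / (u_px + focal_px * math.tan(laser_angle_rad))
```

Depth resolution degrades quadratically with range for a fixed baseline, which is why the sensor's baseline and laser angle must be chosen for the enclosure dimensions at hand.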

Ubiquitous Sensor Network based Localization System for Public Guide Robot (서비스 로봇을 위한 유비쿼터스 센서 네트워크 기반 위치 인식 시스템)

  • Choi, Hyoung-Youn;Park, Jin-Joo;Moon, Young-Sun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.10
    • /
    • pp.1920-1926
    • /
    • 2006
  • With growing social interest, there has been a great deal of research on service robots, but we now face the limitations of a single platform. The alternative is a ubiquitous-network-based service robot that overcomes these limitations. Systems using RFID (Radio Frequency Identification) and ultrasonic waves have appeared for functions such as recognizing the surroundings through ubiquitous sensor networks; these have been applied to real robots with good results. However, they have several limitations when applied to low-power sensor networks. For example, if RFID uses passive tags, the recognition rate is limited by distance, and driving ultrasonic transducers requires high power. Therefore, in this paper we develop an RSSI-based position recognition system on top of an implemented sensor network module. The system measures only the RSSI of the signals from each sensor node, converts them into distances, and calculates the position. As a result, we can still use a low-power sensor network and overcome the distance limitation by planning an ad-hoc network.
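
Converting RSSI to distance typically uses the log-distance path-loss model; a sketch with assumed calibration constants, plus a crude inverse-distance-weighted position fix over the anchor nodes:

```python
def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model to get range from one RSSI
    reading. p0_dbm is the RSSI at reference distance d0 and n the
    path-loss exponent -- calibration constants assumed for illustration."""
    return d0 * 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def rssi_position(nodes, rssi_values):
    """Crude position fix: centroid of the anchor nodes weighted by the
    inverse of the distance inferred from each node's RSSI."""
    ws = [1.0 / rssi_to_distance(r) for r in rssi_values]
    total = sum(ws)
    return (sum(w * x for (x, _), w in zip(nodes, ws)) / total,
            sum(w * y for (_, y), w in zip(nodes, ws)) / total)
```

The path-loss exponent n varies strongly indoors (walls, multipath), so per-site calibration of p0 and n dominates the achievable accuracy far more than the solver itself.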