• Title/Summary/Keyword: Vision Correction Algorithm


Implementation of Autonomous Mobile Wheeled Robot for Path Correction through Deep Learning Object Recognition (딥러닝 객체인식을 통한 경로보정 자율 주행 로봇의 구현)

  • Lee, Hyeong-il; Kim, Jin-myeong; Lee, Jai-weun
    • The Journal of the Korea Contents Association / v.19 no.12 / pp.164-172 / 2019
  • In this paper, we implement a wheeled mobile robot that accurately and autonomously finds the optimal route from a starting point to a destination based on computer vision in a complex indoor environment. Deep reinforcement learning is used to obtain a series of waypoints that form the best route from the starting point to the target. During autonomous driving, however, the robot often fails to reach its destination accurately because of external factors such as surface curvature and foreign objects. We therefore propose an algorithm that learns the waypoints and destination included in the planned route with a deep-learning model and then corrects the route by recognizing the waypoints while driving, so that the robot reaches the planned destination. We built an autonomous wheeled mobile robot controlled by an Arduino and equipped with a Raspberry Pi and Pycamera, and tested the proposed algorithm on the planned route in an indoor environment through a real-time link to a server running in the OSX environment.
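
As an illustration of the route-correction idea summarized above, the following is a minimal Python sketch of a waypoint-following loop that steers back toward a recognized waypoint marker. The detector stub, camera parameters, and drive interface are assumptions for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of a waypoint-based route-correction loop.
# detect_waypoint() stands in for a deep-learning object detector; the
# camera parameters and control scheme are illustrative assumptions.

IMAGE_WIDTH = 640      # camera frame width in pixels (assumed)
HFOV_DEG = 62.2        # assumed horizontal field of view of the camera

def detect_waypoint(frame):
    """Placeholder for the deep-learning detector: return the pixel
    x-coordinate of the recognized waypoint marker, or None."""
    return None

def heading_correction(marker_x):
    """Map the marker's horizontal offset from the image center to a
    steering-correction angle in degrees."""
    offset = marker_x - IMAGE_WIDTH / 2.0
    return offset / (IMAGE_WIDTH / 2.0) * (HFOV_DEG / 2.0)

def follow_route(waypoints, camera, drive):
    """Drive through the planned waypoints, correcting drift whenever a
    waypoint marker is recognized in the camera image."""
    for wp in waypoints:
        drive.head_towards(wp)            # dead-reckon toward the next waypoint
        frame = camera.capture()
        marker_x = detect_waypoint(frame)
        if marker_x is not None:
            # correct drift caused by surface curvature, slip, or obstacles
            drive.turn(heading_correction(marker_x))
```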

A Study on the Construction of Near-Real Time Drone Image Preprocessing System to use Drone Data in Disaster Monitoring (재난재해 분야 드론 자료 활용을 위한 준 실시간 드론 영상 전처리 시스템 구축에 관한 연구)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.3 / pp.143-149 / 2018
  • Recently, owing to the large-scale damage caused by natural disasters driven by global climate change, monitoring systems that apply remote sensing technology are being constructed for disaster areas. Among remote sensing platforms, drones have been actively used in the private sector thanks to recent technological developments and have been applied to disaster areas owing to advantages such as timeliness and economic efficiency. This paper deals with the development of a preprocessing system that can map drone image data in near-real time, as a basis for constructing a drone-based disaster monitoring system. The system is based on SURF, a computer vision feature-matching algorithm, and performs the desired geometric correction by matching feature points between reference images and newly captured images. The study areas are the lower Gahwa River and the Daecheong dam basin; the former has many feature points for matching whereas the latter has relatively few, which makes it possible to test whether the system can be applied in varied environments. The results show that the accuracy of the geometric correction is 0.6 m and 1.7 m in the two areas, respectively, and that the processing time is about 30 seconds per scene, indicating that the approach is well suited to disaster areas where timeliness is required. However, when no reference image is available or its accuracy is low, the quality of the correction decreases accordingly.
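
The feature-matching correction described above can be sketched with standard OpenCV calls: match SURF keypoints between a georeferenced reference image and a newly captured drone image, estimate a homography, and warp the drone image into the reference geometry. This is a generic sketch, not the authors' implementation; SURF lives in the opencv-contrib package, and the thresholds are assumed values.

```python
import cv2
import numpy as np

def register_to_reference(reference_gray, shot_gray):
    """Warp a drone image onto a reference image using SURF feature matching."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ref, des_ref = surf.detectAndCompute(reference_gray, None)
    kp_shot, des_shot = surf.detectAndCompute(shot_gray, None)

    # Match descriptors and keep only unambiguous matches (Lowe's ratio test)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_shot, des_ref, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    # Estimate a homography from shot coordinates to reference coordinates
    src = np.float32([kp_shot[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Resample the drone image into the reference image geometry
    h, w = reference_gray.shape
    return cv2.warpPerspective(shot_gray, H, (w, h))
```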

Development of vision-based security and service robot (영상 기반의 보안 및 서비스 로봇 개발)

  • Kim Jung-Nyun; Park Sang-Sung; Jang Dong-Sik
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.308-316 / 2004
  • There are many restrictions on how an autonomous robot can turn and move in an indoor space. In this research, we adopted omni-directional wheels as the driving mechanism, which allows the robot to move in horizontal and diagonal directions. Above all, we addressed the slip error problem that arises because this drive mechanism transmits power through wheel slip. To solve it, we developed a slip error correction algorithm: whenever the robot moves, it corrects its course by comparing the pre-programmed direction with its current heading, which is estimated from an extracted image of the floor line. Additionally, the robot provides limited security and service functions: it detects motion, transmits pictures to multiple users, and can be moved by simple commands. In this paper, we propose a practical model that can be used in an office.
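
A minimal sketch of the floor-line based correction idea described above: estimate the robot's current heading from the dominant line in a floor image and apply a proportional correction toward the planned heading. The Hough-transform parameters and proportional gain are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def floor_line_angle(frame_gray):
    """Return the angle (degrees, image coordinates) of the dominant floor
    line in a grayscale frame, or None if no line is found."""
    edges = cv2.Canny(frame_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Take the longest detected segment as the floor line
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

def slip_correction(planned_deg, frame_gray, gain=0.5):
    """Proportional correction of the heading error caused by wheel slip."""
    measured = floor_line_angle(frame_gray)
    if measured is None:
        return 0.0                      # no line visible: keep current heading
    return gain * (planned_deg - measured)
```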


Establishment of Thermal Infrared Observation System on Ieodo Ocean Research Station for Time-series Sea Surface Temperature Extraction (시계열 해수면온도 산출을 위한 이어도 종합해양과학기지 열적외선 관측 시스템 구축)

  • KANG, KI-MOOK; KIM, DUK-JIN; HWANG, JI-HWAN; CHOI, CHANGHYUN; NAM, SUNGHYUN; KIM, SEONGJUNG; CHO, YANG-KI; BYUN, DO-SEONG; LEE, JOOYOUNG
    • The Sea: Journal of the Korean Society of Oceanography / v.22 no.3 / pp.57-68 / 2017
  • Continuous monitoring of spatial and temporal changes in key marine environmental parameters such as SST (sea surface temperature) near the IORS (Ieodo Ocean Research Station) is required to investigate the ocean ecosystem, climate change, and sea-air interaction processes. In this study, we developed a system for continuously measuring SST with a TIR (thermal infrared) sensor mounted on the IORS. A new SST algorithm, which includes automatic atmospheric correction and emissivity calculation for different oceanic conditions, was developed to provide better-quality SST. The TIR-based SST products were then validated against in-situ water temperature measurements taken at the IORS during May 17-26, 2015 and July 15-18, 2015, yielding R-squared values of 0.72-0.85 and RMSE of 0.37-0.90 °C. This TIR-based SST observing system can be installed easily at similar ocean research stations, such as Sinan Gageocho and Ongjin Socheongcho, and offers the prospect of serving as a calibration site for SST remotely sensed by satellites to be launched in the future.
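
The emissivity and sky-radiance correction mentioned above can be illustrated with the standard radiative-transfer relation for a near-surface TIR radiometer, L_meas = eps * B(T_sea) + (1 - eps) * L_sky, inverted through the Planck function. The wavelength, the emissivity value, and the neglect of the short atmospheric path between sensor and sea surface are assumptions for illustration; the paper's actual algorithm, with its automatic atmospheric correction, is more elaborate.

```python
import numpy as np

C1 = 1.191042e8   # W um^4 m^-2 sr^-1, first radiation constant (wavelength form)
C2 = 1.4387752e4  # um K, second radiation constant

def planck_radiance(temp_k, wavelength_um=10.5):
    """Blackbody spectral radiance B(T) at the given wavelength (um)."""
    return C1 / (wavelength_um ** 5 * (np.exp(C2 / (wavelength_um * temp_k)) - 1.0))

def inverse_planck(radiance, wavelength_um=10.5):
    """Invert B(T) to recover temperature (K) from spectral radiance."""
    return C2 / (wavelength_um * np.log(C1 / (wavelength_um ** 5 * radiance) + 1.0))

def sst_from_tir(bt_measured_k, bt_sky_k, emissivity=0.99, wavelength_um=10.5):
    """Remove reflected sky radiance and emissivity effects, assuming
    L_meas = eps * B(T_sea) + (1 - eps) * L_sky (path radiance neglected)."""
    l_meas = planck_radiance(bt_measured_k, wavelength_um)
    l_sky = planck_radiance(bt_sky_k, wavelength_um)
    l_sea = (l_meas - (1.0 - emissivity) * l_sky) / emissivity
    return inverse_planck(l_sea, wavelength_um)
```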

RFID Based Mobile Robot Docking Using Estimated DOA (방향 측정 RFID를 이용한 로봇 이동 시스템)

  • Kim, Myungsik; Kim, Kwangsoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.9 / pp.802-810 / 2012
  • This paper describes an RFID (Radio Frequency Identification) based target acquisition and docking system. RFID is a non-contact identification technology that can transmit a relatively large amount of information using an RF signal. A robot equipped with an RFID reader can identify neighboring tag-attached objects without any other sensing or supporting systems such as a vision sensor. However, because current RFID does not provide spatial information about the identified object, the target docking problem remains when a task must be executed in a real environment. To address this, a direction-sensing RFID reader was developed using a dual-directional antenna, a set of two identical directional antennas positioned perpendicular to each other. By comparing the received signal strength at each antenna, the robot can estimate the DOA (Direction of Arrival) of the transmitted RF signal. In practice, DOA estimation poses a significant technical challenge, since the RF signal is easily distorted by the surrounding environment, and the robot can lose its way to the target in an electromagnetically disturbed setting. For this problem, a g-filter based error correction algorithm is developed in this paper. The algorithm reduces the error using the difference between the variances of the currently estimated direction and the previously filtered direction. The simulation and experiment results clearly demonstrate that a robot equipped with the developed system can successfully dock to a target tag in an obstacle-cluttered environment.
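
A rough sketch of the two ideas described above: estimating the DOA of a tag from the signal strengths of two perpendicular directional antennas, and smoothing the noisy estimates with a simple one-state g-filter whose gain is driven by the relative variances of the measurement and the filtered track. The dBm-to-linear conversion and the gain rule are illustrative assumptions rather than the authors' exact formulation.

```python
import math

def doa_from_rssi(rssi_x_dbm, rssi_y_dbm):
    """Estimate bearing (degrees) from the RSSI of two perpendicular
    directional antennas (assumed simple power-ratio model)."""
    px = 10.0 ** (rssi_x_dbm / 10.0)     # convert dBm to linear power
    py = 10.0 ** (rssi_y_dbm / 10.0)
    return math.degrees(math.atan2(py, px))

class GFilter:
    """One-state smoothing filter: estimate += g * (measurement - estimate)."""
    def __init__(self, initial_deg, meas_var, est_var):
        self.estimate = initial_deg
        self.meas_var = meas_var          # variance of raw DOA measurements
        self.est_var = est_var            # variance of the filtered estimate

    def update(self, measured_deg):
        # Weight the new measurement by how noisy it is relative to the track
        g = self.est_var / (self.est_var + self.meas_var)
        self.estimate += g * (measured_deg - self.estimate)
        return self.estimate
```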