• Title/Summary/Keyword: SLAM (Simultaneous Localization and Mapping)

Search Results: 117

Development of P-SURO II Hybrid Autonomous Underwater Vehicle and its Experimental Studies (P-SURO II 하이브리드 자율무인잠수정 기술 개발 및 현장 검증)

  • Li, Ji-Hong; Lee, Mun-Jik; Park, Sang-Heon; Kim, Jung-Tae; Kim, Jong-Geol; Suh, Jin-Ho
    • Journal of Institute of Control, Robotics and Systems / v.19 no.9 / pp.813-821 / 2013
  • In this paper, we present the development of the P-SURO II hybrid AUV (Autonomous Underwater Vehicle), which can be operated in both AUV and ROV (Remotely Operated Vehicle) modes. In AUV mode, the vehicle is intended to carry out underwater missions that are difficult to achieve in ROV mode because of the tether cable. To accomplish missions such as inspection and maintenance of complex underwater structures in AUV mode, the vehicle requires a high level of autonomy, including environmental recognition, obstacle avoidance, and autonomous navigation. In addition to system development issues, several algorithmic issues are also discussed in this paper. Various experimental studies are presented to demonstrate the developed autonomy algorithms.

Width Estimation of Stationary Objects using Radar Image for Autonomous Driving of Unmanned Ground Vehicles (무인차량 자율주행을 위한 레이다 영상의 정지물체 너비추정 기법)

  • Kim, Seongjoon; Yang, Dongwon; Kim, Sujin; Jung, Younghun
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.6 / pp.711-720 / 2015
  • Recently, many studies of radar systems mounted on ground vehicles for autonomous driving, SLAM (Simultaneous Localization and Mapping), and collision avoidance have been reported. Since several pixels per object may be generated in a close-range radar application, the width of an object can be estimated automatically by various signal processing techniques. In this paper, we develop an algorithm to estimate obstacle width from radar images. The proposed method consists of five steps: 1) background clutter reduction, 2) local peak pixel detection, 3) region growing, 4) contour extraction, and 5) width calculation. To validate the method, we performed width estimation on real data of two cars acquired with a commercial radar system (the Navtech I200). The results verify that the proposed method can estimate the widths of the targets.
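    As a rough illustration of the five-step pipeline described in the abstract, the Python sketch below strings together clutter thresholding, local-peak detection, connected-component labelling as a stand-in for region growing, and a pixel-extent width calculation. The thresholds and the cross-range resolution are assumed values, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def estimate_object_widths(radar_img, clutter_db=10.0, azimuth_res_m=0.2):
    """Illustrative sketch of a 5-step width-estimation pipeline.
    radar_img: 2-D intensity image in dB (rows = range bins, cols = cross-range bins).
    clutter_db and azimuth_res_m are assumed, not the paper's settings."""
    # 1) background clutter reduction: suppress pixels below a global threshold
    img = np.where(radar_img > clutter_db, radar_img, 0.0)

    # 2) local peak pixel detection: maxima of a 3x3 neighbourhood
    peaks = (img == ndimage.maximum_filter(img, size=3)) & (img > 0)

    # 3) region growing: connected components of the thresholded image stand in
    #    for intensity-guided growing from each peak
    labels, n_blobs = ndimage.label(img > 0)

    widths_m = []
    for blob in range(1, n_blobs + 1):
        blob_mask = labels == blob
        if not peaks[blob_mask].any():        # keep only blobs seeded by a peak
            continue
        # 4) contour extraction: reduced here to the blob's cross-range extent
        cols = np.where(blob_mask)[1]
        # 5) width calculation: pixel extent converted to metres
        widths_m.append((cols.max() - cols.min() + 1) * azimuth_res_m)
    return widths_m
```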

Range-Doppler Clustering of Radar Data for Detecting Moving Objects (이동물체 탐지를 위한 레이다 데이터의 거리-도플러 클러스터링 기법)

  • Kim, Seongjoon; Yang, Dongwon; Jung, Younghun; Kim, Sujin; Yoon, Joohong
    • Journal of the Korea Institute of Military Science and Technology / v.17 no.6 / pp.810-820 / 2014
  • Recently, many studies of radar systems mounted on ground vehicles for autonomous driving, SLAM (Simultaneous Localization and Mapping), and collision avoidance have been reported. In the near field, several hits per object are generated after signal processing of the radar data, so clustering is essential to estimate object shapes and positions precisely. This paper proposes a method that groups hits in the range-Doppler domain into clusters representing individual objects according to pre-defined rules. The rules are based on perceptual cues for separating hits by object: the morphological connectedness between hits and the characteristics of their SNR distribution. In simulations for performance assessment, the proposed method yielded more effective performance than other techniques.
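    The following is a minimal sketch of rule-based clustering in the same spirit: hits join a cluster when their range, Doppler, and SNR gaps are small, standing in for the morphological-connectedness and SNR-distribution cues. All thresholds are assumed illustrative values, not the paper's rules.

```python
import numpy as np

def cluster_hits(hits, max_range_gap=1.0, max_doppler_gap=0.5, max_snr_gap=6.0):
    """Toy rule-based clustering of radar hits in the range-Doppler domain.
    hits: array-like of shape (N, 3) with columns [range_m, doppler_mps, snr_db].
    Returns a cluster id per hit."""
    hits = np.asarray(hits, dtype=float)
    n = len(hits)
    cluster_id = -np.ones(n, dtype=int)
    next_id = 0
    for i in range(n):
        if cluster_id[i] != -1:
            continue
        # grow a cluster from hit i with a flood fill over the closeness rules
        stack, cluster_id[i] = [i], next_id
        while stack:
            j = stack.pop()
            close = ((np.abs(hits[:, 0] - hits[j, 0]) <= max_range_gap) &
                     (np.abs(hits[:, 1] - hits[j, 1]) <= max_doppler_gap) &
                     (np.abs(hits[:, 2] - hits[j, 2]) <= max_snr_gap) &
                     (cluster_id == -1))
            for k in np.where(close)[0]:
                cluster_id[k] = next_id
                stack.append(k)
        next_id += 1
    return cluster_id
```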

Sensor System for Autonomous Mobile Robot Capable of Floor-to-floor Self-navigation by Taking On/off an Elevator (엘리베이터를 통한 층간 이동이 가능한 실내 자율주행 로봇용 센서 시스템)

  • Min-ho Lee; Kun-woo Na; Seungoh Han
    • Journal of Sensor Science and Technology / v.32 no.2 / pp.118-123 / 2023
  • This study presents a sensor system for an autonomous mobile robot capable of floor-to-floor self-navigation. The robot was built on the Turtlebot3 hardware platform with ROS2 (Robot Operating System 2) and uses the Navigation2 package to plan and correct its path on a map acquired with SLAM (Simultaneous Localization and Mapping). For elevator boarding, ultrasonic sensor readings are compared against a threshold distance to determine whether the elevator door is open. The elevator's current floor is determined by image processing of a ceiling-fixed camera that captures the elevator's LCD (liquid crystal display)/LED (light emitting diode) display. To realize seamless communication anywhere in the building, a LoRa (long-range) communication module was installed on the robot to support decisions on whether the elevator door is open, when to get off the elevator, and how to reach the destination.
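    The door-state check described in the abstract reduces to comparing ultrasonic range readings against a threshold distance. A minimal sketch is shown below; the threshold value and the debounce count are assumptions, not the paper's parameters.

```python
def elevator_door_open(ultrasonic_readings_m, open_threshold_m=1.5, min_consecutive=5):
    """Door is considered open when the forward ultrasonic range exceeds a
    threshold (the sensor sees into the elevator car rather than the door).
    ultrasonic_readings_m: recent distance readings in metres, newest last."""
    recent = ultrasonic_readings_m[-min_consecutive:]
    # require several consecutive long readings so a single noisy sample
    # does not trigger a false "open" decision
    return len(recent) == min_consecutive and all(d > open_threshold_m for d in recent)

# example: readings jump from ~0.4 m (closed door) to ~2.3 m (open car)
print(elevator_door_open([0.4, 0.4, 2.3, 2.2, 2.4, 2.3, 2.3]))  # True
```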

Object Detection of AGV in Manufacturing Plants using Deep Learning (딥러닝 기반 제조 공장 내 AGV 객체 인식에 대한 연구)

  • Lee, Gil-Won; Lee, Hwally; Cheong, Hee-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.36-43 / 2021
  • In this research, the accuracy of the YOLO v3 algorithm for object detection during AGV (Automated Guided Vehicle) operation was investigated. An AGV with a 2D LiDAR and a stereo camera was prepared and driven along a route mapped with SLAM (Simultaneous Localization and Mapping) using the 2D LiDAR, while objects ahead were detected through the stereo camera. To evaluate the accuracy of YOLO v3, its recall, AP (Average Precision), and mAP (mean Average Precision) were measured as training progressed. Experimental results show that mAP, precision, and recall improve by 10%, 6.8%, and 16.4%, respectively, when YOLO v3, initially trained on 4,000 training images and 500 test images collected through online search, is additionally trained on 1,200 images collected from the stereo camera on the AGV.
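    As a rough illustration of how precision and recall figures of this kind are obtained, the sketch below matches detections to ground-truth boxes by IoU and counts true and false positives. The IoU threshold of 0.5 and the greedy matching are assumptions for illustration, not the paper's evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(detections, ground_truths, iou_thresh=0.5):
    """Per-image precision/recall of the kind underlying AP/mAP figures.
    detections, ground_truths: lists of (x1, y1, x2, y2) boxes."""
    matched, tp = set(), 0
    for det in detections:
        # greedily match each detection to the best unmatched ground-truth box
        best_j, best_iou = None, iou_thresh
        for j, gt in enumerate(ground_truths):
            if j not in matched and iou(det, gt) >= best_iou:
                best_j, best_iou = j, iou(det, gt)
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truths) - len(matched)
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```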

Study on the Shortest Path finding of Engine Room Patrol Robots Using the A* Algorithm (A* 알고리즘을 이용한 기관실 순찰로봇의 최단 경로 탐색에 관한 연구)

  • Kim, Seon-Deok
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.2 / pp.370-376 / 2022
  • Studies related to smart ships are being conducted in various fields owing to the development of technology, and an engine room patrol robot that can patrol an unmanned engine room is one such topic. A patrol robot moves around the engine room based on information learned through artificial intelligence, checks that the machinery is operating normally, and detects abnormalities such as water leakage, oil leakage, and fire. Research on engine room patrol robots has mainly addressed machine detection using artificial intelligence, whereas research on movement and control is insufficient; as a result, even if a patrol robot detects an object, it has no way to move to it. To secure the maneuverability needed to quickly identify abnormalities in the engine room, this study examined whether a patrol robot can find the shortest path by applying the A* algorithm. Data were obtained by driving a small car equipped with LiDAR in a ship engine room, and a map was created from the obtained data with SLAM (Simultaneous Localization and Mapping). The starting point and destination of the patrol robot were set on the map, and the A* algorithm was applied to determine whether the shortest path between them could be found. Simulation confirmed that the shortest route from the starting point to the destination was found while avoiding obstacles on the map. Applying this to an engine room patrol robot is believed to help improve ship safety.
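    As an illustration of the path search described above, the following is a minimal A* sketch over a 2-D occupancy grid of the kind a SLAM map yields. The 4-connected moves, unit step cost, and Manhattan heuristic are assumptions; the paper's exact grid resolution and cost settings are not stated.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on an occupancy grid (0 = free, 1 = obstacle).
    start, goal: (row, col) tuples. Returns the path as a list of cells, or None."""
    def h(cell):                      # admissible heuristic: Manhattan distance
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:      # already expanded with a better cost
            continue
        came_from[current] = parent
        if current == goal:           # reconstruct the path by walking parents
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), current))
    return None  # no path found

# example: 4x4 map with obstacles, path from top-left to bottom-right
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```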

A Study on Implementation of Motion Graphics Virtual Camera with AR Core

  • Jung, Jin-Bum; Lee, Jae-Soo; Lee, Seung-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.85-90 / 2022
  • In this study, to reduce the time and cost of the traditional motion graphics production workflow for reproducing real camera movement with a virtual camera, a method for creating a motion graphics virtual camera from the real-time tracking data of an AR Core-based mobile device is proposed. Instead of running a separate tracking pass over the video file stored after shooting, the proposed method acquires tracking data while shooting on the AR Core-based mobile device, so tracking success can be confirmed at the shooting stage. Experiments showed no difference in the resulting motion graphics compared with the conventional method; however, the conventional tracking step took 6 minutes and 10 seconds for a 300-frame clip, whereas the proposed method omits this step and is therefore far more time-efficient. With growing interest in image production using virtual and augmented reality and various studies underway, this study can be applied to virtual camera creation and match moving.
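    A virtual camera of this kind is driven by per-frame device poses from the tracking data. The sketch below converts one assumed pose sample (position plus quaternion) into the 4x4 transform a motion-graphics camera would consume; the export format is an assumption for illustration, not the paper's specification.

```python
import numpy as np

def pose_to_matrix(position, quaternion):
    """Build a 4x4 camera transform from one tracked pose sample.
    position: (x, y, z); quaternion: (qx, qy, qz, qw), assumed unit-length."""
    x, y, z, w = quaternion
    # rotation matrix from a unit quaternion (Hamilton convention, scalar last)
    rot = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
    mat = np.eye(4)
    mat[:3, :3] = rot
    mat[:3, 3] = position
    return mat

# example: one frame of assumed tracking data (identity rotation, small translation)
print(pose_to_matrix([0.1, 0.0, -0.5], [0.0, 0.0, 0.0, 1.0]))
```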