• Title/Summary/Keyword: ROS (Robot Operating System)

State Estimator and Controller Design of an AR Drone with ROS (ROS를 이용한 드론의 상태 추정과 제어기 설계)

  • Kim, Kwan-Soo; Kang, Hyun-Ho; Lee, Sang-Su; You, Sung-Hyun; Lee, Dhong-Hun; Lee, Dong-Kyu; Kim, Young-Eun; Ahn, Choon-Ki
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.434-437 / 2018
  • This paper introduces ROS (Robot Operating System) and uses it to implement a controller and a filter for a drone. For the drone to show robust performance, a more accurate estimate of the vehicle state is needed. A Kalman filter is designed to more accurately estimate the linear velocity and linear acceleration along each body-frame axis (x, y, z) that the drone outputs, and a PID (Proportional Integral Derivative) controller is designed that takes the Kalman-filtered state variables as its control input. In the experimental part, the controller is combined with an autonomous flight algorithm, and the process by which the drone estimates its own state and executes the algorithm step by step is examined. Finally, mission completion through the algorithm is evaluated, and additional controller design methods and research directions for precise control are proposed.
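
As a rough illustration of the pipeline this abstract describes (a per-axis Kalman filter whose filtered state feeds a PID controller), the Python sketch below filters a noisy body-frame velocity and computes a control command. The constant-velocity model, the gains, and the class names (AxisKalmanFilter, PID) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

class AxisKalmanFilter:
    """1-D constant-velocity Kalman filter: state = [position, velocity]."""
    def __init__(self, dt, q=0.01, r=0.1):
        self.x = np.zeros(2)                        # state estimate
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[0.0, 1.0]])             # we measure velocity only
        self.Q = q * np.eye(2)                      # process noise
        self.R = np.array([[r]])                    # measurement noise

    def update(self, measured_velocity):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct
        y = measured_velocity - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[1]                            # filtered velocity

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def control(self, setpoint, filtered_state):
        error = setpoint - filtered_state
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: filter a noisy x-axis velocity and compute a control command.
kf, pid = AxisKalmanFilter(dt=0.02), PID(kp=1.0, ki=0.1, kd=0.05, dt=0.02)
for raw_velocity in [0.05, 0.12, 0.08, 0.10]:       # raw IMU-derived samples
    cmd = pid.control(setpoint=0.5, filtered_state=kf.update(raw_velocity))
```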

Autonomous Flight System of UAV through Global and Local Path Generation (전역 및 지역 경로 생성을 통한 무인항공기 자율비행 시스템 연구)

  • Ko, Ha-Yoon; Baek, Joong-Hwan; Choi, Hyung-Sik
    • Journal of Aerospace System Engineering / v.13 no.3 / pp.15-22 / 2019
  • In this paper, a global and local flight path system for autonomous flight of a UAV is proposed. The overall system is based on the ROS robot operating system. The UAV's on-board computer detects obstacles with a 2-D LiDAR and generates a real-time local path and a global path based on VFH and a modified RRT*-Smart, respectively. Movement commands based on the generated path are then issued to the UAV flight controller. The ground-station computer receives the obstacle information, generates a 2-D SLAM map, transmits the destination point to the embedded computer, and manages the state of the UAV. The proposed autonomous UAV flight system is verified through a simulator and actual flights.
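
The local planner is described only as VFH-based; the sketch below shows the core VFH idea under assumed parameters (5° sectors, a simple density threshold): build a polar obstacle-density histogram from 2-D range readings and steer toward the free sector closest to the goal heading. It is not the authors' planner.

```python
import numpy as np

def vfh_select_heading(angles_rad, ranges_m, goal_heading_rad,
                       sector_deg=5.0, max_range=5.0, threshold=0.5):
    """Pick a collision-free heading via a VFH-style polar obstacle-density histogram."""
    n_sectors = int(360 / sector_deg)
    histogram = np.zeros(n_sectors)
    for ang, rng in zip(angles_rad, ranges_m):
        if rng >= max_range:
            continue                                # too far to matter
        sector = int(np.degrees(ang) % 360 // sector_deg)
        histogram[sector] += (max_range - rng)      # closer obstacle -> higher density
    free = np.where(histogram < threshold)[0]
    if free.size == 0:
        return None                                 # no free direction: stop / replan
    centers = np.radians(free * sector_deg + sector_deg / 2)
    diff = np.arctan2(np.sin(centers - goal_heading_rad),
                      np.cos(centers - goal_heading_rad))
    return centers[np.argmin(np.abs(diff))]         # free sector closest to the goal

# Example with a synthetic scan: obstacle straight ahead, goal straight ahead.
angles = np.linspace(-np.pi, np.pi, 360)
ranges = np.full(360, 10.0)
ranges[(angles > -0.2) & (angles < 0.2)] = 1.0      # obstacle in front
print(vfh_select_heading(angles, ranges, goal_heading_rad=0.0))
```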

Independent Object based Situation Awareness for Autonomous Driving in On-Road Environment (도로 환경에서 자율주행을 위한 독립 관찰자 기반 주행 상황 인지 방법)

  • Noh, Samyeul; Han, Woo-Yong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.2 / pp.87-94 / 2015
  • This paper proposes a situation awareness method based on data fusion and independent objects for autonomous driving in an on-road environment. The proposed method, designed to achieve an accurate analysis of driving situations, executes preprocessing tasks (coordinate transformations, data filtering, and data fusion) followed by an independent-object-based situation assessment that evaluates the collision risk of the driving situation and calculates a desired velocity. The method was implemented in the open-source Robot Operating System (ROS) and tested on a closed road with other vehicles; it performed successfully in several scenarios similar to a real road environment.
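
To make the situation-assessment step concrete, the following minimal sketch scores each fused object independently by time-to-collision and derives a desired velocity from the most critical one. The risk rule, thresholds, and field names are illustrative assumptions, not the method from the paper.

```python
def assess_and_plan(ego_speed, objects, v_desired_max=15.0, ttc_safe=3.0):
    """Toy independent-object situation assessment: one risk score per object,
    then a desired velocity limited by the most critical object."""
    v_desired = v_desired_max
    for obj in objects:                      # each dict: relative range [m], closing speed [m/s]
        closing = obj["closing_speed"]
        if closing <= 0.0:
            continue                         # object not getting closer -> no risk from it
        ttc = obj["range"] / closing         # time-to-collision for this object alone
        if ttc < ttc_safe:
            # Scale down the desired speed in proportion to how short the TTC is.
            v_desired = min(v_desired, ego_speed * ttc / ttc_safe)
    return max(v_desired, 0.0)

# Example: one car 20 m ahead closing at 10 m/s forces a slowdown.
print(assess_and_plan(ego_speed=12.0,
                      objects=[{"range": 20.0, "closing_speed": 10.0}]))
```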

A Study of Object Tracking Drones Combining Image Processing and Distance Sensor (영상처리와 거리센서를 융합한 객체 추적용 드론의 연구)

  • Yang, Woo-Seok; Chun, Myung-Hyun; Jang, Gun-Woo; Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.961-964 / 2017
  • With the popularization of drones, the increased risk of accidents has raised the need for research on safe piloting methods. Autonomous flight control technology that does not depend on the pilot's skill is therefore required, and to implement it more reliably, the Robot Operating System (ROS), which has attracted attention as a software platform for autonomous driving, is used. Based on ROS, a Laser Range Finder (LRF) and a particle filter are used to implement a stable autonomous flight control system that can track an object autonomously and fly while intelligently avoiding obstacles.
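
The abstract names an LRF plus a particle filter for object tracking; a minimal single-step particle filter is sketched below, assuming a random-walk motion model and a Gaussian likelihood around one detected target point. The particle count and noise values are placeholders, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, lrf_measurement,
                         motion_std=0.05, meas_std=0.2):
    """One predict/update/resample cycle for tracking a target's 2-D position
    from a single range-finder (x, y) detection."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight by Gaussian likelihood of the measured target point.
    dist = np.linalg.norm(particles - lrf_measurement, axis=1)
    weights = weights * np.exp(-0.5 * (dist / meas_std) ** 2)
    weights = weights / np.sum(weights)
    # Resample: simple multinomial resampling.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights, particles.mean(axis=0)

# Example: 500 particles around the origin, one detection at (1.0, 0.5).
particles = rng.normal(0.0, 1.0, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights, est = particle_filter_step(particles, weights,
                                               np.array([1.0, 0.5]))
print(est)
```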

Box Feature Estimation from LiDAR Point Cluster using Maximum Likelihood Method (최대우도법을 이용한 라이다 포인트군집의 박스특징 추정)

  • Kim, Jongho; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.123-128 / 2021
  • This paper presents box feature estimation from a LiDAR point cluster using the maximum likelihood method. Previous LiDAR tracking methods for autonomous driving show high accuracy for the velocity and heading of a point cluster; however, taking the average position of the point cluster as the vehicle position yields lower accuracy relative to ground truth. Therefore, the box feature estimation algorithm, which improves the position accuracy of autonomous driving perception, consists of two procedures. First, the proposed algorithm calculates vehicle candidate positions based on the relative position of the point cluster. Second, to reflect the features of the point cluster in the estimation, the likelihood of particles scattered around the candidate positions is used. The proposed estimation method has been implemented in a Robot Operating System (ROS) environment and investigated via simulation and an actual vehicle test. The test results show that the proposed cluster position estimation enhances perception and path-planning performance in autonomous driving.
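
A hedged sketch of the two-step idea described here: scatter candidate box centers (particles) around the cluster, score each by the likelihood that the cluster points lie on the box boundary, and keep the maximum-likelihood candidate. The box dimensions, particle spread, and axis-aligned simplification are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_box_center(points, box_len=4.5, box_wid=1.8, heading=0.0,
                  n_particles=200, spread=1.0, sigma=0.2):
    """Pick the box-center particle that maximizes the likelihood of the
    observed cluster points lying on the box boundary (2-D)."""
    c, s = np.cos(-heading), np.sin(-heading)
    R = np.array([[c, -s], [s, c]])
    candidates = points.mean(axis=0) + rng.normal(0.0, spread, (n_particles, 2))
    best_center, best_loglik = None, -np.inf
    for center in candidates:
        local = (points - center) @ R.T             # points in the box frame
        # Distance of each point to the nearest box edge.
        dx = np.abs(np.abs(local[:, 0]) - box_len / 2)
        dy = np.abs(np.abs(local[:, 1]) - box_wid / 2)
        d = np.minimum(dx, dy)
        loglik = np.sum(-0.5 * (d / sigma) ** 2)    # Gaussian edge-fit likelihood
        if loglik > best_loglik:
            best_center, best_loglik = center, loglik
    return best_center

# Example: points sampled from the front and left edges of a box centered at (10, 2).
front = np.column_stack([np.full(20, 10 + 2.25), np.linspace(1.1, 2.9, 20)])
left = np.column_stack([np.linspace(7.75, 12.25, 20), np.full(20, 2.9)])
print(ml_box_center(np.vstack([front, left])))
```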

LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving (도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘)

  • Kim, Jongho; Lee, Hojoon; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.14-19 / 2021
  • This paper presents a LiDAR static obstacle map based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of the ego motion and of perceived objects. Therefore, in situations where noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement of the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian-rule-based static obstacle map is constructed from the continuous LiDAR point cloud input. Second, the vehicle odometry over each time interval is calculated by matching against the static obstacle map using the Normal Distributions Transform (NDT) method, and the velocity and yaw rate of the vehicle are estimated with an Extended Kalman Filter (EKF) that uses this odometry as the measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and verified with data obtained from actual driving on urban roads. The test results show more robust and accurate dynamic state estimation when there is a bias in the chassis IMU sensor.
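
The first part of the algorithm, a Bayesian-rule static obstacle map, can be illustrated with a standard log-odds occupancy grid as sketched below; the grid size, resolution, and log-odds increments are assumed values, and the NDT matching and EKF stages are omitted.

```python
import numpy as np

class StaticObstacleMap:
    """Log-odds occupancy grid updated with Bayes' rule from LiDAR hits;
    cells that stay occupied across scans form the static obstacle map."""
    def __init__(self, size=200, resolution=0.5, l_hit=0.85, l_miss=-0.4):
        self.grid = np.zeros((size, size))           # log-odds, 0 = unknown
        self.resolution = resolution
        self.origin = size // 2
        self.l_hit, self.l_miss = l_hit, l_miss

    def _cell(self, xy):
        idx = (np.asarray(xy) / self.resolution).astype(int) + self.origin
        return tuple(np.clip(idx, 0, self.grid.shape[0] - 1))

    def update(self, hit_points, free_points):
        for p in hit_points:                          # endpoints of LiDAR rays
            self.grid[self._cell(p)] += self.l_hit
        for p in free_points:                         # cells traversed by rays
            self.grid[self._cell(p)] += self.l_miss
        np.clip(self.grid, -5.0, 5.0, out=self.grid)

    def occupancy(self, xy):
        return 1.0 / (1.0 + np.exp(-self.grid[self._cell(xy)]))

# Example: the same wall point seen in three consecutive scans becomes confident.
m = StaticObstacleMap()
for _ in range(3):
    m.update(hit_points=[(10.0, 0.0)], free_points=[(5.0, 0.0)])
print(m.occupancy((10.0, 0.0)), m.occupancy((5.0, 0.0)))
```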

A Research on V2I-based Accident Prevention System for the Prevention of Unexpected Accident of Autonomous Vehicle (자율주행 차량의 돌발사고 방지를 위한 V2I 기반의 사고 방지체계 연구)

  • Han, SangYong; Kim, Myeong-jun; Kang, Dongwan; Baek, Sunwoo; Shin, Hee-seok; Kim, Jungha
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.3 / pp.86-99 / 2021
  • This research proposes an accident prevention system that uses V2I communication to prevent collisions caused by blind spots at locations such as intersections or school zones. Vision and LiDAR sensors installed in the infrastructure at such locations recognize objects and warn vehicles at risk of an accident in advance. A deep-learning-based YOLOv4 detector recognizes objects entering the intersection, and the Manhattan distance obtained from the LiDAR sensor is used to calculate the expected collision time and the braking-distance weight needed to secure a safe distance. The V2I link uses ROS (Robot Operating System) communication to convey various information to the vehicle, including the class, distance, and entry speed of the detected objects, in addition to the collision warning, so that accidents can be prevented in advance.
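
A minimal rospy sketch of the V2I message flow described here: compute a Manhattan-distance-based expected collision time and publish the class, distance, speed, and warning flag. The topic name /v2i/warning and the JSON-over-std_msgs/String encoding are assumptions for illustration, not the paper's interface.

```python
import json
import rospy
from std_msgs.msg import String

def manhattan_ttc(obj_xy, obj_speed_mps):
    """Expected time to reach the conflict point, using the Manhattan distance
    from the infrastructure sensor origin."""
    d = abs(obj_xy[0]) + abs(obj_xy[1])
    return float("inf") if obj_speed_mps <= 0 else d / obj_speed_mps

def publish_warning(pub, obj_class, obj_xy, obj_speed_mps, ttc_warn_s=3.0):
    ttc = manhattan_ttc(obj_xy, obj_speed_mps)
    msg = {                                        # fields named in the abstract:
        "class": obj_class,                        # object class from the detector
        "distance": abs(obj_xy[0]) + abs(obj_xy[1]),
        "speed": obj_speed_mps,
        "collision_warning": ttc < ttc_warn_s,
        "ttc": ttc,
    }
    pub.publish(String(data=json.dumps(msg)))

if __name__ == "__main__":
    rospy.init_node("v2i_accident_prevention")
    pub = rospy.Publisher("/v2i/warning", String, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        # A pedestrian detected 12 m (Manhattan) away, approaching at 5 m/s.
        publish_warning(pub, "pedestrian", (8.0, 4.0), 5.0)
        rate.sleep()
```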

Vehicle Reference Dynamics Estimation by Speed and Heading Information Sensed from a Distant Point

  • Yun, Jeonghyeon; Kim, Gyeongmin; Cho, Minhyoung; Park, Byungwoon; Seo, Howon; Kim, Jinsung
    • Journal of Positioning, Navigation, and Timing / v.11 no.3 / pp.209-215 / 2022
  • As the development of intelligent autonomous vehicles has become a major topic around the world, accurate estimation of the vehicle's reference dynamics has become more important than before. Current systems generally use speed and heading information sensed at a point distant from the reference point as the vehicle reference dynamics; however, the dynamics at different points are not the same, especially during rotating motions. In order to properly estimate the reference dynamics at a reference point such as the center of gravity from the velocity and heading sensed at a distant point, this study proposes estimating the reference dynamics at any location on the vehicle by combining the bicycle and Ackermann models. A test system was constructed by implementing multiple GNSS/INS devices on the Robot Operating System (ROS) and an actual car. Angle and speed errors of 10° and 0.2 m/s were reduced to 0.2° and 0.06 m/s after applying the suggested method.
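
The core of moving a velocity measured at a distant sensing point to a reference point such as the center of gravity is the planar rigid-body transfer v_ref = v_sensor + ω × r; a small sketch is below. The full bicycle-plus-Ackermann combination in the paper is not reproduced, and the lever-arm example values are assumed.

```python
import numpy as np

def transfer_velocity_to_reference(v_sensor_body, yaw_rate, lever_arm_body):
    """Rigid-body transfer of a planar velocity measured at a distant point to
    the reference point:  v_ref = v_sensor + omega x r,
    where r is the vector from the sensor to the reference point (body frame)."""
    omega_cross_r = yaw_rate * np.array([-lever_arm_body[1], lever_arm_body[0]])
    return np.asarray(v_sensor_body) + omega_cross_r

# Example: GNSS antenna 1.5 m behind the CG, vehicle turning at 0.3 rad/s.
v_antenna = np.array([10.0, 0.0])          # m/s in the body frame at the antenna
r = np.array([1.5, 0.0])                   # antenna-to-CG vector in the body frame
print(transfer_velocity_to_reference(v_antenna, yaw_rate=0.3, lever_arm_body=r))
```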

LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving (도심 자율주행을 위한 라이다 정지 장애물 지도 기반 위치 보정 알고리즘)

  • Noh, Hanseok; Lee, Hyunsung; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.39-44 / 2022
  • This paper presents a LiDAR static obstacle map based vehicle position correction algorithm for urban autonomous driving. Real-Time Kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but in urban systems RTK GPS has trouble in shaded areas. Therefore, this paper presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS based on an Extended Kalman Filter (EKF). A static obstacle map (STOM) is constructed only from static objects based on Bayes' rule. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; the STORM is constructed by accumulating and voxelizing the static obstacle map. The algorithm consists of three main processes. First, sensor data is acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data is used to define the initial point. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the Normal Distributions Transform (NDT) method, and the position of the host vehicle is corrected based on the EKF. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than raw LiDAR point cloud matching in autonomous driving.
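
As a small illustration of how a static obstacle reference map (STORM) can be constructed by "accumulating and voxelizing" static-obstacle point clouds, the sketch below deduplicates accumulated points per voxel. The voxel size and the one-point-per-voxel representation are assumptions; the NDT matching and EKF correction steps are not shown.

```python
import numpy as np

def build_storm(stom_point_clouds, voxel_size=0.5):
    """Accumulate several static-obstacle-map point clouds and voxelize them
    into one reference map: one representative point per occupied voxel."""
    points = np.vstack(stom_point_clouds)                  # accumulate all scans
    keys = np.floor(points / voxel_size).astype(np.int64)  # voxel index of each point
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    # Snap the surviving points to voxel centers for a uniform reference map.
    return (keys[first_idx] + 0.5) * voxel_size

# Example: two overlapping scans collapse into a sparse, deduplicated map.
scan_a = np.array([[10.1, 2.0, 0.3], [10.2, 2.1, 0.3], [30.0, -4.0, 0.5]])
scan_b = np.array([[10.15, 2.05, 0.31], [55.0, 1.0, 0.4]])
print(build_storm([scan_a, scan_b]))
```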

ROV Manipulation from Observation and Exploration using Deep Reinforcement Learning

  • Jadhav, Yashashree Rajendra; Moon, Yong Seon
    • Journal of Advanced Research in Ocean Engineering / v.3 no.3 / pp.136-148 / 2017
  • This paper presents dual-arm ROV manipulation using deep reinforcement learning. The purpose of this underwater manipulator is to investigate and excavate natural resources in the ocean, find lost aircraft black boxes, and perform other extremely dangerous tasks without endangering humans. The work emphasizes a self-learning approach using Deep Reinforcement Learning (DRL), which allows the ROV to learn a policy for performing the manipulation task directly from raw image data. The proposed architecture maps visual inputs (images) to control actions (outputs) and receives a reward after each action, allowing the agent to learn the manipulation skill by trial and error. The network was trained in simulation; the raw images and rewards are provided directly by a simple Lua simulator that accounts for dynamic underwater environmental conditions. The major goal of this research is to provide a smart, self-learning way to achieve manipulation in a highly dynamic underwater environment. The results show that a dual robotic arm trained for 3-DOF movement successfully achieved a target-reaching task in 2-D space while considering real environmental factors.
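
A toy PyTorch sketch of the image-to-action mapping this abstract describes: a small convolutional network produces one value per discrete action from a raw frame, and an epsilon-greedy rule picks the action. The network sizes, the 84×84 input, and the six-action space are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ImageToActionPolicy(nn.Module):
    """Tiny convolutional Q-network: raw image in, one Q-value per discrete
    action out, in the style of a DQN-type learner."""
    def __init__(self, n_actions=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, image):
        return self.head(self.features(image))

# Epsilon-greedy action selection on a simulated 84x84 RGB frame.
policy = ImageToActionPolicy()
frame = torch.rand(1, 3, 84, 84)
with torch.no_grad():
    q_values = policy(frame)
action = int(q_values.argmax(dim=1)) if torch.rand(1) > 0.1 else int(torch.randint(0, 6, (1,)))
```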