• Title/Summary/Keyword: Turtlebot3

Search results: 8

How to fix errors in ROS installation and control for TurtleBot 3 (터틀봇3를 위한 ROS 설치 및 제어의 오류 해결 방법)

  • Park, Tae-Whan;Lee, Kang-Hee
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.331-334 / 2020
  • To control the TurtleBot3, ROS (Robot Operating System) is installed on both a PC and the TurtleBot3, and the robot is controlled through it. The TurtleBot3 is an open-source robot driven by a Raspberry Pi 3 board. Although it is a well-known robot for education and research worldwide, some users experience various errors during installation and control. For first-time TurtleBot3 users, this paper covers the installation process and the errors that arise during it.


Cloud Based Simultaneous Localization and Mapping with Turtlebot3 (Turtlebot3을 사용한 클라우드 기반 동시 로컬라이제이션 및 매핑)

  • Ahmed, Hamdi A.;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.241-243 / 2018
  • In simultaneous localization and mapping (SLAM), a robot acquires a map of its environment while simultaneously localizing itself relative to that map. Cloud-based SLAM optimizes resources and enables data sharing, such as publishing the environment map as a shared map available online. Consequently, unless significant changes are made to the environment, a new mobile robot added to it does not need to rebuild the environmental map. As a result, the requirement for additional sensors is curtailed.


Design and Implementation of Finger Direction Detection Algorithm in YOLO Environment (YOLO 환경에서 손가락 방향감지 알고리즘 설계 및 구현)

  • Lee, Cheol Min;Thar, Min Htet;Lee, Dong Myung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.28-30 / 2021
  • In this paper, an algorithm that detects the user's finger direction using the YOLO (You Only Look Once) library was proposed. The processing stage of the proposed finger direction detection algorithm consists of a learning data management stage, a data learning stage, and a finger direction detection stage. As a result of the experiment, it was found that the distance between the camera and the finger had a very large influence on the accuracy of detecting the direction of the finger. We plan to apply this function to Turtlebot3 after improving the accuracy and reliability of the proposed algorithm in the future.

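The abstract above describes a pipeline ending in a direction-detection stage. As a rough illustration of how a direction label could be derived once a detector such as YOLO has localized the palm and fingertip, here is a minimal sketch; the function name, the use of box-center coordinates, and the four-way angular split are our own assumptions, not the paper's implementation:

```python
import math

def finger_direction(palm_xy, tip_xy):
    """Classify pointing direction from the palm center to the fingertip.

    palm_xy, tip_xy: (x, y) pixel coordinates, e.g. the centers of two
    detected bounding boxes.  Image coordinates grow downward, so a
    smaller y means "up" on screen.
    """
    dx = tip_xy[0] - palm_xy[0]
    dy = tip_xy[1] - palm_xy[1]
    # Negate dy to convert screen coordinates to mathematical coordinates.
    angle = math.degrees(math.atan2(-dy, dx))
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "up"
    if -135 <= angle < -45:
        return "down"
    return "left"
```

The abstract's observation that camera-to-finger distance affects accuracy fits this picture: at long range the palm and fingertip boxes shrink and their centers converge, making the angle estimate noisy.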

ROS-based Uncertain Environment Map-Building Test (ROS 기반 불안정한 환경 맵 빌딩 테스트)

  • Park, Tae-Whan;Lee, Kang-Hee
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.335-338 / 2020
  • Map-building tests are usually conducted after establishing a stable environment. In this paper, map building is tested in an unstable environment rather than an artificially stabilized one. A TurtleBot3 Burger is used for the test, and the map is built with the TurtleBot3's LiDAR sensor. The TurtleBot3 is controlled by a Raspberry Pi, and ROS is used for both map building and robot control. The TurtleBot3 builds the map while communicating over the network with a computer running Ubuntu and ROS. We observed both correct and incorrect map-building behavior in the unstable environment, and we suggest directions for future improvement.


Development of autonomous patrol robot using SLAM and LiDAR (SLAM알고리즘과 LiDAR를 이용한 자율주행 로봇 개발)

  • Yun, Tae-Jin;Kim, Min-Gu;Kim, Min;Mun, Dong-Ho;Lee, Sang-Hak
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.289-290 / 2020
  • In this paper, we develop a robot capable of autonomous patrol driving by implementing a SLAM algorithm on ROS using a TurtleBot3 Burger, OpenCV on a Raspberry Pi, and an OpenCR board. In particular, OpenCV-based face recognition with the Raspberry Pi camera lets the robot provide patrol information from the camera while on patrol. The LiDAR mounted on the robot maps the surrounding environment with the SLAM algorithm so that the robot can search for paths that avoid obstacles. With these technologies, the robot, instead of a person, photographs intruders in a guarded area, and the remotely controllable system is intended for use in robot control across various fields.


Sensor System for Autonomous Mobile Robot Capable of Floor-to-floor Self-navigation by Taking On/off an Elevator (엘리베이터를 통한 층간 이동이 가능한 실내 자율주행 로봇용 센서 시스템)

  • Min-ho Lee;Kun-woo Na;Seungoh Han
    • Journal of Sensor Science and Technology / v.32 no.2 / pp.118-123 / 2023
  • This study presents a sensor system for an autonomous mobile robot capable of floor-to-floor self-navigation. The robot was built by modifying the TurtleBot3 hardware platform with ROS2 (Robot Operating System 2). It acquires a map with SLAM (simultaneous localization and mapping) and uses the Navigation2 package to estimate and calibrate its moving path. For elevator boarding, ultrasonic sensor data are compared against a threshold distance to determine whether the elevator door is open. The elevator's current floor is determined by image processing of a ceiling-fixed camera capturing the elevator's LCD (liquid crystal display)/LED (light emitting diode) display. To realize seamless communication at any spot in the building, a LoRa (long-range) communication module was installed on the self-navigating mobile robot to support deciding whether the elevator door is open, when to get off the elevator, and how to reach the destination.
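The door-state decision described above reduces to comparing ultrasonic distance readings against a threshold. A minimal sketch of that idea, with the function name, threshold value, and averaging window chosen by us rather than taken from the paper:

```python
def door_is_open(distances_cm, threshold_cm=150.0, window=5):
    """Decide whether the elevator door is open from ultrasonic readings.

    With the door closed, the sensor sees the door panel at short range;
    once it opens, the measured distance jumps to the far wall of the
    elevator car.  Averaging the last `window` samples smooths out
    spurious echoes before comparing against the threshold.
    """
    recent = distances_cm[-window:]
    return sum(recent) / len(recent) > threshold_cm
```

In a ROS2 node this check would typically run in the subscription callback for the ultrasonic range topic, with the threshold tuned to the geometry of the actual elevator.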

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.41-43 / 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in both the robotics industry and academia. However, the framework has long focused on, and been limited to, robot navigation and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on implementing upgraded vision with the help of a depth camera, which provides high-quality data for a much-enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. In this particular case, the system uses OpenCV libraries to manipulate the camera data and provide face-detection capability to the robot while it navigates an indoor environment. The whole system has been implemented and tested on the latest TurtleBot3 and Raspberry Pi 4.


Direction Relation Representation and Reasoning for Indoor Service Robots (실내 서비스 로봇을 위한 방향 관계 표현과 추론)

  • Lee, Seokjun;Kim, Jonghoon;Kim, Incheol
    • Journal of KIISE / v.45 no.3 / pp.211-223 / 2018
  • In this paper, we propose a robot-centered direction relation representation and the relevant reasoning methods for indoor service robots. Many conventional works on qualitative spatial reasoning, when deciding the relative direction relation of the target object, are based on the use of position information only. These reasoning methods may infer an incorrect direction relation of the target object relative to the robot, since they do not take into consideration the heading direction of the robot itself as the base object. In this paper, we present a robot-centered direction relation representation and the reasoning methods. When deciding the relative directional relationship of target objects based on the robot in an indoor environment, the proposed methods make use of the orientation information as well as the position information of the robot. The robot-centered reasoning methods are implemented by extending the existing cone-based, matrix-based, and hybrid methods which utilized only the position information of two objects. In various experiments with both the physical Turtlebot and the simulated one, the proposed representation and reasoning methods displayed their high performance and applicability.
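The core idea above is that a direction relation must be computed relative to the robot's heading, not just from positions. A minimal sketch of a robot-centered cone-based classifier in that spirit; the function name and the four 90° cones are our own illustrative assumptions, and the paper's actual cone-based, matrix-based, and hybrid methods are more elaborate:

```python
import math

def direction_relation(robot_pose, target_xy):
    """Robot-centered cone-based direction relation.

    robot_pose: (x, y, heading_rad); target_xy: (x, y).
    The plane around the robot is split into four 90-degree cones
    measured from the robot's heading, so the same target can be
    "front" or "left" depending on which way the robot is facing.
    """
    rx, ry, heading = robot_pose
    bearing = math.atan2(target_xy[1] - ry, target_xy[0] - rx)
    # Wrap the heading-relative bearing into [-pi, pi).
    rel = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    deg = math.degrees(rel)
    if -45 <= deg < 45:
        return "front"
    if 45 <= deg < 135:
        return "left"
    if -135 <= deg < -45:
        return "right"
    return "back"
```

This makes the paper's point concrete: a position-only method would always label the target at (1, 0) the same way, whereas here the label changes with the robot's orientation.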