• Title/Summary/Keyword: robot based learning


Behavior-based Learning Controller for Mobile Robot using Topological Map (Topological Map을 이용한 이동로봇의 행위기반 학습제어기)

  • Yi, Seok-Joo;Moon, Jung-Hyun;Han, Shin;Cho, Young-Jo;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.2834-2836 / 2000
  • This paper introduces a behavior-based learning controller for a mobile robot using a topological map. When navigating to the goal position, the robot uses the given topological map together with its current location. While navigating in an unknown environment, the robot classifies its situation using ultrasonic sensor data, computes each behavior's motor schema multiplied by its respective gain, and then takes an action according to the vector sum of all motor schemas. After each action, the robot's location within the topological map is fed to the learning module, which adapts the weights of a neural network that learns the gains. Simulation results show that the robot reaches the goal position successfully after iterative gain learning with topological information.
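As a rough illustration of the gain-weighted motor-schema combination described in this abstract, the sketch below sums a few hypothetical behavior vectors, each scaled by a learned gain. The specific behaviors, the two-dimensional action space, and the fixed gain values are illustrative assumptions; the neural network that adapts the gains from topological-map feedback is only indicated in a comment.

```python
import numpy as np

def motor_schemas(obstacle_vec, goal_vec):
    """Return one desired-motion vector per (hypothetical) behavior."""
    avoid = -obstacle_vec / (np.linalg.norm(obstacle_vec) + 1e-6)  # push away from obstacles
    to_goal = goal_vec / (np.linalg.norm(goal_vec) + 1e-6)         # pull toward the goal
    wander = np.random.uniform(-1, 1, size=2)                      # small exploratory term
    return np.stack([avoid, to_goal, wander])

def combine(schemas, gains):
    """Action = vector sum of all motor schemas, each scaled by its gain."""
    return (gains[:, None] * schemas).sum(axis=0)

# In the paper, gains come from a neural network whose weights are adapted
# after each action using the robot's location in the topological map.
gains = np.array([0.8, 1.0, 0.1])
action = combine(motor_schemas(np.array([0.2, 0.9]), np.array([3.0, 1.0])), gains)
print(action)
```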

Development of Humanoid Robot HUMIC and Reinforcement Learning-based Robot Behavior Intelligence using Gazebo Simulator (휴머노이드 로봇 HUMIC 개발 및 Gazebo 시뮬레이터를 이용한 강화학습 기반 로봇 행동 지능 연구)

  • Kim, Young-Gi;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.260-269 / 2021
  • Verifying performance or conducting experiments on actual robots incurs substantial costs in robot hardware, experimental space, and time, so a simulation environment is an essential tool in robotics research. In this paper, we develop the HUMIC simulator using ROS and Gazebo. HUMIC is a humanoid robot developed by HCIR Lab. for human-robot interaction; its upper body is human-like, with a head, torso, waist, arms, and hands. Gazebo is an open-source three-dimensional robot simulator that simulates robots accurately and efficiently in indoor and outdoor environments. We develop a GUI so that users can easily run and manipulate the HUMIC simulator, and we release both the simulator and the GUI for other robotics researchers to use. We successfully test the simulator on object detection and reinforcement learning-based navigation tasks. As further study, we plan to develop robot behavior intelligence based on reinforcement learning algorithms using the simulator and then apply it to the real robot.
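The abstract mentions reinforcement learning-based navigation in the Gazebo simulation. As a rough illustration of how such a task is usually driven from ROS, the sketch below wraps one velocity command plus an odometry readback as a single environment step. The node name and the /cmd_vel and /odom topics are generic placeholders and assumptions, not the HUMIC simulator's actual interface; running it requires a ROS master and a simulated robot.

```python
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

rospy.init_node('humic_nav_sketch')                     # hypothetical node name
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

def step(linear, angular, duration=0.5):
    """Apply one base velocity command, wait, then read back the robot pose."""
    msg = Twist()
    msg.linear.x, msg.angular.z = linear, angular
    cmd_pub.publish(msg)
    rospy.sleep(duration)
    odom = rospy.wait_for_message('/odom', Odometry)    # assumed odometry topic
    return odom.pose.pose                               # pose used to compute reward/state

pose = step(0.2, 0.0)
```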

Implementation of an Intelligent Learning Controller for Gait Control of Biped Walking Robot (이족보행로봇의 걸음새 제어를 위한 지능형 학습 제어기의 구현)

  • Lim, Dong-Cheol;Kuc, Tae-Yong
    • The Transactions of the Korean Institute of Electrical Engineers P / v.59 no.1 / pp.29-34 / 2010
  • This paper presents an intelligent learning controller for the repetitive walking motion of a biped walking robot. The proposed controller consists of an iterative learning controller and a direct learning controller. In the iterative learning controller, a PID feedback controller stabilizes the learning control system while a feedforward learning controller compensates for the nonlinearity of the uncertain biped walking robot. In the direct learning controller, the desired learning input for new joint trajectories whose time scales differ from the learned ones is generated directly from the input profiles obtained through the iterative learning process. The effectiveness and tracking performance of the proposed controller for biped robotic motion are shown by mathematical analysis and computer simulation with a 12-DOF biped walking robot.
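A minimal sketch of the iterative learning structure described above: PID feedback stabilizes each walking trial while a feedforward profile is updated between trials from the previous trial's tracking error. The single-joint first-order plant, the gains, and the trajectory are illustrative assumptions, and the direct learning (time-rescaling) part is omitted.

```python
import numpy as np

def pid(e, e_sum, e_prev, dt, kp=20.0, ki=1.0, kd=0.5):
    return kp * e + ki * e_sum + kd * (e - e_prev) / dt

def run_trial(q_des, u_ff, dt=0.01):
    """Simulate one repetition of a toy first-order joint model."""
    q, e_sum, e_prev = 0.0, 0.0, 0.0
    err = np.zeros_like(q_des)
    for k, qd in enumerate(q_des):
        e = qd - q
        e_sum += e * dt
        u = u_ff[k] + pid(e, e_sum, e_prev, dt)   # feedforward + PID feedback
        q += dt * (-2.0 * q + u)                  # stand-in joint dynamics
        err[k], e_prev = e, e
    return err

t = np.linspace(0, 1, 100)
q_des = 0.5 * np.sin(2 * np.pi * t)               # desired joint trajectory
u_ff = np.zeros_like(q_des)
for _ in range(20):                                # repeated walking trials
    err = run_trial(q_des, u_ff)
    u_ff[:-1] += 5.0 * err[1:]                     # P-type ILC update with shifted error
```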

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction (효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술)

  • Park, Dongkeon;Kang, Kyeong-Min;Bae, Jin-Woo;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.22-30 / 2019
  • For effective human-robot interaction, a robot needs not only to understand the current situation well but also to convey its understanding to the human participant efficiently. The most convenient way to deliver the robot's understanding is for the robot to express it in voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has advanced rapidly, especially through deep learning. This paper therefore proposes a deep learning-based method for turning robot vision into spoken audio descriptions. The applied model is a pipeline of two deep learning models: one generates a natural language sentence from robot vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
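The two-stage pipeline in this abstract (vision to sentence, then sentence to speech) can be outlined as below. CaptionNet and tts are hypothetical stand-ins, not the models actually used in the paper.

```python
class CaptionNet:
    """Stand-in for a trained vision-to-sentence captioning model."""
    def describe(self, frame) -> str:
        return "a person is handing a cup to the robot"   # placeholder output

def tts(sentence: str) -> bytes:
    """Stand-in for a speech synthesizer returning raw audio bytes."""
    return sentence.encode("utf-8")

def vision_to_audio(frame, captioner: CaptionNet) -> bytes:
    sentence = captioner.describe(frame)   # stage 1: robot vision -> natural language
    return tts(sentence)                   # stage 2: natural language -> voice

audio = vision_to_audio(frame=None, captioner=CaptionNet())
```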

A study on Indirect Adaptive Decentralized Learning Control of the Vertical Multiple Dynamic System

  • Lee, Soo-Cheol;Park, Seok-Sun;Lee, Jeh-Won
    • International Journal of Precision Engineering and Manufacturing / v.7 no.1 / pp.62-66 / 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing that specific task. In previous work, the authors presented the iterative precision of linear decentralized learning control based on a P-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots performing repetitive tasks such as assembly line work. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual decentralized robot controllers, which treat each link as if it were independent of any coupling with the other links. These techniques are demonstrated in numerical simulation of a vertical dynamic robot, showing the iterative precision achieved for each link.
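A minimal sketch of the decentralized idea above: each link keeps its own learned feedforward input and is updated from its own tracking error only, while the simulated plant contains a small coupling term that the per-link controllers deliberately ignore. The two-link model, gains, and trajectories are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def simulate(u_ff, q_des, dt=0.01):
    """Toy two-link system with weak cross-coupling the controllers ignore."""
    n_steps, n_links = q_des.shape
    q = np.zeros(n_links)
    err = np.zeros_like(q_des)
    for k in range(n_steps):
        e = q_des[k] - q
        u = u_ff[k] + 15.0 * e                    # independent P feedback per link
        coupling = 0.1 * q[::-1]                  # unmodeled interaction between links
        q = q + dt * (-1.5 * q + u + coupling)
        err[k] = e
    return err

t = np.linspace(0, 1, 200)
q_des = np.stack([np.sin(2 * np.pi * t), 0.5 * np.cos(2 * np.pi * t)], axis=1)
u_ff = np.zeros_like(q_des)
for _ in range(30):                               # repetition over trials
    err = simulate(u_ff, q_des)
    u_ff[:-1] += 3.0 * err[1:]                    # per-link learning update, no cross terms
```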

Use of Learning Based Neuro-fuzzy System for Flexible Walking of Biped Humanoid Robot (이족 휴머노이드 로봇의 유연한 보행을 위한 학습기반 뉴로-퍼지시스템의 응용)

  • Kim, Dong-Won;Kang, Tae-Gu;Hwang, Sang-Hyun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.10c / pp.539-541 / 2006
  • Biped locomotion is a popular research area in robotics due to the high adaptability of a walking robot in an unstructured environment. When attempting to automate the motion planning process for a biped walking robot, one of the main issues is assurance of the dynamic stability of motion. This can be categorized into three general groups: body stability, body path stability, and gait stability. The zero moment point (ZMP), the point at which the total forces and moments acting on the robot are zero, is usually employed as a basic component of dynamically stable motion. In this paper, learning-based neuro-fuzzy systems are developed and applied to model the ZMP trajectory of a biped walking robot. As a result, we can provide improved insight into physical walking mechanisms.
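For reference, the sketch below computes the sagittal ZMP of a set of point-mass links from the standard moment balance; in the paper, a learning-based neuro-fuzzy system is trained to model this trajectory rather than computing it analytically. The masses, positions, and accelerations are illustrative values only.

```python
import numpy as np

def zmp_x(m, x, z, ax, az, g=9.81):
    """Sagittal ZMP for point-mass links (the same formula applies to the y axis)."""
    num = np.sum(m * (az + g) * x) - np.sum(m * ax * z)
    den = np.sum(m * (az + g))
    return num / den

m  = np.array([3.0, 2.0, 1.0])          # link masses [kg]
x  = np.array([0.02, 0.05, 0.10])       # horizontal positions [m]
z  = np.array([0.30, 0.60, 0.90])       # heights [m]
ax = np.array([0.1, 0.3, 0.5])          # horizontal accelerations [m/s^2]
az = np.array([0.0, 0.05, 0.1])         # vertical accelerations [m/s^2]
print(zmp_x(m, x, z, ax, az))
```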

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society / v.15 no.4 / pp.404-412 / 2020
  • Everyday tasks such as moving a table or a bed typically involve at least two people, and the balance of the object changes with each person's action. Many previous studies, however, performed such tasks with robots alone, without factoring in human cooperation. In this paper, we therefore propose a cooperative robot for table balancing based on Q-learning that enables joint work between human and robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. Human action classification uses deep learning, specifically AlexNet, and achieves 96.9% accuracy under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials, and the results show that the Q function converged stably at this number of episodes; this stable convergence determined the Q-learning policies for the robot's actions. A video of the human-robot cooperation on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
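A minimal sketch of the tabular Q-learning loop implied by this abstract, with table states standing in for the camera-recognized conditions and a toy transition model in place of the real robot and camera. The episode count follows the number reported above; the state and action names, reward values, and inner horizon are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["lift_left", "lift_right", "hold"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def toy_step(state, action):
    """Stand-in dynamics: the matching action levels the table, others tilt it."""
    needed = {"tilted_left": "lift_left", "tilted_right": "lift_right", "level": "hold"}
    if action == needed[state]:
        return "level", 1.0
    return random.choice(["tilted_left", "tilted_right"]), -1.0

for _ in range(2000):                                  # episodes, as reported in the paper
    state = random.choice(["tilted_left", "tilted_right"])
    for _ in range(200):                               # illustrative steps per episode
        if random.random() < EPS:
            action = random.choice(ACTIONS)            # explore
        else:
            action = max(Q[state], key=Q[state].get)   # exploit
        next_state, reward = toy_step(state, action)
        # standard Q-learning update
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state].values()) - Q[state][action])
        state = next_state
```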

Control for Manipulator of an Underwater Robot Using Meta Reinforcement Learning (메타강화학습을 이용한 수중로봇 매니퓰레이터 제어)

  • Moon, Ji-Youn;Moon, Jang-Hyuk;Bae, Sung-Hoon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.1 / pp.95-100 / 2021
  • This paper introduces model-based meta reinforcement learning for controlling the manipulator of an underwater construction robot. Model-based meta reinforcement learning quickly updates the dynamics model from recent experience in the real application and passes the model to a model predictive controller, which computes the manipulator's control inputs to reach the target position. The simulation environment for model-based meta reinforcement learning is built using MuJoCo and Gazebo, and the real manipulator-control environment for the underwater construction robot is set up to deal with model uncertainties.
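A rough sketch of the model-based control loop described above: a dynamics model is refit from recent transitions and handed to a random-shooting model predictive controller that steers the state toward a target. The linear model, state and action dimensions, and data are illustrative assumptions, and the meta-learning part (fast adaptation of the model across tasks) is not shown.

```python
import numpy as np

def fit_model(states, actions, next_states):
    """Least-squares fit of s' ~ [s, a] @ A from the most recent transitions."""
    X = np.hstack([states, actions])
    A, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return A

def mpc_action(A, s, target, horizon=5, n_samples=256, a_dim=2):
    """Random-shooting MPC: sample action sequences, roll them through the model."""
    best_cost, best_a0 = np.inf, np.zeros(a_dim)
    for _ in range(n_samples):
        seq = np.random.uniform(-1, 1, size=(horizon, a_dim))
        s_pred = s.copy()
        for a in seq:
            s_pred = np.concatenate([s_pred, a]) @ A    # predicted next state
        cost = np.linalg.norm(s_pred - target)
        if cost < best_cost:
            best_cost, best_a0 = cost, seq[0]
    return best_a0                                      # first action of the best sequence

# Stand-in recent experience; in the paper this buffer is refreshed online so the
# model adapts quickly to the real underwater dynamics.
s_dim, a_dim = 4, 2
states = np.random.randn(50, s_dim)
actions = np.random.randn(50, a_dim)
next_states = states + 0.1 * np.hstack([actions, actions])
A = fit_model(states, actions, next_states)
a0 = mpc_action(A, states[-1], target=np.zeros(s_dim), a_dim=a_dim)
```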

Qualitative Exploration on Children's Interactions in Telepresence Robot Assisted Language Learning (원격로봇 보조 언어교육의 아동 상호작용 질적 탐색)

  • Shin, Kyoung Wan Cathy;Han, Jeong-Hye
    • Journal of the Korea Convergence Society / v.8 no.3 / pp.177-184 / 2017
  • The purpose of this study was to explore child-robot interaction in distance language learning environments using three different video-conferencing technologies: two traditional screen-based video-conferencing technologies and a telepresence robot. One American and six Korean elementary school students participated in our case study. We relied on narratives from one-on-one interviews and observation of nonverbal cues in robot-assisted language learning. Our findings suggest that participants responded more positively to interactions via the telepresence robot than to the two screen-based video-conferencing conditions, with many citing a stronger sense of immediacy during robot-mediated communication.

A Study on the Intention to Use a Robot-based Learning System with Multi-Modal Interaction (멀티모달 상호작용 중심의 로봇기반교육 콘텐츠를 활용한 r-러닝 시스템 사용의도 분석)

  • Oh, Junseok;Cho, Hye-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.20 no.6 / pp.619-624 / 2014
  • This paper introduces a robot-based learning system designed to teach multiplication to children. In addition to a small humanoid and a smart device that delivers the educational content, we employ a form of mixed-initiative operation that provides enhanced multi-modal cognition to the r-learning system through human intervention. To investigate the major factors that influence people's intention to use the r-learning system, and to see how multi-modality affects those connections, we performed a user study based on the Technology Acceptance Model (TAM). The results indicate that system quality and natural interaction are key factors in the adoption of the r-learning system, and they also reveal interesting implications regarding human behavior.