• Title/Summary/Keyword: learning with a robot


Reward Shaping for a Reinforcement Learning Method-Based Navigation Framework

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.9-11 / 2022
  • Applying Reinforcement Learning to everyday applications and varied environments has demonstrated the potential of the field and revealed pitfalls along the way. In robotics, a learning agent gradually takes over control of a robot by abstracting its navigation model through its inputs and outputs, thus reducing human intervention. The challenge for the agent is implementing a feedback function that facilitates learning of the MDP problem in an environment while reducing the method's convergence time. In this paper we implement a reward shaping system in a ROS environment that avoids sparse rewards, which provide too little data for the learning agent. Reward shaping gives intermediate rewards that prioritize behaviours bringing the robot closer to the goal, helping the algorithm converge quickly. We use a pseudocode implementation as an illustration of the method; see the sketch after this entry.

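The abstract above mentions a pseudocode illustration; as a rough companion, here is a minimal Python sketch of a potential-based shaping term for a goal-reaching navigation task. The distance-based potential, the bonus and penalty values, and the function names are illustrative assumptions, not the authors' implementation.

```python
import math

def potential(position, goal):
    """Negative Euclidean distance to the goal: closer states have higher potential."""
    return -math.dist(position, goal)

def shaped_reward(prev_pos, new_pos, goal, reached_goal, collided,
                  gamma=0.99, goal_bonus=100.0, collision_penalty=-100.0):
    """Sparse terminal reward plus a potential-based shaping term.

    The shaping term gamma * phi(s') - phi(s) rewards every step that moves
    the robot closer to the goal, giving the agent dense feedback instead of
    a single reward at the end of the episode.
    """
    if reached_goal:
        return goal_bonus
    if collided:
        return collision_penalty
    return gamma * potential(new_pos, goal) - potential(prev_pos, goal)

# Example: a step that halves the distance to the goal earns a positive reward.
print(shaped_reward((0.0, 0.0), (1.0, 0.0), goal=(2.0, 0.0),
                    reached_goal=False, collided=False))   # about 1.01
```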

Behavior Learning and Evolution of Swarm Robot System using Support Vector Machine (SVM을 이용한 군집로봇의 행동학습 및 진화)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.712-717 / 2008
  • In swarm robot systems, each robot must act by itself according to its own states and environment and, if necessary, cooperate with other robots in order to carry out a given task. It is therefore essential that each robot has both learning and evolution abilities to adapt to dynamic environments. In this paper, a reinforcement learning method using an SVM based on structural risk minimization, combined with a distributed genetic algorithm, is proposed for behavior learning and evolution of collective autonomous mobile robots. Through the distributed genetic algorithm, each robot exchanges by communication the chromosomes acquired under different environments and can thereby improve its behavior ability. In particular, to improve the performance of evolution, a selective crossover scheme that exploits the characteristics of SVM-based reinforcement learning is adopted; a sketch of the chromosome-exchange step appears after this entry.
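
As a rough illustration of the chromosome-exchange idea described above, here is a minimal sketch of one local generation of a distributed GA on a single robot. The encoding, fitness function, selection scheme, and operators are hypothetical placeholders; the paper's selective crossover based on SVM characteristics is not reproduced here.

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover between two chromosomes (lists of genes)."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.05, sigma=0.1):
    """Gaussian mutation applied gene-wise with a small probability."""
    return [g + random.gauss(0.0, sigma) if random.random() < rate else g
            for g in chromosome]

def local_generation(local_population, received, fitness, keep=4):
    """One GA generation on a single robot.

    `received` holds chromosomes broadcast by other robots (evolved in
    different environments); mixing them into the local pool before
    selection is the exchange step described in the abstract.
    """
    pool = local_population + received
    pool.sort(key=fitness, reverse=True)
    elite = pool[:keep]
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(len(local_population) - keep)]
    return elite + children
```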

A study on Indirect Adaptive Decentralized Learning Control of the Vertical Multiple Dynamic System

  • Lee, Soo-Cheol;Park, Seok-Sun;Lee, Jeh-Won
    • International Journal of Precision Engineering and Manufacturing / v.7 no.1 / pp.62-66 / 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing that specific task. In previous work, the authors presented an iterative precision analysis of linear decentralized learning control based on the p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots doing repetitive tasks such as assembly line work. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual decentralized robot controllers, treating each link as if it were independent of any coupling with other links. These techniques are demonstrated in a numerical simulation of a vertical dynamic robot, and the learning methods are shown to yield iterative precision for each link; a minimal sketch of a decentralized learning update follows this entry.
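
Below is a minimal sketch of a decentralized, P-type iterative learning update, in the spirit of treating each link independently as described above. The P-type form and the gain values are illustrative assumptions; the paper's p-integrated and adaptive formulations are not reproduced.

```python
import numpy as np

def decentralized_ilc_update(u_prev, errors, learning_gains):
    """P-type iterative learning update applied link by link.

    u_prev:         list of input trajectories, one (T,) array per link
    errors:         list of tracking-error trajectories from the last trial
    learning_gains: one scalar learning gain per link

    Each link's controller only sees its own error, i.e. coupling with the
    other links is ignored, as in the decentralized scheme described above.
    """
    return [np.asarray(u) + gain * np.asarray(e)
            for u, e, gain in zip(u_prev, errors, learning_gains)]

# After every repetition of the task, feed the measured errors back in:
# u_next = decentralized_ilc_update(u_prev, measured_errors, [0.5, 0.5, 0.3])
```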

Behavior-based Learning Controller for Mobile Robot using Topological Map (Topological Map을 이용한 이동로봇의 행위기반 학습제어기)

  • Yi, Seok-Joo;Moon, Jung-Hyun;Han, Shin;Cho, Young-Jo;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.2834-2836 / 2000
  • This paper introduces a behavior-based learning controller for a mobile robot using a topological map. When the mobile robot navigates to the goal position, it utilizes the given topological map and its own location. While navigating in an unknown environment, the robot classifies its situation using ultrasonic sensor data, computes each motor schema multiplied by its respective gain for all behaviors, and then takes an action according to the vector sum of all the motor schemas (this step is sketched after this entry). After an action, the robot's location in the given topological map is fed to the learning module to adapt the weights of the neural network for gain learning. Simulation results show that the robot navigates to the goal position successfully after iterative gain learning with topological information.

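A minimal sketch of the motor-schema blending step described above: each behavior proposes a velocity vector, and the commanded action is the gain-weighted vector sum. The behavior names and gain values are illustrative placeholders; the neural-network gain learning itself is not shown.

```python
import numpy as np

def blend_motor_schemas(schema_vectors, gains):
    """Weighted vector sum of behavior outputs.

    schema_vectors: one 2-D velocity vector per behavior
                    (e.g. move-to-goal, avoid-obstacle, follow-wall)
    gains:          weights produced by the gain-learning module
    """
    return sum(g * np.asarray(v, dtype=float)
               for g, v in zip(gains, schema_vectors))

# Example: goal attraction and obstacle avoidance pulling in different directions.
command = blend_motor_schemas([(1.0, 0.0), (-0.2, 0.8)], gains=[0.7, 0.3])
print(command)  # resulting velocity command for the robot
```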

Behavior Learning and Evolution of Swarm Robot System using Q-learning and Cascade SVM (Q-learning과 Cascade SVM을 이용한 군집로봇의 행동학습 및 진화)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.2 / pp.279-284 / 2009
  • In swarm robot systems, each robot must behave by itself according to its own states and environment and, if necessary, cooperate with other robots in order to carry out a given task. It is therefore essential that each robot has both learning and evolution abilities to adapt to dynamic environments. In this paper, a reinforcement learning method using multiple SVMs based on structural risk minimization, combined with a distributed genetic algorithm, is proposed for behavior learning and evolution of collective autonomous mobile robots. Through the distributed genetic algorithm, each robot exchanges by communication the chromosomes acquired under different environments and can thereby improve its behavior ability. In particular, to improve the performance of evolution, a selective crossover scheme that exploits the characteristics of Cascade-SVM-based reinforcement learning is adopted. A sketch of the Q-learning step used for behavior learning follows this entry.
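
For reference, here is a minimal tabular Q-learning agent of the kind the title refers to, usable for discrete behavior selection. The state and action encodings and the hyperparameters are placeholders; the Cascade SVM and distributed GA parts of the paper are not reproduced here.

```python
import random
from collections import defaultdict

class QLearner:
    """Tabular Q-learning with an epsilon-greedy behavior-selection policy."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Pick a random action occasionally, otherwise the greedy one."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning backup."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```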

A Study on Systematic Review of Learning with a Robot (로봇활용교육의 체계적 문헌고찰에 관한 연구)

  • Kim, Chul
    • Journal of The Korean Association of Information Education / v.17 no.2 / pp.199-209 / 2013
  • This study reviews the effects of learning with a robot in regular elementary and middle school courses through a systematic review of papers published from 2001 to 2013. The databases searched were KISS, DBpia, and E-article, using the two search terms 'robot & education' and 'learning with a robot'. Initially, 481 papers were retrieved; 50 were finally selected after screening and extraction according to the review protocol. A large share of the research focused on academic skills such as creativity and problem-solving, and the most common research methods were the pretest-posttest control group design and the t-test. Reported educational effects included improvements in course interest, immersion, attitude, motivation, creativity, and problem-solving skills, although some studies reported insignificant outcomes. Based on the analysis results, considerations for learning with a robot are suggested.


Gain Tuning for SMCSPO of Robot Arm with Q-Learning (Q-Learning을 사용한 로봇팔의 SMCSPO 게인 튜닝)

  • Lee, JinHyeok;Kim, JaeHyung;Lee, MinCheol
    • The Journal of Korea Robotics Society / v.17 no.2 / pp.221-229 / 2022
  • Sliding mode control (SMC) is a robust method for controlling a robot arm with nonlinear properties. SMC achieves adequate control performance with a high switching gain, even without an exact robot model containing the nonlinear and uncertainty terms, but a high switching gain causes chattering. To address this problem, SMC with a sliding perturbation observer (SMCSPO) has been studied: the observer estimates the perturbation, which is then compensated, so a lower switching gain can be chosen and chattering is reduced. However, optimal gain tuning is still necessary to obtain better tracking performance and further reduce chattering. This paper proposes a method in which Q-learning automatically tunes the control gains of SMCSPO through an iterative procedure. In this tuning method, the reinforcement learning (RL) reward is set to the negative of the state tracking errors, and the RL action is a change of control gain chosen to maximize the reward as the number of movement iterations increases (this reward and action design is sketched after this entry). A simple motion test of a 7-DOF robot arm was simulated in MATLAB to verify the RL tuning algorithm, and the simulation showed that the method can automatically tune the SMCSPO control gains.
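
Below is a hypothetical sketch of the tuning problem framed as an RL environment, following the abstract's reward and action design: an action nudges a switching gain up or down, and the reward is the negative accumulated tracking error. `simulate_smcspo` is an assumed stand-in for the paper's MATLAB simulation of the 7-DOF arm, not a real API.

```python
class GainTuningEnv:
    """RL view of SMCSPO gain tuning: the state is the current gain vector,
    an action nudges one gain up or down, and the reward is the negative
    accumulated tracking error of the resulting motion."""

    def __init__(self, simulate_smcspo, initial_gains, step_size=0.1):
        # simulate_smcspo(gains) is assumed to run one motion with the given
        # switching gains and return the per-joint tracking errors.
        self.simulate = simulate_smcspo
        self.gains = list(initial_gains)
        self.step_size = step_size

    def step(self, joint_index, direction):
        """direction is -1, 0, or +1; returns (new state, reward)."""
        self.gains[joint_index] = max(
            0.0, self.gains[joint_index] + direction * self.step_size)
        errors = self.simulate(self.gains)
        reward = -sum(abs(e) for e in errors)   # smaller error -> larger reward
        return tuple(self.gains), reward
```

A tabular Q-learning agent such as the one sketched earlier in this list can then be run against this environment, adjusting one gain per iteration to maximize the reward.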

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.47-49 / 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area is the application of Deep Learning (DL) to computer vision, with outstanding everyday applications such as face recognition and object detection. However, robotics in general still depends on human input in areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning. We look into the benefits of switching from traditional ROS-based navigation algorithms towards machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and DRL before focusing on visual navigation based on DRL. The case study is a prelude to further real-life deployment in which a mobile navigation agent learns to navigate unknown areas.


Qualitative Exploration on Children's Interactions in Telepresence Robot Assisted Language Learning (원격로봇 보조 언어교육의 아동 상호작용 질적 탐색)

  • Shin, Kyoung Wan Cathy;Han, Jeong-Hye
    • Journal of the Korea Convergence Society / v.8 no.3 / pp.177-184 / 2017
  • The purpose of this study was to explore child-robot interaction in distance language learning environments using three different video-conferencing technologies: two traditional screen-based video-conferencing systems and a telepresence robot. One American and six Korean elementary school students participated in the case study. We relied on narratives from one-on-one interviews and observation of nonverbal cues in robot-assisted language learning. Our findings suggest that participants responded more positively to interaction via the telepresence robot than to the two screen-based video-conferencing systems, with many citing a stronger sense of immediacy during robot-mediated communication.

Dual Mode Control for the Robot with Redundant Degree of Freedom -The application of the preview learning control to the gross motion part-

  • Mori, Yasuchika;Nyudo, Shin
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10b / pp.296-300 / 1992
  • This paper deals with a dual-mode control system design for a starching work robot. Because of the nature of this work, the robot has redundant degrees of freedom. We split the whole movement of the robot into a gross motion part and a fine motion part so as to achieve good tracking performance. Preview learning control is applied to the gross motion part. The validity of the dual-mode control architecture is demonstrated.
