• Title/Summary/Keyword: Robot-based Learning


Effects of SW Training using Robot Based on Card Coding on Learning Motivation and Attitude (카드 코딩 기반의 로봇을 활용한 SW 교육이 학습동기 및 태도에 미치는 영향)

  • Jun, SooJin
    • Journal of The Korean Association of Information Education
    • /
    • v.22 no.4
    • /
    • pp.447-455
    • /
    • 2018
  • The purpose of this study is to investigate the effects of SW education using a card-coding-based robot on the learning motivation and attitudes of elementary school students. To do this, we conducted 8 hours of SW education covering the CT concepts of sequence, repetition, event, and control for 3rd-grade elementary school students, using Truetrue, a robot programmed with command cards. For the experiment, learning motivation for SW education and attitudes toward robot-based SW education were measured before and after the instruction. As a result, the students' motivation to learn SW showed a statistically significant improvement. In addition, attitudes toward robot-based SW education improved significantly on items such as "good, convenient, interesting, easy, friendly, active, special, understandable, and simple". These results are expected to contribute to the expansion of SW education through a variety of instructional approaches.

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2001.10a
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment, yet can still learn the optimal policy provided the agent can visit every state-action pair infinitely often. However, the biggest problem with monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a weight according to its contribution to reward. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
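The mediation idea described above — each module keeps its own Q-table, and the mediator combines module estimates with weights that track each module's contribution to reward — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation; the class name, credit rule, and parameters are assumptions.

```python
class AMMQLAgent:
    """Sketch of Adaptive Mediation-based Modular Q-Learning (AMMQL).

    Each module learns a Q-table over its own sub-state; action
    selection sums module Q-values weighted by each module's share
    of credited reward (an illustrative credit rule, not the paper's)."""

    def __init__(self, n_modules, actions, alpha=0.5, gamma=0.9):
        self.n = n_modules
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.q = [dict() for _ in range(n_modules)]   # per-module Q-tables
        self.w = [1.0 / n_modules] * n_modules        # mediation weights
        self.credit = [1e-6] * n_modules              # reward credited per module

    def _qval(self, m, s, a):
        return self.q[m].get((s, a), 0.0)

    def select(self, substates):
        # Mediator: weighted sum of module Q-values for each action.
        def score(a):
            return sum(self.w[m] * self._qval(m, substates[m], a)
                       for m in range(self.n))
        return max(self.actions, key=score)

    def update(self, substates, action, reward, next_substates):
        for m in range(self.n):
            s, s2 = substates[m], next_substates[m]
            best = max(self._qval(m, s2, a) for a in self.actions)
            old = self._qval(m, s, action)
            self.q[m][(s, action)] = old + self.alpha * (
                reward + self.gamma * best - old)
            # Credit a module when its estimate agreed with the reward's sign.
            if reward != 0 and old * reward > 0:
                self.credit[m] += abs(reward)
        total = sum(self.credit)
        self.w = [c / total for c in self.credit]
```

Unlike plain MQL, which combines module outputs in a fixed way (e.g., greatest-mass voting), the weights here shift toward modules whose estimates have tracked reward, which is the source of the adaptability claimed in the abstract.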


Quantitative evaluation of transfer learning for image recognition AI of robot vision (로봇 비전의 영상 인식 AI를 위한 전이학습 정량 평가)

  • Jae-Hak Jeong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.909-914
    • /
    • 2024
  • This study suggests a quantitative evaluation of transfer learning, which is widely used across AI fields, including image recognition for robot vision. Prior work presents quantitative and qualitative analyses of results obtained with transfer learning, but rarely examines transfer learning itself. This study therefore proposes a quantitative evaluation of transfer learning itself, based on MNIST, a handwritten-digit database. For a reference network, the change in recognition accuracy is tracked as a function of the depth of the frozen layers and of the ratio of transfer-learning data to pre-training data. It is observed that when freezing up to the first layer and using a transfer-learning data ratio of at least 3%, a recognition accuracy above 90% can be stably maintained. The proposed quantitative evaluation method can be used to implement transfer learning optimized for a given network structure and data type, and will expand the use of robot vision and image-analysis AI in various environments.
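The frozen-layer mechanism the study varies can be illustrated with a toy two-layer model: during fine-tuning, gradient updates are simply skipped for layers marked as frozen. This is a minimal pure-Python sketch under assumed names, not the paper's MNIST network.

```python
def train(data, w, frozen, lr=0.01, epochs=200):
    """Toy two-layer linear model: y_hat = w[1] * (w[0] * x).

    Layers listed in `frozen` keep their pretrained weights, mirroring
    how transfer learning freezes early feature layers and fine-tunes
    only the rest (illustrative sketch, squared-error loss)."""
    w = list(w)
    for _ in range(epochs):
        for x, y in data:
            h = w[0] * x                       # "early" layer output
            y_hat = w[1] * h                   # "late" layer output
            err = y_hat - y
            grads = [err * w[1] * x, err * h]  # dL/dw per layer
            for i in range(2):
                if i not in frozen:            # frozen layers get no update
                    w[i] -= lr * grads[i]
    return w
```

With pretrained weights [1.0, 1.0] and target data y = 2x, freezing layer 0 leaves w[0] untouched while w[1] is fine-tuned toward 2 — the same pattern as freezing a feature extractor and retraining the classifier head.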

The Effects of the Lab Practices Using Robot on Science Process Skills in the Elementary (초등학교에서 로봇활용실험이 과학탐구능력에 미치는 효과)

  • Kim, Chul
    • Journal of The Korean Association of Information Education
    • /
    • v.15 no.4
    • /
    • pp.625-634
    • /
    • 2011
  • This research examines the effects on students' science process skills of applying robot-based MBL (Microcomputer-Based Laboratory) learning. Surveys and interviews concerning the robot-based science lessons were also conducted. The students were divided into an experimental group, which used the robots, and a control group, which used traditional textbook-and-experiment instruction. The results showed significant differences in scientific measurement, prediction, and inference (p < .05). In contrast, no significant differences were found in observation and classification. In the survey, the students answered that the robots helped them understand science better and made the lessons more interesting.


Fast Motion Planning of Wheel-legged Robot for Crossing 3D Obstacles using Deep Reinforcement Learning (심층 강화학습을 이용한 휠-다리 로봇의 3차원 장애물극복 고속 모션 계획 방법)

  • Soonkyu Jeong;Mooncheol Won
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.143-154
    • /
    • 2023
  • In this study, a fast motion planning method for the swing motion of a 6x6 wheel-legged robot to traverse large obstacles and gaps is proposed. The motion planning method presented in the previous paper, which was based on trajectory optimization, took up to tens of seconds and was limited to two-dimensional, structured vertical obstacles and trenches. A deep neural network based on one-dimensional Convolutional Neural Network (CNN) is introduced to generate keyframes, which are then used to represent smooth reference commands for the six leg angles along the robot's path. The network is initially trained using the behavioral cloning method with a dataset gathered from previous simulation results of the trajectory optimization. Its performance is then improved through reinforcement learning, using a one-step REINFORCE algorithm. The trained model has increased the speed of motion planning by up to 820 times and improved the success rates of obstacle crossing under harsh conditions, such as low friction and high roughness.
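The fine-tuning stage named in the abstract — a one-step REINFORCE update — can be sketched in its simplest (bandit) form: sample an action from a softmax policy, observe the reward, and move the preferences along reward × grad(log π). This is an illustration of the algorithm itself under assumed names, not the paper's keyframe-generation network.

```python
import math
import random

def softmax(prefs):
    m = max(prefs.values())
    exps = {a: math.exp(p - m) for a, p in prefs.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def reinforce_step(prefs, reward_fn, lr=0.5, rng=random):
    """One-step REINFORCE on a softmax policy over discrete actions."""
    pi = softmax(prefs)
    r, acc, action = rng.random(), 0.0, None
    for a, p in pi.items():        # sample an action from pi
        acc += p
        if r <= acc:
            action = a
            break
    if action is None:             # guard against rounding at the tail
        action = a
    reward = reward_fn(action)
    for a in prefs:                # grad log pi: (1{a=action} - pi(a))
        grad_log = (1.0 if a == action else 0.0) - pi[a]
        prefs[a] += lr * reward * grad_log
    return action, reward
```

Repeating the step concentrates probability on the rewarded action; in the paper's setting the same gradient signal would instead adjust the network that produces keyframes.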

Area-Based Q-learning Algorithm to Search Target Object of Multiple Robots (다수 로봇의 목표물 탐색을 위한 Area-Based Q-learning 알고리즘)

  • Yoon, Han-Ul;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.4
    • /
    • pp.406-411
    • /
    • 2005
  • In this paper, we present area-based Q-learning for searching for a target object with multiple robots. To search for the target in a Markovian space, the robots should recognize the surroundings at their current locations and generate rules to act upon by themselves. Under area-based Q-learning, a robot first obtains six distances from itself to the environment using infrared sensors allocated hexagonally around it. Second, it calculates six areas from those distances and then takes an action, i.e., it turns and moves toward where the widest free space is guaranteed. After the action is taken, the Q-value for that state is updated by the corresponding update formula. We set up an experimental environment with five small mobile robots, obstacles, and a target object, and had the robots search for the target while navigating an unknown hallway in which obstacles were placed. At the end of this paper, we present the results of three algorithms: random search, area-based action making (ABAM), and hexagonal area-based Q-learning.
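The area computation behind ABAM can be reconstructed from the abstract: with six IR distances measured at 60-degree intervals, each adjacent pair of rays bounds a triangular sector of area 0.5·d_i·d_j·sin(60°), and the robot turns toward the largest one. The sector formula and function name are assumptions consistent with the description, not taken from the paper.

```python
import math

def widest_sector(distances):
    """ABAM sketch: return (index, areas) for six hexagonally
    arranged range readings, where index is the sector with the
    largest free triangular area between adjacent rays."""
    assert len(distances) == 6
    s = math.sin(math.radians(60))
    areas = [0.5 * distances[i] * distances[(i + 1) % 6] * s
             for i in range(6)]
    return max(range(6), key=areas.__getitem__), areas
```

In the Q-learning variant, the chosen sector index would serve as the action whose Q-value is then updated from the observed reward.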

Realtime Evolutionary Learning of Mobile Robot Behaviors (이동 로봇 행위의 실시간 진화)

  • Lee, Jae-Gu;Shim, In-Bo;Yoon, Joong-Sun
    • Proceedings of the KSME Conference
    • /
    • 2003.04a
    • /
    • pp.816-821
    • /
    • 2003
  • Researchers have utilized artificial evolution and learning techniques to study the interactions between learning and evolution. Adaptation in dynamic environments gains a significant advantage from combining evolution and learning. We propose an on-line, realtime evolutionary learning mechanism that determines the structure and synaptic weights of a neural network controller for mobile robot navigation. We support our method, based on a (1+1) evolution strategy that produces changes during the lifetime of an individual to increase its adaptability, with a set of experiments on evolutionary neural controllers for physical robot behaviors. We investigate the effects of learning in the evolutionary process by comparing the performance of the proposed realtime evolutionary learning method with that of the evolutionary method alone. We also investigate an interactive evolutionary algorithm to overcome the difficulties of evaluating complicated tasks.
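The (1+1) evolution strategy underlying the method is simple enough to sketch in full: one parent, one Gaussian-mutated offspring per generation, and the offspring replaces the parent only if it is at least as fit. The toy scalar fitness below is an assumption for illustration; the paper applies the strategy to neural-controller weights.

```python
import random

def one_plus_one_es(fitness, x0, sigma=0.5, steps=200, rng=random):
    """(1+1) evolution strategy: elitist hill-climbing with
    Gaussian mutation (minimal sketch, fixed step size)."""
    parent = list(x0)
    best = fitness(parent)
    for _ in range(steps):
        child = [w + rng.gauss(0, sigma) for w in parent]
        f = fitness(child)
        if f >= best:              # keep the offspring only if no worse
            parent, best = child, f
    return parent, best
```

Because the parent is never replaced by a worse offspring, fitness is monotonically non-decreasing — the property that makes the strategy usable on-line, during the lifetime of an individual controller.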


Learning soccer robot using genetic programming

  • Wang, Xiaoshu;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1999.10a
    • /
    • pp.292-297
    • /
    • 1999
  • Evolving artificial agents is an extremely difficult, but also challenging, task. At present, studies have mainly centered on single-agent learning problems. In our case, we use simulated soccer to investigate multi-agent cooperative learning. Considering their fundamentally different learning mechanisms, existing reinforcement learning algorithms can be roughly classified into two types: those based on evaluation functions and those that search the policy space directly. Genetic Programming, developed from Genetic Algorithms, is one of the best-known approaches of the latter type. In this paper, we first give a detailed algorithm description, as well as the data structures necessary for learning single-agent strategies. In the following step, we extend the developed methods to the multi-robot domain. We investigate and contrast two different methods, simple team learning and sub-group learning, and conclude the paper with some experimental results.
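Genetic programming's direct search of policy space — evolving executable expression trees by selection and mutation rather than estimating an evaluation function — can be sketched on a toy symbolic-regression task. Everything here (operator set, mutation rate, fitness) is an illustrative assumption; the paper evolves soccer strategies, not arithmetic.

```python
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}
TERMS = ["x", 1.0]

def rand_tree(depth, rng):
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    op = rng.choice(list(OPS))
    return (op, rand_tree(depth - 1, rng), rand_tree(depth - 1, rng))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, rng, depth=2):
    # Replace a random subtree with a freshly generated one.
    if rng.random() < 0.3 or not isinstance(tree, tuple):
        return rand_tree(depth, rng)
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(left, rng, depth), right)
    return (op, left, mutate(right, rng, depth))

def gp_search(target, pop=60, gens=40, seed=0):
    """Mutation-only GP with truncation selection and elitism."""
    rng = random.Random(seed)
    xs = [i / 4.0 for i in range(-8, 9)]
    def err(t):
        return sum((evaluate(t, x) - target(x)) ** 2 for x in xs)
    population = [rand_tree(3, rng) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=err)
        survivors = population[: pop // 3]          # elitist truncation
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop - len(survivors))]
    best = min(population, key=err)
    return best, err(best)
```

Because survivors are carried over unchanged, the best error never increases; a full GP system would add crossover between trees, which this sketch omits for brevity.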


Learning Relational Instance-Based Policies from User Demonstrations (사용자 데모를 이용한 관계적 개체 기반 정책 학습)

  • Park, Chan-Young;Kim, Hyun-Sik;Kim, In-Cheol
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.5
    • /
    • pp.363-369
    • /
    • 2010
  • Demonstration-based learning has the advantage that a user can easily teach his or her robot new task knowledge simply by demonstrating how to perform the task. However, many previous demonstration-based learning techniques used an attribute-value vector model to represent their state spaces and policies. Due to the limitations of this model, they suffered from both low efficiency in the learning process and low reusability of the learned policy. In this paper, we present a new demonstration-based learning method in which a relational model is adopted in place of the attribute-value model. Applying relational instance-based learning to training examples extracted from records of the user demonstrations, the method derives a relational instance-based policy that can easily be reused for other similar tasks in the same domain. A relational policy maps a context, represented as a (state, goal) pair, to the corresponding action to be executed. In this paper, we give a detailed explanation of our demonstration-based relational policy learning method and then analyze its effectiveness through experiments using a robot simulator.
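The (state, goal) → action mapping described above can be sketched as an instance-based (nearest-neighbor) policy over relational contexts. Modeling relations as frozensets of (predicate, args) tuples and similarity as relation overlap is an illustrative stand-in for the paper's representation, not its actual formalism.

```python
def learn_policy(demos):
    """Instance-based learning is lazy: 'training' just stores the
    demonstrated ((state, goal), action) instances."""
    return list(demos)

def similarity(ctx_a, ctx_b):
    # Overlap of state relations plus overlap of goal relations.
    (s1, g1), (s2, g2) = ctx_a, ctx_b
    return len(s1 & s2) + len(g1 & g2)

def act(policy, state, goal):
    """Return the action of the stored context most similar to the
    current (state, goal) context."""
    ctx = (frozenset(state), frozenset(goal))
    best_ctx, action = max(policy,
                           key=lambda inst: similarity(inst[0], ctx))
    return action
```

Because similarity is computed over shared relations rather than fixed attribute positions, the same stored instances generalize to states that mention different objects or extra facts — the reusability advantage the abstract claims over attribute-value vectors.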

Development of Robot Education Program for Pre-service Elementary Teachers Using Educational Robot and its Application (교육용 로봇을 활용한 예비초등교사 로봇교육프로그램의 개발 및 적용)

  • Song, Ui-Sung
    • Journal of Digital Contents Society
    • /
    • v.14 no.3
    • /
    • pp.333-341
    • /
    • 2013
  • Robot education has a favorable influence on students' creativity and problem-solving ability. It has therefore become familiar to elementary school students, their parents, and teachers through after-school robot classes and contests. However, it has not been actively taught to pre-service teachers at universities of education because of the lack of a systematic education program. In this paper, we develop a robot education program for pre-service elementary teachers using problem-based learning and an educational robot. After applying the developed program, we examined the participants' perceptions of robot education and of the program itself, and conducted interviews on how to improve the program. We find that the program has a favorable influence on the perception of, satisfaction with, and perceived effectiveness of robot education. In particular, the participants' willingness to engage with robot-related education was strengthened by the program.