• Title/Summary/Keyword: Dynamic Learning


A Learning Controller for Repetitive Gate Control of Biped Walking Robot (이족 보행 로봇의 반복 걸음새 제어를 위한 학습 제어기)

  • 임동철;국태용
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2000.10a / pp.538-538 / 2000
  • This paper presents a learning controller for repetitive gait control of a biped robot. The learning control scheme consists of a feedforward learning rule and a linear feedback control input for stabilization of the learning system. The feasibility of learning control for biped robotic motion is shown via dynamic simulation with a 12-DOF biped robot. (An illustrative sketch follows this entry.)

  • PDF
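
The feedforward-learning-plus-feedback structure described in this abstract can be illustrated with a minimal iterative learning control (ILC) sketch in Python. The single-joint toy plant, the P-type learning rule, and all gain values below are assumptions made for illustration; they are not the authors' 12-DOF implementation.

```python
import numpy as np

def ilc_gait_episode(u_ff, q_ref, plant, Kp=20.0, Kd=2.0, gamma=0.3):
    """One repetition of the gait: apply feedforward + feedback, then update
    the feedforward term from the tracking error (P-type learning rule).
    All gains and the plant interface are illustrative assumptions."""
    q, qd = 0.0, 0.0                      # single-joint toy state
    e_hist = np.zeros_like(u_ff)
    for k in range(len(q_ref)):
        e = q_ref[k] - q                  # tracking error
        u = u_ff[k] + Kp * e - Kd * qd    # feedforward + stabilizing feedback
        q, qd = plant(q, qd, u)           # advance the toy joint dynamics
        e_hist[k] = e
    u_ff_next = u_ff + gamma * e_hist     # feedforward learning rule
    return u_ff_next, e_hist

def toy_joint(q, qd, u, dt=0.01, m=1.0, b=0.5):
    """Hypothetical single-joint dynamics used only to exercise the loop."""
    qdd = (u - b * qd) / m
    return q + dt * qd, qd + dt * qdd

q_ref = np.sin(np.linspace(0, 2 * np.pi, 200))   # one periodic gait cycle
u_ff = np.zeros_like(q_ref)
for it in range(30):                              # repeated gait cycles
    u_ff, e = ilc_gait_episode(u_ff, q_ref, toy_joint)
print("final RMS tracking error:", np.sqrt(np.mean(e ** 2)))
```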

An Adaptative Learning System by using SCORM-Based Dynamic Sequencing (SCORM 기반의 동적인 시퀀스를 이용한 적응형 학습 시스템)

  • Lee Jong-Keun;Kim Jun-Tae;Kim Hyung-Il
    • The KIPS Transactions: Part D / v.13D no.3 s.106 / pp.425-436 / 2006
  • An e-learning system in which learning follows predefined procedures cannot offer learning suited to the capability of each individual learner. To solve this problem, SCORM sequencing can be used to define various learning procedures according to the capabilities of learners. Currently the sequencing is designed by teachers or learning-content producers to regularize the learning program. However, the predefined sequencing may not reflect the characteristics of the learning group, and inappropriate sequencing may cause unnecessary repetition of learning. In this paper, we propose an automated evaluation system in which dynamic sequencing is applied. The dynamic sequencing feeds the evaluation results back into the standard scores used for sequencing; by changing the standard scores, the sequencing changes dynamically according to the evaluation results of a learning group. Through several experiments, we verified that the proposed learning system using dynamic sequencing is effective for providing learning procedures suited to the capabilities of learners. (An illustrative sketch follows this entry.)
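
As a rough illustration of the dynamic-sequencing idea, the sketch below shifts the standard (passing) score of each learning activity toward the evaluation results of a learning group and then uses those scores to pick the next activity. The data structures, thresholds, and update rule are assumptions for illustration only; they are not the SCORM sequencing API or the paper's implementation.

```python
from statistics import mean

# Hypothetical learning activities with an initial standard (passing) score.
activities = {"unit-1": 70.0, "unit-2": 70.0, "unit-3": 70.0}

def update_standard_scores(activities, group_results, alpha=0.2,
                           lo=50.0, hi=90.0):
    """Shift each activity's passing threshold toward the group's mean score,
    so the sequencing adapts to the capability of the learning group."""
    updated = {}
    for unit, threshold in activities.items():
        scores = group_results.get(unit, [])
        if scores:
            threshold = threshold + alpha * (mean(scores) - threshold)
        updated[unit] = min(hi, max(lo, threshold))
    return updated

def next_activity(learner_scores, activities):
    """Simple sequencing rule: repeat the first unit the learner has not yet
    passed under the current standard scores, otherwise finish."""
    for unit, threshold in activities.items():
        if learner_scores.get(unit, 0.0) < threshold:
            return unit
    return None  # all units passed

group_results = {"unit-1": [85, 90, 78], "unit-2": [55, 60, 48]}
activities = update_standard_scores(activities, group_results)
print(activities, next_activity({"unit-1": 80, "unit-2": 52}, activities))
```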

A study on Indirect Adaptive Decentralized Learning Control of the Vertical Multiple Dynamic System

  • Lee, Soo-Cheol;Park, Seok-Sun;Lee, Jeh-Won
    • International Journal of Precision Engineering and Manufacturing / v.7 no.1 / pp.62-66 / 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing that specific task. In previous work, the authors presented an iterative precision analysis of linear decentralized learning control based on the p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots doing repetitive tasks such as assembly-line work. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual decentralized robot controllers, treating each link as if it were independent of any coupling with the other links. The techniques are demonstrated in numerical simulation for a vertical dynamic robot, and the iterative precision of each link is shown. (An illustrative sketch follows this entry.)
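
A minimal sketch of the decentralized learning idea follows: each link's feedforward input is updated only from that link's own tracking error, even though the underlying plant couples the links. The two-link toy dynamics, gains, and P-type update are illustrative assumptions rather than the authors' indirect adaptive scheme.

```python
import numpy as np

def decentralized_learning_step(u_ff, q_ref, coupled_plant, gamma=0.4, Kp=15.0):
    """Each column of u_ff is one link's feedforward input, updated only from
    that link's own tracking error (decentralized P-type learning)."""
    n_steps, n_links = u_ff.shape
    q, qd = np.zeros(n_links), np.zeros(n_links)
    e_hist = np.zeros_like(u_ff)
    for k in range(n_steps):
        e = q_ref[k] - q
        u = u_ff[k] + Kp * e               # per-link feedback, no cross terms
        q, qd = coupled_plant(q, qd, u)
        e_hist[k] = e
    return u_ff + gamma * e_hist, e_hist    # per-link learning update

def two_link_plant(q, qd, u, dt=0.005):
    """Toy vertical two-link system with weak coupling and a gravity-like bias."""
    coupling = np.array([[1.0, 0.1], [0.1, 1.0]])
    qdd = np.linalg.solve(coupling, u - 0.3 * qd - 2.0)
    return q + dt * qd, qd + dt * qdd

t = np.linspace(0, 2 * np.pi, 400)
q_ref = np.stack([np.sin(t), 0.5 * np.cos(t)], axis=1)   # per-link references
u_ff = np.zeros_like(q_ref)
for _ in range(40):                                       # repeated trials
    u_ff, e = decentralized_learning_step(u_ff, q_ref, two_link_plant)
print("per-link RMS error:", np.sqrt(np.mean(e ** 2, axis=0)))
```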

Reconstruction of e-Learning Contents based on Web 2.0, and the Level Diagnosis (Web 2.0 기반 e-러닝 콘텐츠 재구성 및 수준 진단)

  • Lim, Yang-Won;Lim, Han-Kyu
    • The Journal of the Korea Contents Association / v.10 no.7 / pp.429-437 / 2010
  • As Web technology and functions have recently shifted to a user-focused paradigm, new studies in e-learning research and design are investigating dynamic learning content that enables learner participation and continuous learning. This paper studies the degree of difficulty of learner-focused dynamic learning content in order to provide efficient learning environments adapted to e-learning 2.0, and suggests DLA (Dynamic Level Adjustment) to provide learner-focused content. The suggested system is intended as a guideline for controlling and adapting learning content so that it can be applied easily to environmental change, and more in-depth future research can build on it. As a result of the performance evaluation, a dynamic learning content model was built that recognizes the various learning patterns of learners. (An illustrative sketch follows this entry.)
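
The paper's DLA mechanism is not specified in this abstract, so the sketch below only illustrates the general idea of moving a learner between difficulty levels of content based on recent assessment scores; the level names, window size, and thresholds are hypothetical.

```python
# Minimal sketch of difficulty-level adjustment for learner-focused content.
# Level names, window size, and thresholds are illustrative assumptions.
LEVELS = ["basic", "intermediate", "advanced"]

def adjust_level(current_level, recent_scores, up=85, down=60, window=3):
    """Move the learner up or down one level based on the average of the
    most recent assessment scores."""
    if len(recent_scores) < window:
        return current_level                    # not enough evidence yet
    avg = sum(recent_scores[-window:]) / window
    idx = LEVELS.index(current_level)
    if avg >= up and idx < len(LEVELS) - 1:
        return LEVELS[idx + 1]                  # promote to harder content
    if avg <= down and idx > 0:
        return LEVELS[idx - 1]                  # demote to easier content
    return current_level

level = "intermediate"
for scores in ([90, 88, 92], [70, 72, 68], [55, 58, 50]):
    level = adjust_level(level, scores)
    print(scores, "->", level)
```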

Adaptive-learning control of vehicle dynamics using nonlinear backstepping technique (비선형 백스테핑 방식에 의한 차량 동력학의 적응-학습제어)

  • 이현배;국태용
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1997.10a / pp.636-639 / 1997
  • In this paper, a dynamic control scheme is proposed which not only compensates for the lateral and longitudinal dynamics but also deals with the yaw motion dynamics. Using the dynamic control technique together with adaptive and learning algorithms, the proposed controller is not only robust to disturbances and parameter uncertainties but can also learn the inverse dynamics model in steady state. Based on the proposed dynamic control scheme, a dynamic vehicle simulator is constructed to design and test various control techniques for 4-wheel-steering vehicles. (An illustrative sketch follows this entry.)

  • PDF
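
A minimal adaptive backstepping sketch for a scalar second-order system (standing in for a single vehicle-dynamics channel such as yaw motion) is given below. The plant model, nonlinearity, gains, and adaptation law are textbook-style assumptions, not the authors' vehicle controller.

```python
import numpy as np

def simulate_adaptive_backstepping(T=20.0, dt=0.001, c1=2.0, c2=2.0,
                                   gamma=5.0, theta_true=1.5):
    """Plant: x1' = x2, x2' = theta*f(x1) + u, with theta unknown.
    Backstepping defines z1 = x1 - x1d, virtual control a = x1d' - c1*z1,
    z2 = x2 - a, and u = -theta_hat*f - z1 - c2*z2 + a'. The adaptation law
    theta_hat' = gamma*z2*f drives the tracking error to zero despite the
    parameter uncertainty. All values here are illustrative."""
    f = lambda x: np.sin(x)                       # assumed nonlinearity
    x1, x2, theta_hat = 0.0, 0.0, 0.0
    errs = []
    for k in range(int(T / dt)):
        t = k * dt
        x1d, x1d_dot, x1d_ddot = np.sin(t), np.cos(t), -np.sin(t)  # reference
        z1 = x1 - x1d
        alpha = x1d_dot - c1 * z1                 # virtual control for x2
        z2 = x2 - alpha
        alpha_dot = x1d_ddot - c1 * (x2 - x1d_dot)
        u = -theta_hat * f(x1) - z1 - c2 * z2 + alpha_dot
        theta_hat += dt * gamma * z2 * f(x1)      # adaptation (learning) law
        x1 += dt * x2                             # Euler integration of plant
        x2 += dt * (theta_true * f(x1) + u)
        errs.append(abs(z1))
    return np.mean(errs[-1000:]), theta_hat

print(simulate_adaptive_backstepping())
```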

Design of Multi-Dynamic Neuro-Fuzzy Controller for Dynamic Systems Control (동적시스템 제어를 위한 다단동적 뉴로-퍼지 제어기 설계)

  • Cho, Hyun-Seob;Min, Jin-Kyoung
    • Proceedings of the KAIS Fall Conference / 2007.05a / pp.150-153 / 2007
  • The intent of this paper is to describe a neural network structure called the multi dynamic neural network (MDNN) and to examine how it can be used in developing a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the MDNN, are described. Computer simulations demonstrate the effectiveness of the proposed learning scheme using the MDNN. (An illustrative sketch follows this entry.)

  • PDF
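
The MDNN architecture itself is not described in this abstract, so the sketch below shows only the generic idea of training a small neural network to approximate the inverse kinematics of a two-link planar arm; the network size, learning rate, and restricted joint range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a two-link planar arm (illustrative plant)."""
    x = l1 * np.cos(theta[:, 0]) + l2 * np.cos(theta[:, 0] + theta[:, 1])
    y = l1 * np.sin(theta[:, 0]) + l2 * np.sin(theta[:, 0] + theta[:, 1])
    return np.stack([x, y], axis=1)

# Training data: joint angles and the end-effector positions they produce.
# The angle range is restricted to keep the inverse mapping single-valued.
theta = rng.uniform([0.2, 0.2], [1.3, 1.3], size=(2000, 2))
pos = fk(theta)

# A small two-layer network learns the inverse map: position -> joint angles.
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)
lr = 0.1
for epoch in range(3000):
    h = np.tanh(pos @ W1 + b1)                  # hidden layer
    out = h @ W2 + b2                           # predicted joint angles
    err = out - theta
    # Backpropagation of the mean-squared error (full batch).
    dW2 = h.T @ err / len(pos); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = pos.T @ dh / len(pos); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

test = fk(np.array([[0.7, 0.9]]))
pred = np.tanh(test @ W1 + b1) @ W2 + b2
print("target position:", test, "position reached by predicted angles:", fk(pred))
```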

A Learning Algorithm for a Recurrent Neural Network Base on Dual Extended Kalman Filter (두개의 Extended Kalman Filter를 이용한 Recurrent Neural Network 학습 알고리듬)

  • Song, Myung-Geun;Kim, Sang-Hee;Park, Won-Woo
    • Proceedings of the KIEE Conference / 2004.11c / pp.349-351 / 2004
  • The classical dynamic backpropagation learning algorithm suffers from slow learning speed and the difficulty of determining the learning parameters. The Extended Kalman Filter (EKF) is used effectively as a state estimation method for nonlinear dynamic systems. This paper presents a learning algorithm using a Dual Extended Kalman Filter (DEKF) for a Fully Recurrent Neural Network (FRNN). The DEKF learning algorithm gives the minimum-variance estimate of the weights and the hidden outputs. The proposed DEKF learning algorithm is applied to the system identification of a nonlinear SISO system and compared with the dynamic backpropagation learning algorithm. (An illustrative sketch follows this entry.)

  • PDF
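
The sketch below illustrates only the weight-estimation half of the dual-EKF idea, applied to a single recurrent neuron identified from toy SISO data; the neuron model, noise levels, and the use of an instantaneous Jacobian are assumptions, and the second filter that estimates the hidden outputs is omitted.

```python
import numpy as np

def ekf_train_recurrent_neuron(x, y, q=1e-5, r=1e-2, passes=5):
    """EKF-based weight estimation for a single recurrent neuron
    h_k = tanh(w0*h_{k-1} + w1*x_k + w2). Only the weight filter of the
    dual-EKF idea is sketched; h_{k-1} is taken as given (instantaneous
    Jacobian) and the hidden-output filter is omitted."""
    w = np.zeros(3)                        # weights treated as the EKF state
    P = np.eye(3)                          # weight error covariance
    for _ in range(passes):
        h_prev = 0.0
        for xk, yk in zip(x, y):
            h = np.tanh(w[0] * h_prev + w[1] * xk + w[2])   # predicted output
            H = (1 - h ** 2) * np.array([h_prev, xk, 1.0])  # output Jacobian
            P = P + q * np.eye(3)          # random-walk process noise on w
            S = H @ P @ H + r              # innovation variance (scalar)
            K = P @ H / S                  # Kalman gain
            w = w + K * (yk - h)           # measurement update of the weights
            P = P - np.outer(K, H) @ P
            h_prev = h
    return w

# Toy SISO identification data generated by a recurrent neuron with known
# weights, so the estimate can be compared against (0.6, 0.9, 0.1).
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500)
h, y = 0.0, []
for xk in x:
    h = np.tanh(0.6 * h + 0.9 * xk + 0.1)
    y.append(h)
print(ekf_train_recurrent_neuron(x, np.array(y)))
```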

Basin-Wide Multi-Reservoir Operation Using Reinforcement Learning (강화학습법을 이용한 유역통합 저수지군 운영)

  • Lee, Jin-Hee;Shim, Myung-Pil
    • Proceedings of the Korea Water Resources Association Conference / 2006.05a / pp.354-359 / 2006
  • The analysis of large-scale water resources systems is often complicated by the presence of multiple reservoirs and diversions, the uncertainty of unregulated inflows and demands, and conflicting objectives. Reinforcement learning is presented herein as a new approach to the challenging problem of stochastic optimization of multi-reservoir systems. The Q-Learning method, one of the reinforcement learning algorithms, is used for generating integrated monthly operation rules for the Keum River basin in Korea. The Q-Learning model is evaluated by comparison with implicit stochastic dynamic programming and sampling stochastic dynamic programming approaches. The evaluation of the stochastic basin-wide operational models considered several options relating to the choice of hydrologic state and discount factors, as well as various stochastic dynamic programming models. The Q-Learning model outperforms the other models in handling the uncertainty of inflows. (An illustrative sketch follows this entry.)

  • PDF
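
A tabular Q-learning sketch in the spirit of the described formulation is given below, reduced to a single toy reservoir with discretized storage states, discrete monthly releases, and a penalty-based reward for unmet demand and spill; all quantities and the reward shape are illustrative assumptions, not the Keum River model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-reservoir version: the state is a discretized storage level,
# actions are discrete monthly releases, and the reward penalizes unmet
# demand and spills. All numbers are illustrative assumptions.
N_STORAGE, N_ACTIONS, DEMAND, CAPACITY = 10, 4, 2, 9
RELEASES = np.array([0, 1, 2, 3])

def step(storage, action):
    inflow = rng.integers(0, 4)                       # stochastic inflow
    release = min(RELEASES[action], storage + inflow)
    new_storage = min(storage + inflow - release, CAPACITY)
    spill = max(storage + inflow - release - CAPACITY, 0)
    reward = -abs(DEMAND - release) - 2 * spill       # penalty-based reward
    return new_storage, reward

Q = np.zeros((N_STORAGE, N_ACTIONS))
alpha, discount, eps = 0.1, 0.95, 0.1
s = 5
for t in range(100_000):                              # monthly decisions
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    # Q-learning update toward the one-step lookahead value.
    Q[s, a] += alpha * (r + discount * Q[s2].max() - Q[s, a])
    s = s2

print("greedy release per storage level:", RELEASES[np.argmax(Q, axis=1)])
```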

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference / 2001.10a / pp.321-324 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, the optimal policy can be learned if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments, due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way, assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents. (An illustrative sketch follows this entry.)

  • PDF
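
A compact sketch of the weighted-module idea follows: each module keeps its own Q-table over its partial view of the state, the mediator sums module Q-values with per-module weights, and the weights drift toward modules whose estimates agree with rewarded actions. The weight-update rule here is an illustrative assumption, not the AMMQL rule from the paper.

```python
import numpy as np

class WeightedModularQ:
    """Sketch of adaptive mediation over Q-learning modules. Each module has
    its own Q-table over its partial state; action values are combined with
    per-module weights that are adapted from the modules' agreement with
    rewarded actions (an illustrative rule, not the paper's)."""

    def __init__(self, n_modules, n_states, n_actions,
                 alpha=0.1, gamma=0.9, beta=0.01):
        self.Q = np.zeros((n_modules, n_states, n_actions))
        self.w = np.ones(n_modules) / n_modules
        self.alpha, self.gamma, self.beta = alpha, gamma, beta
        self.rng = np.random.default_rng()

    def act(self, states, eps=0.1):
        """states[i] is module i's view of the world (e.g. ball-relative and
        goal-relative positions for a soccer agent)."""
        if self.rng.random() < eps:
            return int(self.rng.integers(self.Q.shape[2]))
        combined = sum(self.w[i] * self.Q[i, s] for i, s in enumerate(states))
        return int(np.argmax(combined))

    def update(self, states, action, reward, next_states):
        for i, (s, s2) in enumerate(zip(states, next_states)):
            target = reward + self.gamma * self.Q[i, s2].max()
            self.Q[i, s, action] += self.alpha * (target - self.Q[i, s, action])
            # Increase the weight of modules whose current estimate already
            # favored the rewarded action; then re-normalize.
            if reward > 0 and np.argmax(self.Q[i, s]) == action:
                self.w[i] += self.beta
        self.w /= self.w.sum()

# Usage: two modules, e.g. ball-relative and opponent-relative views.
agent = WeightedModularQ(n_modules=2, n_states=16, n_actions=4)
a = agent.act(states=(3, 7))
agent.update(states=(3, 7), action=a, reward=1.0, next_states=(4, 7))
```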

Implementation of a Learning Controller for Repetitive Gate Control of Biped Walking Robot (이족 보행 로봇의 반복 걸음새 제어를 위한 학습제어기의 구현)

  • Lim, Dong-Cheol;Oh, Sung-Nam;Kuc, Tae-Yong
    • Proceedings of the KIEE Conference / 2005.10b / pp.594-596 / 2005
  • This paper presents a learning controller for repetitive gait control of a biped robot. The learning control scheme consists of a feedforward learning rule and a linear feedback control input for stabilization of the learning system. The feasibility of learning control for biped robotic motion is shown via dynamic simulation and experimental results with a 24-DOF biped robot.

  • PDF