• Title/Summary/Keyword: Dynamic Learning


The Application of Industrial Inspection of LED

  • Xi, Wang;Chong, Kil-To
    • Proceedings of the IEEK Conference / 2009.05a / pp.91-93 / 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control on the basis of multi-agent technology. The structure is composed of six phase agents and one intersection agent. A wireless communication network provides the possibility of cooperation among the agents. As a kind of reinforcement learning, Q-learning is adopted as the algorithm of the control mechanism, which can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static method, which is more practical. Simulation results indicate that it is more effective than a traditional signal system.
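
The abstract gives no implementation detail, so the following is only a minimal sketch of the tabular Q-learning loop such a phase agent could run; the state encoding, reward, and the `ALPHA`/`GAMMA`/`EPSILON` constants are illustrative assumptions, not values from the paper.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning-rate/discount/exploration

# Q-table for one phase agent: (state, action) -> estimated value.
Q = defaultdict(float)

def choose_action(state, actions):
    """Epsilon-greedy selection over the phase agent's signal actions."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """One-step Q-learning update driven by the delayed reward."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```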


Path Planning for a Robot Manipulator based on Probabilistic Roadmap and Reinforcement Learning

  • Park, Jung-Jun;Kim, Ji-Hun;Song, Jae-Bok
    • International Journal of Control, Automation, and Systems / v.5 no.6 / pp.674-680 / 2007
  • The probabilistic roadmap (PRM) method, a popular path planning scheme for manipulators, can find a collision-free path by connecting the start and goal poses through a roadmap constructed by drawing random nodes in the free configuration space. PRM exhibits robust performance in static environments, but its performance is poor in dynamic environments. On the other hand, reinforcement learning, a behavior-based control technique, can deal with uncertainties in the environment. A reinforcement learning agent can establish a policy that maximizes the sum of rewards by selecting the optimal action in each state through iterative interactions with the environment. In this paper, we propose efficient real-time path planning that combines PRM and reinforcement learning to deal with uncertain dynamic environments and with environments similar to those previously learned. A series of experiments demonstrate that the proposed hybrid path planner can generate a collision-free path even in dynamic environments in which objects block the pre-planned global path. It is also shown that the hybrid path planner can adapt to similar, previously learned environments without significant additional learning.
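
As a companion to the abstract, here is a minimal sketch of the PRM construction step it describes (random sampling plus k-nearest-neighbor connection); `sample_free` and `collision_free_edge` are hypothetical caller-supplied predicates, and the paper's actual planner additionally layers reinforcement learning on top.

```python
import math

def build_prm(num_nodes, k, sample_free, collision_free_edge):
    """Build a probabilistic roadmap: draw random collision-free
    configurations and connect each one to its k nearest neighbors
    whenever the straight-line edge between them is collision-free."""
    nodes = [sample_free() for _ in range(num_nodes)]
    edges = {i: set() for i in range(num_nodes)}
    for i, q in enumerate(nodes):
        # k nearest neighbors by Euclidean distance in configuration space
        nearest = sorted(range(num_nodes), key=lambda j: math.dist(q, nodes[j]))[1:k + 1]
        for j in nearest:
            if collision_free_edge(q, nodes[j]):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges
```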

Explicit Dynamic Coordination Reinforcement Learning Based on Utility

  • Si, Huaiwei;Tan, Guozhen;Yuan, Yifu;Peng, Yanfei;Li, Jianping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.792-812 / 2022
  • Multi-agent systems often need to achieve the goal of learning more effectively for a task through coordination. Although the introduction of deep learning has addressed the state-space problem, multi-agent learning remains infeasible because of the size of the joint action space. Large-scale joint action spaces can be sparsified according to an implicit or explicit coordination structure, which ensures reasonable coordinated actions. In general, a multi-agent system is dynamic, which makes the relations among agents and the coordination structure dynamic as well. Therefore, an explicit coordination structure can better represent the coordinative relationships among agents and achieve better coordination between agents. Inspired by the maximization of social group utility, we dynamically construct a factor graph as an explicit coordination structure to express the coordinative relationships according to the utility among agents, and we estimate the joint action values based on local utility transfer on the factor graph. We present the application of these techniques to multiple intelligent vehicle systems, where the state and action spaces are large and there are many interactions among agents. The results on multiple intelligent vehicle systems demonstrate the efficiency and effectiveness of the proposed methods.
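
For intuition only, the toy example below scores joint actions against pairwise utility tables attached to factor-graph edges and picks the best one by brute force; the paper instead estimates joint action values by local utility transfer among factors, so this exhaustive search is just a stand-in showing what the coordination structure encodes.

```python
from itertools import product

def best_joint_action(agents, actions, pair_utils):
    """Pick the joint action maximizing the summed pairwise utilities on
    the factor-graph edges (brute force; feasible only for tiny examples)."""
    def total(joint):
        assign = dict(zip(agents, joint))
        return sum(u[assign[i], assign[j]] for (i, j), u in pair_utils.items())
    return max(product(actions, repeat=len(agents)), key=total)

# Two vehicles choosing lanes; the utility table penalizes sharing a lane.
utils = {("v1", "v2"): {("L", "L"): -1, ("L", "R"): 2, ("R", "L"): 2, ("R", "R"): -1}}
print(best_joint_action(["v1", "v2"], ["L", "R"], utils))  # -> ('L', 'R')
```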

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions: Part B / v.9B no.2 / pp.139-146 / 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, these algorithms can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments due to the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of large state spaces effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as a learning method for dynamic positioning of the robot soccer agent, and implement a robot soccer agent system called Cogitoniks.
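
A minimal sketch of the mediation step AMMQL adds over plain modular Q-learning might look as follows; the per-module state abstractions and how the weights are adapted from reward contributions are left abstract here, since the abstract does not spell them out.

```python
def combined_q(modules, weights, state, action):
    """AMMQL-style mediation (sketch): each module m keeps its own Q-table
    over its own state abstraction state[m]; the mediator weights each
    module by its estimated contribution to rewards. Plain modular
    Q-learning would instead combine the modules with fixed weights."""
    return sum(weights[m] * modules[m][(state[m], action)] for m in modules)

def select_action(modules, weights, state, actions):
    """Greedy action over the weighted sum of module Q-values."""
    return max(actions, key=lambda a: combined_q(modules, weights, state, a))
```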

Indirect Adaptive Decentralized Learning Control based Error Wave Propagation of the Vertical Multiple Dynamic Systems (수직다물체시스템의 오차파형전달방식 간접적응형 분산학습제어)

  • Lee Soo-Cheol
    • Proceedings of the Korea Society for Industrial Systems Conference / 2006.05a / pp.211-217 / 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented the iterative precision of linear decentralized learning control based on a p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots doing repetitive tasks, such as on an assembly line. This paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual robot controllers that are decentralized, treating each link as if it were independent of any coupling with other links. The error wave propagation method is demonstrated in a numerical simulation of a five-bar linkage as a vertical dynamic robot. The learning methods are shown to achieve iterative precision for each link at each time step in the repetition domain, and they can be applied to vertical multiple dynamic systems for precision quality assurance in industrial robots and medical equipment.
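
The abstract builds on a standard iterative-learning-control loop, so a generic P-type sketch is given below for orientation; `run_plant` and `gain` are illustrative placeholders, and the paper's actual contribution (indirect adaptive, decentralized, error-wave propagation) is not reproduced here.

```python
def ilc_trial(u, y_desired, run_plant, gain):
    """One repetition of a generic P-type iterative learning control update,
    u_{k+1}(t) = u_k(t) + gain * e_k(t): execute the task, measure the
    tracking error over the whole trial, and correct the stored input."""
    y = run_plant(u)           # run the repetitive task once
    error = y_desired - y      # tracking error over the trial
    return u + gain * error, error

# Usage sketch: repeat trials so the error shrinks across repetitions.
# for k in range(50):
#     u, e = ilc_trial(u, y_desired, run_plant, gain=0.5)
```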


Quality Assurance of Repeatability for the Vertical Multiple Dynamic Systems in Indirect Adaptive Decentralized Learning Control based Error Wave Propagation (오차파형전달방식 간접적응형 분산학습제어 알고리즘을 적용한 수직다물체시스템의 반복정밀도 보증)

  • Lee Soo-Cheol
    • Journal of Korea Society of Industrial Information Systems / v.11 no.2 / pp.40-47 / 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work, the authors presented the iterative precision of linear decentralized learning control based on a p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots doing repetitive tasks, such as on an assembly line. This paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual robot controllers that are decentralized, treating each link as if it were independent of any coupling with other links. The error wave propagation method is demonstrated in a numerical simulation of a five-bar linkage as a vertical dynamic robot. The learning methods are shown to achieve iterative precision for each link at each time step in the repetition domain, and they can be applied to vertical multiple dynamic systems for precision quality assurance in industrial robots and medical equipment.
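
Since this paper evaluates repeatability across repetitions, the sketch below records the per-trial error norm for the same generic P-type update used in the previous entry; it illustrates the repetition-domain precision measure, not the paper's algorithm.

```python
import numpy as np

def repeatability_history(u, y_desired, run_plant, gain, n_trials):
    """Track the error norm over repetitions: a decreasing sequence is the
    iterative-precision evidence the quality-assurance argument rests on."""
    history = []
    for _ in range(n_trials):
        y = run_plant(u)
        error = y_desired - y
        history.append(float(np.linalg.norm(error)))
        u = u + gain * error   # generic P-type learning update
    return history
```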


On the Web Based Interactive Teaching and Learning Material with Cinderella (Cinderella를 이용한 웹 기반 탐구형 교수-학습자료 연구)

  • 전명진;홍경희
    • Journal of the Korean School Mathematics Society / v.5 no.2 / pp.101-109 / 2002
  • Among interactive dynamic geometry software, Cinderella has merits in the accuracy of its algorithms and its compatibility with the internet. In this paper we briefly compare dynamic geometry software such as GSP, Cabri II, and Cinderella; we design web-based interactive learning materials using the exercise editor of Cinderella and some Java applets; and we propose a web-based interactive teaching and learning model in which an achievement test can be given by clicking on the help icon.


Decentralized Iterative Learning Control in Large Scale Linear Dynamic Systems (대규모 선형 시스템에서의 비집중 반복 학습제어)

  • ;Zeungnam Bien
    • The Transactions of the Korean Institute of Electrical Engineers / v.39 no.10 / pp.1098-1107 / 1990
  • Decentralized iterative learning control methods are presented for a class of large-scale interconnected linear dynamic systems, in which the iterative learning controller in each subsystem operates on its local subsystem exclusively, with no exchange of information between subsystems. Sufficient conditions for convergence of the algorithms are given, and numerical examples are illustrated to show the validity of the algorithms. In particular, the algorithms are useful for systems having large uncertainty in the interconnection terms.
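
A minimal sketch of the decentralized setting: each subsystem's learning law sees only its own tracking error, even though the plant couples all subsystems; `run_coupled_plant` is a hypothetical simulator of the full interconnected system.

```python
def decentralized_ilc_step(inputs, desired, run_coupled_plant, gains):
    """One learning iteration for N interconnected subsystems. Each local
    controller updates its own input from its own error only; the coupling
    between subsystems never enters the learning law, which is what makes
    the scheme decentralized (generic sketch, not the paper's algorithm)."""
    outputs = run_coupled_plant(inputs)   # one run of the full coupled system
    return [u + g * (yd - y)              # local P-type updates per subsystem
            for u, g, yd, y in zip(inputs, gains, desired, outputs)]
```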


AMN controller for dynamic control of robot manipulators (로봇 머니퓰레이터의 동력학 제어를 위한 AMN제어기)

  • 정재욱;국태용;이택종
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1997.10a / pp.1569-1572 / 1997
  • In this paper, we present an associative memory network (AMN) controller for dynamic robot control. The purpose of using an AMN is to reduce the size of the memory required for storing and recalling large amounts of data representing the input-output relationships of nonlinear functions. With this capability, the AMN can be applied to dynamic robot control, which is inherently nonlinear. The proposed AMN control scheme has advantages for inverse dynamics learning: no limitation on the input range and insensitivity to payload changes. Computer simulations show the effectiveness and feasibility of the proposed scheme.
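
The abstract does not specify the AMN architecture; a common choice in this family is a CMAC-style tiled associative memory, sketched below under that assumption, whose storage grows with the number of hashed tiles rather than with a full grid over the input space.

```python
import numpy as np

class AssociativeMemoryNetwork:
    """CMAC-style associative memory (an assumed AMN form): each input
    activates one tile per overlapping tiling, and the prediction is the
    sum of the activated tile weights."""

    def __init__(self, n_tilings=8, tile_size=0.5, n_weights=4096, lr=0.1):
        self.n_tilings, self.tile_size = n_tilings, tile_size
        self.w = np.zeros(n_weights)
        self.lr = lr

    def _tiles(self, x):
        # Hash each offset tiling's integer tile coordinates into the table.
        return [hash((t, tuple(int((xi + t * self.tile_size / self.n_tilings)
                                   // self.tile_size) for xi in x))) % len(self.w)
                for t in range(self.n_tilings)]

    def predict(self, x):
        return self.w[self._tiles(x)].sum()

    def train(self, x, target):
        """LMS update spread over the active tiles."""
        idx = self._tiles(x)
        self.w[idx] += self.lr * (target - self.predict(x)) / self.n_tilings
```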


Nonlinear Dynamic Manipulator Control Using DNP Controller (DNP 제어기에 의한 비선형 동적 매니퓰레이터 제어)

  • Cho, Hyeon-Seob;Kim, Hee-Sook;Ryu, In-Ho;Jang, Sung-Whan
    • Proceedings of the KIEE Conference / 1999.07b / pp.764-767 / 1999
  • In this paper, to achieve robust and accurate control of auto-equipment systems in which disturbances, system parameter variations, uncertainty, and so forth exist, a neural network controller called the dynamic neural processor (DNP) is designed. The architecture and learning algorithm of the proposed dynamic neural network, the DNP, are described, and computer simulations are provided to demonstrate the effectiveness of the proposed learning method using the DNP.
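
The abstract leaves the DNP internals unspecified; the sketch below assumes one common form of dynamic neuron, a first-order internal state with self-feedback followed by a sigmoid, purely as an illustration of what "dynamic" means here.

```python
import numpy as np

def dynamic_neuron_step(v, x, w_in, w_self, alpha):
    """One time step of an assumed dynamic neuron: the internal state v is a
    low-pass filtered sum of the weighted input and a self-feedback term,
    and the neuron output is a sigmoid of that state."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    v_next = (1 - alpha) * v + alpha * (np.dot(w_in, x) + w_self * sigmoid(v))
    return v_next, sigmoid(v_next)
```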
