• Title/Abstract/Keyword: Dynamic Learning

Search results: 1,186

Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk; Kim, In-Cheol
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.149.5-149 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning paradigm in which an agent learns, from indirect and delayed rewards, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...
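
For readers unfamiliar with the underlying technique, the following is a minimal sketch of tabular Q-learning of the kind the abstract refers to. It is not the paper's method; the `env` object, its `reset()`/`step()` interface, and the discrete `action_space` list are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Minimal tabular Q-learning sketch over a hypothetical discrete environment."""
    Q = defaultdict(float)            # Q[(state, action)] -> estimated return
    actions = env.action_space        # assumed: list of discrete actions

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # Model-free update from delayed reward (no environment model needed)
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```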


A fuzzy dynamic learning controller for chemical process control

  • Song, Jeong-Jun; Park, Sun-Won
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1991년도 한국자동제어학술회의논문집(국제학술편); KOEX, Seoul; 22-24 Oct. 1991 / pp.1950-1955 / 1991
  • A fuzzy dynamic learning controller is proposed and applied to the control of time-delayed, non-linear, and unstable chemical processes. The proposed controller can self-adjust its fuzzy control rules using dynamic information from the process during on-line control, and it can autonomously create new fuzzy control rules by learning from past control trends. The proposed controller shows better performance than the conventional fuzzy logic controller and the fuzzy self-organizing controller.
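
To make the idea of a rule-adapting fuzzy controller concrete, here is a toy sketch in Python. The triangular membership functions, the three-rule table, and the consequent-adjustment law are illustrative assumptions, not the authors' design.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

class AdaptiveFuzzyController:
    """Single-input fuzzy controller whose rule consequents are tuned on-line."""

    def __init__(self):
        # Linguistic terms for the error: negative, zero, positive.
        self.terms = [(-2.0, -1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 2.0)]
        # Rule consequents (control outputs), adjusted during control.
        self.consequents = np.array([-1.0, 0.0, 1.0])

    def output(self, error):
        w = np.array([tri(error, *t) for t in self.terms])
        if w.sum() == 0.0:
            return 0.0
        return float(w @ self.consequents / w.sum())  # weighted-average defuzzification

    def learn(self, error, rate=0.05):
        # Crude stand-in for "adjusting rules from past control trends": if the
        # error persists, strengthen the consequents of the rules that fired
        # (assumes a positive plant gain and error = setpoint - measurement).
        w = np.array([tri(error, *t) for t in self.terms])
        self.consequents += rate * w * error
```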


팀 학습행동이 팀 효과성에 미치는 영향과 팀 동적역량의 매개효과 (The Effects of Team Learning Behavior on Team Effectiveness and the Mediating Effects of Team Dynamic Capabilities)

  • 이균재; 홍아정
    • 지식경영연구 / Vol.15 No.4 / pp.57-78 / 2014
  • Since team performance has become one of the core factors in a company's success, companies are making every effort to raise team productivity. In this vein, the purpose of this study was to examine the influence of team learning behavior on team dynamic capabilities and team effectiveness, and to verify the mediating effect of team dynamic capabilities in corporations. A total of 312 employees were randomly selected to participate in a questionnaire survey. The results show that a positive correlation exists among team learning behavior, team dynamic capabilities, and team effectiveness, and that team dynamic capabilities mediate the relationship between team learning behavior and team effectiveness. Based on these findings, the study implies that learning behaviors among team members should be supported to improve team outcomes, and that HR representatives should help develop dynamic capabilities.
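
As a loose illustration of how a mediating effect of this kind is typically checked, the sketch below runs Baron-and-Kenny-style regressions on synthetic data. The variable names mirror the study's constructs, but the data, effect sizes, and procedure are placeholders, not the authors' survey analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 312
learning = rng.normal(size=n)                                    # team learning behavior
dynamic_cap = 0.6 * learning + rng.normal(scale=0.5, size=n)     # mediator
effectiveness = 0.5 * dynamic_cap + 0.2 * learning + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(dynamic_cap, learning)[1]                                 # learning -> mediator
b, c_prime = ols(effectiveness, dynamic_cap, learning)[1:3]       # mediator path, direct path
c = ols(effectiveness, learning)[1]                               # total effect

print(f"total={c:.2f}, direct={c_prime:.2f}, indirect={a * b:.2f}")
```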

중소기업의 기업가지향성, 학습지향성이 동적역량과 국제화 성과에 미치는 영향에 관한 연구: 여유자원의 조절효과를 중심으로 (The Effect of Entrepreneurial Orientation, Learning Orientation and Dynamic Capability on International Performance: Moderating Effects of Slack Resource)

  • 류동우; 김기근
    • 무역학회지 / Vol.45 No.5 / pp.161-179 / 2020
  • The importance of entering international markets has been continuously emphasized for small and medium-sized enterprises (SMEs). As a way to overcome the challenges of internationalization, prior studies have paid increasing attention to dynamic capability. The purpose of this study is to investigate the effects of entrepreneurial orientation, learning orientation, and dynamic capability on the international performance of SMEs. Drawing on an extensive review of the literature on the dynamic capability view and internationalization, hypotheses are developed and tested on a sample of 214 SMEs in South Korea using structural equation modeling. The analysis shows, first, that dynamic capability has a significant effect on international performance; second, that entrepreneurial orientation has a significant influence on dynamic capability; third, that learning orientation has a significant influence on dynamic capability; and lastly, that slack resources moderate the relationship between dynamic capability and international performance. The results indicate that entrepreneurial orientation and learning orientation were drivers of dynamic capability, and that dynamic capability was a significant driver of international performance. Finally, implications, limitations, and suggestions for future research are discussed.

에이전트 기반 시뮬레이션을 통한 디스패칭 시스템의 강화학습 모델 (A Reinforcement Learning Model for Dispatching System through Agent-based Simulation)

  • 김민정; 신문수
    • 산업경영시스템학회지 / Vol.47 No.2 / pp.116-123 / 2024
  • In the manufacturing industry, dispatching systems play a crucial role in enhancing production efficiency and optimizing production volume. However, in dynamic production environments, conventional static dispatching methods struggle to adapt to various environmental conditions and constraints, leading to problems such as reduced production volume, delays, and resource wastage. Therefore, there is a need for dynamic dispatching methods that can quickly adapt to changes in the environment. In this study, we aim to develop an agent-based model that considers dynamic situations through interaction between agents. Additionally, we intend to utilize the Q-learning algorithm, which possesses the characteristics of temporal difference (TD) learning, to automatically update and adapt to dynamic situations. This means that Q-learning can effectively consider dynamic environments by sensitively responding to changes in the state space and selecting optimal dispatching rules accordingly. The state space includes information such as inventory and work-in-process levels, order fulfilment status, and machine status, which are used to select the optimal dispatching rules. Furthermore, we aim to minimize total tardiness and the number of setup changes using reinforcement learning. Finally, we will develop a dynamic dispatching system using Q-learning and compare its performance with conventional static dispatching methods.
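
A minimal sketch of how the state encoding, dispatching-rule action set, and tardiness/setup reward described above might be wired into tabular Q-learning is shown below. The bucket sizes, rule names, and penalty weights are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

DISPATCH_RULES = ["FIFO", "SPT", "EDD"]          # candidate dispatching rules (illustrative)

def encode_state(inventory, wip, late_orders, machine_busy):
    # Discretize shop-floor signals into a small tuple usable as a Q-table key.
    return (min(inventory // 10, 5), min(wip // 5, 5), min(late_orders, 3), int(machine_busy))

def reward(tardiness, setup_changes, w_tard=1.0, w_setup=0.5):
    # Negative cost: the agent minimizes total tardiness and setup changes.
    return -(w_tard * tardiness + w_setup * setup_changes)

Q = defaultdict(float)

def choose_rule(state, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(DISPATCH_RULES)
    return max(DISPATCH_RULES, key=lambda r: Q[(state, r)])

def td_update(state, rule, r, next_state, alpha=0.1, gamma=0.95):
    # Temporal-difference (Q-learning) update after observing the next shop state.
    best_next = max(Q[(next_state, rr)] for rr in DISPATCH_RULES)
    Q[(state, rule)] += alpha * (r + gamma * best_next - Q[(state, rule)])
```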

동적 경쟁학습을 수행하는 병렬 신경망 (Parallel neural networks with dynamic competitive learning)

  • 김종완
    • 전자공학회논문지B / Vol.33B No.3 / pp.169-175 / 1996
  • In this paper, a new parallel neural network system that performs dynamic competitive learning is proposed. Conventional learning methods utilize the full dimension of the original input patterns; however, a particular attribute or dimension of the input patterns does not necessarily contribute to classification. The proposed system consists of parallel neural networks with reduced input dimensions in order to take advantage of the information in each dimension of the input patterns. Consensus schemes were developed to decide the final output of the parallel networks. Each network performs a competitive learning that dynamically generates output neurons as learning proceeds, and each output neuron has its own class threshold. Because the class threshold is adjusted during the proposed dynamic learning phase, the proposed neural network adapts properly to the input pattern distribution. Experimental results with remote sensing and speech data indicate the improved performance of the proposed method compared to conventional learning methods.
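
The following is a minimal sketch of competitive learning with dynamically created output neurons, in the spirit of the abstract: a new prototype neuron is added whenever no existing neuron falls within its class threshold. The distance measure and threshold handling are a plausible reading of the abstract, not the paper's exact rule.

```python
import numpy as np

class DynamicCompetitiveLayer:
    """Competitive layer that grows output neurons as learning proceeds."""

    def __init__(self, threshold=1.0, lr=0.1):
        self.prototypes = []        # one weight vector per output neuron
        self.threshold = threshold  # class threshold shared here for simplicity
        self.lr = lr

    def train_step(self, x):
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            winner = int(np.argmin(dists))
            if dists[winner] <= self.threshold:
                # Move the winning neuron toward the input pattern.
                self.prototypes[winner] += self.lr * (x - self.prototypes[winner])
                return winner
        # No neuron is close enough: create a new output neuron for this region.
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1
```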


A study on the Adaptive Controller with Chaotic Dynamic Neural Networks

  • Kim, Sang-Hee; Ahn, Hee-Wook; Wang, Hua O.
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.7 No.4 / pp.236-241 / 2007
  • This paper presents an adaptive controller using chaotic dynamic neural networks (CDNN) for nonlinear dynamic systems. A new dynamic backpropagation learning method for the proposed chaotic dynamic neural networks is developed for efficient learning, and this learning method includes a convergence condition for improving the stability of the chaotic neural networks. The proposed CDNN is applied to the system identification of a chaotic system and to the adaptive controller. The simulation results show good performance in the identification of the Lorenz equations and in the adaptive control of a nonlinear system, since the CDNN has fast learning characteristics and robust adaptability to nonlinear dynamic systems.
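
As a rough illustration of neural identification of a chaotic system, the sketch below trains a small feedforward network on the one-step map of the Lorenz equations using plain backpropagation. It stands in for the general task only; the paper's chaotic dynamic network structure and its dynamic backpropagation rule are not reproduced here.

```python
import numpy as np

def lorenz_traj(n=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
    """Generate a Lorenz trajectory by forward-Euler integration."""
    xs = np.empty((n, 3))
    x = np.array([1.0, 1.0, 1.0])
    for i in range(n):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        xs[i] = x
    return xs

data = lorenz_traj()
X, Y = data[:-1], data[1:]                       # learn the one-step map x_t -> x_{t+1}

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.1, size=(32, 3)), np.zeros(3)
lr = 1e-3

for epoch in range(200):
    H = np.tanh(X @ W1 + b1)                     # hidden layer
    P = H @ W2 + b2                              # predicted next state
    err = P - Y
    # Backpropagate the mean one-step prediction error
    gW2, gb2 = H.T @ err / len(X), err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1, gb1 = X.T @ dH / len(X), dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```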

적응 뉴럴 컴퓨팅 방법을 이용한 동적 시스템의 특성 모델링 (Characteristics Modeling of Dynamic Systems Using Adaptive Neural Computation)

  • 김병호
    • 제어로봇시스템학회논문지 / Vol.13 No.4 / pp.309-314 / 2007
  • This paper presents an adaptive neural computation algorithm for multi-layered neural networks applied to identifying the characteristic function of dynamic systems. The main feature of the proposed algorithm is that the initial learning rate for the employed neural network is assigned systematically, and the assigned learning rate can then be adjusted empirically for effective neural learning. By employing this approach, enhanced modeling of dynamic systems is possible. The effectiveness of the approach is verified by simulations.
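
The sketch below illustrates the general pattern of assigning an initial learning rate systematically and then adjusting it empirically from the observed error trend ("bold driver" style adjustment). The specific formulas are assumptions for illustration, not the paper's assignment rule.

```python
import numpy as np

def initial_learning_rate(fan_in, target_scale=0.1):
    # Scale the initial rate by the layer's fan-in so early updates stay bounded.
    return target_scale / np.sqrt(fan_in)

def adjust_learning_rate(lr, prev_error, curr_error, grow=1.05, shrink=0.5):
    # Reward a steady error decrease; back off sharply when the error rises.
    return lr * grow if curr_error < prev_error else lr * shrink
```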

Dynamic CBDT : Q-learning의 강화기법을 응용한 CBDT 확장 기법 (Dynamic CBDT : Extension of CBDT via Reinforcement Method of Q-learning)

  • 진영균; 장형수
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2006년도 가을 학술발표논문집 Vol.33 No.2 (B) / pp.194-199 / 2006
  • In this paper, we propose "Dynamic CBDT", a new decision-making algorithm that extends the "Case-based Decision Theory" (CBDT) algorithm for decision making under uncertainty to sequences of dynamically linked decision problems by applying the reinforcement technique of Q-learning, a representative reinforcement learning method. We then verify the efficiency of Dynamic CBDT relative to the CBDT algorithm through Tetris experiments.
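
A hedged sketch of the kind of extension the abstract describes is given below: a CBDT-style similarity-weighted action value combined with a Q-learning-style bootstrapped update of stored case utilities. The similarity function, memory layout, and update rule are illustrative assumptions, not the authors' Dynamic CBDT.

```python
class DynamicCBDT:
    """Case-based action values with a Q-learning-style bootstrapped case utility."""

    def __init__(self, actions, similarity, gamma=0.9):
        self.actions = actions
        self.similarity = similarity     # similarity(problem_a, problem_b) -> [0, 1]
        self.memory = []                 # cases: (problem, action, utility)
        self.gamma = gamma

    def value(self, problem, action):
        # CBDT: similarity-weighted sum of utilities of past cases for this action.
        return sum(self.similarity(problem, p) * u
                   for p, a, u in self.memory if a == action)

    def choose(self, problem):
        return max(self.actions, key=lambda a: self.value(problem, a))

    def update(self, problem, action, reward, next_problem):
        # Q-learning-style target: immediate reward plus discounted best next value,
        # stored as the utility of a newly memorized case.
        target = reward + self.gamma * max(self.value(next_problem, a) for a in self.actions)
        self.memory.append((problem, action, target))
```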


Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk; Cho, Soo-Sin; Kim, In-Cheol
    • 한국지능정보시스템학회:학술대회논문집 / 한국지능정보시스템학회 2001년도 The Pacific Asian Conference On Intelligent Systems 2001 / pp.59-64 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning paradigm in which an agent learns, from indirect and delayed rewards, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of a large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
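
To illustrate the mediation idea behind AMMQL, the sketch below keeps a separate Q-table per module and lets a mediator combine module values with weights that track each module's contribution to recent rewards. The weight-update rule and normalization are illustrative assumptions, not the exact AMMQL scheme.

```python
import random
from collections import defaultdict

class AdaptiveMediator:
    """Combines per-module Q-values with reward-driven, adaptive weights."""

    def __init__(self, modules, actions, eta=0.05):
        self.modules = modules                       # each module: defaultdict Q[(state, action)]
        self.weights = [1.0 / len(modules)] * len(modules)
        self.actions = actions
        self.eta = eta

    def combined_value(self, states, action):
        # 'states' holds each module's (partial) view of the global state.
        return sum(w * Q[(s, action)]
                   for w, Q, s in zip(self.weights, self.modules, states))

    def choose(self, states, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.combined_value(states, a))

    def update_weights(self, states, action, reward):
        # Increase the weight of modules whose value estimate agreed with the reward.
        scores = [Q[(s, action)] * reward for Q, s in zip(self.modules, states)]
        for i, score in enumerate(scores):
            self.weights[i] += self.eta * score
        total = sum(abs(w) for w in self.weights) or 1.0
        self.weights = [w / total for w in self.weights]
```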
