• Title/Abstract/Keyword: delayed reward

Search results: 22

Q-learning for Intersection Traffic Flow Control Based on Agents

  • 주선;정길도
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집
    • /
    • pp.94-96
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control on the basis of multi-agent technology. The structure is composed of six phase agents and one intersection agent. A wireless communication network provides the possibility of cooperation among the agents. Q-learning, a kind of reinforcement learning that can acquire optimal control strategies from delayed reward, is adopted as the control algorithm; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that the method is more effective than a traditional signal system.
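The listing includes no code; purely as an illustration of the tabular Q-learning the abstract describes (a signal-phase agent learning optimal control from delayed reward), here is a minimal Python sketch. The state encoding, the reward signal (negative queue length is a common proxy for delay), and the environment interface are assumptions for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for one signal-phase agent.
# Hypothetical interface: the environment supplies a discretized traffic
# state and a reward such as the negative total queue length.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
PHASES = range(6)                        # the paper uses six phase agents

Q = defaultdict(float)                   # Q[(state, phase)] -> value estimate

def choose_phase(state):
    """Epsilon-greedy selection over signal phases."""
    if random.random() < EPSILON:
        return random.choice(list(PHASES))
    return max(PHASES, key=lambda a: Q[(state, a)])

def update(state, phase, reward, next_state):
    """One-step Q-learning backup; this is how delayed reward propagates."""
    best_next = max(Q[(next_state, a)] for a in PHASES)
    Q[(state, phase)] += ALPHA * (reward + GAMMA * best_next - Q[(state, phase)])
```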


액터-크리틱 모형기반 포트폴리오 연구 (A Study on the Portfolio Performance Evaluation using Actor-Critic Reinforcement Learning Algorithms)

  • 이우식
    • 한국산업융합학회 논문집
    • /
    • Vol. 25 No. 3
    • /
    • pp.467-476
    • /
    • 2022
  • The Bank of Korea raised the benchmark interest rate by a quarter percentage point to 1.75 percent per year, and analysts predict that South Korea's policy rate will reach 2.00 percent by the end of calendar year 2022. Furthermore, because market volatility has increased significantly owing to a variety of factors, including rising rates and inflation, many investors have struggled to meet their financial objectives or deliver returns. In this situation, banks and financial institutions are attempting to provide robo-advisors that manage client portfolios without human intervention. In this regard, determining the best hyper-parameter combination is becoming increasingly important. This study compares several activation functions of the Deep Deterministic Policy Gradient (DDPG) and Twin-Delayed Deep Deterministic Policy Gradient (TD3) algorithms for choosing a sequence of actions that maximizes long-term reward. According to the results, DDPG and TD3 outperformed their benchmark index. One reason for this is that we need to understand the action probabilities in order to choose an action and receive a reward, which we then compare to the state value to determine an advantage. As interest in machine learning has grown and research into deep reinforcement learning has become more active, finding an optimal hyper-parameter combination for DDPG and TD3 has become increasingly important.
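As a rough sketch of what comparing activation functions in a DDPG/TD3 actor can look like, the snippet below treats the hidden activation as the hyper-parameter under study. The layer sizes, the softmax output head (long-only portfolio weights summing to one), and the candidate activation list are assumptions for illustration, not the study's actual configuration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """DDPG/TD3-style deterministic actor mapping a state to portfolio weights."""
    def __init__(self, n_assets: int, hidden: int = 64,
                 activation: nn.Module = nn.ReLU()):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_assets, hidden), activation,
            nn.Linear(hidden, hidden), activation,
            nn.Linear(hidden, n_assets),
            nn.Softmax(dim=-1),  # long-only weights that sum to 1
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Candidate activations one might grid-search over in such a comparison:
for act in (nn.ReLU(), nn.Tanh(), nn.LeakyReLU(), nn.ELU()):
    actor = Actor(n_assets=10, activation=act)
    weights = actor(torch.randn(1, 10))  # dummy state, for shape checking only
```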

The Application of Industrial Inspection of LED

  • 왕숙;정길도
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집
    • /
    • pp.91-93
    • /
    • 2009

A Navigation System for Mobile Robot

  • 장원량;정길도
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집
    • /
    • pp.118-120
    • /
    • 2009

임무수행을 위한 개선된 강화학습 방법 (An Improved Reinforcement Learning Technique for Mission Completion)

  • 권우영;이상훈;서일홍
    • 대한전기학회논문지:시스템및제어부문D
    • /
    • Vol. 52 No. 9
    • /
    • pp.533-539
    • /
    • 2003
  • Reinforcement learning (RL) has been widely used as the learning mechanism of artificial life systems. However, RL usually suffers from slow convergence to the optimum state-action sequence or a sequence of stimulus-response (SR) behaviors, and may not work correctly in non-Markov processes. In this paper, first, to cope with the slow-convergence problem, state-action pairs that are considered disturbances to the optimum sequence are eliminated from long-term memory (LTM), where such disturbances are found by a shortest-path-finding algorithm. This process is shown to give the system an enhanced learning speed. Second, to partly solve the non-Markov problem, if a stimulus is frequently met during the search process, the stimulus is classified as a sequential percept for a non-Markov hidden state, and thus a correct behavior for a non-Markov hidden state can be learned as in a Markov environment. To show the validity of our proposed learning techniques, several simulation results are illustrated.
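As a hedged sketch of the first idea above (state-action pairs off the shortest experienced path are treated as disturbances and eliminated from long-term memory), consider the following. The transition-graph and LTM representations, and the use of plain BFS as the shortest-path finder, are illustrative assumptions rather than the paper's actual data structures.

```python
from collections import deque

def shortest_path_pairs(transitions, start, goal):
    """BFS over experienced transitions {state: {action: next_state}};
    returns the (state, action) pairs on one shortest start-to-goal path
    (empty set if the goal was never reached)."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            break
        for a, nxt in transitions.get(s, {}).items():
            if nxt not in parent:
                parent[nxt] = (s, a)
                queue.append(nxt)
    pairs, s = set(), goal
    while parent.get(s) is not None:
        prev, a = parent[s]
        pairs.add((prev, a))
        s = prev
    return pairs

def prune_ltm(ltm, transitions, start, goal):
    """Drop state-action entries that are disturbances, i.e. not on the path."""
    keep = shortest_path_pairs(transitions, start, goal)
    return {sa: q for sa, q in ltm.items() if sa in keep}
```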

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • 한국지능정보시스템학회:학술대회논문집
    • /
    • 한국지능정보시스템학회 2001년도 The Pacific Asian Conference on Intelligent Systems 2001
    • /
    • pp.59-64
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning technique in which an agent learns, from indirect, delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of a large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
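To make the mediation step concrete, here is a minimal sketch of how AMMQL's adaptive weighting could differ from fixed-weight MQL. The module interface (each module exposing a q(state, action) estimate) and the weight-update rule are illustrative assumptions, not the authors' exact formulation.

```python
def select_action(modules, weights, state, actions):
    """Pick the action maximizing the weighted sum of per-module Q-values;
    plain MQL would use fixed, equal weights here."""
    def combined_q(a):
        return sum(w * m.q(state, a) for m, w in zip(modules, weights))
    return max(actions, key=combined_q)

def update_weights(weights, contributions, lr=0.05):
    """Shift weight toward the modules that contributed more to recent reward,
    then renormalize so the weights stay a convex combination."""
    total = sum(contributions) or 1.0
    targets = [c / total for c in contributions]
    new = [w + lr * (t - w) for w, t in zip(weights, targets)]
    norm = sum(new)
    return [w / norm for w in new]
```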


Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • 한국시뮬레이션학회:학술대회논문집
    • /
    • 한국시뮬레이션학회 2001년도 The Seoul International Simulation Conference
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning technique in which an agent learns, from indirect, delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of a large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.


추계학적 확률과정을 이용한 경사제 피복재의 예방적 유지관리를 위한 조건기반모형 (Condition-Based Model for Preventive Maintenance of Armor Units of Rubble-Mound Breakwaters using Stochastic Process)

  • 이철응
    • 한국해안·해양공학회논문집
    • /
    • Vol. 28 No. 4
    • /
    • pp.191-201
    • /
    • 2016
  • A condition-based model that enables the preventive maintenance of the armor units of rubble-mound breakwaters was developed using stochastic processes. The model determines the optimal time at which repair should be carried out most economically under the perfect-repair condition. The RRP (Renewal Reward Process)-based economic model developed in this study can take the interest rate into account and can also treat costs, which previous studies regarded as constants, as time-dependent random variables. A functional expression that accounts for cumulative damage, the serviceability limit, and the importance of the structure is presented, so that ABM (Age-Based Maintenance) can easily be extended to CBM (Condition-Based Maintenance). A method for mathematically estimating the coefficients of the expression is also presented. The armor stones of a rubble-mound breakwater were analyzed using two stochastic processes, the WP (Wiener Process) and the GP (Gamma Process). By calculating the expected total cost rate over time as a function of the serviceability limit, the interest rate, and the importance of the structure, the optimal time for preventive maintenance, at which the expected total cost rate becomes minimal, could easily be estimated. For the same serviceability limit, the higher the interest rate, the later the optimal time and, accordingly, the lower the expected total cost rate. In addition, the GP predicted the optimal time more conservatively than the WP. Finally, it was found that, under the same conditions, the more important the structure, the more frequently preventive maintenance should be carried out.
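The paper derives the expected total cost rate analytically from renewal reward theory; purely to illustrate the idea of locating the preventive-maintenance time that minimizes that rate under Gamma-process damage, here is a Monte Carlo sketch. All parameter values, the serviceability limit, and the cost figures are invented, and the interest rate is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
SHAPE_RATE, SCALE = 0.5, 1.0   # Gamma-increment parameters per unit time (invented)
LIMIT = 10.0                   # serviceability limit on cumulative damage (invented)
C_P, C_F = 1.0, 5.0            # preventive vs. failure repair cost (invented)

def cost_rate(T, n_paths=5000, dt=0.25):
    """Long-run expected total cost rate E[cycle cost] / E[cycle length]
    when repair is scheduled at time T: a renewal cycle ends at T (cost C_P)
    or earlier when damage crosses LIMIT (cost C_F)."""
    steps = int(T / dt)
    inc = rng.gamma(SHAPE_RATE * dt, SCALE, size=(n_paths, steps))
    damage = np.cumsum(inc, axis=1)          # stationary Gamma-process paths
    failed = damage >= LIMIT
    any_fail = failed.any(axis=1)
    t_fail = (failed.argmax(axis=1) + 1) * dt  # first crossing time per path
    costs = np.full(n_paths, C_P)
    lengths = np.full(n_paths, float(T))
    costs[any_fail] = C_F
    lengths[any_fail] = t_fail[any_fail]
    return costs.mean() / lengths.mean()

# Scan candidate repair times and keep the one with the minimum cost rate.
candidates = np.arange(2.0, 40.0, 2.0)
optimal_T = min(candidates, key=cost_rate)
```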

호텔기업의 자기희생적 리더십, 직장영성, 상사호감 및 혁신행동의 구조적 관계 연구 (A Study on the Structural Relationships between Self-Sacrificial Leadership, Employees' Workplace Spirituality, Supervisor Likeability and Innovation Behavior of Hotel Enterprise)

  • 박종철;최현정
    • 한국조리학회지
    • /
    • Vol. 24 No. 3
    • /
    • pp.177-187
    • /
    • 2018
  • This study began with the expectation that the workplace spirituality fostered by the self-sacrificial leadership that hotel employees perceive in their company could positively affect supervisor likeability and innovation behavior. The results can be summarized as follows. The exercise of authority and self-sacrifice in reward distribution are very important in the social exchange relationship. The experience of joy and meaning, which comes from behavior consistent with one's values, fulfilled the sense of accomplishment that employees realized as a value and increased their satisfaction in pursuing their supervisor's values and in participating in and contributing to the world. Moreover, hotel employees tended to favor supervisors who gave up or refrained from exercising authority or delegated it to subordinates, or who delayed or gave up rewards that should have been distributed to themselves. Self-sacrifice in the supervisor's job assignment is also considered essential for inducing the innovative behavior of subordinates and is regarded as desirable behavior or a desirable quality.

강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현 (Design and implementation of Robot Soccer Agent Based on Reinforcement Learning)

  • 김인철
    • 정보처리학회논문지B
    • /
    • Vol. 9B No. 2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we propose a new reinforcement learning method for the dynamic positioning of each agent in such an environment. Reinforcement learning is a machine learning method in which an agent learns, on the basis of the indirect, delayed reward it receives from the environment, an optimal action policy that maximizes the cumulative reward. Reinforcement learning therefore differs greatly from supervised learning in that input-output pairs are not directly provided as training examples. Moreover, model-free reinforcement learning algorithms such as Q-learning do not require learning or predefining any model of the surrounding environment. Nevertheless, these algorithms can converge to the optimal action policy if the agent can experience every state-action pair sufficiently often. The biggest problem of simple reinforcement learning methods, however, is that they are hard to apply directly to more complex environments because of the excessively large state space. To solve this problem, this study proposes Adaptive Mediation-based Modular Q-Learning (AMMQL), which improves on the existing Modular Q-Learning (MQL). Whereas conventional simple modular Q-learning combines the results of the learning modules in a very simple, fixed way, AMMQL combines the learning results of the modules more flexibly by assigning different weights to the modules according to their contribution to the reward. AMMQL can therefore not only solve the problem of a large state space but also provide higher adaptability to dynamic environmental changes. In this paper, AMMQL was used as the learning method for the dynamic positioning of robot soccer agents, and the Cogitoniks soccer agent system was implemented on this basis.