• Title/Summary/Keyword: Q-learning (Q 학습)


A Strategy for improving Performance of Q-learning with Prediction Information (예측 정보를 이용한 Q-학습의 성능 개선 기법)

  • Lee, Choong-Hyeon;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.105-116
    • /
    • 2007
  • Agent learning is becoming increasingly useful in game environments, but it takes a long time to produce satisfactory results, so a good method for shortening the learning time is needed. In this paper, we present a strategy for improving the learning performance of Q-learning with prediction information. The method records the action chosen at each state during the Q-learning algorithm in the P-table of a prediction module, searches that table for entries with high frequency, and uses those values to apply a second compensation to the Q-table. Our experiments show that the approach yields an average efficiency improvement of 9% after the midpoint of the learning runs, and that the more actions there are per state, the greater the gain. (A sketch follows this entry.)

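The mechanism described above lends itself to a compact tabular sketch: alongside the Q-table, a P-table counts how often each action has been chosen in each state, and frequently chosen actions receive an extra compensation when their Q-value is updated. The abstract does not give the exact bonus formula, so `BONUS_WEIGHT` and `FREQ_THRESHOLD` below are assumed, illustrative parameters.

```python
import random
from collections import defaultdict

# Minimal sketch: tabular Q-learning plus a prediction (P) table that counts chosen
# actions per state. The bonus applied to frequently chosen actions is an assumption;
# the paper's exact second-compensation formula is not given in the abstract.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
BONUS_WEIGHT, FREQ_THRESHOLD = 0.05, 5     # illustrative, not from the paper

Q = defaultdict(float)   # Q[(state, action)] -> estimated return
P = defaultdict(int)     # P[(state, action)] -> how often the action was chosen here

def choose_action(state, actions):
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    P[(state, action)] += 1                                   # record the choice
    best_next = max(Q[(next_state, a)] for a in next_actions)
    target = reward + GAMMA * best_next                       # ordinary TD target
    if P[(state, action)] >= FREQ_THRESHOLD:                  # frequent action here?
        target += BONUS_WEIGHT * Q[(state, action)]           # assumed extra compensation
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```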

Neural-Q method based on KFD regression (KFD 회귀를 이용한 뉴럴-큐 기법)

  • 조원희;김영일;박주영
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.05a
    • /
    • pp.85-88
    • /
    • 2003
  • Q-learning, one method of reinforcement learning, has recently been applied successfully to the Linear Quadratic Regulation (LQR) problem. In particular, it can solve the problem through learning from suitable inputs and outputs alone, without specific information about the system model parameters, so it can be a very practical approach depending on the situation. The Neural-Q method replaces the Q-value of Q-learning with the output of an MLP (multilayer perceptron) neural network, making it possible to handle optimal control problems for nonlinear systems. However, the Neural-Q method first fixes the network structure and then trains it with the backpropagation algorithm, so the structure must be found by trial and error, and the connection weights tend to converge to local optima under backpropagation. In this paper, we propose a Q-function approximation technique that uses KFD regression as the tool for Neural-Q learning and derive the related equations. Simulation studies are used to examine the applicability of the proposed Neural-Q method. (A sketch follows this entry.)

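As a rough illustration of replacing a neural Q-approximator with a kernel regressor, the sketch below fits Q-values over concatenated state-action vectors with kernel ridge regression. Kernel ridge regression is only a stand-in for the paper's KFD regression, and the Gaussian kernel width and ridge term are assumed values.

```python
import numpy as np

# Minimal sketch: Q-value approximation with a kernel regressor instead of an MLP.
# Kernel ridge regression stands in for KFD regression; the kernel width and ridge
# strength are illustrative assumptions.
def rbf_kernel(X, Y, width=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

class KernelQ:
    def __init__(self, ridge=1e-3, width=1.0):
        self.ridge, self.width = ridge, width
        self.X, self.alpha = None, None

    def fit(self, sa_pairs, td_targets):
        # sa_pairs: (n, d) concatenated state-action vectors; td_targets: (n,) values
        K = rbf_kernel(sa_pairs, sa_pairs, self.width)
        self.X = sa_pairs
        self.alpha = np.linalg.solve(K + self.ridge * np.eye(len(K)), td_targets)

    def predict(self, sa_pairs):
        return rbf_kernel(sa_pairs, self.X, self.width) @ self.alpha
```

In use, one would collect transitions, form TD targets r + γ max<sub>a'</sub> Q(s', a') with the current model, and refit the regressor periodically.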

Improved Deep Q-Network Algorithm Using Self-Imitation Learning (Self-Imitation Learning을 이용한 개선된 Deep Q-Network 알고리즘)

  • Sunwoo, Yung-Min;Lee, Won-Chang
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.644-649
    • /
    • 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that makes an agent find an optimal policy by exploiting its past good experiences. When Self-Imitation Learning is combined with reinforcement learning algorithms that have an actor-critic architecture, it shows performance improvements in various game environments; however, its applications have been limited to such algorithms. In this paper, we propose a method of applying Self-Imitation Learning to Deep Q-Network, a value-based deep reinforcement learning algorithm, and train it in various game environments. By comparing the proposed algorithm with ordinary Deep Q-Network training results, we show that Self-Imitation Learning can be applied to Deep Q-Network and improves its performance. (A sketch follows this entry.)
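
A compact way to picture the combination is a DQN loss with an added self-imitation term that pushes Q(s, a) up toward past Monte-Carlo returns whenever those returns exceed the current estimate. The sketch below assumes PyTorch networks `q_net` and `target_net` and replay batches supplied elsewhere, and uses the common SIL formulation rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

# Minimal sketch: one-step DQN loss plus a Self-Imitation Learning (SIL) term.
# q_net/target_net and the two replay batches are assumed to exist elsewhere;
# GAMMA and SIL_WEIGHT are illustrative values.
GAMMA, SIL_WEIGHT = 0.99, 1.0

def dqn_sil_loss(q_net, target_net, batch, sil_batch):
    s, a, r, s2, done = batch                       # tensors; a is int64, done is 0/1 float
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1.0 - done) * target_net(s2).max(1).values
    dqn_loss = F.smooth_l1_loss(q_sa, target)       # standard DQN part

    # SIL part: imitate past good experiences whose Monte-Carlo return R exceeds the
    # current estimate, i.e. push Q(s, a) up toward R (clipped at zero otherwise)
    sil_s, sil_a, sil_R = sil_batch
    sil_q = q_net(sil_s).gather(1, sil_a.unsqueeze(1)).squeeze(1)
    advantage = torch.clamp(sil_R - sil_q, min=0.0)
    sil_loss = 0.5 * (advantage ** 2).mean()

    return dqn_loss + SIL_WEIGHT * sil_loss
```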

Optimization of Stock Trading System based on Multi-Agent Q-Learning Framework (다중 에이전트 Q-학습 구조에 기반한 주식 매매 시스템의 최적화)

  • Kim, Yu-Seop;Lee, Jae-Won;Lee, Jong-Woo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.2
    • /
    • pp.207-212
    • /
    • 2004
  • This paper presents a reinforcement learning framework for stock trading systems. Trading system parameters are optimized by the Q-learning algorithm, and neural networks are adopted for value approximation. In this framework, cooperative multiple agents are used to efficiently integrate global trend prediction and local trading strategy to obtain better trading performance. Agents communicate with each other by sharing training episodes and learned policies, while keeping the overall scheme of conventional Q-learning. Experimental results on KOSPI 200 show that a trading system based on the proposed framework outperforms the market average and makes appreciable profits. Furthermore, in terms of risk management, the system is superior to a system trained by supervised learning. (A sketch follows this entry.)
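
The cooperative structure can be pictured with two tabular learners, one estimating the global trend and one choosing trades conditioned on that estimate; in the paper the agents additionally use neural networks for value approximation. The state encodings, reward definition, and `trading_step` interface below are illustrative assumptions.

```python
import random
from collections import defaultdict

# Minimal sketch: two cooperating tabular Q-learning agents, one predicting the global
# trend and one choosing buy/hold/sell actions conditioned on that prediction. The state
# encodings, reward definition, and sharing scheme are illustrative assumptions; the
# paper additionally uses neural networks for value approximation.
ALPHA, GAMMA = 0.1, 0.9

class QAgent:
    def __init__(self, actions):
        self.Q = defaultdict(float)
        self.actions = actions

    def act(self, state, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def learn(self, s, a, r, s2):
        best = max(self.Q[(s2, a2)] for a2 in self.actions)
        self.Q[(s, a)] += ALPHA * (r + GAMMA * best - self.Q[(s, a)])

trend_agent = QAgent(actions=("up", "down"))            # global trend prediction
trade_agent = QAgent(actions=("buy", "hold", "sell"))   # local trading policy

def trading_step(market_state, next_market_state, profit):
    trend = trend_agent.act(market_state)
    action = trade_agent.act((market_state, trend))      # trend shared with the trader
    trade_agent.learn((market_state, trend), action, profit, (next_market_state, trend))
    trend_agent.learn(market_state, trend, profit, next_market_state)  # same reward signal (assumed)
    return action
```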

Fuzzy Q-learning using Distributed Eligibility (분포 기여도를 이용한 퍼지 Q-learning)

  • 정석일;이연정
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.5
    • /
    • pp.388-394
    • /
    • 2001
  • Reinforcement learning is a kind of unsupervised learning in which an agent learns control rules from experience acquired through interaction with its environment. Eligibility is used to resolve the credit-assignment problem, one of the important problems in reinforcement learning. Conventional eligibilities, such as the accumulating eligibility and the replacing eligibility, make ineffective use of the rewards acquired during learning, since only the one executed action in a visited state is learned. In this paper, we propose a new eligibility, called the distributed eligibility, with which not only the executed action but also neighboring actions in a visited state are learned. The fuzzy Q-learning algorithm using the proposed eligibility is applied to a cart-pole balancing problem and shows its superiority over conventional methods in terms of learning speed. (A sketch follows this entry.)

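The distributed eligibility can be illustrated with a small tabular example: when an action is taken, its neighbors in the same state also receive eligibility, weighted here by a Gaussian kernel over the action index, so one TD error updates a neighborhood of actions instead of a single entry. The kernel width, trace decay, and discretization are assumed values, and the paper's fuzzy rule base is omitted.

```python
import numpy as np

# Minimal sketch: Q-learning where eligibility is distributed over neighboring actions
# of the visited state via a Gaussian kernel on the action index, so one TD error
# updates a neighborhood of actions. Kernel width, decay, and discretization are
# assumed values; the paper formulates this inside a fuzzy rule base.
N_STATES, N_ACTIONS = 50, 9
ALPHA, GAMMA, LAMBDA, WIDTH = 0.1, 0.95, 0.8, 1.0

Q = np.zeros((N_STATES, N_ACTIONS))
E = np.zeros((N_STATES, N_ACTIONS))                     # eligibility traces

def update(state, action, reward, next_state):
    idx = np.arange(N_ACTIONS)
    E[state] = np.exp(-((idx - action) ** 2) / (2 * WIDTH ** 2))   # neighbors share credit
    td_error = reward + GAMMA * Q[next_state].max() - Q[state, action]
    Q[...] += ALPHA * td_error * E                      # every eligible pair is updated
    E[...] *= GAMMA * LAMBDA                            # decay all traces
```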

Function Approximation for Reinforcement Learning using Fuzzy Clustering (퍼지 클러스터링을 이용한 강화학습의 함수근사)

  • Lee, Young-Ah;Jung, Kyoung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.587-592
    • /
    • 2003
  • Many real-world control problems have continuous states and actions. When the state space is continuous, reinforcement learning problems involve a very large state space and suffer from the memory and time needed to learn all individual state-action values. Such problems need function approximators that infer the action for a new state from previously experienced states. We introduce Fuzzy Q-Map, a function approximator for 1-step Q-learning based on fuzzy clustering. Fuzzy Q-Map groups similar states, and chooses an action and looks up a Q-value according to the membership degree. The centroid and Q-value of the winning cluster are updated using the membership degree and the TD (temporal difference) error. We applied Fuzzy Q-Map to the mountain-car problem and obtained an accelerated learning speed. (A sketch follows this entry.)
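
The read-out and update rules described above can be sketched with a small set of cluster centroids: Q-values are read out weighted by membership degree, and the winning cluster's centroid and Q-row move by membership times the TD error. The membership function, learning rates, and cluster count below are assumptions, not the paper's settings.

```python
import numpy as np

# Minimal sketch of a Fuzzy Q-Map style approximator: Q-values are read out weighted by
# membership degree, and the winning cluster's centroid and Q-row move by membership
# times the TD error. Membership function, learning rates, and cluster count are
# illustrative assumptions.
N_CLUSTERS, N_ACTIONS, STATE_DIM = 20, 3, 2
ALPHA, GAMMA, ETA = 0.2, 0.99, 0.05

centroids = np.random.uniform(-1.0, 1.0, (N_CLUSTERS, STATE_DIM))
Q = np.zeros((N_CLUSTERS, N_ACTIONS))

def memberships(state):
    d2 = ((centroids - state) ** 2).sum(axis=1)
    m = np.exp(-d2)                                     # Gaussian-style membership degree
    return m / m.sum()

def q_values(state):
    return memberships(state) @ Q                       # membership-weighted read-out

def update(state, action, reward, next_state):
    m = memberships(state)
    winner = m.argmax()
    td_error = reward + GAMMA * q_values(next_state).max() - q_values(state)[action]
    Q[winner, action] += ALPHA * m[winner] * td_error                    # winner's Q
    centroids[winner] += ETA * m[winner] * (state - centroids[winner])   # move centroid
```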

Online Reinforcement Learning to Search the Shortest Path in Maze Environments (미로 환경에서 최단 경로 탐색을 위한 실시간 강화 학습)

  • Kim, Byeong-Cheon;Kim, Sam-Geun;Yun, Byeong-Ju
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.155-162
    • /
    • 2002
  • Reinforcement learning is a learning method that uses trial and error to learn by interacting with dynamic environments. It is classified into online reinforcement learning and delayed reinforcement learning. In this paper, we propose an online reinforcement learning system (ONRELS: ONline REinforcement Learning System). ONRELS updates the estimated value of every selectable (state, action) pair before making a state transition at the current state. After compressing the state space of the maze environment, it learns by interacting with the compressed environment through trial and error. Experiments show that ONRELS can find the shortest path faster than Q-learning using the TD error and $Q(\lambda)$-learning using $TD(\lambda)$ in maze environments. (A sketch follows this entry.)
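
The core update idea, refreshing the value of every selectable (state, action) pair at the current state before the agent actually moves, can be sketched as below. The `next_state` and `reward` callables stand for the maze interface, and the paper's state-space compression step is omitted; both are assumptions for illustration.

```python
import random
from collections import defaultdict

# Minimal sketch of the online update: before leaving the current state, the value of
# every selectable (state, action) pair there is refreshed with a one-step lookahead,
# and only then is the next action chosen. next_state/reward stand for the maze
# interface, and the paper's state-space compression step is omitted.
ALPHA, GAMMA, EPSILON = 0.2, 0.95, 0.1
Q = defaultdict(float)

def online_step(state, actions, next_state, reward):
    for a in actions:                                   # refresh every selectable pair
        s2 = next_state(state, a)
        best = max(Q[(s2, a2)] for a2 in actions)
        Q[(state, a)] += ALPHA * (reward(state, a) + GAMMA * best - Q[(state, a)])
    if random.random() < EPSILON:                       # then choose the move to make
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```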

Extended Q-learning under Multiple Tasks (복수의 부분 작업을 위한 확장된 Q-Learning)

  • 오도훈;윤소정;오경환
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.229-231
    • /
    • 2000
  • Among the many learning methods, reinforcement learning, proposed relatively recently, has shown excellent learning ability in dynamic environments. Owing to this strength, reinforcement learning is widely used in research on learning-based agents. However, results to date show that there are limits to the difficulty of the tasks that agents built with reinforcement learning can solve. In particular, existing reinforcement learning methods show limitations when handling composite tasks made up of multiple subtasks. This paper analyzes why composite tasks consisting of multiple subtasks are hard to handle and proposes a way to deal with them. The proposed EQ-Learning modifies Q-Learning, a representative reinforcement learning method, to resolve this problem: it learns a solution for each subtask and then finds an appropriate order in which to apply the learned results, thereby solving the composite task. To validate EQ-Learning, we ran experiments on a maze problem in a grid world composed of multiple subtasks. (A sketch follows this entry.)

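The two-level structure can be pictured with one Q-table per subtask plus a higher-level Q-table that learns the order in which to execute the already-learned subtask policies. The `env` interface, subtask count, and reward handling below are illustrative assumptions.

```python
from collections import defaultdict

# Minimal sketch: one Q-table per subtask plus a higher-level ("meta") Q-table that
# learns the order in which to run the already-learned subtask policies. The env
# interface (step, subtask_done), the subtask count, and rewards are assumptions.
ALPHA, GAMMA = 0.1, 0.95
N_SUBTASKS = 3

subtask_Q = [defaultdict(float) for _ in range(N_SUBTASKS)]  # learned in a first phase
meta_Q = defaultdict(float)                                   # Q over (progress, subtask)

def greedy(Q, state, actions):
    return max(actions, key=lambda a: Q[(state, a)])

def run_subtask(k, state, env, actions):
    # execute the already-learned policy of subtask k until it reports completion
    while not env.subtask_done(k, state):
        state = env.step(state, greedy(subtask_Q[k], state, actions))
    return state

def meta_update(progress, k, reward, next_progress):
    # learn which subtask to run next, given how far the composite task has progressed
    best = max(meta_Q[(next_progress, j)] for j in range(N_SUBTASKS))
    meta_Q[(progress, k)] += ALPHA * (reward + GAMMA * best - meta_Q[(progress, k)])
```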

Topic directed Web Spidering using Reinforcement Learning (강화학습을 이용한 주제별 웹 탐색)

  • Lim, Soo-Yeon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.4
    • /
    • pp.395-399
    • /
    • 2005
  • In this paper, we present the HIGH-Q learning algorithm, based on reinforcement learning, for faster and more accurate topic-directed web spidering. The goal of reinforcement learning is to maximize the rewards received from the environment, and a reinforcement learning agent learns by interacting with its external environment through trial and error. We performed experiments comparing the proposed reinforcement learning method with a breadth-first search method for crawling web pages. As a result, the reinforcement learning method, which uses future discounted rewards, searched a smaller number of pages to find the target pages. (A sketch follows this entry.)
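
A rough picture of reward-driven spidering is a crawler that keeps its link frontier in a priority queue ordered by an estimated discounted value, with a page's topical relevance acting as the immediate reward. The `fetch`, `extract_links`, and `relevance` helpers, the relevance threshold, and the greedy value estimate below are assumptions; the paper's HIGH-Q learning procedure itself is not reproduced.

```python
import heapq

# Minimal sketch: topic-directed crawling with a frontier ordered by an estimated
# discounted value. fetch / extract_links / relevance are assumed helpers, and the
# greedy value estimate is a stand-in for a learned Q-function.
GAMMA = 0.9

def crawl(seed_urls, fetch, extract_links, relevance, max_pages=100):
    frontier = [(-1.0, url) for url in seed_urls]    # max-heap via negated scores
    heapq.heapify(frontier)
    visited, relevant_pages = set(), []
    while frontier and len(visited) < max_pages:
        neg_q, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        page = fetch(url)
        r = relevance(page)                          # immediate reward: topical relevance
        if r > 0.5:                                  # assumed relevance threshold
            relevant_pages.append(url)
        parent_q = -neg_q
        for link in extract_links(page):
            # child estimate: this page's reward plus the discounted parent estimate
            heapq.heappush(frontier, (-(r + GAMMA * parent_q), link))
    return relevant_pages
```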

A strategic Q&A system for self-directed study (자기주도적 학습을 위한 전략형 Q&A 시스템)

  • Lee, Hae-Bok;Kim, Kap-Su
    • Journal of The Korean Association of Information Education
    • /
    • v.6 no.1
    • /
    • pp.13-29
    • /
    • 2002
  • The mathematics curriculum has been developed according to learners' levels and the difficulty of its contents. Success in solving mathematical problems depends on completion of the prerequisite learning, so it is important both to diagnose students beforehand and to develop their problem-solving skills. In this thesis, a Q&A system is proposed to help students learn various problem-solving skills in mathematics. Although the system currently targets mathematics, it can be applied to other subjects.
