• Title/Summary/Keyword: Q-Learning

Reinforcement Learning with Clustering for Function Approximation and Rule Extraction (함수근사와 규칙추출을 위한 클러스터링을 이용한 강화학습)

  • 이영아;홍석미;정태충
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.11
    • /
    • pp.1054-1061
    • /
    • 2003
  • Q-Learning, a representative reinforcement learning algorithm, gathers experience repeatedly until the estimated values of all state-action pairs in the state space converge, yielding an optimal policy. When the state space is high-dimensional or continuous, complex reinforcement learning tasks involve very large state spaces and suffer from having to store every individual state value in a single table. We introduce Q-Map, a new function approximation method for obtaining classified policies. As the agent learns online, Q-Map groups states from similar situations and adapts repeatedly to new experiences. State-action pairs required for fine control are handled in the form of rules. Experiments in a maze environment and on the mountain-car problem show that Q-Map yields classified knowledge and that rules can easily be extracted from it.
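
A minimal sketch of the general idea, not the paper's Q-Map: tabular Q-learning over a coarse state space obtained by nearest-prototype quantization, illustrating how grouping similar states shrinks the Q-table. The prototypes, dimensions, and parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
prototypes = rng.uniform(0.0, 1.0, size=(20, 2))  # assumed fixed cluster centers
n_actions = 4
Q = np.zeros((len(prototypes), n_actions))        # one row per cluster, not per state
alpha, gamma, eps = 0.1, 0.95, 0.1

def cluster(state):
    """Map a continuous 2-D state to its nearest prototype index."""
    return int(np.argmin(np.linalg.norm(prototypes - state, axis=1)))

def act(state):
    s = cluster(state)
    if rng.random() < eps:                        # epsilon-greedy exploration
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def update(state, action, reward, next_state):
    s, s2 = cluster(state), cluster(next_state)
    td_target = reward + gamma * Q[s2].max()
    Q[s, action] += alpha * (td_target - Q[s, action])
```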

Neural-Q Method Based on $\varepsilon$-SVR ($\varepsilon$-SVR을 이용한 Neural-Q 기법)

  • 조원희;김영일;박주영
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.162-165
    • /
    • 2002
  • Q-learning is a reinforcement learning method that is widely applied in many fields. It has recently been applied successfully to the Linear Quadratic Regulation (LQR) problem; in particular, because it can solve the problem through learning from appropriate inputs and outputs alone, without concrete information about the parameters of the system model, it can be a very practical alternative in some situations. Neural Q-learning enables the optimal control of nonlinear systems by replacing the Q-value with the output of an MLP (multilayer perceptron) neural network. However, because the Neural-Q approach first fixes the network structure and then trains it with the backpropagation algorithm, it inherits that method's limitations: the network structure must be determined by trial and error, and backpropagation may drive the connection weights to a local optimum. In this paper, therefore, instead of an MLP trained by backpropagation, we adopt support vector learning, whose performance has recently been recognized in many fields, as the tool for Neural-Q learning: we propose a Q-value approximation technique based on $\varepsilon$-SVR (epsilon-support vector regression) and derive the related equations. Simulations are then used to examine the applicability of the proposed support-vector-based Neural-Q method.
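
A minimal sketch of the core idea, not the paper's derivation: approximate Q(s, a) with one $\varepsilon$-SVR per action and refit on bootstrapped Bellman targets in a fitted-Q-iteration loop. The toy transition batch and reward are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_actions, gamma = 2, 0.95
# Hypothetical batch of transitions (s, a, r, s') with 1-D states.
S = rng.uniform(-1.0, 1.0, size=(200, 1))
A = rng.integers(n_actions, size=200)
R = -np.abs(S[:, 0])                              # toy reward: stay near the origin
S2 = np.clip(S + np.where(A[:, None] == 1, 0.1, -0.1), -1.0, 1.0)

# Initialize one epsilon-SVR per action on the immediate rewards.
models = [SVR(kernel="rbf", epsilon=0.01).fit(S, R) for _ in range(n_actions)]
for _ in range(10):                               # fitted Q-iteration sweeps
    next_q = np.max([m.predict(S2) for m in models], axis=0)
    targets = R + gamma * next_q                  # Bellman targets
    models = [SVR(kernel="rbf", epsilon=0.01).fit(S[A == a], targets[A == a])
              for a in range(n_actions)]
```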

Multi-Dimensional Reinforcement Learning Using a Vector Q-Net - Application to Mobile Robots

  • Kiguchi, Kazuo;Nanayakkara, Thrishantha;Watanabe, Keigo;Fukuda, Toshio
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.142-148
    • /
    • 2003
  • Reinforcement learning is considered an important tool for robotic learning in unknown/uncertain environments. In this paper, we propose an evaluation function expressed in vector form to realize multi-dimensional reinforcement learning. The novel feature of the proposed method is that learning one behavior induces parallel learning of the other behaviors even though each behavior has a different objective. In brief, every behavior watches the other behaviors from a critical point of view; this cross-criticism and parallel learning make the multi-dimensional learning process more efficient. By applying the proposed learning method, we carried out multi-dimensional evaluation (reward) and multi-dimensional learning simultaneously in one trial. A special neural network (Q-net), in which the weights and the output are represented by vectors, is proposed to realize a critic network for Q-learning. The proposed learning method is applied to behavior planning of mobile robots.
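
A minimal tabular analogue of the vector idea (an assumption, not the paper's Q-net): each state-action entry stores one Q-value per behavior, so a single transition updates all behaviors in parallel, and actions are chosen by scalarizing the vector with behavior weights.

```python
import numpy as np

n_states, n_actions, n_behaviors = 25, 4, 3
Q = np.zeros((n_states, n_actions, n_behaviors))  # vector-valued Q-table
alpha, gamma = 0.1, 0.9
weights = np.array([0.5, 0.3, 0.2])               # hypothetical behavior priorities

def act(s):
    scalar_q = Q[s] @ weights                     # scalarize each action's Q-vector
    return int(np.argmax(scalar_q))

def update(s, a, rewards, s2):
    """rewards holds one reward per behavior; all components learn at once."""
    a2 = act(s2)                                  # greedy action in the next state
    Q[s, a] += alpha * (np.asarray(rewards) + gamma * Q[s2, a2] - Q[s, a])
```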

Dynamic Action Space Handling Method for Reinforcement Learning Models

  • Woo, Sangchul;Sung, Yunsick
    • Journal of Information Processing Systems
    • /
    • v.16 no.5
    • /
    • pp.1223-1230
    • /
    • 2020
  • Recently, extensive studies have applied deep learning to reinforcement learning to address the state-space problem. If the state-space problem were solved, reinforcement learning would become applicable in many fields. For example, users could learn how to dance from a dance-tutorial system by watching and imitating a virtual instructor that performs the optimal dance to the music, with reinforcement learning applied. In this study, we propose a reinforcement learning method in which the action space is adjusted dynamically. Because actions that are never performed, or are unlikely to be optimal, are not learned and no state space is allocated for them, the learning time is shortened and the state space is reduced. In an experiment, the proposed method achieves results similar to those of traditional Q-learning even when its state space is reduced to approximately 0.33% of Q-learning's. Consequently, the proposed method reduces the cost and time required for learning: traditional Q-learning requires 6 million state-space entries for 100,000 learning iterations, whereas the proposed method requires only 20,000. A higher winning rate can thus be achieved in a shorter time by searching 20,000 entries instead of 6 million.
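
A minimal sketch of the dynamic-action-space idea as we read it (the action filter below is a hypothetical placeholder): Q-values are stored sparsely and each state's action set is built on demand, so actions never considered consume no table space.

```python
import random
from collections import defaultdict

Q = defaultdict(dict)                  # Q[state][action] -> value, allocated lazily
alpha, gamma, eps = 0.1, 0.95, 0.1

def allowed_actions(state):
    """Hypothetical hook returning only the actions worth considering here."""
    return [a for a in range(10) if (state + a) % 2 == 0]   # toy pruning rule

def act(state):
    actions = allowed_actions(state)
    if random.random() < eps or not Q[state]:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[state].get(a, 0.0))

def update(state, action, reward, next_state):
    next_best = max((Q[next_state].get(a, 0.0)
                     for a in allowed_actions(next_state)), default=0.0)
    old = Q[state].get(action, 0.0)
    Q[state][action] = old + alpha * (reward + gamma * next_best - old)
```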

An analysis of Learning Attitude among the Chinese Students in Korea - focused on the Q Methodology - (한국 내 중국 유학생의 학습태도 유형 분석 - Q방법론적 접근 -)

  • Li, Zhangpei;Li, Xiaohui;Park, Changun
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.115-123
    • /
    • 2017
  • This study analyzes the types of learning attitudes among Chinese students in Korea. To determine individuals' ideas and behavior objectively, we adopted the Q methodology together with practical and quantitative research approaches. Targeting Chinese students, the study classified their learning attitudes into four types and analyzed questionnaires for each type. The four types are: dissatisfaction with the learning environment; positive cooperation with the learning process and environment; lack of learning motivation; and a paradoxical learning state. The discussion concludes that Chinese students should have a clear motivation to learn new things, improve their Korean language ability, and identify the learning methods that suit them. As more and more Chinese students choose to study abroad, their learning attitudes and abilities deserve close attention from society.

The Application of Industrial Inspection of LED

  • Xi, Wang;Chong, Kil-To
    • Proceedings of the IEEK Conference
    • /
    • 2009.05a
    • /
    • pp.91-93
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control on the basis of multi-agent technology. The structure is composed of six phase agents and one intersection agent, and a wireless communication network provides the possibility of cooperation among the agents. Q-learning, a kind of reinforcement learning, is adopted as the control mechanism because it can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that the approach is more effective than a traditional signal system.
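
A minimal sketch of the structure the abstract describes (the state bins, actions, and reward are assumptions): one Q-learning agent per signal phase chooses a green-time extension from a binned queue-length state, learning from delayed reward such as negative vehicle delay.

```python
import random
from collections import defaultdict

ACTIONS = [0, 5, 10]                   # green-time extension in seconds (toy)
alpha, gamma, eps = 0.1, 0.9, 0.1

class PhaseAgent:
    """Q-learning controller for a single signal phase."""
    def __init__(self):
        self.Q = defaultdict(lambda: [0.0] * len(ACTIONS))

    def act(self, queue_bin):
        if random.random() < eps:
            return random.randrange(len(ACTIONS))
        q = self.Q[queue_bin]
        return q.index(max(q))

    def update(self, s, a, reward, s2):
        target = reward + gamma * max(self.Q[s2])
        self.Q[s][a] += alpha * (target - self.Q[s][a])

agents = [PhaseAgent() for _ in range(6)]   # six phase agents, as in the abstract
```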

Rate Adaptation with Q-Learning in CSMA/CA Wireless Networks

  • Cho, Soohyun
    • Journal of Information Processing Systems
    • /
    • v.16 no.5
    • /
    • pp.1048-1063
    • /
    • 2020
  • In this study, we propose a reinforcement learning agent to control the data transmission rates of nodes in carrier-sense multiple access with collision avoidance (CSMA/CA)-based wireless networks. We design a reinforcement learning (RL) agent based on Q-learning. The agent learns the environment from packet timeout events, which are locally available at the sending nodes, and selects actions that adjust the modulation and coding scheme (MCS) levels of the data packets so as to use the available bandwidth effectively under dynamically changing channel conditions. We use the ns3-gym framework to simulate RL and investigate how the parameters of Q-learning affect the agent's performance. The simulation results indicate that the proposed RL agent adjusts the MCS levels appropriately as the network changes and achieves a high throughput comparable to that of existing rate adaptation schemes such as Minstrel.
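
A minimal sketch of the scheme's shape (the state encoding, reward, and parameters are assumptions, not the paper's): the agent nudges the MCS level up or down and uses locally observed packet timeouts as a penalty in the reward.

```python
import random
from collections import defaultdict

N_MCS = 8                              # number of MCS levels (assumed)
ACTIONS = [-1, 0, +1]                  # lower / keep / raise the MCS level
Q = defaultdict(lambda: [0.0] * len(ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.05

def step(mcs, timeouts, acked_bytes):
    """One control step: pick an action, apply it, and learn from the outcome."""
    state = (mcs, min(timeouts, 3))    # coarse timeout bin keeps the table small
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))
    else:
        a = Q[state].index(max(Q[state]))
    new_mcs = min(max(mcs + ACTIONS[a], 0), N_MCS - 1)
    reward = acked_bytes - 100 * timeouts   # throughput minus timeout penalty (toy)
    next_state = (new_mcs, 0)
    Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
    return new_mcs
```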

Design and Implementation of Parking Guidance System Based on Internet of Things(IoT) Using Q-learning Model (Q-learning 모델을 이용한 IoT 기반 주차유도 시스템의 설계 및 구현)

  • Ji, Yong-Joo;Choi, Hak-Hui;Kim, Dong-Seong
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.3
    • /
    • pp.153-162
    • /
    • 2016
  • This paper proposes an optimal dynamic resource allocation method for an IoT (Internet of Things) parking guidance system based on a Q-learning resource allocation model. In the proposed method, resource allocation driven by a Q-learning-based forecasting model is employed for optimal utilization of the parking guidance system. To demonstrate its efficiency and availability, the method is verified by computer simulation and on a practical testbed. The simulation results show that the proposed method can enhance total throughput, decrease the penalty fee issued under the SLA (Service Level Agreement), and reduce response time as the number of users changes dynamically.
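
A minimal sketch in the spirit of the abstract (the zones, occupancy bins, and reward are assumptions): route each arriving car to a parking zone with Q-learning, where the state is the coarse occupancy of each zone and the reward penalizes response time.

```python
import random
from collections import defaultdict

ZONES = 3
Q = defaultdict(lambda: [0.0] * ZONES)
alpha, gamma, eps = 0.1, 0.9, 0.1

def occupancy_state(occ):
    """Bin each zone's occupancy count into 0..9 to keep the table small."""
    return tuple(min(o // 10, 9) for o in occ)

def route(occ):
    s = occupancy_state(occ)
    if random.random() < eps:
        return random.randrange(ZONES)
    return Q[s].index(max(Q[s]))

def update(occ, zone, response_time, occ_after):
    s, s2 = occupancy_state(occ), occupancy_state(occ_after)
    reward = -response_time            # faster guidance earns higher reward
    Q[s][zone] += alpha * (reward + gamma * max(Q[s2]) - Q[s][zone])
```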

Object tracking algorithm of Swarm Robot System for using SVM and Polygon based Q-learning (SVM과 다각형 기반의 Q-learning 알고리즘을 이용한 군집로봇의 목표물 추적 알고리즘)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2008.04a
    • /
    • pp.143-146
    • /
    • 2008
  • In this paper, we propose a dodecagon-based Q-learning algorithm using an SVM for target tracking in a swarm robot system. To show the effectiveness of the proposed algorithm, we set up an experiment with several robots, obstacles, and one target, in which each robot must find the hidden target. We ran the experiment with a random strategy, with a fused DBAM and ABAM model, and finally with the proposed SVM and dodecagon-based Q-learning algorithm, and verified the validity of the proposed approach by comparing the three methods.
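
A minimal guess at the dodecagon component (the quantization and reward are assumptions): the bearing from robot to target is quantized into 12 equal sectors, and tabular Q-learning picks one of 12 headings to move along.

```python
import math
import random
from collections import defaultdict

N_SECTORS = 12                         # the dodecagon: 12 equal angular sectors
Q = defaultdict(lambda: [0.0] * N_SECTORS)
alpha, gamma, eps = 0.1, 0.9, 0.1

def sector(dx, dy):
    """Quantize a bearing (dx, dy) into one of the 12 sectors."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle // (2 * math.pi / N_SECTORS))

def act(state):
    if random.random() < eps:
        return random.randrange(N_SECTORS)
    return Q[state].index(max(Q[state]))

def update(s, a, reward, s2):
    Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
```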

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. The biggest problem of monolithic reinforcement learning, however, is that straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement over existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a weight according to its contribution to the reward. Hence, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as the learning method for dynamic positioning of a robot soccer agent and implement a robot soccer agent system called Cogitoniks.
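
A minimal sketch of the mediation idea (the weight-update rule here is an assumption, not the paper's AMMQL formula): several Q-learning modules score every action, and a mediator combines their Q-values with weights that track each module's recent contribution to reward, in contrast to the fixed combination of plain modular Q-learning.

```python
import numpy as np

n_modules, n_states, n_actions = 3, 50, 5
Qs = np.zeros((n_modules, n_states, n_actions))   # one Q-table per module
w = np.ones(n_modules) / n_modules                # adaptive mediation weights
alpha, gamma, beta = 0.1, 0.9, 0.05

def act(states):
    """states: one (module-specific) state index per module."""
    combined = sum(w[m] * Qs[m, states[m]] for m in range(n_modules))
    return int(np.argmax(combined))

def update(states, a, rewards, next_states):
    """rewards: per-module reward; the mediator also re-weights the modules."""
    global w
    for m in range(n_modules):
        s, s2 = states[m], next_states[m]
        td = rewards[m] + gamma * Qs[m, s2].max() - Qs[m, s, a]
        Qs[m, s, a] += alpha * td
    w = np.clip(w + beta * np.asarray(rewards), 1e-6, None)  # reward-proportional credit (toy)
    w /= w.sum()                                   # keep weights normalized
```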