• Title/Summary/Keyword: Q-Learning

Max-Mean N-step Temporal-Difference Learning Using Multi-Step Return (멀티-스텝 누적 보상을 활용한 Max-Mean N-Step 시간차 학습)

  • Hwang, Gyu-Young;Kim, Ju-Bong;Heo, Joo-Seong;Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems, v.10 no.5, pp.155-162, 2021
  • n-step TD learning combines the Monte Carlo method with one-step TD learning. With an appropriately chosen n, n-step TD learning is known to perform better than both the Monte Carlo method and 1-step TD learning, but it is difficult to select the best value of n. To address this difficulty, this paper exploits two observations: overestimation of Q can improve performance in early learning, and all n-step returns take similar values when Q ≈ Q*. We therefore propose a new learning target composed of the maximum and the mean of all k-step returns for 1 ≤ k ≤ n. Finally, in OpenAI Gym's Atari game environment, we compare the proposed algorithm with n-step TD learning and show that the proposed algorithm is superior to the n-step TD learning algorithm.
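
A minimal sketch of the max-mean target described above, assuming a tabular Q-function and per-step bootstrap estimates are available; the abstract does not specify how the maximum and the mean are weighted, so the simple average used here is an assumption:

```python
import numpy as np

def max_mean_target(rewards, bootstrap_values, gamma=0.99):
    """Combine the k-step returns for 1 <= k <= n into one learning target.

    rewards          : the n rewards r_{t+1} .. r_{t+n} observed after s_t
    bootstrap_values : bootstrap_values[k-1] = max_a Q(s_{t+k}, a)
    """
    returns, discounted_sum = [], 0.0
    for k, (r, v) in enumerate(zip(rewards, bootstrap_values), start=1):
        discounted_sum += (gamma ** (k - 1)) * r            # discounted reward sum up to step k
        returns.append(discounted_sum + (gamma ** k) * v)   # bootstrap after k steps
    returns = np.array(returns)
    # The abstract combines the maximum and the mean of all k-step returns;
    # the equal weighting below is an assumption, not the paper's exact rule.
    return 0.5 * (returns.max() + returns.mean())

# Example: a three-step trajectory with per-step bootstrap estimates.
# print(max_mean_target([1.0, 0.0, 1.0], [0.5, 0.7, 0.4]))
```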

A Relay Selection Scheme with Q-Learning (Q-Learning을 이용한 릴레이 선택 기법)

  • Jung, Hong-Kyu;Kim, Kwang-Yul;Shin, Yo-An
    • Journal of the Institute of Electronics Engineers of Korea TC, v.49 no.6, pp.39-47, 2012
  • Cooperative communication systems have recently come into the spotlight as a way to efficiently reduce the effects of multipath fading in next-generation wireless communication systems. Since these systems transmit information through cooperative relays with diverse fading coefficients, having all relays participate in cooperative communication may waste resources unnecessarily, so relay selection schemes are required to use wireless resources efficiently. In this paper, we propose an efficient relay selection scheme that learns by itself in cooperative wireless networks using the Q-learning algorithm. In this scheme, we define states, actions, and two rewards so as to achieve good SER (Symbol Error Rate) performance while selecting a small number of cooperative relays; when these parameters are well defined, good performance can be obtained. To demonstrate the superiority of the proposed Q-learning scheme, we compared it with a relay selection scheme based on mathematical analysis. The simulation results show that, compared to a scheme that obtains the optimum relays through mathematical analysis, the proposed scheme uses resources efficiently by employing fewer relays while achieving comparable SER performance. These results suggest that the proposed scheme is a promising approach for future wireless communication.
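
Since the abstract does not give the exact state, action, or reward definitions, the following is only a generic tabular Q-learning skeleton for a relay-selection setting; the `env` interface, the action encoding, and all parameter values are illustrative assumptions:

```python
import random
from collections import defaultdict

def q_learning_relay(env, actions, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Generic tabular Q-learning loop; `env`, `actions` (e.g. tuples of relay
    indices), and the reward are placeholders, not the paper's definitions."""
    Q = defaultdict(lambda: {a: 0.0 for a in actions})
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:                 # explore
                action = random.choice(actions)
            else:                                         # exploit current estimates
                action = max(Q[state], key=Q[state].get)
            # reward would trade off SER against the number of relays used
            next_state, reward, done = env.step(action)
            best_next = max(Q[next_state].values())
            Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
            state = next_state
    return Q
```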

Fuzzy Q-learning using Distributed Eligibility (분포 기여도를 이용한 퍼지 Q-learning)

  • 정석일;이연정
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.5, pp.388-394, 2001
  • Reinforcement learning is a kind of unsupervised learning method in which an agent learns control rules from experience acquired through interaction with its environment. Eligibility is used to resolve the credit-assignment problem, one of the important problems in reinforcement learning. Conventional eligibilities, such as the accumulating eligibility and the replacing eligibility, make ineffective use of the rewards acquired during learning, since only the single executed action of a visited state is learned. In this paper, we propose a new eligibility, called the distributed eligibility, with which not only the executed action but also the neighboring actions of a visited state are learned. The fuzzy Q-learning algorithm using the proposed eligibility is applied to a cart-pole balancing problem, and the results show the superiority of the proposed method over conventional methods in terms of learning speed.
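
As a rough illustration of spreading credit to neighboring actions, here is a sketch over a discretized action set with eligibility traces; the neighborhood size, the weights, and the omission of the fuzzy-rule machinery are all assumptions:

```python
import numpy as np

def distributed_eligibility_step(Q, E, state, action, td_error,
                                 alpha=0.1, gamma=0.9, lam=0.9, spread=0.5):
    """One update in which the executed action and its neighboring actions
    of the visited state all receive eligibility (a sketch, not the paper's
    exact fuzzy formulation)."""
    n_actions = Q.shape[1]
    E[state, action] += 1.0                       # full credit to the executed action
    if action > 0:
        E[state, action - 1] += spread            # partial credit to the left neighbor
    if action < n_actions - 1:
        E[state, action + 1] += spread            # partial credit to the right neighbor
    Q += alpha * td_error * E                     # propagate the TD error via the traces
    E *= gamma * lam                              # decay all traces
    return Q, E
```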

Object Tracking Algorithm of Swarm Robot System for using Polygon Based Q-Learning and Cascade SVM (다각형 기반의 Q-Learning과 Cascade SVM을 이용한 군집로봇의 목표물 추적 알고리즘)

  • Seo, Sang-Wook;Yang, Hyung-Chang;Sim, Kwee-Bo
    • IEMEK Journal of Embedded Systems and Applications, v.3 no.2, pp.119-125, 2008
  • This paper presents a polygon-based Q-learning and Cascade Support Vector Machine (SVM) algorithm for object search with multiple robots. We organized an experimental environment with ten mobile robots, twenty-five obstacles, and one object, and sent the robots into a hallway, where some obstacles were lying about, to search for the hidden object. In the experiments, we used four different control methods: a random search; a fusion model with a Distance-based action making (DBAM) and Area-based action making (ABAM) process to determine the robots' next action; hexagon-based Q-learning; and dodecagon-based Q-learning with Cascade SVM to enhance the fusion model with the DBAM and ABAM process.

Object tracking algorithm of Swarm Robot System for using Polygon based Q-learning and parallel SVM

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.8 no.3, pp.220-224, 2008
  • This paper presents a polygon-based Q-learning and parallel SVM algorithm for object search with multiple robots. We organized an experimental environment with one hundred mobile robots, two hundred obstacles, and ten objects, and then sent the robots into a hallway, where some obstacles were lying about, to search for a hidden object. In the experiments, we used four different control methods: a random search; a fusion model with a Distance-based action making (DBAM) and Area-based action making (ABAM) process to determine the robots' next action; hexagon-based Q-learning; and dodecagon-based Q-learning with a parallel SVM algorithm to enhance the fusion model with the DBAM and ABAM process. The results show that the dodecagon-based Q-learning and parallel SVM algorithm outperforms the other algorithms at object tracking.

FuzzyQ-Learning to Process the Vague Goals of Intelligent Agent (지능형 에이전트의 모호한 목적을 처리하기 위한 FuzzyQ-Learning)

  • 서호섭;윤소정;오경환
    • Proceedings of the Korean Information Science Society Conference, 2000.04b, pp.271-273, 2000
  • In general, an intelligent agent should be able to find the optimal action by itself from the user's goal and the surrounding environment. If the agent's goal or its environment contains uncertainty, it is difficult for the agent to select an appropriate action. However, there has been no research on handling the case in which the user's goal is expressed as linguistic values that carry the uncertainty of human knowledge. In this paper, we propose a method that represents the user's vague intention as a fuzzy goal and the uncertain environment perceived by the agent as fuzzy states. In addition, using a fuzzy reinforcement function extended with the fuzzy goal and states, we extend Q-Learning, one of the existing reinforcement learning algorithms, into FuzzyQ-Learning and verify its validity.

The Development of Janggi Board Game Using Backpropagation Neural Network and Q Learning Algorithm (역전파 신경회로망과 Q학습을 이용한 장기보드게임 개발)

  • 황상문;박인규;백덕수;진달복
    • Journal of the Institute of Electronics Engineers of Korea TE, v.39 no.1, pp.83-90, 2002
  • This paper proposes a strategy-learning method that fuses a back-propagation neural network with the Q-learning algorithm for the two-person, deterministic janggi board game. Learning is accomplished simply through self-play. The system consists of two parts: a move generator that produces the moves on the board, and a search kernel that combines back-propagation, Q-learning, and $\alpha$-$\beta$ search to learn the evaluation function. While temporal-difference learning captures the discrepancy between adjacent rewards, Q-learning acquires optimal policies through learning of the evaluation function for the augmented rewards, even when there is no prior knowledge of the effect of its moves on the environment. Over many training games, the winning percentage proved to be roughly linearly proportional to the amount of learning.
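
A hedged sketch of training an evaluation function with temporal-difference updates from successive self-play positions, in the spirit of what the abstract describes; a linear model stands in for the back-propagation network, and the features and reward scheme are assumptions:

```python
import numpy as np

def td_update_evaluation(weights, features_t, features_t1, reward,
                         alpha=0.01, gamma=1.0, terminal=False):
    """One temporal-difference update of a linear evaluation function
    V(s) = w . phi(s), trained from adjacent positions of a self-play game."""
    v_t = float(weights @ features_t)
    v_t1 = 0.0 if terminal else float(weights @ features_t1)
    td_error = reward + gamma * v_t1 - v_t        # discrepancy between adjacent evaluations
    weights = weights + alpha * td_error * features_t   # gradient step for the linear model
    return weights
```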

A Simulation of Vehicle Parking Distribution System for Local Cultural Festival with Queuing Theory and Q-Learning Algorithm (대기행렬이론과 Q-러닝 알고리즘을 적용한 지역문화축제 진입차량 주차분산 시뮬레이션 시스템)

  • Cho, Youngho;Seo, Yeong Geon;Jeong, Dae-Yul
    • The Journal of Information Systems, v.29 no.2, pp.131-147, 2020
  • Purpose: The purpose of this study is to develop an intelligent vehicle parking distribution system based on a LoRa network under the traffic congestion that occurs during a cultural festival in a local city. This paper proposes a parking dispatch and distribution system that uses a Q-learning algorithm to rapidly disperse, in real time, traffic that increases suddenly because of in-bound vehicles from outside the city, and to increase the parking probability at parking lots spread across the city. Design/methodology/approach: The system gets real-time information from an IoT sensor network (LoRa network), which helps to resolve the sudden traffic increases and parking bottlenecks during a local cultural festival. We applied the simulation system with a queuing model to the Yudeung Festival in Jinju, Korea. We propose a Q-learning algorithm that can change the learning policy by setting each parking lot's acceptability value as a threshold, covering the route from the Jinju highway IC (Interchange) to the 7 parking lots. The LoRa network platform allows each vehicle to browse parking resource information in real time, and the system updates the Q-table periodically with the Q-learning algorithm as soon as it receives information from the parking lots. Queuing theory with a Poisson arrival distribution is used to obtain the probability distribution function, and the Dijkstra algorithm is used to find the shortest distance. Findings: This paper presents a simulation test to verify the efficiency of the Q-learning algorithm under heavy traffic congestion in a city during a local festival. As a result of the simulation, the proposed algorithm performed well even when each parking lot was somewhat saturated. When an intelligent learning system such as a Q-learning algorithm is applied, vehicles can be distributed more effectively to lots with a high parking probability when the vehicle inflow from outside increases rapidly at a specific time, such as during a local city cultural festival.
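
A minimal sketch of the dispatch idea: Q-values over (origin, parking lot) pairs, with lots skipped once their occupancy exceeds an acceptability threshold. The lot names, reward, and threshold value are illustrative assumptions, and the queuing and Dijkstra components are omitted:

```python
import random

PARKING_LOTS = ["P1", "P2", "P3", "P4", "P5", "P6", "P7"]   # 7 lots; names are placeholders

def choose_lot(Q, state, occupancy, threshold=0.9, epsilon=0.1):
    """Pick a lot for an arriving vehicle, skipping lots whose occupancy is
    already above the acceptability threshold."""
    candidates = [lot for lot in PARKING_LOTS if occupancy[lot] < threshold]
    if not candidates:
        candidates = PARKING_LOTS                 # every lot saturated: consider them all
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda lot: Q[state][lot])

def update_q(Q, state, lot, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update once the parking outcome for this vehicle is known."""
    best_next = max(Q[next_state].values())
    Q[state][lot] += alpha * (reward + gamma * best_next - Q[state][lot])
```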

Fuzzy Q-learning using Weighted Eligibility (가중 기여도를 이용한 퍼지 Q-learning)

  • 정석일;이연정
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2000.11a, pp.163-167, 2000
  • Eligibility is used to solve the credit-assignment problem, one of the important problems in reinforcement learning. Conventional eligibilities, namely the accumulating eligibility and the replacing eligibility, make ineffective use of the rewards acquired during learning, because only the executed action of a visited state is learned with them. Thus, we propose a new eligibility, called the weighted eligibility, with which not only the executed action but also the neighboring actions of a visited state are learned. The fuzzy Q-learning algorithm using the proposed eligibility is applied to a cart-pole balancing problem and shows improved learning speed.

Basin-Wide Multi-Reservoir Operation Using Reinforcement Learning (강화학습법을 이용한 유역통합 저수지군 운영)

  • Lee, Jin-Hee;Shim, Myung-Pil
    • Proceedings of the Korea Water Resources Association Conference, 2006.05a, pp.354-359, 2006
  • The analysis of large-scale water resources systems is often complicated by the presence of multiple reservoirs and diversions, the uncertainty of unregulated inflows and demands, and conflicting objectives. Reinforcement learning is presented herein as a new approach to the challenging problem of stochastic optimization of multi-reservoir systems. The Q-Learning method, one of the reinforcement learning algorithms, is used to generate integrated monthly operation rules for the Keum River basin in Korea. The Q-Learning model is evaluated by comparing it with implicit stochastic dynamic programming and sampling stochastic dynamic programming approaches. The evaluation of the stochastic basin-wide operational models considered several options for the choice of hydrologic state and discount factors, as well as various stochastic dynamic programming models. The Q-Learning model outperforms the other models in handling the uncertainty of inflows.
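
A rough sketch of tabular Q-learning for monthly reservoir operation, with storage discretized into states and releases into actions; the discretization, the reward, and the `simulate_month` simulator are assumptions, not the study's formulation:

```python
import numpy as np

def train_reservoir_policy(simulate_month, n_states, n_actions,
                           episodes=5000, alpha=0.05, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over discretized (storage, month) states.
    `simulate_month(state, action)` is a hypothetical simulator that samples
    an inflow and returns (next_state, reward, done)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, done = 0, False                    # assumed start-of-year state index
        while not done:
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = simulate_month(state, action)
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state
    return Q
```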
