• Title/Summary/Keyword: Q learning

Search Results: 433

Enhanced Machine Learning Algorithms: Deep Learning, Reinforcement Learning, and Q-Learning

  • Park, Ji Su;Park, Jong Hyuk
    • Journal of Information Processing Systems / v.16 no.5 / pp.1001-1007 / 2020
  • In recent years, machine learning algorithms have been applied and extended in various fields, such as facial recognition, signal processing, personal authentication, and stock prediction. In particular, algorithms such as deep learning, reinforcement learning, and Q-learning are continuously being improved, and the reach of deep learning is expanding especially rapidly. Nevertheless, machine learning has not yet been applied in several areas, including personal authentication, an essential tool in the digital information era; gait recognition, a promising biometric; and techniques for solving state-space problems. This paper therefore surveys how deep learning, reinforcement learning, and Q-learning, as representative machine learning algorithms, are being improved and extended across fields such as agricultural technology, personal authentication, wireless networks, games, biometric recognition, and image recognition.

Function Approximation for Reinforcement Learning using Fuzzy Clustering (퍼지 클러스터링을 이용한 강화학습의 함수근사)

  • Lee, Young-Ah;Jung, Kyoung-Sook;Chung, Tae-Choong
    • The KIPS Transactions: Part B / v.10B no.6 / pp.587-592 / 2003
  • Many real-world control problems have continuous states and actions. When the state space is continuous, reinforcement learning involves a very large state space and suffers in memory and time when learning all individual state-action values. Such problems need function approximators that can reason about actions for new states from previously experienced ones. We introduce Fuzzy Q-Map, a function approximator for 1-step Q-learning based on fuzzy clustering. Fuzzy Q-Map groups similar states, then chooses an action and looks up Q-values according to membership degree. The centroid and Q-value of the winning cluster are updated using the membership degree and the TD (temporal difference) error. We applied Fuzzy Q-Map to the mountain-car problem and obtained an accelerated learning speed.
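The membership-weighted update this abstract describes can be sketched roughly as follows; the Gaussian membership function, class name, and hyperparameters are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

class FuzzyQMap:
    """Toy function approximator: fuzzy clusters over a continuous state
    space, each holding one Q-value per discrete action."""

    def __init__(self, centroids, n_actions, alpha=0.1, gamma=0.99, width=0.5):
        self.centroids = np.asarray(centroids, dtype=float)  # (k, state_dim)
        self.q = np.zeros((len(centroids), n_actions))       # Q per cluster/action
        self.alpha, self.gamma, self.width = alpha, gamma, width

    def membership(self, state):
        # Gaussian membership degree of `state` in each cluster (an assumption).
        d2 = np.sum((self.centroids - state) ** 2, axis=1)
        m = np.exp(-d2 / (2 * self.width ** 2))
        return m / m.sum()

    def q_values(self, state):
        # Q(s, .) as a membership-weighted mix of cluster Q-values.
        return self.membership(state) @ self.q

    def update(self, s, a, r, s_next):
        m = self.membership(s)
        winner = int(np.argmax(m))  # winning (closest) cluster
        td = r + self.gamma * self.q_values(s_next).max() - self.q_values(s)[a]
        # Move the winner's Q-value and centroid by membership-scaled TD error.
        self.q[winner, a] += self.alpha * m[winner] * td
        self.centroids[winner] += self.alpha * m[winner] * (s - self.centroids[winner])
```

Grouping similar states this way is what lets the approximator generalize to states it has never visited, as the abstract argues.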

Equal Energy Consumption Routing Protocol Algorithm Based on Q-Learning for Extending the Lifespan of Ad-Hoc Sensor Network (애드혹 센서 네트워크 수명 연장을 위한 Q-러닝 기반 에너지 균등 소비 라우팅 프로토콜 기법)

  • Kim, Ki Sang;Kim, Sung Wook
    • KIPS Transactions on Computer and Communication Systems / v.10 no.10 / pp.269-276 / 2021
  • Recently, smart sensors are used in various environments, and the implementation of ad-hoc sensor networks (ASNs) is a hot research topic. Unfortunately, traditional sensor-network routing algorithms focus on specific control issues and cannot be applied directly to ASN operation. In this paper, we propose a new routing protocol based on Q-learning. The main challenge of the proposed approach is to extend the lifespan of ASNs through efficient energy allocation while maintaining balanced system performance. The proposed method enhances the effect of Q-learning by considering various environmental factors. When a transmission fails, a node penalty is accumulated so as to increase the probability of successful communication. In particular, each node stores the Q-values of its adjacent nodes in its own Q-table; every time a data transfer is executed, the Q-values are updated and accumulated so the node learns to select the optimal routing route. Simulation results confirm that the proposed method chooses energy-efficient routing paths and achieves excellent network performance compared with existing ASN routing protocols.
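The per-node Q-table update and penalty-adjusted next-hop choice described above might look like this minimal sketch; the function names, penalty bookkeeping, and epsilon-greedy exploration are assumptions, not the paper's exact protocol:

```python
import random

def update_route_q(q, node, neighbor, reward, next_best, alpha=0.2, gamma=0.9):
    """One Q-learning step for routing: q[node][neighbor] estimates the
    long-run value of forwarding via `neighbor` (names are illustrative)."""
    old = q[node][neighbor]
    q[node][neighbor] = old + alpha * (reward + gamma * next_best - old)
    return q[node][neighbor]

def choose_next_hop(q, node, penalties, eps=0.1):
    """Pick the neighbor with the highest penalty-adjusted Q-value, exploring
    occasionally. `penalties` accumulates failed transmissions per node."""
    neighbors = list(q[node])
    if random.random() < eps:
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: q[node][n] - penalties.get(n, 0.0))
```

A reward that factors in the residual energy of the next hop would steer traffic away from nearly depleted nodes, which is the energy-balancing effect the abstract aims for.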

Region-based Q-learning for Autonomous Mobile Robot Navigation (자율 이동 로봇의 주행을 위한 영역 기반 Q-learning)

  • 차종환;공성학;서일홍
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.174-174 / 2000
  • Q-learning, based on discrete state and action spaces, is the most widely used reinforcement learning method. However, it requires a great deal of memory and learning time to learn all the actions of each state when applied to real mobile-robot navigation with continuous state and action spaces. Region-based Q-learning is a reinforcement learning method that estimates the action values of a real state using a triangular action-distribution model and the relationship with neighboring states that were defined and learned beforehand. This paper proposes a new region-based Q-learning that assigns a reward only when the agent reaches the target and escapes locally optimal paths by adjusting the random-action rate. Applied to mobile-robot navigation, it uses less memory, lets the robot move smoothly, and learns the optimal solution quickly. Computer simulations are presented to show the validity of our method.

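One way to realize the "adjust the random-action rate to escape a locally optimal path" idea from the abstract above is a stall-triggered epsilon schedule; the thresholds and factors here are purely illustrative assumptions:

```python
def adjust_random_rate(eps, steps_since_improvement, stuck_threshold=50,
                       boost=1.5, decay=0.99, eps_min=0.01, eps_max=0.5):
    """Raise the random-action rate when progress stalls, so the agent can
    escape a locally optimal path; otherwise let it decay toward greedy."""
    if steps_since_improvement > stuck_threshold:
        return min(eps * boost, eps_max)
    return max(eps * decay, eps_min)
```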

A Study on the Implementation of Crawling Robot using Q-Learning

  • Hyunki KIM;Kyung-A KIM;Myung-Ae CHUNG;Min-Soo KANG
    • Korean Journal of Artificial Intelligence / v.11 no.4 / pp.15-20 / 2023
  • Machine learning comprises supervised, unsupervised, and reinforcement learning, depending on the type of data and the processing mechanism. Because the input-output relationship is unclear and concrete mathematical modeling is difficult, reinforcement learning is applied to a crawling robot in this paper. In particular, Q-learning is the most effective technique in model-free reinforcement learning. This paper presents a method to implement a crawling robot that finds the optimal crawling motion through trial and error in a dynamic environment using the Q-learning algorithm. The goal is to find, through reinforcement learning, the two motor angles that yield the best performance, and ultimately to maintain mature and stable motion of the EV3 crawling robot. The robot was built with Lego Mindstorms using two motors, an ultrasonic sensor, a brick, and switches, and was implemented with the EV3 Classroom software. By repeating learning three times, a total of 60 data points were acquired, and graphs of the two motor angles versus crawling distance were plotted for better understanding. Applying the Q-learning reinforcement learning algorithm, it was confirmed that the crawling robot found the optimal motor angles and operated according to the trained policy, which points the direction for future research.

Object tracking algorithm of Swarm Robot System for using SVM and Dodecagon based Q-learning (12각형 기반의 Q-learning과 SVM을 이용한 군집로봇의 목표물 추적 알고리즘)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.3 / pp.291-296 / 2008
  • This paper presents a dodecagon-based Q-learning and SVM algorithm for object search with multiple robots. We organized an experimental environment with several mobile robots, obstacles, and an object, then sent the robots into a hallway, where some obstacles were lying about, to search for a hidden object. In the experiments, we used four control methods: a random search; a fusion model with distance-based action making (DBAM) and area-based action making (ABAM) to determine the robots' next actions; hexagon-based Q-learning; and dodecagon-based Q-learning with SVM to enhance the DBAM/ABAM fusion model.

A Q-learning based channel access scheme for cognitive radios (무선 인지 시스템을 위한 Q-learning 기반 채널접근기법)

  • Lee, Young-Doo;Koo, In-Soo
    • Journal of Internet Computing and Services / v.12 no.3 / pp.77-88 / 2011
  • In distributed cognitive radio networks, cognitive radio devices that perform channel sensing individually are seriously affected by the radio channel environment, such as noise, shadowing, and fading, so they cannot properly satisfy the maximum allowable interference level to the primary user. In this paper, we propose a Q-learning based channel access scheme for cognitive radios that satisfies the maximum allowable interference level to the primary user while improving the throughput of the cognitive radio by opportunistically accessing idle channels. In the proposed scheme, the channel-usage pattern of the primary user is learned through Q-learning during a pre-play learning step, and the learned pattern is then utilized to improve sensing performance during the normal operation step. Simulations show that the proposed scheme provides better performance than the conventional energy detector in terms of the interference level to the primary user and the throughput of the cognitive radio under both AWGN and Rayleigh fading channels.
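The two phases the abstract describes (learn the primary user's usage pattern, then exploit it when accessing idle channels) can be sketched as follows; the reward scheme and all names are illustrative assumptions:

```python
def select_channel(q, sensed_idle):
    """Among channels currently sensed idle, access the one with the highest
    learned Q-value (expected availability learned from the primary user's
    usage pattern in the pre-play step); None if nothing is sensed idle."""
    if not sensed_idle:
        return None
    return max(sensed_idle, key=lambda ch: q.get(ch, 0.0))

def update_channel_q(q, ch, collided, alpha=0.1):
    """Reward 1 for an interference-free transmission, -1 for colliding with
    the primary user (this reward scheme is an illustrative assumption)."""
    reward = -1.0 if collided else 1.0
    q[ch] = q.get(ch, 0.0) + alpha * (reward - q.get(ch, 0.0))
    return q[ch]
```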

A Strategy for improving Performance of Q-learning with Prediction Information (예측 정보를 이용한 Q-학습의 성능 개선 기법)

  • Lee, Choong-Hyeon;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society / v.7 no.4 / pp.105-116 / 2007
  • Nowadays, agent learning is increasingly useful in game environments, but it takes a long time to produce satisfactory results, so a good method is needed to shorten the learning time. In this paper, we present a strategy for improving the performance of Q-learning with prediction information. The method records the action chosen at each state in the Q-learning algorithm, stores the referred value in the P-table of a prediction module, and then searches the table for values with high frequency. These values are used to update a second compensation value in the Q-table. Our experiments show an average efficiency improvement of 9% after the midpoint of the learning runs, and that the more actions a state space has, the higher the performance gain.

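The P-table idea in the abstract above (count chosen actions per state, then feed high-frequency actions back into the Q-table) might be sketched like this; the bonus form and thresholds are assumptions rather than the paper's exact scheme:

```python
from collections import Counter, defaultdict

class PredictionModule:
    """P-table sketch: count how often each action is chosen in each state,
    then give frequently chosen actions a small secondary bonus in the
    Q-table (the bonus form and threshold are illustrative assumptions)."""

    def __init__(self, bonus=0.05, min_count=3):
        self.p = defaultdict(Counter)  # state -> Counter of chosen actions
        self.bonus, self.min_count = bonus, min_count

    def record(self, state, action):
        # Called each time Q-learning selects `action` in `state`.
        self.p[state][action] += 1

    def reinforce(self, q, state):
        # Apply the secondary compensation to high-frequency actions.
        for action, count in self.p[state].items():
            if count >= self.min_count:
                q[(state, action)] = q.get((state, action), 0.0) + self.bonus
        return q
```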

Q-learning to improve learning speed using Minimax algorithm (미니맥스 알고리즘을 이용한 학습속도 개선을 위한 Q러닝)

  • Shin, YongWoo
    • Journal of Korea Game Society / v.18 no.4 / pp.99-106 / 2018
  • Board games have many game characters and large state spaces, so learning takes a long time. This paper uses a reinforcement learning algorithm, which has a weakness: at the beginning of learning, its learning speed is slow. We therefore tried to improve the learning speed with a heuristic that uses knowledge of the problem domain, consulting the game tree whenever several actions share the same best value during learning. To compare the existing character with the improved one, we produced a board game in which the improved character competes against a one-sidedly attacking character; the improved character attacks its opponent considering the game tree. Experimental results show that the improved character's learning speed was indeed improved.
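The tie-breaking heuristic the abstract describes (consult the game tree when several actions share the same best Q-value) can be sketched as follows; both inputs are lists indexed by action, and the interface is an illustrative assumption:

```python
def pick_action(q_values, minimax_values):
    """Choose the Q-greedy action; when several actions tie for the best
    Q-value (common early in learning, when the table is mostly zeros),
    break the tie with a game-tree (minimax) evaluation instead of at random."""
    best_q = max(q_values)
    tied = [a for a, v in enumerate(q_values) if v == best_q]
    if len(tied) == 1:
        return tied[0]
    return max(tied, key=lambda a: minimax_values[a])
```

Because ties dominate at the start of learning, this is exactly where the domain-knowledge heuristic speeds things up.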

Q-learning based packet scheduling using Softmax (Softmax를 이용한 Q-learning 기반의 패킷 스케줄링)

  • Kim, Dong-Hyun;Lee, Tae-Ho;Lee, Byung-Jun;Kim, Kyung-Tae;Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.37-38 / 2019
  • In this paper, we propose a Q-learning based packet scheduling scheme using Softmax to improve scheduling accuracy in resource-constrained IoT environments. The e-greedy method is often used to balance exploitation and exploration in conventional Q-learning, but during exploration e-greedy may select the worst possible action. To solve this problem, this study uses Softmax to improve the accuracy of meeting the Quality of Service (QoS) requirements of data packets in a multi-sensor-node environment. A temperature parameter is used, which governs how strongly new policies are explored. Simulations show that the proposed Softmax-based Q-learning packet scheduling scheme outperforms the conventional e-greedy based Q-learning scheme in terms of scheduling accuracy.

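The softmax (Boltzmann) action selection the abstract contrasts with e-greedy can be sketched as follows; unlike e-greedy, the probability of exploring an action falls with its Q-value, so the worst action is rarely chosen. The helper name and temperature value are illustrative:

```python
import math
import random

def softmax_action(q_values, temperature=1.0, rng=random):
    """Sample an action with probability proportional to exp(Q/T).
    A high temperature explores broadly; a low one is nearly greedy."""
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, cum = rng.random(), 0.0
    for a, p in enumerate(probs):
        cum += p
        if r < cum + 1e-12:
            return a
    return len(probs) - 1
```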