• Title/Summary/Keyword: Markov decision process (MDP)

Search results: 35

A Study of Adaptive QoS Routing scheme using Policy-gradient Reinforcement Learning (정책 기울기 값 강화학습을 이용한 적응적인 QoS 라우팅 기법 연구)

  • Han, Jeong-Soo
    • Journal of the Korea Society of Computer and Information / v.16 no.2 / pp.93-99 / 2011
  • In this paper, we propose a policy-gradient reinforcement learning (RL) routing scheme that can be used for adaptive QoS routing. Policy-gradient RL routing learns the network environment quickly by updating the policy with gradient estimates of the average reward. This fast learning of the network environment results in a high routing success rate. To demonstrate this, we run simulations and compare the scheme with three different schemes.
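
The policy-gradient update described above maps naturally onto a REINFORCE-style rule with an average-reward baseline. Below is a minimal, hypothetical sketch; the next-hop count, per-hop success rates, and learning rates are illustrative assumptions, not the paper's implementation.

```python
# Minimal REINFORCE-style sketch of policy-gradient next-hop selection
# with an average-reward baseline. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_next_hops = 4                  # candidate next hops at a node (assumed)
theta = np.zeros(n_next_hops)    # softmax policy parameters
avg_reward = 0.0                 # running average-reward baseline
alpha, beta = 0.1, 0.01          # policy / baseline learning rates (assumed)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def route_succeeds(hop):
    # Stand-in for the network: each next hop has a fixed success rate.
    return rng.random() < [0.2, 0.5, 0.9, 0.4][hop]

for episode in range(5000):
    probs = softmax(theta)
    hop = rng.choice(n_next_hops, p=probs)
    reward = 1.0 if route_succeeds(hop) else 0.0

    grad_log_pi = -probs                 # gradient of log pi(hop | theta)
    grad_log_pi[hop] += 1.0
    theta += alpha * (reward - avg_reward) * grad_log_pi
    avg_reward += beta * (reward - avg_reward)

print("learned next-hop probabilities:", softmax(theta).round(3))
```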

Reinforcement Learning-based Dynamic Weapon Assignment to Multi-Caliber Long-Range Artillery Attacks (다종 장사정포 공격에 대한 강화학습 기반의 동적 무기할당)

  • Hyeonho Kim;Jung Hun Kim;Joohoe Kong;Ji Hoon Kyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.42-52 / 2022
  • North Korea continues to upgrade and display its long-range rocket launchers to emphasize its military strength. Recently, the Republic of Korea kicked off the development of an anti-artillery interception system, similar to Israel's "Iron Dome", designed to protect against North Korea's arsenal of long-range rockets. The system cannot work smoothly without a function that assigns interceptors to incoming artillery rockets of various calibers. We view this assignment task as a dynamic weapon target assignment (DWTA) problem. DWTA is a multistage decision process in which a decision at one stage affects the decisions and outcomes at subsequent stages. We represent the DWTA problem as a Markov decision process (MDP). The short distance from Seoul to North Korea's multiple rocket launchers positioned near the border limits the processing time of the model solver to only a few seconds. Computing the exact optimal solution within the allowed time interval is impossible due to the curse of dimensionality inherent in MDP models of practical DWTA problems. We apply two reinforcement learning-based algorithms to obtain approximate solutions of the MDP model within the time limit. To check the quality of the approximate solutions, we adopt the Shoot-Shoot-Look (SSL) policy as a baseline. Simulation results show that both algorithms provide better solutions than the baseline strategy.
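
As a concrete (and heavily simplified) illustration of casting DWTA as an MDP, the sketch below solves a toy engagement with tabular Q-learning: one rocket arrives per stage and the defender chooses a salvo size against it. The kill probability, stage count, and interceptor stock are assumptions; the paper's actual state and action spaces are far larger.

```python
# Toy DWTA-as-MDP sketch solved with tabular Q-learning: the state is
# (stage, interceptors remaining), the action is the salvo size fired at
# the incoming rocket. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
p_kill = 0.7            # single-shot kill probability (assumed)
n_stages = 5            # incoming rockets per engagement (assumed)
max_stock = 8           # interceptors available (assumed)

Q = np.zeros((n_stages, max_stock + 1, 3))   # Q[stage, stock, salvo size]
alpha, eps = 0.1, 0.1

for episode in range(20000):
    stock = max_stock
    for t in range(n_stages):
        feasible = range(min(2, stock) + 1)
        if rng.random() < eps:
            a = rng.choice(list(feasible))
        else:
            a = max(feasible, key=lambda x: Q[t, stock, x])
        # Reward 1 if at least one of the a interceptors hits the rocket.
        r = 1.0 if rng.random() < 1 - (1 - p_kill) ** a else 0.0
        nxt = stock - a
        boot = Q[t + 1, nxt, :min(2, nxt) + 1].max() if t < n_stages - 1 else 0.0
        Q[t, stock, a] += alpha * (r + boot - Q[t, stock, a])
        stock = nxt

print("salvo size at stage 0 by remaining stock:", Q[0].argmax(axis=1))
```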

A Localized Adaptive QoS Routing Scheme Using POMDP and Exploration Bonus Techniques (POMDP와 Exploration Bonus를 이용한 지역적이고 적응적인 QoS 라우팅 기법)

  • Han, Jeong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.3B / pp.175-182 / 2006
  • In this paper, we propose a localized adaptive QoS routing scheme using POMDP and exploration-bonus techniques. We also show that the CEA technique, which uses expectation values, can simplify the POMDP problem, since solving a POMDP by dynamic programming is computationally very expensive. In addition, we use an exploration bonus to search for detour paths better than the current path, and for this we propose an algorithm (SEMA) to search multiple paths. In particular, we evaluate the service success rate and average hop count with respect to the performance parameters $\phi$ and $k$, defined as the exploration count and interval, respectively. The results show that the larger $\phi$ is, the better the detour-path search, and that increasing $k$ increases the amount of exploration.
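
The exploration-bonus idea can be sketched simply: a path's value estimate is inflated by a bonus that grows with the time since the path was last tried. The sketch below is illustrative only; the path count, hidden success rates, and bonus form are assumptions, not SEMA itself.

```python
# Hypothetical exploration-bonus path selection: pick the path maximizing
# (estimated success rate + bonus), where the bonus grows while a path
# sits untried. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_paths = 3
q = np.zeros(n_paths)            # estimated service success rate per path
last_tried = np.zeros(n_paths)   # step at which each path was last tried
phi = 0.5                        # exploration-bonus weight (assumed)
true_p = [0.6, 0.8, 0.7]         # hidden per-path success rates (assumed)

for t in range(1, 2001):
    bonus = phi * np.sqrt(t - last_tried)   # grows while a path sits unused
    i = int(np.argmax(q + bonus))
    success = rng.random() < true_p[i]
    q[i] += 0.1 * (success - q[i])          # running expectation estimate
    last_tried[i] = t

print("estimated success rates:", q.round(2))
```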

Approximate Dynamic Programming Based Interceptor Fire Control and Effectiveness Analysis for M-To-M Engagement (근사적 동적계획을 활용한 요격통제 및 동시교전 효과분석)

  • Lee, Changseok;Kim, Ju-Hyun;Choi, Bong Wan;Kim, Kyeongtaek
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.4 / pp.287-295 / 2022
  • As the low-altitude, long-range artillery threat has grown, the development of an anti-artillery interception system to protect assets against such attacks is being kicked off. We view the defense against long-range artillery attacks as a typical dynamic weapon target assignment (DWTA) problem. DWTA is a sequential decision process in which decisions made under uncertainty about future attacks affect the subsequent decisions and their outcomes. These are the typical characteristics of a Markov decision process (MDP). We formulate the problem as an MDP model to examine the assignment policy for the defender. The proximity of the South Korean capital to the North Korean border limits the computation time for a solution to a few seconds. Within the allowed time interval, it is impossible to compute the exact optimal solution. We apply an approximate dynamic programming (ADP) approach to check whether ADP can solve the MDP model within the processing-time limit. We employ the Shoot-Shoot-Look (SSL) policy as a baseline strategy and compare it with the ADP approach for three scenarios. Simulation results show that the ADP approach provides better solutions than the baseline strategy.
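
The core ADP idea, acting by one-step lookahead against a learned approximate value function, can be sketched as below (a simple flavor of approximate value iteration). The value table over remaining interceptor stock, the smoothing rate, and all engagement parameters are assumptions for illustration, not the paper's model.

```python
# Hypothetical ADP sketch: learn an approximate value V[stock] of holding
# interceptors, acting by one-step lookahead against it.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
p_kill, max_stock, n_stages = 0.7, 8, 5   # assumed engagement parameters

V = np.zeros(max_stock + 1)   # approximate value of remaining stock

def lookahead(stock):
    # Pick the salvo size (0-2) maximizing immediate expected reward plus
    # the approximate value of the post-decision stock level.
    vals = [(1 - (1 - p_kill) ** a) + V[stock - a]
            for a in range(min(2, stock) + 1)]
    return int(np.argmax(vals)), max(vals)

for it in range(5000):
    stock = max_stock
    for stage in range(n_stages):
        a, v_hat = lookahead(stock)
        V[stock] += 0.05 * (v_hat - V[stock])       # smooth toward lookahead value
        if rng.random() < 0.2:                      # occasional exploration so all
            a = rng.integers(0, min(2, stock) + 1)  # stock levels get visited
        stock -= a

print("approximate value of each stock level:", V.round(2))
```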

R-Trader: An Automatic Stock Trading System Based on Reinforcement Learning (R-Trader: 강화 학습에 기반한 자동 주식 거래 시스템)

  • Lee, Jae-Won;Kim, Sung-Dong;Lee, Jong-Woo;Chae, Jin-Seok
    • Journal of KIISE: Software and Applications / v.29 no.11 / pp.785-794 / 2002
  • Automatic stock trading systems should be able to solve various kinds of optimization problems, such as market trend prediction, stock selection, and trading strategies, in a unified framework. However, most previous trading systems based on supervised learning are limited in their ultimate performance because they do not focus on integrating these subproblems. This paper proposes a stock trading system, called R-Trader, based on reinforcement learning, regarding the process of stock price changes as a Markov decision process (MDP). Reinforcement learning is suitable for the joint optimization of predictions and trading strategies. R-Trader adopts two popular reinforcement learning algorithms, temporal-difference (TD) learning and Q-learning, for selecting stocks and optimizing other trading parameters, respectively. Technical analysis is also adopted to devise the input features of the system, and the value functions are approximated by feedforward neural networks. Experimental results on the Korean stock market show that the proposed system outperforms both the market average and a simple trading system trained by supervised learning, in terms of both profit and risk management.
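
A stripped-down version of the TD component might look like the following: TD(0) with a linear value approximator over simple technical features. The synthetic price series, the two moving-average features, and the linear model (a stand-in for the paper's feedforward networks) are all assumptions.

```python
# Minimal TD(0) sketch with a linear value approximator over technical
# features. Not R-Trader's actual features, network, or data.
import numpy as np

rng = np.random.default_rng(4)
prices = np.cumsum(rng.normal(0.05, 0.5, 1000)) + 100   # synthetic prices

def features(t):
    # Bias term plus two moving-average-based technical signals.
    ma5 = prices[t - 5:t].mean()
    ma20 = prices[t - 20:t].mean()
    return np.array([1.0, (prices[t - 1] - ma5) / ma5, (ma5 - ma20) / ma20])

w = np.zeros(3)            # linear value-function weights
alpha, gamma = 0.01, 0.99

for t in range(20, len(prices) - 1):
    x, x_next = features(t), features(t + 1)
    r = (prices[t] - prices[t - 1]) / prices[t - 1]      # one-step return
    td_error = r + gamma * w @ x_next - w @ x
    w += alpha * td_error * x

print("learned value-function weights:", w.round(4))
```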

A Study on Deep Reinforcement Learning Framework for DME Pulse Design

  • Lee, Jungyeon;Kim, Euiho
    • Journal of Positioning, Navigation, and Timing / v.10 no.2 / pp.113-120 / 2021
  • Distance Measuring Equipment (DME) is a ground-based aircraft navigation system and is considered an infrastructure that ensures resilient aircraft navigation capability in the event of a Global Navigation Satellite System (GNSS) outage. The main problem of DME as a GNSS backup is its poor positioning accuracy, with errors often exceeding 100 m. In this paper, a novel approach that applies deep reinforcement learning to DME pulse design is introduced to improve DME distance-measuring accuracy. This method is designed to develop multipath-resistant DME pulses that comply with current DME specifications. In this research, a Markov decision process (MDP) for DME pulse design is set up using pulse-shape requirements and a timing error. Based on the designed MDP, we created an environment called PulseEnv, which allows an agent representing a DME pulse shape to explore a continuous space using the Soft Actor-Critic (SAC) reinforcement learning algorithm.
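
The abstract's PulseEnv suggests a standard reset/step environment interface with a continuous action. The skeleton below is a guess at its shape: the Gaussian pulse parameterization, the target half-amplitude width, and the reward definition are all invented for illustration and may differ from the paper's environment.

```python
# Hypothetical skeleton of a PulseEnv-like environment: the continuous
# action nudges a Gaussian-pulse width parameter, and the reward penalizes
# deviation from a target pulse width. Purely illustrative.
import numpy as np

class PulseEnv:
    def __init__(self, target_width=3.5e-6):
        self.target_width = target_width   # target half-amplitude width (assumed)
        self.sigma = 1.0e-6                # current pulse parameter

    def reset(self):
        self.sigma = 1.0e-6
        return np.array([self.sigma])

    def step(self, action):
        # Continuous action adjusts the pulse width parameter.
        self.sigma = float(np.clip(self.sigma + action, 1e-7, 1e-5))
        width = 2.355 * self.sigma                     # FWHM of a Gaussian pulse
        error = abs(width - self.target_width)         # stand-in error metric
        reward = -error * 1e6                          # reward in microseconds
        done = error < 1e-8
        return np.array([self.sigma]), reward, done, {}

# Usage sketch: an SAC agent would supply the continuous actions here.
env = PulseEnv()
obs = env.reset()
obs, reward, done, info = env.step(0.5e-6)
```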

Robust Scheduling based on Daily Activity Learning by using Markov Decision Process and Inverse Reinforcement Learning (강건한 스케줄링을 위한 마코프 의사결정 프로세스 추론 및 역강화 학습 기반 일상 행동 학습)

  • Lee, Sang-Woo;Kwak, Dong-Hyun;On, Kyoung-Woon;Heo, Yujung;Kang, Wooyoung;Cinarel, Ceyda;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.10 / pp.599-604 / 2017
  • A useful application of smart assistants is to predict and suggest users' daily behaviors the way real assistants do. Conventional methods for predicting behavior have mainly used explicit schedule information logged by a user or extracted from e-mail or SNS data. However, gathering explicit information for smart assistants has limitations, and much of a user's routine behavior is never logged in the first place. In this paper, we suggest a novel approach that combines explicit schedule information with patterns of routine behavior. We propose inference based on a Markov decision process and learning of a reward function based on inverse reinforcement learning. The results of our experiments show that the proposed method outperforms comparable models on a life-log dataset collected over six weeks.
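
Inverse reinforcement learning here recovers a reward that explains logged behavior; a simplistic feature-matching sketch of that idea is given below. The activity set, the expert distribution, and the update rule are illustrative assumptions, not the paper's formulation.

```python
# Simplistic feature-matching IRL sketch: adjust linear reward weights so
# the current policy's average state features move toward the expert's.
# Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n_activities = 4                 # e.g., sleep, work, meal, leisure (assumed)
expert = rng.choice(n_activities, size=200, p=[0.3, 0.4, 0.2, 0.1])

def feature_expectation(traj):
    # Empirical frequency of each activity in a trajectory.
    f = np.zeros(n_activities)
    for s in traj:
        f[s] += 1
    return f / len(traj)

w = np.zeros(n_activities)       # linear reward weights over activities
mu_expert = feature_expectation(expert)

for it in range(100):
    # Sample a trajectory from a softmax policy under the current reward.
    policy = np.exp(w) / np.exp(w).sum()
    traj = rng.choice(n_activities, size=200, p=policy)
    mu_policy = feature_expectation(traj)
    w += 0.5 * (mu_expert - mu_policy)   # move reward toward expert features

print("learned reward weights:", w.round(2))
```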

Optimal LNG Procurement Policy in a Spot Market Using Dynamic Programming (동적 계획법을 이용한 LNG 현물시장에서의 포트폴리오 구성방법)

  • Ryu, Jong-Hyun
    • Journal of Korean Institute of Industrial Engineers / v.41 no.3 / pp.259-266 / 2015
  • Among many energy resources, natural gas has recently received a remarkable amount of attention, particularly from the electricity generation industry. This is partly due to increasing shale gas production, which provides a more environment-friendly fossil fuel, and to the high risk of nuclear power. Because South Korea, the world's second-largest LNG-importing nation after Japan, has no international natural gas pipelines and relies on imports in the form of LNG, natural gas has traditionally been procured through long-term LNG contracts at relatively high prices. Thus, there is a need to develop an Asian LNG trading hub, where LNG can be traded at more competitive spot prices. In a natural gas spot market, the amount of natural gas to buy should be carefully determined, considering the limited storage capacity and future price dynamics. In this work, the problem of finding the optimal amount of natural gas to buy in a spot market is formulated as a Markov decision process (MDP) in a risk-neutral setting, and an optimal base-stock policy, which depends on the stage and price, is established. Taking price and demand uncertainties into account, the base-stock target levels are approximated by dynamic programming. The simulation results show that the base-stock policy can be an effective way to procure LNG in a spot market.
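
A base-stock policy of this kind can be computed by backward induction; the sketch below does so for a toy two-state spot-price process. The capacity, prices, transition probabilities, demand, and shortage penalty are all illustrative assumptions, not the paper's data.

```python
# Toy base-stock DP sketch: backward induction over stages and a two-state
# spot-price process yields an order-up-to level per (stage, price state).
import numpy as np

T, cap = 6, 10                    # stages and storage capacity (assumed)
prices = np.array([8.0, 12.0])    # low/high spot prices (assumed)
P = np.array([[0.7, 0.3],         # price-state transition matrix (assumed)
              [0.4, 0.6]])
demand = 3                        # demand per stage (assumed deterministic)
shortage = 20.0                   # penalty per unit of unmet demand (assumed)

V = np.zeros((T + 1, cap + 1, 2))     # cost-to-go V[stage, inventory, price]
base = np.zeros((T, 2), dtype=int)    # order-up-to level per (stage, price)

for t in range(T - 1, -1, -1):
    for p in range(2):
        # G(y): cost of raising inventory to y, serving demand, continuing.
        G = [prices[p] * y + shortage * max(demand - y, 0)
             + P[p] @ V[t + 1, max(y - demand, 0)]
             for y in range(cap + 1)]
        base[t, p] = int(np.argmin(G))
        for s in range(cap + 1):
            y = max(base[t, p], s)    # buy up to the base-stock level
            V[t, s, p] = (prices[p] * (y - s) + shortage * max(demand - y, 0)
                          + P[p] @ V[t + 1, max(y - demand, 0)])

print("order-up-to levels (rows: stages, cols: price states):\n", base)
```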

Deep Q-Network based Game Agents (심층 큐 신경망을 이용한 게임 에이전트 구현)

  • Han, Dongki;Kim, Myeongseop;Kim, Jaeyoun;Kim, Jung-Su
    • The Journal of Korea Robotics Society / v.14 no.3 / pp.157-162 / 2019
  • The video game Tetris is one of the most popular games, and it is well known that its game rules can be modeled as a Markov decision process (MDP). This paper presents a DQN (Deep Q-Network) based game agent for Tetris. To this end, the state is defined as the captured image of the Tetris game board, and the reward is designed as a function of the lines cleared by the game agent. The actions are left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed to enhance learning performance. To train the network, more than 500,000 episodes are used. The game agent employs the trained network to make its decisions. The performance of the developed algorithm is validated not only in simulation but also on a real Tetris robot agent built from a camera, two Arduinos, four servo motors, and 3D-printed artificial fingers.
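
Of the components listed, prioritized experience replay is the easiest to isolate: transitions are sampled in proportion to their TD error. Below is a minimal, framework-free sketch of proportional PER; the Q-network and Tetris environment are stubbed out, and the hyperparameters are assumptions.

```python
# Minimal proportional PER sketch: new transitions get maximum priority,
# sampling probability follows priority**alpha, and priorities are updated
# from the TD errors of each training batch. Importance-sampling weights
# are omitted for brevity.
import numpy as np

rng = np.random.default_rng(6)

class PERBuffer:
    def __init__(self, capacity, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prio = [], []

    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        self.prio.append(max(self.prio, default=1.0))  # new samples: max priority

    def sample(self, batch_size):
        p = np.array(self.prio) ** self.alpha
        p /= p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prio[i] = abs(e) + self.eps   # larger TD error -> sampled more

# Usage sketch: after each Q-network update, feed the batch TD errors back.
buf = PERBuffer(capacity=10000)
for t in range(100):
    buf.add(("state", "action", 0.0, "next_state"))   # placeholder transition
idx, batch = buf.sample(32)
buf.update(idx, td_errors=rng.normal(size=32))
```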

Online Adaptation of Control Parameters with Safe Exploration by Control Barrier Function (제어 장벽함수를 이용한 안전한 행동 영역 탐색과 제어 매개변수의 실시간 적응)

  • Kim, Suyeong;Son, Hungsun
    • The Journal of Korea Robotics Society / v.17 no.1 / pp.76-85 / 2022
  • One of the most fundamental challenges when designing controllers for dynamic systems is the adjustment of the controller parameters. Usually the system model is used to obtain an initial controller, but eventually the controller parameters must be manually tuned on the real system to achieve the best performance. To avoid this manual tuning step, data-driven methods such as machine learning have been used. Recently, reinforcement learning has become an alternative, in which an agent learns policies in a large state space through a trial-and-error Markov decision process (MDP), an approach widely used in robotics. However, during the initial training steps, as the agent explores new regions of the state space with random actions and acts directly on the controller parameters of a real system, this exploration can lead to safety-critical system failures. The issue of "safe exploration" has therefore become important. In this paper, we meet the safe-exploration condition with a control barrier function (CBF), which converts direct constraints on the state space into implicit constraints on the control inputs. Given an initial low-performance controller, the method automatically optimizes the parameters of the control law while the CBF ensures safety, so that the agent can learn to predict and control unknown and often stochastic environments. Simulation results on a quadrotor UAV indicate that the proposed method can safely optimize the controller parameters quickly and automatically.
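
On a one-dimensional toy system, the CBF condition reduces to a closed-form clipping of the exploratory action; the sketch below illustrates this safety-filter idea. The single-integrator dynamics, barrier choices, and gains are assumptions, far simpler than the paper's quadrotor setting.

```python
# Minimal CBF safety-filter sketch on a 1D single integrator (x_dot = u):
# the barriers h1 = x - x_min and h2 = x_max - x keep the state in a box
# by clipping the RL exploration action. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
x, x_min, x_max = 0.0, -1.0, 1.0
gamma, dt = 2.0, 0.01            # CBF gain and integration step (assumed)

for step in range(2000):
    u_rl = rng.normal(0.0, 3.0)  # aggressive random exploration action
    # CBF condition h_dot >= -gamma * h for each barrier bounds the input:
    u_lo = -gamma * (x - x_min)  # from h1 = x - x_min, h1_dot = u
    u_hi = gamma * (x_max - x)   # from h2 = x_max - x, h2_dot = -u
    u = float(np.clip(u_rl, u_lo, u_hi))
    x += dt * u

print("final state (stays within [-1, 1]):", round(x, 3))
```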