• Title/Summary/Keyword: Markov decision problem (MDP)

Markov Decision Process-based Potential Field Technique for UAV Planning

  • Moon, Chaehwan;Ahn, Jaemyung
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.25 no.4 / pp.149-161 / 2021
  • This study proposes a methodology for mission/path planning of an unmanned aerial vehicle (UAV) using an artificial potential field with the Markov Decision Process (MDP). The planning problem is formulated as an MDP. A low-resolution solution of the MDP is obtained and used to define an artificial potential field, which provides a continuous UAV mission plan. A numerical case study is conducted to demonstrate the validity of the proposed technique.
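
As a rough illustration of the general idea in this abstract (not the authors' implementation), the sketch below solves a small coarse-grid MDP by value iteration and exposes the negated value function, via bilinear interpolation, as a continuous artificial potential field; the grid size, goal, obstacles, and reward values are arbitrary placeholders.

```python
# Minimal sketch: coarse grid-world MDP solved by value iteration, then the
# negated value function is interpolated to act as a continuous potential field.
import numpy as np

N = 10                                   # coarse grid resolution (assumed)
GOAL = (9, 9)                            # goal cell (placeholder)
OBSTACLES = {(4, 4), (4, 5), (5, 4)}     # obstacle cells (placeholder)
GAMMA = 0.95
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step_reward(cell):
    if cell == GOAL:
        return 1.0
    if cell in OBSTACLES:
        return -1.0
    return -0.01                         # small movement cost

V = np.zeros((N, N))
for _ in range(200):                     # value iteration sweeps
    V_new = np.copy(V)
    for i in range(N):
        for j in range(N):
            if (i, j) == GOAL:
                V_new[i, j] = 0.0
                continue
            best = -np.inf
            for di, dj in ACTIONS:
                ni = min(max(i + di, 0), N - 1)
                nj = min(max(j + dj, 0), N - 1)
                best = max(best, step_reward((ni, nj)) + GAMMA * V[ni, nj])
            V_new[i, j] = best
    V = V_new

def potential(x, y):
    """Bilinear interpolation of -V: low near the goal, high near obstacles."""
    i, j = int(np.clip(x, 0, N - 2)), int(np.clip(y, 0, N - 2))
    fx, fy = x - i, y - j
    f = -V
    return ((1 - fx) * (1 - fy) * f[i, j] + fx * (1 - fy) * f[i + 1, j]
            + (1 - fx) * fy * f[i, j + 1] + fx * fy * f[i + 1, j + 1])

# A UAV plan can then descend this potential, e.g. by following its numerical gradient.
```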

A Markov Decision Processes Formulation for the Linear Search Problem

  • Balkhi, Z.T.;Benkherouf, L.
    • Journal of the Korean Operations Research and Management Science Society / v.19 no.1 / pp.201-206 / 1994
  • The linear search problem is concerned with finding a hidden target on the real line R. The position of the target is governed by some probability distribution, and it is desired to find the target in the least expected search time. This problem has been formulated as an optimization problem by a number of authors without making use of Markov Decision Process (MDP) theory. The aim of this paper is to give an MDP formulation of the search problem which we feel is both natural and easy to follow.
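
For readers unfamiliar with the problem, the following is one plausible way to write such an MDP formulation; it is only a sketch under assumed state, action, and cost definitions, and the paper's exact formulation may differ.

```latex
% One plausible MDP formulation of the linear search problem (a sketch only;
% the paper's exact formulation may differ).
% State: the searched interval [a, b] with a <= 0 <= b and the searcher's
% current endpoint e; the target is hidden at a point of R with distribution F.
% Action: the next turning point x, which extends the search past the far end.
% Stage cost: the distance travelled before the next turn.
\begin{align*}
  s &= (a, b, e), \qquad a \le 0 \le b, \quad e \in \{a, b\},\\
  x &> b \ \text{if } e = a, \qquad x < a \ \text{if } e = b,\\
  c(s, x) &= |e - x|,\\
  \text{objective:} \quad & \min_{\pi} \ \mathbb{E}_{\pi}\Big[\sum_{k \ge 0} c(s_k, x_k)\Big],
\end{align*}
% where the process terminates as soon as the newly covered segment contains
% the target, and the expectation is taken over F and the policy \pi.
```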

A Markov Decision Process (MDP) based Load Balancing Algorithm for Multi-cell Networks with Multi-carriers

  • Yang, Janghoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.10 / pp.3394-3408 / 2014
  • Conventional mobile station (MS) and base station (BS) association based on average signal strength often results in an imbalanced cell load, which may require more powerful processors at BSs and degrade the perceived transmission rates of MSs. To deal with this problem, a Markov decision process (MDP) for load balancing in a multi-cell system with multiple carriers is formulated. To solve it, an $\alpha$-controllable load balancing algorithm is proposed, exploiting the on-line learning Sarsa algorithm [12]. It is designed to control the tradeoff between the cell load deviation of BSs and the perceived transmission rates of MSs. We also propose an $\varepsilon$-differential soft greedy policy for on-line learning, which is proven to be asymptotically convergent to the optimal greedy policy under certain conditions. Simulation results verify that the $\alpha$-controllable load balancing algorithm adjusts its behavior depending on the choice of $\alpha$, and that it is very efficient in balancing the cell loads of BSs when $\alpha$ is small.
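
For context, a generic tabular Sarsa update of the kind the abstract builds on looks roughly as follows; the `env` interface (`reset`, `step`, `actions`) is an assumption of this sketch, and the paper's $\alpha$-controllable reward shaping and $\varepsilon$-differential soft greedy policy are not reproduced here.

```python
# Generic on-policy Sarsa with a plain epsilon-greedy policy (illustration only).
import random
from collections import defaultdict

def sarsa(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """env is assumed to expose reset() -> state, step(a) -> (state, reward, done),
    and a list env.actions of available actions."""
    Q = defaultdict(float)

    def policy(s):
        # epsilon-greedy over the current Q estimates
        if random.random() < epsilon:
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = policy(s2)
            # on-policy update: bootstrap with the action actually taken next
            target = r + gamma * (0.0 if done else Q[(s2, a2)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q
```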

Topological measures for algorithm complexity of Markov decision processes (마르코프 결정 프로세스의 위상적 계산 복잡도 척도)

  • Yi, Seung-Joon;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference / 2007.06c / pp.319-323 / 2007
  • Many real-world problems can be expressed as Markov decision problems (MDPs), which can be solved with value iteration when the model is known, or with reinforcement learning algorithms when the model is unknown. However, the high time complexity of these algorithms makes them difficult to apply to large real-world problems, so temporal abstraction methods have been proposed, such as decomposing the MDP hierarchically or executing several steps as a single unit. A drawback of these methods is that the performance of solving the MDP can vary greatly with the design of the temporal abstraction, which in many cases must be supplied directly by the user. Methods that construct temporal abstractions automatically, without user intervention, have recently been proposed, but they still provide no theoretical performance guarantee on the result. To address this issue, this study examines complexity measures that relate the structure of an MDP to its solution performance. To this end, we measure topological properties of the state trajectory graph obtained from an MDP using various network measurements, and analyze their relationship to the MDP's solution performance both experimentally and theoretically over a variety of settings.
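
As a small illustration of the kind of measurement described (using the networkx library; the transition list below is a made-up placeholder), one can build a state trajectory graph from sampled transitions and compute a network measure such as the mean geodesic (shortest-path) distance:

```python
# Build a state trajectory graph from sampled (state, next_state) transitions
# and compute its mean geodesic distance.
import networkx as nx

transitions = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (2, 0)]  # placeholder samples

G = nx.DiGraph()
G.add_edges_from(transitions)

if nx.is_strongly_connected(G):
    print("mean geodesic distance:", nx.average_shortest_path_length(G))
else:
    print("graph not strongly connected; the measure is defined per component")
```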

A Semi-Markov Decision Process (SMDP) for Active State Control of A Heterogeneous Network

  • Yang, Janghoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.7 / pp.3171-3191 / 2016
  • Due to the growing demand for wireless data traffic, a large number of different types of base stations (BSs) have been installed. However, space-time dependent wireless data traffic densities can result in a significant number of idle BSs, which implies a waste of power resources. To deal with this problem, we propose an active state control algorithm based on a semi-Markov decision process (SMDP) for a heterogeneous network. An MDP in the discrete time domain is formulated from the continuous-time domain with some approximation. A suboptimal on-line learning algorithm with a random policy is proposed to solve the problem. We explicitly include a coverage constraint so that the active cells provide the same signal-to-noise ratio (SNR) coverage with a targeted outage rate. Simulation results verify that the proposed algorithm properly controls the active states depending on traffic densities, without increasing the number of handovers excessively, while providing the average user perceived rate (UPR) in a more power-efficient way than a conventional algorithm.
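
The coverage constraint mentioned in the abstract can be illustrated, very loosely, by a Monte-Carlo check of the following form; the path-loss model, powers, thresholds, and geometry below are placeholders, not the paper's.

```python
# Keep a candidate set of active BSs only if the fraction of sample points whose
# best SNR falls below a threshold stays under the target outage rate.
import math
import random

def snr_db(user, bs, tx_power_db=30.0, noise_db=-90.0, exponent=3.5):
    # crude log-distance path loss; SNR = received power minus noise (all in dB)
    d = max(math.dist(user, bs), 1.0)
    return tx_power_db - 10 * exponent * math.log10(d) - noise_db

def satisfies_coverage(active_bss, users, snr_min_db=3.0, target_outage=0.05):
    outages = sum(1 for u in users
                  if max(snr_db(u, b) for b in active_bss) < snr_min_db)
    return outages / len(users) <= target_outage

users = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(1000)]
print(satisfies_coverage([(100, 100), (400, 400)], users))
```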

A Simulation Sample Accumulation Method for Efficient Simulation-based Policy Improvement in Markov Decision Process (마르코프 결정 과정에서 시뮬레이션 기반 정책 개선의 효율성 향상을 위한 시뮬레이션 샘플 누적 방법 연구)

  • Huang, Xi-Lang;Choi, Seon Han
    • Journal of Korea Multimedia Society / v.23 no.7 / pp.830-839 / 2020
  • As a popular mathematical framework for modeling decision making, the Markov decision process (MDP) has been widely used to solve problems in many engineering fields. An MDP consists of a set of discrete states, a finite set of actions, and rewards received after reaching a new state by taking an action from the previous state. The objective is to find an optimal policy, that is, the best action to take in each state so as to maximize the expected discounted reward (EDR) of the policy. In practice, the MDP model is typically unknown, so simulation-based policy improvement (SBPI), which sequentially improves a given base policy by selecting the best action in each state based on rewards observed via simulation, can be a practical way to find the optimal policy. However, the efficiency of SBPI is still a concern, since many simulation samples are required to precisely estimate the EDR for each action in each state. In this paper, we propose a method to select the best action accurately in each state using a small number of simulation samples, thereby improving the efficiency of SBPI. The proposed method accumulates the simulation samples observed in previously visited states, so the EDR can be estimated precisely even with a small number of samples in the current state. Comparative experiments against the existing method demonstrate that the proposed method improves the efficiency of SBPI.
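
A rough sketch of baseline simulation-based policy improvement, against which the paper's sample-accumulation idea is an optimization, might look as follows; the `simulate` function and `base_policy` mapping are assumed interfaces, and the accumulation scheme itself is not reproduced.

```python
# Baseline SBPI: estimate the EDR of each action in each state by simulation,
# following the base policy after the first step, and pick the best action.
def estimate_edr(simulate, state, action, base_policy, n_samples=30, gamma=0.95, horizon=50):
    """simulate(state, action) is assumed to return (next_state, reward)."""
    total = 0.0
    for _ in range(n_samples):
        s, a, ret, disc = state, action, 0.0, 1.0
        for _ in range(horizon):
            s, r = simulate(s, a)
            ret += disc * r
            disc *= gamma
            a = base_policy[s]          # follow the base policy after the first action
        total += ret
    return total / n_samples

def improve_policy(simulate, states, actions, base_policy, **kw):
    # switch each state to the action with the highest estimated EDR
    return {s: max(actions, key=lambda a: estimate_edr(simulate, s, a, base_policy, **kw))
            for s in states}
```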

Determination of Ship Collision Avoidance Path using Deep Deterministic Policy Gradient Algorithm (심층 결정론적 정책 경사법을 이용한 선박 충돌 회피 경로 결정)

  • Kim, Dong-Ham;Lee, Sung-Uk;Nam, Jong-Ho;Furukawa, Yoshitaka
    • Journal of the Society of Naval Architects of Korea / v.56 no.1 / pp.58-65 / 2019
  • The stability, reliability and efficiency of a smart ship are important issues, as interest in autonomous ships has recently grown. An automatic collision avoidance system is an essential function of an autonomous ship. This system detects the possibility of collision and automatically takes avoidance actions in consideration of economy and safety. In this work, in order to construct an automatic collision avoidance system using reinforcement learning, the sequential decision problem of ship collision avoidance is mathematically formulated as a Markov Decision Process (MDP). A reinforcement learning environment is constructed based on the ship maneuvering equations, and the three key components (state, action, and reward) of the MDP are defined. The state uses parameters of the relationship between own-ship and target-ship, the action is the vertical distance away from the target course, and the reward is defined as a function considering safety and economics. To solve the sequential decision problem, the Deep Deterministic Policy Gradient (DDPG) algorithm, which can handle a continuous action space and search for an optimal action policy, is utilized. The collision avoidance system is then tested assuming a $90^{\circ}$ intersection encounter situation and yields a satisfactory result.
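
Purely as an illustration of how such an MDP might be encoded (not the authors' definitions), the skeleton below sketches a state built from own-ship/target-ship relative geometry and a reward combining a safety penalty with an economy penalty; all variable names and weights are placeholders.

```python
# Illustrative environment skeleton for a ship collision avoidance MDP.
import math

class CollisionAvoidanceEnv:
    def __init__(self, safe_distance=0.5, w_safety=1.0, w_economy=0.1):
        self.safe_distance = safe_distance
        self.w_safety = w_safety
        self.w_economy = w_economy

    def reward(self, distance_to_target_ship, offset_from_course):
        # penalise closing below the safe distance and deviating from the course
        safety = -self.w_safety if distance_to_target_ship < self.safe_distance else 0.0
        economy = -self.w_economy * abs(offset_from_course)
        return safety + economy

    def state(self, own, target):
        # range, relative bearing and speed ratio as a simple continuous state
        dx, dy = target["x"] - own["x"], target["y"] - own["y"]
        return (math.hypot(dx, dy),
                math.atan2(dy, dx) - own["heading"],
                target["speed"] / max(own["speed"], 1e-6))
```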

Using Topological Properties of Complex Networks for analysis of the efficiency of MDP-based learning (복잡계의 위상특성을 이용한 MDP 학습의 효율 분석)

  • Yi, Seung-Joon;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference / 2006.06b / pp.232-234 / 2006
  • To identify measures of how efficiently a Markov decision problem (MDP) can be solved, this paper represents an MDP as a graph from the viewpoint of complex networks, measures the topological properties of that graph using various network measurements, and analyzes their relationship to the efficiency of solving the MDP. Many real-world problems can be expressed as MDPs and solved with value iteration when the model is known, or with reinforcement learning algorithms when the model is unknown, but the high time complexity of these algorithms makes them difficult to apply to large real-world problems. To address this, temporal abstraction methods have been proposed, such as decomposing the MDP hierarchically or executing several steps as a single unit. Noting that introducing temporal abstraction transforms an MDP into a form that can be solved more efficiently, we analyze the solution efficiency of an MDP based on topological properties that can be measured with network measures. Analyzing the relationship between network measures and solution efficiency over MDPs with a variety of structures and parameters, we found that among these measures, the mean geodesic distance is the most important criterion determining how efficiently an MDP can be solved.

Operating Room Reservation Problem Considering Patient Priority : Modified Value Iteration Method with Binary Search (환자 우선순위를 고려한 수술실 예약 : 이진검색을 활용한 수정 평가치반복법)

  • Min, Dai-Ki
    • IE interfaces / v.24 no.4 / pp.274-280 / 2011
  • Delayed access to surgery may lead to deterioration in the patient's condition, poor clinical outcomes, an increased probability of emergency admission, or even death. The purpose of this work is to decide the number of patients selected from a waiting list and to schedule them in accordance with the operating room capacity in the next period. We formulate the problem as an infinite-horizon Markov Decision Process (MDP) that attempts to strike a balance between patient waiting times and overtime work. Structural properties of the proposed model are investigated to facilitate the solution procedure. The proposed procedure modifies the conventional value iteration method along with a binary search technique. An example of the optimal policy is provided, and computational results are given to show that the proposed procedure improves computational efficiency.
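
For reference, plain value iteration for an infinite-horizon discounted MDP is sketched below; the binary-search modification the paper proposes is not reproduced, and the transition structure `P[s][a]` as a list of (probability, next_state, reward) triples is an assumption of this sketch.

```python
# Plain value iteration over a dictionary-encoded MDP.
# P maps state -> action -> list of (probability, next_state, reward);
# every next_state is assumed to appear as a key of P.
def value_iteration(P, gamma=0.95, tol=1e-6):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s, actions in P.items():
            # Gauss-Seidel style update: use the latest values already computed
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                       for outcomes in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```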

Efficient Approximation of State Space for Reinforcement Learning Using Complex Network Models (복잡계망 모델을 사용한 강화 학습 상태 공간의 효율적인 근사)

  • Yi, Seung-Joon;Eom, Jae-Hong;Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.36 no.6 / pp.479-490 / 2009
  • A number of temporal abstraction approaches have been suggested to handle the high computational complexity of Markov decision problems (MDPs). Although the structure of a temporal abstraction can significantly affect the efficiency of solving the MDP, to our knowledge none of the current temporal abstraction approaches explicitly considers the relationship between topology and efficiency. In this paper, we first show that a topological measurement from the complex network literature, the mean geodesic distance, can reflect the efficiency of solving an MDP. Based on this, we propose an incremental method that systematically builds temporal abstractions using a network model that guarantees a small mean geodesic distance. We test our algorithm on a realistic 3D game environment, and experimental results show that our model exhibits subpolynomial growth of the mean geodesic distance with problem size, which enables efficient solving of the resulting MDP.
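
A minimal way to see the connection the abstract draws is that temporally abstract actions act like shortcut edges in the state graph, which lowers its mean geodesic distance; the toy example below (using networkx, with an arbitrary chain graph and shortcuts) illustrates this and is not the paper's construction method.

```python
# Adding shortcut edges (stand-ins for temporally abstract actions) to a chain
# of states lowers the graph's mean geodesic distance.
import networkx as nx

G = nx.path_graph(50)                      # a long chain of states
before = nx.average_shortest_path_length(G)

G.add_edge(0, 25)                          # hypothetical options as shortcut edges
G.add_edge(25, 49)
after = nx.average_shortest_path_length(G)

print(f"mean geodesic distance: {before:.2f} -> {after:.2f}")
```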