• Title/Abstract/Keyword: partially observable Markov decision process

Search results: 20 items

System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • 대한산업공학회지 / Vol. 16, No. 2 / pp. 1-9 / 1990
  • The control of deterioration processes for which only incomplete state information is available is examined in this study. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which drop the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and for which the only possible actions are to replace the system or to leave it alone. The goal of this research is to develop a new jump algorithm with the potential for solving system problems involving continuous-state-space Markov chains.
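
To make the model concrete, here is a minimal sketch (not the paper's jump algorithm) of a two-state replace-or-leave-alone POMDP: a belief of being in the deteriorated state is propagated and Bayes-updated from noisy observations, and the system is replaced once the belief crosses a threshold. All probabilities, and the threshold itself, are invented for illustration.

```python
# Minimal two-state POMDP replacement sketch (illustrative values only).
# State 0 = good, state 1 = deteriorated; actions: "leave alone" or "replace".

P_DETERIORATE = 0.1          # P(good -> deteriorated) per period (assumed)
P_OBS_BAD_GIVEN_GOOD = 0.2   # noisy observation model (assumed)
P_OBS_BAD_GIVEN_DET = 0.8
REPLACE_THRESHOLD = 0.6      # replace when P(deteriorated) exceeds this

def predict(b_det):
    """Propagate the deterioration belief one period under 'leave alone'."""
    return b_det + (1.0 - b_det) * P_DETERIORATE

def update(b_det, obs_bad):
    """Bayes update of the deterioration belief after a noisy observation."""
    like_det = P_OBS_BAD_GIVEN_DET if obs_bad else 1 - P_OBS_BAD_GIVEN_DET
    like_good = P_OBS_BAD_GIVEN_GOOD if obs_bad else 1 - P_OBS_BAD_GIVEN_GOOD
    num = like_det * b_det
    return num / (num + like_good * (1.0 - b_det))

b = 0.0  # a fresh system is known to be good
for obs in [False, False, True, True]:   # a hypothetical observation stream
    b = update(predict(b), obs)
    action = "replace" if b > REPLACE_THRESHOLD else "leave alone"
    print(f"P(deteriorated) = {b:.3f} -> {action}")
    if action == "replace":
        b = 0.0  # replacement renews the system
```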

Partially Observable Markov Decision Processes (POMDPs) and Wireless Body Area Networks (WBAN): A Survey

  • Mohammed, Yahaya Onimisi;Baroudi, Uthman A.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 5 / pp. 1036-1057 / 2013
  • Wireless body area network (WBAN) is a promising candidate for future health monitoring systems. Nevertheless, the path to mature solutions still faces many challenges that need to be overcome. Energy-efficient scheduling is one of these challenges, given the scarcity of available energy in biosensors and their limited portability. Therefore, researchers from academia, industry, and the health sector are working together to realize practical solutions to these challenges. The main difficulty in WBAN is the uncertainty in the state of the monitored system. Intelligent learning approaches such as the Markov Decision Process (MDP) have been proposed to tackle this issue. An MDP is a form of Markov chain in which the transition matrix depends on the action taken by the decision maker (agent) at each time step. The agent receives a reward, which depends on the action and the state. The goal is to find a function, called a policy, which specifies which action to take in each state so as to maximize some utility function (e.g., the mean or expected discounted sum) of the sequence of rewards. A Partially Observable Markov Decision Process (POMDP) is a generalization of the MDP that allows for incomplete information regarding the state of the system: the state is not directly visible to the agent. This has many applications in operations research and artificial intelligence. Because of this incomplete knowledge, formulating and solving POMDP models is mathematically complex and computationally expensive, and limited progress has been made in applying POMDPs to real applications. In this paper, we survey the existing methods and algorithms for solving POMDPs in the general domain and, in particular, in WBANs. In addition, we discuss recent real implementations of POMDPs on practical WBAN problems. We believe this work will provide valuable insights for newcomers who would like to pursue related research in the WBAN domain.
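
Since the survey's definition of a POMDP turns on the agent tracking a hidden state, the following minimal sketch of the standard belief (Bayes-filter) update may help the newcomers the abstract addresses. The two-state transition and observation matrices are invented and not specific to WBAN.

```python
import numpy as np

# Generic POMDP belief update: b'(s') is proportional to
# O[s', z] * sum_s T[a][s][s'] * b(s). Model values are illustrative only.

T = np.array([[[0.9, 0.1],    # action 0: row = current state, col = next state
               [0.0, 1.0]],
              [[1.0, 0.0],    # action 1 resets the system to state 0
               [1.0, 0.0]]])
O = np.array([[0.8, 0.2],     # O[state, observation]
              [0.3, 0.7]])

def belief_update(b, action, obs):
    predicted = b @ T[action]          # prediction through the dynamics
    posterior = predicted * O[:, obs]  # weight by observation likelihood
    return posterior / posterior.sum()

b = np.array([1.0, 0.0])               # start certain of state 0
for a, z in [(0, 0), (0, 1), (1, 0)]:  # hypothetical action/observation pairs
    b = belief_update(b, a, z)
    print(f"action={a}, obs={z}, belief={np.round(b, 3)}")
```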

Partially Observable Markov Decision Process with Lagged Information over Infinite Horizon

  • Jeong, Byong-Ho;Kim, Soung-Hie
    • 한국경영과학회지 / Vol. 16, No. 1 / pp. 135-146 / 1991
  • This paper presents an infinite-horizon model of a Partially Observable Markov Decision Process with lagged information, where the lagged information is an uncertain, delayed observation of the process under control. Even though an optimal policy for the model exists, finding it is very time consuming. The aim of this study is therefore to find an ε-optimal stationary policy minimizing the expected discounted total cost of the model. The ε-optimal policy is found using a modified version of the well-known policy iteration algorithm, with the modification focused on the value determination routine. Some properties of the approximation functions for the expected discounted cost of a stationary policy are presented, and the expected discounted cost of a stationary policy is approximated based on these properties. A numerical example is also given.
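
For readers unfamiliar with the algorithm being modified, the sketch below shows plain policy iteration on a small, fully observed discounted-cost chain, with the value-determination step done by successive approximation up to a tolerance — the general flavor of an ε-optimal scheme. It is not the paper's lagged-information routine, and all transition probabilities and costs are invented.

```python
import numpy as np

# Policy iteration with approximate value determination (illustrative model).
BETA = 0.9            # discount factor
EPS = 1e-3            # target accuracy of the value-determination step
P = np.array([[[0.7, 0.3], [0.7, 0.3]],   # P[a][s][s'] (assumed numbers)
              [[0.4, 0.6], [0.9, 0.1]]])
C = np.array([[1.0, 4.0],                 # C[a][s]: per-stage cost
              [2.0, 0.5]])

def evaluate(policy):
    """Approximate value determination by successive approximation."""
    v = np.zeros(2)
    while True:
        v_new = np.array([C[policy[s]][s] + BETA * P[policy[s]][s] @ v
                          for s in range(2)])
        # stop once successive iterates agree to a tolerance tied to EPS
        if np.max(np.abs(v_new - v)) < EPS * (1 - BETA) / (2 * BETA):
            return v_new
        v = v_new

policy = np.zeros(2, dtype=int)
while True:
    v = evaluate(policy)                   # (approximate) value determination
    improved = np.array([np.argmin([C[a][s] + BETA * P[a][s] @ v
                                    for a in range(2)]) for s in range(2)])
    if np.array_equal(improved, policy):   # policy is stable: stop
        break
    policy = improved
print("epsilon-optimal policy:", policy, "values:", np.round(v, 3))
```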

Optimal maintenance procedure for multi-state deteriorated system with incomplete monitoring

  • Jin, L.;Suzuki, K.
    • International Journal of Reliability and Applications / Vol. 11, No. 2 / pp. 69-87 / 2010
  • The optimal replacement problem was investigated for a multi-state deteriorating system whose true internal state cannot be observed directly except when the system breaks down completely. The internal state was assumed to be monitored incompletely by a monitor that gives information related to the true state of the system. The problem was formulated as a partially observable Markov decision process. Under some assumptions, the optimal procedure was found to be monotone with respect to the stochastically increasing ordering of the state probability vectors. Restricting attention to monotone procedures greatly reduces the tremendous amount of calculation time required to find the optimal procedure.
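
Monotonicity here is with respect to a stochastic ordering of state probability vectors: if one belief puts more mass on the worse states than another, a monotone procedure that replaces at the better belief must also replace at the worse one. A small sketch of the dominance check, with hypothetical three-state beliefs:

```python
import numpy as np

def stochastically_dominates(p, q):
    """First-order stochastic dominance for state probability vectors whose
    states are ordered best -> worst: p is 'more deteriorated' than q if its
    tail sums (mass on the worse states) are all at least as large."""
    p_tail = np.cumsum(p[::-1])[::-1]
    q_tail = np.cumsum(q[::-1])[::-1]
    return bool(np.all(p_tail >= q_tail - 1e-12))

# Hypothetical beliefs over 3 deterioration levels (good, worn, failed):
b_mild = np.array([0.7, 0.2, 0.1])
b_bad = np.array([0.2, 0.3, 0.5])
print(stochastically_dominates(b_bad, b_mild))   # True: b_bad is worse
# A monotone procedure then guarantees: if it replaces at b_mild,
# it must also replace at the dominated belief b_bad.
```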

An Improved DSA Strategy Based on a Triple-State Reward Function (Triple-state 보상 함수를 기반으로 한 개선된 DSA 기법)

  • 타사미아;구준롱;장성진;김재명
    • 대한전자공학회논문지TC / Vol. 47, No. 11 / pp. 59-68 / 2010
  • This paper presents a new method that performs DSA (Dynamic Spectrum Access) more reliably by modifying the reward function. POMDP (Partially Observable Markov Decision Process) is an algorithm used to predict future spectrum states, and its reward function is the most important element of that prediction. However, because the reward function has only the two states Busy and Idle, a collision on a channel causes the reward function to return Busy, which degrades the secondary user's performance. This paper therefore divides the conventional Busy state into two states, Busy and Collision; the added Collision state improves the secondary user's channel access opportunities and thereby increases the data rate. We also analyze the belief vector of the new algorithm mathematically. Finally, simulation results verify the performance of the improved reward function and show that the new algorithm improves secondary-user performance in CR (cognitive radio) networks.
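
As a rough illustration of the triple-state idea (rewards and state encoding are invented, not taken from the paper), the reward function below distinguishes a channel genuinely held by a primary user from one where the secondary user's own transmission collided, so a collision is no longer scored as Busy:

```python
# Illustrative triple-state reward for POMDP-based DSA (assumed values).
IDLE, BUSY, COLLISION = range(3)

def reward(channel_state, transmitted):
    """Reward observed by the secondary user after a slot."""
    if not transmitted:
        return 0.0
    if channel_state == IDLE:
        return 1.0        # successful secondary transmission
    if channel_state == COLLISION:
        return -0.5       # collision: the channel is not inherently occupied,
                          # so the belief should not be pushed toward Busy
    return -1.0           # BUSY: a primary user holds the channel

# With only Busy/Idle, a collision would be scored like BUSY (-1.0) and the
# belief update would wrongly suppress future access to a usable channel.
print(reward(COLLISION, True), reward(BUSY, True))
```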

Two-Dimensional POMDP-Based Opportunistic Spectrum Access in Time-Varying Environment with Fading Channels

  • Wang, Yumeng;Xu, Yuhua;Shen, Liang;Xu, Chenglong;Cheng, Yunpeng
    • Journal of Communications and Networks / Vol. 16, No. 2 / pp. 217-226 / 2014
  • In this research, we study the problem of opportunistic spectrum access (OSA) in a time-varying environment with fading channels, where the channel state is characterized by both channel quality and the occupancy of primary users (PUs). First, a finite-state Markov channel model is introduced to represent a fading channel. Second, by jointly probing channel quality and exploring the activities of PUs, a two-dimensional partially observable Markov decision process framework is proposed for OSA. In addition, a greedy strategy is designed, in which a secondary user (SU) selects the channel with the best expected data transmission rate to maximize the instantaneous reward in the current slot. Compared with the optimal strategy, which accounts for future reward, the greedy strategy brings low complexity and fairly good performance. The spectrum sensing error that causes collisions between a PU and an SU is also discussed. Furthermore, we analyze the multiuser situation in which every SU adopts the proposed single-user strategy. Simulation results show that the proposed strategy attains a larger throughput than previous works under various parameter configurations.
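
A minimal sketch of the greedy rule described above, under assumed numbers: each channel's belief over its joint (quality, occupancy) states weights the rate achievable in each state, and the secondary user picks the channel with the best expected instantaneous rate.

```python
import numpy as np

# belief[c][k] = P(channel c is in joint state k); rate[c][k] = data rate if
# channel c is accessed while in state k (0 when a PU occupies it). Two
# channels, each with (good/bad quality) x (idle/busy) = 4 joint states.
belief = np.array([[0.35, 0.15, 0.30, 0.20],
                   [0.10, 0.40, 0.25, 0.25]])
rate = np.array([[3.0, 1.0, 0.0, 0.0],      # busy states earn nothing
                 [3.0, 1.0, 0.0, 0.0]])

expected_rate = (belief * rate).sum(axis=1)  # instantaneous reward per channel
chosen = int(np.argmax(expected_rate))       # greedy: ignore future reward
print(f"expected rates = {expected_rate}, access channel {chosen}")
```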

Machine Maintenance Policy Using Partially Observable Markov Decision Process

  • Pak, Pyoung Ki;Kim, Dong Won;Jeong, Byung Ho
    • 품질경영학회지 / Vol. 16, No. 2 / pp. 1-9 / 1988
  • This paper considers a machine maintenance problem in which the machine's condition is only partially known, by observing the machine's output products. The problem is formulated as an infinite-horizon partially observable Markov decision process to find an optimal maintenance policy. However, even though an optimal policy for the model exists, finding it is very time consuming. Thus, the intent of this study is to find an ε-optimal stationary policy minimizing the expected discounted total cost of the system. The ε-optimal policy is found using a modified version of the well-known policy iteration algorithm. A numerical example is also given.
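
The distinctive ingredient here is that the machine's condition is inferred from inspected output. A minimal sketch of that inference step, with invented defect probabilities, shows the belief that would feed the maintenance policy:

```python
# Inferring a machine's hidden condition from inspected output (assumed model).
# Conditions: 0 = in control, 1 = worn; observations: good or defective item.
P_DEFECT = [0.05, 0.40]   # P(defective item | condition), hypothetical

def condition_belief(b_worn, item_defective):
    """Bayes update of P(worn) after inspecting one output item."""
    like_worn = P_DEFECT[1] if item_defective else 1 - P_DEFECT[1]
    like_ok = P_DEFECT[0] if item_defective else 1 - P_DEFECT[0]
    num = like_worn * b_worn
    return num / (num + like_ok * (1 - b_worn))

b = 0.1
for defect in [False, True, True]:       # a hypothetical inspection stream
    b = condition_belief(b, defect)
print(f"P(worn) after inspections = {b:.3f}")  # input to the maintenance policy
```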

A Localized Adaptive QoS Routing Scheme Using POMDP and Exploration Bonus Techniques (POMDP와 Exploration Bonus를 이용한 지역적이고 적응적인 QoS 라우팅 기법)

  • 한정수
    • 한국통신학회논문지 / Vol. 31, No. 3B / pp. 175-182 / 2006
  • This paper proposes a method that uses POMDP (Partially Observable Markov Decision Processes) and Exploration Bonus techniques for localized adaptive QoS routing. Because computing the optimal action for a POMDP by dynamic programming is very complex and difficult, we simplify the problem by using expected values obtained through the CEA (Certainty Equivalency Approximation) technique, and we use the Exploration Bonus approach to search for paths better than the current one. To this end, we propose a multi-path search algorithm (SEMA). Furthermore, we use the performance parameters φ and k to define the frequency and interval of exploration, and examine the service success rate and the average hop count on success as the amount of exploration varies. The results show that as φ increases, paths better than the current one are found, and that exploration increases as k increases.
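
To illustrate the exploration-bonus idea (this is not SEMA; the bonus form and numbers are invented), each candidate path's estimated value receives a bonus that grows with the time since the path was last tried, so seemingly inferior paths are periodically re-examined:

```python
import math

# Exploration-bonus path choice (illustrative; not the paper's SEMA algorithm).
PHI = 0.5   # bonus weight: larger phi -> more willingness to leave current path

paths = {                     # estimated value and last slot each path was used
    "path_A": {"value": 0.80, "last_used": 9},
    "path_B": {"value": 0.75, "last_used": 2},
    "path_C": {"value": 0.60, "last_used": 0},
}

def score(p, now):
    """Estimated value plus a bonus growing with time since last trial."""
    return p["value"] + PHI * math.sqrt(now - p["last_used"])

now = 10
best = max(paths, key=lambda name: score(paths[name], now))
for name, p in paths.items():
    print(f"{name}: score = {score(p, now):.3f}")
print("selected:", best)   # with a large enough PHI, a stale path is retried
```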

Throughput Maximization for a Primary User with Cognitive Radio and Energy Harvesting Functions

  • Nguyen, Thanh-Tung;Koo, Insoo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 9 / pp. 3075-3093 / 2014
  • In this paper, we consider an advanced wireless user, called a primary-secondary user (PSU), which is capable of harvesting renewable energy and connecting to both the primary network and cognitive radio networks simultaneously. Recently, energy harvesting has received a great deal of attention from the research community and is a promising approach to maintaining a long lifetime for users. The cognitive radio function, in turn, allows a wireless user to access other primary networks opportunistically, as a secondary user, in order to obtain more throughput in the current time slot. Accordingly, we propose a channel access policy for a PSU that takes energy harvesting into account, based on a Partially Observable Markov Decision Process (POMDP) in which the optimal action is selected from the action set so as to maximize the expected long-term throughput. The simulation results show that the proposed POMDP-based channel access scheme improves the throughput of the PSU, but requires more computation to make each channel access decision.
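
As a much-simplified illustration of the PSU's per-slot decision (not the paper's POMDP policy, which optimizes long-term throughput), the sketch below picks the best immediately feasible action given the battery level; the actions, energy costs, and rewards are invented.

```python
# One-slot action choice for an energy-harvesting PSU (illustrative numbers).
ACTIONS = {
    # action: (energy cost, expected immediate throughput)
    "primary_tx": (2.0, 1.0),
    "secondary_sense_tx": (3.0, 1.6),  # more throughput, more energy
    "idle_harvest": (-1.0, 0.0),       # negative cost = net energy gained
}

def choose_action(battery):
    """Pick the feasible action with the best immediate throughput; a full
    POMDP policy would instead maximize expected long-term throughput."""
    feasible = {a: v for a, v in ACTIONS.items() if battery >= v[0]}
    return max(feasible, key=lambda a: feasible[a][1])

for battery in [0.5, 2.5, 4.0]:
    print(f"battery={battery}: {choose_action(battery)}")
```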

Labeling Q-Learning for Maze Problems with Partially Observable States

  • Lee, Hae-Yeon;Kamaya, Hiroyuki;Abe, Kenichi
    • 제어로봇시스템학회 학술대회논문집 / 2000년도 제15차 학술회의논문집 / p. 489 / 2000
  • Recently, Reinforcement Learning (RL) methods have been used for learning problems in Partially Observable Markov Decision Process (POMDP) environments. Conventional RL methods, however, have limited applicability to POMDPs. To overcome the partial observability, several algorithms have been proposed [5], [7]. The aim of this paper is to extend our previous algorithm for POMDPs, called Labeling Q-learning (LQ-learning), which reinforces the incomplete information of perception with labeling. In LQ-learning, the agent perceives the current state as a pair consisting of an observation and its label, which lets the agent distinguish more exactly between states that look the same. Labeling is carried out by a hash-like function, which we call the Labeling Function (LF). Numerous labeling functions can be considered; in this paper we introduce several based only on the two or three immediately preceding observations. We briefly introduce the basic idea of LQ-learning, apply it to maze problems (simple POMDP environments), and demonstrate its effectiveness with empirical results that compare favorably against conventional RL algorithms.
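
A minimal sketch of the labeling idea under stated assumptions: the label is a hash-like function of the two most recent past observations (a hypothetical LF, not one from the paper), and the Q-table is keyed by (observation, label) pairs so that perceptually aliased states can be told apart.

```python
from collections import defaultdict, deque

# Labeling Q-learning sketch: Q is keyed by (observation, label), where the
# label is derived from the 2 most recent past observations (hypothetical LF).
ALPHA, GAMMA = 0.1, 0.95
ACTIONS = ["up", "down", "left", "right"]

Q = defaultdict(float)                 # Q[((obs, label), action)]
history = deque(maxlen=2)              # immediate past observations

def labeling_function(past_obs):
    """Hash-like label from recent observations; disambiguates aliased states."""
    return hash(tuple(past_obs)) % 16

def q_update(obs, action, reward, next_obs):
    label = labeling_function(history)         # label before seeing obs
    history.append(obs)
    next_label = labeling_function(history)    # label after seeing obs
    best_next = max(Q[((next_obs, next_label), a)] for a in ACTIONS)
    key = ((obs, label), action)
    Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])

# Two maze cells that emit the same observation get different labels
# once their approach histories differ:
q_update("corridor", "up", 0.0, "corridor")
q_update("corridor", "up", 1.0, "goal")
print(len(Q), "Q-entries created")
```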
