• Title/Abstract/Keywords: Partially Observable Markov Decision Process (POMDP)


A Localized Adaptive QoS Routing Scheme Using POMDP and Exploration Bonus Techniques

  • 한정수
    • 한국통신학회논문지 / Vol.31 No.3B / pp.175-182 / 2006
  • In this paper, we propose a method that uses POMDP (Partially Observable Markov Decision Processes) and an Exploration Bonus technique for localized adaptive QoS routing. Because finding the optimal action for a POMDP with Dynamic Programming is computationally very complex and difficult, we simplify the problem by using expected values through the CEA (Certainty Equivalency Approximation) technique, and we use the Exploration Bonus approach to search for paths better than the current one. To this end, we propose a multi-path exploration algorithm (SEMA). Furthermore, we use the performance parameters $\phi$ and k to define the frequency and interval of exploration, and we examine how the service success rate and the average hop count of successful requests change with the number of explorations. The results show that as $\phi$ increases, paths better than the current one are found, and as k increases, exploration increases.
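A minimal sketch of the exploration-bonus idea described above, assuming a simple visit-count bonus weighted by $\phi$; the function name, the bonus formula, and the data structures are illustrative assumptions, not the SEMA algorithm from the paper.

```python
import math

def select_path(paths, value, visits, t, phi=1.0):
    """Pick a path by estimated value plus an exploration bonus.

    value[p]  : certainty-equivalent estimate of path p's value
    visits[p] : how many times path p has been tried so far
    t         : current decision step
    phi       : weight of the exploration bonus
    """
    def score(p):
        # Rarely tried paths get a larger bonus, so alternatives to the
        # currently preferred path are re-explored from time to time.
        bonus = phi * math.sqrt(math.log(t + 1) / (visits[p] + 1))
        return value[p] + bonus

    return max(paths, key=score)
```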

Partially Observable Markov Decision Processes (POMDPs) and Wireless Body Area Networks (WBAN): A Survey

  • Mohammed, Yahaya Onimisi;Baroudi, Uthman A.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.7 No.5 / pp.1036-1057 / 2013
  • Wireless body area network (WBAN) is a promising candidate for future health monitoring systems. Nevertheless, the path to mature solutions still faces many challenges that need to be overcome. Energy-efficient scheduling is one of these challenges, given the scarcity of available energy in biosensors and the lack of portability. Therefore, researchers from academia, industry and the health sector are working together to realize practical solutions for these challenges. The main difficulty in WBAN is the uncertainty in the state of the monitored system. Intelligent learning approaches such as the Markov Decision Process (MDP) were proposed to tackle this issue. A Markov Decision Process (MDP) is a form of Markov chain in which the transition matrix depends on the action taken by the decision maker (agent) at each time step. The agent receives a reward, which depends on the action and the state. The goal is to find a function, called a policy, which specifies which action to take in each state so as to maximize some utility function (e.g., the mean or expected discounted sum) of the sequence of rewards. A Partially Observable Markov Decision Process (POMDP) is a generalization of the Markov decision process that allows for incomplete information regarding the state of the system: the state is not directly visible to the agent. This has many applications in operations research and artificial intelligence. This uncertainty, due to incomplete knowledge of the system, makes formulating and solving POMDP models mathematically complex and computationally expensive, and limited progress has been made in applying POMDPs to real applications. In this paper, we survey the existing methods and algorithms for solving POMDPs in the general domain and in particular in wireless body area networks (WBAN). In addition, the paper discusses recent real implementations of POMDPs on practical WBAN problems. We believe that this work will provide valuable insights for newcomers who would like to pursue related research in the domain of WBAN.
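Since the abstract describes the MDP/POMDP setting only verbally, the standard textbook formulation may help: the agent maintains a belief $b$ over hidden states and seeks a policy that maximizes the expected discounted reward. The notation below (transition model $T$, observation model $Z$, reward $R$, discount $\gamma$) is the conventional one, not taken from the survey itself.

$$ b'(s') = \frac{Z(o \mid s', a)\,\sum_{s \in S} T(s' \mid s, a)\, b(s)}{\sum_{s'' \in S} Z(o \mid s'', a)\,\sum_{s \in S} T(s'' \mid s, a)\, b(s)}, \qquad \pi^{*} = \arg\max_{\pi}\; \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t) \,\Big|\, \pi, b_0\Big] $$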

System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • 대한산업공학회지 / Vol.16 No.2 / pp.1-9 / 1990
  • The control of deterioration processes for which only incomplete state information is available is examined in this study. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which eliminate the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and for which the only possible actions are to replace the system or to leave it alone. The goal of this research is to develop a new jump algorithm with the potential to solve system problems dealing with continuous-state-space Markov chains.
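As an illustration of the two-state setting described above (not the paper's jump algorithm), a replacement policy can be sketched as a Bayesian belief update over {good, bad} followed by a threshold rule; the function names, the observation model, and the threshold value are hypothetical.

```python
def update_belief(b_bad, p_deteriorate, obs_likelihood, obs):
    """One belief-update step for a two-state (good/bad) deteriorating system.

    b_bad          : current probability that the system is in the bad state
    p_deteriorate  : chance a good system degrades to bad in one period
    obs_likelihood : dict mapping (obs, state) -> P(obs | state)
    obs            : the (noisy) observation received this period
    """
    # Predict: only good -> bad transitions are possible (no self-repair).
    prior_bad = b_bad + (1.0 - b_bad) * p_deteriorate
    # Correct with the observation via Bayes' rule.
    num_bad = obs_likelihood[(obs, "bad")] * prior_bad
    num_good = obs_likelihood[(obs, "good")] * (1.0 - prior_bad)
    return num_bad / (num_bad + num_good)

def action(b_bad, threshold=0.6):
    """Replace when the belief of being in the bad state exceeds a threshold."""
    return "replace" if b_bad > threshold else "leave alone"
```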


Throughput Maximization for a Primary User with Cognitive Radio and Energy Harvesting Functions

  • Nguyen, Thanh-Tung;Koo, Insoo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.8 No.9 / pp.3075-3093 / 2014
  • In this paper, we consider an advanced wireless user, called a primary-secondary user (PSU), which is capable of harvesting renewable energy and connecting to both the primary network and cognitive radio networks simultaneously. Recently, energy harvesting has received a great deal of attention from the research community and is a promising approach for maintaining a long lifetime of users. On the other hand, the cognitive radio function allows the wireless user to access other primary networks in an opportunistic manner as a secondary user in order to obtain more throughput in the current time slot. Accordingly, we propose a channel access policy for a PSU that takes energy harvesting into account, based on a Partially Observable Markov Decision Process (POMDP) in which the optimal action from the action set is selected to maximize the expected long-term throughput. The simulation results show that the proposed POMDP-based channel access scheme improves the throughput of the PSU, but it requires more computation to make an action decision regarding channel access.
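A one-step (myopic) version of the decision described above can be sketched as follows; the actual scheme optimizes the expected long-term throughput through a POMDP, and the action names, the energy model, and the parameters here are illustrative assumptions only.

```python
def choose_action(belief_idle, battery, e_tx, r_primary, r_secondary):
    """Myopic action choice for a PSU with energy harvesting.

    belief_idle : believed probability that the opportunistic channel is idle
    battery     : currently stored energy
    e_tx        : energy needed to transmit in one slot
    r_primary   : expected throughput on the PSU's own primary channel
    r_secondary : throughput if the opportunistic channel is idle and used
    """
    if battery < e_tx:
        return "harvest"                                 # not enough energy to transmit
    expected = {
        "use_primary": r_primary,
        "use_secondary": belief_idle * r_secondary,      # succeeds only if the channel is idle
        "harvest": 0.0,                                  # defer throughput, save energy
    }
    return max(expected, key=expected.get)
```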

An Improved DSA Strategy based on Triple-States Reward Function

  • 타사미아;구준롱;장성진;김재명
    • 대한전자공학회논문지TC / Vol.47 No.11 / pp.59-68 / 2010
  • This paper presents a new method that performs more complete DSA (Dynamic Spectrum Access) by modifying the reward function. POMDP (Partially Observable Markov Decision Process) is an algorithm used to predict future spectrum states, and the reward function is the most important part of this spectrum prediction. However, because the reward function has only two states, Busy and Idle, it returns Busy whenever a collision occurs on a channel, which degrades the performance of secondary users. In this paper, the existing Busy state is therefore split into two states, Busy and Collision, and the added Collision state improves the channel access opportunities of secondary users and thereby increases the data rate. We also analyze the belief vector of the new algorithm mathematically. Finally, simulation results verify the performance of the improved reward function and show that the new algorithm can improve the performance of secondary users in CR networks.
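A minimal sketch of the triple-state idea, assuming a simple ACK-based classification of each slot; the reward values and the classification rule are placeholders, not the values used in the paper.

```python
# Illustrative reward mapping for the triple-state idea: a failed transmission
# on a channel believed to be idle is scored as "Collision" rather than folded
# into "Busy", so the secondary user does not over-penalize that channel.
REWARD = {
    "Idle": 1.0,        # successful opportunistic transmission
    "Collision": -0.2,  # SU collided (channel was usable, timing was not)
    "Busy": -1.0,       # PU occupied the channel for the whole slot
}

def observe_state(sensed_busy, tx_attempted, ack_received):
    """Classify the slot outcome into one of the three reward states."""
    if sensed_busy:
        return "Busy"
    if tx_attempted and not ack_received:
        return "Collision"
    return "Idle"
```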

POMDP-based Human-Robot Interaction Behavior Model

  • 김종철
    • 제어로봇시스템학회논문지 / Vol.20 No.6 / pp.599-605 / 2014
  • This paper presents an interactive behavior modeling method based on POMDP (Partially Observable Markov Decision Process) for HRI (Human-Robot Interaction). HRI resembles conversational interaction in that it involves interaction between a human and a robot, and the POMDP has been widely used in conversational interaction systems because it can efficiently handle the uncertainty of observable variables in such systems. In the proposed conversational POMDP-based HRI system, the input variables are sensor readings and the log of services used, and the output variables are the names of robot behaviors, where a robot behavior denotes the motion produced through the LED, LCD, motors and sound. The suggested system was applied to the emotional robot KIBOT. Human-KIBOT interaction results show that the system produces flexible robot behavior in the real world.

Labeling Q-Learning for Maze Problems with Partially Observable States

  • Lee, Hae-Yeon;Hiroyuki Kamaya;Kenich Abe
    • 제어로봇시스템학회:학술대회논문집 / Proceedings of the 15th Annual Conference / pp.489-489 / 2000
  • Recently, Reinforcement Learning (RL) methods have been used for learning problems in Partially Observable Markov Decision Process (POMDP) environments. Conventional RL methods, however, have limited applicability to POMDPs. To overcome the partial observability, several algorithms were proposed [5], [7]. The aim of this paper is to extend our previous algorithm for POMDPs, called Labeling Q-learning (LQ-learning), which reinforces incomplete perceptual information with labeling. Namely, in LQ-learning the agent perceives the current state as a pair of an observation and its label, and can thus distinguish states that look the same more exactly. Labeling is carried out by a hash-like function, which we call the Labeling Function (LF). Numerous labeling functions can be considered, but in this paper we introduce several labeling functions based on only the 2 or 3 immediately preceding observations. We briefly introduce the basic idea of LQ-learning, apply it to maze problems, which are simple POMDP environments, and show its effectiveness with empirical results that compare favorably with conventional RL algorithms.
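A compact sketch of the mechanism the abstract describes, assuming a tabular Q-learning agent whose state is the (observation, label) pair; the particular labeling function (a hash of the last few observations) and all hyperparameters are illustrative, not taken from the paper.

```python
from collections import defaultdict, deque

class LQLearningAgent:
    """Labeling Q-learning sketch: the Q-table is keyed by (observation, label),
    where the label is a hash-like function of the recent observation history."""

    def __init__(self, n_actions, history=3, alpha=0.1, gamma=0.95):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.recent = deque(maxlen=history)   # immediate past observations
        self.alpha, self.gamma = alpha, gamma

    def label(self):
        # One possible Labeling Function (LF): hash the recent observation
        # window into a small set of labels. The paper explores several LFs.
        return hash(tuple(self.recent)) % 16

    def perceive(self, obs):
        self.recent.append(obs)
        return (obs, self.label())

    def update(self, state, action, reward, next_state):
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```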


A Case Study on Modeling Computer Generated Forces based on Factored POMDPs

  • 이강훈;임희진;김기응
    • 한국정보과학회:학술대회논문집 / Proceedings of the 2012 Korea Computer Congress, Vol.39 No.1(B) / pp.333-335 / 2012
  • Modeling the autonomous behavior of computer generated forces (CGFs) is a key factor that determines the performance of a battlefield simulation system. The POMDP (partially observable Markov decision process) model, which enables optimal decision making by probabilistically accounting for uncertain situations, is a very natural framework for modeling the autonomous behavior of CGFs. However, the difficulty of computing optimal action policies, caused by the high computational complexity of POMDP models, hinders the use of POMDPs for CGF behavior modeling. In this paper, we use a factored POMDP model to model the autonomous behavior of large-scale CGFs, and we confirm its effectiveness through a "Hasty Defense" case study.

Two-Dimensional POMDP-Based Opportunistic Spectrum Access in Time-Varying Environment with Fading Channels

  • Wang, Yumeng;Xu, Yuhua;Shen, Liang;Xu, Chenglong;Cheng, Yunpeng
    • Journal of Communications and Networks / Vol.16 No.2 / pp.217-226 / 2014
  • In this research, we study the problem of opportunistic spectrum access (OSA) in a time-varying environment with fading channels, where the channel state is characterized by both channel quality and the occupancy of primary users (PUs). First, a finite-state Markov channel model is introduced to represent a fading channel. Second, by jointly probing channel quality and exploring the activities of PUs, a two-dimensional partially observable Markov decision process framework is proposed for OSA. In addition, a greedy strategy is designed, in which a secondary user (SU) selects the channel with the best expected data transmission rate to maximize the instantaneous reward in the current slot. Compared with the optimal strategy that considers future reward, the greedy strategy brings low complexity and reasonably good performance. Meanwhile, the spectrum sensing error that causes collisions between a PU and an SU is also discussed. Furthermore, we analyze the multiuser situation in which the proposed single-user strategy is adopted by every SU and compare it with the previous approach. The simulation results show that the proposed strategy attains a larger throughput than previous works under various parameter configurations.
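The greedy choice described above can be sketched as follows, assuming beliefs over both PU occupancy and the finite channel-quality states are already available; the data-structure layout and names are assumptions for illustration.

```python
def greedy_channel(channels, p_idle, p_quality, rate_of_quality):
    """Greedy OSA channel choice: pick the channel with the highest expected
    data rate this slot, combining the belief that the PU is absent with the
    belief over fading-channel quality states.

    p_idle[c]          : believed probability that channel c is unoccupied by a PU
    p_quality[c][q]    : believed probability that channel c is in quality state q
    rate_of_quality[q] : achievable rate in quality state q
    """
    def expected_rate(c):
        mean_rate = sum(p_quality[c][q] * rate_of_quality[q]
                        for q in rate_of_quality)
        return p_idle[c] * mean_rate

    return max(channels, key=expected_rate)
```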

Point-Based Value Iteration for Constrained POMDPs

  • 김동호;이재송;김기응
    • 한국정보과학회:학술대회논문집 / Proceedings of the 2011 Korea Computer Congress, Vol.38 No.1(A) / pp.286-289 / 2011
  • A Constrained Partially Observable Markov Decision Process (CPOMDP) is a model that extends the standard partially observable Markov decision process (POMDP) so that a policy optimizes the value function while satisfying constraints. Because a CPOMDP can naturally model problems with limited resources or multiple objective functions, it is more practical than the standard POMDP. In this paper, we propose exact and approximate dynamic programming algorithms that compute stochastic optimal and approximately optimal policies for CPOMDPs. The exact algorithm must solve a minimax quadratically constrained program at every step of dynamic programming, whereas the approximate algorithm uses point-based value updates that require only linear programs. Experimental results show that stochastic policies perform better than deterministic policies and that the approximate algorithm reduces computation time.
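For context, the core point-based backup that such algorithms build on (shown here for a plain, unconstrained POMDP) can be sketched as below; the constrained variant in the paper additionally solves a linear program per belief point, which this sketch omits, and all names and array shapes are illustrative.

```python
import numpy as np

def point_based_backup(b, alpha_vectors, T, Z, R, gamma, actions, observations):
    """One point-based value backup at belief point b.

    alpha_vectors : list of np.ndarray over states (current value function)
    T[a]          : |S| x |S| transition matrix for action a
    Z[a]          : |S| x |O| observation matrix for action a
    R[a]          : |S| reward vector for action a
    """
    best = None
    for a in actions:
        alpha_a = R[a].copy()
        for o in observations:
            # For each observation, pick the alpha-vector that is best at the
            # belief reached by taking action a and then observing o.
            gammas = [gamma * T[a] @ (Z[a][:, o] * alpha) for alpha in alpha_vectors]
            alpha_a += max(gammas, key=lambda g: float(b @ g))
        if best is None or float(b @ alpha_a) > float(b @ best):
            best = alpha_a
    return best
```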