• Title/Abstract/Keyword: Markov Process


멀티밴드 해양통신망에서 전송주기를 보장하는 최소 비용의 망 선택 기법 (The Minimum-cost Network Selection Scheme to Guarantee the Periodic Transmission Opportunity in the Multi-band Maritime Communication System)

  • 조구민;윤창호;강충구
    • 한국통신학회논문지 / Vol. 36, No. 2A / pp.139-148 / 2011
  • This paper presents a scheme for minimizing the cost of periodically transmitting shipping information in a multi-band maritime communication network: the transmission time is chosen by comparing the transmission cost of the currently available network with the minimum expected average transmission cost achievable within a given maximum allowable delay. The transmission-time and network-selection process is modeled as a Markov Decision Process (MDP), the channel state of each band is modeled as a 2-state Markov chain, and the average transmission cost is computed by stochastic dynamic programming. From this, a minimum-cost network selection policy is derived, and computer simulations show that the proposed scheme substantially reduces network usage cost compared with transmitting at a fixed period.
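The decision rule described above — transmit now if the current network's cost does not exceed the minimum expected cost achievable within the remaining delay budget — can be sketched with a 2-state channel and backward stochastic dynamic programming. All costs and transition probabilities below are illustrative placeholders, not values from the paper:

```python
# Minimum expected transmission cost within a deadline of D slots on a
# 2-state (Good/Bad) Markov channel, via backward stochastic dynamic
# programming. Costs and transition probabilities are illustrative.

P = {                                   # channel transition probabilities
    "good": {"good": 0.8, "bad": 0.2},
    "bad":  {"good": 0.4, "bad": 0.6},
}
COST = {"good": 1.0, "bad": 5.0}        # cost of transmitting in each state

def min_expected_cost(state, slots_left):
    """V_k(s) = min(transmit now, expected cost of waiting one slot)."""
    if slots_left == 0:                 # deadline reached: must transmit
        return COST[state]
    wait = sum(p * min_expected_cost(s2, slots_left - 1)
               for s2, p in P[state].items())
    return min(COST[state], wait)

def decide(state, slots_left):
    """'send' if transmitting now is no worse than waiting."""
    if slots_left == 0:
        return "send"
    wait = sum(p * min_expected_cost(s2, slots_left - 1)
               for s2, p in P[state].items())
    return "send" if COST[state] <= wait else "wait"
```

With these numbers, `decide("good", 3)` transmits immediately while `decide("bad", 3)` defers in the hope of a cheaper slot — the trade-off the paper optimizes.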

System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • 대한산업공학회지 / Vol. 16, No. 2 / pp.1-9 / 1990
  • The control of deterioration processes for which only incomplete state information is available is examined in this study. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which eliminate the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and for which the only possible actions are to replace or to leave alone. The goal of this research is to develop a new jump algorithm which has the potential for solving system problems dealing with continuous-state-space Markov chains.
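For a two-state chain with irreversible deterioration, the POMDP machinery reduces to tracking a single belief — the probability that the system has deteriorated — and replacing once it crosses a threshold. A minimal sketch with illustrative numbers (not taken from the paper):

```python
# Belief tracking for a two-state POMDP (state 0 = good, state 1 =
# deteriorated) in which only deterioration can occur, and the actions
# are "replace" or "leave". All probabilities are illustrative.

P_DETERIORATE = 0.1                       # P(good -> deteriorated) per period
P_OBS = {0: {"ok": 0.9, "alarm": 0.1},    # P(observation | state)
         1: {"ok": 0.3, "alarm": 0.7}}

def predict(b):
    """Prior belief of being deteriorated after one more period."""
    return b + (1.0 - b) * P_DETERIORATE  # deterioration is irreversible

def update(b, obs):
    """Bayes update of the deterioration belief given a noisy observation."""
    b = predict(b)
    num = b * P_OBS[1][obs]
    den = num + (1.0 - b) * P_OBS[0][obs]
    return num / den

def action(b, threshold=0.5):
    """Replace (which would reset the belief) once it crosses a threshold."""
    return "replace" if b >= threshold else "leave"
```

Repeated alarms push the belief up until `action` triggers a replacement, even though the true state is never observed directly.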


예방정비를 고려한 복수 부품 시스템의 신뢰성 분석: 마코프 체인 모형의 응용 (Reliability Analysis of Multi-Component System Considering Preventive Maintenance: Application of Markov Chain Model)

  • 김헌길;김우성
    • 한국신뢰성학회지:신뢰성응용연구 / Vol. 16, No. 4 / pp.313-322 / 2016
  • Purpose: We introduce ways to employ Markov chain models to evaluate the effect of a preventive maintenance process. While preventive maintenance decreases the failure rate of each subsystem, it increases the downtime of the system, because the system cannot work during maintenance. The goal of this paper is to introduce ways to analyze this trade-off. Methods: Markov chain models are employed. We derive the availability of a system consisting of N repairable subsystems under various maintenance policies. Results: To validate our methods, we apply our models to real maintenance data reports of a military truck. The error between the model and the data was about 1%. Conclusion: The models developed in this paper fit real data well. These techniques can be applied to calculate availability under various preventive maintenance policies.
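The availability trade-off above can be sketched with a small discrete-time Markov chain whose stationary distribution gives the long-run fraction of time the system is operational. The states and probabilities below are hypothetical, not from the military-truck data:

```python
# Steady-state availability of a repairable system from a discrete-time
# Markov chain with states UP, CM (corrective maintenance) and PM
# (preventive maintenance). PM lowers the effective failure probability
# but adds scheduled downtime; the numbers are illustrative.

STATES = ["up", "cm", "pm"]
P = [                      # rows: from-state; columns: (up, cm, pm)
    [0.93, 0.02, 0.05],    # up: small failure prob, periodic PM
    [0.50, 0.50, 0.00],    # cm: repair completes with prob 0.5
    [0.80, 0.00, 0.20],    # pm: maintenance completes with prob 0.8
]

def stationary(P, iters=2000):
    """Stationary distribution by power iteration (rows of P sum to 1)."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

pi = stationary(P)
availability = pi[0]       # long-run fraction of time in state "up"
```

Varying the PM entries of the matrix and recomputing `availability` is exactly the kind of policy comparison the paper performs.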

Network Security Situation Assessment Method Based on Markov Game Model

  • Li, Xi;Lu, Yu;Liu, Sen;Nie, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2414-2428 / 2018
  • To address the problem that current network security situation assessment methods focus only on attack behaviors, this paper proposes a network security situation assessment method based on the Markov Decision Process and game theory. The method takes a Markov game model as its core and uses four-level data fusion to evaluate the network security situation. In this process, the Nash equilibrium point of the game is used to determine the impact on network security. Experiments show that the results of this method are basically consistent with expert evaluation data. As the method takes full account of the interaction between attackers and defenders, it is closer to reality and can accurately assess the network security situation.
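At the core of a Markov game assessment is solving, in each network state, a matrix game between attacker and defender; the equilibrium value then feeds the situation score. Below is a closed-form solver for the 2x2 zero-sum case with an illustrative payoff matrix — a simplification, since the paper's model need not be zero-sum:

```python
# Mixed-strategy Nash equilibrium of a 2x2 zero-sum attacker/defender
# matrix game, the per-state building block of a Markov (stochastic)
# game. The payoff matrix used in the example is illustrative.

def solve_zero_sum_2x2(A):
    """Return (row strategy, column strategy, value) for a 2x2 zero-sum
    game; A is the payoff to the maximizing row player (the attacker)."""
    (a, b), (c, d) = A
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:                 # pure-strategy saddle point
        x = 1.0 if min(a, b) >= min(c, d) else 0.0
        y = 1.0 if max(a, c) <= max(b, d) else 0.0
        return (x, 1.0 - x), (y, 1.0 - y), maximin
    den = a - b - c + d
    x = (d - c) / den                      # P(row player plays row 0)
    y = (d - b) / den                      # P(column player plays column 0)
    v = (a * d - b * c) / den              # value of the game
    return (x, 1.0 - x), (y, 1.0 - y), v

# Example: attack options vs. defense options, payoff = damage inflicted.
x, y, v = solve_zero_sum_2x2([[3.0, 0.0], [1.0, 2.0]])
```

Either player deviating unilaterally from the returned mixed strategies cannot improve on the value `v`, which is why the equilibrium point is a stable measure of impact.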

LIMIT THEOREMS FOR MARKOV PROCESSES GENERATED BY ITERATIONS OF RANDOM MAPS

  • Lee, Oe-Sook
    • 대한수학회지 / Vol. 33, No. 4 / pp.983-992 / 1996
  • Let p(x, dy) be a transition probability function on $(S, \rho)$, where S is a complete separable metric space. Then a Markov process $X_n$ which has p(x, dy) as its transition probability may be generated by random iterations of the form $X_{n+1} = f(X_n, \varepsilon_{n+1})$, where $\varepsilon_n$ is a sequence of independent and identically distributed random variables (See, e.g., Kifer(1986), Bhattacharya and Waymire(1990)).
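As a concrete instance of the iteration $X_{n+1} = f(X_n, \varepsilon_{n+1})$, take the affine map f(x, e) = 0.5x + e with i.i.d. standard Gaussian innovations: the chain then has transition probability p(x, dy) = N(0.5x, 1) and a unique stationary law N(0, 4/3), so long-run averages forget the initial state — the kind of limit behavior the paper studies. A simulation sketch:

```python
import random

# Markov process generated by iterating the random map
# X_{k+1} = f(X_k, eps_{k+1}) with f(x, e) = 0.5*x + e and i.i.d.
# Gaussian eps. Stationary distribution: N(0, 1/(1 - 0.25)) = N(0, 4/3).

def iterate(x0, n, seed=0):
    """Run X_{k+1} = 0.5*X_k + eps_{k+1} for n steps from X_0 = x0."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        x = 0.5 * x + rng.gauss(0.0, 1.0)   # f(X_k, eps_{k+1})
    return x

# Independent replicas of X_50, started far from the stationary mean:
draws = [iterate(10.0, 50, seed=s) for s in range(2000)]
sample_mean = sum(draws) / len(draws)                  # near 0
sample_var = sum(x * x for x in draws) / len(draws)    # near 4/3
```

Despite the start at x0 = 10, the empirical mean and variance match the stationary law, because the influence of X_0 decays as 0.5^n.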


Optimal control of stochastic continuous discrete systems applied to FMS

  • Boukas, E.K.
    • 제어로봇시스템학회:학술대회논문집 / Proceedings of the 1989 Korea Automatic Control Conference; Seoul, Korea; 27-28 Oct. 1989 / pp.733-743 / 1989
  • This paper deals with the control of systems with controlled jump Markov disturbances. Such a formulation was used by Boukas to model the production and maintenance planning of an FMS with failure-prone machines. The optimal control problem of systems with a controlled jump Markov process is addressed. This problem describes the production and preventive maintenance planning of production systems. The optimality conditions are derived for both the finite- and infinite-horizon cases. A numerical example is presented to validate the proposed results.
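In its simplest form, the jump Markov disturbance in such an FMS model is a machine alternating between "up" and "down" with exponential holding times. A simulation sketch with illustrative failure/repair rates; the long-run up-fraction should approach mu/(lam + mu) = 5/6 here:

```python
import random

# Two-state continuous-time jump Markov process (machine up/down with
# exponential holding times), the disturbance driving the FMS
# production-planning model. Rates are illustrative.

LAM, MU = 0.1, 0.5   # failure rate, repair rate (per unit time)

def uptime_fraction(horizon, seed=0):
    """Simulate the jump process and return the fraction of 'up' time."""
    rng = random.Random(seed)
    t, up_time, state = 0.0, 0.0, "up"
    while t < horizon:
        rate = LAM if state == "up" else MU
        dwell = rng.expovariate(rate)          # exponential holding time
        dwell = min(dwell, horizon - t)        # clip at the horizon
        if state == "up":
            up_time += dwell
        t += dwell
        state = "down" if state == "up" else "up"
    return up_time / horizon
```

The production plan must hedge against this disturbance: the controller can only influence the rates (e.g. via preventive maintenance), not the jump times themselves.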


경사제 피복재의 유지관리를 위한 추계학적 Markov 확률모형의 개발 (Development of Stochastic Markov Process Model for Maintenance of Armor Units of Rubble-Mound Breakwaters)

  • 이철응
    • 한국해안·해양공학회논문집 / Vol. 25, No. 2 / pp.52-62 / 2013
  • A stochastic Markov process model is developed that can estimate the time-dependent failure probability of the armor units of rubble-mound breakwaters. A mathematical model is formulated by combining a CP/RP analysis of load occurrences with a DP analysis of cumulative damage events, and is applied to the armor units. Transition probabilities are estimated and analyzed using a definition of the armor damage levels together with the MCS technique. The estimated transition probabilities satisfy the constraints that must hold both probabilistically and physically. In addition, the time-dependent failure probability is evaluated for varying return periods and safety factors, which are considered important design variables for armor units, and its behavior is compared and analyzed in detail. In particular, it was possible to quantitatively interpret how the time-dependent failure probability depends on the damage level of the previous stage. Finally, two approaches are presented for determining the repair and reinforcement time, the most important decision in maintenance, and various analyses including an economic analysis are performed.
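The core of such a stochastic Markov deterioration model is a transition matrix over discrete damage levels with an absorbing failure state; the time-dependent failure probability is then simply the probability mass in that state after t load periods. A sketch with hypothetical transition probabilities standing in for the MCS-estimated ones:

```python
# Time-dependent failure probability from a Markov deterioration model
# with discrete damage levels 0..3 (3 = failure, absorbing). The
# transition probabilities are illustrative, not the paper's estimates.

P = [
    [0.90, 0.08, 0.02, 0.00],
    [0.00, 0.85, 0.10, 0.05],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # failure state is absorbing
]

def failure_probability(t, start=0):
    """P(damage level = failure) after t periods from level `start`."""
    dist = [0.0] * 4
    dist[start] = 1.0
    for _ in range(t):
        dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]
    return dist[3]
```

Starting from a higher prior damage level raises the failure probability at every horizon — the dependence on the previous damage state that the paper quantifies — and a repair time can be chosen as the first t at which the probability exceeds a target threshold.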

CHAIN DEPENDENCE AND STATIONARITY TEST FOR TRANSITION PROBABILITIES OF MARKOV CHAIN UNDER LOGISTIC REGRESSION MODEL

  • Sinha Narayan Chandra;Islam M. Ataharul;Ahmed Kazi Saleh
    • Journal of the Korean Statistical Society / Vol. 35, No. 4 / pp.355-376 / 2006
  • To identify whether a sequence of observations follows a chain-dependent process, and whether the chain-dependent or repeated observations follow a stationary process, alternative procedures are suggested in this paper. These test procedures are formulated on the basis of a logistic regression model under the likelihood ratio test criterion and applied to daily rainfall occurrence data for selected stations in Bangladesh. The procedures indicate that the daily rainfall occurrences follow a chain-dependent process, and that the different types of transition probabilities and the overall transition probabilities of the Markov chain for rainfall occurrences follow a stationary process in the Mymensingh and Rajshahi areas and a non-stationary process in the Chittagong, Faridpur and Satkhira areas.
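The stationarity question can be illustrated with a count-based likelihood-ratio test for a two-state (dry/wet) chain: under H0 the two sub-periods share one transition matrix, and twice the log-likelihood gap is asymptotically chi-square with 2 degrees of freedom. This sketch uses transition counts directly rather than the paper's logistic regression parameterization, and the data are synthetic:

```python
import math

# Likelihood-ratio test of stationarity for a two-state Markov chain:
# H0 says two sub-periods share the same transition probabilities.

def transition_counts(seq):
    """2x2 matrix n[i][j] = number of i -> j transitions in seq."""
    n = [[0, 0], [0, 0]]
    for a, b in zip(seq, seq[1:]):
        n[a][b] += 1
    return n

def log_likelihood(n):
    """Maximized Markov-chain log-likelihood given transition counts."""
    ll = 0.0
    for i in (0, 1):
        row = n[i][0] + n[i][1]
        for j in (0, 1):
            if n[i][j] > 0:
                ll += n[i][j] * math.log(n[i][j] / row)
    return ll

def lr_stationarity_stat(seq1, seq2):
    """2*(separate - pooled) log-likelihood; ~ chi-square(2) under H0."""
    n1, n2 = transition_counts(seq1), transition_counts(seq2)
    pooled = [[n1[i][j] + n2[i][j] for j in (0, 1)] for i in (0, 1)]
    return 2.0 * (log_likelihood(n1) + log_likelihood(n2)
                  - log_likelihood(pooled))
```

Comparing the statistic with the chi-square(2) critical value 5.99 at the 5% level decides between stationary and non-stationary transition probabilities, the distinction drawn between the station groups in the abstract.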

Partially Observable Markov Decision Processes (POMDPs) and Wireless Body Area Networks (WBAN): A Survey

  • Mohammed, Yahaya Onimisi;Baroudi, Uthman A.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 5 / pp.1036-1057 / 2013
  • Wireless body area network (WBAN) is a promising candidate for future health monitoring systems. Nevertheless, the path to mature solutions still faces many challenges that need to be overcome. Energy-efficient scheduling is one of these challenges, given the scarcity of available energy in biosensors and the lack of portability. Therefore, researchers from academia, industry and the health sector are working together to realize practical solutions for these challenges. The main difficulty in WBAN is the uncertainty in the state of the monitored system. Intelligent learning approaches such as the Markov Decision Process (MDP) have been proposed to tackle this issue. A Markov Decision Process (MDP) is a form of Markov chain in which the transition matrix depends on the action taken by the decision maker (agent) at each time step. The agent receives a reward, which depends on the action and the state. The goal is to find a function, called a policy, which specifies which action to take in each state so as to maximize some utility function (e.g., the mean or expected discounted sum) of the sequence of rewards. A Partially Observable Markov Decision Process (POMDP) is a generalization of the Markov decision process that allows for incomplete information regarding the state of the system; in this case, the state is not visible to the agent. This has many applications in operations research and artificial intelligence. Due to incomplete knowledge of the system, this uncertainty makes formulating and solving POMDP models mathematically complex and computationally expensive, and limited progress has been made in applying POMDP to real applications. In this paper, we survey the existing methods and algorithms for solving POMDPs in the general domain and in particular in wireless body area networks (WBAN). In addition, the paper discusses recent real implementations of POMDP on practical WBAN problems. We believe that this work will provide valuable insights for newcomers who would like to pursue related research in the WBAN domain.
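One minimal POMDP ingredient in the WBAN setting is a Bayes belief filter over the hidden patient state, where the scheduling action decides whether the biosensor spends energy to obtain an observation at all. All numbers below are illustrative, not drawn from any surveyed paper:

```python
# Belief filter for a hidden patient state (0 = normal, 1 = critical)
# in a WBAN-style POMDP: the action "sleep" saves energy but yields no
# observation, while "sense" costs energy and allows a Bayes update.

T = [[0.95, 0.05],                   # P(next state | state), any action
     [0.10, 0.90]]
O = {0: {"ok": 0.9, "alert": 0.1},   # P(observation | state) when sensing
     1: {"ok": 0.2, "alert": 0.8}}

def belief_step(b, action, obs=None):
    """Advance the belief b = P(state = critical) by one time step."""
    b = b * T[1][1] + (1 - b) * T[0][1]      # transition (prediction)
    if action == "sense" and obs is not None:
        num = b * O[1][obs]                  # Bayes correction
        den = num + (1 - b) * O[0][obs]
        b = num / den
    return b
```

A POMDP policy maps this belief (not the invisible state) to the next action, trading the energy cost of sensing against the value of a sharper belief — the scheduling dilemma the survey is organized around.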

Gauss-Markov 추정기를 이용한 비트 동기화를 위한 파라미터 추정에 관한 연구 (A Study on the Parameter Estimation for the Bit Synchronization Using the Gauss-Markov Estimator)

  • 유흥균;안수길
    • 대한전자공학회논문지 / Vol. 26, No. 3 / pp.8-13 / 1989
  • Under additive Gaussian noise, the transmitted digital data are recovered by simultaneously estimating the amplitude and phase — the key parameters of a bipolar binary random waveform signal with an unknown probability distribution — using a Gauss-Markov estimator. For the Gauss-Markov estimator to be applicable, however, a preprocessing stage was found to be necessary in which the received signal is converted into a sampled series and the observed data vector is obtained using a correlator consisting of a multiplier and an integrator.
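A sketch of the estimation step in that pipeline: once the correlator has produced the sampled observation vector, writing s_k = a·cos(wk) − b·sin(wk) makes the sinusoid linear in (a, b), so the Gauss-Markov (generalized least squares) estimator applies; with white noise it reduces to ordinary least squares, and amplitude and phase follow from (a, b). The signal parameters here are illustrative:

```python
import math

# Gauss-Markov estimation of the amplitude and phase of a sampled
# sinusoid in additive Gaussian noise. With white noise, the estimator
# reduces to ordinary least squares on the linear model y = H [a, b]^T.

def gauss_markov_ab(y, w):
    """Solve the 2x2 normal equations (H^T H) [a, b]^T = H^T y,
    where the columns of H are cos(w*k) and -sin(w*k)."""
    c = [math.cos(w * k) for k in range(len(y))]
    s = [-math.sin(w * k) for k in range(len(y))]
    h11 = sum(ci * ci for ci in c)
    h12 = sum(ci * si for ci, si in zip(c, s))
    h22 = sum(si * si for si in s)
    g1 = sum(ci * yi for ci, yi in zip(c, y))
    g2 = sum(si * yi for si, yi in zip(s, y))
    det = h11 * h22 - h12 * h12
    a = (h22 * g1 - h12 * g2) / det
    b = (h11 * g2 - h12 * g1) / det
    return a, b

def amplitude_phase(a, b):
    """Recover A and phi from s_k = A*cos(w*k + phi)."""
    return math.hypot(a, b), math.atan2(b, a)
```

On a noiseless test signal A·cos(wk + φ) the estimator recovers A and φ exactly; with noise, the same normal equations give the minimum-variance linear unbiased estimate, which is the Gauss-Markov property.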
