• Title/Summary/Keyword: markov chain

A Design and Implementation of Reliability Analyzer for Embedded Software using Markov Chain Model and Unit Testing (내장형 소프트웨어 마르코프 체인 모델과 단위 테스트를 이용한 내장형 소프트웨어 신뢰도 분석 도구의 설계와 구현)

  • Kwak, Dong-Gyu; Yoo, Chae-Woo; Choi, Jae-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.1-10 / 2011
  • As the requirements of embedded systems grow more complex, tools for analyzing the reliability of embedded software are increasingly needed. Probabilistic modeling is a common way of analyzing software reliability, but applying it to embedded software that controls multiple devices requires specializing the model to the embedded setting. In addition, existing reliability analyzers measure the transition probability of each state in separate, ad hoc ways and do not consider reusing a model once it has been built. In this paper, we propose a reliability analyzer for embedded software that uses an embedded-software Markov chain model together with a unit testing tool. The embedded-software Markov chain model specializes the Markov chain model commonly used for reliability analysis to embedded software, and the unit testing tool has a host-target structure suited to embedded development environments. Because the analyzer automatically measures the transition probabilities between units from unit-test results, reliability can be analyzed more easily than with existing tools. The software model is represented as an XML document, so test results updated by the unit testing tool can be applied directly, and a web-based interface and an SVN repository allow many developers to access the tool easily. Finally, we analyze the reliability of an example system to demonstrate the usefulness of the analyzer.
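
As a rough illustration of the idea behind such a tool (not the authors' implementation; the units, traces, and reliability figures below are hypothetical), the following sketch estimates transition probabilities between software units from execution traces and combines them with per-unit test pass rates into a Markov-model reliability estimate.

```python
# Hypothetical sketch of Markov-model reliability estimation from unit tests.
from collections import defaultdict
import random

# Hypothetical execution traces collected while running tests.
traces = [
    ["init", "parse", "control", "exit"],
    ["init", "parse", "parse", "control", "exit"],
    ["init", "control", "exit"],
]

# Hypothetical per-unit reliabilities = passed tests / total tests.
unit_reliability = {"init": 0.99, "parse": 0.95, "control": 0.97, "exit": 1.0}

# 1. Estimate transition probabilities by counting observed transitions.
counts = defaultdict(lambda: defaultdict(int))
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
P = {a: {b: c / sum(row.values()) for b, c in row.items()}
     for a, row in counts.items()}

# 2. Monte Carlo estimate of system reliability: walk the chain from the
#    start unit to the terminal unit, multiplying unit reliabilities.
def simulate(start="init", end="exit", runs=10000):
    total = 0.0
    for _ in range(runs):
        unit, r = start, unit_reliability[start]
        while unit != end:
            unit = random.choices(list(P[unit]), weights=P[unit].values())[0]
            r *= unit_reliability[unit]
        total += r
    return total / runs

print("estimated system reliability:", round(simulate(), 4))
```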

Estimation of Markov Chain and Gamma Distribution Parameters for Generation of Daily Precipitation Data from Monthly Data (월 자료로부터 일 강수자료 생성을 위한 Markov 연쇄 및 감마분포 모수 추정)

  • Moon, Kyung Hwan; Song, Eun Young; Son, In Chang; Wi, Seung Hwan; Oh, Soonja; Hyun, Hae Nam
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.1 / pp.27-35 / 2017
  • This study presents a method for generating daily precipitation data from monthly data. We apply a combined Markov chain and gamma distribution method with four parameters, α, β, p(W/W), and p(W/D), to daily precipitation records for the past 30 years collected from 23 of the country's meteorological offices. The four parameters were estimated for each of the 23 sites by maximum likelihood. With these site-estimated parameters, the correlations between measured and simulated data are high: 0.99 for the number of rainy days, 0.98 for rainfall probability, and 0.98 for mean daily rainfall. When the parameters are instead estimated from monthly precipitation, the corresponding correlation coefficients are 0.84, 0.83, and 0.96. We conclude that the combined method, with parameters estimated from monthly precipitation data, can be used for practical purposes in Korea, such as assessing climate-change impacts on agriculture and water resources, to obtain daily precipitation data.
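
For illustration, a minimal sketch of the combined generation step is given below. It is not the authors' code: the values for α, β, p(W/W), and p(W/D) are made up, and a real application would estimate them per site and month by maximum likelihood as described above.

```python
# Two-state Markov chain decides wet/dry days; a gamma distribution draws
# rainfall amounts on wet days. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: shape (alpha), scale (beta),
# p_ww = P(wet | previous day wet), p_wd = P(wet | previous day dry).
alpha, beta, p_ww, p_wd = 0.7, 12.0, 0.55, 0.25

def generate_daily_rainfall(n_days=31, start_wet=False):
    rain = np.zeros(n_days)
    wet = start_wet
    for day in range(n_days):
        p_wet = p_ww if wet else p_wd
        wet = rng.random() < p_wet            # Markov chain step
        if wet:
            rain[day] = rng.gamma(alpha, beta)  # daily amount in mm
    return rain

sample = generate_daily_rainfall()
print("wet days:", int((sample > 0).sum()), "total rainfall:", round(sample.sum(), 1))
```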

Continuous Time Markov Process Model for Nuclide Decay Chain Transport in the Fractured Rock Medium (균열 암반 매질에서의 핵종의 붕괴사슬 이동을 위한 연속시간 마코프 프로세스 모델)

  • Lee, Y.M.; Kang, C.H.; Hahn, P.S.; Park, H.H.; Lee, K.J.
    • Nuclear Engineering and Technology / v.25 no.4 / pp.539-547 / 1993
  • A stochastic approach using a continuous-time Markov process is presented to model one-dimensional nuclide transport in fractured rock media, extending previous work [1-3]. Transport of a decay chain of arbitrary length through a single planar fracture in the vicinity of a radioactive waste repository is modeled as a continuous-time Markov process. Most analytical solutions for decay-chain transport handle only chains of limited length, do not consider rock matrix diffusion, and take very complicated forms; the present model instead offers a comparatively simple solution in the form of an expectation and its variance obtained from the stochastic formulation. Deterministic numerical models of decay-chain transport likewise tend to require complicated procedures and can show large discrepancies from the exact solution compared with the stochastic model developed in this study. To demonstrate the use of the model and to verify it against a deterministic model, a specific illustration is given for the transport of a three-member chain in a single fractured rock medium with a constant groundwater flow rate in the fracture, ignoring rock matrix diffusion; the model shows good capability for representing the fractured medium around the repository.
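
The decay-chain part of such a model can be illustrated with a small continuous-time Markov chain simulation. The sketch below is only schematic: it tracks a three-member chain A → B → C with hypothetical decay constants and ignores the advection and matrix-diffusion mechanisms that the paper actually models.

```python
# Continuous-time Markov chain for a three-member decay chain (A -> B -> C -> stable).
import numpy as np

rng = np.random.default_rng(1)
decay_rate = {"A": 1e-3, "B": 5e-3, "C": 2e-3}   # 1/year, made-up values
chain = {"A": "B", "B": "C", "C": "stable"}

def state_at(t_obs, n_atoms=20000):
    """Monte Carlo estimate of the expected number of atoms in each state at t_obs."""
    counts = {"A": 0, "B": 0, "C": 0, "stable": 0}
    for _ in range(n_atoms):
        state, t = "A", 0.0
        while state != "stable":
            t += rng.exponential(1.0 / decay_rate[state])  # exponential holding time
            if t > t_obs:
                break                                      # still in this state at t_obs
            state = chain[state]
        counts[state] += 1
    return counts

print(state_at(t_obs=500.0))   # expected composition after 500 years
```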

Bayesian Inference of the Stochastic Gompertz Growth Model for Tumor Growth

  • Paek, Jayeong; Choi, Ilsu
    • Communications for Statistical Applications and Methods / v.21 no.6 / pp.521-528 / 2014
  • The stochastic Gompertz diffusion model for tumor growth is a topic of active interest, as cancer is a leading cause of death in Korea. Direct maximum likelihood estimation of a stochastic differential equation is possible from the continuous-path likelihood, provided a continuous sample path of the process is recorded over the observation interval; this likelihood underlies the so-called continuous-record (or infill) likelihood function and infill asymptotics. In practice, however, fully continuous data are not available except in a few special cases, so the exact ML method is not applicable. In this paper we propose a method for estimating the parameters of the stochastic Gompertz differential equation via Markov chain Monte Carlo that is applicable to several data structures, and we compare a Markov transition data structure with a data structure that includes an initial point.
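
A generic sketch of the kind of MCMC scheme involved is given below. It assumes an Euler discretization of a stochastic Gompertz equation of the form dX = X(a − b ln X) dt + σX dW, a flat prior, and random-walk Metropolis updates; the authors' exact model specification, priors, and data structures may differ.

```python
# Random-walk Metropolis for a discretized stochastic Gompertz model (assumed form).
import numpy as np

rng = np.random.default_rng(2)

def log_likelihood(theta, x, dt):
    a, b, sigma = theta
    if b <= 0 or sigma <= 0:
        return -np.inf
    mean = x[:-1] + x[:-1] * (a - b * np.log(x[:-1])) * dt   # Euler drift step
    sd = sigma * x[:-1] * np.sqrt(dt)                        # Euler diffusion step
    return np.sum(-0.5 * ((x[1:] - mean) / sd) ** 2 - np.log(sd))

def metropolis(x, dt, n_iter=5000, step=0.05):
    theta = np.array([0.5, 0.1, 0.2])            # arbitrary starting values
    ll = log_likelihood(theta, x, dt)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        ll_prop = log_likelihood(prop, x, dt)
        if np.log(rng.random()) < ll_prop - ll:  # accept/reject under a flat prior
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples)

# Simulated tumor-volume path for illustration only.
dt, n = 0.1, 200
x = np.empty(n)
x[0] = 1.0
for t in range(n - 1):
    x[t + 1] = x[t] + x[t] * (0.6 - 0.2 * np.log(x[t])) * dt \
               + 0.15 * x[t] * np.sqrt(dt) * rng.normal()
print(metropolis(x, dt)[2500:].mean(axis=0))     # posterior means of (a, b, sigma)
```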

Tolerance Optimization with Markov Chain Process (마르코프 과정을 이용한 공차 최적화)

  • Lee, Jin-Koo
    • Transactions of the Korean Society of Machine Tool Engineers / v.13 no.2 / pp.81-87 / 2004
  • This paper presents a new approach to tolerance optimization. Optimal tolerance allotment can be formulated as a stochastic optimization problem, but most schemes for solving such problems run into difficulties with the multivariate integration of the probability density function, and the optimal tolerance allotment problem is a typical example. In the stochastic model used here, the manufacturing system is represented by a Gauss-Markov stochastic process, and manufacturing unit availability is characterized for realistic optimization modeling. The new algorithm performs robustly for a large-deviation approximation, and a significant reduction in computation time is observed compared with results obtained in previous studies.
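
One generic way to avoid the multivariate integration mentioned above is Monte Carlo evaluation of the yield under a given tolerance allotment. The sketch below is purely illustrative and is not the paper's algorithm: it models the drifting process mean as a Gauss-Markov (AR(1)) sequence with made-up parameters and evaluates a linear stack-up criterion.

```python
# Monte Carlo yield evaluation under a Gauss-Markov drift of the process mean.
import numpy as np

rng = np.random.default_rng(3)

def yield_estimate(tolerances, assembly_limit=0.5, rho=0.9, drift_sd=0.02,
                   n_runs=50000):
    """Estimate assembly yield for given part tolerances (all values hypothetical)."""
    k = len(tolerances)
    # Gauss-Markov (AR(1)) drift of each part's process mean over the production run.
    drift = np.zeros((n_runs, k))
    for t in range(1, n_runs):
        drift[t] = rho * drift[t - 1] + drift_sd * rng.normal(size=k)
    # Part deviations: drifting mean plus within-part variation (tolerance/3 as sigma).
    parts = drift + rng.normal(size=(n_runs, k)) * (np.array(tolerances) / 3.0)
    stack = parts.sum(axis=1)                 # linear stack-up of deviations
    return np.mean(np.abs(stack) <= assembly_limit)

print(yield_estimate([0.1, 0.15, 0.2]))
```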

Bayesian Hierarchical Mixed Effects Analysis of Time Non-Homogeneous Markov Chains (계층적 베이지안 혼합 효과 모델을 사용한 비동차 마코프 체인의 분석)

  • Sung, Minje
    • The Korean Journal of Applied Statistics / v.27 no.2 / pp.263-275 / 2014
  • The present study uses a hierarchical Bayesian approach to develop a mixed effects model describing the transitional behavior of subjects in time-nonhomogeneous Markov chains. Because the posterior distributions of the model parameters are not analytically tractable, a Gibbs sampling method is used to draw samples from the full conditional posterior distributions. The proposed model is applied to real data.
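
A stripped-down sketch of the Bayesian machinery (without the hierarchical mixed-effects layer) is shown below: with a Dirichlet prior on each row of a transition matrix, the full conditional given observed transition counts is again Dirichlet, so rows can be drawn directly within a Gibbs sweep. The counts and prior are hypothetical.

```python
# Conjugate Dirichlet sampling of transition-matrix rows from transition counts.
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical transition counts for one time period, 3 states.
counts = np.array([[30, 5, 2],
                   [4, 40, 6],
                   [1, 7, 25]])
prior = np.ones(3)            # symmetric Dirichlet(1, 1, 1) prior on each row

def sample_transition_matrix(counts, prior, n_draws=1000):
    draws = np.empty((n_draws, 3, 3))
    for i in range(n_draws):
        for s in range(3):
            draws[i, s] = rng.dirichlet(prior + counts[s])
        # In the full model, a Gibbs sweep would also update the random
        # effects linking transition matrices across time periods here.
    return draws

post = sample_transition_matrix(counts, prior)
print("posterior mean transition matrix:\n", post.mean(axis=0).round(3))
```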

The Average Outgoing Quality of CSP's for Markov-Dependent Production Processes in Short Production Runs (마코프종속(從屬)인 생산공정의 운영기간(運營期間)에 따른 연속생산형(連續生産型) 샘플링 검사방식의 평균출검품질(平均出檢品質))

  • Park, Heung-Seon; Kim, Seong-In
    • Journal of Korean Institute of Industrial Engineers / v.15 no.1 / pp.89-103 / 1989
  • This paper investigates the approximate average outgoing quality and related properties of a class of continuous sampling plans in short production runs when the quality of successive units follows a two-state time-homogeneous Markov chain. Results of previous studies are obtained as special cases. It is observed that the long-run average outgoing quality limit values under statistical control differ significantly from those for short production runs in a Markov-dependent production process.
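
For intuition, the following simulation sketch (not the paper's analytical derivation) estimates the average outgoing quality of a CSP-1 plan when unit quality follows a two-state Markov chain; the clearance number i, sampling fraction f, run length, and transition probabilities are all assumed values.

```python
# Simulated average outgoing quality of a CSP-1 plan over short production runs.
import random

random.seed(5)

def simulate_aoq(run_length=500, i=10, f=0.1, p_dd=0.3, p_gd=0.02, n_runs=2000):
    """AOQ over short runs; defectives found on inspection are replaced."""
    total_out_def, total_units = 0, 0
    for _ in range(n_runs):
        defective = False              # state of the two-state Markov quality process
        screening, streak = True, 0    # CSP-1 starts in 100% inspection
        for _ in range(run_length):
            p_def = p_dd if defective else p_gd
            defective = random.random() < p_def
            inspected = screening or (random.random() < f)
            if inspected and defective:
                screening, streak = True, 0        # defect found: back to screening
            elif defective:
                total_out_def += 1                 # uninspected defective escapes
            if screening and inspected and not defective:
                streak += 1
                if streak >= i:
                    screening, streak = False, 0   # switch to sampling inspection
            total_units += 1
    return total_out_def / total_units

print("approximate AOQ:", round(simulate_aoq(), 4))
```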

Partially Observable Markov Decision Processes (POMDPs) and Wireless Body Area Networks (WBAN): A Survey

  • Mohammed, Yahaya Onimisi; Baroudi, Uthman A.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.5 / pp.1036-1057 / 2013
  • The wireless body area network (WBAN) is a promising candidate for future health monitoring systems. Nevertheless, the path to mature solutions still faces many challenges; energy-efficient scheduling is one of them, given the scarcity of the biosensors' available energy and the lack of portability. Researchers from academia, industry, and the health sector are therefore working together to realize practical solutions to these challenges. The main difficulty in WBAN is the uncertainty in the state of the monitored system, and intelligent learning approaches such as the Markov decision process (MDP) have been proposed to tackle it. An MDP is a form of Markov chain in which the transition matrix depends on the action taken by the decision maker (agent) at each time step; the agent receives a reward that depends on the action and the state, and the goal is to find a policy, a function specifying which action to take in each state, that maximizes some utility (e.g., the mean or expected discounted sum) of the sequence of rewards. A partially observable Markov decision process (POMDP) generalizes the MDP to incomplete information about the state of the system: the state is not directly visible to the agent. POMDPs have many applications in operations research and artificial intelligence, but this uncertainty makes formulating and solving POMDP models mathematically complex and computationally expensive, and limited progress has been made in applying POMDPs to real applications. In this paper, we survey existing methods and algorithms for solving POMDPs, both in the general domain and in the particular context of WBANs, and discuss recent real implementations of POMDPs on practical WBAN problems. We believe this work will provide valuable insight for newcomers who would like to pursue related research in the WBAN domain.
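
The MDP formulation described above can be made concrete with a toy value-iteration example. Everything in the sketch below is made up (two states, two actions labeled "sleep" and "transmit", arbitrary transition matrices and rewards); a real WBAN scheduling POMDP would additionally maintain a belief distribution over the hidden states.

```python
# Toy value iteration for a two-state, two-action MDP with made-up parameters.
import numpy as np

# P[a][s, s'] = transition probability under action a; R[s, a] = immediate reward.
P = {0: np.array([[0.9, 0.1], [0.4, 0.6]]),      # action 0: "sleep"
     1: np.array([[0.2, 0.8], [0.1, 0.9]])}      # action 1: "transmit"
R = np.array([[1.0, -0.5],                        # rewards per (state, action)
              [0.0, 2.0]])
gamma = 0.95                                      # discount factor

def value_iteration(tol=1e-8):
    V = np.zeros(2)
    while True:
        Q = np.array([[R[s, a] + gamma * P[a][s] @ V for a in (0, 1)]
                      for s in (0, 1)])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)        # optimal values and greedy policy
        V = V_new

V, policy = value_iteration()
print("optimal values:", V.round(3), "policy:", policy)
```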

On the Analysis of DS/CDMA Multi-hop Packet Radio Network with Auxiliary Markov Transient Matrix. (보조 Markov 천이행렬을 이용한 DS/CDMA 다중도약 패킷무선망 분석)

  • 이정재
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.5 / pp.805-814 / 1994
  • In this paper, we introduce a new method for analyzing the throughput of a packet radio network using an auxiliary Markov transient matrix with a failure state and a success state, and we consider the effect of symbol errors on the network state (X, R), defined by the number of transmitting PRUs X and receiving PRUs R. The packet radio network is examined as a continuous-time Markov chain model over a direct-sequence BPSK CDMA radio channel with hard-decision Viterbi decoding and a bit-by-bit changing spreading code. For an unslotted, distributed, multi-hop packet radio network, we assume that packet errors caused by symbol errors on the radio channel form a Poisson process and that the duration of an error occurrence is exponentially distributed. The throughput is obtained as a function of radio channel parameters, such as the received signal-to-noise ratio and the number of spreading-code chips per symbol, and of network parameters, such as the number of PRUs and the offered traffic rate. The results show that this composite analysis makes it possible to combine the Markovian packet radio network model with a coded DS/BPSK CDMA radio channel.
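
The continuous-time Markov chain machinery behind such an analysis can be sketched generically: build a generator matrix Q over network states, solve πQ = 0 for the stationary distribution, and weight per-state packet success rates by π. The three-state chain and all rates below are illustrative assumptions, not the paper's model.

```python
# Generic CTMC steady-state throughput calculation with a made-up generator matrix.
import numpy as np

# Hypothetical generator matrix (rows sum to zero); states = 0, 1, 2 active transmissions.
Q = np.array([[-2.0, 2.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 2.0, -2.0]])

# Solve pi Q = 0 with sum(pi) = 1 by appending the normalization constraint.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Hypothetical per-state packet success rates (packets per unit time), e.g. as
# produced by a separate coded DS/BPSK channel analysis.
success_rate = np.array([0.0, 0.9, 1.4])
print("stationary distribution:", pi.round(3))
print("throughput estimate:", float(pi @ success_rate))
```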

Accuracy evaluation of ZigBee's indoor localization algorithm (ZigBee 실내 위치 인식 알고리즘의 정확도 평가)

  • Noh, Angela Song-Ie; Lee, Woong-Jae
    • Journal of Internet Computing and Services / v.11 no.1 / pp.27-33 / 2010
  • This paper applies a Bayesian Markov inference localization technique to determine the position of a ZigBee mobile device. To evaluate its accuracy, we compare it with a conventional map-based localization technique. While map-based localization refers to a database of predefined locations and their RSSI data, Bayesian Markov inference localization responds to changes in time, direction, and distance. All position estimates are derived from received signal strength (RSS) measured with ZigBee modules. Our results show the relationship between RSSI and distance in an indoor ZigBee environment and the higher accuracy of the Bayesian Markov localization technique. We conclude that map-based localization is not suitable for changing indoor conditions because of its predefined setup and its lower accuracy compared with the distance-based Markov chain inference localization system.
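
A minimal discrete Bayes (Markov) localization sketch is given below: a belief over a few indoor cells is propagated with a transition model and updated with a Gaussian RSSI likelihood. The cell layout, expected RSSI values, and readings are invented for illustration and are not taken from the paper.

```python
# Discrete Bayes (Markov) localization over four indoor cells using RSSI readings.
import numpy as np

# Transition model between 4 indoor cells (rows sum to 1).
T = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.3, 0.7]])
# Hypothetical expected RSSI (dBm) per cell and measurement noise.
rssi_mean = np.array([-40.0, -55.0, -65.0, -75.0])
rssi_sd = 4.0

def bayes_markov_localize(readings):
    belief = np.full(4, 0.25)                    # uniform prior over cells
    for z in readings:
        belief = T.T @ belief                    # predict: apply Markov transition
        likelihood = np.exp(-0.5 * ((z - rssi_mean) / rssi_sd) ** 2)
        belief *= likelihood                     # update with the RSSI measurement
        belief /= belief.sum()
    return belief

print(bayes_markov_localize([-42.0, -50.0, -58.0]).round(3))
```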