• Title/Summary/Keyword: Markov Analysis


Uncertainty Assessment of Single Event Rainfall-Runoff Model Using Bayesian Model (Bayesian 모형을 이용한 단일사상 강우-유출 모형의 불확실성 분석)

  • Kwon, Hyun-Han;Kim, Jang-Gyeong;Lee, Jong-Seok;Na, Bong-Kil
    • Journal of Korea Water Resources Association / v.45 no.5 / pp.505-516 / 2012
  • The study applies HEC-1, a hydrologic simulation model developed by the Hydrologic Engineering Center, to the Daecheong dam watershed to model hourly inflows of Daecheong dam. Although the HEC-1 model provides an automatic optimization technique for some of the parameters, the built-in optimization model is not sufficient for estimating reliable parameters; in particular, it often fails when a large number of parameters exist. In this regard, the main objective of this study is to develop a Bayesian Markov chain Monte Carlo simulation based HEC-1 model (BHEC-1). The Clark IUH method for transforming precipitation excess to runoff and the Soil Conservation Service runoff curve method for abstractions were used in the Bayesian Monte Carlo simulation. Simulating runoff at the Daecheong station in the HEC-1 model under the Bayesian optimization scheme yields posterior probability distributions of the hydrograph, thus quantifying uncertainties in the rainfall-runoff process. The proposed model showed powerful performance in estimating model parameters and deriving full uncertainties, so that it can be applied to various hydrologic problems such as frequency curve derivation, dam risk analysis, and climate change studies.
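
As a rough illustration of the Bayesian MCMC machinery that BHEC-1 builds on, the sketch below fits a single runoff coefficient with a random-walk Metropolis sampler. The one-parameter linear runoff model, the synthetic rainfall series, and the known noise level are all hypothetical stand-ins for the actual HEC-1 parameter set.

```python
import math
import random

random.seed(42)

# Hypothetical stand-in for HEC-1: runoff q = k * rainfall p, noisy observations.
true_k, sigma = 0.6, 0.5
rain = [5.0, 12.0, 20.0, 8.0, 15.0]
obs = [true_k * p + random.gauss(0.0, sigma) for p in rain]

def log_likelihood(k):
    # Gaussian observation error with known sigma; flat prior on k
    return sum(-0.5 * ((q - k * p) / sigma) ** 2 for p, q in zip(rain, obs))

def metropolis(n_iter=5000, step=0.05, k0=1.0, burn=1000):
    k, ll = k0, log_likelihood(k0)
    samples = []
    for _ in range(n_iter):
        cand = k + random.gauss(0.0, step)            # random-walk proposal
        ll_cand = log_likelihood(cand)
        if random.random() < math.exp(min(0.0, ll_cand - ll)):
            k, ll = cand, ll_cand                     # accept the candidate
        samples.append(k)
    return samples[burn:]                             # retained posterior draws

samples = metropolis()
post_mean = sum(samples) / len(samples)
```

The retained draws approximate the posterior of the coefficient; applied to the full model, the same loop yields posterior hydrograph bands rather than a single calibrated curve.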

Derivation of the Instantaneous Unit Hydrograph and Estimation of the Direct Runoff by Using the Geomorphologic Parameters (지상인자에 의한 순간단위도 유도와 유출량 예측)

  • 천만복;서승덕
    • Magazine of the Korean Society of Agricultural Engineers / v.32 no.3 / pp.87-101 / 1990
  • The purpose of this study is to estimate the flood discharge and runoff volume of a stream by using geomorphologic parameters obtained from topographic maps following the law of stream classification and ordering by Horton and Strahler. The present model is modified from Cheng's model, which derives the geomorphologic instantaneous unit hydrograph. The present model uses the results of Laplace transformation and convolution integral of the probability density function of the travel time at each state. The stream flow velocity parameters are determined as a function of the rainfall intensity, and the effective rainfall is calculated by the SCS method. The total direct runoff volume until the time to peak is estimated by assuming a triangular hydrograph. The model is used to estimate the time to peak, the flood discharge, and the direct runoff in the Andong, Imha, Geomchon, and Sunsan basins in the Nakdong River system. The results of the model application are as follows: 1. For each basin, as the rainfall intensity doubles from 1 mm/h to 2 mm/h with the same rainfall duration of 1 hour, the hydrographs show that the runoff volume doubles while the duration of the base flow and the time to peak remain the same, which agrees with the theory of the unit hydrograph. 2. Comparisons of the model-predicted and observed values show small relative errors of 0.44-7.4% in the flood discharge and a 1-hour difference in time to peak, except for the Geomchon basin, which shows 10.32% and 2 hours, respectively. 3. When the rainfall intensity is small, the error of the flood discharge estimated by this model is relatively large; this may be due to the introduction of the flood velocity concept in the stream flow velocity. 4. The total direct runoff volume until the time to peak estimated by this model has a small relative error compared with the observed data.
5. The sensitivity analysis of velocity parameters to flood discharge shows that the flood discharge is sensitive to the velocity coefficient, while it is insensitive to the ratio of the arrival time of the moving portion to that of the storage portion of a stream and to the ratio of the arrival time of the stream to that of overland flow.
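
Two of the ingredients above, the SCS effective-rainfall method and the triangular hydrograph, have compact standard formulas; the sketch below implements them. The curve numbers, basin area, and time to peak are chosen purely for illustration and are not values from the study.

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number effective rainfall (mm), with the standard
    initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0          # potential retention S (mm, SI form)
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0                    # all rainfall absorbed
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def triangular_peak(area_km2, q_mm, tp_hr):
    """Peak discharge (m^3/s) of the SCS triangular unit hydrograph."""
    return 0.208 * area_km2 * q_mm / tp_hr

# Illustrative values only: CN = 100 returns all rainfall as runoff,
# while light rain on a CN-75 basin is fully absorbed.
q = scs_runoff(50.0, 100.0)
peak = triangular_peak(100.0, q / 5.0, 2.0)
```
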


A Comparison Study of Model Parameter Estimation Methods for Prognostics (건전성 예측을 위한 모델변수 추정방법의 비교)

  • An, Dawn;Kim, Nam Ho;Choi, Joo Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.25 no.4 / pp.355-362 / 2012
  • Remaining useful life (RUL) prediction of a system is important in the prognostics field since it is directly linked with safety and maintenance scheduling. In physics-based prognostics, accurately estimated model parameters can predict the remaining useful life exactly. It is, however, not a simple task to estimate the model parameters, because most real systems have multivariate model parameters that are correlated with each other. This paper presents representative methods to estimate model parameters in physics-based prognostics and discusses the differences among three methods: the particle filter method (PF), the overall Bayesian method (OBM), and the sequential Bayesian method (SBM). The three methods are based on the same theoretical background, Bayesian estimation, but they are distinguished from each other in their sampling methods and uncertainty analysis processes. Therefore, a simple physical model as an easy task and the Paris model for a crack growth problem are used to discuss the differences among the three methods, and the performance of each method is compared using established prognostics metrics.
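
Of the three methods compared, the particle filter is the easiest to sketch: below is a minimal sequential importance-resampling loop estimating a single degradation-rate parameter. The linear degradation model, noise level, and jitter step are hypothetical choices for illustration, not the paper's Paris-model setup.

```python
import math
import random

random.seed(1)

# Hypothetical linear degradation x_t = theta * t observed with noise;
# theta stands in for a model parameter such as a crack-growth constant.
true_theta, noise_sd = 0.5, 0.2
meas = [true_theta * t + random.gauss(0.0, noise_sd) for t in range(1, 11)]

n = 2000
particles = [random.uniform(0.0, 1.0) for _ in range(n)]  # prior samples

for t, m in enumerate(meas, start=1):
    # Weight each particle by the likelihood of the new measurement
    w = [math.exp(-0.5 * ((m - th * t) / noise_sd) ** 2) for th in particles]
    # Resample in proportion to the weights, then jitter to keep diversity
    particles = random.choices(particles, weights=w, k=n)
    particles = [th + random.gauss(0.0, 0.01) for th in particles]

theta_hat = sum(particles) / n
```

Resampling concentrates particles where recent measurements are likely; the jitter step is a common fix for sample impoverishment when a static parameter is estimated this way.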

Analysis of an M/G/1/K Queueing System with Queue-Length Dependent Service and Arrival Rates (시스템 내 고객 수에 따라 서비스율과 도착율을 조절하는 M/G/1/K 대기행렬의 분석)

  • Choi, Doo-Il;Lim, Dae-Eun
    • Journal of the Korea Society for Simulation / v.24 no.3 / pp.27-35 / 2015
  • We analyze an M/G/1/K queueing system with queue-length-dependent service and arrival rates. There are a single server and a buffer with finite capacity $K$, including a customer in service. The customers are served on a first-come, first-served basis. We put two thresholds $L_1$ and $L_2$ ($\geq L_1$) on the buffer. If the queue length at the service initiation epoch is less than the threshold $L_1$, the service time of customers follows $S_1$ with a mean of ${\mu}_1$ and the arrival of customers follows a Poisson process with a rate of ${\lambda}_1$. When the queue length at the service initiation epoch is equal to or greater than $L_1$ and less than $L_2$, the service time is changed to $S_2$ with a mean of ${\mu}_2 \geq {\mu}_1$; the arrival rate is still ${\lambda}_1$. Finally, if the queue length at the service initiation epoch is greater than $L_2$, the arrival rate of customers is also changed to a value of ${\lambda}_2$ ($\leq {\lambda}_1$) and the mean of the service times is ${\mu}_2$. By using the embedded Markov chain method, we derive the queue length distribution at departure epochs. We also obtain the queue length distribution at an arbitrary time by the supplementary variable method. Finally, performance measures such as the loss probability and the mean waiting time are presented.
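
The embedded-chain analysis itself is involved, but the effect of the two thresholds is easy to see in a Markovian special case: with exponential service, the model reduces to a birth-death chain whose stationary distribution has a closed product form. The rates and thresholds below are illustrative, not from the paper.

```python
def stationary_dist(K, lam, mu):
    """Stationary distribution of a birth-death (M/M/1/K-type) queue with
    state-dependent rates: lam[n] = arrival rate with n in system (n < K),
    mu[n] = service rate with n in system (1 <= n <= K)."""
    raw = [1.0]
    for n in range(K):
        raw.append(raw[-1] * lam[n] / mu[n + 1])   # detailed balance ratios
    tot = sum(raw)
    return [x / tot for x in raw]

# Hypothetical thresholds: faster service once the queue reaches L1 = 2,
# reduced arrivals once it reaches L2 = 4 (capacity K = 5).
K, L1, L2 = 5, 2, 4
lam = [1.0 if n < L2 else 0.5 for n in range(K)]
mu = [0.0] + [1.0 if n < L1 else 2.0 for n in range(1, K + 1)]
pi = stationary_dist(K, lam, mu)
```

With these numbers the full-buffer probability `pi[K]` is pushed down by both controls, which is the qualitative point of the threshold design.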

Prediction of the direction of stock prices by machine learning techniques (기계학습을 활용한 주식 가격의 이동 방향 예측)

  • Kim, Yonghwan;Song, Seongjoo
    • The Korean Journal of Applied Statistics / v.34 no.5 / pp.745-760 / 2021
  • Prediction of stock prices has long been a subject of interest in financial markets, and thus many studies have been conducted in various directions. As the efficient market hypothesis introduced in the 1970s gained support, the majority opinion came to be that it was impossible to predict stock prices. However, recent advances in predictive models have led to new attempts to predict future prices. Here, we summarize past studies on price prediction by evaluation measures, and predict the direction of the stock prices of Samsung Electronics, LG Chem, and NAVER by applying various machine learning models. In addition to widely used technical indicator variables, accounting indicators such as the price-earnings ratio and the price-book-value ratio and outputs of a hidden Markov model are used as predictors. From the results of our analysis, we conclude that no model shows significantly better accuracy and that it is not possible to predict the direction of stock prices with the models used. Considering that the models with extra predictors show relatively high test accuracy, we may expect a meaningful improvement in prediction accuracy if proper variables that reflect the opinions and sentiments of investors were utilized.
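
One way hidden-Markov-model outputs enter as predictors is via filtered regime probabilities. The sketch below is a plain forward pass over up/down day labels for a hypothetical two-regime chain; the transition and emission matrices are invented for illustration.

```python
def hmm_forward(obs, pi0, A, B):
    """Filtered state probabilities for a discrete-output HMM (forward pass,
    normalized at each step). obs[t] indexes the emission columns of B."""
    n = len(pi0)
    alpha = [pi0[s] * B[s][obs[0]] for s in range(n)]
    z = sum(alpha)
    out = [[a / z for a in alpha]]
    for o in obs[1:]:
        prev = out[-1]
        alpha = [B[s][o] * sum(prev[r] * A[r][s] for r in range(n))
                 for s in range(n)]
        z = sum(alpha)
        out.append([a / z for a in alpha])
    return out

# Hypothetical two-regime chain: state 0 = "bearish", state 1 = "bullish";
# observations: 0 = down day, 1 = up day.
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.6, 0.4], [0.3, 0.7]]
probs = hmm_forward([1, 1, 0], [0.5, 0.5], A, B)
```

After two up days followed by a down day, the filtered probability of the "bearish" regime jumps, and it is this kind of regime signal that can be fed to a downstream direction classifier.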

Measuring the Impact of Competition on Pricing Behaviors in a Two-Sided Market

  • Kim, Minkyung;Song, Inseong
    • Asia Marketing Journal / v.16 no.1 / pp.35-69 / 2014
  • The impact of competition on pricing has been studied in the context of counterfactual merger analyses, where expected optimal prices in a hypothetical monopoly are compared with observed prices in an oligopolistic market. Such analyses typically assume static decision making by consumers and firms and thus have been applied mostly to data from consumer packaged goods such as cereal and soft drinks. However, such a static modeling approach is not suitable when decision makers are forward-looking. In markets for durable products with indirect network effects, consumer purchase decisions and firm pricing decisions are inherently dynamic, as both sides take future states into account. Researchers need to account for the dynamic aspects of decision making on both the consumer side and the supplier side of such markets. Firms in a two-sided market typically subsidize one side of the market to exploit the indirect network effect. Such pricing behaviors would be more prevalent in competitive markets, where firms try to win the battle for the standard. While such a qualitative expectation about the relationship between pricing behaviors and competitive structures is easily formed, few empirical studies have measured the extent to which the distinct pricing structure in two-sided markets depends on the competitive structure of the market. This paper develops an empirical model to measure the impact of competition on the optimal pricing of durable products under indirect network effects. In order to measure the impact of exogenously determined competition among firms on pricing, we compare the equilibrium prices in the observed oligopoly market to those in a hypothetical monopoly market. In computing the equilibrium prices, we account for the forward-looking behaviors of consumers and suppliers.
We first estimate a demand function that accounts for consumers' forward-looking behaviors and indirect network effects. Then, for the supply side, the pricing equation is obtained as an outcome of the Markov Perfect Nash Equilibrium in pricing. In doing so, we utilize numerical dynamic programming techniques. We apply our model to a data set obtained from the U.S. video game console market, which is considered a prototypical case of a two-sided market: the platform typically subsidizes one side of the market to expand the installed base, anticipating larger revenues on the other side resulting from the expanded installed base. The data consist of monthly observations of price, hardware unit sales, and the number of compatible software titles for the Sony PlayStation and Nintendo 64 from September 1996 to August 2002. The Sony PlayStation was released to the market a year before the Nintendo 64 was launched. We compute the expected equilibrium price path for the Nintendo 64 and PlayStation both for oligopoly and for monopoly. Our analysis reveals that the price level differs significantly between the two competition structures. The merged monopoly is expected to set prices higher, by 14.8% for the Sony PlayStation and 21.8% for the Nintendo 64 on average, than the independent firms in an oligopoly would, and such removal of competition would result in a reduction in consumer value of 43.1%. Higher prices are expected for the hypothetical monopoly because the merged firm does not need to engage in the battle for the industry standard. This result is attributed to the distinct property of a two-sided market: competing firms tend to set low prices, particularly in the initial period, to attract consumers at the introductory stage, reinforce their own networks, and eventually dominate the market.
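
At the core of such counterfactuals is a dynamic-programming solve for the firm's pricing policy. The toy value iteration below captures the flavor for the hypothetical monopoly case: the state is the installed base, and a royalty term standing in for the indirect network effect makes a larger base more valuable. All functional forms and numbers are invented; the paper's actual model also includes forward-looking consumers and the oligopoly equilibrium.

```python
def solve_monopoly(N=10, prices=(1.0, 2.0, 3.0), beta=0.95, tol=1e-8):
    """Value iteration for a toy durable-good monopolist. State n is the
    installed base; the 0.1 * n royalty term stands in for software-side
    revenue generated by the indirect network effect."""
    V = [0.0] * (N + 1)
    while True:
        newV = []
        for n in range(N + 1):
            best = float("-inf")
            for p in prices:
                buy = max(0.0, 1.0 - p / 4.0)     # adoption probability
                payoff = buy * p + 0.1 * n        # hardware margin + royalty
                nxt = min(N, n + 1)
                best = max(best, payoff + beta * (buy * V[nxt] + (1 - buy) * V[n]))
            newV.append(best)
        if max(abs(a - b) for a, b in zip(newV, V)) < tol:
            return newV
        V = newV

V = solve_monopoly()
```

Because a bigger installed base earns more royalties, the value function rises with the state, which is what makes early low prices attractive once a competitor is present.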


Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulty in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from domain constraints. The CF technique is broadly classified into memory-based CF, model-based CF, and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, clustering model, or dependency network model. This filtering technique not only mitigates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved; cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which solves the issues of reduced coverage and unstable performance. The method improves performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users.
Furthermore, the issue of reduced coverage is also mitigated by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC, and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced an insignificant improvement in performance over the existing techniques, and it failed to achieve a significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nevertheless, it resulted in a marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test; in the following test, there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques.
Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques. Future work will consider the introduction of a high-dimensional, parameter-free clustering algorithm or a deep learning-based model in order to improve recommendation performance.
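
The transition-probability ingredient of PCCF can be sketched simply: given each user's history of preference-cluster memberships, estimate a row-normalized transition matrix and use it to extrapolate the next cluster. The cluster sequences below are invented, and the actual method additionally weights by fuzzy clustering probabilities.

```python
def transition_matrix(sequences, n_states):
    """Row-normalized transition counts between preference clusters."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):      # consecutive cluster pairs
            counts[a][b] += 1.0
    for row in counts:
        tot = sum(row)
        if tot:
            for j in range(n_states):
                row[j] /= tot               # normalize each row to sum to 1
    return counts

# Hypothetical cluster histories for three users (clusters 0..2)
seqs = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [1, 1, 2, 2, 2]]
P = transition_matrix(seqs, 3)
```

A user currently in cluster 0 is then predicted to drift to cluster 1 with probability `P[0][1]`, which is how the static model tracks moving preferences.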

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.167-194 / 2019
  • This research starts from four basic concepts confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. In order to make these concepts concrete, four framework approaches are designed as follows: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose a statistical optimization model for constructing the portfolio of Sponsored Search Advertising (SSA) from the sponsor's perspective through empirical tests, which can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as the independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable, but this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make decisions in keyword bids under limited information, and a strategic portfolio approach based on statistical models is necessary. In order to solve the problem in the classical SSA model, the new SSA model frame is designed on the basic assumption that Rank is the decision variable. Many papers propose Rank as the best decision variable for predicting CTR, and most search engine platforms provide options and algorithms that make it possible to bid with Rank.
Sponsors can therefore participate in keyword bidding with Rank, and this paper tests the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: in order to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover along with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are empirically performed with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results show that the suggested SSA model yields significant improvements and is valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio due to a myopic focus on their immediately low profit. In order to solve this problem, a Markov chain analysis is carried out and the concepts of the Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. Strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect of CTR, CVR, expected profit, etc.
It is found, however, that generic keywords are the CTKs and have spillover potential, which may increase consumers' awareness and lead them to brand keywords; that is why generic keywords should be the focus of keyword bidding. The contributions of the thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modeling and management based on Rank in constructing the keyword portfolio, and, through empirical tests, to propose new strategic guidelines focusing on the CTK and a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
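
The CTK/EOP idea can be sketched as a Markov-chain value computation: a keyword's expected opportunity profit is its immediate profit plus the discounted profit of the keywords it routes traffic to. The two-keyword chain, profit values, and discount factor below are invented for illustration.

```python
def expected_opportunity_profit(P, r, gamma=0.9, iters=500):
    """Expected profit of each keyword, counting the profit it routes to
    other keywords through the transition matrix P (iterative evaluation
    of v = r + gamma * P v)."""
    n = len(r)
    v = [0.0] * n
    for _ in range(iters):
        v = [r[i] + gamma * sum(P[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v

# Hypothetical 2-keyword chain: keyword 0 is "generic" (low direct profit,
# routes most traffic to the brand keyword); keyword 1 is "brand" (terminal).
P = [[0.2, 0.8], [0.0, 0.0]]
r = [1.0, 10.0]
v = expected_opportunity_profit(P, r)
```

With these numbers the generic keyword's EOP converges to 10, ten times its direct profit of 1, which is exactly why the revised CVR model keeps such keywords in the portfolio.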