• Title/Summary/Keyword: Stochastic variable


A Poisson Regression Analysis of Physician Visits (외래이용빈도 분석의 모형과 기법)

  • 이영조;한달선;배상수
    • Health Policy and Management
    • /
    • v.3 no.2
    • /
    • pp.159-176
    • /
    • 1993
  • The utilization of outpatient care services involves two sequential decisions. The first decision is whether to initiate utilization, and the second is how many more visits to make after the initiation. Presumably, the initiation decision is largely made by the patient and his or her family, while the number of additional visits is decided under a strong influence of the physician. The implication is that analyzing outpatient care utilization requires specifying each of the two decisions underlying the utilization as a distinct stochastic process. This paper is concerned with the number of physician visits, which is, by definition, a discrete variable that can take only non-negative integer values. Since the initial visit is already accounted for in the analysis of whether any physician visit was made, it is sufficient to focus on the number of visits made in addition to the initial one. The number of additional visits, being a kind of count data, could be assumed to exhibit a Poisson distribution. However, the distribution is likely to be over-dispersed, since the number of physician visits tends to cluster around a few values but still varies widely. A recently reported study of outpatient care utilization employed an analysis based upon the assumption of a negative binomial distribution, a type of over-dispersed Poisson distribution. There is an indication, however, that using a Poisson distribution with adjustments for over-dispersion results in less loss of efficiency in parameter estimation than using a particular distribution such as the negative binomial. An analysis of data on outpatient care utilization was performed, focusing on an assessment of the appropriateness of available techniques. The data used in the analysis were collected by a community survey in Hwachon Gun, Kangwon Do in 1990.
It was observed that a Poisson regression with adjustments for over-dispersion is superior to either an ordinary regression or a Poisson regression without adjustments for over-dispersion. In conclusion, when outpatient care utilization is studied with a model that embodies the two-part character of the decision process underlying the utilization, it seems most appropriate to assume that the number of physician visits made in addition to the initial visit exhibits an over-dispersed Poisson distribution.
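The modeling choice the abstract argues for, a Poisson regression whose standard errors are adjusted for over-dispersion, can be sketched as follows. This is a minimal illustration on simulated counts (not the Hwachon survey data), using the Pearson dispersion statistic as one common quasi-Poisson-style correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated counts of "additional visits": over-dispersed because a gamma
# heterogeneity term (mean 1) multiplies the Poisson mean.
n = 2000
x = rng.normal(size=n)
mu_true = np.exp(0.5 + 0.3 * x)
counts = rng.poisson(mu_true * rng.gamma(2.0, 0.5, size=n))

# Poisson regression (log link) fitted by iteratively reweighted least squares.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (counts - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

fitted = np.exp(X @ beta)
# Pearson dispersion statistic: ~1 for a well-specified Poisson model,
# clearly > 1 under over-dispersion.
phi = np.sum((counts - fitted) ** 2 / fitted) / (n - X.shape[1])

cov = np.linalg.inv(X.T @ (fitted[:, None] * X))
se_plain = np.sqrt(np.diag(cov))
se_adjusted = np.sqrt(phi) * se_plain          # over-dispersion-adjusted SEs
```

The point estimates are the ordinary Poisson ones; only the standard errors are inflated by √phi, which is the kind of adjustment the abstract contrasts with committing to a negative binomial specification.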


Reliability Analysis of Final Settlement Using Terzaghi's Consolidation Theory (테르자기 압밀이론을 이용한 최종압밀침하량에 관한 신뢰성 해석)

  • Chae, Jong Gil;Jung, Min Su
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6C
    • /
    • pp.349-358
    • /
    • 2008
  • In performing a reliability analysis for predicting the settlement with time of the alluvial clay layer at Kobe airport, the uncertainties of the geotechnical properties were examined based on stochastic and probabilistic theory. Using Terzaghi's consolidation theory as the objective function, the failure probability was normalized based on the AFOSM method. As a result of the reliability analysis, the occurrence probabilities for target settlements within ±10% and ±25% of the total settlement from the deterministic analysis were 30~50% and 60~90%, respectively. Considering that the variation coefficients of the input variables are similar to those of past research, the acceptable error range of the total settlement would be expected to be within 10% of the predicted total settlement. The sensitivity analysis showed that the factors that significantly affect the settlement analysis are the uncertainties of the compression coefficient Cc, the pre-consolidation stress Pc, and the prediction model employed. Accordingly, for a prediction with high reliability, it is very important to obtain reliable soil properties such as Cc and Pc by performing laboratory tests in which the in-situ stress and strain conditions are properly simulated.
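The probabilistic treatment of Terzaghi's settlement formula can be illustrated with a Monte Carlo stand-in for the reliability analysis (the paper uses AFOSM; every soil parameter below is an illustrative assumption, not a Kobe airport value):

```python
import numpy as np
from math import log10

rng = np.random.default_rng(1)

# Terzaghi's one-dimensional consolidation settlement of a normally
# consolidated layer:  S = Cc / (1 + e0) * H * log10((p0 + dp) / p0)
H, e0, p0, dp = 10.0, 1.2, 100.0, 80.0   # thickness (m), void ratio, stresses (kPa)

def settlement(cc):
    return cc / (1 + e0) * H * log10((p0 + dp) / p0)

# Deterministic prediction with the mean compression index.
cc_mean, cc_cov = 0.35, 0.25             # mean and coefficient of variation of Cc
s_det = settlement(cc_mean)

# Probabilistic version: treat Cc as a lognormal random variable and estimate
# the probability that the realized settlement falls outside +/-10% of the
# deterministic prediction.
sigma_ln = np.sqrt(np.log(1 + cc_cov ** 2))
mu_ln = np.log(cc_mean) - 0.5 * sigma_ln ** 2
cc_samples = rng.lognormal(mu_ln, sigma_ln, size=100_000)
s_samples = settlement(cc_samples)

p_outside_10 = np.mean(np.abs(s_samples - s_det) > 0.10 * s_det)
</antml>```

With a 25% coefficient of variation on Cc alone, the settlement falls outside the ±10% band with substantial probability, which conveys why the paper ties the acceptable error range to the variation coefficients of the inputs.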

The Economics Value of Electric Vehicle Demand Resource under the Energy Transition Plan (에너지전환 정책하에 전기차 수요자원의 경제적 가치 분석: 9차 전력수급계획 중심으로)

  • Jeon, Wooyoung;Cho, Sangmin;Cho, Ilhyun
    • Environmental and Resource Economics Review
    • /
    • v.30 no.2
    • /
    • pp.237-268
    • /
    • 2021
  • As variable renewable sources rapidly increase under the Energy Transition plan, the cost of integrating renewable sources into the power system is rising sharply. The increase in variable renewable energy reduces the capacity factor of existing conventional generation capacity, which undermines the efficiency of the overall power supply, and demand resources are drawing attention as a solution. In this study, we analyzed how much electric vehicle demand resources, which have great potential among demand resources, can reduce power supply costs when used as a flexible resource for renewable generation. As the methodology, a stochastic power system optimization model that can effectively reflect the volatile characteristics of renewable generation is used to analyze the cost induced by renewable energy and the benefits offered by electric vehicle demand resources. The results show that the virtual power plant-based direct control method yields higher benefits than the time-of-use tariff, and that the higher the proportion of renewable energy in the power system, the higher the benefits of electric vehicle demand resources. The net benefit after accounting for aggregator commission fees and battery wear-and-tear costs was estimated at 67% to 85% of the monthly average fuel cost under a virtual power plant with V2G capability, which shows that a sufficient incentive for market participation can be offered when a rate system is applied in which these net benefits of demand resources are effectively distributed to consumers.
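The core idea of a stochastic power-system optimization with a flexible demand resource can be sketched as a tiny two-stage dispatch LP: commit cheap baseload before the wind outcome is known, then balance each scenario with an expensive peaker or a cheap shiftable demand resource (e.g. EV charging). Every number below (capacities, prices, scenarios) is a made-up illustration, not the paper's model or the 9th Basic Plan data:

```python
import numpy as np
from scipy.optimize import linprog

demand = 100.0
scenarios = np.array([20.0, 50.0, 80.0])     # wind output per scenario (MW)
probs = np.array([0.3, 0.4, 0.3])
c_base, c_peak, c_shift = 20.0, 80.0, 10.0   # $/MW (illustrative)

# Variables: [base, peak_s1..s3, shift_s1..s3]; recourse costs are
# probability-weighted so the LP minimizes expected cost.
c = np.r_[c_base, probs * c_peak, probs * c_shift]
A_ub, b_ub = [], []
for s, wind in enumerate(scenarios):
    row = np.zeros(7)
    row[0] = -1.0              # base serves every scenario
    row[1 + s] = -1.0          # peaker dispatched in scenario s
    row[4 + s] = -1.0          # demand shifting in scenario s
    A_ub.append(row)           # base + wind + peak_s + shift_s >= demand
    b_ub.append(wind - demand)
bounds = [(0, None)] * 4 + [(0, 15.0)] * 3   # shifting capped at 15 MW
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
base, expected_cost = res.x[0], res.fun
```

At the optimum the shiftable demand absorbs the low-wind scenario, so less baseload has to be carried for it; that substitution is the source of the "benefit of the demand resource" the study quantifies.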

Demand Shifting or Ancillary Service?: Optimal Allocation of Storage Resource to Maximize the Efficiency of Power Supply (Demand Shifting or Ancillary Service?: 효율적 재생발전 수용을 위한 에너지저장장치 최적 자원 분배 연구)

  • Wooyoung Jeon
    • Environmental and Resource Economics Review
    • /
    • v.33 no.2
    • /
    • pp.113-133
    • /
    • 2024
  • Variable renewable energy (VRE) such as solar and wind power is a main source for achieving carbon net zero, but it undermines the stability of the power supply due to its high variability and uncertainty. An energy storage system (ESS) can not only reduce the curtailment of VRE by load shifting but also contribute to stable power system operation by providing ancillary services. This study analyzes how the allocation of ESS resources between load shifting and ancillary services can maximize the efficiency of the power supply in a situation where the problems caused by VRE are becoming more and more serious. A stochastic power system optimization model that can realistically simulate the variability and uncertainty of VRE was applied. The analysis was set at the years 2023 and 2036, and the optimal resource allocation strategy and benefits of ESS under varying VRE penetration levels were analyzed. The results can be summarized in three points. First, ESS performs well at both load shifting and ancillary services, and the higher the reserve price, the more load shifting is limited in favor of providing reserve. Second, the curtailment of VRE can be an effective substitute for the required reserve: the higher the reserve price level, the higher the curtailment of VRE and the lower the required amount of reserve. Third, if a reasonable reserve offer price reflecting the opportunity cost is applied, ESS can secure economic feasibility in the near future, and the higher the proportion of VRE, the greater the economic feasibility of ESS. This study suggests that a cost-effective low-carbon transition in the power system is possible when the price signal is designed so that power supply resources can be utilized efficiently.
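The trade-off in the first finding, with the capacity split tipping toward reserve as the reserve price rises, can be reduced to a one-constraint LP. The capacity, price spread, and hours below are illustrative assumptions only, not values from the paper:

```python
from scipy.optimize import linprog

P = 10.0             # battery power capacity (MW)
spread = 40.0        # peak/off-peak price spread earned by shifting ($/MWh)
hours_shift = 4.0    # profitable shifting hours per day
hours_reserve = 24.0 # hours per day reserve capacity is paid

def optimal_split(reserve_price):
    """Maximize daily revenue from splitting P between shifting and reserve."""
    # maximize spread*hours_shift*x + reserve_price*hours_reserve*y
    # subject to x + y = P, x >= 0, y >= 0  (linprog minimizes, so negate)
    c = [-(spread * hours_shift), -(reserve_price * hours_reserve)]
    res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[P], bounds=[(0, P), (0, P)])
    return res.x     # (MW allocated to shifting, MW allocated to reserve)

split_low = optimal_split(2.0)    # low reserve price: capacity goes to shifting
split_high = optimal_split(10.0)  # high reserve price: capacity goes to reserve
```

The corner solution flips once the reserve payment per MW exceeds the shifting margin per MW, which is the qualitative mechanism behind the abstract's first result.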

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength is in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. Each graph is drawn as a 40×40 pixel image, with the graph of each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer. In the pooling layer, a 2×2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
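Steps 1-3 of the CNN-FG pipeline, windowing the series and rasterizing each window into a 40×40 RGB image with one color channel per variable, can be sketched as follows. This is a guessed minimal rasterization, not the authors' plotting code, and it uses two made-up variables instead of the twelve technical indicators:

```python
import numpy as np

rng = np.random.default_rng(2)

SIZE, WINDOW = 40, 5

def rasterize(series_list, size=SIZE):
    """Draw each series as a polyline in its own RGB channel of a size x size image."""
    img = np.zeros((size, size, 3), dtype=np.float32)
    for ch, series in enumerate(series_list):
        lo, hi = series.min(), series.max()
        scaled = (series - lo) / (hi - lo + 1e-9) * (size - 1)   # fit to image height
        xs = np.linspace(0, size - 1, len(series))
        cols = np.arange(size)
        rows = np.interp(cols, xs, scaled)       # interpolate between data points
        img[size - 1 - rows.round().astype(int), cols, ch] = 1.0
    return img

# Hypothetical inputs standing in for price and one technical indicator.
prices = rng.normal(0, 1, 100).cumsum()
volume = rng.random(100)

# Step 1: non-overlapping 5-day windows; steps 2-3: one RGB image matrix each.
images = [
    rasterize([prices[i:i + WINDOW], volume[i:i + WINDOW]])
    for i in range(0, len(prices) - WINDOW + 1, WINDOW)
]
```

Each image is already the three-matrix R/G/B representation the abstract describes, so a stack of them can feed a CNN binary classifier directly.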

Technical Inefficiency in Korea's Manufacturing Industries (한국(韓國) 제조업(製造業)의 기술적(技術的) 효율성(效率性) : 산업별(産業別) 기술적(技術的) 효율성(效率性)의 추정(推定))

  • Yoo, Seong-min;Lee, In-chan
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.51-79
    • /
    • 1990
  • Research on technical efficiency, an important dimension of market performance, had received little attention until recently from most industrial organization empiricists, the reason being that traditional microeconomic theory simply assumed away any form of inefficiency in production. Recently, however, an increasing number of research efforts have been conducted to answer questions such as: To what extent do technical inefficiencies exist in the production activities of firms and plants? What factors account for the level of inefficiency found, and what explains the interindustry differences in technical inefficiency? Are there significant international differences in levels of technical efficiency and, if so, how can we reconcile these results with the observed pattern of international trade? As the first in a series of studies on the technical efficiency of Korea's manufacturing industries, this paper attempts to answer some of these questions. Since the estimation of technical efficiency requires plant-level data for each of the five-digit KSIC industries available from the Census of Manufactures, the findings of this paper may be construed as empirical evidence on technical efficiency in Korea's manufacturing industries at the most disaggregated level. We start by clarifying the relationships among the various concepts of efficiency: allocative efficiency, factor-price efficiency, technical efficiency, Leibenstein's X-efficiency, and scale efficiency. It then becomes clear that unless certain ceteris paribus assumptions are satisfied, our estimates of technical inefficiency are in fact related to factor-price inefficiency as well. The empirical model employed is what is called a stochastic frontier production function, which divides the stochastic term into two components: one with a symmetric distribution for pure white noise and the other, with an asymmetric distribution, for technical inefficiency.
A translog production function is assumed for the functional relationship between inputs and output and was estimated by the corrected ordinary least squares method. The second and third sample moments of the regression residuals are then used to yield estimates of four different measures of technical (in)efficiency. The entire range of manufacturing industries can be divided into two groups, depending on whether or not the distribution of estimated regression residuals allows a successful estimation of technical efficiency. The regression equation employing value added as the dependent variable yields a greater number of "successful" industries than the one using gross output. The correlation among estimates of the different measures of efficiency appears to be high, while the estimates of efficiency based on different regression equations seem almost uncorrelated. Thus, in the subsequent analysis of the determinants of interindustry variation in technical efficiency, the choice of the regression equation in the previous stage will affect the outcome significantly.
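The corrected-OLS moment logic described above, using the second and third moments of OLS residuals to separate white noise from one-sided inefficiency, can be sketched on simulated data. A Cobb-Douglas form stands in for the paper's translog, and the moment formulas assume a half-normal inefficiency term:

```python
import numpy as np
from math import sqrt, pi, erf, exp

rng = np.random.default_rng(3)

# Simulated log production data: symmetric noise v plus one-sided
# (half-normal) inefficiency u, entering as v - u.
n = 5000
labor = rng.uniform(1, 3, n)                   # log inputs
capital = rng.uniform(1, 3, n)
v = rng.normal(0, 0.15, n)
u = np.abs(rng.normal(0, 0.30, n))
log_y = 0.8 + 0.6 * labor + 0.35 * capital + v - u

# Step 1: ordinary least squares (the intercept absorbs E[u]).
X = np.column_stack([np.ones(n), labor, capital])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
resid = log_y - X @ beta

# Step 2: method-of-moments correction.  For half-normal u the third central
# moment of v - u is  -sqrt(2/pi) * (4/pi - 1) * sigma_u^3  (negative).
m2 = np.mean(resid ** 2)
m3 = np.mean(resid ** 3)
sigma_u = (-m3 / (sqrt(2 / pi) * (4 / pi - 1))) ** (1 / 3)
sigma_v = sqrt(max(m2 - (1 - 2 / pi) * sigma_u ** 2, 0.0))

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

# Mean technical efficiency E[exp(-u)] for half-normal u.
mean_efficiency = 2 * exp(sigma_u ** 2 / 2) * norm_cdf(-sigma_u)
```

A positively skewed residual distribution (m3 > 0) would make the cube root undefined, which is exactly the "unsuccessful estimation" case that splits the industries into two groups in the paper.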


An Analysis of the Efficiency of Agricultural Business Corporations Using the Stochastic DEA Model (농업생산법인의 경영효율성 분석: 부트스트래핑 기법 활용)

  • Lee, Sang-Ho;Kim, Chung-Sil;Kwon, Kyung-Sup
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.6 no.4
    • /
    • pp.137-152
    • /
    • 2011
  • The purpose of this study is to estimate the efficiency of agricultural business corporations using Data Envelopment Analysis. The proposed method employs a bootstrapping approach that generates efficiency estimates through a Monte Carlo re-sampling process. The technical efficiency, pure technical efficiency, and scale efficiency measures of the corporations are 0.749, 0.790, and 0.948, respectively. Among the 692 agricultural business corporations, 539 (77.9%) were of the Increasing Returns to Scale (IRS) type, 108 (15.6%) of the Constant Returns to Scale (CRS) type, and 45 (6.5%) of the Decreasing Returns to Scale (DRS) type. Since under IRS output increases more than proportionally with input, an increase in input factors such as new investment is required. The Tobit model suggests that the type of corporation, capital level, and period of operation affect the efficiency score more than other factors. The positive coefficients of the capital level and period of operation variables indicate that the efficiency score increases as capital level and period of operation increase.
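The two ingredients of the study, a DEA efficiency score per decision-making unit (DMU) and a bootstrap around it, can be sketched with an input-oriented CCR model solved as a linear program. The four DMUs and the naive resampling scheme below are illustrative assumptions; the paper's exact bootstrap procedure may differ:

```python
import numpy as np
from scipy.optimize import linprog

# Two inputs, one output, four hypothetical DMUs.
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 4.0], [6.0, 5.0]])   # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                        # outputs

def ccr_score(o, X, Y):
    """Input-oriented CCR: min theta s.t. X'lam <= theta*x_o, Y'lam >= y_o, lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam_1..lam_n]
    A_ub = np.vstack([
        np.c_[-X[o], X.T],                       # X'lam - theta*x_o <= 0
        np.c_[np.zeros(Y.shape[1]), -Y.T],       # -Y'lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [ccr_score(o, X, Y) for o in range(len(X))]

# Naive bootstrap: re-sample the reference set and re-score DMU 2 (always
# kept in the reference so each LP stays feasible).
rng = np.random.default_rng(4)
boot = []
for _ in range(200):
    idx = rng.integers(0, len(X), size=len(X))
    Xb, Yb = np.vstack([X[idx], X[2:3]]), np.vstack([Y[idx], Y[2:3]])
    boot.append(ccr_score(len(idx), Xb, Yb))
```

Percentiles of `boot` then give an interval around the point score, which is the role the bootstrap plays in the study's efficiency estimates.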


Valuing the Risks Created by Road Transport Demand Forecasting in PPP Projects (민간투자 도로사업의 교통수요 예측위험의 경제적 가치)

  • Kim, Kangsoo;Cho, Sungbin;Yang, Inseok
    • KDI Journal of Economic Policy
    • /
    • v.35 no.4
    • /
    • pp.31-61
    • /
    • 2013
  • The purpose of this study is to calculate the economic value of transport demand forecasting risks in road PPP projects. Under the assumption that the volatility of a road PPP project's value arises only from the uncertainty of traffic volume forecasting, this study calculates the economic value of the traffic forecasting risks for a road PPP project. To that end, the forecasted traffic volume is assumed to be a stochastic variable that follows a geometric Brownian motion over time. In particular, this study attempts to differentiate itself from existing studies that simply use arbitrary assumptions by applying different traffic volume growth volatilities and rates before and after the ramp-up period. Analysis of the case projects reveals that the risk premium related to the traffic volume forecast is 7.39~8.30%, without considering option value such as a minimum revenue guarantee, while the project value volatility caused by transport demand forecasting risks is 17.11%. As the discount rate grows, the project value volatility tends to decrease, and the volatility in project value is always larger than that in transport volume owing to the leverage effect of fixed expenditure. The market value of transport demand forecasting risk, calculated using the project value volatility and risk premium, is between 0.42 and 0.50, implying that a 1% increase or decrease in transport volume volatility would lead to a 0.42~0.50% increase or decrease in the risk premium of the project.
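The paper's core assumption, forecast traffic volume following a geometric Brownian motion, can be simulated directly. The drift, volatility, initial volume, and horizon below are illustrative assumptions, not the case-project values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Geometric Brownian motion for traffic volume:  dV = mu*V dt + sigma*V dW.
mu, sigma = 0.03, 0.15            # annual drift and volatility (illustrative)
v0, years, n_paths = 50_000.0, 30, 10_000
dt = 1.0

# Exact discretization: log-increments are iid normal.
z = rng.normal(size=(n_paths, years))
log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
paths = v0 * np.exp(np.cumsum(log_increments, axis=1))

# Sanity checks on the ensemble: E[V_t] = v0*exp(mu*t) and
# Var[log(V_t/v0)] = sigma^2 * t.
mean_final = paths[:, -1].mean()
log_var = np.log(paths[:, -1] / v0).var()
```

Feeding such paths through a project cash-flow model (with its fixed expenditure) is what produces a project-value volatility larger than the traffic volatility, i.e. the leverage effect the abstract mentions.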


Forecasts of the BDI in 2010 -Using the ARIMA-Type Models and HP Filtering (2010년 BDI의 예측 -ARIMA모형과 HP기법을 이용하여)

  • Mo, Soo-Won
    • Journal of Korea Port Economic Association
    • /
    • v.26 no.1
    • /
    • pp.222-233
    • /
    • 2010
  • This paper aims at predicting the BDI from January to December 2010 using univariate time series econometric techniques: stochastic ARIMA-type models and the Hodrick-Prescott filtering technique. A multivariate cause-and-effect econometric model is not employed, as it does not assure a higher degree of forecasting accuracy than a univariate model and also fails to adjust itself over the post-sample period. This article introduces two ARIMA models and five Intervention-ARIMA models. The monthly data cover the period January 2000 through December 2009. The out-of-sample forecasting performance is compared between the ARIMA-type models and the random walk model. Forecasting performance is measured by three summary statistics: root mean squared error (RMSE), mean absolute error (MAE), and mean error (ME). The RMSE and MAE indicate that the ARIMA-type models outperform the random walk model. The mean errors for all models are small in magnitude relative to the MAEs, indicating that none of the models tends to systematically over- or under-predict. The pessimistic ex-ante forecast for the end of 2010 is 2,820, compared with the optimistic forecast of 4,230.
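The three summary statistics used to rank the models are straightforward to compute; the arrays below are illustrative, not the BDI series:

```python
import numpy as np

# Illustrative actual vs forecast values for five out-of-sample months.
actual = np.array([3200.0, 3050.0, 2900.0, 3100.0, 2950.0])
forecast = np.array([3150.0, 3100.0, 2850.0, 3180.0, 2900.0])

errors = actual - forecast
rmse = np.sqrt(np.mean(errors ** 2))   # penalizes large misses more heavily
mae = np.mean(np.abs(errors))          # average miss size
me = np.mean(errors)                   # near zero => no systematic bias
```

An ME small relative to the MAE, as here, is the abstract's criterion for concluding that a model neither over- nor under-predicts systematically.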

Forecast of the Daily Inflow with Artificial Neural Network using Wavelet Transform at Chungju Dam (웨이블렛 변환을 적용한 인공신경망에 의한 충주댐 일유입량 예측)

  • Ryu, Yongjun;Shin, Ju-Young;Nam, Woosung;Heo, Jun-Haeng
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.12
    • /
    • pp.1321-1330
    • /
    • 2012
  • In this study, the daily inflow at the basin of Chungju dam is predicted using a wavelet-artificial neural network as a nonlinear model. A time series generally consists of a linear combination of trend, periodicity, and a stochastic component; when building a time series model from such data, the trend and periodicity components have to be removed. The wavelet transform, a denoising technique, is applied to remove nonlinear dynamic noise such as the trend and periodicity included in hydrometeorological data, as well as the simple noise that arises in the measurement process. The wavelet-artificial neural network (WANN), which uses wavelet-transformed data as input variables, is compared with an artificial neural network (ANN) using only raw data. As a result, the coefficient of determination and the slope from linear regression are higher for WANN than for ANN by 0.031 and 0.0115, respectively, and the RMSE and RRMSE of WANN are smaller than those of ANN by 37.388 and 0.099, respectively. Therefore, the WANN model applied in this study gives more accurate results than ANN, and applying a denoising technique through wavelet transforms is expected to yield more accurate predictions than using raw, noisy data.
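The denoising step can be illustrated with a one-level Haar transform, the simplest wavelet (the abstract does not specify the wavelet family used): threshold the detail coefficients and reconstruct a smoother series to feed the network. The signal and noise level below are synthetic stand-ins for the inflow data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for a daily inflow series: smooth signal plus noise.
t = np.linspace(0, 4 * np.pi, 512)
signal = np.sin(t) + 0.5 * np.sin(3 * t)
noisy = signal + rng.normal(0, 0.3, t.size)

def haar_denoise(x, threshold):
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)    # low-frequency content
    detail = (even - odd) / np.sqrt(2)    # high-frequency content
    detail = np.where(np.abs(detail) > threshold, detail, 0.0)  # hard threshold
    rec = np.empty_like(x)                # inverse one-level Haar transform
    rec[0::2] = (approx + detail) / np.sqrt(2)
    rec[1::2] = (approx - detail) / np.sqrt(2)
    return rec

denoised = haar_denoise(noisy, threshold=0.5)

# The denoised series should sit closer to the clean signal than the raw one.
rmse_noisy = np.sqrt(np.mean((noisy - signal) ** 2))
rmse_denoised = np.sqrt(np.mean((denoised - signal) ** 2))
```

Training the network on `denoised` rather than `noisy` is the WANN-vs-ANN comparison in miniature: the smaller reconstruction error is the mechanism behind WANN's better fit statistics.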