• Title/Summary/Keyword: Stochastic analysis


Reliability Analysis of the Gravity Retaining Wall (중력식(重力式) 옹벽(擁壁)의 신뢰도(信賴度)에 관한 연구(研究))

  • Paik, Young Shik;Lee, Yong Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.3 no.2
    • /
    • pp.127-135
    • /
    • 1983
  • A new approach is developed to analyze the reliability of an earth retaining wall using the concept of probability of failure instead of the conventional factor of safety. Many uncertainties included in the conventional stability analysis can be excluded by using the stochastic approach, and a reliability more consistent with reality can be obtained by simulation. The strength parameters of the soil are assumed to be random variables following a generalized beta distribution. The interval [A, B] of the random variables is determined using maximum likelihood estimation, and pseudo-random values corresponding to the proposed beta distribution are generated with the rejection method. The probability of failure, defined as $$P_f=\frac{M}{N},$$ where $P_f$ is the probability of failure, $N$ the total number of trials, and $M$ the number of failures out of $N$, is obtained using the Monte Carlo method. A computer program is developed for this computation procedure, and a numerical example is solved using the developed program.
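
As an illustration of the Monte Carlo procedure described above, the sketch below draws beta-distributed strength parameters on a bounded interval with the rejection method and estimates $P_f = M/N$. The limit-state function and all numerical values are hypothetical placeholders, not the paper's wall geometry or maximum-likelihood estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_kernel(x, p, q):
    """Unnormalized beta density kernel on [0, 1]."""
    return x ** (p - 1) * (1 - x) ** (q - 1)

def sample_beta_rejection(p, q, a, b, n):
    """Draw n samples from a generalized beta distribution on [a, b]
    by the rejection method with a uniform envelope (requires p, q > 1)."""
    c = beta_kernel((p - 1) / (p + q - 2), p, q)   # kernel maximum at the mode
    samples = []
    while len(samples) < n:
        u = rng.uniform(0.0, 1.0)                  # candidate on the unit interval
        if rng.uniform(0.0, c) <= beta_kernel(u, p, q):
            samples.append(a + (b - a) * u)        # accepted, rescale to [a, b]
    return np.array(samples)

# Hypothetical strength parameters: friction angle phi [deg] and cohesion [kPa].
# The intervals [A, B] and the beta shape parameters are placeholders.
phi = sample_beta_rejection(3.0, 2.5, 25.0, 40.0, 10_000)
coh = sample_beta_rejection(2.0, 3.0, 5.0, 20.0, 10_000)

# Hypothetical limit-state function for sliding: failure when the margin g < 0.
# A real analysis would use the wall geometry and an earth-pressure model.
g = 300.0 * np.tan(np.radians(phi)) * 0.6 + 10.0 * coh * 0.5 - 180.0

N = g.size
M = int(np.count_nonzero(g < 0))
print(f"P_f = M/N = {M}/{N} = {M / N:.4f}")
```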


An Empirical Analysis on the Relationship between Stock Price, Interest Rate, Price Index and Housing Price using VAR Model (VAR 모형을 이용한 주가, 금리, 물가, 주택가격의 관계에 대한 실증연구)

  • Kim, Jae-Gyeong
    • Journal of Distribution Science
    • /
    • v.11 no.10
    • /
    • pp.63-72
    • /
    • 2013
  • Purpose - This study analyzes the relationships and dynamic interactions among the stock price index, interest rate, price index, and housing price indices using Korean monthly data from 2000 to 2013, based on a VAR model. It also examines Granger causal relationships among these variables in order to determine whether the time series of one is useful in forecasting another, or to infer certain types of causal dependency between stochastic variables. Research design, data, and methodology - We used Korean monthly data for all variables from 2000:M1 to 2013:M3. First, we checked the correlations among the variables. Second, we conducted the Augmented Dickey-Fuller (ADF) test and the co-integration test within the VAR framework. Third, we employed Granger causality tests to quantify causal effects among the time series. Fourth, we used the impulse response function and variance decomposition based on the VAR model to examine the dynamic relationships among the variables. Results - First, the stock price Granger-causes the interest rate and all housing price indices. The price index, in turn, Granger-causes the stock price and the six metropolitan housing price indices. However, no variable Granger-causes the price index. It is therefore the stock market, not the housing market, that affects housing prices. Second, the impulse response tests show that the stock price is influenced mainly by its own shocks, slightly by the interest rate, and negatively by the price index. A one standard deviation (S.D.) shock to the stock price increases the housing price by 0.08 units after two months, whereas an impulse shock to the interest rate negatively impacts the housing price. Third, the variance decomposition results report that the shock to the stock price accounts for 96% of the variation in the stock price, and the shock to the price index for 2.8%, after two periods. In contrast, the shock to the interest rate accounts for 80% of the variation in the interest rate after ten periods and the shock to the stock price for 19%, while the shock to the price index does not affect the interest rate. After ten periods the housing price index is explained up to 96.7% by itself, 2.62% by the stock price, 0.68% by the price index, and 0.04% by the interest rate. The housing market is thus explained mostly by its own variation, whereas the interest rate has little impact on the housing price. Conclusions - The results elucidate the relationships and dynamic interactions among the stock price index, interest rate, price index, and housing price indices using a VAR model, and could help form the basis for more appropriate economic policies in the future. As the housing market is very important in the Korean economy, any change in house prices affects the other markets, resulting in a shock to the entire economy. The analysis of the dynamic relationships between the housing market and economic variables will therefore support decision making on housing market policy.
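
A minimal sketch of the workflow the abstract describes (ADF test, VAR estimation, Granger causality, impulse responses, variance decomposition), assuming the Python statsmodels package and synthetic placeholder data in place of the Korean monthly series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

# Placeholder monthly data; in the paper these would be the stock price index,
# interest rate, price index, and housing price index (2000:M1-2013:M3).
rng = np.random.default_rng(1)
idx = pd.date_range("2000-01", periods=159, freq="MS")
data = pd.DataFrame(rng.normal(size=(159, 4)).cumsum(axis=0),
                    index=idx, columns=["stock", "rate", "cpi", "housing"])

# Step 1: ADF unit-root test on each series (difference if non-stationary).
for col in data:
    stat, pval, *_ = adfuller(data[col])
    print(f"ADF {col}: p-value = {pval:.3f}")
diffed = data.diff().dropna()

# Step 2: fit the VAR and choose the lag order by information criterion.
res = VAR(diffed).fit(maxlags=12, ic="aic")

# Step 3: Granger causality, e.g. does the stock price help forecast housing prices?
print(res.test_causality("housing", ["stock"], kind="f").summary())

# Step 4: impulse responses and forecast-error variance decomposition.
irf = res.irf(10)      # responses to one-S.D. shocks; irf.plot() draws the paths
res.fevd(10).summary() # share of each shock in each variable's forecast variance
```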

A Study on the Presentation of Entrance Surface Dose Model using Semiconductor Dosimeter, General Dosimeter, Glass Dosimeter: Focusing on Comparative Analysis of Effective Dose and Disease Risk through PCXMC 2.0 based on Monte Carlo Simulation (반도체 선량계, 일반 선량계, 유리 선량계를 이용한 입사표면선량 모델 제시에 관한 연구: 몬테카를로 시뮬레이션 기반의 PCXMC 2.0을 통한 유효선량과 발병 위험도의 비교분석을 중심으로)

  • Hwang, Jun-Ho;Lee, Kyung-Bae
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.2
    • /
    • pp.149-157
    • /
    • 2018
  • One of the purposes of radiation protection is to minimize stochastic effects. PCXMC 2.0 is a Monte Carlo simulation based program that makes it possible to predict the effective dose and the probability of cancer development from the entrance surface dose, so accurate measurement of the entrance surface dose with a dosimeter is particularly important. The purpose of this study is to measure the entrance surface dose with a semiconductor dosimeter, a general dosimeter, and a glass dosimeter, and to compare and analyze the effective dose and the probability of disease of critical organs. As the experimental method, the entrance surface doses of the skull, chest, and abdomen were measured with each dosimeter, and the effective dose and the probability of cancer development of critical organs for each region were evaluated with PCXMC 2.0. As a result, the entrance surface dose for each region differed in the order of the general dosimeter, the semiconductor dosimeter, and the glass dosimeter, even under the same conditions. Based on this analysis, the effective dose and the probability of developing cancer of critical organs also differed in the same order. In conclusion, the effective dose and the risk of disease differ according to the dosimeter used, even under the same conditions, and it is therefore important to present an accurate entrance surface dose model for each dosimeter.

Process Development for Optimizing Sensor Placement Using 3D Information by LiDAR (LiDAR자료의 3차원 정보를 이용한 최적 Sensor 위치 선정방법론 개발)

  • Yu, Han-Seo;Lee, Woo-Kyun;Choi, Sung-Ho;Kwak, Han-Bin;Kwak, Doo-Ahn
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.2
    • /
    • pp.3-12
    • /
    • 2010
  • In previous studies, digital measurement systems and analysis algorithms were developed using related techniques such as aerial photograph detection and high-resolution satellite image processing. However, these studies were limited to 2-dimensional geo-processing, so 3-dimensional spatial information and coordinate systems are needed for higher accuracy in recognizing and locating geo-features. The objective of this study was to develop a stochastic algorithm for optimal sensor placement using a 3-dimensional spatial analysis method. The 3-dimensional information from LiDAR was applied in the sensor placement algorithm based on 2- and/or 3-dimensional gridded points. The study was conducted through three case studies of the optimal sensor placement algorithm: the first case was based on 2-dimensional space without obstacles (2D, no obstacles), the second on 2-dimensional space with obstacles (2D, obstacles), and the third on 3-dimensional space with obstacles (3D, obstacles). Finally, this study suggested a methodology for optimal sensor placement - especially for ground-mounted sensors - using LiDAR data, and showed the possibility of applying the algorithm to information collection using sensors.
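
The study's own placement algorithm is not reproduced here; the sketch below shows one common greedy formulation of the problem on a 2-D occupancy grid with obstacles and a crude line-of-sight test. Grid size, sensor range, and obstacle positions are chosen purely for illustration.

```python
import numpy as np
from itertools import product
from math import hypot

def visible(grid, a, b):
    """Crude line-of-sight test on an occupancy grid (True = obstacle),
    sampling points along the straight segment from cell a to cell b."""
    steps = max(abs(b[0] - a[0]), abs(b[1] - a[1]), 1)
    for k in range(steps + 1):
        t = k / steps
        r = round(a[0] + t * (b[0] - a[0]))
        c = round(a[1] + t * (b[1] - a[1]))
        if grid[r, c]:
            return False
    return True

def greedy_placement(grid, n_sensors, sensor_range):
    """Greedily place sensors so each new sensor covers the largest number of
    not-yet-covered, visible cells within its range (brute-force toy version)."""
    free = [(r, c) for r, c in product(*map(range, grid.shape)) if not grid[r, c]]
    covered, sensors = set(), []
    for _ in range(n_sensors):
        def gain(s):
            return sum(1 for p in free if p not in covered
                       and hypot(p[0] - s[0], p[1] - s[1]) <= sensor_range
                       and visible(grid, s, p))
        best = max(free, key=gain)
        covered |= {p for p in free
                    if hypot(p[0] - best[0], p[1] - best[1]) <= sensor_range
                    and visible(grid, best, p)}
        sensors.append(best)
    return sensors, len(covered) / len(free)

# Toy 15 x 15 site with one rectangular obstacle; a real run would rasterize the
# LiDAR-derived surface (and use its heights for the 3-D, obstacle-aware case).
site = np.zeros((15, 15), dtype=bool)
site[6:9, 3:12] = True
placed, coverage = greedy_placement(site, n_sensors=3, sensor_range=6)
print("sensor cells:", placed, f"coverage = {coverage:.0%}")
```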

Pervaporation Characteristics of Water/Ethanol and Water/Isopropyl Alcohol Mixtures through Zeolite 4A Membranes: Activity Coefficient Model and Maxwell Stefan Model (제올라이트 4A 분리막을 이용한 물/에탄올, 물/이소프로필알코올 혼합물의 투과증발 특성 연구 : 활동도계수모형 및 Generalized Maxwell Stefan 모형)

  • Oh, Woong Jin;Jung, Jae-Chil;Lee, Jung Hyun;Yeo, Jeong-gu;Lee, Da Hun;Park, Young Cheol;Kim, Hyunuk;Lee, Dong-Ho;Cho, Churl-Hee;Moon, Jong-Ho
    • Clean Technology
    • /
    • v.24 no.3
    • /
    • pp.239-248
    • /
    • 2018
  • In this study, pervaporation experiments with water, ethanol, and IPA (isopropyl alcohol) single components and with water/ethanol and water/IPA mixtures were carried out using zeolite 4A membranes developed by Fine Tech Co. Ltd. The membranes were fabricated by hydrothermal synthesis (growth under hydrothermal conditions) after uniformly dispersing zeolite seeds on tubular alumina supports. They have a pore size of about 4 Å, obtained by $Na^+$ ion exchange of the LTA structure with a Si/Al ratio of 1.0, and show a strongly hydrophilic character. Physical characteristics of the prepared membranes were evaluated using SEM (surface morphology), porosimetry (macro- or meso-pore analysis), BET (micropore analysis), and a load tester (compressive strength). Pervaporation experiments under various temperature and concentration conditions confirmed that the zeolite 4A membrane can selectively separate water from ethanol and IPA: the water/ethanol separation factor was over 3,000 and the water/IPA separation factor over 1,500 (50:50 wt% initial feed concentration). The pervaporation behaviors of the single components and binary mixtures were predicted using an activity coefficient model (ACM), the generalized Maxwell-Stefan (GMS) model, and the dusty gas model (DGM). The adsorption and diffusion coefficients of the zeolite top layer were obtained by parameter estimation using a genetic algorithm (GA, a stochastic optimization method). All calculations were carried out using MATLAB 2018a.
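
As a hedged illustration of the parameter-estimation step, the sketch below fits a diffusivity and a Langmuir constant of a simplified single-component adsorption-diffusion flux model to synthetic data with a stochastic global optimizer (SciPy's differential evolution, standing in for the genetic algorithm and the GMS model used in the paper); the flux expression and every numerical value are assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Simplified single-component flux model for the zeolite layer: Langmuir
# adsorption combined with Fickian surface diffusion (a stand-in for GMS).
def flux(log10_D, b, p_feed, p_perm, q_sat=5.0, thickness=5e-6):
    D = 10.0 ** log10_D                      # diffusivity [m2/s]
    return (q_sat * D / thickness) * np.log((1 + b * p_feed) / (1 + b * p_perm))

# Synthetic "measured" fluxes at a few feed pressures [kPa], with 2% noise.
rng = np.random.default_rng(2)
p_feed = np.array([5.0, 10.0, 20.0, 40.0])
p_perm = 0.5
observed = flux(-9.5, 0.8, p_feed, p_perm) * (1 + 0.02 * rng.normal(size=p_feed.size))

# Objective: sum of squared residuals between the model and the "measurements".
def sse(params):
    log10_D, b = params
    return float(np.sum((flux(log10_D, b, p_feed, p_perm) - observed) ** 2))

# Stochastic global search; differential evolution is used here as a simple
# stand-in for the genetic algorithm named in the abstract.
result = differential_evolution(sse, bounds=[(-11.0, -8.0), (0.01, 5.0)], seed=0)
log10_D_hat, b_hat = result.x
print(f"estimated D = {10 ** log10_D_hat:.2e} m2/s, Langmuir b = {b_hat:.2f} kPa^-1")
```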

Univariate Analysis of Soil Moisture Time Series for a Hillslope Located in the KoFlux Gwangneung Supersite (광릉수목원 내 산지사면에서의 토양수분 시계열 자료의 단변량 분석)

  • Son, Mi-Na;Kim, Sang-Hyun;Kim, Do-Hoon;Lee, Dong-Ho;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.9 no.2
    • /
    • pp.88-99
    • /
    • 2007
  • Soil moisture is one of the essential components determining surface hydrological processes such as infiltration and surface runoff, as well as meteorological, ecological, and water quality responses at the watershed scale. This paper discusses soil moisture transfer processes measured at the hillslope scale in the Gwangneung forest catchment in order to understand and provide the basis of the stochastic structure of soil moisture variation. The measured soil moisture series were modelled with the univariate modelling platform developed in this study. The modelling consists of a series of procedures: pre-treatment of the data, investigation of the model structure, selection of candidate models, parameter estimation, and diagnostic checking. The spatial distribution of the model structure is associated with the topographic characteristics of the hillslope: the upslope area computed by the multiple flow direction algorithm and the local slope are found to be effective parameters for explaining the distribution of the model structure. This study enables us to identify the key factors affecting the soil moisture distribution and, ultimately, to construct a realistic soil moisture map in a complex landscape such as the Gwangneung Supersite.
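
A minimal sketch of the univariate modelling pipeline outlined above (pre-treatment, candidate model structures, parameter estimation, diagnostic checking), assuming an ARIMA-type model from statsmodels and a synthetic soil moisture series in place of the Gwangneung measurements.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Placeholder soil moisture series; the paper uses sensor measurements from the
# Gwangneung hillslope.
rng = np.random.default_rng(3)
theta = 0.30 + np.cumsum(rng.normal(0, 0.001, 2000))   # slowly drifting water content
series = pd.Series(theta)

# Pre-treatment: difference the series to remove the trend component.
d_series = series.diff().dropna()

# Candidate model structures, compared here by AIC (the paper's pipeline also
# inspects ACF/PACF and residual behaviour when selecting candidates).
candidates = [(1, 0, 0), (2, 0, 0), (1, 0, 1)]
fits = {order: ARIMA(d_series, order=order).fit() for order in candidates}
best_order = min(fits, key=lambda o: fits[o].aic)
best = fits[best_order]
print("selected ARMA order:", best_order)

# Diagnostic checking: residuals should be white noise (Ljung-Box p > 0.05).
print(acorr_ljungbox(best.resid, lags=[10]))
```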

Application of an Automated Time Domain Reflectometry to Solute Transport Study at Field Scale: Transport Concept (시간영역 광전자파 분석기 (Automatic TDR System)를 이용한 오염물질의 거동에 관한 연구: 오염물질 운송개념)

  • Kim, Dong-Ju
    • Economic and Environmental Geology
    • /
    • v.29 no.6
    • /
    • pp.713-724
    • /
    • 1996
  • The time-series resident solute concentrations monitored at two field plots using the automated 144-channel TDR system of Kim (this issue) are used to investigate the dominant transport mechanism at the field scale. Two models, based on contradictory assumptions for describing solute transport in the vadose zone, are fitted to the measured mean breakthrough curves (BTCs): the deterministic one-dimensional convection-dispersion model (CDE) and the stochastic-convective lognormal transfer function model (CLT). In addition, moment analysis has been performed using the probability density functions (pdfs) of the travel time of the resident concentration. The moment analysis shows that the first and second time moments of the resident pdf are larger than those of the flux pdf. From the time moments, expressed as functions of the model parameters, the variance and dispersion of the resident solute travel times are derived. The relationship between the variance or dispersion of solute travel time and depth is found to be identical for the time-series flux and resident concentrations, and the two models have been tested on the basis of these relationships. However, due to the significant variation of transport properties with depth, this test led to unreliable results. Consequently, model performance has been evaluated based on the predictability of the time-series resident BTCs at other depths after calibration at the first depth. This evaluation leads to a clear conclusion: for both experimental sites the CLT model gives more accurate predictions than the CDE model, which suggests that solute transport in natural field soils is more likely governed by a stream-tube concept with correlated flow than by a complete-mixing concept. The poor prediction of the CDE model is attributed to its underestimation of solute spreading, which results in an overprediction of the peak concentration.
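
The moment analysis can be illustrated as follows: the sketch normalizes a (synthetic) resident-concentration BTC into a travel-time pdf, computes its first two time moments, and matches them to a lognormal travel-time distribution of the kind used by the CLT model. The curve and all parameters are placeholders, not the measured TDR data.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import lognorm

# Placeholder breakthrough curve: resident concentration sampled in time. In the
# study these curves come from the 144-channel TDR measurements at each depth.
t = np.linspace(0.1, 200.0, 400)                    # time [h]
conc = lognorm.pdf(t, s=0.6, scale=40.0)            # synthetic BTC shape

# Normalize the BTC into a travel-time probability density, then take moments.
pdf = conc / trapezoid(conc, t)
m1 = trapezoid(t * pdf, t)                          # first time moment (mean travel time)
m2 = trapezoid(t ** 2 * pdf, t)                     # second time moment
var = m2 - m1 ** 2
print(f"mean travel time = {m1:.1f} h, variance = {var:.1f} h^2")

# CLT-style summary: match the moments to a lognormal travel-time distribution.
sigma2 = np.log(1.0 + var / m1 ** 2)
mu = np.log(m1) - 0.5 * sigma2
print(f"fitted lognormal parameters: mu = {mu:.2f}, sigma = {np.sqrt(sigma2):.2f}")
```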


An Analysis of Determinants of Medical Cost Inflation using both Deterministic and Stochastic Models (의료비 상승 요인 분석)

  • Kim, Han-Joong;Chun, Ki-Hong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.22 no.4 s.28
    • /
    • pp.542-554
    • /
    • 1989
  • The skyrocketing inflation of medical costs has become a major health problem in most developed countries. Korea, which recently extended National Health Insurance to the entire population, is facing the same problem: the proportion of health expenditure to GNP increased from 3% to 4.8% during the last decade, which is remarkable given the rapid economic growth during that time. A few policy analysts began to raise cost containment as an agenda after recognizing the importance of medical cost inflation. In order to prepare an appropriate alternative for this agenda, it is necessary to find out the reasons for the cost inflation, and then to focus on the reasons that are controllable and whose control is socially desirable. This study is designed to articulate the theory of medical cost inflation through a literature review, to find out the reasons for cost inflation by analyzing aggregated data with a deterministic model, and finally to identify determinants of changes in both medical demand and service intensity, which are major reasons for cost inflation. The reasons for cost inflation are classified into cost-push inflation and demand-pull inflation; the former consists of increases in the price and intensity of services, while the latter is made up of consumer-derived demand and supplier-induced demand. We used time-series (1983-1987) and cross-sectional (over regions) health insurance data. The deterministic model reveals that an increase in service intensity is the major cause of inflation in the case of inpatient care, while greater utilization is the primary attribute in the case of physician visits. Multiple regression analysis shows that an increase in hospital beds is the leading explanatory variable for the increase in hospital care. It also reveals that the introduction of a deductible clause, an increase in hospital beds, and the degree of urbanization are statistically significant variables explaining physician visits. The results are consistent with the existing theory. The magnitude of service intensity is influenced by the level of co-payment, the proportion of the elderly, and increases in co-payment. In short, an increase in co-payment reduced utilization but induced greater intensity of services. We can conclude that strict fee regulation or an increase in the level of co-payment cannot be an effective measure for cost containment under the fee-for-service system, because providers can react against the regulation by inducing more services.
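
A small worked example of the deterministic decomposition idea: if expenditure is treated as the product of utilization, service intensity, and price, the log of its growth splits exactly into the three factors' contributions. The figures below are illustrative only and are not the study's insurance data, nor necessarily the exact factorization the authors used.

```python
import math

# Hypothetical base-year and later-year factor values (utilization, intensity, price).
base  = {"visits_per_capita": 4.0, "services_per_visit": 5.0, "price_per_service": 3000.0}
later = {"visits_per_capita": 5.2, "services_per_visit": 6.5, "price_per_service": 3300.0}

exp0 = math.prod(base.values())
exp1 = math.prod(later.values())
total_log_growth = math.log(exp1 / exp0)

# With a multiplicative model, log-growth splits exactly into factor contributions.
for key in base:
    share = math.log(later[key] / base[key]) / total_log_growth
    print(f"{key:20s} contributes {share:5.1%} of expenditure growth")
print(f"total expenditure growth: {exp1 / exp0 - 1:.1%}")
```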


A Model for the Optimal Mission Allocation of Naval Warship Based on Absorbing Markov Chain Simulation (흡수 마코프 체인 시뮬레이션 기반 최적 함정 임무 할당 모형)

  • Kim, Seong-Woo;Choi, Kyung-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.6
    • /
    • pp.558-565
    • /
    • 2021
  • The Republic of Korea Navy has deployed naval fleets in the East, West, and South seas to respond effectively to threats from North Korea and neighboring countries. However, it is difficult to allocate missions properly because of high uncertainties such as the year each ship entered service, the number of mission days already completed, armament capabilities, crew shift times, and the failure rate of the ship. As a result, the proportion of high-fatigue missions and alert duties increases, placing a growing burden on crews and warships. In this paper, we present a simulation model that optimizes the assignment of naval vessels' missions by using a continuous-time absorbing Markov chain, which is easy to model and can analyze complex phenomena with event rates that vary over time. The numerical analysis allows us to determine the optimal mission durations and warship quantities needed to maintain the target operating rates, and we find that allocating the optimal warships for each mission reduces unnecessary alerts and thereby crew fatigue and failures. The model is significant in that it can be extended to various fields, not only the assignment of duties but also the calculation of appropriate force requirements and inventory analysis.
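
A minimal sketch of an absorbing continuous-time Markov chain of the kind the paper builds on: transient mission states, one absorbing state, and expected times to absorption obtained from the transient block of the generator matrix. The states and rates are hypothetical, not the Navy's data.

```python
import numpy as np

# Generator matrix Q for states (patrol, alert, maintenance, overhaul).
# Rates are in events per month and are purely illustrative; rows sum to zero.
Q = np.array([
    [-0.50,  0.30,  0.15, 0.05],   # patrol
    [ 0.20, -0.60,  0.30, 0.10],   # alert (higher wear, higher failure rate)
    [ 0.40,  0.10, -0.52, 0.02],   # maintenance
    [ 0.00,  0.00,  0.00, 0.00],   # overhaul: absorbing state
])

T = Q[:3, :3]                      # transient-to-transient block
ones = np.ones(3)

# Expected time to absorption from each transient state: solve (-T) t = 1.
t_absorb = np.linalg.solve(-T, ones)

# Fundamental matrix N = (-T)^-1: N[i, j] is the expected time spent in
# transient state j before absorption, starting from state i.
N = np.linalg.inv(-T)

states = ["patrol", "alert", "maintenance"]
print("expected months until overhaul:", dict(zip(states, np.round(t_absorb, 1))))
print("months spent per state, starting from patrol:", np.round(N[0], 1))
```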

Water Quality Assessment and Turbidity Prediction Using Multivariate Statistical Techniques: A Case Study of the Cheurfa Dam in Northwestern Algeria

  • ADDOUCHE, Amina;RIGHI, Ali;HAMRI, Mehdi Mohamed;BENGHAREZ, Zohra;ZIZI, Zahia
    • Applied Chemistry for Engineering
    • /
    • v.33 no.6
    • /
    • pp.563-573
    • /
    • 2022
  • This work aimed to develop a new equation for turbidity (Turb) simulation and prediction using statistical methods based on principal component analysis (PCA) and multiple linear regression (MLR). For this purpose, water samples were collected monthly over a five-year period from the Cheurfa dam, an important reservoir in northwestern Algeria, and analyzed for 12 parameters: temperature (T°), pH, electrical conductivity (EC), turbidity (Turb), dissolved oxygen (DO), ammonium (NH4+), nitrate (NO3-), nitrite (NO2-), phosphate (PO43-), total suspended solids (TSS), biochemical oxygen demand (BOD5), and chemical oxygen demand (COD). The results revealed strong mineralization of the water and low dissolved oxygen (DO) content during the summer period, while high levels of TSS and Turb were recorded during rainy periods. In addition, the water was charged with phosphate (PO43-) throughout the study period. The PCA revealed ten factors, three of which were significant (eigenvalues > 1) and together explained 75.5% of the total variance. The F1 and F2 factors explained 36.5% and 26.7% of the total variance, respectively, and indicated anthropogenic pollution of domestic, agricultural, and industrial origin. The MLR turbidity simulation model exhibited a high coefficient of determination (R2 = 92.20%), indicating that 92.20% of the data variability can be explained by the model. TSS, DO, EC, NO3-, NO2-, and COD were the most significant contributing parameters (p-values << 0.05) in the turbidity prediction. The present study can support decision-making on the management and monitoring of the water quality of the dam, which is the primary source of drinking water in the region.
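
A brief sketch of the PCA-plus-MLR workflow the study describes, assuming scikit-learn and a synthetic water-quality table in place of the Cheurfa dam measurements; the eigenvalue > 1 rule and the regressor set mirror the abstract, while every number in the data is a placeholder.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Placeholder monthly water-quality table (60 months, 11 explanatory parameters);
# the study uses five years of measurements of 12 parameters from the dam.
rng = np.random.default_rng(4)
cols = ["T", "pH", "EC", "DO", "NH4", "NO3", "NO2", "PO4", "TSS", "BOD5", "COD"]
X = pd.DataFrame(rng.normal(size=(60, len(cols))), columns=cols)
turb = 0.8 * X["TSS"] - 0.3 * X["DO"] + 0.2 * X["COD"] + rng.normal(0, 0.2, 60)

# PCA on standardized variables: keep components with eigenvalue > 1 (Kaiser rule).
Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)
keep = pca.explained_variance_ > 1
print("significant factors:", int(keep.sum()),
      "explaining", round(float(pca.explained_variance_ratio_[keep].sum()), 3),
      "of the total variance")

# MLR: regress turbidity on the significant explanatory parameters.
regressors = X[["TSS", "DO", "EC", "NO3", "NO2", "COD"]]
mlr = LinearRegression().fit(regressors, turb)
print("R^2 =", round(mlr.score(regressors, turb), 3))
```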