• Title/Summary/Keyword: stochastic simulation.

Search Results: 786

Water Quality Assessment and Turbidity Prediction Using Multivariate Statistical Techniques: A Case Study of the Cheurfa Dam in Northwestern Algeria

  • ADDOUCHE, Amina;RIGHI, Ali;HAMRI, Mehdi Mohamed;BENGHAREZ, Zohra;ZIZI, Zahia
    • Applied Chemistry for Engineering / v.33 no.6 / pp.563-573 / 2022
  • This work aimed to develop a new equation for turbidity (Turb) simulation and prediction using statistical methods based on principal component analysis (PCA) and multiple linear regression (MLR). For this purpose, water samples were collected monthly over a five-year period from the Cheurfa dam, an important reservoir in Northwestern Algeria, and analyzed for 12 parameters, including temperature (T°), pH, electrical conductivity (EC), turbidity (Turb), dissolved oxygen (DO), ammonium (NH4+), nitrate (NO3-), nitrite (NO2-), phosphate (PO43-), total suspended solids (TSS), biochemical oxygen demand (BOD5), and chemical oxygen demand (COD). The results revealed strong mineralization of the water and low dissolved oxygen content during the summer period, while high levels of TSS and Turb were recorded during rainy periods; the water was also loaded with phosphate throughout the study period. The PCA revealed ten factors, three of which were significant (eigenvalues > 1) and together explained 75.5% of the total variance. Factors F1 and F2 explained 36.5% and 26.7% of the total variance, respectively, and indicated anthropogenic pollution of domestic, agricultural, and industrial origin. The MLR turbidity model exhibited a high coefficient of determination (R2 = 92.20%), meaning that 92.20% of the variability in the data can be explained by the model; TSS, DO, EC, NO3-, NO2-, and COD were the most significant contributing parameters (p values << 0.05) in turbidity prediction. The present study can support decision-making on the management and monitoring of the water quality of the dam, which is the primary source of drinking water in the region.
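
As a rough illustration of the PCA-plus-MLR workflow described in this abstract (not the authors' code), the following Python sketch standardizes the water-quality parameters, inspects the PCA eigenvalues, and fits a multiple linear regression for turbidity; the file name and column labels are hypothetical.

    # Hypothetical sketch of a PCA + multiple-linear-regression turbidity model.
    # The input file and column names are illustrative, not from the paper.
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("cheurfa_monthly.csv")          # hypothetical monthly samples
    params = ["T", "pH", "EC", "DO", "NH4", "NO3", "NO2",
              "PO4", "TSS", "BOD5", "COD"]           # predictors; Turb is the target

    X = StandardScaler().fit_transform(df[params])
    y = df["Turb"].values

    # Exploratory PCA: how many components have eigenvalues > 1?
    pca = PCA().fit(X)
    print("eigenvalues:", pca.explained_variance_)   # of the correlation matrix, since X is standardized

    # Multiple linear regression for turbidity prediction
    mlr = LinearRegression().fit(X, y)
    print("R^2 =", mlr.score(X, y))
    print(dict(zip(params, mlr.coef_)))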

A Stochastic Study for the Emergency Treatment of Carbon Monoxide Poisoning in Korea (일산화탄소중독(一酸化炭素中毒)의 진료대책(診療對策) 수립(樹立)을 위한 추계학적(推計學的) 연구(硏究))

  • Kim, Yong-Ik;Yun, Dork-Ro;Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health / v.16 no.1 / pp.135-152 / 1983
  • Emergency medical service is an important part of the health care delivery system, and the optimal allocation of resources and their efficient utilization are essential, since these conditions are prerequisites to prompt treatment, which in turn is crucial for saving lives and reducing the undesirable sequelae of the event. This study, taking the hyperbaric chamber for carbon monoxide poisoning as an example, develops a stochastic approach to the optimal allocation of such emergency medical facilities in Korea. In Korea, the hyperbaric chamber is used almost exclusively for the treatment of acute carbon monoxide poisoning, most of which occurs at home, since coal briquettes are used as domestic fuel by 69.6 per cent of the Korean population. The annual incidence rate of comatose and fatal carbon monoxide poisoning is estimated at 45.5 per 10,000 of the coal briquette-using population. It poses a serious public health problem and accounts for a large portion of emergency outpatients, especially in the winter season. The required number of hyperbaric chambers can be calculated by setting the level of the annual queueing rate, defined here as the proportion of queued patients among the total number of patients in a year. The rate is determined by the size of the coal briquette-using population, which generates a certain number of carbon monoxide poisoning patients according to the annual incidence rate, and by the number of hyperbaric chambers per hospital to which the patients are sent, assuming that no patients are referred among hospitals. Queueing occurs because of the conflicting events of the 'arrival' of the patients and the 'service' of the hyperbaric chambers. Here, the service time of a hyperbaric chamber is assumed to be fixed at sixty minutes, and the service discipline is 'first come, first served'. The arrival pattern of carbon monoxide poisoning is rather unique, because it usually occurs while people are in bed. The diurnal variation of carbon monoxide poisoning can hardly be formulated mathematically, so an empirical cumulative distribution of the probability of hourly patient arrivals was used in a Monte Carlo simulation to calculate the probability of queueing by the number of patients per day, for the cases of one, two, or three hyperbaric chambers assumed to be available per hospital. The incidence of carbon monoxide poisoning also shows strong seasonal variation, because of the four distinct seasons in Korea, so the number of patients per day could not be assumed to follow a Poisson distribution. Testing the fit of various rare-event distributions showed that the daily distribution of carbon monoxide poisoning fits the Polya-Eggenberger distribution well. With this model, we could forecast the number of poisonings per day by the size of the coal briquette-using population. By combining the probability of queueing given the number of patients per day with the probability distribution of the number of patients per day over a year, we can estimate the annual numbers of queued patients and of total patients by the number of hyperbaric chambers per hospital and by the size of the coal briquette-using population.
Setting the annual queueing rate at 5 per cent, the required number of hyperbaric chambers was calculated for each province and for the whole country, for treatment rates of 25, 50, 75, and 100 per cent, where the treatment rate is the proportion of patients treated by hyperbaric chamber among those who should be treated. The findings of the study were as follows. 1. The probability of the number of patients per day follows the Polya-Eggenberger distribution: $$P(X=\gamma)=\frac{\prod_{k=1}^{\gamma}\left[m+(k-1)\times 10.86\right]}{\gamma!}\times 11.86^{-\left(\frac{m}{10.86}+\gamma\right)}, \quad \gamma=1,2,\ldots,n$$ $$P(X=0)=11.86^{-m/10.86}, \quad \gamma=0$$ The hourly arrival pattern of the patients turned out to be bimodal, with a large peak at 7:00~8:00 a.m. and a small peak at 11:00~12:00 p.m. 2. With only one or two hyperbaric chambers installed per hospital, the annual queueing rate exceeds 5 per cent. Only in the case of three chambers does the rate reach 5 per cent, when the average number of patients per day is 0.481. 3. Accordingly, a hospital equipped with three hyperbaric chambers can serve populations of 166,485, 83,242, 55,495, and 41,620 at treatment rates of 25, 50, 75, and 100 per cent, respectively. 4. The required numbers of hyperbaric chambers are estimated at 483, 963, 1,441, and 1,923 for treatment rates of 25, 50, 75, and 100 per cent, so the shortages are 312, 791, 1,270, and 1,752, respectively. The author believes that the methodology developed in this study will also be applicable to resource-allocation problems for other kinds of emergency medical facilities.
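
The Polya-Eggenberger daily-count model quoted above can be evaluated directly from its product form. The short Python sketch below (not from the paper) computes the probability mass function using the constants 10.86 and 11.86 reported in the abstract, with m the mean number of patients per day; in the study, these daily-count probabilities are combined with Monte Carlo queueing probabilities to obtain the annual queueing rate.

    # Polya-Eggenberger pmf as reported in the abstract, with a = 10.86 and a + 1 = 11.86;
    # m is the mean number of poisoning cases per day (an input, e.g. 0.481).
    def polya_eggenberger_pmf(r, m, a=10.86):
        p = (1.0 + a) ** (-(m / a))              # P(X = 0)
        for k in range(1, r + 1):                # multiply in one factor per count
            p *= (m + (k - 1) * a) / (k * (1.0 + a))
        return p

    # Example: distribution of daily case counts when the mean is 0.481 per day
    m = 0.481
    probs = [polya_eggenberger_pmf(r, m) for r in range(6)]
    print(probs, "tail probability:", 1 - sum(probs))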


Optimization of Single-stage Mixed Refrigerant LNG Process Considering Inherent Explosion Risks (잠재적 폭발 위험성을 고려한 단단 혼합냉매 LNG 공정의 설계 변수 최적화)

  • Kim, Ik Hyun;Dan, Seungkyu;Cho, Seonghyun;Lee, Gibaek;Yoon, En Sup
    • Korean Chemical Engineering Research / v.52 no.4 / pp.467-474 / 2014
  • Preliminary design of a chemical process establishes economic feasibility through the calculation of mass and energy balances and makes it possible to produce the desired product under the given conditions. Because the materials, reactions, unit configuration, and operating conditions are fixed at this stage, the process acquires characteristics that can hardly be changed later; such characteristics may be very economical, but they may also imply various potential risk factors. It is therefore extremely important to design the process with both economics and safety in mind by integrating process simulation and quantitative risk analysis during the preliminary design stage. The target of this study is an LNG liquefaction process. Using Aspen HYSYS simulation and quantitative risk analysis, the design variables of the process were determined so as to minimize the inherent explosion risk and the operating cost. Instead of the built-in optimization tool of Aspen HYSYS, the optimization was performed with a stochastic optimization algorithm (Covariance Matrix Adaptation Evolution Strategy, CMA-ES) implemented through automation between Aspen HYSYS and Matlab. The study found that the key variable for enhancing inherent safety was the operating pressure of the mixed refrigerant. The inherent risk could be reduced by about 4~18% at the expense of increasing the operating cost by about 0.5~10%. As the operating cost increased, the absolute value of the risk decreased as expected, but the cost-effectiveness of the risk reduction also decreased. Integrating process simulation and quantitative risk analysis made it possible to design an inherently safer process, and the approach is expected to be useful for designing less risky processes, since risk factors can be monitored numerically during the preliminary process design stage.
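
To make the optimization loop concrete, here is a minimal ask/tell CMA-ES sketch using the Python cma package; the objective that trades off operating cost against explosion risk is a toy stand-in, since in the paper each candidate design is evaluated by an automated Aspen HYSYS simulation plus a quantitative risk analysis.

    # Minimal CMA-ES loop (cma package); the objective below is a hypothetical stand-in
    # for the HYSYS simulation + quantitative risk analysis used in the paper.
    import cma

    def cost_plus_risk(x):
        # x could represent design variables such as refrigerant flows and the
        # mixed-refrigerant operating pressure; here it is just a toy function.
        operating_cost = sum((xi - 1.0) ** 2 for xi in x)
        explosion_risk = 0.1 * sum(abs(xi) for xi in x)
        return operating_cost + explosion_risk       # weighted single objective

    x0 = [1.0, 1.0, 1.0, 1.0]                        # initial design variables
    es = cma.CMAEvolutionStrategy(x0, 0.3)           # 0.3 = initial step size sigma0
    while not es.stop():
        candidates = es.ask()                        # sample new design points
        es.tell(candidates, [cost_plus_risk(c) for c in candidates])
    print(es.result.xbest, es.result.fbest)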

On-Line Determination of Steady State in Simulation Output (시뮬레이션 출력의 안정상태 온라인 결정에 관한 연구)

  • 이영해;정창식;경규형
    • Proceedings of the Korea Society for Simulation Conference / 1996.05a / pp.1-3 / 1996
  • In simulation-based system analysis, the automation of experiments is an area in which much research and development is currently under way. Taking the simulation of computer and communication systems as an example, automated control of the experiments is required when simulations must be run for a large number of models. Unless the experimental procedure, including the number of replications, the run length, and the data-collection method, is automated, the time and human resources needed for the simulation experiments become considerable and the analysis of the output data also becomes difficult. To automate the experimental procedure while analyzing simulation output efficiently, the problem of removing the initial bias that arises in every simulation run must be solved first: only when the data used for output analysis are collected in the steady state, free of initial bias, can the real system be interpreted correctly. In practice, the most important and most difficult problem in simulation output analysis is to estimate the steady-state mean of the stochastic process formed by the output data, together with a confidence interval (c.i.) for that mean. The information contained in a confidence interval tells the decision maker how accurately the mean can be estimated. However, because the output data obtained from a single simulation run are generally nonstationary and autocorrelated, traditional statistical techniques cannot be applied directly, and simulation output-analysis techniques are used to resolve this problem. This paper proposes two new techniques for finding the truncation point needed to remove the initial bias: one based on the Euclidean distance (ED) and one based on the backpropagation neural network (BNN) algorithm, which is currently widely used for pattern classification. Unlike most existing techniques, these methods require no pilot runs and can determine the truncation point online during a single simulation run. Existing work on truncation-point determination is as follows. Conway's rule takes the current observation as the truncation point if it is neither the maximum nor the minimum of the subsequent observations; by the structure of the algorithm, online determination is impossible. Whereas Conway's rule cannot run online, the Modified Conway Rule (MCR) can, because it takes the current observation as the truncation point if it is neither the maximum nor the minimum of the preceding observations. The Crossings-of-the-Mean Rule (CMR) uses the cumulative mean and counts how many times the observations cross this mean from above or below; a crossing count must be specified, and in general a chosen count is not applicable regardless of the system. The Cumulative-Mean Rule (CMR2) plots the grand cumulative mean of output data obtained from several pilot runs and determines the steady-state point visually; because it uses the cumulative mean of the means obtained from several simulation runs, online determination of the truncation point is impossible, and the analyst must make an arbitrary decision from the graph. Welch's Method (WM) uses a Brownian-bridge statistic, exploiting the property that, as n approaches infinity, it converges to the Brownian bridge distribution; batches are formed from the simulation output data and a single batch is used as the sample. This technique has a complex algorithm and the drawback that values must be estimated. The Law-Kelton Method (LKM) is based on regression theory: after the simulation has ended, a regression line is fitted to the cumulative-mean data, and if the null hypothesis that the slope of the regression line is zero is accepted, that point is taken as the truncation point. Because the data are used in the reverse of the order in which they were collected, after the simulation has ended, online use is impossible. Welch's Procedure (WP) requires the truncation point to be determined visually from moving averages of data collected over five or more simulation runs and uses an iterative deletion procedure, so online determination is impossible; in addition, the window size for the moving average must be chosen. As reviewed above, the existing methods are weak from the standpoint of online truncation-point determination during a single simulation run. Moreover, current commercial simulation software leaves the truncation point to be chosen arbitrarily by the analyst, so it cannot be determined accurately and quantitatively for the system being studied. When the user chooses the truncation point arbitrarily, not only is it difficult to deal with the initial-bias problem effectively, but the chances increase of deleting far more data than necessary or of deleting too little to remove the initial bias. Furthermore, most of the existing methods require pilot runs to find the truncation point, that is, simulation runs whose only purpose is to locate the steady-state point; since these runs are not used for output analysis, the loss of time is considerable.
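
As a concrete point of reference for the rules surveyed above, here is a minimal Python sketch of the Modified Conway Rule (MCR), which can run online during a single replication because it only compares the current observation with the range of the observations already seen; the transient-plus-noise test sequence is invented, and this is not the ED or BNN method proposed in the paper.

    # Modified Conway Rule (MCR) for online truncation-point detection:
    # stop at the first observation that is neither the running maximum nor the
    # running minimum of the data seen so far. Illustrative only.
    def mcr_truncation_point(stream):
        lo = hi = None
        for i, x in enumerate(stream):
            if lo is None:                     # first observation initializes the range
                lo = hi = x
                continue
            if lo < x < hi:                    # strictly inside the observed range
                return i                       # index of the proposed truncation point
            lo, hi = min(lo, x), max(hi, x)
        return None                            # no truncation point found in the stream

    # Example on an invented transient-plus-noise sequence
    import random
    random.seed(1)
    data = [10 * (0.8 ** t) + random.gauss(0, 0.5) for t in range(200)]
    print("truncate at index:", mcr_truncation_point(data))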


Efficient Structural Safety Monitoring of Large Structures Using Substructural Identification (부분구조추정법을 이용한 대형구조물의 효율적인 구조안전도 모니터링)

  • 윤정방;이형진
    • Journal of the Earthquake Engineering Society of Korea / v.1 no.2 / pp.1-15 / 1997
  • This paper presents substructural identification methods for the assessment of local damage in large and complex structural systems. For this purpose, an auto-regressive moving-average model with stochastic input (ARMAX) is derived for a substructure to process measurement data contaminated by noise. Using the substructural methods, the number of unknown parameters in each identification can be significantly reduced, so the convergence and accuracy of the estimation are improved. For the damage assessment, a damage index is defined at each element as the ratio of the current stiffness to its baseline value, and the index is estimated indirectly from the system matrices obtained by the substructural identification. To demonstrate the proposed techniques, several simulation and experimental example analyses are carried out on structural models of a 2-span truss, a 3-span continuous beam, and a 3-story building. The results indicate that the proposed substructural identification and damage estimation methods are effective and efficient for local damage estimation in complex structures.
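
To give a rough sense of the two ingredients, an ARMAX-type model fitted to noisy substructure measurements and a stiffness-ratio damage index, the sketch below uses statsmodels' ARIMA with an exogenous regressor as a generic stand-in estimator on invented data; it is illustrative only and is not the identification procedure of the paper.

    # Illustrative only: fit an ARMAX-type model to a synthetic substructure response
    # with the measured interface force as exogenous input, then form a stiffness-ratio
    # damage index. Data, model orders, and numbers are invented.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    u = rng.standard_normal(2000)                    # measured input at the interface
    y = np.zeros(2000)
    for t in range(2, 2000):                         # synthetic 2nd-order response + noise
        y[t] = 1.6 * y[t - 1] - 0.7 * y[t - 2] + 0.5 * u[t - 1] + 0.1 * rng.standard_normal()

    res = ARIMA(y, exog=u, order=(2, 0, 1)).fit()    # AR(2), MA(1), exogenous input
    print(res.arparams, res.maparams)

    # Damage index per element: ratio of identified stiffness to its baseline value
    def damage_index(k_current, k_baseline):
        return k_current / k_baseline                # values < 1 indicate stiffness loss

    print(damage_index(8.5e6, 1.0e7))                # e.g. a 15% stiffness reduction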


Stability of suspension bridge catwalks under a wind load

  • Zheng, Shixiong;Liao, Haili;Li, Yongle
    • Wind and Structures / v.10 no.4 / pp.367-382 / 2007
  • A nonlinear numerical method was developed to assess the stability of suspension bridge catwalks under wind load. A section-model wind tunnel test was used to obtain the catwalk's aerostatic coefficients, from which the displacement-dependent wind loads were derived. The stability of a suspension bridge catwalk was then analyzed on the basis of the geometric nonlinear behavior of the structure. In addition, a full-model test was conducted on a catwalk spanning 960 m. A comparison of the displacement values between the test and the numerical simulation shows that a numerical method based on a section-model test can effectively and accurately evaluate the stability of a catwalk. A case study addresses the stability of the catwalk of the Runyang Yangtze suspension bridge, whose main span is 1490 m. Wind can generally attack the structure from any direction. Whenever the wind comes at a yaw angle, six wind load components act on the catwalk; if the yaw angle is zero, the wind is normal to the catwalk (called normal wind) and the six load components reduce to three. Three aerostatic coefficients of the catwalk can be obtained through a section-model test with traditional test equipment, but all six aerostatic coefficients must be acquired with the aid of special section-model test equipment. A nonlinear numerical method was used to study the stability of a catwalk under yaw wind, taking into account the six components of the displacement-dependent wind load and the geometric nonlinearity of the catwalk. The results show that when the wind attacks at a slight yaw angle, the critical velocity that induces static instability of the catwalk may be lower than that under normal wind; however, as the yaw angle becomes larger, the critical velocity increases. In the atmospheric boundary layer the wind is turbulent and the velocity history is a random time history, so the effects of turbulent wind on the stability of the catwalk are also assessed. The wind velocity fields are regarded as stationary Gaussian stochastic processes, which can be simulated by a spectral representation method. The nonlinear finite-element model established above and the Newmark integration method were used to calculate the wind-induced buffeting responses. The results confirm that the turbulent character of the wind has little influence on the stability of the catwalk.
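
The spectral representation mentioned at the end of the abstract is the classical sum-of-cosines construction for a stationary Gaussian process. A minimal single-point version is sketched below; the spectrum is a placeholder shape, not the wind spectrum or coherence model used in the paper.

    # Spectral representation of a stationary Gaussian process (single point):
    # v(t) = sum_k sqrt(2 * S(f_k) * df) * cos(2*pi*f_k*t + phi_k), phi_k ~ U(0, 2*pi).
    # The spectrum below is a placeholder, not the wind spectrum used in the paper.
    import numpy as np

    def simulate_gaussian_process(spectrum, f_max=5.0, n_freq=1024, dt=0.05, n_steps=4000):
        rng = np.random.default_rng(1)
        df = f_max / n_freq
        f = (np.arange(n_freq) + 0.5) * df               # frequency grid (Hz)
        amp = np.sqrt(2.0 * spectrum(f) * df)            # component amplitudes
        phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)      # independent random phases
        t = np.arange(n_steps) * dt
        v = np.sum(amp[:, None] * np.cos(2 * np.pi * f[:, None] * t + phi[:, None]), axis=0)
        return t, v

    toy_spectrum = lambda f: 1.0 / (1.0 + f) ** (5.0 / 3.0)   # placeholder spectral shape
    t, v = simulate_gaussian_process(toy_spectrum)
    print(v.mean(), v.std())                                  # ~0 mean; variance ~ integral of S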

Fracture Network Analysis of Groundwater Flow in the Vicinity of a Large Cavern (분리열극개념을 이용한 지하공동주변의 지하수유동해석)

  • 강병무
    • The Journal of Engineering Geology / v.3 no.2 / pp.125-148 / 1993
  • Groundwater flow in fractured rock masses is controlled by the combined effects of the fracture network, the state of geostatic stress, and crossflow between the fractures and the rock matrix. Furthermore, the scale-dependent, anisotropic behavior of the hydraulic parameters results mainly from the irregular patterns of the fracture system, which cannot be evaluated properly with the methods currently available. The basic assumption of the discrete fracture network model is that groundwater flows only along discrete fractures, so the flow paths in the rock mass are determined by the geometric patterns of interconnected fractures. The spatial characteristics of the fracture distribution and the fracture hydraulic parameters are represented as probability density functions in stochastic simulation. Discrete fracture network modelling was attempted to characterize the groundwater flow in the vicinity of existing large caverns located in Wonjeong-ri, Poseung-myon, Pyeungtaek-kun. Fracture data from a $1\textrm{km}^2$ area were analysed. The results indicate that the fractures evaluated from an equal-area projection can be grouped into 6 sets and that the fracture sizes are lognormally distributed. Set 1 shows the highest conductive fracture density, 0.37. The groundwater inflow into a cavern was calculated as 29 ton/day with a fracture transmissivity of $10^{-8}\textrm{m}^2/s$; when the fracture transmissivity increases by an order of magnitude, the estimated inflow increases dramatically, to 651 ton/day. One of the great advantages of this model is forward modelling, which provides a thinking tool for site characterization and allows qualitative as well as quantitative data to be handled.
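
The stochastic generation step of a discrete fracture network, drawing fracture locations, orientations, trace lengths, and transmissivities from probability density functions, can be illustrated in a few lines; all distribution parameters below are invented placeholders, not the values fitted for the Wonjeong-ri site.

    # Toy 2-D discrete-fracture-network generator: fracture centres from a Poisson
    # process, lognormal trace lengths, one orientation set, lognormal transmissivity.
    # All parameters are invented placeholders, not the fitted site values.
    import numpy as np

    rng = np.random.default_rng(42)
    domain = 1000.0                                    # side length of the square domain (m)
    density = 0.0004                                   # fractures per m^2 for this set
    n = rng.poisson(density * domain ** 2)             # number of fractures in the domain

    centres = rng.uniform(0.0, domain, size=(n, 2))
    lengths = rng.lognormal(mean=2.5, sigma=0.8, size=n)             # lognormal trace lengths (m)
    strikes = np.deg2rad(rng.normal(loc=40.0, scale=10.0, size=n))   # one orientation set
    transmissivity = rng.lognormal(mean=np.log(1e-8), sigma=1.0, size=n)  # m^2/s

    # End points of each fracture trace, ready for an intersection/connectivity search
    dx = 0.5 * lengths * np.cos(strikes)
    dy = 0.5 * lengths * np.sin(strikes)
    endpoints = np.stack([centres[:, 0] - dx, centres[:, 1] - dy,
                          centres[:, 0] + dx, centres[:, 1] + dy], axis=1)
    print(n, "fractures;", endpoints.shape)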


A Development of Generalized Coupled Markov Chain Model for Stochastic Prediction on Two-Dimensional Space (수정 연쇄 말콥체인을 이용한 2차원 공간의 추계론적 예측기법의 개발)

  • Park Eun-Gyu
    • Journal of Soil and Groundwater Environment / v.10 no.5 / pp.52-60 / 2005
  • The conceptual model of an under-sampled study area involves a great amount of uncertainty. In this study, we investigate the applicability of a Markov chain model in the spatial domain as a tool for minimizing the uncertainty arising from the lack of data. A new formulation is developed that generalizes the previous two-dimensional coupled Markov chain model and is versatile enough to fit any computational sequence. Furthermore, the computational algorithm is improved to utilize more conditioning information and to reduce artifacts, such as artificial parcel inclination, caused by sequential computation. The generalized 2D coupled Markov chain (GCMC) is tested by applying it to a hypothetical soil map to evaluate its appropriateness as a substitute for conventional geostatistical models. Compared to sequential indicator simulation (SIS), the simulation results from the GCMC show lower entropy at the boundaries of indicators, which is closer to real soil maps. For under-sampled indicators, however, the GCMC under-estimates the presence of the indicators, a common trait of other geostatistical models as well. To improve this under-estimation, further study on incorporating data fusion (or assimilation) into the GCMC is required.
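
For readers unfamiliar with coupled Markov chains on a lattice, the sketch below simulates the basic two-dimensional coupled chain, in which each cell's state is drawn from the product of a horizontal and a vertical transition probability conditioned on its left and upper neighbours; the transition matrices are invented, and this fixed raster sweep is the conventional scheme, not the generalized, better-conditioned algorithm developed in the paper.

    # Basic 2-D coupled Markov chain: P(state of cell (i,j) = k | left = l, up = u)
    # is proportional to Ph[l, k] * Pv[u, k]. Fixed raster sweep; the transition
    # matrices are invented, and this is not the paper's generalized algorithm.
    import numpy as np

    rng = np.random.default_rng(7)
    Ph = np.array([[0.8, 0.1, 0.1],                  # horizontal transition probabilities
                   [0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8]])
    Pv = Ph.copy()                                   # vertical transitions (same here)

    ny, nx, ns = 60, 100, 3
    grid = np.zeros((ny, nx), dtype=int)
    grid[0, :] = rng.integers(0, ns, nx)             # simple random boundary row/column
    grid[:, 0] = rng.integers(0, ns, ny)

    for i in range(1, ny):
        for j in range(1, nx):
            w = Ph[grid[i, j - 1]] * Pv[grid[i - 1, j]]   # couple left and upper neighbours
            grid[i, j] = rng.choice(ns, p=w / w.sum())

    print(np.bincount(grid.ravel(), minlength=ns) / grid.size)  # simulated class proportions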

An Analysis of the Efficiency of Agricultural Business Corporations Using the Stochastic DEA Model (농업생산법인의 경영효율성 분석: 부트스트래핑 기법 활용)

  • Lee, Sang-Ho;Kim, Chung-Sil;Kwon, Kyung-Sup
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.6 no.4 / pp.137-152 / 2011
  • The purpose of this study is to estimate the efficiency of agricultural business corporations using Data Envelopment Analysis (DEA). The proposed method employs a bootstrapping approach that generates efficiency estimates through a Monte Carlo re-sampling process. The technical efficiency, pure technical efficiency, and scale efficiency of the corporations are 0.749, 0.790, and 0.948, respectively. Among the 692 agricultural business corporations, 539 (77.9%) were classified as Increasing Returns to Scale (IRS), 108 (15.6%) as Constant Returns to Scale (CRS), and 45 (6.5%) as Decreasing Returns to Scale (DRS). Since under IRS an increase in input produces a proportionally larger increase in output, an increase in input factors such as new investment is called for. The Tobit model suggests that the type of corporation, the capital level, and the period of operation affect the efficiency score more than other factors, and the positive coefficients of the capital-level and period-of-operation variables indicate that the efficiency score increases as capital level and period of operation increase.
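
The bootstrapped-DEA idea, re-estimating efficiency scores on resampled data to gauge their sampling variability, can be sketched compactly. The code below solves the input-oriented CCR envelopment LP with scipy and wraps it in a naive resampling loop; the data are random placeholders, and the paper's procedure follows the smoothed bootstrap of the DEA literature rather than this simplified version.

    # Simplified sketch: input-oriented CCR DEA efficiencies via linear programming,
    # wrapped in a naive bootstrap loop. Random placeholder data.
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y):
        """Input-oriented CCR efficiency of each DMU. X: (n, m) inputs, Y: (n, s) outputs."""
        n, m = X.shape
        s = Y.shape[1]
        scores = np.empty(n)
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]                   # minimise theta; vars = [theta, lambdas]
            A_in = np.hstack([-X[o][:, None], X.T])       # sum_j lam_j*x_ij <= theta*x_io
            A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # sum_j lam_j*y_rj >= y_ro
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          bounds=[(None, None)] + [(0, None)] * n, method="highs")
            scores[o] = res.x[0]
        return scores

    rng = np.random.default_rng(3)
    X = rng.uniform(1.0, 10.0, (30, 2))                   # placeholder inputs
    Y = rng.uniform(1.0, 10.0, (30, 1))                   # placeholder outputs
    print("mean efficiency:", dea_ccr_input(X, Y).mean())

    # Naive bootstrap of the mean efficiency (the smoothed bootstrap is more involved)
    boot_means = []
    for _ in range(50):
        idx = rng.integers(0, 30, 30)                     # resample DMUs with replacement
        boot_means.append(dea_ccr_input(X[idx], Y[idx]).mean())
    print("95% interval:", np.percentile(boot_means, [2.5, 97.5]))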


Two-phases Hybrid Approaches and Partitioning Strategy to Solve Dynamic Commercial Fleet Management Problem Using Real-time Information (실시간 정보기반 동적 화물차량 운용문제의 2단계 하이브리드 해법과 Partitioning Strategy)

  • Kim, Yong-Jin
    • Journal of Korean Society of Transportation / v.22 no.2 s.73 / pp.145-154 / 2004
  • The growing demand for customer-responsive, made-to-order manufacturing is stimulating the need for improved dynamic decision-making processes in commercial fleet operations, and the rapid growth of electronic commerce over the internet also requires advanced, precise real-time operation of vehicle fleets. Accompanying these demand-side pressures, the growing availability of technologies such as AVL (Automatic Vehicle Location) systems and continuous two-way communication devices is driving developments on the supply side. These technologies enable the dispatcher to identify the current location of trucks and to communicate with drivers in real time, giving the carrier fleet dispatcher the opportunity to respond dynamically to changes in demand, driver and vehicle availability, and traffic network conditions. This research investigates key aspects of real-time dynamic routing and scheduling problems in fleet operation, particularly a truckload pickup-and-delivery problem under various settings in which information on stochastic demands is revealed on a continuous basis, i.e., as the scheduled routes are executed. The most promising solution strategies for dealing with this real-time problem are analyzed and integrated. Furthermore, this research develops, analyzes, and implements hybrid algorithms for solving the problem, which combine a fast local heuristic approach with an optimization-based approach. In addition, various partitioning algorithms able to handle large vehicle fleets are developed based on a 'divide and conquer' technique. Simulation experiments are developed and conducted to evaluate the performance of these algorithms.
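
As a flavour of the fast local-heuristic phase of such a hybrid approach (a generic illustration, not the algorithms developed in the paper), the sketch below assigns each newly revealed truckload request to the idle vehicle with the smallest empty-travel distance; a real rolling-horizon scheme would periodically re-optimize these assignments.

    # Generic greedy dispatch heuristic for a dynamic truckload pickup-and-delivery
    # setting: each new request goes to the vehicle with the least deadhead distance.
    # Illustration only; not the paper's hybrid or partitioning algorithms.
    import math
    import random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    random.seed(0)
    vehicles = {v: (random.uniform(0, 100), random.uniform(0, 100)) for v in range(5)}
    requests = [((random.uniform(0, 100), random.uniform(0, 100)),     # pickup location
                 (random.uniform(0, 100), random.uniform(0, 100)))     # delivery location
                for _ in range(12)]

    total_empty = 0.0
    for pickup, delivery in requests:                 # requests revealed one at a time
        v = min(vehicles, key=lambda k: dist(vehicles[k], pickup))
        total_empty += dist(vehicles[v], pickup)      # empty (deadhead) travel
        vehicles[v] = delivery                        # vehicle ends at the delivery point
    print("total empty distance:", round(total_empty, 1))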