• Title/Summary/Keyword: Deterministic models

Comparison of Benefit Estimation Models in Cost-Benefit Analysis: A Case of Chronic Hypertension Management Programs

  • Lim, Ji-Young; Kim, Mi-Ja; Park, Chang-Gi; Kim, Jung-Yun
    • Journal of Korean Academy of Nursing / v.41 no.6 / pp.750-757 / 2011
  • Purpose: Cost-benefit analysis is one of the most commonly used economic evaluation methods, which helps inform decision makers of the economic value of a program. However, the selection of a correct benefit estimation method remains critical for accurate cost-benefit analysis. This paper compared benefit estimations among three different benefit estimation models. Methods: Data from community-based chronic hypertension management programs in a city in South Korea were used. Three different benefit estimation methods were compared. The first was a standard deterministic estimation model; the second, a repeated-measures deterministic estimation model; and the third, a transitional probability estimation model. Results: The estimated net benefits of the three methods were $1,273.01, -$3,749.42, and -$5,122.55, respectively. Conclusion: The transitional probability estimation model produced the most accurate and realistic benefit estimate, as it traced possible paths of changing status between time points and accounted for both positive and negative benefits.
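
Below is a minimal sketch (not the paper's data) of how a transitional-probability model traces status changes between time points and accumulates both positive and negative benefits; the transition probabilities, per-state benefits, and program cost are made-up assumptions.

```python
# Hedged illustration: a transitional-probability benefit estimate.
# All numbers (transition matrix, per-state benefits, cost) are hypothetical.
import numpy as np

states = ["controlled", "uncontrolled"]
# P[i][j]: probability of moving from state i to state j between time points
P = np.array([[0.85, 0.15],
              [0.40, 0.60]])
start = np.array([0.3, 0.7])                  # initial distribution over states
benefit_per_state = np.array([120.0, -60.0])  # avoided cost (+) or extra cost (-) per person

program_cost = 150.0
n_periods = 4

dist = start.copy()
total_benefit = 0.0
for _ in range(n_periods):
    dist = dist @ P                            # trace possible paths of changing status
    total_benefit += dist @ benefit_per_state  # accumulate positive and negative benefits

net_benefit = total_benefit - program_cost
print(f"net benefit per participant: {net_benefit:.2f}")
```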

The Effect of Deterministic and Stochastic VTG Schemes on the Application of Backpropagation of Multivariate Time Series Prediction (시계열예측에 대한 역전파 적용에 대한 결정적, 추계적 가상항 기법의 효과)

  • Jo, Tae-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.535-538 / 2001
  • Since the 1990s, many studies have shown that connectionist models, such as backpropagation, recurrent networks, and RBF (Radial Basis Function) networks, outperform the traditional models MA (Moving Average), AR (Auto Regressive), and ARIMA (Auto Regressive Integrated Moving Average) in time series prediction. Neural approaches to time series prediction require a sufficiently long history of measurements to generate enough training patterns; the more training patterns, the better the generalization of the MLP. Research on schemes for generating artificial training patterns and adding them to the original ones motivated the development of VTG schemes in 1996. A virtual term is an estimated measurement X(t+0.5) between X(t) and X(t+1), while the given measurements in the series are called actual terms. VTG (Virtual Term Generation) is the process of estimating X(t+0.5), and VTG schemes are the techniques for estimating virtual terms. In this paper, alternative VTG schemes to those proposed in 1996 are proposed and applied to multivariate time series prediction. The VTG schemes proposed in 1996 are called deterministic VTG schemes, while the alternatives are called stochastic VTG schemes in this paper.
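
A minimal sketch of the virtual-term idea under a plain reading of the abstract: a deterministic scheme interpolates X(t+0.5) as the midpoint of adjacent actual terms, while a stochastic scheme perturbs that estimate with noise. The series and the noise scale are illustrative assumptions, not the paper's actual VTG schemes.

```python
# Hedged sketch: generating virtual terms X(t+0.5) between actual terms.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 1.4, 1.1, 1.8, 2.0])    # actual terms X(t), illustrative series

# Deterministic VTG: midpoint between consecutive actual terms.
virtual_det = (x[:-1] + x[1:]) / 2.0

# Stochastic VTG: midpoint plus zero-mean noise (the scale is an assumption).
sigma = 0.5 * np.abs(x[1:] - x[:-1])
virtual_sto = virtual_det + rng.normal(0.0, sigma)

# Interleave actual and virtual terms to enlarge the training series.
augmented = np.empty(2 * len(x) - 1)
augmented[0::2] = x
augmented[1::2] = virtual_det              # or virtual_sto for the stochastic scheme
print(augmented)
```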

Estimating the mean number of objects in M/H2/1 model for web service

  • Lee, Yongjin
    • International journal of advanced smart convergence / v.11 no.3 / pp.1-6 / 2022
  • In this paper, we estimate the mean number of objects in the M/H2/1 model for web service when the mean object size in the M/H2/1 model is equal to that of the M/G/1/PS and M/BP/1 models. To this end, we use the mean object size obtained by assuming that the mean latency of the deterministic model is equal to that of the M/H2/1, M/G/1/PS, and M/BP/1 models, respectively. Computational experiments show that if the shape parameter of the M/BP/1 model is 1.1 and the system load is greater than 0.35, the mean number of objects in the M/H2/1 model when its mean object size matches that of the M/G/1/PS model is almost equal to the mean number of objects when its mean object size matches that of the M/BP/1 model. In addition, as the upper limit of the M/BP/1 model increases, the number of objects in the M/H2/1 model converges to one, which increases latency. These results suggest that it is efficient to use small objects in a web service environment.
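
A minimal sketch of the standard queueing formulas behind such a comparison: the Pollaczek-Khinchine mean number in system for an M/H2/1 (FCFS) queue, and the distribution-insensitive result for M/G/1/PS. The arrival rate and H2 service parameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: mean number in system for M/H2/1 (FCFS, Pollaczek-Khinchine)
# versus M/G/1/PS (insensitive to the service-time distribution).

def h2_moments(p1, mu1, mu2):
    """First two moments of a two-phase hyperexponential service time."""
    p2 = 1.0 - p1
    m1 = p1 / mu1 + p2 / mu2
    m2 = 2.0 * (p1 / mu1**2 + p2 / mu2**2)
    return m1, m2

lam = 0.7                       # arrival rate (assumption)
p1, mu1, mu2 = 0.4, 2.0, 0.8    # H2 service parameters (assumption)

m1, m2 = h2_moments(p1, mu1, mu2)
rho = lam * m1                  # system load
cs2 = m2 / m1**2 - 1.0          # squared coefficient of variation of service time

L_mh21 = rho + rho**2 * (1.0 + cs2) / (2.0 * (1.0 - rho))  # P-K mean number in system
L_ps = rho / (1.0 - rho)                                    # M/G/1/PS mean number in system

print(f"rho={rho:.3f}  L(M/H2/1)={L_mh21:.3f}  L(M/G/1/PS)={L_ps:.3f}")
```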

Prediction of Strong Ground Motion in Moderate-Seismicity Regions Using Deterministic Earthquake Scenarios

  • Kang, Tae-Seob
    • Journal of the Earthquake Engineering Society of Korea / v.11 no.4 / pp.25-31 / 2007
  • For areas such as the Korean Peninsula, which have moderate seismic activity but no available records of strong ground motion, synthetic seismograms can be used to evaluate ground motion without waiting for a strong earthquake. Such seismograms represent the estimated ground motions expected from a set of possible earthquake scenarios. Local site effects are especially important in assessing the seismic hazard and possible ground motion scenarios for a specific fault. The earthquake source and rupture dynamics can be described as a two-step process of rupture initiation and front propagation controlled by a frictional sliding mechanism. The seismic wavefield propagates through heterogeneous geological media and finally undergoes near-surface modulations such as amplification or deamplification. This is a complex system in which various scales of physical phenomena are integrated. A unified approach incorporates multi-scale problems of dynamic rupture, radiated wave propagation, and site effects into an all-in-one model using a three-dimensional, fourth-order, staggered-grid, finite-difference method. The method explains strong ground motions as products of complex systems that can be modified according to a variety of fine-scale rupture scenarios and friction models. A series of such deterministic earthquake scenarios can shed light on the kind of damage that would result and where it would be located.
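
A minimal 1D, second-order sketch of the velocity-stress staggered-grid finite-difference update pattern; the paper's method is three-dimensional and fourth-order, and the grid, material properties, and source below are illustrative assumptions.

```python
# Hedged 1D sketch of staggered-grid velocity-stress finite differences.
# The paper's scheme is 3D and fourth-order; this shows only the core update pattern.
import numpy as np

nx, nt = 400, 800
dx, dt = 10.0, 1.0e-3                  # grid spacing [m], time step [s] (assumptions)
rho = np.full(nx, 2500.0)              # density [kg/m^3]
mu = np.full(nx, 2500.0 * 2000.0**2)   # shear modulus from Vs = 2000 m/s

v = np.zeros(nx)                       # particle velocity at integer grid points
s = np.zeros(nx - 1)                   # stress at half grid points (staggered)

for it in range(nt):
    # update velocity from the spatial derivative of stress
    v[1:-1] += dt / rho[1:-1] * (s[1:] - s[:-1]) / dx
    # point source: inject a smooth velocity pulse near the middle of the grid
    v[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)
    # update stress from the spatial derivative of velocity
    s += dt * mu[:-1] * (v[1:] - v[:-1]) / dx

print("peak ground velocity proxy:", np.max(np.abs(v)))
```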

Deterministic Private Matching with Perfect Correctness (정확성을 보장하는 결정적 Private Matching)

  • Hong, Jeong-Dae; Kim, Jin-Il; Cheon, Jung-Hee; Park, Kun-Soo
    • Journal of KIISE: Computer Systems and Theory / v.34 no.10 / pp.502-510 / 2007
  • Private matching is the problem of computing the intersection of the private datasets of two parties. One could envision using private matching for insurance fraud detection systems, do-not-fly lists, medical databases, and many other applications. In 2004, Freedman et al. [1] introduced a probabilistic solution for this problem and extended it to the malicious adversary model and to multi-party computation. In this paper, we propose a new deterministic protocol for private matching with perfect correctness. We apply this technique to the adversary models, achieving more reliable and faster computation.
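
A minimal sketch of the polynomial-evaluation mechanism behind Freedman et al.'s probabilistic private matching, shown here in the clear: the client's set is encoded as the roots of a polynomial, and the server blinds its evaluations so only intersection elements survive. In the real protocol the evaluation happens under homomorphic encryption, which is what makes it private; the modulus and the example sets below are illustrative assumptions.

```python
# Hedged sketch of the polynomial mechanism used in private matching,
# WITHOUT the homomorphic encryption that provides privacy in the real protocol.
import random

P = 2**61 - 1  # prime modulus (illustrative choice)

def poly_with_roots(items, p):
    """Coefficients (constant term first) of Q(x) = prod (x - xi) mod p."""
    coeffs = [1]
    for xi in items:
        new = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k] = (new[k] - xi * c) % p      # contribution of -xi * c * x^k
            new[k + 1] = (new[k + 1] + c) % p   # contribution of c * x^(k+1)
        coeffs = new
    return coeffs

def eval_poly(coeffs, y, p):
    """Horner evaluation of Q(y) mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * y + c) % p
    return acc

client_items = {3, 17, 42}
server_items = {5, 17, 99}

Q = poly_with_roots(client_items, P)
matches = []
for y in server_items:
    r = random.randrange(1, P)
    out = (r * eval_poly(Q, y, P) + y) % P  # equals y exactly when Q(y) == 0
    if out == y:
        matches.append(y)
print(matches)  # -> [17], the intersection element
```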

Learning Ability of Deterministic Boltzmann Machine with Non-Monotonic Neurons in Hidden Layer (은닉층에 비단조 뉴런을 갖는 결정론적 볼츠만 머신의 학습능력에 관한 연구)

  • 박철영
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.6 / pp.505-509 / 2001
  • In this paper, we evaluate the learning ability of a non-monotonic DBM (Deterministic Boltzmann Machine) network through numerical simulations. The simulation results show that the proposed system has higher performance than the monotonic DBM network model. The non-monotonic DBM network also shows an interesting result: the network itself adjusts the number of hidden-layer neurons. A DBM network can be realized with fewer components than other neural network models. These results enhance the utility of non-monotonic neurons in the large-scale integration of neuro-chips.
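
A minimal sketch of a mean-field (deterministic) Boltzmann machine settling step, contrasting a monotonic tanh hidden activation with a non-monotonic one; the weights, layer sizes, and the particular non-monotonic function are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: deterministic (mean-field) Boltzmann machine settling,
# with either a monotonic or a non-monotonic hidden-unit activation.
import numpy as np

rng = np.random.default_rng(1)

def monotonic(u):
    return np.tanh(u)

def non_monotonic(u):
    # example non-monotonic response: rises, then falls back toward zero
    return u * np.exp(-u**2)

def settle(v, W, L, act, n_iters=50):
    """Mean-field settling of hidden activations with a clamped visible vector v."""
    h = np.zeros(W.shape[1])
    for _ in range(n_iters):
        h = act(v @ W + h @ L)   # deterministic mean-field update, no sampling
    return h

W = rng.normal(scale=0.5, size=(6, 3))   # visible-to-hidden weights (assumption)
L = rng.normal(scale=0.2, size=(3, 3))
L = (L + L.T) / 2.0                      # symmetric lateral hidden-hidden weights
np.fill_diagonal(L, 0.0)                 # no self-connections
v = rng.choice([-1.0, 1.0], size=6)      # a visible pattern

print("monotonic hidden state:    ", settle(v, W, L, monotonic))
print("non-monotonic hidden state:", settle(v, W, L, non_monotonic))
```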

Neutronic simulation of the CEFR experiments with the nodal diffusion code system RAST-F

  • Tran, Tuan Quoc; Lee, Deokjung
    • Nuclear Engineering and Technology / v.54 no.7 / pp.2635-2649 / 2022
  • CEFR is a small-core sodium-cooled fast reactor (SFR) using high-enrichment fuel with stainless-steel reflectors, which poses a significant challenge to deterministic methodologies due to the strong spectral effect. The neutronic simulation of the start-up experiments conducted at the CEFR has been performed with the deterministic code system RAST-F, which is based on a two-step approach that couples a multi-group cross-section generation Monte Carlo (MC) code with a multi-group nodal diffusion solver. The RAST-F results were compared against the measurement data. Moreover, the characteristics of the neutron spectrum in the fuel rings and adjacent reflectors were evaluated using different models to generate accurate nuclear libraries. The numerical solution of the RAST-F system was verified against the full-core MC solution of MCS at the all-control-rods fully inserted and fully withdrawn states. Good agreement between the RAST-F and MCS solutions was observed, with less than 120 pcm discrepancy in keff and 1.2% root-mean-square error in the power distribution. Meanwhile, the RAST-F results agreed well with the experimental values within two sigma of the experimental uncertainty. This good agreement indicates that RAST-F can be used for steady-state neutronic simulations of small-core SFRs, which are challenging for deterministic code systems.
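
A minimal sketch of the two code-to-code comparison metrics quoted above: a keff discrepancy expressed in pcm and a root-mean-square error of the power distribution. The eigenvalues and power maps below are made-up numbers, not the paper's results, and the pcm convention is one common choice.

```python
# Hedged sketch of the code-to-code comparison metrics mentioned above.
import numpy as np

def reactivity_diff_pcm(k_test, k_ref):
    """Reactivity difference in pcm (one common convention: (k1-k2)/(k1*k2)*1e5)."""
    return (k_test - k_ref) / (k_test * k_ref) * 1.0e5

def power_rms_error(p_test, p_ref):
    """Root-mean-square relative error (%) of a normalized power distribution."""
    p_test = p_test / p_test.mean()
    p_ref = p_ref / p_ref.mean()
    return 100.0 * np.sqrt(np.mean(((p_test - p_ref) / p_ref) ** 2))

k_mcs, k_rastf = 1.00215, 1.00138             # illustrative eigenvalues
p_mcs = np.array([0.92, 1.05, 1.10, 0.93])    # illustrative assembly powers
p_rastf = np.array([0.93, 1.04, 1.11, 0.92])

print(f"discrepancy: {reactivity_diff_pcm(k_rastf, k_mcs):+.1f} pcm")
print(f"power RMS error: {power_rms_error(p_rastf, p_mcs):.2f} %")
```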

A Systems Engineering Approach to Multi-Physics Analysis of CEA Ejection Accident

  • Sebastian Grzegorz Dzien; Aya Diab
    • Journal of the Korean Society of Systems Engineering / v.19 no.2 / pp.46-58 / 2023
  • Deterministic safety analysis is a crucial part of safety assessment, particularly when it comes to demonstrating the safety of nuclear power plant designs. The traditional approach to deterministic safety analysis is to model the nuclear core using point kinetics. However, this simplified approach does not fully reflect the real core behavior with proper moderator and fuel reactivity feedbacks during the transient. A multi-physics approach allows a more precise simulation that reflects the inherent three-dimensionality (3D) of the problem by representing the detailed 3D core, with instantaneous updates of feedback mechanisms due to changes in important reactivity parameters such as the fuel temperature coefficient (FTC) and moderator temperature coefficient (MTC). This paper addresses a CEA ejection accident at hot full power (HFP), in which strong, asymmetric feedback between thermal-hydraulics and reactor kinetics exists. For this purpose, a multi-physics analysis tool has been selected with the nodal kinetics code, 3DKIN, implicitly coupled to the thermal-hydraulic code, RELAP5, for real-time communication and data exchange. This coupled approach enables high-fidelity three-dimensional simulation and is therefore especially relevant to reactivity-initiated accident (RIA) scenarios and power distribution anomalies with strong feedback mechanisms and/or asymmetric characteristics, as in the CEA ejection accident. The systems engineering approach is employed to provide guidance in developing the work in a systematic and efficient fashion.
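
A minimal sketch of the point-kinetics model that the paper contrasts with the coupled 3D approach, using one delayed-neutron group and a crude fuel-temperature (Doppler) feedback; all constants are illustrative assumptions, not plant data.

```python
# Hedged sketch: point kinetics with one delayed-neutron group and a crude
# Doppler feedback -- the simplified core model contrasted with the coupled
# 3D-kinetics / thermal-hydraulics approach. Constants are illustrative.
beta, lam = 0.0065, 0.08      # delayed-neutron fraction, precursor decay constant [1/s]
Lam = 2.0e-5                  # prompt neutron generation time [s]
alpha_f = -2.5e-5             # fuel temperature coefficient [dk/k per K]
heat, cool = 50.0, 2.0        # fuel heat-up per unit power, cooling rate (assumed)
rho_ext = 0.009               # inserted (ejected-rod) reactivity [dk/k], above beta

dt, t_end = 5.0e-6, 0.5
n, C, dT = 1.0, beta / (lam * Lam), 0.0   # equilibrium start; dT relative to nominal

peak = n
for step in range(int(t_end / dt)):
    rho = rho_ext + alpha_f * dT                     # net reactivity with Doppler feedback
    n_new = n + ((rho - beta) / Lam * n + lam * C) * dt
    C_new = C + (beta / Lam * n - lam * C) * dt
    dT_new = dT + (heat * n - cool * dT) * dt        # crude lumped fuel heat balance
    n, C, dT = n_new, C_new, dT_new
    peak = max(peak, n)

print(f"peak power (relative to initial): {peak:.1f}, final fuel dT: {dT:.1f} K")
```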

Two Uncertain Programming Models for Inverse Minimum Spanning Tree Problem

  • Zhang, Xiang; Wang, Qina; Zhou, Jian
    • Industrial Engineering and Management Systems / v.12 no.1 / pp.9-15 / 2013
  • An inverse minimum spanning tree problem makes the least modification to the edge weights such that a predetermined spanning tree is a minimum spanning tree with respect to the new edge weights. In this paper, the concept of an uncertain α-minimum spanning tree is introduced for the minimum spanning tree problem with uncertain edge weights. Using different decision criteria, two uncertain programming models are presented to formulate a specific inverse minimum spanning tree problem with uncertain edge weights, involving a sum-type model and a minimax-type model. By means of the operational law of independent uncertain variables, the two uncertain programming models are transformed into their equivalent deterministic models, which can be solved by classic optimization methods. Finally, some numerical examples on a traffic network reconstruction problem are put forward to illustrate the effectiveness of the proposed models.
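
A minimal sketch of the deterministic condition underlying inverse MST formulations: a predetermined spanning tree is minimum exactly when no non-tree edge is lighter than the heaviest tree edge on the cycle it closes. The example graph and weights are illustrative, not the paper's uncertain models.

```python
# Hedged sketch: check whether a predetermined spanning tree is an MST under given
# edge weights (the cycle-optimality condition used in inverse-MST formulations).
from collections import defaultdict

def max_edge_on_tree_path(tree_adj, u, v):
    """Maximum edge weight on the unique tree path between u and v (DFS)."""
    stack = [(u, None, 0.0)]
    while stack:
        node, parent, best = stack.pop()
        if node == v:
            return best
        for nxt, w in tree_adj[node]:
            if nxt != parent:
                stack.append((nxt, node, max(best, w)))
    raise ValueError("v not reachable in tree")

def is_minimum_spanning_tree(tree_edges, other_edges, weights):
    tree_adj = defaultdict(list)
    for (a, b) in tree_edges:
        tree_adj[a].append((b, weights[(a, b)]))
        tree_adj[b].append((a, weights[(a, b)]))
    # T is an MST iff no non-tree edge is lighter than some tree edge on its cycle.
    return all(weights[e] >= max_edge_on_tree_path(tree_adj, *e) for e in other_edges)

weights = {("A", "B"): 2.0, ("B", "C"): 3.0, ("C", "D"): 1.0,
           ("A", "C"): 4.0, ("B", "D"): 5.0}
tree = [("A", "B"), ("B", "C"), ("C", "D")]      # the predetermined spanning tree
non_tree = [("A", "C"), ("B", "D")]
print(is_minimum_spanning_tree(tree, non_tree, weights))  # True for these weights
```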