• Title/Summary/Keyword: response probability


Application of Rainfall Runoff Model with Rainfall Uncertainty (강우자료의 불확실성을 고려한 강우 유출 모형의 적용)

  • Lee, Hyo-Sang;Jeon, Min-Woo;Balin, Daniela;Rode, Michael
    • Journal of Korea Water Resources Association / v.42 no.10 / pp.773-783 / 2009
  • The effects of rainfall input uncertainty on predictions of stream flow are studied based on an extended GLUE (Generalized Likelihood Uncertainty Estimation) approach. The uncertainty in the rainfall data is implemented through analysis of systematic and non-systematic rainfall measurement errors in the Weida catchment, Germany. The PDM (Probability Distribution Model) rainfall-runoff model is selected for hydrological representation of the catchment. Using a general correction procedure and the DUE (Data Uncertainty Engine), feasible rainfall time series are generated. These series are applied to the PDM within the MC (Monte Carlo) and GLUE methods; posterior distributions of the model parameters are examined, and behavioural model parameters are selected for simplified GLUE prediction of stream flow. All predictions are combined to develop an ensemble prediction and its 90th-percentile band, which are used to show the effects of uncertainty arising from the input data and the model parameters. The results show acceptable performance in all flow regimes, except for underestimation of the peak flows. These results are not definite proof of the effects of rainfall uncertainty on parameter estimation; however, the extended GLUE approach in this study is a potential method which can include the major uncertainties in rainfall-runoff modelling.
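
The extended GLUE workflow summarized above (sample rainfall realisations and model parameters, keep behavioural parameter sets, combine them into an ensemble band) can be illustrated with a small Monte Carlo sketch. This is not the authors' PDM/DUE setup: the bucket model, the rainfall perturbation, the NSE threshold of 0.5 and all numeric values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_runoff(rain, k, s_max):
    """A very simple linear-store model standing in for PDM (illustration only)."""
    store, flow = 0.0, []
    for r in rain:
        store = min(store + r, s_max)   # fill the soil moisture store, excess ignored
        q = k * store                   # linear outflow
        store -= q
        flow.append(q)
    return np.array(flow)

# Hypothetical "observed" rainfall and flow standing in for the Weida catchment series.
rain_obs = rng.gamma(2.0, 2.0, size=200)
flow_obs = toy_runoff(rain_obs, 0.3, 40.0) + rng.normal(0.0, 0.2, 200)

behavioural = []
for _ in range(5000):                                            # Monte Carlo loop
    rain_i = rain_obs * rng.normal(1.0, 0.1, rain_obs.size)      # rainfall realisation (input uncertainty)
    k, s_max = rng.uniform(0.05, 0.8), rng.uniform(10.0, 100.0)  # parameter sample
    sim = toy_runoff(rain_i, k, s_max)
    nse = 1.0 - np.sum((sim - flow_obs) ** 2) / np.sum((flow_obs - flow_obs.mean()) ** 2)
    if nse > 0.5:                                                # behavioural threshold (GLUE)
        behavioural.append(sim)

ens = np.array(behavioural)
lower, upper = np.percentile(ens, [5, 95], axis=0)               # 90% ensemble prediction band
print(f"behavioural sets: {len(ens)}, mean 90% band width: {(upper - lower).mean():.2f}")
```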

Comparison of Deterministic and Probabilistic Approaches through Cases of Exposure Assessment of Child Products (어린이용품 노출평가 연구에서의 결정론적 및 확률론적 방법론 사용실태 분석 및 고찰)

  • Jang, Bo Youn;Jeong, Da-In;Lee, Hunjoo
    • Journal of Environmental Health Sciences / v.43 no.3 / pp.223-232 / 2017
  • Objectives: In response to increased interest in the safety of children's products, a risk management system is being prepared through exposure assessment of hazardous chemicals. To estimate exposure levels, risk assessors use deterministic and probabilistic statistical approaches and commercialized Monte Carlo simulation tools (MCTools) to efficiently support calculation of the probability density functions. This study was conducted to analyze and discuss the usage patterns and problems associated with the results of these two approaches, and with the MCTools used in the probabilistic cases, by reviewing research reports related to exposure assessment of children's products. Methods: We collected six research reports on exposure and risk assessment of children's products and summarized the exposure dose and concentration results estimated through the deterministic and probabilistic approaches, together with the corresponding underlying distributions. We focused on the mechanisms of, and differences between, the MCTools used for fitting the probabilistic distributions, in order to validate the simulation adequacy in detail. Results: The exposure dose and concentration estimates from the deterministic approaches were 0.19-3.98 times higher than the results from the probabilistic approach. For the probabilistic approach, lognormal, Student's t, and Weibull distributions were most frequently used as underlying distributions of the input parameters. However, we could not examine the reasons for the selection of each distribution because of the absence of test statistics. In addition, there were cases in which a discrete probability distribution was estimated as the underlying distribution for continuous variables, such as body weight. To find the cause of these abnormal simulations, we applied the two MCTools used across all reports and identified the improper usage routes. Conclusions: For transparent and realistic exposure assessment, it is necessary to 1) establish standardized guidelines for the proper use of the two statistical approaches, including notes for each MCTool, and 2) consider the development of a new software tool with proper configurations and features specialized for risk assessment. Such guidelines and software will make exposure assessment more user-friendly, consistent, and rapid in the future.
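
A minimal sketch of the contrast the abstract draws between a deterministic point estimate and a Monte Carlo (probabilistic) exposure estimate. The dose equation, the distribution choices (lognormal, Weibull, normal) and every numeric value below are hypothetical assumptions, not values taken from the reviewed reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Deterministic approach: single conservative point estimates (hypothetical values).
conc_det, contact_det, weight_det = 5.0, 0.8, 16.0   # mg/kg product, kg product/day, kg body weight
dose_det = conc_det * contact_det / weight_det       # mg/kg-bw/day

# Probabilistic approach: continuous underlying distributions for each input parameter.
conc = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)     # lognormal concentration
contact = 0.5 * rng.weibull(1.5, size=n)                      # Weibull contact rate
weight = rng.normal(16.0, 2.5, size=n).clip(min=5.0)          # body weight: continuous, not discrete
dose_mc = conc * contact / weight

p50, p95 = np.percentile(dose_mc, [50, 95])
print(f"deterministic dose: {dose_det:.3f} mg/kg-bw/day")
print(f"probabilistic dose: median {p50:.3f}, 95th percentile {p95:.3f}")
print(f"deterministic / probabilistic-median ratio: {dose_det / p50:.2f}")
```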

RFID Reader Anti-collision Algorithm using the Channel Monitoring Mechanism (채널 모니터링 기법을 이용한 RFID 리더 충돌방지 알고리즘)

  • Lee Su-Ryun;Lee Chae-Woo
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.8 s.350 / pp.35-46 / 2006
  • When an RFID reader attempts to read tags, interference can occur if neighboring readers attempt to communicate with the same tag at the same time or use the same frequency simultaneously. These interferences cause RFID reader collisions. When a reader collision occurs, either the command from the reader cannot be transmitted to the tags or the response of the tags cannot be received correctly by the reader. Therefore, the international standards for RFID and several papers have proposed methods to reduce reader collisions. Among these, Colorwave and Enhanced Colorwave are reader anti-collision algorithms using frame-slotted ALOHA based on TDM (Time Division Multiplexing); they reduce reader collisions by changing the frame size according to the collision probability. However, they can still generate reader collisions or interrupt the tag reading of other readers, because a reader that collides with another reader randomly chooses a new slot in the frame. In this paper, we propose a new RFID reader anti-collision algorithm in which each reader monitors the slots in the frame and, when a reader collision occurs, chooses the slot having the minimum occupation probability. We then analyze the performance of the proposed algorithm using a simulation tool.
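
The core idea described above, choosing the slot with the minimum observed occupation probability instead of a random slot after a collision, can be sketched as follows. The frame size, the monitoring history and the Colorwave-style random baseline are assumptions for illustration, not the paper's simulation setup.

```python
import random
from collections import Counter

FRAME_SIZE = 8

def choose_slot_monitored(history, frame_size=FRAME_SIZE):
    """Proposed idea: pick the slot with the lowest observed occupation probability."""
    counts = Counter(history)
    return min(range(frame_size), key=lambda s: counts.get(s, 0))

def choose_slot_random(frame_size=FRAME_SIZE):
    """Colorwave-style behaviour: pick a new slot at random after a collision."""
    return random.randrange(frame_size)

# Toy monitoring history: which slots neighbouring readers occupied in recent frames.
neighbour_history = [random.choice([0, 1, 1, 2, 3, 3, 3, 5]) for _ in range(200)]
print("monitored choice:", choose_slot_monitored(neighbour_history))   # lands in a rarely used slot
print("random choice   :", choose_slot_random())                       # may collide again
```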

Estimating Cumulative Distribution Functions with Maximum Likelihood to Sample Data Sets of a Sea Floater Model (해상 부유체 모델의 표본 데이터에 대해서 최대우도를 갖는 누적분포함수 추정)

  • Yim, Jeong-Bin;Yang, Won-Jae
    • Journal of Navigation and Port Research / v.37 no.5 / pp.453-461 / 2013
  • This paper describes evaluation procedures and experimental results for the estimation of Cumulative Distribution Functions (CDF) giving the best fit to sample data in the Probability-based risk Evaluation Techniques (PET), which assess the risks of a small-sized sea floater. The CDF in the PET provides the reference values of the risk acceptance criteria used to evaluate the risk level of the floater, and it can be estimated from sample data sets of motion response functions such as Roll, Pitch and Heave of the floater model. Using Maximum Likelihood Estimates with eight kinds of prescribed distribution functions, evaluation tests for the CDF having maximum likelihood to the sample data are carried out in this work. Through goodness-of-fit tests of the distribution functions, it is shown that the Beta distribution is the best fit to the Roll and Pitch sample data, with smallest averaged probability errors $\bar{\delta}$ $(0 \leq \bar{\delta} \leq 1.0)$ of 0.024 and 0.022, respectively, and that the Gamma distribution is the best fit to the Heave sample data with smallest $\bar{\delta}$ of 0.027. The proposed method can be expected to be adopted in various application areas for estimating distributions best fitted to sample data.
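
A hedged sketch of the fitting step described above: fit several candidate distributions by maximum likelihood and compare their goodness of fit. The paper uses eight prescribed distributions and an averaged probability error $\bar{\delta}$; the sketch below uses four distributions and a Kolmogorov-Smirnov statistic instead, and the `roll_sample` data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic sample standing in for a Roll motion-response series of the floater model.
roll_sample = rng.beta(2.0, 5.0, size=1000)

candidates = {
    "beta": stats.beta,
    "gamma": stats.gamma,
    "norm": stats.norm,
    "weibull_min": stats.weibull_min,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(roll_sample)                                   # maximum likelihood fit
    ks_stat, _ = stats.kstest(roll_sample, dist.cdf, args=params)    # goodness-of-fit statistic
    results.append((ks_stat, name))

for ks_stat, name in sorted(results):
    print(f"{name:12s} KS statistic = {ks_stat:.4f}")
print("best-fitting distribution:", min(results)[1])
```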

Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters (풍수해 대응을 위한 Bootstrap방법과 SIR알고리즘 빈도해석 적용)

  • Kim, Yonsoo;Kim, Taegyun;Kim, Hung Soo;Noh, Huisung;Jang, Daewon
    • Journal of Wetlands Research / v.20 no.2 / pp.105-115 / 2018
  • The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observed data are statistically stationary, and a parametric method considering the parameters of a probability distribution is applied. A parametric method requires a sufficient amount of reliable data; however, snowfall records in Korea need to be supplemented because the number of snowfall observation days and the mean maximum daily snowfall depth are decreasing due to climate change. In this study, we conducted a frequency analysis of snowfall using the Bootstrap method and the SIR (Sampling Importance Resampling) algorithm, which are resampling methods that can overcome the problem of insufficient data. For the 58 meteorological stations distributed evenly over Korea, the snowfall depth for given probabilities was estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results of the frequency-based snowfall depth show that the rate of change at most stations was consistent between the parametric and non-parametric frequency analyses. According to the results, the observed data and the Bootstrap method showed a difference of -19.2% to 3.9%, and the Bootstrap method and the SIR algorithm showed a difference of -7.7% to 137.8%. This study shows that resampling methods can be used for frequency analysis of snowfall depth when observed samples are insufficient, and that they can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons.
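
A minimal sketch of the non-parametric bootstrap step for a frequency quantile, assuming a short hypothetical annual-maximum snowfall record and a 50-year return period; it is not the study's station data and does not include the SIR algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical annual maximum daily snowfall series (cm) for one station (short record).
snow_max = np.array([12.3, 8.1, 25.4, 5.0, 17.8, 9.6, 30.2, 14.5, 7.7, 21.0,
                     11.2, 6.4, 18.9, 27.5, 10.3])

return_period = 50
prob = 1 - 1 / return_period          # non-exceedance probability for the 50-year event

# Non-parametric bootstrap: resample the record and take the empirical quantile each time.
n_boot = 10_000
boot_quantiles = np.empty(n_boot)
for i in range(n_boot):
    resample = rng.choice(snow_max, size=snow_max.size, replace=True)
    boot_quantiles[i] = np.quantile(resample, prob)

est = boot_quantiles.mean()
lo, hi = np.percentile(boot_quantiles, [2.5, 97.5])
print(f"50-year snowfall estimate: {est:.1f} cm (95% interval {lo:.1f}-{hi:.1f} cm)")
```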

Weighting Effect on the Weighted Mean in Finite Population (유한모집단에서 가중평균에 포함된 가중치의 효과)

  • Kim, Kyu-Seong
    • Survey Research / v.7 no.2 / pp.53-69 / 2006
  • Weights can be constructed and imposed in both the sample design stage and the analysis stage of a sample survey. While in the design stage weights are related to sample data acquisition quantities such as the sample selection probability and the response rate, in the analysis stage weights are connected with external quantities, for instance population quantities and auxiliary information. The final weight is the product of all weights from both stages. In the present paper, we focus on the weights in the analysis stage and investigate their effect on the weighted mean when estimating the population mean. We consider a finite population with a pair of fixed survey value and weight in each unit, and suppose equal selection probability designs. Under these conditions we derive formulas for the bias as well as the mean square error of the weighted mean, and show that the weighted mean is biased and that the direction and amount of the bias can be explained by the correlation between the survey variate and the weight: if the correlation coefficient is positive, the weighted mean over-estimates the population mean; if negative, it under-estimates. Moreover, the magnitude of the bias grows as the correlation coefficient increases. In addition to the theoretical derivation, we conduct a simulation study to show the bias and mean square errors numerically. In the simulation, nine weights having correlation coefficients with the survey variate from -0.2 to 0.6 are generated, four sample sizes from 100 to 400 are considered, and the biases and mean square errors are calculated for each case. As a result, in the case of a sample size of 400 and a correlation coefficient of 0.55, the squared bias of the weighted mean accounts for up to 82% of the mean square error, which shows that the weighted mean can be very seriously biased in some cases.
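
The mechanism described above (a positive correlation between the survey variate and the analysis-stage weight pushes the weighted mean upward) can be checked with a small simulation sketch. The population, the weight construction and the sample size below are arbitrary assumptions, and the sketch does not attempt to reproduce the paper's exact 82% figure.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, n_rep = 10_000, 400, 2_000

# Finite population: survey value y and an analysis-stage weight w positively correlated with y.
rho = 0.55
y = rng.normal(50.0, 10.0, N)
z = rho * (y - y.mean()) / y.std() + np.sqrt(1 - rho**2) * rng.normal(0.0, 1.0, N)
w = np.clip(1.0 + 0.3 * z, 0.1, None)      # weights vary around 1, correlated with y
true_mean = y.mean()

# Equal selection probability samples; the estimator is the weighted mean with analysis weights.
est = np.empty(n_rep)
for r in range(n_rep):
    idx = rng.choice(N, size=n, replace=False)
    est[r] = np.average(y[idx], weights=w[idx])

bias = est.mean() - true_mean
mse = np.mean((est - true_mean) ** 2)
print(f"bias = {bias:.3f} (positive, as expected for positive correlation)")
print(f"squared bias as a share of MSE = {bias**2 / mse:.1%}")
```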


EM Algorithm and Two Stage Model for Incomplete Data (불완전한 자료에 대한 보완기법(EM 알고리듬과 2단계(Two Stage) 모델))

  • 박경숙
    • Korea journal of population studies / v.21 no.1 / pp.162-183 / 1998
  • This study examines the sampling bias that may have resulted from the large number of missing observations. Despite well-designed and reliable sampling procedures, the observed sample values in the DSFH (Demographic Survey on Changes in Family and Household Structure, Japan) included many missing observations. The head-administered survey method of the DSFH resulted in a large number of missing observations regarding the characteristics of elderly non-head parents and their children. In addition, the response probability of a particular item in the DSFH differs significantly by characteristics of elderly parents and their children. Furthermore, missing observations for many items occurred simultaneously. This complex pattern of missing observations critically limits the ability to produce an unbiased analysis. First, the large number of missing observations is likely to cause a misleading estimate of the standard error. Even worse, the possible dependency of missing observations on their latent values is likely to produce biased estimates of covariates. Two models are employed to address these possible inference biases. First, the EM algorithm is used to infer the missing values based on knowledge of the association between the observed values and other covariates. Second, a selection model is employed, given the suspicion that the probability of a missing observation of proximity depends on its unobserved outcome.
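
As one concrete illustration of the first remedy, the sketch below runs an EM-style iteration for a regression with missing outcome values under a missing-at-random assumption. It is not the DSFH data, nor the selection model for non-ignorable missingness mentioned above; all variables and parameter values are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for a survey item: x fully observed, y missing for ~30% of cases (MAR).
n = 500
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)
miss = rng.random(n) < 0.3
y_obs = np.where(miss, np.nan, y)

# EM for the regression of y on x with missing y: E-step imputes E[y | x], M-step refits.
b0, b1, sigma2 = 0.0, 0.0, 1.0
for _ in range(100):
    y_fill = np.where(miss, b0 + b1 * x, y_obs)             # E-step: expected values for missing cases
    b1 = np.cov(x, y_fill, bias=True)[0, 1] / np.var(x)     # M-step: slope
    b0 = y_fill.mean() - b1 * x.mean()                       # M-step: intercept
    resid = y_fill - (b0 + b1 * x)
    sigma2 = np.mean(resid**2) + miss.mean() * sigma2        # add conditional variance for imputed cases

print(f"EM estimates: intercept = {b0:.2f}, slope = {b1:.2f}, residual variance = {sigma2:.2f}")
```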


The Frequency Theory of Probability and Popper (확률의 상관 빈도이론과 포퍼)

  • Song, Ha-Seok
    • Korean Journal of Logic / v.8 no.1 / pp.23-46 / 2005
  • The purpose of this paper is to discuss and evaluate early Popper's theory of probability, which is presented in his book, The Logic of Scientific Discovery. For this, Von Mises' frequency theory is discussed in detail, since it is regarded as the most systematic and sophisticated frequency theory among others. Von Mises developed his theory in response to various critical questions, such as how finite and empirical collectives can be represented in terms of infinite and mathematical collectives, and how the axiom of randomness can be mathematically formulated. But his theory still has another difficulty, concerning the inconsistency between the axiom of convergence and the axiom of randomness. Defending the objective theory of probability, Popper tries to present his own frequency theory that solves this difficulty. He suggests that the axiom of convergence be given up and that the axiom of randomness be modified. That is, Popper introduces the notions of ordinal selection and neighborhood selection to modify the axiom of randomness. He then shows that Bernoulli's theorem can be derived from the modified axiom. Consequently, it can be said that Popper solves the problem of inconsistency which is regarded as crucial to Von Mises' theory. However, Popper's suggestion has not drawn much attention. I think this is because his theory seems counter-intuitive in the sense that it gives up the axiom of convergence, which is the basis of the frequency theory. So, for a more persuasive frequency theory, it is necessary to formulate the axiom of randomness so that it is consistent with the axiom of convergence.


A Prediction and Analysis for Functional Change of Ecosystem in South Korea (생태계 용역가치를 이용한 대한민국 생태계의 기능적 변화 예측 및 분석)

  • Kim, Jin-Soo;Park, So-Young
    • Journal of the Korean Association of Geographic Information Studies / v.16 no.2 / pp.114-128 / 2013
  • Rapid industrialization and economic growth have led to serious problems, including reduced open space, environmental degradation, traffic congestion, and urban sprawl. These problems have been exacerbated by the absence of effective conservation and governance, and have resulted in various social conflicts. In response to these challenges, many scholars and governments hope to achieve sustainable development through the establishment and management of environment-friendly planning. For this purpose, we analyze the functional change of the ecosystem caused by future land-use/cover changes in South Korea. Toward this goal, we predicted land-use/cover changes from 2010 to 2060 using the future population projections of Statistics Korea and an urban growth probability map created by logistic regression analysis, and analyzed the ecosystem service value using Costanza's coefficients. In scenario 1, the ecosystem service value was 6,783~7,092 million USD. In scenario 2, it was 6,775~7,089 million USD, a decrease of 2.9~7.6 million USD compared with scenario 1. This was the result of reductions in the area of farmland and wetland, which have relatively high environmental value, caused by urban growth under a development-oriented perspective. The results of this analysis indicate that environmentally sustainable systems and urban development must be applied to achieve sustainable development and environmental protection. Quantitative analysis of environmental values in accordance with environmental policy can help inform the decisions of policy makers and urban developers. Furthermore, forecasting urban growth based on future demand will provide more precise predictive analysis.
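
The valuation step mentioned above (multiplying each land-use/cover area by a per-hectare service-value coefficient in the style of Costanza) reduces to a simple weighted sum; the class areas and coefficients in the sketch below are placeholders, not the study's figures.

```python
# Ecosystem service value (ESV) as a weighted sum of land-use/cover areas and per-hectare
# value coefficients (Costanza-style); all areas and coefficients below are placeholders.
AREA_HA = {"urban": 1_200_000, "farmland": 1_700_000, "forest": 6_300_000, "wetland": 300_000}
COEF_USD_PER_HA = {"urban": 0, "farmland": 92, "forest": 302, "wetland": 14_785}

esv = sum(AREA_HA[c] * COEF_USD_PER_HA[c] for c in AREA_HA)
print(f"total ESV: {esv / 1e6:,.0f} million USD")
```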

A Storage and Computation Efficient RFID Distance Bounding Protocol (저장 공간 및 연산 효율적인 RFID 경계 결정 프로토콜)

  • Ahn, Hae-Soon;Yoon, Eun-Jun;Bu, Ki-Dong;Nam, In-Gil
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9B / pp.1350-1359 / 2010
  • Recently many researchers have shown that general RFID systems for proximity authentication are vulnerable to various location-based relay attacks such as distance fraud, mafia fraud and terrorist fraud attacks. A distance-bounding protocol is used to prevent such relay attacks by measuring the round-trip time of single-bit challenge-response exchanges. In 2008, Munilla and Peinado proposed an improved distance-bounding protocol applying a void-challenge technique based on Hancke-Kuhn's protocol. Compared with Hancke-Kuhn's protocol, Munilla and Peinado's protocol is more secure because the success probability of an adversary is $(5/8)^n$. However, Munilla and Peinado's protocol is inefficient for low-cost passive RFID tags because it requires large storage space and many hash function computations. Thus, this paper proposes a new RFID distance-bounding protocol for low-cost passive RFID tags that reduces the storage space and the number of hash function computations. As a result, the proposed distance-bounding protocol not only provides both storage and computational efficiency, but also provides strong security against relay attacks, because the adversary's success probability can be reduced to $(5/8)^n$.
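
For context on the quoted $(5/8)^n$ bound, the short sketch below computes how many fast challenge-response rounds $n$ are needed to push the adversary's success probability below a target level. The $(3/4)^n$ figure commonly quoted for the original Hancke-Kuhn protocol and the $2^{-20}$ target are assumptions added here, not values from the paper.

```python
import math

def rounds_needed(per_round_prob, target=2**-20):
    """Smallest n with per_round_prob ** n <= target."""
    return math.ceil(math.log(target) / math.log(per_round_prob))

for name, p in [("Hancke-Kuhn, (3/4)^n", 3 / 4), ("void-challenge, (5/8)^n", 5 / 8)]:
    print(f"{name}: n >= {rounds_needed(p)} rounds for adversary success <= 2^-20")
```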