• Title/Summary/Keyword: Conditional Probability


Non-stationary Frequency Analysis with Climate Variability using Conditional Generalized Extreme Value Distribution (기후변동을 고려한 조건부 GEV 분포를 이용한 비정상성 빈도분석)

  • Kim, Byung-Sik;Lee, Jung-Ki;Kim, Hung-Soo;Lee, Jin-Won
    • Journal of Wetlands Research / v.13 no.3 / pp.499-514 / 2011
  • An underlying assumption of traditional hydrologic frequency analysis is that climate, and hence the frequency of hydrologic events, is stationary, or unchanging over time. Under stationary conditions, the distribution of the variable of interest is invariant to temporal translation. Water resources infrastructure planning and design, for structures such as dams, levees, canals, bridges, and culverts, relies on an understanding of past conditions and projections of future conditions. However, water managers have always known that our world is inherently non-stationary, and they routinely deal with this in management and planning. The aim of this paper is to give a brief introduction to non-stationary extreme value analysis methods. A non-stationary hydrologic frequency analysis approach is introduced in order to determine probability rainfall under a changing climate. The non-stationary statistical approach is based on the conditional Generalized Extreme Value (GEV) distribution and maximum likelihood parameter estimation, and it is applied to annual maximum 24-hour rainfall. The results show that the non-stationary GEV approach is suitable for determining probability rainfall under a changing climate, such as one exhibiting a trend. Moreover, a non-stationary frequency analysis was carried out using the SOI (Southern Oscillation Index) of ENSO (El Niño Southern Oscillation).
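
To make the approach concrete, here is a minimal Python sketch (not from the paper) of fitting a conditional GEV by maximum likelihood, with the location parameter varying linearly with a climate covariate such as the SOI. The variable names, the linear form of the location, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def fit_conditional_gev(annual_max, covariate):
    """ML fit of a GEV whose location parameter depends linearly on a covariate."""
    def neg_log_lik(params):
        beta0, beta1, log_scale, shape = params
        mu = beta0 + beta1 * covariate      # non-stationary location
        sigma = np.exp(log_scale)           # keep the scale positive
        # Note: scipy's genextreme uses c = -xi relative to the usual GEV shape xi.
        return -np.sum(genextreme.logpdf(annual_max, c=shape, loc=mu, scale=sigma))

    start = np.array([np.mean(annual_max), 0.0, np.log(np.std(annual_max)), 0.1])
    return minimize(neg_log_lik, start, method="Nelder-Mead").x

# Synthetic example: 50 years of annual-maximum 24-hour rainfall driven by the SOI.
rng = np.random.default_rng(0)
soi = rng.normal(size=50)
rain = genextreme.rvs(c=-0.1, loc=100 + 5 * soi, scale=20, random_state=rng)
print(fit_conditional_gev(rain, soi))       # beta0, beta1, log(scale), shape
```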

A simulation study for various propensity score weighting methods in clinical problematic situations (임상에서 발생할 수 있는 문제 상황에서의 성향 점수 가중치 방법에 대한 비교 모의실험 연구)

  • Siseong Jeong;Eun Jeong Min
    • The Korean Journal of Applied Statistics / v.36 no.5 / pp.381-397 / 2023
  • The most representative design used in clinical trials is randomization, which allows the treatment effect to be estimated accurately. However, comparison between the treatment group and the control group in an observational study without randomization is biased by various unadjusted differences, such as differences in patient characteristics. Propensity score weighting is a widely used method to address these problems, minimizing bias by adjusting for confounding when assessing treatment effects. Inverse probability weighting, the most popular method, assigns weights proportional to the inverse of the conditional probability of receiving a specific treatment assignment given the observed covariates. However, this method often suffers from extreme propensity scores, resulting in biased estimates and excessive variance. Several alternative methods, including trimming, overlap weights, and matching weights, have been proposed to mitigate these issues. In this paper, we conduct a simulation study to compare the performance of various propensity score weighting methods under diverse situations, such as limited overlap, a misspecified propensity score, and treatment contrary to prediction. The simulation results show that overlap weights and matching weights consistently outperform inverse probability weighting and trimming in terms of bias, root mean squared error, and coverage probability.
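
For reference, here is a minimal sketch of the weighting schemes compared in the study, computed from an already-estimated propensity score. The function names, the trimming cutoff, and the weighted difference-in-means effect estimate are illustrative assumptions, not the authors' simulation code.

```python
import numpy as np

def propensity_weights(t, ps, trim=0.1):
    """IPW, trimmed IPW, overlap, and matching weights for a binary treatment t,
    given estimated propensity scores ps = P(T=1 | covariates)."""
    t, ps = np.asarray(t, float), np.asarray(ps, float)

    # Inverse probability weighting: 1/ps for treated, 1/(1-ps) for controls.
    ipw = t / ps + (1 - t) / (1 - ps)

    # Trimming: zero out units with extreme propensity scores.
    keep = (ps > trim) & (ps < 1 - trim)
    ipw_trimmed = np.where(keep, ipw, 0.0)

    # Overlap weights: treated get 1-ps, controls get ps (always bounded).
    overlap = t * (1 - ps) + (1 - t) * ps

    # Matching weights: min(ps, 1-ps) divided by the probability of the received arm.
    matching = np.minimum(ps, 1 - ps) * (t / ps + (1 - t) / (1 - ps))

    return ipw, ipw_trimmed, overlap, matching

def weighted_effect(y, t, w):
    """Weighted difference in means as a simple treatment-effect estimate."""
    y, t, w = map(np.asarray, (y, t, w))
    return (np.average(y[t == 1], weights=w[t == 1])
            - np.average(y[t == 0], weights=w[t == 0]))
```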

Application of Indicator Geostatistics for Probabilistic Uncertainty and Risk Analyses of Geochemical Data (지화학 자료의 확률론적 불확실성 및 위험성 분석을 위한 지시자 지구통계학의 응용)

  • Park, No-Wook
    • Journal of the Korean earth science society / v.31 no.4 / pp.301-312 / 2010
  • Geochemical data have been regarded as one of the important environmental variables in environmental management. Since they are often sampled at sparse locations, it is important not only to predict attribute values at unsampled locations, but also to assess the uncertainty attached to the predictions for further analysis. The main objective of this paper is to exemplify how indicator geostatistics can be effectively applied to geochemical data processing to provide decision-supporting information as well as the spatial distribution of the geochemical data. A complete geostatistical analysis framework, which includes probabilistic uncertainty modeling, classification, and risk analysis, is illustrated through a case study of cadmium mapping. A conditional cumulative distribution function (ccdf) was first modeled by indicator kriging, and then e-type estimates and the conditional variance were computed to obtain the spatial distribution of cadmium and quantitative uncertainty measures, respectively. Two different classification criteria, probability thresholding and attribute thresholding, were applied to delineate contaminated and safe areas. Finally, additional sampling locations were extracted from the coefficient of variation, which accounts for both the conditional variance and the difference between the attribute values and the threshold value. It is suggested that the indicator geostatistical framework illustrated in this study can be a useful tool for analyzing environmental variables, including geochemical data, for decision-making in the presence of uncertainty.
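
As a rough illustration of the post-processing step described above, the sketch below derives an e-type (conditional mean) estimate and a conditional variance from a ccdf modeled at a few thresholds. The threshold values, the tail bounds, and the midpoint interpolation within classes are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def etype_and_variance(thresholds, ccdf, z_min=0.0, z_max=None):
    """E-type estimate and conditional variance from F(z_k) = Prob(Z <= z_k)."""
    thresholds, ccdf = np.asarray(thresholds, float), np.asarray(ccdf, float)
    if z_max is None:
        z_max = 1.5 * thresholds[-1]                 # assumed upper tail bound

    bounds = np.concatenate(([z_min], thresholds, [z_max]))
    probs = np.diff(np.concatenate(([0.0], ccdf, [1.0])))    # class probabilities
    mids = 0.5 * (bounds[:-1] + bounds[1:])                  # class representatives

    e_type = np.sum(probs * mids)                    # conditional mean
    cond_var = np.sum(probs * (mids - e_type) ** 2)  # conditional variance
    return e_type, cond_var

# Hypothetical cadmium ccdf at thresholds of 1, 2, and 4 ppm at one location.
print(etype_and_variance([1.0, 2.0, 4.0], [0.3, 0.7, 0.9]))
```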

A Fuzzy-based Risk Assessment using Uncertainty Model (불확실성 모델을 사용한 퍼지 위험도분석)

  • Choi Hyun-Ho;Seo Jong-Won;Jung Pyung-Ki
    • Proceedings of the Korean Institute Of Construction Engineering and Management / autumn / pp.473-476 / 2003
  • This paper presents a systematic risk assessment procedure with uncertainty modeling for general construction projects. Because the approach can effectively deal with the related construction risks in terms of assumed probabilities combined with the conditional probability concept, systematically incorporating experts' experience and subjective judgement, the proposed method with uncertainty modeling can be applied to construction projects that involve many uncertain risk events. Fuzzy set theory is adopted to enhance the risk assessment and to handle the vague and dynamic nature of an event effectively. The fuzzy-based risk assessment is therefore particularly useful in countries, such as Korea, where objective probabilistic data for risk assessment are extremely rare and the use of subjective judgmental data based on experts' experience is inevitable.
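
A minimal, purely illustrative sketch of the idea of expressing an expert's subjective probability as a triangular fuzzy number and combining it with a fuzzy conditional probability. The membership functions, the centroid defuzzification, and all numbers are assumptions, not the authors' procedure.

```python
import numpy as np

def triangular(x, low, mode, high):
    """Membership of x in a triangular fuzzy number (low, mode, high)."""
    x = np.asarray(x, float)
    return np.clip(np.minimum((x - low) / (mode - low),
                              (high - x) / (high - mode)), 0.0, 1.0)

def centroid(xs, membership):
    """Centroid defuzzification of a fuzzy set sampled on xs."""
    return np.sum(xs * membership) / np.sum(membership)

xs = np.linspace(0.0, 1.0, 201)
# Expert judgement: the triggering risk event happens with probability "around 0.3".
p_event = centroid(xs, triangular(xs, 0.1, 0.3, 0.5))
# Conditional probability of a cost overrun given that event, also judged fuzzily.
p_overrun_given_event = centroid(xs, triangular(xs, 0.4, 0.6, 0.8))

print("risk of cost overrun via this event:", p_event * p_overrun_given_event)
```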

A Study on Development of Median Encroachment Accident Model (중앙선침범사고 예측모델의 개발에 관한 연구)

  • 하태준;박제진
    • Journal of Korean Society of Transportation / v.19 no.5 / pp.109-117 / 2001
  • The median encroachment accident model proposed in this paper is the first step toward developing cost-effective criteria for installing facilities that prevent traffic accidents caused by median encroachment. The model consists of the expected annual number of median encroachments on the roadway and the conditional probability of colliding with a vehicle in the opposite lane after an encroachment. The expected number of encroachments is related to traffic volume and is taken from a study by Hutchinson & Kennedy (1966). The collision probability is built from an assumed headway distribution of opposing vehicles (negative exponential distribution), the driving time of the encroaching vehicle, and a gap and gap-acceptance model. Using the expected number of accidents yielded by the model, the benefit of reduced accidents can be calculated and weighed against the cost of installing facilities. This will therefore help develop cost-effective criteria for what to install in the median.
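
The structure of the model can be sketched as follows, under the stated assumption of negative-exponential opposing headways. The encroachment rate, opposing flow, and exposure time are placeholder numbers; the Hutchinson & Kennedy (1966) encroachment-rate relationship is not reproduced here.

```python
import math

def collision_prob_given_encroachment(opposing_flow_veh_per_hr, exposure_sec):
    """P(an opposing vehicle arrives during the exposure time), assuming
    negative-exponential headways with rate q vehicles per second."""
    q = opposing_flow_veh_per_hr / 3600.0
    return 1.0 - math.exp(-q * exposure_sec)

def expected_annual_accidents(encroachments_per_year, opposing_flow_veh_per_hr,
                              exposure_sec):
    """Expected encroachments x conditional collision probability."""
    return encroachments_per_year * collision_prob_given_encroachment(
        opposing_flow_veh_per_hr, exposure_sec)

# Example: 5 encroachments/year, 400 veh/h opposing flow, 3 s in the opposing lane.
print(expected_annual_accidents(5, 400, 3.0))
```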

One-Step-Ahead Control of Waveform and Detection Threshold for Optimal Target Tracking in Clutter (클러터 환경에서 최적의 표적 추적을 위한 파형 파라미터와 검출문턱 값의 One-Step-Ahead 제어)

  • Shin Han-Seop;Hong Sun-Mog
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.1 s.307 / pp.31-38 / 2006
  • In this paper, we consider one-step-ahead control of waveform parameters (pulse amplitudes and lengths, and FM sweep rate) as well as detection thresholds for optimal range and range-rate tracking in clutter. The optimal control of the combined parameter set minimizes a tracking performance index under a set of parameter constraints. The performance index includes the probability of track loss and a function of the estimation error covariances. The track-loss probability and the error covariance are predicted using a hybrid conditional average algorithm. The effect of false alarms and clutter interference is taken into account in the prediction. Tracking performance of the one-step-ahead control is presented for several examples and compared with a control strategy heuristically derived from a finite-horizon optimization.
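
A heavily simplified sketch of the one-step-ahead idea: at each update, a discrete grid of candidate waveform parameters and detection thresholds is searched, and the combination minimizing a predicted cost (track-loss probability plus a function of the error covariance) is selected. The predictor below is a made-up stand-in for the paper's hybrid conditional average algorithm, and the cost weighting is an assumption.

```python
import itertools
import numpy as np

def one_step_ahead_select(candidates, predict_performance, alpha=10.0):
    """Pick the (pulse_len, sweep_rate, threshold) candidate with minimum predicted cost."""
    best, best_cost = None, np.inf
    for pulse_len, sweep_rate, threshold in candidates:
        p_loss, err_cov = predict_performance(pulse_len, sweep_rate, threshold)
        cost = alpha * p_loss + np.trace(err_cov)   # performance index
        if cost < best_cost:
            best, best_cost = (pulse_len, sweep_rate, threshold), cost
    return best, best_cost

def toy_predictor(pulse_len, sweep_rate, threshold):
    """Purely fictitious predictor: longer pulses reduce range error, lower
    thresholds raise false alarms and hence the track-loss risk."""
    p_loss = 0.05 + 0.1 / threshold
    err_cov = np.diag([1.0 / pulse_len, abs(sweep_rate) + 0.1])
    return p_loss, err_cov

grid = itertools.product([1e-4, 5e-4], [-2.0, 0.0, 2.0], [3.0, 5.0])
print(one_step_ahead_select(list(grid), toy_predictor))
```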

Boundary Detection using Adaptive Bayesian Approach to Image Segmentation (적응적 베이즈 영상분할을 이용한 경계추출)

  • Kim Kee Tae;Choi Yoon Su;Kim Gi Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.22 no.3 / pp.303-309 / 2004
  • In this paper, an adaptive Bayesian approach to image segmentation was developed for boundary detection. Both image intensities and texture information were used to obtain a better-quality segmentation, implemented in the C programming language. Fuzzy c-means clustering was applied for the conditional probability density function, and a Gibbs random field model was used for the prior probability density function. To test the algorithm simply, a synthetic image (256×256) with a set of low gray values (50, 100, 150, and 200) was created and normalized between 0 and 1 in double precision. Results are presented that demonstrate the effectiveness of the algorithm in segmenting the synthetic image, achieving more than 99% accuracy when the noise characteristics are correctly modeled. The algorithm was then applied to an Antarctic mosaic generated from 1963 Declassified Intelligence Satellite Photographs. The accuracy of the resulting vector map was estimated to be about 300 m.
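
To illustrate the Bayesian structure (class-conditional likelihood combined with a Gibbs prior, maximized per pixel), here is a minimal sketch that uses Gaussian likelihoods in place of the fuzzy c-means-derived densities and a Potts-style prior with a single pass of iterated conditional modes. It is an illustration of the idea, not the authors' algorithm.

```python
import numpy as np

def icm_sweep(image, labels, means, sigma=0.05, beta=1.0):
    """One ICM pass: relabel each pixel to maximize log-likelihood + neighbour prior."""
    h, w = image.shape
    new = labels.copy()
    for i in range(h):
        for j in range(w):
            best_label, best_score = 0, -np.inf
            for k, m in enumerate(means):
                loglik = -0.5 * ((image[i, j] - m) / sigma) ** 2
                # Gibbs (Potts) prior: reward agreement with the 4-neighbourhood.
                agree = sum(labels[y, x] == k
                            for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                            if 0 <= y < h and 0 <= x < w)
                score = loglik + beta * agree
                if score > best_score:
                    best_label, best_score = k, score
            new[i, j] = best_label
    return new

# Example: noisy two-class image with intensities normalized to [0, 1].
rng = np.random.default_rng(1)
truth = np.zeros((32, 32), dtype=int)
truth[:, 16:] = 1
img = np.where(truth == 1, 0.8, 0.2) + rng.normal(0, 0.05, truth.shape)
labels = icm_sweep(img, (img > 0.5).astype(int), means=[0.2, 0.8])
```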

An Analysis of Teachers' Knowledge about Correlation - Focused on Two-Way Tables - (상관관계에 대한 교사 지식 분석 - 2×2 분할표를 중심으로 -)

  • Shin, Bomi
    • School Mathematics / v.19 no.3 / pp.461-480 / 2017
  • The aim of this study was to analyze the characteristics of teachers' knowledge about correlation with data presented in 2×2 tables. To achieve this aim, the study conducted a didactical analysis of two-way tables based on a review of previous research and developed a questionnaire with reference to the results of that analysis. The questionnaire was given to 53 middle and high school teachers, and qualitative methods were used to analyze the data obtained from the participants' written responses. The study also elaborated framework descriptors for interpreting the teachers' responses in light of the didactical analysis, and the data were interpreted in terms of this framework. As a result, the specific features of teachers' knowledge about correlation with data presented in 2×2 tables were categorized into three types. The study raises several implications for teachers' professional development toward effective mathematics instruction about correlation and related concepts in probability and statistics.
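
For context, judging association in a 2×2 table amounts to comparing conditional probabilities across its rows, as in this small sketch with made-up counts.

```python
import numpy as np

table = np.array([[30, 10],    # rows: A yes / A no; columns: B yes / B no
                  [20, 40]])

p_b_given_a = table[0, 0] / table[0].sum()        # P(B | A)
p_b_given_not_a = table[1, 0] / table[1].sum()    # P(B | not A)
print(p_b_given_a, p_b_given_not_a)               # unequal, so A and B are associated
```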

A Lower Bound for Performance of Group Testing Problems (그룹검사 문제에 대한 성능 하한치)

  • Seong, Jin-Taek
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.572-578 / 2018
  • This paper considers group testing as a combinatorial problem. Group testing was first used to screen soldiers for syphilis infection during World War II and has long since established an academic basis. Recently, there has been much interest in related areas because the value of group testing has been rediscovered. Group testing amounts to finding a few defective samples among a large number of samples, which is similar to the inverse problem of compressed sensing. In this paper, we introduce the definition of group testing, and specify the classes of group testing problems and the bounds on their performance. In addition, we derive a lower bound on the number of tests required to find the defective samples, using a theorem from information theory that relates conditional entropy to the probability of error. We also discuss how our result differs from other related results.
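
The flavour of such a bound can be conveyed by a simple counting argument: T binary test outcomes can distinguish at most 2^T hypotheses, so identifying k defectives among n items requires at least log2 C(n, k) tests. The snippet below computes this counting bound; it illustrates the style of the argument, not the paper's exact result.

```python
import math

def group_testing_lower_bound(n, k):
    """Minimum number of binary tests needed to identify k defectives among n items."""
    return math.ceil(math.log2(math.comb(n, k)))

print(group_testing_lower_bound(1000, 10))   # at least 78 tests for n=1000, k=10
```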

On asymptotics for a bias-corrected version of the NPMLE of the probability of discovering a new species (신종발견확률의 편의보정 비모수 최우추정량에 관한 연구)

  • 이주호
    • The Korean Journal of Applied Statistics / v.6 no.2 / pp.341-353 / 1993
  • As an estimator of the conditional probability of discovering a new species at the next observation, after a sample of a certain size has been taken, the one proposed by Good (1953) has been most widely used. Recently, Clayton and Frees (1987) showed via simulation that their nonparametric maximum likelihood estimator (NPMLE) has a smaller MSE than Good's estimator when the population is relatively nonuniform. Lee (1989) proved that their conjecture is asymptotically true for truncated geometric population distributions. One shortcoming of the NPMLE, however, is that it has a considerable negative bias. In this study we propose a bias-corrected version of the NPMLE for virtually all realistic population distributions. We also show that it has a smaller asymptotic MSE than Good's estimator except when the population is very uniform. A Monte Carlo simulation was performed for small sample sizes, and the results support the asymptotic findings.
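
For reference, Good's (1953) estimator mentioned above estimates the probability that the next observation is a previously unseen species by n1/n, where n1 is the number of species observed exactly once in a sample of size n. A minimal sketch follows; the NPMLE and its bias correction studied in the paper are not reproduced.

```python
from collections import Counter

def goods_estimator(sample):
    """Estimate P(the next observation is a previously unseen species)."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)   # species seen exactly once
    return n1 / len(sample)

# Example: 10 observations of 5 species, 3 of which are seen exactly once.
print(goods_estimator(["a", "a", "b", "c", "c", "c", "d", "e", "a", "c"]))   # 0.3
```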
