• Title/Summary/Keyword: Probability Factor

Search Result 1,014

Bayesian Multiple Comparison of Binomial Populations based on Fractional Bayes Factor

  • Kim, Dal-Ho;Kang, Sang-Gil;Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.17 no.1
    • /
    • pp.233-244
    • /
    • 2006
  • In this paper, we develop a Bayesian multiple comparisons procedure for binomial distributions. We suggest a Bayesian procedure based on the fractional Bayes factor when noninformative priors are applied to the parameters. The proposed method is illustrated with an example; for this example, the method is straightforward to specify distributionally and to implement computationally, with output readily adapted for the required comparisons. Some simulation results are also presented.

  • PDF
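For a two-population special case, the fractional-Bayes-factor comparison described in the abstract can be sketched as follows. This is a minimal illustration under flat priors with the common minimal training fraction b = 2/n; the function name and the choice of b are assumptions for illustration, not the authors' code.

```python
from math import exp, lgamma

def log_beta(a, b):
    """log of the Beta function via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def fractional_bayes_factor(x1, n1, x2, n2, b=None):
    """Fractional Bayes factor of H1: p1 != p2 against H0: p1 == p2
    for two binomial samples (x1 successes in n1 trials, x2 in n2),
    under flat priors. The constant of an improper prior cancels in
    the full/fractional marginal-likelihood ratio."""
    if b is None:
        b = 2.0 / (n1 + n2)  # minimal training fraction
    x, n = x1 + x2, n1 + n2
    # H0: a single common p
    log_m0_full = log_beta(x + 1, n - x + 1)
    log_m0_frac = log_beta(b * x + 1, b * (n - x) + 1)
    # H1: independent p1 and p2
    log_m1_full = (log_beta(x1 + 1, n1 - x1 + 1)
                   + log_beta(x2 + 1, n2 - x2 + 1))
    log_m1_frac = (log_beta(b * x1 + 1, b * (n1 - x1) + 1)
                   + log_beta(b * x2 + 1, b * (n2 - x2) + 1))
    return exp((log_m1_full - log_m1_frac) - (log_m0_full - log_m0_frac))
```

A value above 1 favors unequal proportions; dividing each full-data marginal by the marginal of the b-th power of the likelihood is what makes the improper-prior constants cancel.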

Bayesian Multiple Comparison of Bivariate Exponential Populations based on Fractional Bayes Factor

  • Cho, Jang-Sik;Cho, Kil-Ho;Choi, Seung-Bae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.17 no.3
    • /
    • pp.843-850
    • /
    • 2006
  • In this paper, we consider the Bayesian multiple comparisons problem for K bivariate exponential populations, making inferences on the relationships among the parameters based on observations. We suggest a Bayesian procedure based on the fractional Bayes factor when noninformative priors are applied to the parameters. A numerical example is given to illustrate the procedure.

  • PDF

Unbiasedness or Statistical Efficiency: Comparison between One-stage Tobit of MLE and Two-step Tobit of OLS

  • Park, Sun-Young
    • International Journal of Human Ecology
    • /
    • v.4 no.2
    • /
    • pp.77-87
    • /
    • 2003
  • This paper constructs statistical and econometric models on the basis of economic theory in order to discuss statistical efficiency and unbiasedness, including the problem of correcting for sample selection bias. The comparative analytical tools were the one-stage Tobit estimated by maximum likelihood and Heckman's two-step Tobit estimated by ordinary least squares. Regarding the adequacy of the models for analyzing demand and choice, the results showed no substantial difference in explanatory variables between the first-stage selection model and the second-stage linear probability model. Since lambda, the self-selectivity correction factor in the Type II Tobit, is not statistically significant, there is no self-selectivity in the Type II Tobit model, indicating that the Type I Tobit model, a less complicated statistical method, would explain demand and choice better than the Type II model.
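The self-selectivity correction factor lambda referred to in the abstract is the inverse Mills ratio computed from the first-stage probit. A minimal stdlib sketch (the helper names are hypothetical, not from the paper):

```python
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    """Standard normal density phi(z)."""
    return exp(-z * z / 2) / sqrt(2 * pi)

def norm_cdf(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def inverse_mills(z):
    """lambda(z) = phi(z) / Phi(z): Heckman's self-selectivity
    correction term, evaluated at the fitted probit index and added
    as an extra regressor in the second-step OLS."""
    return norm_pdf(z) / norm_cdf(z)
```

An insignificant coefficient on lambda in the second step, as reported above, indicates no self-selectivity, so the simpler one-stage (Type I) Tobit suffices.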

A Study of IT Environment Scenario through the Application of Cross Impact Analysis (교차영향분석의 작용을 통한 국내 IT 환경 시나리오에 대한 연구)

  • Kim Jin-han;Kim Sung-hong
    • Korean Management Science Review
    • /
    • v.21 no.3
    • /
    • pp.129-147
    • /
    • 2004
  • Scenario analysis for strategic planning, unlike most forecasting methods, provides a qualitative, contextual description of how the present will evolve into the future. It normally tries to identify a set of possible futures, each of whose occurrence is plausible but not assured. In this paper, we propose the use of the Cross Impact Analysis (CIA) approach for scenario generation about the future of Korean IT environments. In this analysis, we classified IT environments into technical, social, legislative, and economic factors, and various variables and events were defined within each factor. From a survey of IT-related experts, we acquired probability-of-occurrence and compatibility estimates for every possible pair of events as input. A two-phase analysis was then used to select events with a high probability of occurrence and to generate scenarios. Finally, after CIA using Monte Carlo simulation, a detailed scenario for 2010 was developed. The scenario drawn from the CIA approach reflects the cross impacts of the various events.
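The Monte Carlo cross-impact step can be sketched as a toy simulation: events are drawn in order, and an event that occurs adjusts the conditional probabilities of later events through an impact factor. This is a deliberate simplification of CIA (which calibrates full conditional probabilities from the expert survey), and all names are illustrative.

```python
import random

def simulate_scenarios(base_probs, impact, runs=20000, seed=1):
    """Monte Carlo cross-impact simulation. When event j occurs, the
    probability of each later event k is multiplied by impact[j][k]
    (clipped to 1). Returns the simulated occurrence frequency of
    each event across all runs."""
    random.seed(seed)
    n = len(base_probs)
    counts = [0] * n
    for _ in range(runs):
        p = list(base_probs)
        for j in range(n):
            if random.random() < min(1.0, p[j]):
                counts[j] += 1
                for k in range(j + 1, n):
                    p[k] = min(1.0, p[k] * impact[j][k])
    return [c / runs for c in counts]
```

Events whose simulated frequencies come out high across runs are the ones retained for scenario construction.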

EXTRACTING LINEAR FACTORS IN FEYNMAN'S OPERATIONAL CALCULI : THE CASE OF TIME DEPENDENT NONCOMMUTING OPERATORS

  • Ahn, Byung-Moo
    • Bulletin of the Korean Mathematical Society
    • /
    • v.41 no.3
    • /
    • pp.573-587
    • /
    • 2004
  • Disentangling is the essential operation of Feynman's operational calculus for noncommuting operators. Thus formulas which simplify this operation are central to the subject. In a recent paper the procedure for 'extracting a linear factor' has been established in the setting of Feynman's operational calculus for time independent operators $A_1, ... , A_n$ and associated probability measures ${\mu}_1,..., {\mu}_n$. While the setting just described is natural in many circumstances, it is not natural for evolution problems. There the measures should not be restricted to probability measures and it is worthwhile to allow the operators to depend on time. The main purpose for this paper is to extend the procedure for extracting a linear factor to this latter setting. We should mention that Feynman's primary motivation for developing an operational calculus for noncommuting operators came from a desire to describe the evolution of certain quantum systems.

Intersymbol interference due to sampling-time jitter and its approximations in a raised cosine filtered system

  • 박영미;목진담;나상신
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.11
    • /
    • pp.2942-2953
    • /
    • 1996
  • This paper studies the effect of intersymbol interference due to sampling-time jitter on the worst-case bit error probability in digital modulation over an additive white Gaussian noise channel, with square-root raised-cosine filters in the transmitter and the receiver. It derives approximation formulas using Taylor series approximations. The principal result of this paper is the relationship between the worst-case bit error probability, the degree of jitter, the roll-off factor of the raised cosine filter, and other quantities. Numerical results show, as expected, that the intersymbol interference decreases as the roll-off factor increases and the jitter decreases. They also show that the approximation formulas are accurate for small intersymbol interference, i.e., for signal-to-noise ratios $E_b/N_0 \le 7$ dB, and begin to lose accuracy for larger signal-to-noise ratios.

  • PDF
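The qualitative finding above can be illustrated with a small stdlib sketch that sums the raised-cosine impulse response at jittered sampling instants. This reproduces only the trend (ISI shrinks as the roll-off factor grows and as jitter shrinks), not the paper's Taylor-series approximation; the function names and parameters are assumptions.

```python
from math import cos, pi, sin

def raised_cosine(t, T=1.0, beta=0.5):
    """Raised-cosine impulse response with symbol period T and
    roll-off factor beta in [0, 1]."""
    x = t / T
    if x == 0.0:
        return 1.0
    denom = 1.0 - (2.0 * beta * x) ** 2
    if abs(denom) < 1e-9:
        # limiting value at t = +/- T / (2*beta)
        return (beta / 2.0) * sin(pi / (2.0 * beta))
    return (sin(pi * x) / (pi * x)) * cos(pi * beta * x) / denom

def worst_case_isi(eps, beta, K=200, T=1.0):
    """Worst-case ISI magnitude when sampling at t = (k + eps) * T:
    sum of |h| over all symbols other than the desired one. With no
    jitter (eps = 0) the zero crossings make this exactly zero."""
    return sum(abs(raised_cosine((k + eps) * T, T, beta))
               for k in range(-K, K + 1) if k != 0)
```

Larger roll-off makes the impulse-response tails decay faster, which is why the ISI at a given jitter falls as beta rises.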

Polarizations and Electrical Properties of PMS-PZT Ferroelectric Materials (PMS-PZT계 강유전 재료의 분극과 전기적 특성)

  • Kim, J.R.;Kim, H.S.;Lee, H.Y.;Oh, Y.W.
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.17 no.12
    • /
    • pp.1314-1319
    • /
    • 2004
  • Rosen-type piezo-transformers were prepared and their electrical properties were investigated in order to establish the optimum parameters in the poling process for ferroelectric materials. Polarization readily developed with increasing external energy, such as applied voltage, time, and temperature, so that the planar coupling factor and voltage gain were saturated under conditions of over $140^{\circ}C$ and an applied voltage and time of 4 kV/mm and 3 minutes, respectively. An empirical equation for the domain rotation probability, proportional to the square of the applied voltage, to temperature, and to the square root of time, was defined as a function of the above parameters.

Bayesian multiple comparisons in Freund's bivariate exponential populations with type I censored data

  • Cho, Jang-Sik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.3
    • /
    • pp.569-574
    • /
    • 2010
  • We consider a two-component system which has Freund's bivariate exponential model. In this case, a Bayesian multiple comparisons procedure for failure rates is suggested for K Freund's bivariate exponential populations. Here we assume that the components enter the study at random over time and the analysis is carried out at some prespecified time. We derive the fractional Bayes factor for all comparisons under noninformative priors for the parameters and calculate the posterior probabilities for all hypotheses. We then select the hypothesis with the highest posterior probability as the best model. Finally, a numerical example is given to illustrate the procedure.
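Selecting the hypothesis with the highest posterior probability, as described above, reduces to normalizing the Bayes factors when all hypotheses have equal prior probability. A generic sketch (function names are assumptions, and the fractional Bayes factors themselves are taken as given):

```python
def posterior_probabilities(bayes_factors):
    """Posterior probabilities P(H_i | data) from Bayes factors B_i1
    of each hypothesis against a common reference H_1 (so B_11 = 1),
    assuming equal prior probabilities for all hypotheses."""
    total = sum(bayes_factors)
    return [b / total for b in bayes_factors]

def best_hypothesis(bayes_factors):
    """Index of the hypothesis with the highest posterior probability."""
    probs = posterior_probabilities(bayes_factors)
    return max(range(len(probs)), key=probs.__getitem__)
```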

Determination and application of the weights for landslide susceptibility mapping using an artificial neural network

  • Lee, Moung-Jin;Won, Joong-Sun;Yu, Young-Tae
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2003.04a
    • /
    • pp.71-76
    • /
    • 2003
  • The purpose of this study is the development, application, and assessment of probability and artificial neural network methods for assessing landslide susceptibility in a chosen study area. As the basic analysis tool, a Geographic Information System (GIS) was used for spatial data management. A probability method was used for calculating the rating of the relative importance of each factor class to landslide occurrence. For calculating the weight of the relative importance of each factor to landslide occurrence, an artificial neural network method was developed. Using these methods, the landslide susceptibility index was calculated from the ratings and weights, and a landslide susceptibility map was produced from the index. The results of the landslide susceptibility analysis, with and without weights, were confirmed by comparison with the landslide location data; the comparison result with weighting was better than the result without weighting. The calculated weights and ratings can be used for landslide susceptibility mapping.

  • PDF
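The rating-and-weight combination described above can be sketched as follows. The frequency-ratio form of the class rating and the weighted sum are common formulations in this literature; the abstract does not spell out its exact formulas, so treat both functions as illustrative.

```python
def frequency_ratio(class_cells, class_landslides, total_cells, total_landslides):
    """Probability-based rating of one factor class: landslide density
    inside the class relative to the overall landslide density. Values
    above 1 mean the class is more landslide-prone than average."""
    return (class_landslides / total_landslides) / (class_cells / total_cells)

def susceptibility_index(ratings, weights):
    """Landslide susceptibility index at one grid cell: the ratings of
    the factor classes present at the cell, combined with the
    per-factor weights (here, from the neural network)."""
    return sum(w * r for w, r in zip(weights, ratings))
```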

A Predictive Two-Group Multinormal Classification Rule Accounting for Model Uncertainty

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society
    • /
    • v.26 no.4
    • /
    • pp.477-491
    • /
    • 1997
  • A new predictive classification rule for assigning future cases into one of two multivariate normal populations (with an unknown normal mixture model) is considered. The development involves calculating the posterior probability of each possible normal-mixture model via a default Bayesian test criterion, called the intrinsic Bayes factor, and suggests a predictive distribution for future cases to be classified that accounts for model uncertainty by weighting the effect of each model by its posterior probability. In this paper, our interest is focused on constructing a classification rule that takes care of uncertainty about the types of covariance matrices (homogeneity/heterogeneity) involved in the model. For the constructed rule, a Monte Carlo simulation study demonstrates routine application and notes benefits over the traditional predictive classification rule of Geisser (1982).

  • PDF
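The model-averaging step above can be sketched generically: each candidate model's predictive density is weighted by its posterior model probability before Bayes' rule is applied over the two groups. This is a schematic with hypothetical inputs, not the paper's intrinsic-Bayes-factor computation.

```python
def model_averaged_group1_prob(dens_g1, dens_g2, model_probs, prior_g1=0.5):
    """P(group 1 | x). dens_g1[m] and dens_g2[m] are the predictive
    densities of the new case x under group 1 and group 2 for candidate
    model m (e.g. homogeneous vs. heterogeneous covariances);
    model_probs[m] are the posterior model probabilities."""
    d1 = sum(p * d for p, d in zip(model_probs, dens_g1))
    d2 = sum(p * d for p, d in zip(model_probs, dens_g2))
    return prior_g1 * d1 / (prior_g1 * d1 + (1.0 - prior_g1) * d2)
```

Assigning x to group 1 whenever this probability exceeds 0.5 gives a rule that hedges over the covariance-structure uncertainty instead of committing to one model.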