• Title/Summary/Keyword: probability of correct selection


Comparison of Several Populations with a Control Involving Folded Normal Distributions

  • Lee, Seung-Ho;Lee, Kang-Sup
    • Journal of the Korean Statistical Society
    • /
    • v.11 no.1
    • /
    • pp.45-58
    • /
    • 1982
  • The problem of comparing k normal populations with a control (or a standard) in terms of the absolute values of their means is considered. Under the indifference-zone formulation, a single-stage and a two-stage procedure for selecting the best are proposed, for the cases where the common variance is known and unknown, respectively. The procedures guarantee that the probability of correct selection is not less than a preassigned lower limit. Selected tables necessary to implement the procedures are provided.

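The indifference-zone guarantee described above can be made concrete with a small simulation. The sketch below (not the paper's tabulated procedure) estimates the probability of correct selection for a naive single-stage rule that picks the population whose sample mean is largest in absolute value, under a configuration where the best population exceeds the others by an indifference margin delta; all sample sizes and constants are illustrative.

```python
# A minimal Monte Carlo sketch (not the paper's tabulated procedure): estimate the
# probability of correct selection, P(CS), for a naive single-stage rule that picks
# the population with the largest |sample mean|, under a configuration where the
# "best" population exceeds the others by the indifference margin delta in |mean|.
import numpy as np

rng = np.random.default_rng(0)

def estimate_pcs(k=4, n=20, delta=0.5, sigma=1.0, reps=20_000):
    """P(CS) when population k has |mean| = delta and the others have mean 0."""
    means = np.zeros(k)
    means[-1] = delta                                    # the best population
    correct = 0
    for _ in range(reps):
        xbar = rng.normal(means, sigma / np.sqrt(n))     # one sample mean per population
        if np.argmax(np.abs(xbar)) == k - 1:
            correct += 1
    return correct / reps

for n in (10, 20, 40, 80):
    print(f"n = {n:3d}  estimated P(CS) = {estimate_pcs(n=n):.3f}")
```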

Bayesian Method for the Multiple Test of an Autoregressive Parameter in Stationary AR(1) Model (AR(1)모형에서 자기회귀계수의 다중검정을 위한 베이지안방법)

  • 김경숙;손영숙
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.1
    • /
    • pp.141-150
    • /
    • 2003
  • This paper presents a multiple testing method for the autoregressive parameter in a stationary AR(1) model using the usual Bayes factor. As prior distributions of the parameters in each model, a uniform prior and noninformative improper priors are assumed. Posterior probabilities derived from the usual Bayes factors are used for model selection. Finally, to check whether these theoretical results are correct, simulated data and real data are analyzed.
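
As a rough illustration of selecting between hypotheses on the AR(1) coefficient via posterior probabilities, the sketch below approximates the Bayes factor for H0: rho = 0 versus H1: rho != 0 with the BIC (Schwarz) approximation on simulated data; the paper's uniform and noninformative improper priors are not reproduced here.

```python
# A rough sketch (not the paper's priors): approximate the Bayes factor for
# H0: rho = 0 versus H1: rho != 0 in a stationary AR(1) model x_t = rho*x_{t-1} + e_t
# using the BIC (Schwarz) approximation, then convert it to posterior model
# probabilities under equal prior odds.
import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) data with rho = 0.4 (illustrative only)
n, rho_true = 200, 0.4
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho_true * x[t - 1] + rng.normal()

y, lag = x[1:], x[:-1]

def loglik(rho):
    """Gaussian log-likelihood (conditional on x_0), profiled over sigma^2."""
    resid = y - rho * lag
    sigma2 = np.mean(resid ** 2)
    return -0.5 * len(resid) * (np.log(2 * np.pi * sigma2) + 1)

rho_hat = np.sum(y * lag) / np.sum(lag ** 2)   # conditional MLE of rho

# BIC for each model: H0 estimates sigma^2 only; H1 estimates rho and sigma^2
bic0 = -2 * loglik(0.0) + 1 * np.log(len(y))
bic1 = -2 * loglik(rho_hat) + 2 * np.log(len(y))

bf10 = np.exp(-0.5 * (bic1 - bic0))            # approximate Bayes factor B_10
post_h1 = bf10 / (1 + bf10)                    # P(H1 | data) under equal prior odds
print(f"rho_hat = {rho_hat:.3f}, approx BF_10 = {bf10:.2f}, P(H1|data) = {post_h1:.3f}")
```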

A Two-Stage Elimination Type Selection Procedure for Stochastically Increasing Distributions: With an Application to Scale Parameters Problem

  • Lee, Seung-Ho
    • Journal of the Korean Statistical Society
    • /
    • v.19 no.1
    • /
    • pp.24-44
    • /
    • 1990
  • The purpose of this paper is to extend the idea of Tamhane and Bechhofer (1977, 1979) concerning the normal means problem to some general class of distributions. The key idea in Tamhane and Bechhofer is the derivation of the computable lower bounds on the probability of a correct selection. To derive such lower bounds, they used the specific covariance structure of a multivariate normal distribution. It is shown that such lower bounds can be obtained for a class of stochastically increasing distributions under certain conditions, which is sufficiently general so as to include the normal means problem as a special application. As an application of the general theory to the scale parameters problem, a two-stage elimination type procedure for selecting the population associated with the smallest variance from among several normal populations is proposed. The design constants are tabulated and the relative efficiencies are computed.

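To make the elimination idea above concrete, here is a schematic two-stage rule for picking the normal population with the smallest variance: a first stage screens out populations whose sample variance is far above the current minimum, and a second stage decides among the survivors. The stage sizes and the screening constant h are placeholders, not the paper's tabulated design constants.

```python
# A schematic sketch of a two-stage elimination-type rule for selecting the normal
# population with the smallest variance. The constants n1, n2, and h below are
# placeholders for illustration, not the paper's tabulated design constants.
import numpy as np

rng = np.random.default_rng(2)

def two_stage_select(populations, n1=15, n2=30, h=1.8):
    """populations: list of callables, each returning a sample of a given size."""
    # Stage 1: n1 observations from each population.
    stage1 = [pop(n1) for pop in populations]
    s1 = np.array([np.var(obs, ddof=1) for obs in stage1])
    # Eliminate populations whose first-stage variance is far above the minimum.
    survivors = [i for i, v in enumerate(s1) if v <= h * s1.min()]
    if len(survivors) == 1:
        return survivors[0]
    # Stage 2: n2 additional observations from each survivor; select the population
    # with the smallest variance computed from the combined sample.
    s2 = {i: np.var(np.concatenate([stage1[i], populations[i](n2)]), ddof=1)
          for i in survivors}
    return min(s2, key=s2.get)

# Example: three normal populations; the second truly has the smallest variance.
sigmas = [1.5, 1.0, 2.0]
pops = [lambda n, s=s: rng.normal(0.0, s, size=n) for s in sigmas]
picks = [two_stage_select(pops) for _ in range(2000)]
print("estimated P(CS):", np.mean(np.array(picks) == int(np.argmin(sigmas))))
```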

Comparison of confidence intervals for testing probabilities of a system (시스템의 확률 값 시험을 위한 신뢰구간 비교 분석)

  • Hwang, Ik-Soon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.5
    • /
    • pp.435-443
    • /
    • 2010
  • When testing systems that incorporate probabilistic behavior, it is necessary to apply test inputs a number of times in order to give a test verdict. Interval estimation can be used to assert the correctness of probabilities, and the selection of the confidence interval is one of the important issues for the quality of testing. The Wald interval has been widely accepted for interval estimation. In this paper, we compare the Wald interval and the Agresti-Coull interval for various sample sizes. The comparison is carried out based on the test pass probability of correct implementations and the test fail probability of incorrect implementations when these confidence intervals are used for probability testing. We consider two-sided confidence intervals to check whether the probability is close to a given value. One-sided confidence intervals are also considered, in order to check whether the probability is not less than a given value. When testing probabilities using two-sided confidence intervals, we recommend the Agresti-Coull interval. For one-sided confidence intervals, the Agresti-Coull interval is recommended when the sample size is large, while either of the two confidence intervals can be used for small sample sizes.
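
For reference, the two intervals compared above can be computed directly. The sketch below implements the standard two-sided Wald and Agresti-Coull intervals for a binomial proportion; the example counts are arbitrary and not taken from the paper.

```python
# A minimal sketch of the two-sided Wald and Agresti-Coull confidence intervals
# for a binomial proportion, given x observed passes out of n trials.
from statistics import NormalDist
import math

def wald_interval(x, n, conf=0.95):
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def agresti_coull_interval(x, n, conf=0.95):
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n_tilde = n + z ** 2                       # adjusted sample size
    p_tilde = (x + z ** 2 / 2) / n_tilde       # shrunken point estimate
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half

# Example: 7 passes observed in 20 trials of a probabilistic system
for name, ci in (("Wald", wald_interval(7, 20)),
                 ("Agresti-Coull", agresti_coull_interval(7, 20))):
    print(f"{name:14s} 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```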

Three-dimensional object recognition using efficient indexing: Part I - Bayesian indexing (효율적인 인덱싱 기법을 이용한 3차원 물체 인식: Part I - Bayesian 인덱싱)

  • 이준호
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.10
    • /
    • pp.67-75
    • /
    • 1997
  • A design for a system to perform rapid recognition of three-dimensional objects is presented, focusing on efficient indexing. In order to retrieve the best-matched models without exploring all possible object matches, we employ a Bayesian framework. A decision-theoretic measure of the discriminatory power of a feature for a model object is defined in terms of posterior probability. The detectability of a feature, defined as a function of the feature itself, the viewpoint, the sensor characteristics, and the feature detection algorithm(s), is also considered in the computation of discriminatory power. In order to speed up the indexing or selection of correct objects, we generate and verify object hypotheses for features detected in a scene in the order of the discriminatory power of these features for the model objects.

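The Bayesian indexing idea, ranking candidate model objects by their posterior probability given a detected feature and verifying hypotheses in that order, can be sketched as follows. The priors, likelihood values, and feature names are hypothetical placeholders rather than quantities from the paper.

```python
# An illustrative sketch of Bayesian indexing: rank candidate model objects by
# posterior probability given a detected feature, and generate/verify hypotheses
# in that order. All numbers and names below are hypothetical placeholders.
# P(model | feature) is proportional to P(feature | model) * P(model).
priors = {"cup": 0.3, "box": 0.5, "cone": 0.2}

# P(feature | model): how likely each model object is to give rise to the detected
# feature, folding in viewpoint and detectability effects (here just fixed numbers).
likelihood = {
    "circular_edge": {"cup": 0.70, "box": 0.05, "cone": 0.40},
    "right_angle":   {"cup": 0.10, "box": 0.80, "cone": 0.15},
}

def posterior_ranking(feature):
    joint = {m: likelihood[feature][m] * priors[m] for m in priors}
    total = sum(joint.values())
    post = {m: p / total for m, p in joint.items()}
    # Verify hypotheses for the highest-posterior (most discriminative) models first.
    return sorted(post.items(), key=lambda kv: kv[1], reverse=True)

print(posterior_ranking("circular_edge"))
# [('cup', ...), ('cone', ...), ('box', ...)] -- hypotheses generated in this order
```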

Swell Correction of Shallow Marine Seismic Reflection Data Using Genetic Algorithms

  • Park, Sung-Hoon;Kong, Young-Sae;Kim, Hee-Joon;Lee, Byung-Gul
    • Journal of the Korean Society of Oceanography
    • /
    • v.32 no.4
    • /
    • pp.163-170
    • /
    • 1997
  • Some CMP gathers acquired from a shallow marine seismic reflection survey offshore Korea do not show the hyperbolic trend of moveout. This originates from the so-called swell effect of the source and streamer, which are towed beneath a rough sea surface during data acquisition. The observed time deviations of the NMO-corrected traces can be entirely ascribed to the swell effect. To correct these time deviations, a residual statics approach using Genetic Algorithms (GA) is introduced into the swell correction. GA, a class of global optimization methods recently developed in the field of Artificial Intelligence, resembles the genetic evolution of biological systems. The basic idea in using GA as an optimization method is to represent a population of possible solutions or models in a chromosome-type encoding and to manipulate these encoded models through simulated reproduction, crossover, and mutation. The GA parameters used in this paper are as follows: population size Q = 40, probability of multiple-point crossover P_c = 0.6, mutation probability P_m varying linearly from 0.002 to 0.004, and a gray-code representation. The number of models participating in tournament selection (nt) is 3, and the number of expected copies desired for the best population member in the scaling of fitness is 1.5. With these parameters, the optimization run was iterated for 101 generations. This combination of parameters was found to be optimal for the convergence of the algorithm. The resulting reflection events in every NMO-corrected CMP gather show good alignment, and the quality of the stack section is enhanced.

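A compact genetic-algorithm sketch of this kind of residual-statics alignment is shown below. It reuses the headline settings quoted in the abstract (population size 40, crossover probability 0.6, tournament size 3, roughly 100 generations) but substitutes a simple integer-shift encoding and one-point crossover for the gray-code, multiple-point scheme, so it illustrates the approach rather than reproducing the paper's implementation.

```python
# An illustrative GA for residual-statics-style alignment (a sketch of the general
# approach, not the paper's encoding or parameter schedule). Each chromosome holds
# one integer time shift per trace; fitness is the energy of the stack after the
# shifts are applied, which is maximal when the traces are aligned.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "CMP gather": one wavelet per trace, displaced by an unknown swell shift
n_traces, n_samples, max_shift = 12, 200, 10
t = np.arange(n_samples)
wavelet = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
true_shifts = rng.integers(-max_shift, max_shift + 1, size=n_traces)
gather = np.array([np.roll(wavelet, s) for s in true_shifts])
gather += 0.05 * rng.normal(size=gather.shape)

def fitness(shifts):
    stack = np.sum([np.roll(tr, -s) for tr, s in zip(gather, shifts)], axis=0)
    return np.sum(stack ** 2)            # stack power: high when traces align

# GA settings: population 40, tournament size 3, crossover probability 0.6
pop_size, n_gen, p_cross, p_mut = 40, 101, 0.6, 0.003
population = rng.integers(-max_shift, max_shift + 1, size=(pop_size, n_traces))

def tournament(fits, nt=3):
    idx = rng.integers(0, pop_size, size=nt)
    return population[idx[np.argmax(fits[idx])]].copy()

for gen in range(n_gen):
    fits = np.array([fitness(ind) for ind in population])
    new_pop = [population[np.argmax(fits)].copy()]          # keep the best member
    while len(new_pop) < pop_size:
        a, b = tournament(fits), tournament(fits)
        if rng.random() < p_cross:                           # one-point crossover
            cut = rng.integers(1, n_traces)
            a = np.concatenate([a[:cut], b[cut:]])
        mutate = rng.random(n_traces) < p_mut                # random-reset mutation
        a[mutate] = rng.integers(-max_shift, max_shift + 1, size=mutate.sum())
        new_pop.append(a)
    population = np.array(new_pop)

best = population[np.argmax([fitness(ind) for ind in population])]
# Note: recovered shifts may differ from the true ones by a common constant,
# since only relative alignment affects the stack power.
print("recovered shifts:", best)
print("true shifts:     ", true_shifts)
```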

An Empirical Study on the Relevance of Technology Finance Supporting Business for Technologically Innovative SMEs (혁신형 중소기업 기술금융 지원사업의 적절성에 대한 실증연구)

  • Sung, Oong-Hyun
    • Journal of Korea Technology Innovation Society
    • /
    • v.16 no.1
    • /
    • pp.303-322
    • /
    • 2013
  • The relevance of the technology financing support business for technologically innovative SMEs is strongly required for its continuous expansion and development. This study analyzes empirically whether the selection of recipient firms for technology financing has been performed in accordance with its objectives and purposes. Results show that the probability of receiving technology financing increases with higher technology rankings and a higher operating income ratio. On the other hand, the probability of obtaining financing tends to decrease gradually as the size of capital and the age of the firm increase. Results also show that technology rankings and the firm's major characteristics significantly affect the decision-making of technology financing. Several useful comments are suggested to improve the relevance of the technology financing, since the correct classification rate, which reflects the appropriateness of the model, is not at a high level. In addition, technology rankings are not correlated with the amount of financing in the regression analysis. These research results will contribute to ensuring the appropriateness and credibility of technology financing decision-making.

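The abstract suggests a binary-choice model for the probability of receiving financing together with a correct classification rate. A hedged sketch of that kind of analysis, on purely simulated data with hypothetical variable names (not the study's data set), might look as follows.

```python
# A hedged sketch of the kind of model implied by the abstract: a logistic
# regression for the probability of receiving technology financing, evaluated with
# a correct classification rate. Variable names and data are purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
tech_ranking = rng.normal(size=n)          # higher = better technology rating
oper_income = rng.normal(size=n)           # operating income ratio
capital_size = rng.normal(size=n)
firm_age = rng.normal(size=n)

# Simulated selection: more likely with high ranking/income, less likely with size/age
logit_p = 0.8 * tech_ranking + 0.5 * oper_income - 0.3 * capital_size - 0.2 * firm_age
received = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([tech_ranking, oper_income, capital_size, firm_age]))
fit = sm.Logit(received, X).fit(disp=False)
print(fit.params)                          # estimated coefficients

pred = (fit.predict(X) > 0.5).astype(int)  # classify at the 0.5 threshold
print("correct classification rate:", np.mean(pred == received))
```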

A Review of the Types and Characteristics of Healthy Life Expectancy and Methodological Issues

  • Kim, Young-Eun;Jung, Yoon-Sun;Ock, Minsu;Yoon, Seok-Jun
    • Journal of Preventive Medicine and Public Health
    • /
    • v.55 no.1
    • /
    • pp.1-9
    • /
    • 2022
  • An index that evaluates the health level of a population group considering both death and loss of function due to disease is called a summary measure of population health (SMPH). SMPHs are broadly divided into life year indices and life expectancy indices, the latter of which comprise healthy life expectancy (HLE). HLE is included as a policy target in various national- and regional-level healthcare plans, and the term "HLE" is commonly used in academia and by the public. However, the overall level of understanding of HLE, such as its precise definition and the methods used to calculate it, still seems to be low. As discussed in this study, the types of HLE are classified into disability-free life expectancy, disease-free life expectancy, quality-adjusted life expectancy, self-rated HLE, and disability-adjusted life expectancy. Their characteristics are examined to facilitate a correct understanding and appropriate utilization of HLE. In addition, the Sullivan method, as a representative method for calculating HLE, is presented in detail, and major issues in the process of calculating HLE, such as selection of the population group and age group, estimation of death probability, calculation of life years, and incorporation of health weights, are reviewed. This study will help researchers to select an appropriate HLE type and evaluate the validity of HLE research results, and it is expected to contribute to the vitalization of HLE research.
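
A minimal worked example of the Sullivan method mentioned above: life-table person-years L_x are weighted by the proportion free of ill health (1 - pi_x) and divided by the survivors l_x. The abridged life table and prevalence values in the sketch are hypothetical, chosen only to make the calculation concrete.

```python
# A minimal sketch of the Sullivan method: combine life-table person-years (L_x)
# with the prevalence of ill health (pi_x) to obtain healthy life expectancy.
# The abridged life table and prevalence values below are hypothetical.
import numpy as np

age_start = [0, 15, 30, 45, 60, 75]                                     # age-group lower bounds
l_x = np.array([100000, 99000, 98000, 95500, 88000, 65000])             # survivors at age x
L_x = np.array([1489000, 1477000, 1453000, 1381000, 1165000, 610000])   # person-years lived
pi_x = np.array([0.02, 0.04, 0.07, 0.12, 0.25, 0.45])                   # prevalence of ill health

def life_expectancy(i):
    return L_x[i:].sum() / l_x[i]

def healthy_life_expectancy(i):
    # Sullivan: weight each interval's person-years by the healthy proportion.
    return (L_x[i:] * (1 - pi_x[i:])).sum() / l_x[i]

for i, a in enumerate(age_start):
    print(f"age {a:2d}: LE = {life_expectancy(i):5.1f}, HLE = {healthy_life_expectancy(i):5.1f}")
```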

Comprehension and application of Tobit and Heckit models for censored data (절단자료에 대한 Tobit과 Heckit 모형의 이해와 활용)

  • Kim, Jeonghwan;Jang, Mina;Cho, Hyungjun
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.3
    • /
    • pp.357-370
    • /
    • 2022
  • In this paper, the Tobit and Heckit models, which have been used for analyzing censored data, are introduced. Censoring occurs at a specific point, so that a large number of observations are concentrated there with positive probability. Censoring can arise from observation limits or from exogenous variables. Tobit and Heckit models are used to correct the sample selection bias that can occur when an ordinary linear regression model is fitted to censored data. However, the difference between the two models is not clearly accounted for, and hence they have often been used interchangeably. Therefore, the suitability of the two models was validated through simulated data and demonstrated on real data. As a result, it was confirmed that both the Tobit and Heckit models fit data censored due to observation limits well, with the Tobit model being the more parsimonious fit. In contrast, only the Heckit model fits data censored due to exogenous variables well.
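
As a generic illustration of the Tobit side of this comparison (not the paper's analysis), the sketch below fits a type-I Tobit model with left-censoring at zero by maximum likelihood on simulated data.

```python
# A minimal sketch of a type-I Tobit fit by maximum likelihood, with left-censoring
# at zero, on simulated data (illustrative only).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
y_star = 0.5 + 1.2 * x + rng.normal(scale=1.0, size=n)   # latent outcome
y = np.maximum(y_star, 0.0)                              # observed outcome, censored at 0

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    uncens = y > 0
    ll = np.sum(stats.norm.logpdf(y[uncens], loc=mu[uncens], scale=sigma))
    ll += np.sum(stats.norm.logcdf(-mu[~uncens] / sigma))  # P(y* <= 0) for censored cases
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0, b1, log_sigma = res.x
print(f"Tobit MLE: intercept = {b0:.3f}, slope = {b1:.3f}, sigma = {np.exp(log_sigma):.3f}")
# An OLS fit to the censored y would be biased toward zero; the Tobit MLE corrects this.
```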

A Review of the Neurocognitive Mechanisms for Mathematical Thinking Ability (수학적 사고력에 관한 인지신경학적 연구 개관)

  • Kim, Yon Mi
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.2
    • /
    • pp.159-219
    • /
    • 2016
  • Mathematical ability is important for academic achievement and technological innovation in the STEM disciplines. This study concentrated on the relationship between the neural basis of mathematical cognition and its mechanisms. These cognitive functions include domain-specific abilities, such as numerical skills and visuospatial abilities, as well as domain-general abilities, which include language, long-term memory, and working memory capacity. Individuals can perform higher cognitive functions, such as abstract thinking and reasoning, based on these basic cognitive functions. The next topic covered in this study concerns individual differences in mathematical abilities. Neural efficiency theory was incorporated in this study to view mathematical talent. According to the theory, a person with mathematical talent uses his or her brain more efficiently than the average person, whose performance requires a more effortful endeavour. Mathematically gifted students show different brain activity compared to average students: interhemispheric and intrahemispheric connectivities are enhanced in those students, particularly in the right hemisphere along the fronto-parietal longitudinal fasciculus. The third topic deals with growth and development in mathematical capacity. As individuals mature, practice mathematical skills, and gain knowledge, such changes are reflected in cortical activation, including changes in the activation level, redistribution, and reorganization of the supporting cortex. Among these, reorganization can be related to neural plasticity. Neural plasticity has been observed in professional mathematicians and in children with mathematical learning disabilities. The last topic concerns mathematical creativity, viewed from the perspective of Neural Darwinism. When the brain is faced with a novel problem, it needs to collect all of the necessary concepts (knowledge) from long-term memory, make multitudes of connections, and test which ones have the highest probability of helping to solve the unusual problem. Having followed these brain-modifying steps, when the brain finally finds the correct response to the novel problem, the final response comes in the form of inspiration. For a novice, the first step, the acquisition of a knowledge structure, is the most important; as expertise increases, the latter two stages, making connections and selection, become more important.