• Title/Summary/Keyword: null hypothesis


Bayesian test of homogeneity in small areas: A discretization approach

  • Kim, Min Sup;Nandram, Balgobin;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1547-1555
    • /
    • 2017
  • This paper studies a Bayesian test of homogeneity in contingency tables made by discretizing a continuous variable. When we consider events of interest in a small area setup, we can sometimes take a discretization approach to the continuous variable. If we discretize the continuous variable properly, we can find otherwise invisible relationships between areas (groups) and the continuous variable of interest. Proper discretization can support the alternative hypothesis of the homogeneity test in contingency tables even if the null hypothesis was not rejected by k-sample tests such as one-way ANOVA. In other words, the proportions of observations at a particular level can vary from group to group after discretization. Once the continuous variable is discretized, the problem can be treated as an analysis of a contingency table, for which the chi-squared test is the most commonly employed method. However, finer discretization gives rise to more cells in the table; as a result, the counts in the cells become smaller and the accuracy of the test becomes lower. To prevent this, we can consider a Bayesian approach and apply it to the setup of the homogeneity test.
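
A minimal sketch of the frequentist baseline this abstract contrasts with (not the authors' Bayesian procedure): a continuous variable is discretized into bins and a chi-squared test of homogeneity is run on the resulting groups-by-bins contingency table. The simulated data, bin edges, and group sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data: a continuous outcome measured in three small areas (groups).
groups = {
    "area_A": rng.normal(0.0, 1.0, 60),
    "area_B": rng.normal(0.3, 1.0, 55),
    "area_C": rng.normal(0.6, 1.0, 50),
}

# One-way ANOVA: the k-sample test mentioned in the abstract.
f_stat, anova_p = stats.f_oneway(*groups.values())

# Discretize the continuous variable using quartiles of the pooled data,
# then build the groups-by-bins contingency table.
pooled = np.concatenate(list(groups.values()))
edges = np.quantile(pooled, [0.0, 0.25, 0.5, 0.75, 1.0])
table = np.array([np.histogram(x, bins=edges)[0] for x in groups.values()])

# Chi-squared test of homogeneity on the contingency table.
chi2, chi2_p, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA F = {f_stat:.2f} (p = {anova_p:.3f}); "
      f"chi-squared = {chi2:.2f} (p = {chi2_p:.3f}, dof = {dof})")
# Finer binning creates more cells with smaller counts, which degrades the
# chi-squared approximation and motivates the Bayesian approach of the paper.
```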

Genetic association tests when a nuisance parameter is not identifiable under no association

  • Kim, Wonkuk;Kim, Yeong-Hwa
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.6
    • /
    • pp.663-671
    • /
    • 2017
  • Some genetic association tests include an unidentifiable nuisance parameter under the null hypothesis of no association. When the mode of inheritance (MOI) is not specified in a case-control design, the Cochran-Armitage (CA) trend test contains an unidentifiable nuisance parameter. The transmission disequilibrium test (TDT) in a family-based association study that includes the unaffected also contains an unidentifiable nuisance parameter. Hypothesis tests that include an unidentifiable nuisance parameter are typically performed by taking a supremum of the CA tests or TDT over reasonable values of the parameter. The p-values of the supremum test statistics cannot be obtained from a normal or chi-square distribution. A common method is to use Davies's upper bound on the p-value instead of an exact asymptotic p-value. In this paper, we provide a unified sine-cosine process expression of the CA trend test that does not specify the MOI and the TDT that includes the unaffected. We also present a closed form expression of the exact asymptotic formulas to calculate the p-values of the supremum tests when the score function can be written as a linear form in an unidentifiable parameter. We illustrate how to use the derived formulas using a pharmacogenetics case-control dataset and an attention deficit hyperactivity disorder family-based example.
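
A minimal sketch of the supremum-type construction described in the abstract, not the authors' sine-cosine representation or exact asymptotics: the Cochran-Armitage trend statistic is evaluated over a grid of mode-of-inheritance scores x in [0, 1] (the unidentifiable nuisance parameter) and the maximum absolute statistic is taken. The genotype counts are made-up illustrative numbers, and the p-value of the supremum requires Davies's bound or an exact formula rather than a plain normal reference.

```python
import numpy as np

# Illustrative case/control genotype counts (AA, Aa, aa) -- made-up numbers.
cases = np.array([30.0, 45.0, 25.0])
controls = np.array([50.0, 40.0, 10.0])

def ca_trend_z(cases, controls, x):
    """Cochran-Armitage trend statistic with scores (0, x, 1); x is the MOI parameter."""
    t = np.array([0.0, x, 1.0])
    n = cases + controls
    N, R = n.sum(), cases.sum()
    # Score statistic and its (conditional) null variance.
    u = np.sum(t * (cases - n * R / N))
    var = R * (N - R) / (N * (N - 1)) * (np.sum(t**2 * n) - np.sum(t * n) ** 2 / N)
    return u / np.sqrt(var)

# Supremum over a grid of nuisance-parameter values.
grid = np.linspace(0.0, 1.0, 101)
z_vals = np.array([ca_trend_z(cases, controls, x) for x in grid])
sup_stat = np.max(np.abs(z_vals))

print(f"sup |Z| = {sup_stat:.3f} at x = {grid[np.argmax(np.abs(z_vals))]:.2f}")
# Note: the naive normal p-value for sup |Z| is anti-conservative; use Davies's
# upper bound or an exact asymptotic formula, as discussed in the paper.
```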

P56 LCK Inhibitor Identification by Pharmacophore Modelling and Molecular Docking

  • Bharatham, Nagakumar;Bharatham, Kavitha;Lee, Keun-Woo
    • Bulletin of the Korean Chemical Society
    • /
    • v.28 no.2
    • /
    • pp.200-206
    • /
    • 2007
  • Pharmacophore models for lymphocyte-specific protein tyrosine kinase (P56 LCK) were developed using CATALYST HypoGen with a training set comprising 25 different P56 LCK inhibitors. The best quantitative pharmacophore hypothesis comprises one hydrogen bond acceptor, one hydrogen bond donor, one hydrophobic aliphatic feature, and one ring aromatic feature, with a correlation coefficient of 0.941, a root mean square deviation (RMSD) of 0.933, and a cost difference (null cost minus total cost) of 66.23. The pharmacophore model was validated by two methods, and the validated model was further used to search databases for new compounds with good estimated LCK inhibitory activity. These compounds were evaluated for their binding properties at the active site by molecular docking studies using the GOLD software. The compounds with good estimated activity and docking scores were evaluated for physiological properties based on Lipinski's rules. Finally, 68 compounds satisfied all the properties required of a successful inhibitor candidate.

Symmetric and Asymmetric Effects of Financial Innovation and FDI on Exchange Rate Volatility: Evidence from South Asian Countries

  • QAMRUZZAMAN, Md.;MEHTA, Ahmed Muneeb;KHALID, Rimsha;SERFRAZ, Ayesha;SALEEM, Hina
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.8 no.1
    • /
    • pp.23-36
    • /
    • 2021
  • The study explores the nexus between foreign direct investment (FDI), financial innovation, and exchange rate volatility in selected South Asian countries for the period 1980 to 2017. The study applies unit root tests, the autoregressive distributed lag (ARDL) model, nonlinear ARDL, and the Toda-Yamamoto causality test. Unit root tests ascertain that the variables are integrated of mixed order: some are stationary in levels and some after first differencing. Estimation of the empirical model with ARDL reveals long-run cointegration through the FPSS, WPSS, and tBDM tests, which reject the null hypothesis of "no cointegration." This finding suggests that, in the long run, financial innovation, FDI inflows, and exchange rate volatility move together. Moreover, the findings establish adverse effects running from FDI inflows and financial innovation to exchange rate volatility in the long run. These findings suggest that continual FDI inflows and innovativeness in the financial system help lessen volatility in the foreign exchange market. Furthermore, nonlinear ARDL confirms the presence of asymmetric cointegration in the model. The standard Wald test establishes asymmetric effects running from FDI inflows and financial innovation to exchange rate volatility in both the long and short run. Directional causality tests reveal that the feedback hypothesis holds for the relationships between FDI, financial innovation, and exchange rate volatility.
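
A minimal sketch of the mixed-order-of-integration check mentioned above, using a generic augmented Dickey-Fuller test from statsmodels rather than the authors' data or the full ARDL/NARDL estimation; the simulated random-walk series is an illustrative assumption.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)

# Illustrative series: a random walk (I(1)) standing in for a macro variable.
series = np.cumsum(rng.normal(size=200))

def adf_report(x, label):
    """Augmented Dickey-Fuller test: H0 is a unit root (non-stationarity)."""
    stat, pvalue, *_ = adfuller(x)
    verdict = "stationary" if pvalue < 0.05 else "non-stationary"
    print(f"{label}: ADF stat = {stat:.2f}, p = {pvalue:.3f} -> {verdict}")

adf_report(series, "level")               # typically fails to reject H0
adf_report(np.diff(series), "1st diff")   # typically rejects H0 -> series is I(1)
# A mix of I(0) and I(1) variables (none I(2)) is what justifies the ARDL
# bounds-testing approach to cointegration applied in the paper.
```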

A Study on the Evaluation Methods of IRDS for Screening the Industrial Design Proposals (IRDS 시스템에 의한 디자인 컨셉트의 평가방법 연구)

  • 우흥룡
    • Archives of design research
    • /
    • v.20
    • /
    • pp.51-58
    • /
    • 1997
  • This paper aims to support practical evaluation for screening design proposals with the IRDS system, a computer-aided industrial design system developed as a computer application. In this study, the author adopted three evaluation methods with criteria that have quantitative and qualitative attributes: the Intuitive Evaluation Method (X), the Accumulative Evaluation Method (Y), and the Benchmarking Evaluation Method (Z). The results show that the three methods have reciprocal relationships (reliability r = 0.0001). In the analysis of the properties of the evaluation methods, Method (X) is characterized by rapidity, Method (Y) by relativity, and Method (Z) by importance. The correlation analysis between the evaluation methods rejects the null hypothesis, so the research hypothesis could be adopted. Therefore, the correlation between the methods is positive, and it is stronger than the result of the first study.


Sample Size Determination of Univariate and Bivariate Ordinal Outcomes by Nonparametric Wilcoxon Tests (단변량 및 이변량 순위변수의 비모수적 윌콕슨 검정법에 의한 표본수 결정방법)

  • Park, Hae-Gang;Song, Hae-Hiang
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.6
    • /
    • pp.1249-1263
    • /
    • 2009
  • The power function in sample size determination has to be characterized by an appropriate statistical test for the hypothesis of interest. Nonparametric tests are suitable for the analysis of ordinal data or frequency data with ordered categories, which appear frequently in the biomedical research literature. In this paper, we study sample size calculation methods for the Wilcoxon-Mann-Whitney test for one- and two-dimensional ordinal outcomes. While the sample size formula for the univariate outcome, which is based on the variances of the test statistic under both the null and alternative hypotheses, performs well, it requires additional probability estimates that appear in the variance of the test statistic under the alternative hypothesis, and the values of these probabilities are generally unknown. We study the advantages and disadvantages of different sample size formulas with simulations. Sample sizes are calculated for the two-dimensional ordinal outcomes of efficacy and safety, for which the bivariate Wilcoxon-Mann-Whitney test is more appropriate than the multivariate parametric test.
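
A minimal simulation sketch of the kind of comparison described in the abstract (not the authors' formulas): for candidate per-group sample sizes, ordinal outcomes are drawn under assumed category probabilities and the empirical power of the Wilcoxon-Mann-Whitney test is estimated. The category probabilities, sample sizes, and significance level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Assumed category probabilities for a 4-level ordinal outcome in two groups.
p_control = [0.40, 0.30, 0.20, 0.10]
p_treat = [0.25, 0.25, 0.25, 0.25]
levels = np.arange(4)

def empirical_power(n_per_group, n_sim=2000, alpha=0.05):
    """Estimate the power of the WMW test for a given per-group sample size."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.choice(levels, size=n_per_group, p=p_control)
        y = rng.choice(levels, size=n_per_group, p=p_treat)
        # mannwhitneyu handles ties via a normal approximation with tie correction.
        _, pvalue = mannwhitneyu(x, y, alternative="two-sided")
        rejections += pvalue < alpha
    return rejections / n_sim

for n in (40, 60, 80, 100):
    print(f"n per group = {n:3d}: estimated power = {empirical_power(n):.2f}")
# The smallest n whose estimated power reaches the target (e.g., 0.80) is the
# simulation-based sample size; closed-form formulas trade this computation
# for assumptions about probabilities under the alternative.
```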

The Acquisition and Description of Voiceless Stops of Spanish and English

  • Marie Fellbaum
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.274-274
    • /
    • 1996
  • This paper presents preliminary results from work in progress on a paired study of the acquisition of voiceless stops by Spanish speakers learning English and American English speakers learning Spanish. The hypothesis, following Eckman's Markedness Differential Hypothesis, was that the American speakers would have no difficulty suppressing the aspiration in Spanish unaspirated stops, while the Spanish speakers would have difficulty acquiring the aspiration necessary for English voiceless stops. The null hypothesis was supported. All subjects were given the same set of disyllabic real words of English and Spanish in carrier phrases. The tokens analyzed in this report are limited to word-initial voiceless stops followed by a low back vowel in stressed syllables. Tokens were randomized and then arranged in a list with the words appearing three separate times. Aspiration was measured from the burst to the onset of voicing (VOT). Both the first language (L1) tokens and second language (L2) tokens were compared for each speaker and between the two groups of language speakers. Results indicate that the Spanish speakers, as a group, were able to reach the accepted target-language VOT of English, but the English speakers were not able to reach the accepted range for Spanish, in spite of statistically significant changes of p < .001 by speakers in both groups of learners. A closer analysis of the speech samples revealed wide variability within the speech of native speakers of English. Not only is variability in English due to the wide range of VOT (120 msecs for English labials, for example), but individual speakers showed different patterns. These results are revealing for the demands required in experimental designs and the number of speakers and tokens required for an adequate description of different languages. In addition, a simple report of means will not distinguish the speakers and the respective language learning situations; measurements must also include the RANGE of acceptability of VOT for phonetic segments. This has immediate consequences for the learning and teaching of foreign languages involving aspirated stops. In addition, the labelling of spoken language in speech technology is shown to be inadequate without a fuller mathematical description.


Reproducibility of Hypothesis Testing and Confidence Interval (가설검정과 신뢰구간의 재현성)

  • Huh, Myung-Hoe
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.4
    • /
    • pp.645-653
    • /
    • 2014
  • The p-value is the probability of observing the current sample, and possibly other samples departing equally or more extremely from the null hypothesis toward the postulated alternative hypothesis. When the p-value is less than a certain level ${\alpha}$ (= 0.05), researchers claim that the alternative hypothesis is supported empirically. Unfortunately, some findings discovered in this way are not reproducible, partly because the p-value itself is a statistic vulnerable to random variation. Boos and Stefanski (2011) suggest calculating an upper limit of the p-value in hypothesis testing, using a bootstrap predictive distribution. To determine the sample size of a replication study, this study proposes thought experiments that simulate boosted bootstrap samples of different sizes from the given observations. The method is illustrated for the cases of two-group comparison and multiple linear regression. This study also addresses the reproducibility of the points in a given 95% confidence interval. Numerical examples show that the center point is covered by 95% confidence intervals generated from bootstrap resamples, whereas the end points are covered with only about a 50% chance. Hence, this study draws the graph of the reproducibility rate for each parameter value in the confidence interval.
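
A minimal sketch of the point that the p-value is itself subject to random variation, in the spirit of the bootstrap approach cited above (not the paper's exact boosted-bootstrap procedure): the p-value of a two-sample t-test is recomputed over bootstrap resamples and an upper quantile is reported. The simulated data, test, and quantile level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative two-group data with a modest mean difference.
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.5, 1.0, 30)

observed_p = stats.ttest_ind(x, y).pvalue

# Recompute the p-value on bootstrap resamples of each group to see how much
# it could vary across replications of the same study.
boot_p = np.empty(2000)
for b in range(boot_p.size):
    xb = rng.choice(x, size=x.size, replace=True)
    yb = rng.choice(y, size=y.size, replace=True)
    boot_p[b] = stats.ttest_ind(xb, yb).pvalue

upper = np.quantile(boot_p, 0.95)
print(f"observed p = {observed_p:.3f}, bootstrap 95th-percentile p = {upper:.3f}")
# A small observed p paired with a large upper quantile warns that a
# replication study may well fail to reach significance at the same alpha.
```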

An Analysis of the Absolute Vs. Conditional Convergency Hypothesis and the Determinants of Labor Productivity in Manufacturing Industries: The Korean Case (16개 광역시도별 제조업 부문에 대한 절대적 및 조건부 수렴가설 검증 및 생산성 결정요인 분석)

  • Park, Chuhwan;Shin, Kwang Ha
    • International Area Studies Review
    • /
    • v.17 no.4
    • /
    • pp.89-106
    • /
    • 2013
  • In this paper, we analyzed the absolute and conditional convergency hypotheses and the determinants of productivity in manufacturing industries from 2000 to 2009 for 16 provinces and metro-cities using panel analysis. In terms of the convergency hypothesis tests, the results show that both the absolute and the conditional hypotheses reject the null hypothesis (H0), implying that the labor productivity of the 16 provinces and metro-cities converged to the steady-state equilibrium. The speeds of absolute and conditional convergency for the 16 provinces and metro-cities average 4.4% and 0.73%, respectively. In addition, the results on the determinants of labor productivity in manufacturing show that human capital and the manufacturing location coefficient significantly affect value-added per capita, whereas government expenditure per capita does not. As for total factor productivity, government expenditure per capita and fixed capital per capita are important factors, but research and development is not. Hence, the government should revise its balanced regional development policy to develop regional manufacturing industries in vulnerable regions. Further study of income disparities and productivity is also required.
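
A minimal sketch of an absolute beta-convergence test on a cross-section of regions, not the authors' panel specification: growth in log productivity is regressed on its initial level, a negative slope implies convergence, and an annual convergence speed can be derived from it. The 16-region data are simulated illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative cross-section: log labor productivity in 16 regions at the start
# and end of a 10-year window (made-up numbers, not the paper's data).
T = 10
log_y0 = rng.normal(3.0, 0.3, 16)
log_yT = log_y0 + 0.04 * T - 0.3 * (log_y0 - 3.0) + rng.normal(0, 0.05, 16)

# Absolute (unconditional) beta-convergence regression:
#   (1/T) * (log y_T - log y_0) = a + b * log y_0 + error,  convergence if b < 0.
growth = (log_yT - log_y0) / T
b, a = np.polyfit(log_y0, growth, deg=1)

# Implied speed of convergence toward the steady state.
speed = -np.log(1.0 + b * T) / T
print(f"beta = {b:.3f}, implied convergence speed = {speed:.1%} per year")
# A significantly negative beta rejects the null of no convergence; conditional
# convergence adds controls (human capital, location, expenditure) to the regression.
```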

Fourier Series Approximation for the Generalized Baumgartner Statistic

  • Ha, Hyung-Tae
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.3
    • /
    • pp.451-457
    • /
    • 2012
  • Baumgartner et al. (1998) proposed a novel statistical test for the null hypothesis that two independently drawn samples of data originate from the same population, and Murakami (2006) generalized the test statistic to more than two samples. Whereas expressions for the exact density and distribution functions of the generalized Baumgartner statistic have not yet been found, the characteristic function of its limiting distribution has been obtained. Owing to the growth of computational power, a Fourier series approximation can be readily utilized to approximate its density function accurately and efficiently based on its Laplace transform. Numerical examples show that the Fourier series method provides an accurate approximation for statistical quantities of the generalized Baumgartner statistic.
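
A minimal sketch of recovering a density from a characteristic function by truncated Fourier-type inversion, the general technique named in the abstract; the standard normal characteristic function exp(-t²/2) stands in for the more involved limiting characteristic function of the generalized Baumgartner statistic, and the truncation range and grid size are illustrative assumptions.

```python
import numpy as np

def cf_standard_normal(t):
    """Stand-in characteristic function; replace with the statistic's limiting CF."""
    return np.exp(-0.5 * t**2)

def density_from_cf(cf, x, t_max=30.0, n_terms=2000):
    """Approximate f(x) = (1/2*pi) * integral of exp(-i*t*x) * cf(t) dt by a finite sum."""
    t = np.linspace(-t_max, t_max, n_terms)
    dt = t[1] - t[0]
    integrand = np.exp(-1j * t[None, :] * x[:, None]) * cf(t)[None, :]
    return np.real(integrand.sum(axis=1) * dt) / (2.0 * np.pi)

x = np.linspace(-3.0, 3.0, 7)
approx = density_from_cf(cf_standard_normal, x)
exact = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
for xi, a, e in zip(x, approx, exact):
    print(f"x = {xi:+.1f}: approx = {a:.6f}, exact = {e:.6f}")
# The same truncated-inversion idea, organized as a Fourier series on a finite
# interval, underlies the approximation studied in the paper.
```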