Bartlett Test

Score Tests for Overdispersion

  • Kim, Choong-Rak; Jeong, Mee-Seon; Yang, Mee-Yeong
    • Journal of the Korean Statistical Society / v.23 no.1 / pp.207-216 / 1994
  • Count data are often overdispersed, and an appropriate test for the existence of overdispersion is necessary. In this paper we derive a score test based on the extended quasi-likelihood and the pseudolikelihood after adjusting for the Bartlett factor. We also compare it with the Levene (1960) F-type test suggested by Ganio and Schafer (1992). (A code sketch of a related score test follows this entry.)

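For illustration, here is a minimal sketch of a classical score test for overdispersion in Poisson counts (the Dean/Cameron-Trivedi auxiliary statistic), assuming only NumPy and SciPy; it is a stand-in for, not a reproduction of, the Bartlett-adjusted test the paper derives.

```python
import numpy as np
from scipy import stats

def overdispersion_score_test(y):
    """Score test of H0: Var(y) = E(y) (Poisson) against overdispersion.

    Uses the classical statistic T = sum((y - mu)^2 - y) / sqrt(2 * n * mu^2)
    for an intercept-only model; T is asymptotically N(0, 1) under H0.
    """
    y = np.asarray(y, dtype=float)
    mu = y.mean()                              # Poisson MLE of the common mean
    num = np.sum((y - mu) ** 2 - y)            # variation in excess of the Poisson variance
    den = np.sqrt(2.0 * len(y) * mu ** 2)
    t = num / den
    return t, 1.0 - stats.norm.cdf(t)          # one-sided: overdispersion inflates T

rng = np.random.default_rng(1)
print(overdispersion_score_test(rng.poisson(4.0, 200)))               # equidispersed
print(overdispersion_score_test(rng.negative_binomial(2, 1/3, 200)))  # mean 4, overdispersed
```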

Deinterlacing Algorithm Based on Statistical Tests

  • Kim, Yeong-Hwa; Nam, Ji-Ho
    • Journal of the Korean Data and Information Science Society / v.19 no.3 / pp.723-734 / 2008
  • The main reason for deinterlacing is frame-rate conversion; the other is, of course, to improve clarity and reduce flicker. Using a deinterlacer can improve the clarity and stability of the image. Many deinterlacing algorithms, such as ELA and E-ELA, are available in the image processing literature. This paper proposes a new statistical deinterlacing algorithm based on statistical tests such as the Bartlett test, the Levene test, and the Kruskal-Wallis test (illustrated in the sketch below). The results obtained from the proposed algorithms are comparable to those from many well-known deinterlacers, and the proposed deinterlacers are found to be more efficient than the others.

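The intra-field logic can be illustrated with the three tests named in the abstract: if the pixel groups around a missing line pass a homogeneity test, plain vertical interpolation is safe. A minimal sketch with SciPy on hypothetical luminance samples, not the paper's algorithm:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical luminance samples from the lines above and below a missing
# interlaced line; if the tests accept homogeneity, simple vertical
# averaging (ELA-style interpolation) is considered safe.
above = rng.normal(120, 5, size=32)
below = rng.normal(121, 5, size=32)

for name, test in [("Bartlett", stats.bartlett),
                   ("Levene", stats.levene),
                   ("Kruskal-Wallis", stats.kruskal)]:
    stat, p = test(above, below)
    print(f"{name:15s} stat={stat:6.3f}  p={p:.3f}")
```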

Statistical algorithm and application for noise variance estimation

  • Kim, Yeong-Hwa; Nam, Ji-Ho
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.869-878 / 2009
  • Image restoration techniques such as noise reduction and contrast enhancement have been studied to enhance images contaminated by noise. An image degraded by additive random noise can be enhanced by noise reduction, and sigma filtering is one of the most widely used methods for doing so. In this paper, we propose a new sigma filter algorithm based on noise variance estimation which effectively enhances an image degraded by noise. Specifically, the Bartlett test is used to measure the degree of noise relative to the degree of image feature (see the sketch below). Simulation results are also given to show the performance of the proposed algorithm.

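A minimal sketch of the Bartlett-test idea in this setting: compare a local window against a flat reference block, and treat windows whose variance matches the reference as noise-only (safe for sigma filtering). The window sizes and significance level are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import stats

def noise_like(window, reference, alpha=0.05):
    """Bartlett test of equal variances between a local window and a flat
    reference region; a non-significant result suggests the window contains
    noise only (no image feature), so it may be smoothed aggressively."""
    _, p = stats.bartlett(window.ravel(), reference.ravel())
    return p > alpha

rng = np.random.default_rng(0)
reference = rng.normal(100, 3, size=(8, 8))   # flat area: noise only
flat      = rng.normal(100, 3, size=(8, 8))   # another flat window
edge      = flat + np.linspace(0, 40, 8)      # window crossing an intensity edge
print(noise_like(flat, reference))            # True: smooth it
print(noise_like(edge, reference))            # False: preserve the feature
```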

A Portmanteau Test Based on the Discrete Cosine Transform

  • Oh, Sung-Un; Cho, Hye-Min; Yeo, In-Kwon
    • The Korean Journal of Applied Statistics / v.20 no.2 / pp.323-332 / 2007
  • We present a new type of portmanteau test in the frequency domain, derived from the discrete cosine transform (DCT). For a stationary time series, the DCT coefficients are asymptotically independent and their variances are expressed as linear combinations of autocovariances. For white noise, the covariance matrix of the DCT coefficients is a diagonal matrix whose diagonal elements equal the variance of the series. A simple way to test the independence of a time series is therefore to divide the DCT coefficients into two or three parts and compare their sample variances. We also do this by testing the slope in a linear regression model whose response variables are the absolute values or squares of the coefficients. Simulation results show that the proposed tests have much higher power than the Ljung-Box test in most cases of our experiments.
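
A minimal sketch of the variance-splitting variant, assuming SciPy's DCT: for white noise the coefficients share one variance, while serial dependence concentrates variance in the low frequencies, so a two-sample variance test on the two halves flags dependence. The split point and the use of Bartlett's test here are illustrative choices.

```python
import numpy as np
from scipy import stats
from scipy.fft import dct

def dct_whiteness_test(x):
    """For white noise the DCT coefficients are (asymptotically) i.i.d.,
    so the low- and high-frequency halves should share one variance."""
    c = dct(np.asarray(x, dtype=float), norm="ortho")[1:]  # drop the mean term
    half = len(c) // 2
    return stats.bartlett(c[:half], c[half:])

rng = np.random.default_rng(3)
white = rng.normal(size=512)
ar1 = np.zeros(512)                    # AR(1): serially dependent series
for t in range(1, 512):
    ar1[t] = 0.6 * ar1[t - 1] + rng.normal()
print(dct_whiteness_test(white))       # large p: consistent with white noise
print(dct_whiteness_test(ar1))         # small p: variance piles up at low frequencies
```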

A comparison of tests for homoscedasticity using simulation and empirical data

  • Anastasios Katsileros; Nikolaos Antonetsis; Paschalis Mouzaidis; Eleni Tani; Penelope J. Bebeli; Alex Karagrigoriou
    • Communications for Statistical Applications and Methods / v.31 no.1 / pp.1-35 / 2024
  • The assumption of homoscedasticity is one of the most crucial assumptions for many parametric tests used in the biological sciences. The aim of this paper is to compare the empirical probability of type I error and the power of ten parametric and two non-parametric tests for homoscedasticity with simulations under different types of distributions, numbers of groups, numbers of samples per group, variance ratios and significance levels, as well as through empirical data from an agricultural experiment. According to the findings of the simulation study, when there is no violation of the assumption of normality and the groups have equal variances and equal numbers of samples, the Bhandary-Dai, Cochran's C, Hartley's Fmax, Levene (trimmed mean) and Bartlett tests are considered robust. The Levene (absolute and square deviations) tests show a high probability of type I error with small numbers of samples, which increases as the number of groups rises. When data groups display a non-normal distribution, researchers should utilize the Levene (trimmed mean), O'Brien and Brown-Forsythe tests. On the other hand, if the assumption of normality is not violated but diagnostic plots indicate unequal variances between groups, researchers are advised to use the Bartlett, Z-variance, Bhandary-Dai and Levene (trimmed mean) tests. Assessing the tests being considered, the one that stands out as the most well-rounded choice is Levene's test (trimmed mean), which provides satisfactory type I error control and relatively high power. According to the findings of the study, and for the scenarios considered, the two non-parametric tests are not recommended. In conclusion, it is suggested to check for normality and consider the number of samples per group before choosing the most appropriate test for homoscedasticity.
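
The recommendations above map directly onto SciPy: Bartlett's test when normality holds, and Levene's test with a trimmed-mean center (the paper's "Levene (trimmed mean)") when it is in doubt. A minimal sketch with three hypothetical groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
g1 = rng.normal(0, 1.0, 30)
g2 = rng.normal(0, 1.0, 30)
g3 = rng.normal(0, 2.0, 30)      # group with inflated variance

# Bartlett: powerful under normality but sensitive to non-normal data.
print(stats.bartlett(g1, g2, g3))
# Levene with a trimmed-mean center: the robust choice recommended above.
print(stats.levene(g1, g2, g3, center="trimmed", proportiontocut=0.1))
```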

A Study on Evaluation Model for Usability of Research Data Service

  • Park, Jin Ho; Ko, Young Man; Kim, Hyun Soo
    • Journal of the Korean Society for Information Management / v.36 no.4 / pp.129-159 / 2019
  • The purpose of this study is to develop an evaluation model for the usability of research data services, from the angles of the usefulness of the research data service itself and of usability based on research data use experience. First, various cases of evaluating the usability of data services are examined, and 4 rating scales and 20 measuring indicators for research data services are derived through comparative analysis. To verify the validity and reliability of the rating scales and measuring indicators, the study conducted a survey of 164 potential research data users. KMO and Bartlett analyses were performed to test validity, and Principal Component Analysis with the Varimax rotation method was used for component analysis of the measuring indicators. The results show that the 4 intrinsic rating scales satisfy the KMO-Bartlett validity criteria; a single component was determined from the component analysis, which verifies the validity of the measuring indicators of the current rating scale. However, analysis of the 12 user experience-based measuring indicators identified 2 components, classified respectively as a utilization-level and a participation-level rating scale. Cronbach's alpha was 0.6 or more for all 6 rating scales.
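
The validity checks described above (Bartlett's sphericity test, the KMO measure, PCA with Varimax rotation) are commonly run with the factor_analyzer package in Python; a minimal sketch on synthetic survey responses (the sample size of 164 mirrors the study, but the data and item count are invented):

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

rng = np.random.default_rng(5)
latent = rng.normal(size=(164, 2))                  # two latent traits
loadings = rng.uniform(0.5, 0.9, size=(2, 12))
items = latent @ loadings + rng.normal(scale=0.5, size=(164, 12))
df = pd.DataFrame(items, columns=[f"q{i+1}" for i in range(12)])

chi2, p = calculate_bartlett_sphericity(df)         # H0: correlation matrix = identity
kmo_per_item, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2={chi2:.1f}, p={p:.3g}, KMO={kmo_total:.2f}")

# Principal component extraction with Varimax rotation, as in the abstract.
fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(df)
print(fa.loadings_.round(2))
```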

Scab (Venturia nashicola) Resistant Pear, "Wonkyo Na-heukseong 2"

  • Shin, Il-Sheob; Hwang, Hae-Sung; Shin, Yong-Uk; Heo, Seong; Kim, Ki-Hong; Kang, Sam-Seok; Kim, Yoon-Kyeong
    • Korean Journal of Breeding Science / v.41 no.3 / pp.354-357 / 2009
  • "Wonkyo Na-heukseong 2" was selected from a cross between "Kiyomaro", late season European cultivar with highly resistance and "Mansoo", late season Asian cultivar with long storability, large size and low susceptibility to pear scab made in 1997 at the National Institute of Horticultural & Herbal Science of Rural Development Administration in Korea. "Kiyomaro", released cross between "Taiheiyo" and "Bartlett" with scab resistance caused by Venturia nashicola in Japan, with no visual symptoms on any leaves was used as scab resistant source after field investigation and artificial inoculation test during 1997~1999. "Wonkyo Na-heukseong 2" blooms 1 day earlier than "Mansoo" and 3 days later than "Kiyomaro" in 2008. It is strong in tree vigor and upright-spreading in tree habit. It is classified as highly resistant to pear scab as "Kiyomaro" and "Bartlett", and cross-compatible with parental variety and Korean major pear varieties such as "Niitaka" and "Wonwhang". The average optimum harvest time of "Wonkyo Na-heukseong 2" was approximately 180 days after full bloom and it matured about 20 days shorter than parental varieties. The fruit is spindle in shape and yellowish greenish brown in skin color. Average fruit weight was 484 g and soluble solids content was $13.2^{\circ}Brix$. The flesh had medium to high juice and negligible grit. Its fruit was crisp like Asian pear.

A Development of Hourly Rainfall Simulation Technique Based on Bayesian MBLRP Model

  • Kim, Jang Gyeong; Kwon, Hyun Han; Kim, Dong Kyun
    • KSCE Journal of Civil and Environmental Engineering Research / v.34 no.3 / pp.821-831 / 2014
  • Stochastic rainfall generators have been widely employed to generate synthetic rainfall sequences for use as inputs to hydrologic models. The calibration of a Poisson cluster stochastic rainfall generator (e.g. the Modified Bartlett-Lewis Rectangular Pulse model, MBLRP) is seriously affected by the local minima in which local optimization algorithms usually get trapped. In this regard, global optimization techniques such as particle swarm optimization and the shuffled complex evolution algorithm have been proposed to better estimate the parameters. Although global search algorithms are designed to avoid local minima, reliable parameter estimation of the MBLRP model is not always feasible, especially in a limited parameter space. In addition, the uncertainty associated with the parameters of the MBLRP rainfall generator has not yet been properly addressed. This study therefore aims to develop and test a Bayesian parameter estimation method for the MBLRP rainfall generator that allows us to derive the posterior distribution of the model parameters. The HBM-based MBLRP model showed better performance in terms of reproducing rainfall statistics and the underlying distribution of the hourly rainfall series.
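
The Bayesian calibration itself is beyond a short example, but the forward model being calibrated, a Bartlett-Lewis rectangular pulse process, can be sketched to show what its parameters govern. This is a simplified (non-modified) variant with arbitrary placeholder parameters, not the paper's calibrated model, and the hourly aggregation is deliberately coarse.

```python
import numpy as np

rng = np.random.default_rng(11)

def blrp_hourly(T, lam, beta, gamma, eta, mux):
    """Simplified Bartlett-Lewis rectangular pulse rainfall model.
    Storms arrive as a Poisson process (rate lam per hour); each storm
    spawns cells at rate beta over an Exp(gamma) activity window; each
    cell is a rectangular pulse with Exp(eta) duration and mean depth mux.
    """
    rain = np.zeros(T)
    t = rng.exponential(1 / lam)
    while t < T:
        window = rng.exponential(1 / gamma)        # storm activity window (h)
        n_cells = 1 + rng.poisson(beta * window)   # at least one cell per storm
        starts = t + rng.uniform(0, window, n_cells)
        durations = rng.exponential(1 / eta, n_cells)
        depths = rng.exponential(mux, n_cells)     # cell intensity (mm/h)
        for s, d, x in zip(starts, durations, depths):
            a, b = int(s), min(int(np.ceil(s + d)), T)
            rain[a:b] += x                         # coarse hourly aggregation
        t += rng.exponential(1 / lam)              # next storm arrival
    return rain

series = blrp_hourly(24 * 365, lam=0.02, beta=0.3, gamma=0.1, eta=1.5, mux=3.0)
print(f"mean = {series.mean():.3f} mm/h, wet fraction = {(series > 0).mean():.3f}")
```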

Genetic Variation in the Natural Populations of Abies holophylla Max. Based on RAPD Analysis

  • Kim, In Sik; Hyun, Jung Oh
    • Journal of Korean Society of Forest Science / v.88 no.3 / pp.408-418 / 1999
  • On the basis of RAPD analysis, the genetic diversity and structure of natural populations of Abies holophylla were estimated using the AMOVA procedure. The average percentage of polymorphic markers was 71.9%. Most variation existed among individuals within populations (80.2%). Genetic differentiation among populations (Φ_ST) was 0.198. When the populations were grouped into two regions (i.e., the Taebaek and Sobaek mountain regions), 8.5% of the total genetic variation was explained by regional differences. The heterogeneity of molecular variance among populations was investigated with Bartlett's test (sketched below), which revealed that the populations of Mt. Taebaek and Mt. Gariwang were more heterogeneous. In general, the populations of the Taebaek mountain region were more heterogeneous than those of the Sobaek mountain region. Finally, the applicability of AMOVA to population genetic studies is discussed in comparison with other widely used measures of genetic differentiation.

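The Bartlett step above checks whether within-population molecular variances can be pooled. A minimal, hypothetical sketch with SciPy, using centered per-individual scores as stand-ins for RAPD-based distances (the population names follow the abstract; the numbers are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Hypothetical centered molecular scores per population (stand-ins for
# individual-to-centroid RAPD distances); spreads differ by population.
taebaek  = rng.normal(0.0, 1.6, 25)
gariwang = rng.normal(0.0, 1.5, 25)
sobaek   = rng.normal(0.0, 1.0, 25)

stat, p = stats.bartlett(taebaek, gariwang, sobaek)
print(f"Bartlett statistic = {stat:.2f}, p = {p:.4f}")  # small p: heterogeneous variances
```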

Factor Analysis for Exploratory Research in the Distribution Science Field (유통과학분야에서 탐색적 연구를 위한 요인분석)

  • Yim, Myung-Seong
    • Journal of Distribution Science / v.13 no.9 / pp.103-112 / 2015
  • Purpose - This paper aims to provide a step-by-step approach to factor analytic procedures, such as principal component analysis (PCA) and exploratory factor analysis (EFA), and to offer a guideline for factor analysis. Some authors have argued that the results of PCA and EFA are substantially similar, and that PCA is the more appropriate technique because it produces easily interpreted results that are likely to be the basis of better decisions. For these reasons, many researchers have used PCA instead of EFA. However, these techniques are clearly different: PCA should be used for data reduction, whereas EFA is tailored to identify an underlying factor structure, i.e., the set of latent factors that cause the measured variables to covary. Thus, a guideline and procedures for factor analysis are needed; to date, the two techniques have been indiscriminately misused. Research design, data, and methodology - This research conducted a literature review, summarizing the meaningful and consistent arguments and drawing up guidelines and suggested procedures for rigorous EFA. Results - PCA can be used instead of common factor analysis when all measured variables have high communality; otherwise, common factor analysis is recommended for EFA. First, researchers should evaluate the sample size and check for sampling adequacy before conducting factor analysis; if these conditions are not satisfied, the next steps cannot be followed. The sample size must be at least 100, with communality above 0.5 and a subject-to-item ratio of at least 5:1, with a minimum of five items in the EFA. Next, Bartlett's sphericity test and the Kaiser-Meyer-Olkin (KMO) measure should be assessed for sampling adequacy: the chi-square value for Bartlett's test should be significant, and a KMO of more than 0.8 is recommended. The factor analysis itself is composed of three stages. The first stage determines an extraction technique; generally, ML (maximum likelihood) or PAF (principal axis factoring) gives the best results, and the choice between the two hinges on data normality, since ML requires normally distributed data while PAF does not. The second stage determines the number of factors to retain in the EFA; the best way is to apply three methods together: eigenvalues greater than 1.0, the scree plot test, and the variance extracted. The last stage selects one of two rotation methods, orthogonal or oblique: if theory suggests the factors are correlated with each other, the oblique method should be selected, because it allows correlated factors; if not, an orthogonal rotation is appropriate. Conclusions - Recommendations are offered for best factor analytic practice in empirical research.
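
The guideline above translates almost step for step into the factor_analyzer package; a minimal sketch under the stated thresholds (significant Bartlett chi-square, KMO > 0.8, eigenvalues > 1.0), on invented data with two correlated factors (hence the oblique rotation). The minres extraction here stands in for PAF.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

rng = np.random.default_rng(2024)
# Hypothetical data: 150 respondents, 10 items driven by 2 correlated factors.
f = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=150)
load = np.array([[0.8] * 5 + [0.1] * 5, [0.1] * 5 + [0.8] * 5])
df = pd.DataFrame(f @ load + rng.normal(scale=0.6, size=(150, 10)),
                  columns=[f"item{i+1}" for i in range(10)])

# Step 1: sampling adequacy (significant chi-square; KMO > 0.8 recommended).
chi2, p = calculate_bartlett_sphericity(df)
_, kmo = calculate_kmo(df)
print(f"Bartlett p = {p:.3g}, KMO = {kmo:.2f}")

# Step 2: number of factors via eigenvalues > 1.0 of the unrotated solution.
fa = FactorAnalyzer(rotation=None)
fa.fit(df)
ev, _ = fa.get_eigenvalues()
k = int((ev > 1.0).sum())

# Step 3: extraction with an oblique (promax) rotation, since the
# factors are assumed correlated.
fa = FactorAnalyzer(n_factors=k, method="minres", rotation="promax")
fa.fit(df)
print(fa.loadings_.round(2))
```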