• Title/Summary/Keyword: K-sample problem

A Study on the Bi-Aspect Test for the Two-Sample Problem

  • Hong, Seung-Man; Park, Hyo-Il
    • Communications for Statistical Applications and Methods, v.19 no.1, pp.129-134, 2012
  • In this paper we review a bi-aspect nonparametric test for the two-sample problem under the location translation model and propose a new one to accommodate a broader class of underlying distributions. We then compare the performance of the proposed test with existing tests by obtaining empirical powers through a simulation study. Finally, we discuss some interesting features of the bi-aspect test and comment on a possible extension of the proposed test as concluding remarks.
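
A bi-aspect test of this kind is typically built by combining two two-sample statistics that capture different aspects of a location shift and calibrating the combination by permutation. The sketch below is a minimal illustration of that idea only, assuming a Wilcoxon rank-sum aspect, a median-test aspect, and a max-of-squares combination rule; it is not the statistic proposed in the paper.

```python
import numpy as np

def bi_aspect_perm_test(x, y, n_perm=5000, seed=None):
    """Toy bi-aspect two-sample test: combine a rank-sum aspect and a
    median (sign) aspect, calibrated by a permutation distribution.
    Illustrative only; not the statistic proposed in the paper."""
    rng = np.random.default_rng(seed)
    z = np.concatenate([x, y])
    n, m = len(x), len(y)

    def statistic(pooled):
        ranks = pooled.argsort().argsort() + 1.0      # no ties assumed
        w = ranks[:n].sum()                           # Wilcoxon rank-sum aspect
        s = np.sum(pooled[:n] > np.median(pooled))    # median-test aspect
        # standardize each aspect under the permutation null
        w_std = (w - n * (n + m + 1) / 2) / np.sqrt(n * m * (n + m + 1) / 12.0)
        s_std = (s - n / 2.0) / np.sqrt(n * m / (4.0 * (n + m - 1)))
        return max(w_std**2, s_std**2)                # "bi-aspect": larger of the two aspects

    t_obs = statistic(z)
    count = sum(statistic(rng.permutation(z)) >= t_obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)                 # permutation p-value

# example: shifted normal samples
rng = np.random.default_rng(1)
print(bi_aspect_perm_test(rng.normal(0.8, 1, 30), rng.normal(0, 1, 30), seed=2))
```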

Local Similarity based Discriminant Analysis for Face Recognition

  • Xiang, Xinguang; Liu, Fan; Bi, Ye; Wang, Yanfang; Tang, Jinhui
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.11, pp.4502-4518, 2015
  • Fisher linear discriminant analysis (LDA) is one of the most popular projection techniques for feature extraction and has been widely applied in face recognition. However, it cannot be used under the single sample per person problem (SSPP) because the intra-class variations cannot be evaluated. In this paper, we propose a novel method called local similarity based linear discriminant analysis (LS_LDA) to solve this problem. Motivated by the "divide-and-conquer" strategy, we first divide the face into local blocks, classify each local block, and then integrate all the classification results to make the final decision. To make LDA feasible for the SSPP problem, we further divide each block into overlapping patches and assume that these patches come from the same class. To improve the robustness of LS_LDA to outliers, we further propose local similarity based median discriminant analysis (LS_MDA), which uses the class median vector to estimate the class population mean in LDA modeling. Experimental results on three popular databases show that our methods not only generalize well to the SSPP problem but also are strongly robust to variations in expression, illumination, occlusion and time.
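
The divide-and-vote construction can be sketched roughly as follows. The block size, patch size, stride, and majority-vote fusion below are illustrative assumptions, and scikit-learn's standard LDA stands in for the paper's local-similarity and median-based (LS_MDA) variants.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def split_blocks(img, block=16):
    """Divide a face image into non-overlapping local blocks."""
    h, w = img.shape
    return [img[i:i + block, j:j + block]
            for i in range(0, h, block) for j in range(0, w, block)]

def extract_patches(block, patch=8, step=4):
    """Overlapping patches of one block, flattened into vectors and
    treated as samples of the same class (the SSPP workaround)."""
    h, w = block.shape
    return np.array([block[i:i + patch, j:j + patch].ravel()
                     for i in range(0, h - patch + 1, step)
                     for j in range(0, w - patch + 1, step)])

def block_patch_lda_classify(gallery, probe, block=16):
    """One gallery image per person; fit one LDA per block on the gallery
    patches and fuse block decisions by majority vote."""
    gallery_blocks = [split_blocks(g, block) for g in gallery]
    probe_blocks = split_blocks(probe, block)
    votes = []
    for b, probe_block in enumerate(probe_blocks):
        X, y = [], []
        for person, blocks in enumerate(gallery_blocks):
            patches = extract_patches(blocks[b])
            X.append(patches)
            y.extend([person] * len(patches))
        lda = LinearDiscriminantAnalysis().fit(np.vstack(X), y)
        pred = lda.predict(extract_patches(probe_block))
        votes.append(np.bincount(pred).argmax())       # within-block vote
    return np.bincount(np.array(votes)).argmax()       # across-block vote

# toy usage: 3 "people", 64x64 random images, probe = noisy copy of person 1
rng = np.random.default_rng(0)
gallery = [rng.random((64, 64)) for _ in range(3)]
probe = gallery[1] + 0.05 * rng.random((64, 64))
print(block_patch_lda_classify(gallery, probe))
```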

A novel PSO-based algorithm for structural damage detection using Bayesian multi-sample objective function

  • Chen, Ze-peng; Yu, Ling
    • Structural Engineering and Mechanics, v.63 no.6, pp.825-835, 2017
  • Significant improvements to methodologies for structural damage detection (SDD) have emerged in recent years. However, many methods involve inversion computation, which is prone to being ill-posed or ill-conditioned, leading to low computing efficiency or inaccurate results. To explore a more accurate solution with satisfactory efficiency, a PSO-INM algorithm, combining the particle swarm optimization (PSO) algorithm and an improved Nelder-Mead method (INM), is proposed in this study to solve a multi-sample objective function defined based on Bayesian inference. The PSO-based algorithm, as a heuristic algorithm, reliably explores solutions to the SDD problem, which is converted into a constrained optimization problem in mathematics, and the multi-sample objective function provides a stable pattern under different levels of noise. The advantages of the multi-sample objective function and its superiority over the traditional objective function are studied. Numerical simulation results for a two-storey frame structure show that the proposed method is sensitive to multi-damage cases. To further confirm the accuracy of the proposed method, the ASCE 4-storey benchmark frame structure subjected to single and multiple damage cases is employed. Different kinds of modal identification methods are utilized to extract structural modal data from noise-contaminated acceleration responses. The results show that the proposed method is effective in identifying the locations and extents of induced damage in structures.
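
For orientation, a plain particle swarm optimizer over a box-constrained parameter vector looks like the sketch below. It omits the improved Nelder-Mead refinement and uses a stand-in least-squares misfit in place of the Bayesian multi-sample objective function, so it should be read as a sketch of the PSO component only.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 lb=0.0, ub=1.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Vanilla particle swarm optimization over the box [lb, ub]^dim.
    Illustrative only: the paper hybridizes PSO with an improved
    Nelder-Mead refinement, which is omitted here."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))       # candidate damage parameters
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                    # enforce the box constraints
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# stand-in objective: squared residual between "measured" and "predicted"
# stiffness-reduction parameters (a Bayesian multi-sample misfit would
# average such residuals over several noisy samples)
true_theta = np.array([0.2, 0.0, 0.35])
objective = lambda theta: np.sum((theta - true_theta) ** 2)
print(pso_minimize(objective, dim=3))
```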

Distribution-Free k-Sample Tests for Ordered Alternatives of Scale Parameters

  • Jeong, Kwang-Mo; Song, Moon-Sup
    • Journal of the Korean Statistical Society, v.17 no.2, pp.61-80, 1988
  • For testing homogeneity of scale parameters against ordered alternatives, some nonparametric test statistics based on the pairwise ranking method are proposed. The proposed tests are distribution-free. The asymptotic distributions of the proposed statistics are also investigated. It is shown that the Pitman efficiencies of the proposed rank tests are the same as those of the corresponding two-sample rank tests in the scale problem. A small-sample Monte Carlo study is also performed. The results show that the proposed tests are robust and efficient.
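
One way to see the pairwise-ranking idea is to sum two-sample scale statistics over all ordered pairs of groups and reject for large values. The sketch below does this with Mood's two-sample scale statistic and calibrates by permutation rather than by the asymptotic theory developed in the paper; the choice of Mood's statistic and the equal-weight sum are assumptions for illustration.

```python
import numpy as np
from scipy.stats import mood

def ordered_scale_stat(samples):
    """Sum of standardized two-sample Mood scale statistics over all
    ordered pairs (i < j); large values suggest increasing dispersion."""
    t = 0.0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            z, _ = mood(samples[j], samples[i])   # z > 0 when sample j is more dispersed
            t += z
    return t

def ordered_scale_perm_test(samples, n_perm=1000, seed=0):
    """Permutation calibration of the pairwise statistic (the paper derives
    asymptotic null distributions instead; this is only for illustration)."""
    rng = np.random.default_rng(seed)
    sizes = [len(s) for s in samples]
    pooled = np.concatenate(samples)
    t_obs = ordered_scale_stat(samples)
    count = 0
    for _ in range(n_perm):
        parts = np.split(rng.permutation(pooled), np.cumsum(sizes)[:-1])
        if ordered_scale_stat(parts) >= t_obs:
            count += 1
    return t_obs, (count + 1) / (n_perm + 1)

# three groups with increasing scale
rng = np.random.default_rng(3)
groups = [rng.normal(0, s, 25) for s in (1.0, 1.5, 2.0)]
print(ordered_scale_perm_test(groups, n_perm=500))
```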

Variance Estimation Using Poststratified Complex Sample

  • Kim, Kyu-Seong
    • Communications for Statistical Applications and Methods, v.6 no.1, pp.131-142, 1999
  • Estimators for domains and approximate estimators of their variance are derived using a post-stratified complex sample. Furthermore, we propose an adjusted variance estimator of a domain mean for the case in which the post-stratified complex sample is treated as a simple random sample. A simulation study based on data from the Farm Household Economy Survey is presented to compare the variance estimators numerically. The study shows that our adjusted variance estimator compensates considerably for the under-estimation problem.
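
The classical post-stratified variance approximation under simple random sampling, which this line of work adjusts for complex samples and domains, can be sketched as follows; the function below implements only that textbook form, not the paper's adjusted domain estimator.

```python
import numpy as np

def poststratified_mean_and_var(y, strata, pop_counts, N):
    """Textbook post-stratified estimator of a population mean and its usual
    variance approximation under simple random sampling:
        V ~ (1-f)/n * sum_h W_h s_h^2 + (1-f)/n^2 * sum_h (1-W_h) s_h^2,
    with W_h = N_h/N and f = n/N. Only the classical SRS form is shown, not
    the paper's domain-level adjustment for a complex sample."""
    y, strata = np.asarray(y, float), np.asarray(strata)
    n, f = len(y), len(y) / N
    mean, v1, v2 = 0.0, 0.0, 0.0
    for h, N_h in pop_counts.items():
        W_h = N_h / N
        y_h = y[strata == h]
        s2_h = y_h.var(ddof=1)
        mean += W_h * y_h.mean()
        v1 += W_h * s2_h
        v2 += (1 - W_h) * s2_h
    var = (1 - f) / n * v1 + (1 - f) / n**2 * v2
    return mean, var

# toy example: two post-strata with known population counts
rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(50, 5, 40), rng.normal(70, 8, 60)])
strata = np.array(["a"] * 40 + ["b"] * 60)
print(poststratified_mean_and_var(y, strata, {"a": 4000, "b": 6000}, N=10000))
```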

A Variables Repetitive Group Sampling Plan for Minimizing Average Sample Number (평균 샘플 수 최소화를 통한 계량형 반복 샘플링 검사의 설계)

  • Park, Heekon; Moon, Young-gun; Jun, Chi-Hyuck; Balamurali, S.; Lee, Jaewook
    • Journal of Korean Institute of Industrial Engineers, v.30 no.3, pp.205-212, 2004
  • This paper proposes a variables repetitive group sampling plan where the quality characteristic follows a normal distribution and has an upper or lower specification limit. The problem is formulated as a non-linear programming problem in which the objective function to minimize is the average sample number and the constraints relate to the lot acceptance probabilities at the acceptable quality level (AQL) and the limiting quality level (LQL) under the operating characteristic curve. Sampling plan tables are constructed for the selection of parameters indexed by AQL and LQL for the cases of known and unknown standard deviation. It is shown that the proposed sampling plan significantly reduces the average sample number as compared with the single or the double sampling plan.
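
Under the known-standard-deviation case with an upper specification limit, the per-stage acceptance/rejection probabilities, OC value, and average sample number of a repetitive group plan can be sketched as below. The crude grid search and the choice to evaluate the ASN at AQL are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def rgs_probs(p, n, ka, kr):
    """One-stage probabilities for a known-sigma variables repetitive group
    sampling plan with an upper specification limit U: compute
    v = (U - xbar)/sigma, accept if v >= ka, reject if v < kr, else resample."""
    zp = norm.ppf(1 - p)                       # quality p -> standardized distance to U
    pa = norm.cdf(np.sqrt(n) * (zp - ka))      # P(accept at one stage)
    pr = 1 - norm.cdf(np.sqrt(n) * (zp - kr))  # P(reject at one stage)
    return pa, pr

def oc_and_asn(p, n, ka, kr):
    pa, pr = rgs_probs(p, n, ka, kr)
    return pa / (pa + pr), n / (pa + pr)       # lot acceptance probability, ASN

def search_plan(aql, lql, alpha=0.05, beta=0.10):
    """Crude grid search: minimize ASN (evaluated at AQL here, an assumption)
    subject to the two OC-curve constraints."""
    best = None
    for n in range(2, 40):
        for ka in np.arange(1.0, 3.0, 0.05):
            for kr in np.arange(0.5, ka, 0.1):
                pA_aql, asn = oc_and_asn(aql, n, ka, kr)
                pA_lql, _ = oc_and_asn(lql, n, ka, kr)
                if pA_aql >= 1 - alpha and pA_lql <= beta:
                    if best is None or asn < best[0]:
                        best = (asn, n, round(float(ka), 2), round(float(kr), 2))
    return best    # (ASN at AQL, n, ka, kr)

print(search_plan(aql=0.01, lql=0.05))
```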

A Note on Determination of Sample Size for a Likert Scale

  • Park, Jin-Woo; Jung, Mi-Sook
    • Communications for Statistical Applications and Methods, v.16 no.4, pp.669-673, 2009
  • When a social scientist prepares to conduct a survey, he or she faces the problem of deciding on an appropriate sample size. Sample size is closely connected with cost, time, and the precision of the sample estimate. It is thus important to choose a size appropriate for the survey, but this may be difficult for survey researchers not skilled in sampling theory. In this study we propose a method to determine a sample size under certain assumptions when the quantity of interest is measured by a Likert scale.
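
A generic version of such a rule is the textbook precision-based formula n >= (z * sigma / e)^2, with the standard deviation of a k-point scale bounded by (k - 1)/2. The sketch below implements that conservative bound, which may differ from the assumptions adopted in the paper.

```python
import math
from statistics import NormalDist

def likert_sample_size(margin, points=5, conf=0.95, sigma=None):
    """Sample size so that a conf-level CI for the mean of a k-point Likert
    item has half-width <= margin: n >= (z * sigma / margin)^2.
    If sigma is unknown, use the conservative bound (k - 1) / 2 (worst-case
    spread with all mass at the two endpoints). This is a generic textbook
    rule, not necessarily the assumptions used in the paper."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    sigma = (points - 1) / 2 if sigma is None else sigma
    return math.ceil((z * sigma / margin) ** 2)

# worst-case n for a 5-point item, 95% confidence, precision of +/- 0.2
print(likert_sample_size(margin=0.2))   # 385
```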

Median Ranked Ordering-Set Sample Test for Ordered Alternatives

  • Kim, Dong-Hee; Ock, Bong-Seak
    • Communications for Statistical Applications and Methods, v.15 no.6, pp.947-957, 2008
  • In this paper, we consider the c-sample location problem for ordered alternatives using median ranked ordering-set samples (MROSS). We propose a test statistic using the median of the samples that have the same ranked order in each cycle of the ranked ordering-set sample (ROSS). We obtain the asymptotic properties of the proposed test statistic and its Pitman efficiency with respect to other test statistics. In a simulation study, our proposed test statistic shows good power for the underlying distributions we consider.
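
For context, the classical c-sample test for ordered location alternatives on ordinary random samples is the Jonckheere-Terpstra test sketched below; it is a standard baseline only and is not the MROSS-based statistic proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(samples):
    """Classical Jonckheere-Terpstra test for ordered location alternatives
    (no-ties normal approximation). Shown as a baseline for the c-sample
    ordered-alternative problem; not the MROSS statistic of the paper."""
    J = sum((y > x[:, None]).sum()                 # Mann-Whitney counts over pairs i < j
            for i, x in enumerate(samples)
            for y in samples[i + 1:])
    n = np.array([len(s) for s in samples])
    N = n.sum()
    mean = (N**2 - (n**2).sum()) / 4
    var = (N**2 * (2 * N + 3) - (n**2 * (2 * n + 3)).sum()) / 72
    z = (J - mean) / np.sqrt(var)
    return J, z, 1 - norm.cdf(z)                   # one-sided p-value, increasing trend

rng = np.random.default_rng(5)
groups = [np.asarray(rng.normal(loc, 1, 20)) for loc in (0.0, 0.3, 0.6)]
print(jonckheere_terpstra(groups))
```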

A Note on the Decision of Sample Size by Relative Standard Error in Successive Occasions (계속조사에서 상대표준오차를 이용한 표본크기 결정에 관한 고찰)

  • Han, GeunShik; Lee, Gi-Sung
    • The Korean Journal of Applied Statistics, v.28 no.3, pp.477-483, 2015
  • This study deals with the problem of deciding the sample size by the relative standard error of estimates derived from survey results on successive occasions. The construction-sector population from business survey results is used to calculate quartiles of the relative standard errors of 1,000 samples obtained by simple or stratified random sampling. The sample size at time t achieving the relative standard error observed at time (t-1) was then calculated for each sampling method. As a result, the sample size required for a given relative standard error at time (t-1) differs significantly between simple random sampling and stratified sampling. In addition, we observed differences in sample size depending on how the population is stratified, so careful attention is required when deciding the sample size by the relative standard error of estimates derived from survey results on successive occasions.
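
The simple-random-sampling piece of such a calculation is the textbook rule n0 = (CV / target RSE)^2 with a finite-population correction, sketched below; the stratified counterparts compared in the paper are not reproduced here.

```python
import math

def sample_size_for_rse(cv, target_rse, N=None):
    """Sample size so that a simple-random-sample mean attains a target
    relative standard error: n0 = (cv / target_rse)^2, with the usual
    finite-population correction n = n0 / (1 + n0 / N) when N is given.
    Only the SRS case is shown; the paper also treats stratified designs."""
    n0 = (cv / target_rse) ** 2
    return math.ceil(n0 if N is None else n0 / (1 + n0 / N))

# e.g. population CV of 1.2, target RSE of 3%, population of 20,000 units
print(sample_size_for_rse(cv=1.2, target_rse=0.03, N=20000))
```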

The Unified Framework for AUC Maximizer

  • Jun, Jong-Jun; Kim, Yong-Dai; Han, Sang-Tae; Kang, Hyun-Cheol; Choi, Ho-Sik
    • Communications for Statistical Applications and Methods, v.16 no.6, pp.1005-1012, 2009
  • The area under the curve (AUC) is commonly used as a summary measure of the receiver operating characteristic (ROC) curve, which displays the performance of a set of binary classifiers for all feasible ratios of the costs associated with the true positive rate (TPR) and the false positive rate (FPR). In the bipartite ranking problem, where one has to compare two different observations and decide which one is "better", the AUC measures the probability that the ranking score of a randomly chosen sample from one class is larger than that of a randomly chosen sample from the other class; hence, the function that maximizes the AUC of the bipartite ranking problem differs from the function that maximizes (minimizes) the accuracy (misclassification error rate) of the binary classification problem. In this paper, we develop a unified framework for AUC maximizers, including support vector machines based on maximizing a large margin and logistic regression based on estimating the posterior probability. Moreover, we develop an efficient algorithm for the proposed unified framework. Numerical results show that the proposed unified framework can treat various methodologies successfully.
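
The bipartite-ranking view of the AUC, and one member of the family of pairwise surrogate losses that such unified frameworks cover, can be sketched as follows; the linear scorer, hinge surrogate, and plain gradient descent below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def empirical_auc(s_pos, s_neg):
    """AUC as the fraction of (positive, negative) pairs whose scores are
    ordered correctly (ties count 1/2) -- the bipartite-ranking view."""
    d = s_pos[:, None] - s_neg[None, :]
    return ((d > 0) + 0.5 * (d == 0)).mean()

def fit_linear_auc_maximizer(X_pos, X_neg, lr=0.1, iters=500, margin=1.0):
    """Toy AUC maximizer: minimize a pairwise hinge surrogate of 1 - AUC for
    a linear scorer s(x) = w.x by gradient descent. This is only one member
    of the kind of pairwise-loss family a unified framework would cover
    (SVM-type and logistic-type losses are other members)."""
    n_p, n_n = len(X_pos), len(X_neg)
    w = np.zeros(X_pos.shape[1])
    for _ in range(iters):
        s_p, s_n = X_pos @ w, X_neg @ w
        diff = s_p[:, None] - s_n[None, :]
        active = (margin - diff) > 0                  # pairs still violating the margin
        grad = (active.sum(axis=0) @ X_neg - active.sum(axis=1) @ X_pos) / (n_p * n_n)
        w -= lr * grad
    return w

# toy data: two overlapping Gaussian classes
rng = np.random.default_rng(6)
X_pos = rng.normal(1.0, 1.0, (60, 2))
X_neg = rng.normal(0.0, 1.0, (80, 2))
w = fit_linear_auc_maximizer(X_pos, X_neg)
print(empirical_auc(X_pos @ w, X_neg @ w))
```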