• Title/Summary/Keyword: Non-Parametric Statistics

Estimation and Sensitivity Analysis on the Effect of Job Training for Non-Regular Employees (비정규직 직업훈련효과 추정과 민감도 분석)

  • Lee, Sang-Jun
    • The Korean Journal of Applied Statistics, v.25 no.1, pp.163-181, 2012
  • This paper studies the effect of job training for non-regular employees in the Korean labor market. Using an economically active population data set from Statistics Korea, we apply a non-parametric matching and sensitivity analysis method to measure the effect of training for non-regular employees and to examine the impact of unobservable variables or confounding factors on the selection effect and the outcome effect. In our empirical results, we conclude that training for non-regular employees has a stronger employment effect (the chance of obtaining a regular job) than a wage effect; in addition, unobservable variables or confounding factors do not exercise a statistically strong influence on the baseline ATT.
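
A minimal sketch of how a non-parametric matching estimate of the ATT can be computed, assuming a pandas DataFrame with hypothetical treatment, outcome, and covariate column names; this is illustrative only and not the paper's exact estimator or its sensitivity analysis.

```python
# Minimal sketch: ATT via nearest-neighbour matching on an estimated propensity
# score. `df` is a pandas DataFrame; the treatment, outcome, and covariate
# names passed by the caller are hypothetical. Not the paper's exact estimator.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_propensity_matching(df, treat_col, outcome_col, covariates):
    X = df[covariates].to_numpy()
    t = df[treat_col].to_numpy()
    y = df[outcome_col].to_numpy()
    # 1. Estimate propensity scores P(T = 1 | X) with a logistic regression.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated = np.where(t == 1)[0]
    control = np.where(t == 0)[0]
    # 2. Match each treated unit to the control unit with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]
    # 3. ATT = average outcome difference over matched pairs.
    return float(np.mean(y[treated] - y[matched]))
```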

Literature Review on the Statistical Methods in KSQM for 50 Years (품질경영학회 50주년 특별호: 통계적 기법 분야 연구 리뷰)

  • Lim, Yong Bin; Kim, Sang Ik; Lee, Sang Bok; Jang, Dae Heung
    • Journal of Korean Society for Quality Management, v.44 no.2, pp.221-244, 2016
  • Purpose: This research reviews the papers on statistical methods published in the Journal of the Korean Society for Quality Control (KSQC) and the Journal of the Korean Society for Quality Management (KSQM) since 1965. The literature review covers four fields of statistical methods, and the published articles are categorized into several sub-areas within each field. Methods: The reviewed articles are classified into four main categories: probability model and estimation; Bayesian analysis and non-parametric analysis; regression and time series analysis; and application of data analysis. We examine the contents and relationships of the published articles in the sub-areas of each category. Results: We summarize the reviewed papers in chronological road-maps for each sub-area and outline the relations among connected papers. Comments on the contents and contributions of the reviewed papers are also provided. Conclusion: A wide range of issues in applied statistical methods has been studied and published over the past 50 years, and much worthwhile work has been achieved, in both theory and application, on statistical methods for improving quality in the manufacturing and service industries. The contents of this review also suggest future directions for research on statistical quality management methods.

Estimation of smooth monotone frontier function under stochastic frontier model (확률프런티어 모형하에서 단조증가하는 매끄러운 프런티어 함수 추정)

  • Yoon, Danbi; Noh, Hohsuk
    • The Korean Journal of Applied Statistics, v.30 no.5, pp.665-679, 2017
  • When measuring productive efficiency, it is often necessary to know the production frontier function, which gives the maximum possible output of production units as a function of inputs. Canonical parametric forms of the frontier function were initially considered under the framework of the stochastic frontier model; however, several nonparametric methods have been developed over the last decade. Efforts have recently been made to impose shape constraints such as monotonicity and concavity on the non-parametric estimation of the frontier function; however, most existing methods in that direction suffer from unnecessary non-smooth points in the estimated frontier. In this paper, we propose methods to estimate a smooth, monotone frontier function for stochastic frontier models and investigate the effect of imposing a monotonicity constraint on the estimation of the frontier function and of the finite-dimensional parameters of the model. Simulation studies suggest that imposing the constraint provides better performance in estimating the frontier function, especially when the sample size is small or moderate. However, no apparent gain was observed for the estimation of the parameters of the error distribution, regardless of the sample size.
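
As a rough illustration of imposing monotonicity on a frontier estimate (not the authors' stochastic-frontier method), the sketch below builds a crude monotone envelope from the running maximum of outputs over sorted inputs and lightly smooths it; every name and setting here is an assumption.

```python
# Illustrative sketch only: a crude monotone "frontier" as the running maximum
# of outputs over sorted inputs, lightly smoothed with a moving average.
# This is NOT the paper's estimator; it only conveys the idea of a monotone
# frontier estimate.
import numpy as np

def monotone_frontier(x, y, window=5):
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    # Running maximum guarantees a monotone non-decreasing envelope.
    env = np.maximum.accumulate(ys)
    # Simple moving-average smoothing of the envelope.
    kernel = np.ones(window) / window
    smooth = np.convolve(env, kernel, mode="same")
    # Re-impose monotonicity after smoothing.
    return xs, np.maximum.accumulate(smooth)
```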

Non-parametric approach for the grouped dissimilarities using the multidimensional scaling and analysis of distance (다차원척도법과 거리분석을 활용한 그룹화된 비유사성에 대한 비모수적 접근법)

  • Nam, Seungchan; Choi, Yong-Seok
    • The Korean Journal of Applied Statistics, v.30 no.4, pp.567-578, 2017
  • Grouped multivariate data can be tested for differences between two or more groups using multivariate analysis of variance (MANOVA). However, this method cannot be used if several assumptions of MANOVA are violated. In this case, multidimensional scaling (MDS) and analysis of distance (AOD) can be applied to grouped dissimilarities based on various distances. A permutation test is a non-parametric method that can also be used to test differences between groups. MDS is used to calculate the coordinates of observations from dissimilarities, and AOD is useful for finding group structure using the coordinates. In particular, AOD is mathematically related to MANOVA when the dissimilarities are computed with the Euclidean distance. In this paper, we study the between-group and within-group structure by applying MDS and AOD to grouped dissimilarities. In addition, we propose a new test statistic for the permutation test that uses the group structure. Finally, we investigate the relationship between AOD and MANOVA for dissimilarities based on the Euclidean distance.
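
A minimal sketch of a permutation test on group structure recovered from a dissimilarity matrix, using scikit-learn's metric MDS (SMACOF) for the coordinates and a between-group sum of squares as the statistic; the paper's AOD-based statistic may differ, so treat this as an assumption-laden illustration.

```python
# Minimal sketch: permutation test for group differences based on coordinates
# recovered from a dissimilarity matrix D by metric MDS (SMACOF, scikit-learn).
# The between-group sum of squares of the coordinates is the test statistic.
import numpy as np
from sklearn.manifold import MDS

def mds_permutation_test(D, labels, n_perm=999, random_state=0):
    rng = np.random.default_rng(random_state)
    labels = np.asarray(labels)
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=random_state).fit_transform(D)

    def between_group_ss(X, g):
        grand = X.mean(axis=0)
        return sum(np.sum(g == k) * np.sum((X[g == k].mean(axis=0) - grand) ** 2)
                   for k in np.unique(g))

    observed = between_group_ss(coords, labels)
    perm = np.array([between_group_ss(coords, rng.permutation(labels))
                     for _ in range(n_perm)])
    p_value = (1 + np.sum(perm >= observed)) / (n_perm + 1)
    return observed, p_value
```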

A simulation comparison on the analysing methods of Likert type data (모의실험에 의한 리커트형 설문분석 방법의 비교)

  • Kim, Hyun Chul; Choi, Seung Kyoung; Choi, Dong Ho
    • Journal of the Korean Data and Information Science Society, v.27 no.2, pp.373-380, 2016
  • Even though Likert-type data are on an ordinal scale, many researchers treat them as interval-scale data and adopt parametric methods. In this research, simulations were used to find a proper analysis method for Likert-type data. The locations and response distributions of five-point Likert-type samples with diverse distributions were evaluated. To compare sample locations, we considered a parametric method and a non-parametric method: the t-test and the Mann-Whitney test, respectively. In addition, to compare response distributions, we employed the chi-squared test and the Kolmogorov-Smirnov test. We assessed the performance of the four methods by comparing their Type I error rates and statistical power.
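
A minimal simulation sketch in the spirit of the study above: Type I error of the t-test versus the Mann-Whitney test on five-point Likert samples drawn from the same response distribution; the response probabilities, sample size, and number of replications are hypothetical, not the paper's settings.

```python
# Minimal simulation sketch: Type I error of the t-test vs. the Mann-Whitney
# test on 5-point Likert samples drawn from the same (hypothetical) distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
probs = [0.1, 0.2, 0.4, 0.2, 0.1]          # hypothetical 5-point response distribution
n, n_sim, alpha = 30, 2000, 0.05
reject_t = reject_mw = 0

for _ in range(n_sim):
    x = rng.choice(np.arange(1, 6), size=n, p=probs)
    y = rng.choice(np.arange(1, 6), size=n, p=probs)   # same distribution -> H0 true
    reject_t += stats.ttest_ind(x, y).pvalue < alpha
    reject_mw += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha

print("t-test Type I error:      ", reject_t / n_sim)
print("Mann-Whitney Type I error:", reject_mw / n_sim)
```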

The Significance Test on the AHP-based Alternative Evaluation: An Application of Non-Parametric Statistical Method (AHP를 이용한 대안 평가의 유의성 분석: 비모수적 통계 검정 적용)

  • Park, Joonsoo; Kim, Sung-Chul
    • The Journal of Society for e-Business Studies, v.22 no.1, pp.15-35, 2017
  • The AHP-based weighted-sum evaluation method is widely used in feasibility analysis and alternative selection. Final scores are given as weighted sums, and the alternative with the largest score is selected. With two alternatives, as in feasibility analysis, a final score greater than 0.5 determines the selection, but the question remains of how large is large enough. KDI suggested the concept of a 'grey area' for scores between 0.45 and 0.55, in which decisions are to be made with caution, but the concept lacks theoretical background. Some studies introduced statistical testing to answer this question; they assumed certain probability distributions but did not justify their validity. We examine various cases of weighted-sum evaluation scores and show why statistical testing has to be introduced. We suggest a non-parametric testing procedure that does not assume a specific distribution. A case study is conducted to analyze the validity of the suggested testing procedure. We conclude with remarks on the implications of the analysis and directions for future research.
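
One possible distribution-free check, not necessarily the paper's procedure, is a Wilcoxon signed-rank test of whether evaluators' weighted scores for an alternative exceed the indifference point 0.5; the score values below are hypothetical.

```python
# Illustrative sketch (not the paper's exact procedure): with two alternatives,
# each evaluator produces a weighted AHP score for alternative A in [0, 1]; a
# distribution-free Wilcoxon signed-rank test checks whether these scores
# exceed the indifference point 0.5. The score values are hypothetical.
import numpy as np
from scipy import stats

scores_A = np.array([0.53, 0.48, 0.56, 0.51, 0.52, 0.49, 0.55, 0.54])
res = stats.wilcoxon(scores_A - 0.5, alternative="greater")
print(f"Wilcoxon statistic = {res.statistic:.1f}, p-value = {res.pvalue:.4f}")
```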

Comparison of Discriminant Analyses for Consumers' Taste Grade on Hanwoo (한우 맛 등급 판별방법 비교 연구)

  • Kim, Jae-Hee; Seo, Gu-Re-Oun-Den-Nim
    • The Korean Journal of Applied Statistics, v.21 no.6, pp.969-980, 2008
  • This paper compares four methods (linear, quadratic, canonical, and non-parametric discriminant analyses) for discriminating consumers' taste grades of Hanwoo using sensory variables such as tenderness, juiciness, flavor, and overall acceptability, based on a Consumer Sensory Survey. The classification ability of each method is measured and compared by the resubstitution error rate.
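
A minimal sketch of such a comparison by resubstitution error rate, using LDA, QDA, and k-nearest neighbours as a stand-in for the non-parametric discriminant method (canonical discriminant analysis is omitted); the data frame, feature names, and target column are hypothetical.

```python
# Minimal sketch: resubstitution error of LDA, QDA, and k-NN (a stand-in for
# the non-parametric discriminant method). `df`, the feature names, and the
# target column are hypothetical placeholders for the sensory-survey data.
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier

def resubstitution_errors(df, features, target):
    X, y = df[features], df[target]
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
        "k-NN (non-parametric)": KNeighborsClassifier(n_neighbors=5),
    }
    # Error rate when the training data are reclassified by the fitted model.
    return {name: 1 - model.fit(X, y).score(X, y) for name, model in models.items()}

# Example call with hypothetical column names:
# errors = resubstitution_errors(df, ["tenderness", "juiciness", "flavor"], "grade")
```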

Determinacy on a Maximum Resolution in Wavelet Series

  • Park, Chun-Gun; Kim, Yeong-Hwa; Yang, Wan-Youn
    • Journal of the Korean Data and Information Science Society, v.15 no.2, pp.467-476, 2004
  • Recently, approximation by a wavelet series has been developed for the analysis of an unknown function. Most articles have studied thresholding and shrinkage methods for the wavelet coefficients based on (non-)parametric and Bayesian methods, with the sample size taken as the maximum resolution of the wavelet series. In this paper, we focus only on the choice of the maximum resolution in a wavelet series, regardless of the sample size. We propose a Bayesian approach to the choice of the maximum resolution based on a linear combination of the wavelet basis functions.
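
The sketch below only illustrates the quantity being chosen, namely the maximum decomposition level of a wavelet series, using PyWavelets on a synthetic signal; it is not the paper's Bayesian selection rule, and all settings are assumptions.

```python
# Illustrative only: effect of the chosen maximum decomposition level on a
# truncated wavelet reconstruction (PyWavelets). Not the paper's Bayesian rule.
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 512)
signal = np.sin(6 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

for max_level in (2, 4, 6):
    coeffs = pywt.wavedec(signal, "db4", level=max_level)
    coeffs[-1] = np.zeros_like(coeffs[-1])      # drop the finest detail level
    recon = pywt.waverec(coeffs, "db4")[: signal.size]
    mse = np.mean((recon - signal) ** 2)
    print(f"max level {max_level}: reconstruction MSE = {mse:.4f}")
```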

DD-plot for Detecting the Out-of-Control State in Multivariate Process (다변량공정에서 이상상태를 탐지하기 위한 DD-plot)

  • Jang, Dae-Heung; Yi, Seongbaek; Kim, Youngil
    • The Korean Journal of Applied Statistics, v.26 no.2, pp.281-290, 2013
  • It is well known that the DD-plot is a useful graphical tool for non-parametric classification. In this paper, we propose another use of the DD-plot: detecting the out-of-control state in a multivariate process. We suggest a dynamic version of the DD-plot and an accompanying quality index plot for this purpose.
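
A minimal DD-plot-style sketch using Mahalanobis depth: each new observation's depth with respect to an in-control reference sample is plotted against its depth with respect to the current sample, and points drifting off the 45-degree line suggest a shift. This is illustrative only and not the authors' dynamic DD-plot; the simulated data and depth choice are assumptions.

```python
# Minimal DD-plot sketch with Mahalanobis depth: depth of current observations
# w.r.t. an in-control reference sample vs. depth w.r.t. the current sample.
import numpy as np
import matplotlib.pyplot as plt

def mahalanobis_depth(points, sample):
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", points - mu, cov_inv, points - mu)
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(0)
reference = rng.multivariate_normal([0, 0], np.eye(2), size=200)   # in-control
current = rng.multivariate_normal([1, 0.5], np.eye(2), size=100)   # possibly shifted

depth_ref = mahalanobis_depth(current, reference)
depth_cur = mahalanobis_depth(current, current)
plt.scatter(depth_ref, depth_cur)
plt.plot([0, 1], [0, 1], linestyle="--")   # 45-degree line: identical distributions
plt.xlabel("depth w.r.t. reference sample")
plt.ylabel("depth w.r.t. current sample")
plt.title("DD-plot (illustrative)")
plt.show()
```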

Methodological Problems in Information Retrieval Research (정보검색 연구의 방법론에 관한 고찰)

  • 이명희
    • Journal of the Korean BIBLIA Society for library and Information Science, v.7 no.1, pp.231-246, 1994
  • A major problem for information retrieval research in the past three decades has been methodology, even though some progress has been made in obtaining useful results from methodologically sound experiments. Within a methodology, potential problems include artificial data generated by the researcher, small sample sizes, and the interpretation of findings. Critics have pointed out that there is room for improving the methodology of information retrieval research: using existing data, having a large enough sample size, including large numbers of search queries, introducing more control over variables, utilizing more appropriate performance measures, conducting tests carefully, and evaluating findings properly. Relevance judgments depend entirely on the perception of the user and on the situation of the moment. In an experiment, the best judge of relevance is a user with a well-defined information need. Normally more than two categories for relevance judgments are desirable because there are degrees of relevance. In experimental design, careful control of variables is needed for internal validity. When no single database exists for comparison, existing operational databases should be used cautiously. Careful control of variations in search queries, inter-searcher consistency, intra-searcher consistency, and search strategies is necessary. Parametric statistics, which require rigid assumptions, are not appropriate in information retrieval research; non-parametric statistics, which require few assumptions, are necessary. In particular, the sign test and the Wilcoxon test are good alternatives.
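
A minimal sketch of the recommended non-parametric comparisons: the sign test and the Wilcoxon signed-rank test applied to two systems' per-query precision values; the precision values below are hypothetical.

```python
# Minimal sketch: sign test and Wilcoxon signed-rank test on two retrieval
# systems' per-query precision values. The precision values are hypothetical.
import numpy as np
from scipy import stats

precision_A = np.array([0.60, 0.55, 0.72, 0.40, 0.65, 0.58, 0.70, 0.45])
precision_B = np.array([0.52, 0.50, 0.68, 0.42, 0.60, 0.55, 0.66, 0.44])

diff = precision_A - precision_B
nonzero = diff[diff != 0]                      # sign test drops tied queries
sign_p = stats.binomtest(int(np.sum(nonzero > 0)), n=nonzero.size, p=0.5).pvalue
wilcoxon_p = stats.wilcoxon(precision_A, precision_B).pvalue
print(f"sign test p = {sign_p:.4f}, Wilcoxon p = {wilcoxon_p:.4f}")
```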
