• Title/Summary/Keyword: Order Statistics


A Goodness of Fit Tests Based on the Partial Kullback-Leibler Information with the Type II Censored Data

  • Park, Sang-Un;Lim, Jong-Gun
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2003.10a
    • /
    • pp.233-238
    • /
    • 2003
  • Goodness-of-fit test statistics based on information discrepancy have been shown to perform very well (Vasicek 1976, Dudewicz and van der Meulen 1981, Chandra et al. 1982, Gokhale 1983, Arizono and Ohta 1989, Ebrahimi et al. 1992, etc.). Although such tests are well defined for the non-censored case, the censored case has not been discussed in the literature. We therefore consider a goodness-of-fit test based on the partial Kullback-Leibler (KL) information with type II censored data. We derive the partial KL information between the null distribution function and a nonparametric distribution function, and establish a goodness-of-fit test statistic. We consider the exponential and normal distributions and carry out Monte Carlo simulations to compare the test statistic with some existing tests.

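For context, here is a minimal Python sketch of the classical spacing-based entropy/KL statistic for a complete (uncensored) sample, in the spirit of Vasicek (1976) and Ebrahimi et al. (1992). It is not the paper's partial-KL statistic for type II censoring; the function names and the window size m are illustrative choices.

```python
# Illustrative sketch (not the censored-data statistic of the paper): Vasicek's
# spacing-based entropy estimate and the resulting KL-type test of exponentiality
# for a complete sample, which the partial-KL statistic extends to censoring.
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek's entropy estimate based on m-spacings of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # boundary convention: indices below 1 map to x(1), above n map to x(n)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

def kl_exponentiality_statistic(x, m):
    """KL discrepancy between the sample and the fitted exponential model.

    D = -H_mn + log(mean(x)) + 1; large values indicate lack of fit.
    """
    x = np.asarray(x, dtype=float)
    return -vasicek_entropy(x, m) + np.log(x.mean()) + 1.0

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=50)   # made-up complete sample
print(kl_exponentiality_statistic(sample, m=4))
```

In practice the statistic is compared with simulated critical values, which is also how the censored-data version in the paper is calibrated.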

LH-Moments of Some Distributions Useful in Hydrology

  • Murshed, Md. Sharwar;Park, Byung-Jun;Jeong, Bo-Yoon;Park, Jeong-Soo
    • Communications for Statistical Applications and Methods
    • /
    • v.16 no.4
    • /
    • pp.647-658
    • /
    • 2009
  • It is already known from previous studies that flood data tend to have heavier tails. Therefore, to predict future extreme levels, a good description of the tail behavior of extreme data is required. The LH-moments estimation method, a generalized form of L-moments, is a useful way of characterizing the upper part of a distribution. LH-moments are based on linear combinations of higher order statistics. In this study, we formulate the LH-moments of five distributions useful in hydrology: two types of three-parameter kappa distributions, the beta-κ distribution, the beta-p distribution, and a generalized Gumbel distribution. Using LH-moments reduces the undue influence that a small sample may have on the estimation of large return-period events.
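
As an illustration of "linear combinations of higher order statistics", the following Python sketch computes the first two sample LH-moments using direct estimators of the form given by Wang (1997), in which the order statistics are weighted by binomial coefficients; with eta = 0 they reduce to the ordinary sample L-moments. The simulated flow data and the function name are hypothetical.

```python
# Illustrative sketch: direct sample estimators of the first two LH-moments,
# following the binomial-coefficient weighting of the order statistics
# attributed to Wang (1997). eta = 0 gives the ordinary sample L-moments.
import numpy as np
from scipy.special import comb

def sample_lh_moments(x, eta=0):
    """Return (lambda1, lambda2) sample LH-moments of level eta."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)                       # ranks of the order statistics
    lam1 = np.sum(comb(i - 1, eta) * x) / comb(n, eta + 1)
    lam2 = (np.sum((comb(i - 1, eta + 1) - comb(i - 1, eta) * comb(n - i, 1)) * x)
            / (2.0 * comb(n, eta + 2)))
    return lam1, lam2

rng = np.random.default_rng(1)
flows = rng.gumbel(loc=100.0, scale=20.0, size=60)   # hypothetical annual maxima
print(sample_lh_moments(flows, eta=0))               # ordinary L-moments
print(sample_lh_moments(flows, eta=2))               # L2-moments, upper-tail weighted
```

Higher eta puts more weight on the largest observations, which is why LH-moments are attractive for the upper-tail fitting described above.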

A Note on Recent Changes in the Age at First Marriage (최근 초혼연령의 변화에 관한 소고)

  • 황대희;고갑석
    • Korea Journal of Population Studies
    • /
    • v.6 no.1
    • /
    • pp.115-126
    • /
    • 1983
  • In order for handicapped people to lead a more humane life, statistics on them are needed to develop appropriate national policy. However, it is very difficult to obtain baseline statistics on a regular or occasional basis, mainly because families tend to conceal the existence of such a member in the household. As a result, statistics on the handicapped population are inaccurate and unsatisfactory. Such statistics must be produced periodically, in a timely manner, and with accuracy. This study therefore proposes five methods which, we believe, can produce reliable statistics on the handicapped population: 1) strengthening the recording of handicapped information in the registration system, 2) including items related to handicapped information in the population census, 3) improving the physically handicapped population survey scheme, 4) utilizing hospital patients' records to develop the statistics, and 5) estimation through the labor force survey.


Introductory Statistics textbooks: crisis or opportunity? (교양 통계학 교재: 위기인가? 기회인가?)

  • Choi, Sookhee;Han, Kyungsoo
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.1
    • /
    • pp.105-117
    • /
    • 2022
  • Recently, the number of students taking basic statistics courses in liberal arts programs at universities nationwide has been increasing significantly. Students who study statistics for only one semester are more likely to become consumers rather than producers of statistical analysis in the future. What such consumers need is statistical literacy and thinking skills rather than statistical methods. This paper deals with what points should be considered in order to develop textbooks that improve statistical thinking.

Robust extreme quantile estimation for Pareto-type tails through an exponential regression model

  • Richard Minkah;Tertius de Wet;Abhik Ghosh;Haitham M. Yousof
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.6
    • /
    • pp.531-550
    • /
    • 2023
  • The estimation of extreme quantiles is one of the main objectives of statistics of extremes (which deals with the estimation of rare events). In this paper, a robust estimator of the extreme quantile of a heavy-tailed distribution is considered. The estimator is obtained through the minimum density power divergence criterion on an exponential regression model. The proposed estimator is compared with two extreme quantile estimators from the literature in a simulation study. The results show that the proposed estimator is stable with respect to the choice of the number of top order statistics and has smaller bias and mean squared error than the existing extreme quantile estimators. Practical application of the proposed estimator is illustrated with data from the pedochemical and insurance industries.
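
For comparison, the sketch below implements the classical (non-robust) Hill/Weissman extreme quantile estimator built from the top k order statistics, the kind of baseline whose sensitivity to k the paper's robust estimator is designed to reduce. It is not the minimum density power divergence estimator itself, and the simulated claims data are made up.

```python
# Illustrative baseline (not the paper's robust MDPD estimator): the Hill
# estimator of the tail index and the Weissman-type extreme quantile estimator
# built from the top k order statistics of a heavy-tailed sample.
import numpy as np

def hill_weissman_quantile(x, k, p):
    """Estimate the (1 - p) extreme quantile from the top k order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    top = x[n - k:]                       # k largest observations
    threshold = x[n - k - 1]              # (n - k)-th order statistic
    gamma_hill = np.mean(np.log(top) - np.log(threshold))   # Hill estimator
    return threshold * (k / (n * p)) ** gamma_hill          # Weissman (1978)

rng = np.random.default_rng(2)
claims = (1.0 / rng.uniform(size=1000)) ** 0.5   # Pareto-type sample, gamma = 0.5
for k in (50, 100, 200):                          # sensitivity to the choice of k
    print(k, hill_weissman_quantile(claims, k, p=0.001))
```

Printing the estimate over several values of k shows the kind of instability in k that the robust estimator in the paper is reported to dampen.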

Quadratic GARCH Models: Introduction and Applications (이차형식 변동성 Q-GARCH 모형의 비교연구)

  • Park, Jin-A;Choi, Moon-Sun;Hwan, Sun-Young
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.1
    • /
    • pp.61-69
    • /
    • 2011
  • In the GARCH context, the conditional variance (or volatility) is a quadratic function of the observation process. Examining standard ARCH/GARCH and their variant models in terms of quadratic formulations, it is interesting to note that most models in the GARCH context contain neither a first-order term nor an interaction term. In this paper, we consider three models possessing first-order and/or interaction terms in the formulation of the conditional variance, viz., quadratic GARCH, absolute value GARCH, and bilinear GARCH processes. These models are investigated with a view to model comparison and application to financial time series in Korea.
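
The following sketch contrasts the standard GARCH(1,1) variance recursion with a quadratic-GARCH recursion that adds a first-order term in the lagged shock, which is one way such a term can enter; the parameter values and simulated shocks are arbitrary, and the exact parameterizations studied in the paper may differ.

```python
# Illustrative sketch: conditional-variance recursions for a standard GARCH(1,1)
# and a quadratic GARCH(1,1) that adds a first-order (asymmetry) term
# gamma * e_{t-1}. Parameter values are arbitrary, for illustration only.
import numpy as np

def garch_variance(eps, omega, alpha, beta, gamma=0.0):
    """Conditional variances; gamma != 0 gives the quadratic-GARCH recursion."""
    h = np.empty_like(eps)
    h[0] = omega / (1.0 - alpha - beta)    # plain-GARCH unconditional variance as start
    for t in range(1, len(eps)):
        h[t] = omega + alpha * eps[t - 1] ** 2 + gamma * eps[t - 1] + beta * h[t - 1]
    return h

rng = np.random.default_rng(3)
eps = rng.normal(size=500)                 # placeholder return shocks
h_garch = garch_variance(eps, omega=0.05, alpha=0.1, beta=0.85)
h_qgarch = garch_variance(eps, omega=0.05, alpha=0.1, beta=0.85, gamma=-0.04)
print(h_garch[:5], h_qgarch[:5])
```

A negative gamma makes negative shocks raise the variance more than positive shocks of the same size, which is the asymmetry the first-order term is meant to capture.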

Other approaches to bivariate ranked set sampling

  • Al-Saleh, Mohammad Fraiwan;Alshboul, Hadeel Mohammad
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.3
    • /
    • pp.283-296
    • /
    • 2018
  • Ranked set sampling, as introduced by McIntyre (Australian Journal of Agriculture Research, 3, 385-390, 1952), dealt with the estimation of the mean of one population. To deal with two or more variables, different forms of bivariate and multivariate ranked set sampling have been suggested. For a technique to be useful, it should be easy to implement in practice. Bivariate ranked set sampling, as introduced by Al-Saleh and Zheng (Australian & New Zealand Journal of Statistics, 44, 221-232, 2002), is not easy to implement in practice, because it requires the judgment ranking of each combination of the order statistics of the two characteristics. This paper investigates two modifications that make the method easier to use. The first modification is based on ranking one variable and noting the rank of the other variable for one cycle, then doing the reverse for another cycle. The second approach is based on ranking one variable and giving the second variable the same rank (concomitant order statistic) for one cycle, then doing the reverse for the other cycle. The two procedures are investigated for the estimation of the means of some well-known distributions. It is shown that the suggested approaches can be used in practice and can be more efficient than simple random sampling (SRS). A real data set is used to illustrate the procedure.
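
Here is a rough Python sketch of the second modification's idea: one cycle ranks on one variable while the paired variable keeps the same rank as its concomitant, and the roles are reversed in the next cycle. The bivariate normal population, set size, and use of true values for ranking (in place of judgment ranking) are simplifying assumptions.

```python
# Illustrative sketch of the concomitant-based modification: in one cycle the
# units are ranked on X and the paired Y keeps X's rank (its concomitant);
# in the next cycle the roles of X and Y are reversed.
import numpy as np

def concomitant_rss_cycle(rng, m, rank_on_x=True):
    """One RSS cycle of set size m; returns m selected (x, y) pairs."""
    selected = []
    for r in range(m):                               # r-th ranked unit from the r-th set
        x = rng.normal(size=m)
        y = 0.8 * x + rng.normal(scale=0.6, size=m)  # correlated bivariate population
        order = np.argsort(x if rank_on_x else y)
        idx = order[r]                               # unit ranked (r+1) on the ranking variable
        selected.append((x[idx], y[idx]))            # the other variable is its concomitant
    return np.array(selected)

rng = np.random.default_rng(4)
cycle1 = concomitant_rss_cycle(rng, m=4, rank_on_x=True)    # rank on X
cycle2 = concomitant_rss_cycle(rng, m=4, rank_on_x=False)   # reverse: rank on Y
sample = np.vstack([cycle1, cycle2])
print(sample.mean(axis=0))                          # estimates of (mu_x, mu_y)
```

Only one variable needs to be judgment-ranked in each cycle, which is the practical simplification over the original bivariate scheme.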

Identification of risk factors and development of the nomogram for delirium

  • Shin, Min-Seok;Jang, Ji-Eun;Lee, Jea-Young
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.4
    • /
    • pp.339-350
    • /
    • 2021
  • In medical research, the risk factors associated with human diseases need to be identified in order to predict incidence rates and determine treatment plans. Logistic regression analysis is primarily used to select risk factors. However, individuals who are unfamiliar with statistics have trouble using the outcomes of these methods. In this study, we develop a nomogram that graphically represents the numerical association between the disease and its risk factors, in order to identify the risk factors for delirium and to interpret and use the results more effectively. Using a logistic regression model, we identify risk factors related to delirium, construct a nomogram, and predict incidence rates. Additionally, we verify the developed nomogram using a receiver operating characteristic (ROC) curve and a calibration plot. Nursing home, stroke/epilepsy, metabolic abnormality, hemodynamic instability, and analgesics were selected as risk factors. In validating the nomogram built with these factors, the AUC was 0.893 for the training set and 0.717 for the test set. In the calibration plot, the coefficient of determination was 0.820. Using the nomogram developed in this paper, health professionals can easily predict the incidence rate of delirium for individual patients. Based on this information, the nomogram can serve as a useful tool to establish an individual patient's treatment plan.
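
A minimal sketch of the generic workflow behind such a nomogram: fit a logistic regression on simulated binary risk factors, check discrimination with the ROC AUC on training and test sets, and read off the coefficients from which nomogram point scales are typically drawn. The data, variable names, and coefficient values below are entirely hypothetical and unrelated to the study's delirium data.

```python
# Illustrative sketch of the workflow behind a logistic-regression nomogram:
# fit the model on hypothetical binary risk factors, check discrimination with
# the ROC AUC, and inspect the coefficients that a nomogram would translate
# into point scales. All data and names here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 800
X = rng.integers(0, 2, size=(n, 3)).astype(float)    # e.g. nursing_home, stroke, analgesics
logit = -2.0 + 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))    # simulated binary outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

print("train AUC:", roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1]))
print("test AUC:",  roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("coefficients (basis of the nomogram point scales):", model.coef_)
```

The nomogram itself is just a graphical re-expression of these coefficients, so a clinician can sum the points for a patient's factors and read off the predicted probability.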

Power transformation in quasi-likelihood innovations for GARCH volatility (금융 시계열 변동성 추정을 위한 준-우도 이노베이션의 멱변환)

  • Sunah, Chung;Sun Young, Hwang;Sung Duck, Lee
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.6
    • /
    • pp.755-764
    • /
    • 2022
  • This paper is concerned with power transformations in estimating GARCH volatility. To handle a semi-parametric case in which the exact likelihood is not known, the quasi-likelihood (QL) method rather than the maximum likelihood method is investigated, estimating the GARCH model by maximizing the associated information criterion. A power transformation is introduced into the innovations generating the QL estimating functions, and the optimum power is then selected by maximizing the profile information. A combination of two different power transformations is also studied in order to increase parameter estimation efficiency. Nine domestic stock price series are analyzed in order to illustrate the main idea of the paper. The data span includes the Covid-19 pandemic period, in which financial time series are highly volatile.
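
As a baseline for the power-transformed estimating functions discussed above, the sketch below writes out the standard Gaussian quasi-log-likelihood of a GARCH(1,1) model and maximizes it numerically; the simulated series, starting values, and optimizer choice are illustrative, and the paper's power transformation and profile-information step are not implemented here.

```python
# Illustrative baseline (not the paper's power-transformed QL): the standard
# Gaussian quasi-log-likelihood for a GARCH(1,1) model, maximized numerically.
import numpy as np
from scipy.optimize import minimize

def neg_quasi_loglik(params, y):
    """Negative Gaussian quasi-log-likelihood of GARCH(1,1) for returns y."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                           # keep the variance recursion stationary
    h = np.empty_like(y)
    h[0] = np.var(y)
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(h) + y ** 2 / h)

rng = np.random.default_rng(6)
y = rng.normal(size=1000) * 0.5                 # placeholder return series
fit = minimize(neg_quasi_loglik, x0=[0.1, 0.05, 0.8], args=(y,),
               method="Nelder-Mead")
print(fit.x)                                     # (omega, alpha, beta) estimates
```

The paper's idea, as described in the abstract, is to replace the squared-residual term in such estimating functions with a power-transformed version and to profile over the power; that extra layer is omitted here.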

Change points detection for nonstationary multivariate time series

  • Yeonjoo Park;Hyeongjun Im;Yaeji Lim
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.4
    • /
    • pp.369-388
    • /
    • 2023
  • In this paper, we develop a two-step procedure that detects and estimates the position of structural changes in multivariate nonstationary time series, either in mean parameters or in second-order structures. We first investigate the presence of a mean structural change by monitoring the data through an aggregated cumulative sum (CUSUM) type statistic, a sequential procedure identifying the likely position of the change point in the trend. If no mean change point is detected, the proposed method proceeds to scan for a second-order structural change by modeling the multivariate nonstationary time series with a multivariate locally stationary wavelet process, allowing for time-localized auto-correlation and cross-dependence. Under this framework, the estimated dynamic spectral matrices derived from the local wavelet periodogram capture the time-evolving, scale-specific auto- and cross-dependence features of the data. We then monitor for the change point in a lower-dimensional approximation of the spectral matrices over time by applying dynamic principal component analysis. Unlike existing methods, which require prior information on the type of change (mean or covariance structure) as an input, the proposed algorithm outputs both the type of change and the estimated location of its occurrence. The performance of the proposed method is demonstrated in simulations and in the analysis of two real finance datasets.
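
A small sketch of the idea behind the first step only: an aggregated CUSUM scan in which each coordinate's standardized CUSUM is squared and summed, and the change point is estimated where the aggregated statistic peaks. The exact form of the paper's aggregated statistic and its calibration/thresholding are not reproduced, and the simulated data are arbitrary.

```python
# Illustrative sketch of an aggregated CUSUM scan for a mean change in a
# multivariate series: standardize each coordinate's CUSUM, sum the squares
# across coordinates, and estimate the change point at the peak.
# Thresholding/calibration of the test is omitted.
import numpy as np

def aggregated_cusum_changepoint(X):
    """X: (n, d) array. Returns (aggregated statistic path, estimated change index)."""
    n, d = X.shape
    centered = X - X.mean(axis=0)
    cusum = np.cumsum(centered, axis=0)               # partial sums per coordinate
    scale = X.std(axis=0, ddof=1) * np.sqrt(n)
    agg = np.sum((cusum / scale) ** 2, axis=1)        # aggregate over coordinates
    k_hat = int(np.argmax(agg[:-1])) + 1              # candidate change point
    return agg, k_hat

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 4))
X[180:, :2] += 1.0                                    # mean shift in two coordinates
agg, k_hat = aggregated_cusum_changepoint(X)
print("estimated change point:", k_hat)
```

If no pronounced peak is found, the procedure described above would move on to the wavelet-based scan for second-order changes, which is not sketched here.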