• Title/Summary/Keyword: Robust Statistics


Broker-Dealer Competition in the Korean Financial Securities Markets

  • Gwon, Jae-Hyun
    • The Journal of Industrial Distribution & Business
    • /
    • v.9 no.4
    • /
    • pp.19-26
    • /
    • 2018
  • Purpose - This study measures how competitive securities broker-dealers are in the Korean financial markets. It tests whether the markets have been perfectly competitive or monopolistic since the global financial crisis of 2008. Research design, data, and methodology - We apply the method developed by Panzar and Rosse (1987), the H-statistic, which offers an index of competitiveness as well as statistical tests. The dataset is retrieved mainly from the quarterly statements of financial services companies in the Financial Statistics Information System of the Financial Supervisory Service. General information on officers and employees is utilized in addition to the balance sheets and income statements of securities companies. Results - The H-statistic for 2009-2015 is about 0.7, a robust estimate regardless of model specification (full trans-log, partial trans-log, and Cobb-Douglas regression equations). The H-statistic computed for each year in a similar way varies between 0.3 and 0.9. Conclusions - Since the global financial crisis, the H-statistic indicates that the securities broker-dealer markets in Korea are neither perfectly competitive nor monopolistic; rather, the evidence points to monopolistic competition. The trend in annual H-statistics leads to the same conclusion, although the yearly results are not as stable as the overall H-statistic implies.
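The Panzar-Rosse H-statistic is the sum of the elasticities of a firm's revenue with respect to its input prices, estimated from a log-log revenue regression. A minimal sketch on simulated data (the three input-price series, coefficients, and sample size are illustrative assumptions, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# log input prices: e.g. funding, labour, and capital costs (simulated)
w = rng.normal(size=(n, 3))
beta = np.array([0.3, 0.25, 0.15])           # true elasticities, summing to 0.7
log_rev = 1.0 + w @ beta + rng.normal(scale=0.1, size=n)

# log-log revenue regression; H = sum of input-price coefficients
X = np.column_stack([np.ones(n), w])
coef, *_ = np.linalg.lstsq(X, log_rev, rcond=None)
H = coef[1:].sum()                           # Panzar-Rosse H-statistic
print(round(H, 2))
```

Values of H near 1 indicate perfect competition, H ≤ 0 monopoly, and intermediate values monopolistic competition, which is how the 0.7 estimate above is read.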

A Criterion for the Selection of Principal Components in the Robust Principal Component Regression (로버스트주성분회귀에서 최적의 주성분선정을 위한 기준)

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.6
    • /
    • pp.761-770
    • /
    • 2011
  • Robust principal components regression is suggested to deal with both the multicollinearity and outlier problem. A main aspect of the robust principal components regression is the selection of an optimal set of principal components. Instead of the eigenvalue of the sample covariance matrix, a selection criterion is developed based on the condition index of the minimum volume ellipsoid estimator which is highly robust against leverage points. In addition, the least trimmed squares estimation is employed to cope with regression outliers. Monte Carlo simulation results indicate that the proposed criterion is superior to existing ones.
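Least trimmed squares, mentioned above for regression outliers, can be approximated with concentration steps: fit, keep the h observations with the smallest squared residuals, refit, and iterate. A minimal sketch on simulated data with planted outliers (starting from the OLS fit; this is a generic LTS approximation, not the paper's MVE-based selection criterion):

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 100, 75                       # h = coverage for least trimmed squares
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.2, size=n)
y[:10] += 15                         # plant vertical outliers

beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from ordinary least squares
for _ in range(20):                  # concentration (C-) steps
    keep = np.argsort((y - X @ beta) ** 2)[:h]
    beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
print(beta.round(2))
```

Each C-step can only decrease the trimmed sum of squares, so the fit quickly settles on the outlier-free half of the data.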

Order-Restricted Inference with Linear Rank Statistics in Microarray Data

  • Kang, Moon-Su
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.1
    • /
    • pp.137-143
    • /
    • 2011
  • The classification of subjects with unknown distributions in small samples often involves order-restricted constraints in multivariate parameter setups. Such constraints make optimal inference based on the conventional likelihood ratio infeasible. Fortunately, Roy (1953) introduced the union-intersection principle (UIP), which provides an alternative avenue. Multivariate linear rank statistics combined with this principle yield a suitably robust testing procedure. Furthermore, a conditionally distribution-free test based upon exact permutation theory is used to generate p-values, even in small samples. Applications of this method are illustrated with a real microarray data example (Lobenhofer et al., 2002).
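For small samples, an exact permutation p-value for a linear rank statistic can be obtained by enumerating all assignments of the pooled ranks to the groups. A minimal two-sample sketch using the Wilcoxon rank-sum statistic (illustrative data, assuming no ties; the paper's multivariate setting is analogous but higher-dimensional):

```python
import numpy as np
from itertools import combinations

def exact_rank_sum_pvalue(x, y):
    """Exact two-sided permutation p-value for the Wilcoxon rank-sum statistic."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1     # ranks 1..m (assumes no ties)
    n, m = len(x), len(pooled)
    observed = ranks[:n].sum()
    # rank sum of the first group under every possible assignment
    sums = np.array([ranks[list(idx)].sum()
                     for idx in combinations(range(m), n)])
    mu = sums.mean()                           # exact mean rank sum, n(m+1)/2
    return np.mean(np.abs(sums - mu) >= abs(observed - mu))

x = np.array([12.1, 14.3, 13.5, 15.0])
y = np.array([9.2, 10.1, 8.7, 9.9, 10.4])
p = exact_rank_sum_pvalue(x, y)
print(p)                                       # 2/126, about 0.0159
```

Because every permutation is enumerated, the p-value is exact and conditionally distribution-free, which is the point of the approach for small microarray samples.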

A rolling analysis on the prediction of value at risk with multivariate GARCH and copula

  • Bai, Yang;Dang, Yibo;Park, Cheolwoo;Lee, Taewook
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.6
    • /
    • pp.605-618
    • /
    • 2018
  • Risk management has been a crucial part of the daily operations of the financial industry over the past two decades. Value at Risk (VaR), introduced by JP Morgan in 1995, is the most popular and simplest quantitative measure of risk. VaR has been widely applied to risk evaluation across all types of financial activities, including portfolio management and asset allocation. This paper uses implementations of multivariate GARCH models and copula methods to illustrate the performance of a one-day-ahead VaR prediction modeling process for high-dimensional portfolios. Many factors, such as the interaction among the included assets, enter the modeling process. Additionally, empirical data analyses and backtesting results are demonstrated through a rolling analysis, which helps capture the instability of parameter estimates. We find that our way of modeling is relatively robust and flexible.
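The rolling backtest logic can be sketched with a much simpler model than the paper's GARCH-copula setup: re-estimate a one-day-ahead VaR on each 250-day window, then count violations. A historical-simulation sketch on simulated heavy-tailed returns (window length, sample size, and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
T, window, alpha = 1500, 250, 0.05
# simulated heavy-tailed daily returns of a two-asset, equally weighted portfolio
rets = rng.standard_t(df=5, size=(T, 2)) * 0.01
port = rets.mean(axis=1)

violations = 0
for t in range(window, T):
    var_t = -np.quantile(port[t - window:t], alpha)   # one-day-ahead 95% VaR
    if port[t] < -var_t:                              # loss exceeded the VaR
        violations += 1
rate = violations / (T - window)
print(round(rate, 3))                                 # should be near alpha
```

A well-calibrated model produces a violation rate close to alpha; backtests such as Kupiec's test formalize this comparison, and the rolling window exposes how unstable the estimates are over time.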

An Optimization Procedure for Robust Design (로버스트 설계에 대한 최적화 방안)

  • Kwon, Yong-Man;Hong, Yeon-Woong
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 1998.11a
    • /
    • pp.556-567
    • /
    • 1998
  • Robust design in industry is an approach to reducing the performance variation of quality characteristic values in products and processes. In Taguchi parameter design, Taguchi used the signal-to-noise ratio (SN) to reach an appropriate set of operating conditions where variability around the target is low. Taguchi dealt with constraints on both the mean and the variability of a characteristic (the dual response problem) by combining information on both into an SN ratio. Many statisticians criticize the Taguchi techniques of analysis, particularly those based on the SN ratio. In this paper we propose a substantially simpler optimization procedure for robust design that solves dual response problems without resorting to the SN ratio. Two examples illustrate this procedure in the two different experimental design approaches (product array, combined array).

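One simple dual-response procedure of the kind described is to minimize the fitted variance subject to a mean-on-target constraint, with no SN ratio involved. A grid-search sketch with hypothetical fitted response-surface models (the models, target, and search region are assumptions for illustration, not the paper's examples):

```python
import numpy as np

# hypothetical fitted response-surface models (assumed for illustration):
def mean_hat(x1, x2): return 80 + 4 * x1 + 8 * x2 - 3 * x1 ** 2 - 2 * x2 ** 2
def var_hat(x1, x2):  return 4 + 2 * x1 ** 2 + x2 ** 2 + x1 * x2

target = 85.0
grid = np.linspace(-1, 1, 201)
best = None
for x1 in grid:
    for x2 in grid:
        if abs(mean_hat(x1, x2) - target) <= 0.05:   # mean-on-target constraint
            v = var_hat(x1, x2)
            if best is None or v < best[0]:
                best = (v, x1, x2)                   # smallest variance so far
print(best)
```

Separating the two responses this way keeps the trade-off explicit, which is exactly what collapsing mean and variance into a single SN ratio obscures.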

Simultaneous Optimization for Robust Design using Distance and Desirability Function

  • Kwon, Yong-Man
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.3
    • /
    • pp.685-696
    • /
    • 2001
  • Robust design is an approach to reducing the performance variation of response values in products and processes. In Taguchi parameter design, the product-array approach using orthogonal arrays is mainly used. However, it often requires an excessive number of experiments. An alternative, called the combined-array approach, was suggested by Welch et al. (1990) and studied by others. In these studies, only a single response variable was considered. We propose how to simultaneously optimize multiple responses when there are correlations among the responses and when the combined-array approach is used to assign control and noise factors. An example illustrates the difference between Taguchi's product-array approach and the combined-array approach.

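Simultaneous optimization via desirability functions maps each fitted response onto a [0, 1] desirability scale and maximizes the geometric mean of the scores. A one-factor sketch with hypothetical fitted models (the models and the desirability bounds are illustrative assumptions, not the paper's example):

```python
import numpy as np

def desirability_target(y, low, target, high):
    """Derringer-Suich target-is-best desirability with linear sides."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

# hypothetical fitted models for two responses (illustration only)
def y1(x): return 70 + 10 * x - 4 * x ** 2
def y2(x): return 50 - 6 * x + 3 * x ** 2

grid = np.linspace(-1, 1, 401)
scores = [(np.sqrt(desirability_target(y1(x), 60, 76, 80)
                   * desirability_target(y2(x), 45, 50, 60)), x) for x in grid]
D_best, x_best = max(scores)                 # geometric mean of desirabilities
print(round(D_best, 3), round(x_best, 3))
```

The geometric mean forces a compromise: an operating condition that zeroes out any single response's desirability scores zero overall, no matter how good the other responses are.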

On the Efficiency of Outlier Cleaners in Spatial Data Analysis (공간통계분석에서 이상점 수정방법의 효율성비교)

  • 이진희;신기일
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.2
    • /
    • pp.327-336
    • /
    • 2004
  • Many researchers have used the robust variogram to reduce the effect of outliers in spatial data analysis. Recently, it has been shown that estimating the variogram after replacing outliers is more efficient. In this paper, we suggest a new data cleaner for geostatistical data analysis and compare the efficiency of outlier cleaners.
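A simple cleaner of the kind compared here replaces a value by the median of its spatial neighbours whenever it deviates from them by more than a robust z-score cutoff, before the variogram is estimated. A one-dimensional sketch on simulated data (the window width, cutoff, and data are illustrative assumptions, not the paper's proposal):

```python
import numpy as np

rng = np.random.default_rng(5)
# a random walk plus noise stands in for a spatially correlated transect
z = np.cumsum(rng.normal(size=100)) + rng.normal(scale=0.3, size=100)
z[40] += 20                                    # plant one gross outlier

cleaned = z.copy()
for i in range(len(z)):
    lo, hi = max(0, i - 3), min(len(z), i + 4)
    window = np.delete(z[lo:hi], i - lo)       # neighbours, excluding z[i] itself
    med = np.median(window)
    mad = np.median(np.abs(window - med))
    if abs(z[i] - med) > 4 * 1.4826 * mad:     # robust z-score cutoff
        cleaned[i] = med                       # replace before variogram estimation
print(abs(cleaned[40] - z[40]) > 10)
```

Replacing (rather than deleting) the outlier keeps the sampling configuration intact, which is why variogram estimation on cleaned data can be more efficient than using a robust variogram directly.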

Identifying Multiple Leverage Points and Outliers in Multivariate Linear Models

  • Yoo, Jong-Young
    • Communications for Statistical Applications and Methods
    • /
    • v.7 no.3
    • /
    • pp.667-676
    • /
    • 2000
  • This paper focuses on the problem of detecting multiple leverage points and outliers in multivariate linear models. It is well known that the identification of these points is affected by masking and swamping effects. To identify them, Rousseeuw (1985) used robust estimators based on the MVE (minimum volume ellipsoid), which has a breakdown point of approximately 50%. Rousseeuw and van Zomeren (1990) then suggested the robust distance based on the MVE; however, its computation is extremely difficult when the number of observations n is large. In this study, we propose a new algorithm that reduces the computational difficulty of the MVE. The proposed method is powerful in identifying multiple leverage points and outliers and also effective in reducing the computational burden of the MVE.

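The resampling idea behind the MVE can be sketched as: draw many (p+1)-point subsets, inflate each subset's covariance ellipsoid until it covers h points, keep the minimum-volume one, and flag points whose robust distance exceeds a chi-squared cutoff. A minimal version (the subset count, cutoff, and data are illustrative assumptions; this is the basic resampling scheme, not the paper's faster algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 2
X = rng.normal(size=(n, p))
X[:6] += 8                                     # plant a cluster of outliers
h = (n + p + 1) // 2                           # coverage: about half the data

best_vol, best = np.inf, None
for _ in range(500):                           # random (p+1)-subset resampling
    idx = rng.choice(n, p + 1, replace=False)
    mu = X[idx].mean(axis=0)
    S = np.cov(X[idx].T)
    if np.linalg.det(S) <= 1e-12:              # skip degenerate subsets
        continue
    d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
    scale = np.sort(d2)[h - 1]                 # inflate ellipsoid to cover h points
    vol = np.sqrt(np.linalg.det(S) * scale ** p)
    if vol < best_vol:
        best_vol, best = vol, (mu, S * scale)

mu, S = best
rd = np.sqrt(np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu))
flag = rd > np.sqrt(9.21)                      # approx. sqrt(chi-square_{2,0.99})
print(flag[:6])
```

Because the center and scatter come from the tight half of the data, the planted cluster cannot mask itself, which is the failure mode of classical Mahalanobis distances.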

Finding Cost-Effective Mixtures Robust to Noise Variables in Mixture-Process Experiments

  • Lim, Yong B.
    • Communications for Statistical Applications and Methods
    • /
    • v.21 no.2
    • /
    • pp.161-168
    • /
    • 2014
  • In mixture experiments with process variables, we consider the case in which some of the process variables are either uncontrollable or hard to control; these are called noise variables. Given such mixture experimental data with process variables, we first study how to search for candidate models. Good candidate models are screened by the sequential variable selection method and by checking the residual plots for the validity of the model assumptions. Two methods, one using the numerical optimization method proposed by Derringer and Suich (1980) and the other minimizing the weighted expected loss, are proposed to find a cost-effective robust optimal condition in which the mean as well as the variance of the response for each candidate model is well behaved under the cost restriction on the mixture. The proposed methods are illustrated with the well-known fish patties texture example described by Cornell (2002).
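The weighted-expected-loss criterion mentioned above can be sketched as minimizing a weighted sum of squared bias and predicted variance over the feasible region. A one-variable grid sketch with hypothetical fitted mean and variance models (the models, target, and weights are assumptions for illustration, not the fish patties example):

```python
import numpy as np

# hypothetical fitted mean/variance models on a blend fraction x (illustration)
def mean_hat(x): return 70 + 12 * x - 5 * x ** 2
def var_hat(x):  return 2 + 6 * (x - 0.3) ** 2

target, w_bias, w_var = 75.0, 1.0, 0.5         # weights on bias^2 and variance
grid = np.linspace(0, 1, 501)
loss = w_bias * (mean_hat(grid) - target) ** 2 + w_var * var_hat(grid)
x_opt = grid[np.argmin(loss)]                  # minimizer of weighted expected loss
print(round(x_opt, 3))
```

Adjusting the weights shifts the compromise between hitting the target mean and keeping the predicted variance low; a cost restriction can be imposed by simply masking infeasible grid points before taking the argmin.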

Large Robust Designs for Generalized Linear Model

  • Kim, Young-Il;Kahng, Myung-Wook
    • Journal of the Korean Data and Information Science Society
    • /
    • v.10 no.2
    • /
    • pp.289-298
    • /
    • 1999
  • We consider a minimax approach to making a design robust to the many types of uncertainty arising in reality when dealing with non-normal linear models. We try to build a design that protects against the worst case, i.e., that improves the "efficiency" of the worst situation that can happen. In this paper, we deal especially with the generalized linear model. The generalized linear model is a universal approach, extending the normal linear regression model to cover other distributions. Therefore, the optimal design for the generalized linear model has properties very similar to those for the normal linear model, apart from some special characteristics. Uncertainties regarding the unknown parameters, the link function, and the model structure are discussed. We show that the suggested approach is highly efficient and useful in practice. A computer algorithm is also discussed, and a conclusion follows.

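A maximin design of the kind discussed protects the worst case by maximizing the minimum value of a design criterion over a set of plausible parameter scenarios (optimal designs for GLMs depend on the unknown parameters). A tiny sketch for a two-parameter logistic model over all two-point designs on a grid (the grid, scenarios, and D-criterion are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np
from itertools import combinations

def info_matrix(design, beta):
    """Fisher information of a two-parameter logistic model at the design points."""
    M = np.zeros((2, 2))
    for x in design:
        prob = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
        f = np.array([1.0, x])
        M += prob * (1 - prob) * np.outer(f, f)    # binomial GLM weight
    return M

grid = np.linspace(-3, 3, 13)
scenarios = [(0.0, 1.0), (0.5, 1.0), (0.0, 2.0)]   # plausible parameter values
# maximize the worst-case D-criterion over all two-point designs on the grid
best = max((min(np.linalg.det(info_matrix(d, b)) for b in scenarios), d)
           for d in combinations(grid, 2))
print(best)
```

Replacing the plain determinant with a per-scenario D-efficiency (relative to each scenario's locally optimal design) gives the standardized maximin criterion often used in practice.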