• Title/Abstract/Keyword: Robust Statistics

Search results: 397 items (processing time 0.022 seconds)

Broker-Dealer Competition in the Korean Financial Securities Markets

  • Gwon, Jae-Hyun
    • 산경연구논집
    • /
    • Vol. 9 No. 4
    • /
    • pp.19-26
    • /
    • 2018
  • Purpose - This study measures how competitive securities broker-dealers are in the Korean financial markets. It aims to test whether the markets have been perfectly competitive or monopolistic since the global financial crisis of 2008. Research design, data, and methodology - We apply the method developed by Panzar and Rosse (1987), the H-statistic, which offers an index of competitiveness as well as statistical tests. The dataset is retrieved mainly from the quarterly statements of financial services companies in the Financial Statistics Information System of the Financial Supervisory Service. General information on officers and employees is used in addition to the balance sheets and income statements of securities companies. Results - The H-statistic for 2009-2015 is about 0.7, a robust estimate regardless of model specification (full trans-log, partial trans-log, and Cobb-Douglas regression equations). The H-statistic for each year, computed in a similar way, varies between 0.3 and 0.9. Conclusions - For the period since the global financial crisis, the H-statistic indicates that the securities broker-dealer markets in Korea are neither perfectly competitive nor monopolistic; rather, the evidence points to monopolistic competition. The trend in the annual H-statistics leads to the same conclusion, although the annual results are not as stable as the overall H-statistic implies.
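As a rough illustration of the Panzar-Rosse approach summarized above (not the authors' code or data), the sketch below computes an H-statistic as the sum of the input-price elasticities from a Cobb-Douglas revenue regression; the choice of input prices and the synthetic data are assumptions made for the example.

```python
# Minimal sketch of a Panzar-Rosse H-statistic (Cobb-Douglas form); illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical (log) input prices: funding, labor, physical capital.
log_w = rng.normal(size=(n, 3))
# Hypothetical log revenue generated with elasticities 0.3, 0.25, 0.15 (so H = 0.7).
beta_true = np.array([0.3, 0.25, 0.15])
log_rev = 1.0 + log_w @ beta_true + rng.normal(scale=0.1, size=n)

# Cobb-Douglas regression: log revenue on log input prices (plus intercept).
X = np.column_stack([np.ones(n), log_w])
beta_hat, *_ = np.linalg.lstsq(X, log_rev, rcond=None)

# H-statistic = sum of the input-price elasticities.
H = beta_hat[1:].sum()
print(f"H-statistic: {H:.3f}")  # H = 1 perfect competition, H <= 0 monopoly, 0 < H < 1 monopolistic competition
```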

로버스트주성분회귀에서 최적의 주성분선정을 위한 기준 (A Criterion for the Selection of Principal Components in the Robust Principal Component Regression)

  • 김부용
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 18 No. 6
    • /
    • pp.761-770
    • /
    • 2011
  • When highly correlated explanatory variables are included in a regression model, multicollinearity arises; at the same time, when the data contain regression outliers, statistical inference based on the least squares estimator becomes seriously flawed. These phenomena are common in data mining. This paper proposes robust principal component regression as a way to address both problems simultaneously. In particular, a new criterion for selecting the optimal principal components is developed: it is based on the MVE estimator instead of the sample covariance of the explanatory variables, and the selection is based on the magnitude of the condition index rather than the eigenvalues. In addition, LTS estimation, which is robust to regression outliers, is adopted for estimation in the principal component model. A simulation study confirms that the proposed criterion resolves the problems caused by multicollinearity and outliers better than existing criteria.
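A minimal sketch of the robust principal component regression idea described above, with two loudly flagged substitutions: sklearn's MCD estimator stands in for the MVE estimator, and a Huber M-estimator stands in for LTS. The condition-index cutoff and the data are hypothetical.

```python
# Rough sketch of robust principal component regression; illustrative only.
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(1)
n = 100
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(4)])  # collinear predictors
y = X @ np.array([1.0, 0.5, -0.5, 0.2]) + rng.normal(scale=0.5, size=n)
y[:5] += 10.0                                      # a few regression outliers

# Robust covariance of the predictors (MCD as a proxy for MVE).
mcd = MinCovDet(random_state=0).fit(X)
eigval, eigvec = np.linalg.eigh(mcd.covariance_)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]     # sort in descending order

# Condition indices: sqrt(largest eigenvalue / each eigenvalue).
cond_index = np.sqrt(eigval[0] / eigval)
keep = cond_index < 30.0                           # hypothetical cutoff
scores = (X - mcd.location_) @ eigvec[:, keep]     # robust principal component scores

# Robust fit on the retained components (Huber stands in for LTS here).
fit = HuberRegressor().fit(scores, y)
print("components kept:", keep.sum(), "coefficients:", np.round(fit.coef_, 3))
```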

Order-Restricted Inference with Linear Rank Statistics in Microarray Data

  • Kang, Moon-Su
    • 응용통계연구
    • /
    • Vol. 24 No. 1
    • /
    • pp.137-143
    • /
    • 2011
  • The classification of subjects with an unknown distribution in small samples often involves order-restricted constraints in multivariate parameter setups. Such problems make conventional likelihood-ratio-based inference infeasible as an optimal procedure. Fortunately, Roy (1953) introduced the union-intersection principle (UIP), which provides an alternative avenue. Multivariate linear rank statistics combined with this principle yield an appropriately robust testing procedure. Furthermore, a conditionally distribution-free test based on exact permutation theory is used to generate p-values, even in small samples. The method is illustrated with a real microarray data example (Lobenhofer et al., 2002).
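The toy sketch below illustrates the general recipe of combining per-variable linear rank statistics through a union-intersection (max-statistic) rule and calibrating the result by permutation. It is a generic illustration on synthetic data, not the paper's exact procedure.

```python
# Permutation test on a union-intersection (max) of Wilcoxon rank-sum statistics.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n1, n2, p = 6, 6, 20                       # small-sample, multivariate setup
X = rng.normal(size=(n1 + n2, p))
X[:n1, :3] += 1.5                          # shift a few variables in group 1
labels = np.array([1] * n1 + [0] * n2)

def max_rank_stat(data, grp):
    ranks = np.apply_along_axis(rankdata, 0, data)   # column-wise ranks
    stat = ranks[grp == 1].sum(axis=0)               # Wilcoxon rank-sum per variable
    expected = grp.sum() * (len(grp) + 1) / 2.0      # null mean of the rank sum
    return np.abs(stat - expected).max()             # union-intersection: max over variables

obs = max_rank_stat(X, labels)
B = 2000
perm = np.array([max_rank_stat(X, rng.permutation(labels)) for _ in range(B)])
p_value = (1 + (perm >= obs).sum()) / (B + 1)        # conditionally distribution-free p-value
print(f"permutation p-value: {p_value:.4f}")
```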

A rolling analysis on the prediction of value at risk with multivariate GARCH and copula

  • Bai, Yang;Dang, Yibo;Park, Cheolwoo;Lee, Taewook
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 25 No. 6
    • /
    • pp.605-618
    • /
    • 2018
  • Risk management has been a crucial part of the daily operations of the financial industry over the past two decades. Value at Risk (VaR), a quantitative measure introduced by JP Morgan in 1995, is the most popular and simplest quantitative measure of risk. VaR has been widely applied to risk evaluation across all types of financial activities, including portfolio management and asset allocation. This paper uses implementations of multivariate GARCH models and copula methods to illustrate the performance of a one-day-ahead VaR prediction modeling process for high-dimensional portfolios. Many factors, such as the interaction among the included assets, are considered in the modeling process. Additionally, empirical data analyses and backtesting results are presented through a rolling analysis, which helps capture the instability of parameter estimates. We find that our modeling approach is relatively robust and flexible.
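The sketch below shows the shape of a rolling one-day-ahead VaR backtest. An EWMA (RiskMetrics-style) covariance with a Gaussian quantile stands in for the multivariate GARCH and copula machinery of the paper, and the returns, portfolio weights, and window length are assumptions.

```python
# Schematic rolling one-day-ahead 99% VaR forecast and violation count; illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
T, d = 1000, 5
returns = rng.multivariate_normal(np.zeros(d), 0.0001 * (np.eye(d) + 0.3), size=T)
weights = np.full(d, 1.0 / d)              # equally weighted portfolio
alpha, lam, window = 0.01, 0.94, 250       # tail level, EWMA decay, rolling window

exceed, n_forecasts = 0, 0
for t in range(window, T):
    hist = returns[t - window:t]
    cov = np.cov(hist, rowvar=False)       # initialize with the window's sample covariance
    for r in hist:                         # EWMA update over the window
        cov = lam * cov + (1 - lam) * np.outer(r, r)
    port_sd = np.sqrt(weights @ cov @ weights)
    var_99 = norm.ppf(alpha) * port_sd     # one-day-ahead 99% VaR (a negative return)
    actual = weights @ returns[t]
    exceed += actual < var_99              # count VaR violations for backtesting
    n_forecasts += 1

print(f"violation rate: {exceed / n_forecasts:.3%} (nominal {alpha:.0%})")
```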

로버스트 설계에 대한 최적화 방안 (A Optimization Procedure for Robust Design)

  • 권용만;홍연웅
    • 한국품질경영학회:학술대회논문집
    • /
    • 한국품질경영학회 1998 The 12th Asia Quality Management Symposium: Total Quality Management for Restoring Competitiveness
    • /
    • pp.556-567
    • /
    • 1998
  • Robust design in industry is an approach to reducing performance variation of quality characteristic values in products and processes. Taguchi used the signal-to-noise ratio (SN) to find the operating conditions where variability around the target is low in Taguchi parameter design. Taguchi dealt with constraints on both the mean and the variability of a characteristic (the dual response problem) by combining information on mean and variability into an SN ratio. Many statisticians criticize the Taguchi techniques of analysis, particularly those based on the SN ratio. In this paper we propose a substantially simpler optimization procedure for robust design that solves the dual response problem without resorting to the SN ratio. Two examples illustrate this procedure in two different experimental design approaches (product array and combined array).
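As a generic illustration of attacking the dual response problem directly rather than through an SN ratio, the sketch below minimizes a fitted standard-deviation surface subject to the fitted mean being on target. The response surfaces and target are hypothetical; this is not the authors' specific procedure.

```python
# Dual-response optimization sketch: minimize variability with the mean constrained to target.
import numpy as np
from scipy.optimize import minimize

def mean_hat(x):   # hypothetical fitted mean response surface
    return 80 + 4 * x[0] + 8 * x[1] - 2 * x[0] ** 2 - 3 * x[1] ** 2 + 2 * x[0] * x[1]

def sd_hat(x):     # hypothetical fitted standard-deviation surface
    return 3 + 0.5 * x[0] ** 2 + 0.8 * x[1] ** 2 - 0.3 * x[0] * x[1]

target = 85.0
cons = [{"type": "eq", "fun": lambda x: mean_hat(x) - target}]   # keep the mean on target
res = minimize(sd_hat, x0=np.array([0.0, 1.0]),                  # feasible start: mean_hat = 85
               bounds=[(-1, 1), (-1, 1)], constraints=cons, method="SLSQP")
print("settings:", np.round(res.x, 3),
      "| predicted sd:", round(sd_hat(res.x), 3),
      "| predicted mean:", round(mean_hat(res.x), 2))
```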


Simultaneous Optimization for Robust Design using Distance and Desirability Function

  • Kwon, Yong-Man
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 8 No. 3
    • /
    • pp.685-696
    • /
    • 2001
  • Robust design is an approach to reducing performance variation of response values in products and processes. In Taguchi parameter design, the product-array approach using orthogonal arrays is mainly used; however, it often requires an excessive number of experiments. An alternative, called the combined-array approach, was suggested by Welch et al. (1990) and studied by others. In those studies, only a single response variable was considered. We propose how to simultaneously optimize multiple responses when there are correlations among the responses and the combined-array approach is used to assign control and noise factors. An example illustrates the difference between Taguchi's product-array approach and the combined-array approach.
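A minimal Derringer-Suich-style sketch follows: individual desirabilities for each response are combined through a geometric mean and maximized over the control-factor region. The fitted surfaces, specification limits, and targets are all hypothetical, not the paper's example.

```python
# Simultaneous optimization of two responses via desirability functions; illustrative only.
import numpy as np
from scipy.optimize import minimize

def y1_hat(x):  # hypothetical fitted response 1 (target-is-best)
    return 500 + 30 * x[0] - 20 * x[1] - 10 * x[0] * x[1]

def y2_hat(x):  # hypothetical fitted response 2 (smaller-the-better)
    return 60 - 5 * x[0] + 8 * x[1] + 3 * x[0] ** 2

def d_target(y, low, tgt, high):             # two-sided desirability, 0 outside [low, high]
    if y < low or y > high:
        return 0.0
    return (y - low) / (tgt - low) if y <= tgt else (high - y) / (high - tgt)

def d_smaller(y, tgt, high):                 # one-sided desirability (smaller-the-better)
    return float(np.clip((high - y) / (high - tgt), 0.0, 1.0))

def neg_overall(x):
    d1 = d_target(y1_hat(x), 480, 500, 520)
    d2 = d_smaller(y2_hat(x), 50, 70)
    return -np.sqrt(max(d1 * d2, 1e-12))     # geometric mean, negated for minimization

res = minimize(neg_overall, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)], method="Powell")
print("settings:", np.round(res.x, 3), "overall desirability:", round(-res.fun, 3))
```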


공간통계분석에서 이상점 수정방법의 효율성비교 (On the Efficiency of Outlier Cleaners in Spatial Data Analysis)

  • 이진희;신기일
    • 응용통계연구
    • /
    • Vol. 17 No. 2
    • /
    • pp.327-336
    • /
    • 2004
  • In the analysis of spatial data containing outliers, the robust variogram is used to reduce the influence of the outliers. It has recently been shown that better results can be obtained by first cleaning the outliers and then estimating the variogram. In this paper, we compare the efficiency of the outlier cleaner proposed by Mugglestone et al. (2000) with that of a new outlier cleaner proposed here, for the analysis of spatial data with outliers.
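To make the "clean first, then estimate the variogram" idea concrete, here is a toy neighborhood-median cleaner on gridded data. It is a generic cleaner for illustration only, not the Mugglestone et al. (2000) method nor the new cleaner proposed in the paper.

```python
# Toy spatial outlier cleaner: replace values that deviate strongly from their local median.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(4)
field = rng.normal(size=(30, 30))
field[5, 5] += 8.0                                # plant a few gross outliers
field[20, 12] -= 7.0

local_med = median_filter(field, size=3)          # 3x3 neighborhood median
resid = field - local_med
mad = np.median(np.abs(resid - np.median(resid)))
robust_sd = 1.4826 * mad                          # MAD-based robust scale
mask = np.abs(resid) > 3.0 * robust_sd            # flag points far from their neighbors
cleaned = np.where(mask, local_med, field)        # replace flagged values with the local median

print("flagged cells:", int(mask.sum()))
# The empirical variogram would then be estimated from `cleaned` rather than `field`.
```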

Identifying Multiple Leverage Points and Outliers in Multivariate Linear Models

  • Yoo, Jong-Young
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 7 No. 3
    • /
    • pp.667-676
    • /
    • 2000
  • This paper focuses on the problem of detecting multiple leverage points and outliers in multivariate linear models. It is well known that the identification of these points is affected by masking and swamping effects. To identify them, Rousseeuw (1985) used robust MVE (minimum volume ellipsoid) estimators, which have a breakdown point of approximately 50%. Rousseeuw and van Zomeren (1990) suggested a robust distance based on the MVE; however, its computation is extremely difficult when the number of observations n is large. In this study, we propose a new algorithm that reduces the computational difficulty of the MVE. The proposed method is powerful in identifying multiple leverage points and outliers and is also effective in reducing the computational burden of the MVE.
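The sketch below illustrates the robust-distance diagnostic this abstract builds on: robust Mahalanobis distances in predictor space flag leverage points, and robust standardized residuals flag regression outliers. sklearn's FAST-MCD is used as a computationally convenient stand-in for the MVE, a Huber fit stands in for a high-breakdown regression, and the cutoffs and data are assumptions.

```python
# Robust distances for leverage points plus robust residuals for outliers; illustrative only.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(5)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=n)
X[:3] += 6.0                         # bad leverage points (X shifted after y was generated)
y[3:6] += 8.0                        # vertical outliers

rd = np.sqrt(MinCovDet(random_state=0).fit(X).mahalanobis(X))   # robust distances
leverage = rd > np.sqrt(chi2.ppf(0.975, df=p))                  # chi-square leverage cutoff

fit = HuberRegressor().fit(X, y)                                 # robust regression fit
resid = y - fit.predict(X)
scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))     # MAD scale of residuals
outlier = np.abs(resid / scale) > 2.5

print("leverage points:", np.where(leverage)[0])
print("regression outliers:", np.where(outlier)[0])
```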


Finding Cost-Effective Mixtures Robust to Noise Variables in Mixture-Process Experiments

  • Lim, Yong B.
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 21 No. 2
    • /
    • pp.161-168
    • /
    • 2014
  • In mixture experiments with process variables, we consider the case where some of the process variables are either uncontrollable or hard to control; these are called noise variables. Given such mixture experimental data with process variables, we first study how to search for candidate models. Good candidate models are screened by sequential variable selection and by checking residual plots for the validity of the model assumptions. Two methods, one using the numerical optimization approach proposed by Derringer and Suich (1980) and the other minimizing a weighted expected loss, are proposed to find a cost-effective robust optimal condition under which both the mean and the variance of the response are well-behaved for each candidate model, subject to the cost restriction on the mixture. The proposed methods are illustrated with the well-known fish patties texture example described by Cornell (2002).
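As a schematic of the weighted expected-loss route mentioned above (one of the two methods), the sketch below averages a squared-error loss over Monte Carlo draws of the noise variable, adds a cost penalty, and minimizes over the mixture simplex. The fitted model, costs, target, and weights are hypothetical, not Cornell's fish-patty example.

```python
# Cost-penalized weighted expected loss minimized over a three-component mixture; illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
z_draws = rng.normal(size=500)                     # Monte Carlo draws of the noise variable

def y_hat(x, z):                                   # hypothetical fitted mixture-process model
    x1, x2, x3 = x
    return 3.0 * x1 + 2.2 * x2 + 1.5 * x3 + 1.2 * x1 * x2 + (0.8 * x1 - 0.4 * x3) * z

cost = np.array([5.0, 3.0, 1.0])                   # unit cost of each mixture component
target, w_loss, w_cost = 2.6, 1.0, 0.05            # weights of the loss and cost criteria

def objective(x):
    y = y_hat(x, z_draws)
    expected_loss = np.mean((y - target) ** 2)     # E_z[(y - target)^2] = bias^2 + variance
    return w_loss * expected_loss + w_cost * cost @ x

cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]      # mixture components sum to one
res = minimize(objective, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
print("mixture:", np.round(res.x, 3), "objective:", round(res.fun, 4))
```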

Large Robust Designs for Generalized Linear Model

  • Kim, Young-Il;Kahng, Myung-Wook
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 10 No. 2
    • /
    • pp.289-298
    • /
    • 1999
  • We consider a minimax approach to making a design robust to the many types of uncertainty that arise in practice when dealing with non-normal linear models. We try to build a design that protects against the worst case, i.e., one that improves the "efficiency" of the worst situation that can occur. In this paper we deal in particular with the generalized linear model. It is well known that the generalized linear model is a universal approach, an extension of the normal linear regression model that covers other distributions. Accordingly, the optimal design for the generalized linear model has properties very similar to those for the normal linear model, apart from some special characteristics. Uncertainties regarding the unknown parameters, the link function, and the model structure are discussed. We show that the suggested approach is highly efficient and useful in practice. A computer algorithm is also discussed, and a conclusion follows.
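A toy version of the minimax idea for a generalized linear model is sketched below: among symmetric two-point designs for a simple logistic model, pick the one that maximizes the worst-case D-criterion over a grid of plausible slopes. The uncertainty set, candidate designs, and model form are assumptions for illustration, not the paper's algorithm.

```python
# Minimax two-point design for a logistic model eta = b0 + b1 * x; illustrative only.
import numpy as np

def info_matrix(c, b0, b1):
    pts = np.array([-c, c])                        # symmetric two-point design, equal weights
    eta = b0 + b1 * pts
    p = 1.0 / (1.0 + np.exp(-eta))
    w = p * (1.0 - p)                              # GLM weights for the logistic link
    F = np.column_stack([np.ones(2), pts])         # regressors (1, x) at the design points
    return (F * (0.5 * w)[:, None]).T @ F          # Fisher information of the design

slopes = np.linspace(0.5, 3.0, 26)                 # uncertainty set for the slope parameter
cs = np.linspace(0.2, 4.0, 60)                     # candidate design half-widths

worst = [min(np.linalg.det(info_matrix(c, 0.0, b1)) for b1 in slopes) for c in cs]
best_c = cs[int(np.argmax(worst))]
print(f"minimax two-point design: x = +/- {best_c:.2f}, worst-case det(M) = {max(worst):.4f}")
```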
