• Title/Summary/Keyword: quantile method (분위수 방법)

Combination of Value-at-Risk Models with Support Vector Machine (서포트벡터기계를 이용한 VaR 모형의 결합)

  • Kim, Yong-Tae;Shim, Joo-Yong;Lee, Jang-Taek;Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods / v.16 no.5 / pp.791-801 / 2009
  • Value-at-Risk (VaR) has been used as an important tool to measure market risk. However, the selection of a VaR model is controversial. This paper proposes combining VaR forecasts using support vector machine quantile regression instead of selecting a single model from historical simulation and GARCH.
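
The combination idea above can be illustrated with a minimal sketch: plain linear quantile regression (statsmodels' QuantReg) stands in for the paper's support vector machine quantile regression, and the two candidate VaR series are toy constructions (a rolling historical-simulation quantile and an EWMA stand-in for GARCH), not the authors' data or code.

```python
# Minimal sketch: combining two candidate VaR forecasts by quantile regression.
# Linear quantile regression stands in for the paper's SVM quantile regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 500
returns = rng.standard_t(df=5, size=T) * 0.01              # toy daily returns

# Rolling historical-simulation VaR (5% quantile over a trailing window)
window = 100
var_hs = np.array([np.quantile(returns[max(0, t - window):t + 1], 0.05) for t in range(T)])

# EWMA (RiskMetrics-style) volatility as a stand-in for a GARCH VaR forecast
sigma2 = np.empty(T)
sigma2[0] = returns.var()
for t in range(1, T):
    sigma2[t] = 0.94 * sigma2[t - 1] + 0.06 * returns[t - 1] ** 2
var_ewma = -1.645 * np.sqrt(sigma2)

# Combine the two candidate forecasts with a quantile regression at the 5% level
X = sm.add_constant(np.column_stack([var_hs, var_ewma]))
fit = sm.QuantReg(returns, X).fit(q=0.05)
combined_var = fit.predict(X)                               # combined VaR forecast
print("combination weights:", fit.params)
print("latest combined VaR:", combined_var[-1])
```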

Threshold Modelling of Spatial Extremes - Summer Rainfall of Korea (공간 극단값의 분계점 모형 사례 연구 - 한국 여름철 강수량)

  • Hwang, Seungyong;Choi, Hyemi
    • The Korean Journal of Applied Statistics / v.27 no.4 / pp.655-665 / 2014
  • An adequate understanding of and response to natural hazards such as heat waves, heavy rainfall, and severe drought is required. We apply extreme value theory to analyze these abnormal weather phenomena. Extremes in climatic data are commonly nonstationary in space and time. In this paper, we analyze summer rainfall data in South Korea using exceedances over thresholds estimated by quantile regression with location information and time as covariates. We group the weather stations in South Korea into 5 clusters and fit extreme value models to the threshold exceedances for each cluster under the assumption of independence in space and time, adjusting the uncertainty estimates for spatial dependence as proposed in Northrop and Jonathan (2011).
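
A minimal sketch of the threshold-exceedance idea, on synthetic data: the threshold is estimated by quantile regression with a time covariate, and a GPD is fitted to the exceedances. The clustering step and the spatial-dependence uncertainty adjustment of Northrop and Jonathan (2011) are not implemented.

```python
# Minimal sketch: covariate-dependent threshold by quantile regression, then a
# GPD fit to the exceedances. Synthetic data; no spatial-dependence adjustment.
import numpy as np
import statsmodels.api as sm
from scipy.stats import genpareto

rng = np.random.default_rng(1)
n = 2000
time = np.linspace(0, 1, n)                              # hypothetical time covariate
rainfall = rng.gamma(shape=2.0, scale=10 + 5 * time)     # toy summer rainfall (mm)

# 90% threshold as a linear function of the time covariate
X = sm.add_constant(time)
threshold = sm.QuantReg(rainfall, X).fit(q=0.90).predict(X)

# Fit a GPD to the positive exceedances (location fixed at 0)
excess = rainfall - threshold
excess = excess[excess > 0]
xi, _, sigma = genpareto.fit(excess, floc=0)
print(f"shape={xi:.3f}, scale={sigma:.3f}, n_exceed={excess.size}")
```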

Estimation of Car Insurance Loss Ratio Using the Peaks over Threshold Method (POT방법론을 이용한 자동차보험 손해율 추정)

  • Kim, S.Y.;Song, J.
    • The Korean Journal of Applied Statistics / v.25 no.1 / pp.101-114 / 2012
  • In car insurance, the loss ratio is total losses paid out in claims divided by total earned premiums. To minimize losses to the insurance company, estimating extreme quantiles of the loss ratio distribution is necessary because the loss ratio carries essential profit and loss information. Like other insurance-related datasets, the loss ratio has a heavy-tailed distribution. The Peaks over Threshold (POT) method and the Hill estimator are commonly used to estimate extreme quantiles of heavy-tailed distributions. This article compares and analyzes the performance of various parameter estimation methods using a simulation and real car insurance loss ratio data. In addition, we estimate extreme quantiles using the Hill estimator. The simulation and the loss ratio data applications demonstrate that the POT method estimates quantiles more accurately than the Hill estimator in most cases. Moreover, the MLE, Zhang, and NLS-2 methods show the best performance among the GPD parameter estimation methods.
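
The two estimators compared in the entry above can be sketched on simulated Pareto data, using the standard Hill/Weissman and POT/GPD extreme-quantile formulas rather than the paper's code; the sample size, threshold choice, and tail index below are arbitrary.

```python
# Minimal sketch: POT/GPD vs Hill estimates of an extreme quantile on simulated
# heavy-tailed (Pareto) data, using standard textbook formulas.
import numpy as np
from scipy.stats import genpareto, pareto

n, p = 5000, 0.001                               # sample size and exceedance probability
x = pareto.rvs(b=2.0, size=n, random_state=2)    # simulated heavy-tailed losses
x_sorted = np.sort(x)[::-1]                      # descending order statistics
k = 200                                          # number of upper order statistics used

# Hill estimator of the tail index and the Weissman extreme-quantile estimate
hill = np.mean(np.log(x_sorted[:k])) - np.log(x_sorted[k])
q_hill = x_sorted[k] * (k / (n * p)) ** hill

# POT: fit a GPD to the exceedances over the threshold u = x_sorted[k]
u = x_sorted[k]
xi, _, sigma = genpareto.fit(x_sorted[:k] - u, floc=0)
q_pot = u + (sigma / xi) * ((n * p / k) ** (-xi) - 1)

print(f"true={pareto.ppf(1 - p, b=2.0):.2f}  Hill={q_hill:.2f}  POT={q_pot:.2f}")
```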

Estimating the CoVaR for Korean Banking Industry (한국 은행산업의 CoVaR 추정)

  • Choi, Pilsun;Min, Insik
    • KDI Journal of Economic Policy / v.32 no.3 / pp.71-99 / 2010
  • The concept of CoVaR introduced by Adrian and Brunnermeier (2009) is a useful tool to measure the risk spillover effect: it captures the risk contribution of each institution to overall systemic risk. While Adrian and Brunnermeier rely on the quantile regression method to estimate CoVaR, we propose a new estimation method using parametric distribution functions such as the bivariate normal and $S_U$-normal distributions. Based on our estimates of CoVaR for the Korean banking industry, we investigate the practical usefulness of CoVaR as a systemic risk measure and compare the estimation performance of each model. Empirical results show that each bank makes a positive contribution to systemic risk. We also find that the quantile regression and normal distribution models tend to considerably underestimate CoVaR (in absolute value) compared to the $S_U$-normal distribution model, and this underestimation becomes serious when a crisis in the financial system is assumed.
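
A minimal sketch of the two simpler CoVaR estimators mentioned above (quantile regression and bivariate normal), on simulated returns; the $S_U$-normal model proposed in the paper is not implemented, and all series and parameters are toy constructions.

```python
# Minimal sketch: CoVaR at level q under (i) quantile regression and (ii) a
# bivariate normal model, on simulated bank/system returns.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n, q, rho = 2000, 0.05, 0.6
bank = rng.normal(0, 0.02, n)                               # toy bank return
system = rho * bank + np.sqrt(1 - rho**2) * rng.normal(0, 0.02, n)

# (i) Quantile regression: system_t = a + b * bank_t at quantile q
fit = sm.QuantReg(system, sm.add_constant(bank)).fit(q=q)
var_bank = np.quantile(bank, q)                             # VaR of the bank
covar_qr = fit.params[0] + fit.params[1] * var_bank

# (ii) Bivariate normal: conditional q-quantile of system given bank = VaR_bank
mu_b, mu_s = bank.mean(), system.mean()
s_b, s_s = bank.std(), system.std()
r = np.corrcoef(bank, system)[0, 1]
cond_mean = mu_s + r * s_s / s_b * (var_bank - mu_b)
cond_sd = s_s * np.sqrt(1 - r**2)
covar_norm = cond_mean + cond_sd * norm.ppf(q)

print(f"CoVaR (quantile regression) = {covar_qr:.4f}, (bivariate normal) = {covar_norm:.4f}")
```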

Nonparametric estimation of conditional quantile with censored data (조건부 분위수의 중도절단을 고려한 비모수적 추정)

  • Kim, Eun-Young;Choi, Hyemi
    • Journal of the Korean Data and Information Science Society / v.24 no.2 / pp.211-222 / 2013
  • We consider the problem of nonparametrically estimating the conditional quantile function from censored data and propose new estimators. They are based on the local logistic regression technique of Lee et al. (2006) and the "double-kernel" technique of Yu and Jones (1998), respectively, modified for random censoring. We compare them with two existing estimators based on local linear fits using the check function approach. The comparison is done by a simulation study.
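
A minimal sketch of a kernel-weighted check-function estimator of a conditional quantile (a local-constant version, without the censoring adjustment or the local logistic and double-kernel modifications studied in the paper); the data, kernel, and bandwidth below are arbitrary.

```python
# Minimal sketch: local-constant (kernel-weighted) conditional quantile via the
# check-function criterion, without any censoring adjustment.
import numpy as np
from scipy.optimize import minimize_scalar

def check_loss(u, tau):
    """Check (pinball) function rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def local_quantile(x, y, x0, tau=0.5, h=0.3):
    """Kernel-weighted tau-quantile of y near x0 (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    obj = lambda theta: np.sum(w * check_loss(y - theta, tau))
    return minimize_scalar(obj, bounds=(y.min(), y.max()), method="bounded").x

rng = np.random.default_rng(4)
x = rng.uniform(0, 3, 500)
y = np.sin(x) + rng.normal(0, 0.3, 500)
grid = np.linspace(0.2, 2.8, 7)
est = [local_quantile(x, y, x0, tau=0.9) for x0 in grid]
print(np.round(est, 3))
```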

Cross Platform Data Analysis in Microarray Experiment (서로 다른 플랫폼의 마이크로어레이 연구 통합 분석)

  • Lee, Jangmee;Lee, Sunho
    • The Korean Journal of Applied Statistics / v.26 no.2 / pp.307-319 / 2013
  • With the rapid accumulation of microarray data, it is a significant challenge to integrate available data sets that address the same biological questions, since integration can provide more samples and better experimental results. Differences between microarray platforms can make it difficult to integrate data from several studies effectively, and there is no consensus on which method is best for producing a single, unified data set. Methods using the median rank score, quantile discretization, and standardization (which directly combine rescaled gene expression values) and meta-analysis (which combines the results of individual studies at the interpretive level) are reviewed. Real data examples downloaded from GEO are used to compare the performance of these methods and to evaluate whether the combined data set detects more reliable information than the separate data sets.
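
As a rough illustration of quantile-based rescaling, the sketch below quantile-normalizes two toy expression matrices to a common reference distribution; it is not the median rank score or quantile discretization procedure reviewed in the paper, and the matrices are simulated.

```python
# Minimal sketch: quantile normalization of gene-expression columns to a common
# reference distribution before combining two platforms. Illustrative only.
import numpy as np

def quantile_normalize(mat):
    """Force every column of `mat` (genes x samples) to share one distribution."""
    order = np.argsort(mat, axis=0)                   # sort order within each sample
    ranks = np.argsort(order, axis=0)                 # rank of each value per sample
    reference = np.sort(mat, axis=0).mean(axis=1)     # mean of sorted columns
    return reference[ranks]

rng = np.random.default_rng(5)
platform_a = rng.lognormal(5, 1, size=(1000, 6))      # toy platform-A expressions
platform_b = rng.lognormal(3, 2, size=(1000, 4))      # toy platform-B expressions
combined = quantile_normalize(np.hstack([platform_a, platform_b]))
print(combined.shape, np.allclose(np.sort(combined[:, 0]), np.sort(combined[:, -1])))
```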

A Study on the Equating of Elective Subjects in the College Entrance Examination (대학입시에서의 선택과목 등화에 대한 연구)

  • 박성현;김춘원
    • Proceedings of the Korean Society for Quality Management Conference / 1998.11a / pp.113-122 / 1998
  • Starting with the 1999 College Scholastic Ability Test (CSAT), an elective subject system and a standardized score system are newly introduced. Under the elective subject system, each examinee chooses and takes one subject in addition to the common subjects in the Mathematical Inquiry II area; under the standardized score system, the raw scores of each area are standardized to have mean 50 and standard deviation 10 in order to adjust for differences in difficulty across areas. For areas with elective subjects, there may be differences not only in difficulty but also in the general academic ability of the groups taking each elective. Therefore, when standardizing scores, differences in group ability must be taken into account as well as subject difficulty. Representative equating methods published so far are linear equating, a parametric method, and percentile equating, a nonparametric method; both have been developed under the assumption that the academic ability of each group is identical. In this paper, we compare linear equating and quantile equating methods that correct for group ability differences, as appropriate for the Korean college entrance examination setting.
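
A minimal sketch of equipercentile (quantile) equating, mapping elective-subject scores onto a reference scale through matching percentiles; the group-ability correction that the paper focuses on is not implemented, and the score distributions are simulated.

```python
# Minimal sketch: equipercentile (quantile) equating of elective-subject scores
# onto a reference subject: y = F_Y^{-1}(F_X(x)). No group-ability correction.
import numpy as np

rng = np.random.default_rng(6)
score_x = rng.normal(60, 12, 3000).clip(0, 100)   # elective subject X (toy scores)
score_y = rng.normal(55, 15, 3000).clip(0, 100)   # reference subject Y (toy scores)

probs = np.linspace(0.01, 0.99, 99)
qx = np.quantile(score_x, probs)                  # empirical quantiles of X
qy = np.quantile(score_y, probs)                  # empirical quantiles of Y

def equate(x):
    """Map an X score to the Y scale through matching percentiles."""
    p = np.interp(x, qx, probs)                   # approximate F_X(x)
    return np.interp(p, probs, qy)                # approximate F_Y^{-1}(p)

print(equate(np.array([40.0, 60.0, 80.0])))
```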

A Bayesian Extreme Value Analysis of KOSPI Data (코스피 지수 자료의 베이지안 극단값 분석)

  • Yun, Seok-Hoon
    • The Korean Journal of Applied Statistics / v.24 no.5 / pp.833-845 / 2011
  • This paper conducts a statistical analysis of extreme values for both daily log-returns and daily negative log-returns, computed from KOSPI data from January 3, 1998 to August 31, 2011. The Poisson-GPD model is used as the statistical model for extreme values, and the maximum likelihood method is applied to estimate the parameters and extreme quantiles. A Bayesian approach that assumes the usual noninformative prior distribution for the parameters is also applied to the Poisson-GPD model, with the Markov chain Monte Carlo method used to estimate the parameters and extreme quantiles. Both the maximum likelihood method and the Bayesian method lead to the same conclusion: the distribution of the log-returns has a shorter right tail than the normal distribution, whereas the distribution of the negative log-returns has a heavier right tail than the normal distribution. An advantage of the Bayesian method in extreme value analysis is that one need not worry about the classical asymptotic properties of the maximum likelihood estimators even when the regularity conditions are not satisfied, and that predictions can effectively reflect the uncertainty in both the parameters and a future observation.
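
A minimal sketch of the Bayesian ingredient described above: a random-walk Metropolis sampler for the GPD scale and shape with a flat (noninformative) prior, applied to threshold exceedances of simulated negative log-returns; the full Poisson-GPD extreme-quantile analysis of the paper is not reproduced.

```python
# Minimal sketch: random-walk Metropolis for GPD(sigma, xi) with a flat prior on
# (log sigma, xi), fitted to threshold exceedances of simulated negative log-returns.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
neg_logret = -rng.standard_t(df=4, size=3000) * 0.01   # toy negative log-returns
u = np.quantile(neg_logret, 0.95)                      # high threshold
excess = neg_logret[neg_logret > u] - u

def log_post(log_sigma, xi):
    # Flat prior on (log sigma, xi): posterior is proportional to the likelihood.
    lp = genpareto.logpdf(excess, c=xi, loc=0, scale=np.exp(log_sigma)).sum()
    return lp if np.isfinite(lp) else -np.inf

chain, cur = [], np.array([np.log(excess.mean()), 0.1])
cur_lp = log_post(*cur)
for _ in range(5000):
    prop = cur + rng.normal(0, 0.05, size=2)           # random-walk proposal
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:       # Metropolis accept/reject
        cur, cur_lp = prop, prop_lp
    chain.append(cur)

draws = np.array(chain)[2000:]                         # discard burn-in
print(f"posterior means: sigma={np.exp(draws[:, 0]).mean():.4f}, xi={draws[:, 1].mean():.3f}")
```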

Impact of Oil Price Shocks on Stock Prices by Industry (국제유가 충격이 산업별 주가에 미치는 영향)

  • Lee, Yun-Jung;Yoon, Seong-Min
    • Environmental and Resource Economics Review / v.31 no.2 / pp.233-260 / 2022
  • In this paper, we analyze how oil price fluctuations affect stock prices by industry using the nonparametric quantile causality test. We use weekly data on the WTI spot price, the KOSPI index, and 22 industrial stock indices from January 1998 to April 2021. The empirical results show that the effect of changes in oil prices on the KOSPI index is not significant, which can be attributed to the mixed responses of the stock prices of the various industries included in the index. Looking at the stock price response to oil prices by industry, 9 of 18 industries, including Cloth, Paper, and Medicine, show causality with oil prices, while 9 industries, including Food, Chemical, and Non-metal, do not. Four industries, Medicine and Communication (quantiles 0.45~0.85), Cloth (0.15~0.45), and Construction (0.5~0.6), show causality with oil prices at more than three consecutive quantiles; however, the quantiles at which causality appears differ across industries. These results indicate that the effects of oil prices on stock prices differ significantly by industry, and that even within a single industry the response to oil price changes depends on the market situation. This suggests that the government's macroeconomic policies, such as industrial and employment policies, should take into account the differences in the effects of oil price fluctuations across industries and market conditions. It also shows that investors should rebalance their portfolios by industry when oil prices fluctuate.
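
As a loose illustration only: the sketch below is a simplified parametric stand-in that compares the check loss of a quantile regression for industry returns with and without a lagged oil return, not the nonparametric quantile causality test used in the paper; all series and the quantile level are simulated and arbitrary.

```python
# Minimal sketch (a simplified stand-in, NOT the paper's nonparametric test):
# compare the check loss of a quantile regression with and without lagged oil returns.
import numpy as np
import statsmodels.api as sm

def check_loss(u, tau):
    """Average check (pinball) loss of residuals u at quantile tau."""
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(8)
T, tau = 1200, 0.25
oil = rng.normal(0, 0.03, T)                                # toy weekly oil returns
stock = 0.2 * np.roll(oil, 1) + rng.normal(0, 0.02, T)      # toy industry returns
y, lag_stock, lag_oil = stock[1:], stock[:-1], oil[:-1]

X0 = sm.add_constant(lag_stock)                             # restricted: own lag only
X1 = sm.add_constant(np.column_stack([lag_stock, lag_oil])) # unrestricted: + oil lag

r0 = sm.QuantReg(y, X0).fit(q=tau)
r1 = sm.QuantReg(y, X1).fit(q=tau)
print("check loss without oil lag:", check_loss(y - r0.predict(X0), tau))
print("check loss with oil lag:   ", check_loss(y - r1.predict(X1), tau))
```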

Particulate Matter Prediction using Quantile Boosting (분위수 부스팅을 이용한 미세먼지 농도 예측)

  • Kwon, Jun-Hyeon;Lim, Yaeji;Oh, Hee-Seok
    • The Korean Journal of Applied Statistics / v.28 no.1 / pp.83-92 / 2015
  • Concerning national health, it is important to develop an accurate method for predicting atmospheric particulate matter (PM) concentrations, because exposure to such fine dust can trigger not only respiratory diseases but also dermatoses, ophthalmopathies, and cardiovascular diseases. The National Institute of Environmental Research (NIER) employs a decision tree to predict bad weather days with a high PM concentration. However, the decision tree method (with its inherent instability) is not a suitable model for predicting bad weather days, which represent only 4% of the entire data. In this paper, while pointing out the inaccuracy and inappropriateness of the method used by the NIER, we present the utility of a new prediction model that adopts boosting with quantile loss functions. We evaluate the performance of the new method over various $\tau$ values and justify the proposed method through comparison.
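
A minimal sketch of boosting with a quantile (check) loss, using scikit-learn's gradient boosting with loss="quantile" over several $\tau$ values; the covariates are hypothetical stand-ins for meteorological inputs, not the NIER data or the paper's exact model.

```python
# Minimal sketch: gradient boosting with a quantile (check) loss over several
# tau values, in the spirit of quantile boosting. Features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(9)
n = 2000
X = rng.normal(size=(n, 3))                                      # toy meteorological covariates
pm = 40 + 15 * X[:, 0] - 8 * X[:, 1] + rng.gamma(2.0, 5.0, n)    # toy PM concentration

for tau in (0.5, 0.9, 0.95):                                     # several tau values
    model = GradientBoostingRegressor(loss="quantile", alpha=tau,
                                      n_estimators=200, max_depth=3)
    model.fit(X, pm)
    pred = model.predict(X)
    print(f"tau={tau}: share of observations below prediction = {np.mean(pm <= pred):.3f}")
```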