• Title/Summary/Keyword: 두꺼운 꼬리 분포 (heavy-tailed distribution)


VaR and ES as Tail-Related Risk Measures for Heteroscedastic Financial Series (이분산성 및 두꺼운 꼬리분포를 가진 금융시계열의 위험추정 : VaR와 ES를 중심으로)

  • Moon, Seong-Ju;Yang, Sung-Kuk
    • The Korean Journal of Financial Management / v.23 no.2 / pp.189-208 / 2006
  • In this paper we are concerned with estimating tail-related risk measures for heteroscedastic financial time series, and with the limitation of VaR that it tells us nothing about the potential size of the loss beyond it. We therefore use the GARCH-EVT model, which describes the tail of the conditional distribution of a heteroscedastic financial series, and adopt Expected Shortfall to overcome this limitation. The main results can be summarized as follows. First, the distribution of the stock return series is not normal but fat-tailed and heteroscedastic; calculating VaR under a normal distribution ignores both the heavy tails of the innovations and the stochastic nature of the volatility. Second, the GARCH-EVT model is vindicated by its very satisfactory overall performance in various backtesting experiments. Third, we adopt expected shortfall as an alternative risk measure. (A minimal code sketch of the EVT step follows below.)

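Not the paper's code, but a minimal sketch of the EVT step it relies on: fit a generalized Pareto distribution (GPD) to the losses exceeding a high threshold and plug the fitted shape and scale into the standard POT formulas for VaR and Expected Shortfall. The `losses` array, the threshold quantile, and the confidence level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

def var_es_pot(losses, threshold_q=0.95, alpha=0.99):
    """Unconditional POT estimates of VaR and ES (McNeil-Frey style plug-in)."""
    u = np.quantile(losses, threshold_q)           # high threshold
    exceed = losses[losses > u] - u                # exceedances over u
    xi, _, beta = genpareto.fit(exceed, floc=0)    # GPD shape xi and scale beta
    n, n_u = len(losses), len(exceed)
    var = u + beta / xi * ((n / n_u * (1 - alpha)) ** (-xi) - 1)
    es = var / (1 - xi) + (beta - xi * u) / (1 - xi)   # requires xi < 1
    return var, es

# Illustrative heavy-tailed "losses"; the paper applies this step to GARCH residuals.
rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=5000)
print(var_es_pot(losses))
```

In the paper the same step is applied to the standardized residuals of a fitted GARCH model, so the resulting VaR and ES are rescaled by the conditional volatility forecast; that conditioning step is omitted here.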

A Strategy of Adjusted Internet Traffic Modeling using Heavy-Tailed Distributions (두꺼운 꼬리 분포를 이용한 수정된 인터넷 트래픽 모델)

  • Ji, Seon-Su
    • Journal of Korea Society of Industrial Information Systems / v.12 no.3 / pp.10-18 / 2007
  • With the recent growth of internet commercialization and differentiated QoS (quality of service), statistical traffic modeling is necessary for forecasting and controlling future network capacity. This paper reviews the essential components of web workloads and proposes an adjusted internet traffic model based on heavy-tailed distributions and intervention techniques. (A brief illustrative sketch follows below.)

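As a rough illustration of the heavy-tailed workload component mentioned above (not the author's model), the sketch below fits a classical Pareto distribution to synthetic web object sizes by maximum likelihood; the variable names and parameter values are assumptions.

```python
import numpy as np

def pareto_mle(sizes):
    """MLE of the classical Pareto scale x_m and tail index alpha."""
    xm = sizes.min()                                    # scale = smallest observation
    alpha = len(sizes) / np.sum(np.log(sizes / xm))     # tail-index MLE
    return xm, alpha

rng = np.random.default_rng(1)
sizes = (rng.pareto(a=1.2, size=10_000) + 1) * 1_000    # synthetic object sizes (bytes)
print(pareto_mle(sizes))    # alpha < 2 signals infinite variance, i.e. a heavy tail
```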

Frequency Analyses for Extreme Rainfall Data using the Burr XII Distribution (Burr XII 모형을 이용한 우리나라 극한 강우자료 빈도해석)

  • Seo, Jungho;Shin, Ju-Young;Jung, Younghun;Heo, Jun-Haeng
    • Proceedings of the Korea Water Resources Association Conference / 2018.05a / pp.335-335 / 2018
  • Due to recent abnormal climate phenomena, the frequency and intensity of extreme hydrologic events have been increasing in many regions of the world. Accordingly, the choice of an appropriate probability distribution model is very important in the frequency analysis of extreme rainfall for the design of hydraulic structures. In hydrologic statistics, extreme-value distributions such as the generalized extreme value (GEV), generalized logistic (GLO), and Gumbel (GUM) models have mainly been used for this purpose. For Korean rainfall data the GEV and GUM distributions are known to fit relatively well, but because they have at most one shape parameter, the statistical characteristics they can represent are limited. To analyze data that are not adequately reproduced by the conventional GEV or GUM distributions, distributions with two shape parameters are being studied. In this study, we therefore evaluate the applicability of the Burr XII distribution, which has two shape parameters, to extreme rainfall data in Korea. The Burr XII distribution is supported on positive values, like the gamma or exponential distributions, and is heavy-tailed, like the Cauchy or Pareto distributions, so it is known to be suitable for extreme events in which relatively large values occur frequently. To this end, we perform at-site and regional frequency analyses of Korean rainfall data using the Burr XII distribution and compare the results with those obtained from the GEV, GLO, and GUM distributions, which are known to fit Korean rainfall data relatively well. (A short code sketch of the Burr XII fit follows below.)

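A minimal sketch, not the study's analysis: fit the two-shape-parameter Burr XII distribution to a hypothetical series of annual maximum rainfall depths and compare its 100-year return level with a GEV fit. The simulated data and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import burr12, genextreme

# Hypothetical annual maximum rainfall depths (mm); real work would use gauge records.
rng = np.random.default_rng(2)
annual_max = genextreme.rvs(c=-0.15, loc=120, scale=40, size=60, random_state=rng)

p = 1 - 1 / 100                                 # non-exceedance prob. of the 100-year event
burr_params = burr12.fit(annual_max, floc=0)    # two shapes c, d plus scale (loc fixed at 0)
gev_params = genextreme.fit(annual_max)

print("Burr XII 100-yr level:", burr12.ppf(p, *burr_params))
print("GEV      100-yr level:", genextreme.ppf(p, *gev_params))
```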

Value at Risk with Peaks over Threshold: Comparison Study of Parameter Estimation (Peacks over threshold를 이용한 Value at Risk: 모수추정 방법론의 비교)

  • Kang, Minjung;Kim, Jiyeon;Song, Jongwoo;Song, Seongjoo
    • The Korean Journal of Applied Statistics / v.26 no.3 / pp.483-494 / 2013
  • The importance of financial risk management has been highlighted by several recent incidents of global financial crisis. One of the issues in financial risk management is how to measure risk; currently, the most widely used risk measure is Value at Risk (VaR). We can consider estimating VaR using extreme value theory when the financial data have heavy tails, as in the recent market environment. In this paper, we study the estimation of VaR using Peaks over Threshold (POT), a common method of modeling fat-tailed data with extreme value theory. To use POT, we first estimate the parameters of the Generalized Pareto Distribution (GPD). Here, we compare three different methods of estimating the GPD parameters by comparing the performance of the resulting VaR estimates on KOSPI 5-minute data. In addition, we simulate data from normal inverse Gaussian distributions and examine two of the GPD parameter estimation methods. We find that the more recent parameter estimation methods work better than maximum likelihood estimation when the kurtosis of the KOSPI return distribution is very high, and the simulation experiment shows similar results. (A sketch of the GPD estimation step follows below.)
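
The paper's refined estimators (e.g., Zhang's method and NLS-2) are not reproduced here; the sketch below only contrasts two simple GPD estimators for POT exceedances, scipy's maximum likelihood fit and a textbook method-of-moments estimator, on simulated exceedances.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_mom(y):
    """Method-of-moments estimates of the GPD shape xi and scale beta."""
    m, v = y.mean(), y.var(ddof=1)
    xi = 0.5 * (1 - m**2 / v)
    beta = 0.5 * m * (m**2 / v + 1)
    return xi, beta

rng = np.random.default_rng(3)
y = genpareto.rvs(c=0.3, scale=1.0, size=2_000, random_state=rng)   # simulated exceedances
xi_mle, _, beta_mle = genpareto.fit(y, floc=0)                      # maximum likelihood
print("MLE:", (xi_mle, beta_mle))
print("MoM:", gpd_mom(y))
```

Either pair of estimates can then be plugged into the usual POT quantile formula to obtain VaR, as in the sketch after the first entry above.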

Estimation of Car Insurance Loss Ratio Using the Peaks over Threshold Method (POT방법론을 이용한 자동차보험 손해율 추정)

  • Kim, S.Y.;Song, J.
    • The Korean Journal of Applied Statistics / v.25 no.1 / pp.101-114 / 2012
  • In car insurance, the loss ratio is the ratio of total losses paid out in claims to total earned premiums. In order to minimize the loss to the insurance company, estimating extreme quantiles of the loss ratio distribution is necessary, because the loss ratio carries essential profit-and-loss information. Like other insurance-related datasets, the loss ratio follows a heavy-tailed distribution. The Peaks over Threshold (POT) method and the Hill estimator are commonly used to estimate extreme quantiles of heavy-tailed distributions. This article compares and analyzes the performance of various parameter estimation methods using a simulation and real car insurance loss ratio data. In addition, we estimate extreme quantiles using the Hill estimator. The simulation and the loss ratio data applications demonstrate that the POT method estimates quantiles more accurately than the Hill estimator in most cases. Moreover, the MLE, Zhang, and NLS-2 methods show the best performance among the GPD parameter estimation methods. (A sketch of the Hill estimator follows below.)
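
A minimal sketch of the Hill estimator and the associated Weissman-type extreme quantile, the benchmark against which the POT approach is compared in this entry; the synthetic `loss_ratio` data and the choice of k are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hill_quantile(x, k, p):
    """Hill tail-index estimate from the top-k order statistics and the
    corresponding Weissman extreme quantile at exceedance probability p."""
    xs = np.sort(x)[::-1]                              # descending order statistics
    gamma = np.mean(np.log(xs[:k]) - np.log(xs[k]))    # Hill estimator
    q = xs[k] * (k / (len(x) * p)) ** gamma            # extrapolated quantile
    return gamma, q

rng = np.random.default_rng(4)
loss_ratio = rng.pareto(a=2.5, size=5_000) + 1         # synthetic heavy-tailed ratios
print(hill_quantile(loss_ratio, k=200, p=0.001))       # ~99.9% quantile
```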

Robust Bayesian meta analysis (로버스트 베이지안 메타분석)

  • Choi, Seong-Mi;Kim, Dal-Ho;Shin, Im-Hee;Kim, Ho-Gak;Kim, Sang-Gyung
    • Journal of the Korean Data and Information Science Society / v.22 no.3 / pp.459-466 / 2011
  • This article addresses robust Bayesian modeling for meta-analysis, which derives a general conclusion by combining independently performed individual studies. Specifically, we propose hierarchical Bayesian models with unknown variances for meta-analysis under priors that are scale mixtures of normals and thus have heavier tails than the normal. For the numerical analysis, we use the Gibbs sampler to calculate Bayesian estimators and illustrate the proposed methods on actual data. (A sketch of the scale-mixture idea follows below.)
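
A minimal sketch of the modeling idea behind the proposal, not the authors' Gibbs sampler: a Student-t prior, which has heavier tails than the normal, written as a scale mixture of normals. This representation is what makes the conditional distributions convenient for Gibbs sampling; the degrees of freedom and sample size below are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
nu = 8                                                     # degrees of freedom
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=100_000)  # mixing precisions ~ Gamma(nu/2, rate nu/2)
draws = rng.normal(loc=0.0, scale=1.0 / np.sqrt(lam))      # N(0, 1/lambda) given lambda

# The mixture draws should roughly reproduce the excess kurtosis of a t(nu) prior.
print("mixture excess kurtosis:", stats.kurtosis(draws))
print("t(%d)   excess kurtosis: %s" % (nu, stats.t(df=nu).stats(moments='k')))
```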

Time-varying modeling of the composite LN-GPD (시간에 따라 변화하는 로그-정규분포와 파레토 합성 분포의 모형 추정)

  • Park, Sojin;Baek, Changryong
    • The Korean Journal of Applied Statistics / v.31 no.1 / pp.109-122 / 2018
  • The composite lognormal-generalized Pareto distribution (LN-GPD) is a mixture of a right-truncated lognormal and a GPD for a given threshold value. Scollnik (Scandinavian Actuarial Journal, 2007, 20-33) shows that the composite LN-GPD adequately describes both the body of a distribution and its heavy-tailedness. This paper considers time-varying modeling of the LN-GPD based on local polynomial maximum likelihood estimation. A time-varying model provides detailed information about time-dependent data, so it can be applied in disciplines such as service engineering for staffing and resource management. Our work also extends Beirlant and Goegebeur (Journal of Multivariate Analysis, 89, 97-118, 2004) in the sense that no data are lost, because the truncated lognormal body is retained. The proposed method performs adequately in simulation, and a real-data application to service times at an Israeli bank call center yields interesting findings on staffing policy. (A sketch of the composite density follows below.)
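
A minimal sketch of the composite LN-GPD density: a right-truncated lognormal body below a threshold u and a GPD tail above it, joined with a mixing weight r. In Scollnik's formulation r is pinned down by continuity conditions at u; here it is left free for simplicity, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import lognorm, genpareto
from scipy.integrate import quad

def composite_lngpd_pdf(x, mu, sigma, u, xi, beta, r):
    """Weight r on a lognormal truncated to (0, u], weight 1-r on a GPD tail above u."""
    body = r * lognorm.pdf(x, s=sigma, scale=np.exp(mu)) / lognorm.cdf(u, s=sigma, scale=np.exp(mu))
    tail = (1 - r) * genpareto.pdf(x - u, c=xi, scale=beta)
    return np.where(x <= u, body, tail)

params = (0.0, 1.0, 2.0, 0.2, 1.0, 0.6)    # mu, sigma, u, xi, beta, r (illustrative)
body_mass, _ = quad(composite_lngpd_pdf, 0, params[2], args=params)
tail_mass, _ = quad(composite_lngpd_pdf, params[2], np.inf, args=params)
print(body_mass + tail_mass)               # sanity check: total mass ~ 1.0
```

The time-varying version in the paper lets these parameters change smoothly over time via local polynomial likelihood; that layer is not sketched here.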

Distribution fitting for the rate of return and value at risk (수익률 분포의 적합과 리스크값 추정)

  • Hong, Chong-Sun;Kwon, Tae-Wan
    • Journal of the Korean Data and Information Science Society / v.21 no.2 / pp.219-229 / 2010
  • There has been much research on risk management due to the rapid increase in risk factors for financial assets. As a method for comprehensive risk management, Value at Risk (VaR) has been developed. For the estimation of VaR, it is an important task to handle the asymmetric, heavy-tailed distribution of the rate of return: most real return distributions have high positive kurtosis and negative skewness. In this paper, several alternative distributions are fitted to real distributions of the rate of return of financial assets, and the VaR estimates obtained from these fitted distributions are compared with those obtained from the empirical distribution. The normal mixture distribution is found to give the best fit, with skewness and kurtosis closest to the real values, and VaR estimation using the normal mixture is more accurate than that based on any of the other distributions, including the normal distribution. (A sketch of the normal mixture fit follows below.)
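
A minimal sketch along the lines of this entry (not the authors' code): fit a two-component normal mixture to synthetic returns with scikit-learn and read the 99% VaR off the fitted mixture by inverting its CDF. The return series, component count, and confidence level are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq
from sklearn.mixture import GaussianMixture

# Synthetic daily returns: mostly calm days plus a turbulent, fatter-tailed regime.
rng = np.random.default_rng(6)
returns = np.concatenate([rng.normal(0.0005, 0.01, 2000),
                          rng.normal(-0.002, 0.03, 300)])

gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
w, mu = gm.weights_, gm.means_.ravel()
sd = np.sqrt(gm.covariances_.ravel())

def mixture_cdf(x):
    return np.sum(w * norm.cdf(x, loc=mu, scale=sd))

alpha = 0.01                                            # left-tail probability for 99% VaR
var_99 = -brentq(lambda x: mixture_cdf(x) - alpha, -1.0, 1.0)
print("99% VaR:", var_99)
```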

Convergence Analysis of Adaptive L-Filter (적응 L-필터의 수렴성 해석)

  • Kim, Soo-Yong;Bae, Sung-Ho
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1210-1216 / 2009
  • In this paper we analyze the convergence behavior of the recursive least rank (RLR) L-filter. The RLR L-filter is an order-statistics filter whose coefficients are weights applied to the inputs sorted by magnitude, and it is a nonlinear adaptive filter that uses the RLR algorithm for coefficient updating. The RLR algorithm is a nonlinear adaptive algorithm based on rank estimates from robust statistics. The mean and mean-squared convergence behavior of the RLR L-filter is examined under variable step sizes. The RLR L-filter adapts toward a median-type filter under the heavy-tailed distribution of impulse noise and toward an average-type filter under Gaussian noise. (A sketch of an L-filter follows below.)

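A minimal sketch of a plain (non-adaptive) L-filter: each output sample is a weighted sum of the order statistics in a sliding window. The RLR algorithm analyzed in the paper adapts these weights online; here two fixed weight vectors illustrate the limiting cases mentioned above, the median-type and average-type filters. The signal and noise settings are assumptions.

```python
import numpy as np

def l_filter(x, weights):
    """Apply an L-filter: weighted sum of the sorted samples in each window."""
    n = len(weights)
    pad = n // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.empty(len(x))
    for i in range(len(x)):
        window = np.sort(xp[i:i + n])      # order statistics of the window
        out[i] = weights @ window          # L-estimate
    return out

mean_w = np.full(5, 1 / 5)                 # average-type weights (suited to Gaussian noise)
median_w = np.array([0., 0., 1., 0., 0.])  # median-type weights (suited to impulse noise)

rng = np.random.default_rng(7)
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
impulses = rng.random(200) < 0.05
noisy = signal + np.where(impulses, rng.normal(0, 5, 200), rng.normal(0, 0.1, 200))

print("median-type MSE:", np.mean((l_filter(noisy, median_w) - signal) ** 2))
print("average-type MSE:", np.mean((l_filter(noisy, mean_w) - signal) ** 2))
```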

A comparative study of feature screening methods for ultrahigh dimensional multiclass classification (초고차원 다범주분류를 위한 변수선별 방법 비교 연구)

  • Lee, Kyungeun;Kim, Kyoung Hee;Shin, Seung Jun
    • The Korean Journal of Applied Statistics / v.30 no.5 / pp.793-808 / 2017
  • We compare various variable screening methods for multiclass classification problems when the data are ultrahigh-dimensional. Two different approaches are considered: (1) pairwise extension from binary classification via one-versus-one or one-versus-rest comparisons, and (2) direct classification of the multiclass response. We conduct extensive simulation studies under different conditions: heavy-tailed explanatory variables, correlated signal and noise variables, correlated joint distributions but uncorrelated marginals, and unbalanced response variables. We then analyze real data to examine the performance of the methods. The results show that model-free methods perform better for multiclass classification problems as well as for binary ones. (A sketch of one such screener follows below.)
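
One illustrative model-free marginal screener, not one of the specific methods compared in the paper: rank each feature by a Kruskal-Wallis statistic across the classes and keep the top d. The simulated design (heavy-tailed noise features plus a single signal feature) is an assumption.

```python
import numpy as np
from scipy.stats import kruskal

def screen_features(X, y, d):
    """Return the indices of the d features with the largest Kruskal-Wallis statistic."""
    classes = np.unique(y)
    stats_ = np.array([kruskal(*(X[y == c, j] for c in classes)).statistic
                       for j in range(X.shape[1])])
    return np.argsort(stats_)[::-1][:d]

rng = np.random.default_rng(8)
n, p = 200, 2000
y = rng.integers(0, 3, size=n)                 # three-class response
X = rng.standard_t(df=3, size=(n, p))          # heavy-tailed noise features
X[:, 0] += y                                   # feature 0 carries the class signal
print(screen_features(X, y, d=10))             # feature 0 should appear in the top 10
```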