• Title/Summary/Keyword: Block maxima

Reduction of blocking artifacts using the local modulus maxima and singularity detection in wavelet transform (웨이블릿 변환 영역에서의 국부 계수 최대치 및 특이점 검출을 이용한 블록화 현상 제거)

  • 이석환;김승진;김태수;이건일
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.109-120 / 2004
  • The current paper presents an effective deblocking algorithm for block-based coded images using singularity detection in the Mallat wavelet transform. In block-based coded images, the local maxima of the wavelet transform modulus detect all singularities, including blocking artifacts, from multiscale edges. Accordingly, the current study discriminates between blocking artifacts and edges by estimating the Lipschitz regularity of the local maxima and removing the wavelet transform modulus of blocking artifacts. Experimental results showed that the performance of the proposed algorithm was objectively and subjectively superior.
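
The discrimination between edges and blocking artifacts rests on how the wavelet transform modulus maxima decay across scales: the Lipschitz exponent is non-negative for genuine edges and negative for artifact-like singularities. Below is a minimal 1-D sketch of estimating that decay slope, assuming a derivative-of-Gaussian wavelet at dyadic scales; the function name, window sizes, and toy signals are illustrative assumptions, not the authors' implementation.

```python
# Minimal 1-D sketch: estimate the decay slope of wavelet modulus maxima across
# dyadic scales, which approximates the Lipschitz exponent under the usual
# convention (step edge ~ 0, impulse-like artifact ~ -1). Illustrative only;
# not the deblocking algorithm of the paper.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def modulus_maxima_slope(signal, location, levels=4):
    log_scale, log_modulus = [], []
    for j in range(1, levels + 1):
        sigma = 2.0 ** j
        # s * d/dx (f * g_s): a derivative-of-Gaussian dyadic wavelet response.
        detail = sigma * gaussian_filter1d(signal.astype(float), sigma=sigma, order=1)
        lo, hi = max(0, location - 2 ** j), min(len(signal), location + 2 ** j)
        log_scale.append(j)
        log_modulus.append(np.log2(np.abs(detail[lo:hi]).max() + 1e-12))
    slope, _ = np.polyfit(log_scale, log_modulus, 1)
    return slope

step = np.zeros(256)
step[128:] = 1.0                          # ideal step edge -> slope near 0
spike = np.zeros(256); spike[128] = 1.0   # impulse -> slope near -1
print(modulus_maxima_slope(step, 128), modulus_maxima_slope(spike, 128))
```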

Blocking artifact reduction using singularities detection and Lipschitz regularity from multiscale edges (다층스케일 웨이블릿 변환영역에서 특이점 검출 및 Lipschitz 정칙 상수를 이용한 블록화 현상 제거)

  • Lee, Suk-Hwan;Kwon, Kee-Koo;Kim, Byung-Ju;Kwon, Seong-Geun;Lee, Jong-Won;Lee, Kuhn-Il
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.10A / pp.1011-1020 / 2002
  • The current paper presents an effective deblocking algorithm for block-based coded images using singularity detection in a wavelet transform. In block-based coded images, the local maxima of a wavelet transform modulus detect all singularities, including blocking artifacts, from multiscale edges. Accordingly, the current study discriminates between a blocking artifact and an edge by estimating the Lipschitz regularity of the local maxima and removing the wavelet transform modulus of a blocking artifact that has a negative Lipschitz regularity exponent. Experimental results showed that the performance of the proposed algorithm was objectively and subjectively superior.

Usefulness and Limitations of Extreme Value Theory VAR model : The Korean Stock Market (극한치이론을 이용한 VAR 추정치의 유용성과 한계 - 우리나라 주식시장을 중심으로 -)

  • Kim, Kyu-Hyong;Lee, Joon-Haeng
    • The Korean Journal of Financial Management / v.22 no.1 / pp.119-146 / 2005
  • This study applies extreme value theory to obtain an extreme value VaR for the Korean stock market and demonstrates the usefulness of the approach. The block maxima model and the POT model were used as extreme value models, and back-testing was used to determine which model was more appropriate. The block maxima model turned out to be unstable: the variation of the estimate was very large depending on the confidence level, and the magnitude of the estimates depended largely on the block size. This shows that the block maxima model is not appropriate for the Korean stock market. The POT model, on the other hand, was relatively stable even though the extreme value VaR depended on the choice of threshold. Back-testing also showed that the extreme value VaR gave better results than the delta VaR above the 97.5% confidence level. The POT model performs better the higher the confidence level, which suggests that it is useful as a risk management tool, especially for VaR estimates at confidence levels above 99%. This study treats the right and left tails of the return distribution separately and estimates an EVT-VaR for each, reflecting the asymmetry of the return distribution of the Korean stock market.
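
The two estimators compared above can be sketched in a few lines. The following hedged illustration assumes synthetic Student-t returns, a 20-day block size, and a 95th-percentile threshold (placeholders, not the paper's data or tuning); it fits a GEV to block maxima of losses and a GPD to threshold exceedances, then converts each fit into a 99% VaR.

```python
# Hedged sketch of block-maxima (GEV) versus POT (GPD) VaR estimation.
# Synthetic returns, the 20-day block, and the 95% threshold are assumptions.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(0)
losses = -(rng.standard_t(df=4, size=5000) * 0.01)     # stand-in for daily losses

# Block maxima: GEV fitted to the maximum loss of each 20-day block.
block = 20
maxima = losses[: len(losses) // block * block].reshape(-1, block).max(axis=1)
c_bm, loc_bm, scale_bm = genextreme.fit(maxima)

# POT: GPD fitted to exceedances over a high threshold.
u = np.quantile(losses, 0.95)
excess = losses[losses > u] - u
c_pot, _, scale_pot = genpareto.fit(excess, floc=0.0)

p = 0.99                                               # VaR confidence level
# Daily CDF F satisfies F(x)**block ~ GEV(x), so VaR_p solves GEV(x) = p**block.
var_bm = genextreme.ppf(p ** block, c_bm, loc_bm, scale_bm)
# POT: P(L > x) = (n_u / n) * GPD_survival(x - u); invert at 1 - p.
n, n_u = len(losses), len(excess)
var_pot = u + genpareto.ppf(1.0 - (1.0 - p) * n / n_u, c_pot, scale=scale_pot)
print(f"99% VaR  block maxima: {var_bm:.4f}   POT: {var_pot:.4f}")
```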

Quantization Noise Reduction in MPEG Postprocessing System Using the Variable Filter Adaptive to Edge Signal (에지 신호에 적응적인 가변 필터를 이용한 MPEG 후처리 시스템에서의 양자화 잡음 제거)

  • Lee Suk-Hwan;Huh So-Jung;Lee Eung-Joo;Kwon Ki-Ryong
    • Journal of Korea Multimedia Society / v.9 no.3 / pp.296-306 / 2006
  • We propose an algorithm for quantization noise reduction in an MPEG postprocessing system based on a variable filter that adapts to the edge signal. In our algorithm, an edge map and the local modulus maxima of the decoded images are obtained with a 2D Mallat wavelet filter. Blocking artifacts between blocks are then reduced by a Gaussian LPF whose filtering region varies according to the edge map, and ringing artifacts within blocks are reduced by a 2D SAF guided by the local modulus maxima. Experimental results show that the proposed algorithm is superior to conventional algorithms in terms of PSNR, which was improved by 0.04-0.20 dB, and in subjective image quality.
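
As a rough illustration of the edge-adaptive idea (strong low-pass filtering of block-boundary pixels in flat regions, weak filtering near genuine edges), here is a simplified sketch. The Sobel-based edge map, the fixed 8x8 block grid, the two Gaussian sigmas, and the threshold are assumptions for illustration; the paper instead derives its edge map and local modulus maxima from a 2D Mallat wavelet filter and applies a signal-adaptive filter for ringing.

```python
# Simplified sketch of edge-adaptive deblocking: smooth block-boundary pixels
# strongly in flat regions and weakly near edges. The edge map here is a plain
# Sobel gradient, not the paper's wavelet-based map; grid and sigmas are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def deblock_adaptive(image, block=8, edge_thresh=30.0):
    img = image.astype(float)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))   # crude edge map
    strong = gaussian_filter(img, sigma=1.5)                  # for flat regions
    weak = gaussian_filter(img, sigma=0.5)                    # near real edges
    # Restrict filtering to pixels adjacent to the coding block grid.
    mask = np.zeros(img.shape, dtype=bool)
    mask[:, block - 1::block] = True
    mask[:, block::block] = True
    mask[block - 1::block, :] = True
    mask[block::block, :] = True
    edge = grad > edge_thresh
    out = img.copy()
    out[mask & ~edge] = strong[mask & ~edge]
    out[mask & edge] = weak[mask & edge]
    return out

# Usage (hypothetical input): deblocked = deblock_adaptive(decoded_frame)
```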

Extreme Value Analysis of Statistically Independent Stochastic Variables

  • Choi, Yongho;Yeon, Seong Mo;Kim, Hyunjoe;Lee, Dongyeon
    • Journal of Ocean Engineering and Technology / v.33 no.3 / pp.222-228 / 2019
  • An extreme value analysis (EVA) is essential to obtain design values for highly nonlinear variables such as long-term environmental data for wind and waves, and slamming or sloshing impact pressures. According to extreme value theory (EVT), the extreme value distribution is derived by multiplying the initial cumulative distribution functions of independent and identically distributed (IID) random variables. However, in the DNVGL position mooring standard, the sampled global maxima of the mooring line tension are assumed to be IID stochastic variables without checking their independence, and the ITTC Recommended Procedures and Guidelines for Sloshing Model Tests do not address the independence of the sampled data at all. Hence, a design value estimated without an IID check may be under- or over-estimated, because observations far from a Weibull or generalized Pareto distribution (GPD) are treated as outliers. In this study, the sampled data are first checked for IID behavior within the EVA. When the variables are not IID, an automatic resampling scheme is recommended, using the block maxima approach for a generalized extreme value (GEV) distribution and the peaks-over-threshold (POT) approach for a GPD. A partial autocorrelation function (PACF) is used to check whether the variables are IID. In this study, only one 5 h sample of sloshing test results was used for a feasibility study of the IID resampling approach. Based on this study, resampling to obtain IID variables may reduce the number of outliers, and a statistically more appropriate design value can be achieved with independent samples.
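
A minimal version of that workflow, assuming a synthetic autocorrelated peak series in place of real sloshing pressures, might look as follows: check the sampled maxima with a PACF and, while they still look dependent, enlarge the block size of the block-maxima resampling before fitting the GEV. The 95% PACF band, the lag count, and the block-doubling rule are illustrative choices, not the paper's scheme.

```python
# Hedged sketch: PACF-based IID check plus block-maxima resampling, using a
# synthetic AR(1)-correlated peak series in place of measured sloshing pressures.
import numpy as np
from scipy.stats import genextreme
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(1)
eps = rng.gumbel(size=2000)
peaks = np.empty_like(eps)
peaks[0] = eps[0]
for t in range(1, len(eps)):
    peaks[t] = 0.6 * peaks[t - 1] + eps[t]        # dependent sampled maxima

def looks_iid(x, max_lag=10):
    """Crude check: every PACF lag falls inside the 95% confidence band."""
    lags = min(max_lag, len(x) // 4)
    band = 1.96 / np.sqrt(len(x))
    return bool(np.all(np.abs(pacf(x, nlags=lags)[1:]) < band))

block, maxima = 1, peaks
while not looks_iid(maxima) and block < 64:        # resample until (roughly) IID
    block *= 2
    m = len(peaks) // block * block
    maxima = peaks[:m].reshape(-1, block).max(axis=1)

shape, loc, scale = genextreme.fit(maxima)         # GEV on the decorrelated maxima
print(f"block size {block}, {len(maxima)} maxima, GEV shape {shape:.2f}")
```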

Estimation of VaR Using Extreme Losses, and Back-Testing: Case Study (극단 손실값들을 이용한 VaR의 추정과 사후검정: 사례분석)

  • Seo, Sung-Hyo;Kim, Sung-Gon
    • The Korean Journal of Applied Statistics / v.23 no.2 / pp.219-234 / 2010
  • For index investing that tracks the KOSPI, we estimate the Value at Risk (VaR) from the extreme losses of the daily KOSPI returns. To this end, we apply the block maxima (BM) model, one of the useful models in extreme value theory, and we also estimate the extremal index to account for dependency in the occurrence of extreme losses. Back-testing based on the failure rate method shows that the model is suitable for VaR estimation. We also compare this model with the GARCH model, which is commonly used for VaR estimation. Back-testing indicates no meaningful difference between the two models if the conditional returns are assumed to follow the t-distribution. However, the VaR estimated with the GARCH model is sensitive to extreme losses that occur near the time of estimation, while that of the BM model is not. Thus, the GARCH-based VaR is preferable for short-term prediction, whereas the BM model is better for long-term prediction.
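
The extremal index mentioned above measures how strongly extreme losses cluster (theta = 1 for independent extremes, theta < 1 under clustering) and shrinks the effective number of observations per block from n to n*theta. A hedged sketch of the simple runs estimator is given below; the threshold, the run length, and the synthetic i.i.d. losses (for which theta should come out near 1) are illustrative assumptions, not the paper's estimator or data.

```python
# Hedged sketch of a runs estimator for the extremal index theta.
# Threshold, run length, and the synthetic i.i.d. losses are illustrative.
import numpy as np

def extremal_index_runs(losses, u, run_length=5):
    """theta ~ (number of clusters) / (number of exceedances); a gap of more
    than `run_length` non-exceedances starts a new cluster."""
    idx = np.flatnonzero(losses > u)
    if idx.size == 0:
        return 1.0
    clusters = 1 + int(np.sum(np.diff(idx) > run_length))
    return clusters / idx.size

rng = np.random.default_rng(2)
daily_losses = np.abs(rng.standard_t(df=3, size=3000)) * 0.01   # i.i.d. here
u = np.quantile(daily_losses, 0.95)
theta = extremal_index_runs(daily_losses, u)
# With clustering, P(block max <= x) ~ F(x)**(n * theta); real return series
# typically give theta < 1, while this i.i.d. toy series gives theta near 1.
print(f"estimated extremal index: {theta:.2f}")
```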

Multivariate design estimations under copulas constructions. Stage-1: Parametrical density constructions for defining flood marginals for the Kelantan River basin, Malaysia

  • Latif, Shahid;Mustafa, Firuza
    • Ocean Systems Engineering / v.9 no.3 / pp.287-328 / 2019
  • A comprehensive understanding of flood risk via frequency analysis often demands multivariate designs under different notions of return period. A flood is a trivariate random event, which points to the unreliability of univariate return periods and calls for a joint dependency construction that accounts for its multiple intercorrelated vectors, i.e., flood peak, volume, and duration. Selecting the most parsimonious probability functions for the univariate flood marginal distributions is usually a mandatory preprocessing step before establishing the joint dependency, especially under the copula methodology, which allows the practitioner to model the univariate marginals separately from their joint construction. Parametric density approximations hypothesize that the random samples follow some specific or predefined probability density function, and different choices usually yield different estimates, especially in the tail of the distribution. The upper tail is of particular interest in flood modelling; moreover, no evidence favours any fixed distribution, so the marginals are usually characterized through a trial-and-error procedure based on goodness-of-fit measures. In addition, model performance evaluation and selection of the best-fitted distributions demand precise investigation by comparing the relative sample-reproducing capabilities; otherwise, inconsistencies may introduce uncertainty. The strengths and weaknesses of different fitness statistics also vary, and they differ in how well they reveal gaps and discrepancies among fitted distributions. In this study, the marginal distributions of the flood variables are selected by fitting an extensive set of parametric functions to event-based (or block annual maxima) samples drawn from 50 years of continuously distributed streamflow characteristics for the Kelantan River basin at Guillemard Bridge, Malaysia. Model fitness is examined based on the degree of agreement between empirical and theoretical cumulative probabilities, and both analytical and graphical visual inspections are undertaken to provide decisive evidence in favour of the best-fitted probability densities.
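
The marginal-selection stage described above, fitting a battery of candidate parametric densities to each flood variable and ranking them by goodness of fit, can be sketched as follows. The candidate list, the synthetic Gumbel-distributed peak-flow sample, and the Kolmogorov-Smirnov criterion are illustrative stand-ins for the larger set of functions and fitness measures used in the paper.

```python
# Hedged sketch: fit several candidate distributions to a flood variable and
# rank them by a goodness-of-fit statistic. Candidates, sample, and the KS
# criterion are illustrative assumptions, not the paper's full methodology.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
peak_flow = stats.gumbel_r.rvs(loc=500.0, scale=120.0, size=50, random_state=rng)

candidates = {
    "gumbel_r": stats.gumbel_r,
    "genextreme": stats.genextreme,
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "pearson3": stats.pearson3,
}
results = []
for name, dist in candidates.items():
    params = dist.fit(peak_flow)                       # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(peak_flow, name, args=params)
    results.append((ks_stat, name, p_value))

for ks_stat, name, p_value in sorted(results):         # smaller KS = better fit
    print(f"{name:12s}  KS = {ks_stat:.3f}  p = {p_value:.2f}")
```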

Concept of Seasonality Analysis of Hydrologic Extreme Variables and Effective Design Rainfall Estimation Using Nonstationary Frequency Analysis (극치수문자료의 계절성 분석 개념 및 비정상성 빈도해석을 이용한 유효확률강수량 해석)

  • Kwon, Hyun-Han;Lee, Jeong-Ju;Lee, Dong-Ryul
    • Proceedings of the Korea Water Resources Association Conference / 2010.05a / pp.1434-1438 / 2010
  • The seasonality of hydrologic data is a very important factor from the viewpoint of water resources management, and variations in seasonality are closely related to various areas such as dam operation, flood control, and irrigation water management. So far, however, the seasonality of hydrologic data has been assessed mainly from the water-use perspective, and studies evaluating the seasonality of hydrologic extremes from the flood-control perspective remain insufficient. This is a problem that arises because the annual maxima series, i.e., block maxima, is used as the methodology for analyzing hydrologic extremes. If a partial duration series is used instead, the data set is enlarged and the seasonality of hydrologic extremes can be evaluated naturally. This procedure is called POT (peak over threshold) analysis, a method in which all data above a given threshold are used for frequency analysis; with the conventional approach the annual maxima generally occur only in July and August, whereas with POT analysis the data for frequency analysis span several months. To connect this to frequency analysis, a methodology is needed that can model the seasonality as nonstationarity. For this purpose, the present study proposes the concept of a nonstationary frequency analysis technique that can account for seasonality and develops it into a model. A Fourier series is used to link the parameters of the GEV or Gumbel distribution with the seasonality, and the parameters are optimized through a Bayesian approach. This makes it possible to analyze the seasonal distribution of design rainfall quantitatively and to characterize the distribution of future extreme rainfall probabilistically. The proposed method is applied to hourly rainfall data from Korea and abroad to evaluate its suitability and applicability.
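
To make the linkage concrete, the sketch below shows one way a seasonal GEV can be written with a first-order Fourier series in the location parameter and fitted by plain maximum likelihood. The paper optimizes the parameters with a Bayesian scheme; the synthetic daily peaks, the single harmonic, and the Nelder-Mead fit here are simplifying assumptions.

```python
# Hedged sketch of a seasonal (nonstationary) GEV: location parameter written as
# a first-order Fourier series of the day of year, fitted by MLE for brevity
# (the paper uses a Bayesian scheme). Data and settings are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(4)
day = rng.integers(1, 366, size=400)                    # day of year of each peak
true_mu = 40.0 + 25.0 * np.sin(2 * np.pi * (day - 200) / 365.25)
rain = genextreme.rvs(c=-0.1, loc=true_mu, scale=10.0, random_state=rng)

def neg_log_lik(theta):
    mu0, a1, b1, log_sigma, shape = theta
    mu = (mu0 + a1 * np.sin(2 * np.pi * day / 365.25)
              + b1 * np.cos(2 * np.pi * day / 365.25))  # seasonal location
    return -np.sum(genextreme.logpdf(rain, c=shape, loc=mu, scale=np.exp(log_sigma)))

start = np.array([rain.mean(), 0.0, 0.0, np.log(rain.std()), -0.1])
fit = minimize(neg_log_lik, start, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
mu0, a1, b1, log_sigma, shape = fit.x
print("seasonal amplitude of the GEV location:", np.hypot(a1, b1))
```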

Construction of Bivariate Probability Distribution with Nonstationary GEV/Gumbel Marginal Distributions for Rainfall Data (비정상성 GEV/Gumbel 주변분포를 이용한 강우자료 이변량 확률분포형 구축)

  • Joo, Kyungwon;Choi, Soyung;Kim, Hanbeen;Heo, Jun-Haeng
    • Proceedings of the Korea Water Resources Association Conference / 2016.05a / pp.41-41 / 2016
  • Frequency analysis using multivariate probability models has recently been applied to hydrologic data and studied in various ways, and among multivariate models the copula model, which places no restrictions on the marginal distributions, is being actively investigated in many fields. For the rainfall data, instead of the block maxima approach conventionally used for univariate frequency analysis, rainfall events are extracted with a minimum inter-event time and used as the sample. In addition, to cope with changes in rainfall caused by climate change, much research has been devoted to nonstationary probability distributions such as the Generalized Extreme Value (GEV) and Gumbel distributions. In this study, a bivariate probability model was constructed with Archimedean copulas, and stationary and nonstationary distributions were applied as the marginal distributions. The model parameters were estimated with the inference function for margins method, and stationary/nonstationary GEV and Gumbel models were used as the marginals. As a result, sites showing stationary or nonstationary behavior were distinguished, and a bivariate probability distribution using the corresponding stationary or nonstationary marginal distribution was obtained for each site.
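
The two-stage construction (marginals first, dependence second) can be sketched as below. The paper fits stationary/nonstationary GEV and Gumbel marginals and estimates the Archimedean copula with the inference-function-for-margins likelihood; this sketch uses stationary GEV marginals and the simpler Kendall's-tau inversion theta = 1/(1 - tau) for a Gumbel copula, and the correlated synthetic depth/duration sample is purely illustrative.

```python
# Hedged sketch of a two-stage copula construction: GEV marginals, then a
# Gumbel (Archimedean) copula parameter from Kendall's tau. The paper's IFM
# likelihood and nonstationary marginals are replaced by simpler stand-ins.
import numpy as np
from scipy.stats import genextreme, kendalltau

rng = np.random.default_rng(5)
z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=300)
depth = 30.0 + 12.0 * np.exp(0.5 * z[:, 0])       # synthetic event rainfall depth
duration = 6.0 + 3.0 * np.exp(0.5 * z[:, 1])      # synthetic event duration

# Stage 1: marginal GEV fits, then probability-integral transform to uniforms.
p_depth = genextreme.cdf(depth, *genextreme.fit(depth))
p_duration = genextreme.cdf(duration, *genextreme.fit(duration))

# Stage 2: Gumbel copula parameter from Kendall's tau of the scores.
tau, _ = kendalltau(p_depth, p_duration)
theta = 1.0 / (1.0 - tau)

def gumbel_copula(u, v, theta):
    """Joint non-exceedance probability C(u, v) under the Gumbel copula."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

print(f"tau = {tau:.2f}, theta = {theta:.2f}, "
      f"C(0.9, 0.9) = {gumbel_copula(0.9, 0.9, theta):.3f}")
```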

Comparison of log-logistic and generalized extreme value distributions for predicted return level of earthquake (지진 재현수준 예측에 대한 로그-로지스틱 분포와 일반화 극단값 분포의 비교)

  • Ko, Nak Gyeong;Ha, Il Do;Jang, Dae Heung
    • The Korean Journal of Applied Statistics / v.33 no.1 / pp.107-114 / 2020
  • Extreme value distributions have often been used for the analysis (e.g., prediction of the return level) of data observed from natural disasters. By extreme value theory, block maxima asymptotically follow the generalized extreme value distribution as the sample size increases; however, this may not hold for small samples. To address this problem, this paper proposes the use of the log-logistic (LLG) distribution, whose validity is evaluated through goodness-of-fit tests and model selection. The proposed method is illustrated with annual maximum earthquake magnitudes from China. We present the predicted return level and confidence interval for each return period using the LLG distribution.
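
A hedged sketch of this comparison is shown below: fit both the GEV and the log-logistic distribution (scipy's `fisk`) to a set of annual maxima, check each fit with a Kolmogorov-Smirnov test, and read the T-year return level off the (1 - 1/T) quantile. The synthetic magnitudes are a stand-in, not the Chinese earthquake catalogue analysed in the paper, and the paper additionally reports confidence intervals.

```python
# Hedged sketch: GEV versus log-logistic (Fisk) fits to annual maxima, with a
# KS check and T-year return levels as the (1 - 1/T) quantiles. Data synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
annual_max = stats.genextreme.rvs(c=0.1, loc=5.5, scale=0.4, size=40, random_state=rng)

for name, dist in [("genextreme", stats.genextreme), ("fisk", stats.fisk)]:
    params = dist.fit(annual_max)                      # maximum-likelihood fit
    ks = stats.kstest(annual_max, name, args=params)   # goodness-of-fit check
    for T in (10, 50, 100):
        level = dist.ppf(1.0 - 1.0 / T, *params)       # T-year return level
        print(f"{name:10s} T={T:3d}  return level={level:.2f}  KS p={ks.pvalue:.2f}")
```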