• Title/Abstract/Keyword: Gaussian distribution model

Search results: 350 (processing time: 0.05 seconds)

CONVERGENCE OF WEIGHTED U-EMPIRICAL PROCESSES

  • Park, Hyo-Il;Na, Jong-Hwa
    • Journal of the Korean Statistical Society
    • /
    • v.33 no.4
    • /
    • pp.353-365
    • /
    • 2004
  • In this paper, we define the weighted U-empirical process for the simple linear model and show its weak convergence to a Gaussian process under some conditions. We then illustrate the use of our result with examples. In the appendix, we derive the variance of the weighted U-empirical distribution function.
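
For a degree-2 kernel, the (weighted) U-empirical distribution function counts kernel values over pairs of observations. Below is a minimal numerical sketch, with a Wilcoxon-type kernel h(a, b) = (a + b)/2 and uniform weights as illustrative assumptions (the paper's kernel and weighting scheme may differ):

```python
import itertools
import numpy as np

def u_empirical_df(x, t, kernel=lambda a, b: (a + b) / 2.0, weights=None):
    """Weighted U-empirical distribution function for a degree-2 kernel:
    H_n(t) = sum over pairs i < j of w_ij * 1{h(X_i, X_j) <= t},
    with uniform weights 1 / C(n, 2) as the default."""
    pairs = list(itertools.combinations(range(len(x)), 2))
    if weights is None:
        weights = np.full(len(pairs), 1.0 / len(pairs))
    vals = np.array([kernel(x[i], x[j]) for i, j in pairs])
    return float(np.sum(weights * (vals <= t)))

rng = np.random.default_rng(0)
x = rng.normal(size=50)
print(u_empirical_df(x, 0.0))   # near 0.5 for a symmetric sample
```

Non-uniform `weights` summing to one over the pairs give the weighted version studied in the paper.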

Optimal Var Allocation in system planning by stochastic Linear Programming (확률 선형 계획법에 의한 최적 Var 배분 계획에 관한 연구)

  • Song, Kil-Yeong;Lee, Hee-Yeong
    • Proceedings of the KIEE Conference
    • /
    • 1988.07a
    • /
    • pp.863-865
    • /
    • 1988
  • This paper presents an optimal Var allocation algorithm for minimizing transmission line losses and improving the voltage profile in a given system. Nodal input data are modeled as Gaussian distributions with their mean values and variances. A stochastic linear programming technique based on the chance-constrained method is applied to solve the Var allocation problem with probabilistic constraints. Test results on a 6-bus model system show that the voltage distribution of the load buses is improved and that power losses are reduced relative to the system before Var allocation.
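
The chance-constrained step can be sketched as follows: with Gaussian nodal data, a probabilistic constraint P(a'x ≥ b) ≥ α has the deterministic equivalent a'x ≥ μ_b + z_α·σ_b, which an ordinary LP solver can then handle. The two-source costs and demand figures below are hypothetical, not from the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

def deterministic_rhs(mu_b, sigma_b, alpha):
    """Deterministic equivalent RHS of P(a'x >= b) >= alpha, b ~ N(mu_b, sigma_b^2)."""
    return mu_b + norm.ppf(alpha) * sigma_b

# Hypothetical 2-source Var allocation: meet a nodal reactive demand with
# mean 1.0 p.u. and std 0.1 p.u. at 95% probability, at minimum cost.
rhs = deterministic_rhs(1.0, 0.1, 0.95)
res = linprog(c=[1.0, 1.2],          # per-unit costs of the two Var sources
              A_ub=[[-1.0, -1.0]],   # encodes x1 + x2 >= rhs
              b_ub=[-rhs],
              bounds=[(0.0, None), (0.0, None)])
```

Here the cheaper source supplies the whole tightened demand, so the optimal cost equals `rhs`.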


Numerical studies on approximate option prices (근사적 옵션 가격의 수치적 비교)

  • Yoon, Jeongyoen;Seung, Jisu;Song, Seongjoo
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.2
    • /
    • pp.243-257
    • /
    • 2017
  • In this paper, we compare several methods to approximate option prices: the Edgeworth expansion, A-type and C-type Gram-Charlier expansions, a method using the normal inverse Gaussian (NIG) distribution, and an asymptotic method using nonlinear regression. We used two different types of approximation. The first (called the RNM method) approximates the risk-neutral probability density function of the log return of the underlying asset and computes the option price. The second (called the OPTIM method) finds the approximate option pricing formula and then estimates parameters to compute the option price. For simulation experiments, we generated underlying asset data from the Heston model and the NIG model, a well-known stochastic volatility model and a well-known Levy model, respectively. We also applied the above approximation methods to KOSPI200 call option prices as a real data application. We found that the OPTIM method shows better performance on average than the RNM method. Among the OPTIM methods, the A-type Gram-Charlier expansion and the asymptotic method using nonlinear regression showed relatively better performance; among the RNM methods, the method using the NIG distribution was relatively better than the others.
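
As one concrete ingredient, the A-type Gram-Charlier expansion adjusts the standard normal density with Hermite-polynomial terms in the skewness and excess kurtosis; a short sketch, where the skewness/kurtosis values are illustrative rather than fitted:

```python
import numpy as np
from scipy.stats import norm

def gram_charlier_a(x, skew, exkurt):
    """A-type Gram-Charlier density: the standard normal density corrected
    by Hermite terms He3 (skewness) and He4 (excess kurtosis)."""
    he3 = x**3 - 3 * x
    he4 = x**4 - 6 * x**2 + 3
    return norm.pdf(x) * (1 + skew / 6 * he3 + exkurt / 24 * he4)

# Under the RNM approach, such a density would serve as the approximate
# risk-neutral density of the standardized log return.
x = np.linspace(-6, 6, 4001)
dens = gram_charlier_a(x, skew=-0.3, exkurt=0.5)
mass = float(np.sum(dens) * (x[1] - x[0]))   # should stay ~1
```

Note that Gram-Charlier densities can turn negative in the tails for large cumulant corrections, a known drawback of the expansion.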

Bayesian Method Recognition Rates Improvement using HMM Vocabulary Recognition Model Optimization (HMM 어휘 인식 모델 최적화를 이용한 베이시안 기법 인식률 향상)

  • Oh, Sang Yeon
    • Journal of Digital Convergence
    • /
    • v.12 no.7
    • /
    • pp.273-278
    • /
    • 2014
  • In vocabulary recognition using an HMM (Hidden Markov Model), a model with discrete observation probability distributions offers the advantage of low computational complexity but a relatively low recognition rate. To improve on this, we propose a Bayesian optimization of the HMM. In this paper, a Gaussian mixture model with posterior and prior distributions provides the model for Bayesian optimization of vocabulary recognition. Applying the proposed method achieved a recognition rate of 97.9% in vocabulary recognition.
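
For reference, the likelihood of the discrete-observation HMM that the abstract starts from is computed with the forward algorithm. A minimal sketch with hypothetical two-state parameters (the paper's Gaussian-mixture emissions and Bayesian optimization are not reproduced here):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete-output HMM,
    via the scaled forward algorithm.
    pi: (S,) initial state probabilities
    A:  (S, S) transitions, A[i, j] = P(state j | state i)
    B:  (S, V) emissions, B[i, k] = P(symbol k | state i)"""
    pi, A, B = map(np.asarray, (pi, A, B))
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()          # scaling avoids numerical underflow
        logp += np.log(c)
        alpha /= c
    return logp

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]     # hypothetical state transitions
B = [[0.9, 0.1], [0.2, 0.8]]     # hypothetical discrete emissions
print(forward_loglik([0, 1, 1, 0], pi, A, B))
```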

Color Image Segmentation Based on Morphological Operation and a Gaussian Mixture Model (모폴로지 연산과 가우시안 혼합 모형에 기반한 컬러 영상 분할)

  • Lee Myung-Eun;Park Soon-Young;Cho Wan-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.3 s.309
    • /
    • pp.84-91
    • /
    • 2006
  • In this paper, we present a new segmentation algorithm for color images based on mathematical morphology and a Gaussian mixture model (GMM). We use morphological operations to determine the number of components in the mixture model and to detect the mode of each mixture component. Next, we adopt the GMM to represent the probability distribution of color feature vectors and use the deterministic annealing expectation maximization (DAEM) algorithm to estimate the parameters of the GMM that represents the multi-colored objects statistically. Finally, we segment the color image using the posterior probability of each pixel computed from the GMM. The experimental results show that the morphological operations are effective for determining the number of components and the initial mode of each component in the mixture model. They also show that the proposed DAEM provides a globally optimal solution for parameter estimation in the mixture model, and that natural color images are segmented efficiently using the GMM with parameters estimated by the morphological operations and the DAEM algorithm.
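
The EM portion of this pipeline can be sketched in one dimension, with user-supplied initial means standing in for the morphologically detected modes and plain EM standing in for DAEM (both simplifying assumptions):

```python
import numpy as np

def em_gmm(x, means, stds, weights, iters=50):
    """EM for a 1-D Gaussian mixture, then segmentation by posterior probability."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    weights = np.asarray(weights, dtype=float)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel
        dens = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) \
               / (stds * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate component parameters
        nk = resp.sum(axis=0)
        means = (resp * x[:, None]).sum(axis=0) / nk
        stds = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk)
        weights = nk / len(x)
    labels = resp.argmax(axis=1)   # segment by maximum posterior probability
    return means, stds, weights, labels

# Synthetic "grayscale" pixel intensities from two populations:
rng = np.random.default_rng(1)
pix = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.7, 0.05, 500)])
m, s, w, labels = em_gmm(pix, means=[0.3, 0.6], stds=[0.1, 0.1], weights=[0.5, 0.5])
```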

Statistical Model for Emotional Video Shot Characterization (비디오 셧의 감정 관련 특징에 대한 통계적 모델링)

  • 박현재;강행봉
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1200-1208
    • /
    • 2003
  • Affective computing plays an important role in intelligent Human Computer Interaction (HCI). To detect emotional events, it is desirable to construct a computing model for extracting emotion-related features from video. In this paper, we propose a statistical model based on the probabilistic distribution of low-level features in video shots. The proposed method extracts low-level features from video shots and then forms a GMM (Gaussian Mixture Model) from them to detect emotional shots. As low-level features, we use color, camera motion, and the sequence of shot lengths. The features are modeled as a GMM using the EM (Expectation Maximization) algorithm, and the relations between time and emotions are estimated by MLE (Maximum Likelihood Estimation). Finally, the two statistical models are combined within a Bayesian framework to detect emotional events in video.

IMPLEMENTATION OF DATA ASSIMILATION METHODOLOGY FOR PHYSICAL MODEL UNCERTAINTY EVALUATION USING POST-CHF EXPERIMENTAL DATA

  • Heo, Jaeseok;Lee, Seung-Wook;Kim, Kyung Doo
    • Nuclear Engineering and Technology
    • /
    • v.46 no.5
    • /
    • pp.619-632
    • /
    • 2014
  • The Best Estimate Plus Uncertainty (BEPU) method has been widely used to evaluate the uncertainty of a best-estimate thermal hydraulic system code against a figure of merit. This uncertainty is typically evaluated based on the physical model's uncertainties determined by expert judgment. This paper introduces the application of data assimilation methodology to determine the uncertainty bands of the physical models, e.g., the mean value and standard deviation of the parameters, based upon the statistical approach rather than expert judgment. Data assimilation suggests a mathematical methodology for the best estimate bias and the uncertainties of the physical models which optimize the system response following the calibration of model parameters and responses. The mathematical approaches include deterministic and probabilistic methods of data assimilation to solve both linear and nonlinear problems with the a posteriori distribution of parameters derived based on Bayes' theorem. The inverse problem was solved analytically to obtain the mean value and standard deviation of the parameters assuming Gaussian distributions for the parameters and responses, and a sampling method was utilized to illustrate the non-Gaussian a posteriori distributions of parameters. SPACE is used to demonstrate the data assimilation method by determining the bias and the uncertainty bands of the physical models employing Bennett's heated tube test data and Becker's post critical heat flux experimental data. Based on the results of the data assimilation process, the major sources of the modeling uncertainties were identified for further model development.
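
For the linear-Gaussian case the abstract mentions, the a posteriori mean and covariance of the parameters have the standard closed form below; the sensitivity matrix and numeric values are hypothetical stand-ins for the calibrated physical-model parameters:

```python
import numpy as np

def gaussian_update(x_prior, P, H, y, R):
    """Analytic posterior mean/covariance for a linear-Gaussian inverse problem:
    parameters x ~ N(x_prior, P), responses y = H x + noise, noise ~ N(0, R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # gain weighting data vs. prior
    x_post = x_prior + K @ (y - H @ x_prior)  # posterior (calibrated) mean
    P_post = (np.eye(len(x_prior)) - K @ H) @ P
    return x_post, P_post

x0 = np.array([1.0, 0.5])            # prior model-parameter means
P = np.diag([0.2**2, 0.1**2])        # prior (expert-judgment) variances
H = np.array([[1.0, 2.0]])           # hypothetical linearized response sensitivity
y = np.array([2.3])                  # measured response
R = np.array([[0.05**2]])            # measurement-error variance
x_post, P_post = gaussian_update(x0, P, H, y, R)
```

The update both shifts the parameter means toward the data and shrinks their uncertainty bands, which is the effect the data assimilation step is after.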

Gaussian noise addition approaches for ensemble optimal interpolation implementation in a distributed hydrological model

  • Manoj Khaniya;Yasuto Tachikawa;Kodai Yamamoto;Takahiro Sayama;Sunmin Kim
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.25-25
    • /
    • 2023
  • The ensemble optimal interpolation (EnOI) scheme is a sub-optimal alternative to the ensemble Kalman filter (EnKF) with a reduced computational demand making it potentially more suitable for operational applications. Since only one model is integrated forward instead of an ensemble of model realizations, online estimation of the background error covariance matrix is not possible in the EnOI scheme. In this study, we investigate two Gaussian noise based ensemble generation strategies to produce dynamic covariance matrices for assimilation of water level observations into a distributed hydrological model. In the first approach, spatially correlated noise, sampled from a normal distribution with a fixed fractional error parameter (which controls its standard deviation), is added to the model forecast state vector to prepare the ensembles. In the second method, we use an adaptive error estimation technique based on the innovation diagnostics to estimate this error parameter within the assimilation framework. The results from a real and a set of synthetic experiments indicate that the EnOI scheme can provide better results when an optimal EnKF is not identified, but performs worse than the ensemble filter when the true error characteristics are known. Furthermore, while the adaptive approach is able to reduce the sensitivity to the fractional error parameter affecting the first (non-adaptive) approach, results are usually worse at ungauged locations with the former.
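
The first (non-adaptive) approach can be sketched as follows: Gaussian noise scaled by a fixed fractional error parameter generates an ensemble around the single forecast, and the ensemble anomalies supply the background covariance for the update. State, observation, and error values below are hypothetical:

```python
import numpy as np

def enoi_analysis(x, ensemble, H, y, R):
    """EnOI update: the noise-generated ensemble supplies the background
    covariance, but only the single forecast state x is updated."""
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies, (n, m)
    Pb = A @ A.T / (A.shape[1] - 1)                       # background covariance
    S = H @ Pb @ H.T + R
    K = Pb @ H.T @ np.linalg.inv(S)
    return x + K @ (y - H @ x)

rng = np.random.default_rng(2)
x = np.array([1.0, 2.0, 3.0])            # single model forecast state
frac = 0.1                                # fractional error parameter (assumed)
noise = rng.normal(0.0, frac * np.abs(x)[:, None], size=(3, 20))
ens = x[:, None] + noise                  # first (non-adaptive) approach
H = np.array([[0.0, 1.0, 0.0]])           # observe the second state variable
y = np.array([2.4])
R = np.array([[0.01]])
xa = enoi_analysis(x, ens, H, y, R)
```

The adaptive variant would additionally re-estimate `frac` from innovation statistics inside the assimilation loop.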


Value at Risk of portfolios using copulas

  • Byun, Kiwoong;Song, Seongjoo
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.1
    • /
    • pp.59-79
    • /
    • 2021
  • Value at Risk (VaR) is one of the most common risk management tools in finance. Since a portfolio of several assets, rather than one asset portfolio, is advantageous in the risk diversification for investment, VaR for a portfolio of two or more assets is often used. In such cases, multivariate distributions of asset returns are considered to calculate VaR of the corresponding portfolio. Copulas are one way of generating a multivariate distribution by identifying the dependence structure of asset returns while allowing many different marginal distributions. However, they are used mainly for bivariate distributions and are not widely used in modeling joint distributions for many variables in finance. In this study, we would like to examine the performance of various copulas for high dimensional data and several different dependence structures. This paper compares copulas such as elliptical, vine, and hierarchical copulas in computing the VaR of portfolios to find appropriate copula functions in various dependence structures among asset return distributions. In the simulation studies under various dependence structures and real data analysis, the hierarchical Clayton copula shows the best performance in the VaR calculation using four assets. For marginal distributions of single asset returns, normal inverse Gaussian distribution was used to model asset return distributions, which are generally high-peaked and heavy-tailed.
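
The copula pipeline for VaR can be sketched with a Gaussian copula; Student-t marginals are used here as a simple heavy-tailed stand-in for the NIG fits in the paper, and the correlation matrix and weights are illustrative:

```python
import numpy as np
from scipy.stats import norm, t

def gaussian_copula_var(corr, df, weights, alpha=0.99, n=50_000, seed=0):
    """Monte Carlo portfolio VaR: a Gaussian copula supplies the dependence
    structure; Student-t marginals supply heavy-tailed asset returns."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(weights))) @ L.T   # correlated normals
    u = norm.cdf(z)                                    # copula layer: uniforms
    returns = t.ppf(u, df)                             # marginal transform
    port = returns @ weights
    return -np.quantile(port, 1 - alpha)               # loss at level alpha

corr = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)         # four equicorrelated assets
w = np.full(4, 0.25)
var99 = gaussian_copula_var(corr, df=5, weights=w, alpha=0.99)
var95 = gaussian_copula_var(corr, df=5, weights=w, alpha=0.95)
```

The vine and hierarchical copulas compared in the paper replace the single correlation matrix with more flexible pairwise or nested dependence structures.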

A variational Bayes method for pharmacokinetic model (약물동태학 모형에 대한 변분 베이즈 방법)

  • Park, Sun;Jo, Seongil;Lee, Woojoo
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.1
    • /
    • pp.9-23
    • /
    • 2021
  • In this paper, we introduce a variational Bayes method that approximates posterior distributions with the mean-field method. In particular, we introduce automatic differentiation variational inference (ADVI), which approximates joint posterior distributions using a product of Gaussian distributions after transforming the parameters into real coordinate space, and then apply it to pharmacokinetic models, which describe the time course of drug absorption, distribution, metabolism, and excretion. We analyze real data sets using ADVI and compare the results with those based on Markov chain Monte Carlo. We implement the algorithms using Stan.
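
The core ADVI recipe (transform the constrained parameter to real coordinates, posit a mean-field Gaussian, and ascend a reparameterized Monte Carlo estimate of the ELBO gradient) can be sketched on a toy Poisson model rather than a pharmacokinetic one; everything below is an illustrative assumption, not the paper's setup:

```python
import numpy as np

# Toy model: Poisson(lambda) counts with an Exponential(1) prior on lambda > 0.
# ADVI works with theta = log(lambda) and fits q(theta) = N(mu, s^2).
rng = np.random.default_rng(3)
y = rng.poisson(4.0, size=50)
S, n = y.sum(), len(y)

# Log joint in theta-space (prior + likelihood + log-Jacobian, constants dropped):
# f(theta) = (S + 1) * theta - (n + 1) * exp(theta)
def fprime(theta):
    return (S + 1) - (n + 1) * np.exp(theta)

mu, log_s = 0.0, -1.0
lr = 0.005
for _ in range(5000):
    eps = rng.standard_normal(32)            # reparameterization: theta = mu + s*eps
    theta = mu + np.exp(log_s) * eps
    g = fprime(theta)
    mu += lr * g.mean()                                        # dELBO/dmu
    log_s += lr * ((g * eps * np.exp(log_s)).mean() + 1.0)     # dELBO/dlog_s
lam_mean = float(np.exp(mu + np.exp(log_s) ** 2 / 2))  # E_q[lambda] under q
```

Here the exact posterior is Gamma(S + 1, n + 1), so the variational mean of lambda can be checked against (S + 1)/(n + 1); in practice, Stan's ADVI automates the transform, gradients, and optimization.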