• Title/Summary/Keyword: statistical models


Evaluation of One-particle Stochastic Lagrangian Models in a Horizontally-homogeneous, Neutrally-stratified Atmospheric Surface Layer (Evaluation of One-particle Lagrangian Models in an Idealized Neutral Atmospheric Boundary Layer)

  • 김석철
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.19 no.4
    • /
    • pp.397-414
    • /
    • 2003
  • The performance of one-particle stochastic Lagrangian models for passive tracer dispersion is evaluated against measurements in a horizontally-homogeneous, neutrally-stratified atmospheric surface layer. State-of-the-art models as well as classical Langevin models, all in the class of well-mixed models, are numerically implemented for an inter-model comparison study. Model results (far-downstream asymptotic behavior and vertical profiles of the time-averaged concentrations, concentration fluxes, and concentration fluctuations) are compared with the reported measurements. The results are: 1) the far-downstream asymptotic trends of all models except the Reynolds model agree well with Garger and Zhukov's measurements; 2) profiles of the average concentrations and vertical concentration fluxes from all models except the Reynolds model show good agreement with Raupach and Legg's experimental data; the Reynolds model produces the horizontal concentration flux profiles closest to the measurements, whereas all other models fail severely; 3) with temporally correlated emissions, one-particle models seem to simulate fairly well the concentration fluctuations induced by plume meandering, once the statistical random noise is removed from the calculated concentration fluctuations. An analytical expression for the statistical random noise of a one-particle model is presented. This study finds no indication that recent models with more elaborate theoretical backgrounds are superior to the simple Langevin model in either accuracy or numerical performance.
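
For concreteness, a classical Langevin one-particle step for vertical dispersion in a neutral surface layer can be sketched as follows. The surface-layer scalings (sigma_w = 1.3 u_*, T_L = 0.5 z / sigma_w) and all parameter values are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Minimal sketch of a classical (well-mixed) Langevin one-particle model
# for vertical dispersion in a neutral surface layer. Parameter choices
# (sigma_w = 1.3*u_star, T_L = 0.5*z/sigma_w) are illustrative assumptions.
def simulate_particle(z0=1.0, u_star=0.3, dt=0.01, n_steps=5000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    sigma_w = 1.3 * u_star                     # vertical velocity std dev
    z, w = z0, sigma_w * rng.standard_normal()
    heights = np.empty(n_steps)
    for i in range(n_steps):
        T_L = 0.5 * max(z, 0.1) / sigma_w      # Lagrangian time scale
        # Langevin equation: dw = -w/T_L dt + sqrt(2 sigma_w^2 / T_L) dW
        dW = np.sqrt(dt) * rng.standard_normal()
        w += -w / T_L * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
        z += w * dt
        z = abs(z)                             # perfect reflection at the ground
        heights[i] = z
    return heights

# Releasing many particles and binning their heights at a fixed travel time
# gives the vertical concentration profile that is compared with measurements.
final_heights = np.array([simulate_particle(rng=np.random.default_rng(s))[-1]
                          for s in range(100)])
print(f"mean particle height after 50 s: {final_heights.mean():.1f} m")
```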

Diagnostics for Heteroscedasticity in Mixed Linear Models

  • Ahn, Chul-Hwan
    • Journal of the Korean Statistical Society
    • /
    • v.19 no.2
    • /
    • pp.171-175
    • /
    • 1990
  • A diagnostic test for detecting nonconstant variance in mixed linear models, based on the score statistic, is derived through the technique of model expansion and compared to the log-likelihood ratio test.
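
For flavor, the sketch below applies a widely available score-type test for nonconstant variance (the Breusch-Pagan LM test) to an ordinary linear model with simulated data; the paper's statistic is the analogous score test derived for mixed linear models, which this sketch does not reproduce.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Illustration of a score-type (LM) test for nonconstant variance in an
# ordinary linear model. The data are simulated so that the error variance
# grows with x, which the test should detect.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + 1.5 * x)   # heteroscedastic errors

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols_fit.resid, X)
print(f"score (LM) statistic = {lm_stat:.2f}, p-value = {lm_pvalue:.4f}")
```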


Predictive analysis in insurance: An application of generalized linear mixed models

  • Rosy Oh;Nayoung Woo;Jae Keun Yoo;Jae Youn Ahn
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.5
    • /
    • pp.437-451
    • /
    • 2023
  • Generalized linear models and generalized linear mixed models (GLMMs) are fundamental tools for predictive analyses. In insurance, GLMMs are particularly important because they provide not only a tool for prediction but also a theoretical justification for setting premiums. Although thousands of resources are available for introducing GLMMs as a classical and fundamental tool in statistical analysis, few resources seem to be available for the insurance industry. This study targets insurance professionals already familiar with basic actuarial mathematics and explains GLMMs and their linkage with classical actuarial pricing tools, such as the Bühlmann premium method. The focus of the study is mainly on the modeling aspect of GLMMs and their application to pricing, while avoiding technical issues related to statistical estimation, which can be handled automatically by most statistical software.
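
As an illustration of the Bühlmann premium mentioned in this abstract, here is a minimal sketch with simulated claim data. The data layout (policyholders by years) and all numerical settings are assumptions made for the example, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the Buhlmann credibility premium for policyholder claim
# data (rows = policyholders, columns = years). The data are simulated and
# purely illustrative.
rng = np.random.default_rng(1)
n_policy, n_years = 50, 5
theta = rng.gamma(shape=2.0, scale=0.5, size=n_policy)         # latent risk levels
claims = rng.poisson(lam=theta[:, None], size=(n_policy, n_years))

xbar_i = claims.mean(axis=1)                     # per-policyholder sample means
xbar = claims.mean()                             # overall mean
s2 = claims.var(axis=1, ddof=1).mean()           # estimated expected process variance
a = max(xbar_i.var(ddof=1) - s2 / n_years, 0.0)  # variance of hypothetical means
k = s2 / a if a > 0 else np.inf
Z = n_years / (n_years + k)                      # credibility weight

buhlmann_premium = Z * xbar_i + (1 - Z) * xbar   # per-policyholder premium
print(f"credibility weight Z = {Z:.3f}")
```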

Statistical Inference in Non-Identifiable and Singular Statistical Models

  • Amari, Shun-ichi;Ozeki, Tomoko
    • Journal of the Korean Statistical Society
    • /
    • v.30 no.2
    • /
    • pp.179-192
    • /
    • 2001
  • When a statistical model has a hierarchical structure, such as multilayer perceptrons in neural networks or a Gaussian mixture density representation, the model includes distributions with unidentifiable parameters when the structure becomes redundant. Since the exact structure is unknown, we need to carry out statistical estimation or learning of parameters in such a model. From the geometrical point of view, distributions specified by unidentifiable parameters become singular points in the parameter space. The problem has been noted in many statistical models, and the strange behaviors of the likelihood ratio statistic when the null hypothesis is at a singular point have been analyzed. The present paper studies the asymptotic behaviors of the maximum likelihood estimator and the Bayesian predictive estimator by using a simple cone model, and shows that they are completely different from those in regular statistical models, where the Cramér-Rao paradigm holds. At singularities, the Fisher information metric degenerates, implying that the Cramér-Rao paradigm no longer holds and that classical model selection theory such as AIC and MDL cannot be applied. This paper is a first step toward establishing a new theory for analyzing the accuracy of estimation or learning around singularities.
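
The degeneracy described here can be made concrete with a standard example that is simpler than the paper's cone model; the one-hidden-unit regression model below is an illustration chosen here, not taken from the paper.

```latex
% A simple stand-in for the paper's cone model: a one-hidden-unit regression
% model whose Fisher information degenerates on the submodel a = 0.
\[
  y = a\,\tanh(bx) + \varepsilon, \qquad \varepsilon \sim N(0,1),
\]
\[
  \frac{\partial}{\partial a}\log p(y\mid x) = \bigl(y - a\tanh(bx)\bigr)\tanh(bx),
  \qquad
  \frac{\partial}{\partial b}\log p(y\mid x) = \bigl(y - a\tanh(bx)\bigr)\,a\,x\,\operatorname{sech}^2(bx).
\]
At $a = 0$ the score with respect to $b$ vanishes identically, so the Fisher
information matrix
\[
  I(0, b) =
  \begin{pmatrix}
    \mathbb{E}\bigl[\tanh^2(bX)\bigr] & 0\\[2pt]
    0 & 0
  \end{pmatrix}
\]
is singular: $b$ is unidentifiable on the submodel $a = 0$, and the
Cram\'er--Rao bound and the usual AIC/MDL asymptotics no longer apply.
```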


A numerical study on group quantile regression models

  • Kim, Doyoen;Jung, Yoonsuh
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.4
    • /
    • pp.359-370
    • /
    • 2019
  • Grouping structures in covariates are often ignored in regression models. Recent statistical developments considering grouping structure show clear advantages; however, reflecting the grouping structure in quantile regression models has been relatively rare in the literature. The grouping structure is usually handled by employing a group penalty. In this work, we extend the idea of a group penalty to quantile regression models. The grouping structure is assumed to be known, which is commonly true in some cases. For example, the group of dummy variables created from one categorical variable can be regarded as one group of covariates. We examine group quantile regression models via two real data analyses and simulation studies, which reveal the beneficial performance of group quantile regression models over their non-group counterparts when a grouping structure exists among the variables.
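
A minimal sketch of what a group-penalized quantile regression can look like is given below. It uses a generic convex-optimization formulation (via cvxpy) with an illustrative grouping, penalty level, and simulated data, and is not the authors' implementation or tuning procedure.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch of a group-penalized quantile regression at quantile tau:
# minimize the check loss plus a group-lasso penalty over known groups.
rng = np.random.default_rng(2)
n, tau, lam = 200, 0.5, 1.0
groups = [slice(0, 3), slice(3, 6), slice(6, 10)]    # known grouping of columns
X = rng.standard_normal((n, 10))
beta_true = np.r_[1.0, -1.0, 0.5, np.zeros(7)]       # only the first group active
y = X @ beta_true + rng.standard_normal(n)

beta = cp.Variable(10)
b0 = cp.Variable()
r = y - X @ beta - b0
check_loss = cp.sum(cp.maximum(tau * r, (tau - 1) * r))      # quantile loss
group_penalty = lam * sum(cp.norm(beta[g], 2) for g in groups)
cp.Problem(cp.Minimize(check_loss / n + group_penalty)).solve()
print(np.round(beta.value, 2))   # coefficients of inactive groups shrink toward zero
```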

A New Methodology for Software Reliability based on Statistical Modeling

  • Avinash S;Y. Srinivas;P. Annan Naidu
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.157-161
    • /
    • 2023
  • Reliability is one of the computable quality features of software. To assess reliability, software reliability growth models (SRGMs) are used at different test times, based on statistical learning models. Traditional time-based SRGMs may not be adequate in all situations, and such models cannot recognize errors in small and medium-sized applications. Numerous traditional reliability measures are used to test software errors during application development and testing. In the software testing and maintenance phase, however, new errors are taken into consideration in real time in order to determine the reliability estimate. In this article, we suggest the Weibull model as a computational approach to address the problem of software reliability modeling. In the proposed approach, a new distribution model is suggested to improve the reliability estimation method. We evaluate the developed model and compare its efficiency with other popular software reliability growth models from the research literature. Our assessment results show that the proposed model outperforms the S-shaped Yamada, Generalized Poisson, and NHPP models.
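
A minimal sketch of the core fitting step is shown below, assuming a Weibull-type mean value function m(t) = a(1 - exp(-(t/b)^c)) and invented cumulative fault counts; it illustrates the general approach rather than the paper's specific model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a Weibull-type mean value function
#   m(t) = a * (1 - exp(-(t/b)**c))
# to cumulative detected faults. The fault counts below are invented purely
# to illustrate the fitting step.
def weibull_mvf(t, a, b, c):
    return a * (1.0 - np.exp(-(t / b) ** c))

t = np.arange(1, 21, dtype=float)                      # test weeks
faults = np.array([ 3,  7, 12, 18, 25, 31, 36, 41, 45, 48,
                   51, 53, 55, 56, 57, 58, 58, 59, 59, 60], dtype=float)

params, _ = curve_fit(weibull_mvf, t, faults, p0=[60.0, 8.0, 1.5], maxfev=10000)
a_hat, b_hat, c_hat = params
print(f"estimated total faults a = {a_hat:.1f}, scale b = {b_hat:.1f}, shape c = {c_hat:.2f}")
```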

A study on the characterization and traffic modeling of MPEG video sources

  • Jeon, Yong-Hee;Park, Jung-Sook
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.11
    • /
    • pp.2954-2972
    • /
    • 1998
  • It is expected that the transport of compressed video will become a significant part of total network traffic because of the widespread introduction of multimedia services such as VOD (video on demand). Accordingly, VBR (variable bit-rate) encoded video will be widely used, due to its advantages in statistical multiplexing gain and consistent video quality. Since the transport of video traffic requires larger bandwidth than that of voice and data, the characterization of video sources and traffic modeling are very important for the design of proper resource allocation schemes in ATM networks. Suitable statistical source models are also required to analyze performance metrics such as packet loss, delay, and jitter. In this paper, we analyze and describe the characterization and traffic modeling of MPEG video sources. The models are broadly classified into two categories: statistical models and deterministic models. The statistical models are categorized into five groups: AR (autoregressive), Markov, composite Markov and AR, TES, and self-similar models. The deterministic models are categorized into the $({\sigma},\;{\rho})$ model, the parameterized model, the D-BIND model, and the Empirical Envelopes model. Each model is analyzed for its characteristics along with the corresponding advantages and shortcomings, and the complexity of the models is compared.
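
Of the statistical source models listed, the AR model is the simplest to illustrate. The sketch below fits and resimulates a first-order AR model for frame sizes; the "measured" trace is synthetic and merely stands in for a real MPEG trace.

```python
import numpy as np

# Minimal sketch of an AR(1) source model for VBR video frame sizes:
#   X_t = mu + phi * (X_{t-1} - mu) + e_t,  e_t ~ N(0, sigma_e^2)
# With a real MPEG trace, `frames` would be the measured bits per frame.
rng = np.random.default_rng(3)
frames = 6000 + 40 * rng.standard_normal(2000).cumsum() + \
         600 * rng.standard_normal(2000)            # synthetic stand-in trace

mu = frames.mean()
x = frames - mu
phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])   # lag-1 regression estimate
sigma_e = np.sqrt(np.mean((x[1:] - phi * x[:-1]) ** 2))

# Generate a synthetic trace from the fitted AR(1) model.
synthetic = np.empty_like(frames)
synthetic[0] = mu
for t in range(1, len(frames)):
    synthetic[t] = mu + phi * (synthetic[t - 1] - mu) + sigma_e * rng.standard_normal()
print(f"fitted phi = {phi:.3f}, innovation std = {sigma_e:.0f} bits")
```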


Residuals Plots for Repeated Measures Data

  • Park, Taesung
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2000.11a
    • /
    • pp.187-191
    • /
    • 2000
  • In the analysis of repeated measurements, multivariate regression models that account for the correlations among the observations from the same subject are widely used. Like the usual univariate regression models, these multivariate regression models also need some model diagnostic procedures. In this paper, we propose a simple graphical method to detect outliers and to investigate the goodness of model fit in repeated measures data. The graphical method is based on quantile-quantile (Q-Q) plots of the $\chi^2$ distribution and the standard normal distribution. We also propose diagnostic measures to detect influential observations. The proposed method is illustrated using two examples.
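
A minimal sketch of the chi-square Q-Q idea is shown below, assuming each subject contributes a residual vector; the residuals are simulated rather than taken from the authors' examples.

```python
import numpy as np
from scipy.stats import chi2
import matplotlib.pyplot as plt

# Minimal sketch of a chi-square Q-Q plot for repeated measures residuals:
# each subject's residual vector r_i gives a squared Mahalanobis distance
# d_i^2 = r_i' S^{-1} r_i, which should be approximately chi-square with
# p = number of repeated measures degrees of freedom if the model fits.
rng = np.random.default_rng(4)
n_subj, p = 60, 4
resid = rng.multivariate_normal(np.zeros(p), 0.8 * np.eye(p) + 0.2, size=n_subj)

S_inv = np.linalg.inv(np.cov(resid, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", resid, S_inv, resid)       # squared distances

probs = (np.arange(1, n_subj + 1) - 0.5) / n_subj
plt.scatter(chi2.ppf(probs, df=p), np.sort(d2))
plt.plot([0, d2.max()], [0, d2.max()], linestyle="--")   # reference line
plt.xlabel("chi-square quantiles")
plt.ylabel("ordered squared Mahalanobis distances")
plt.show()
```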


Functional central limit theorems for ARCH(∞) models

  • Choi, Seunghee;Lee, Oesook
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.5
    • /
    • pp.443-455
    • /
    • 2017
  • In this paper, we study ARCH(∞) models with either geometrically decaying coefficients or hyperbolically decaying coefficients. Most popular autoregressive conditional heteroscedasticity (ARCH)-type models, such as various modified generalized ARCH (GARCH)(p, q), fractionally integrated GARCH (FIGARCH), and hyperbolic GARCH (HYGARCH) models, can be expressed as one of these cases. Sufficient conditions for the $L_2$-near-epoch dependence (NED) property to hold are established, and functional central limit theorems for ARCH(∞) models are proved.
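
As a concrete special case, a GARCH(1,1) process has an ARCH(∞) representation with geometrically decaying coefficients. The sketch below simulates such a process and forms the normalized partial-sum process that a functional central limit theorem describes; all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch: a GARCH(1,1) process, which admits an ARCH(infinity)
# representation with geometrically decaying coefficients
#   sigma_t^2 = omega/(1-beta) + alpha * sum_{j>=1} beta^{j-1} X_{t-j}^2.
# The normalized partial-sum process below is the object a functional CLT
# (invariance principle) describes.
rng = np.random.default_rng(5)
omega, alpha, beta = 0.1, 0.1, 0.8
n = 10_000
x = np.empty(n)
sigma2 = omega / (1.0 - alpha - beta)          # start at the stationary variance
for t in range(n):
    x[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + alpha * x[t] ** 2 + beta * sigma2

# Normalized partial-sum (Donsker-type) process on a grid in [0, 1].
grid = np.linspace(0, 1, 101)
partial_sums = np.cumsum(x) / (np.std(x) * np.sqrt(n))
W_n = partial_sums[np.maximum((grid * n).astype(int) - 1, 0)]
print(W_n[-1])   # approximately N(0, 1) for large n under the FCLT
```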

Lagged Unstable Regressor Models and Asymptotic Efficiency of the Ordinary Least Squares Estimator

  • Shin, Dong-Wan;Oh, Man-Suk
    • Journal of the Korean Statistical Society
    • /
    • v.31 no.2
    • /
    • pp.251-259
    • /
    • 2002
  • Lagged regressor models with general stationary errors independent of the regressors are considered. The regressor process is unstable, having characteristic roots on the unit circle. If the order of the lag matches the number of roots on the unit circle, the ordinary least squares estimator (OLSE) is asymptotically efficient in that it has the same limiting distribution as the generalized least squares estimator (GLSE) under the same normalization. This result extends the well-known result of Grenander and Rosenblatt (1957) on the asymptotic efficiency of the OLSE in deterministic polynomial and/or trigonometric regressor models to a class of models with stochastic regressors.
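
The phenomenon can be illustrated by simulation: with a random-walk (unit-root) regressor and stationary AR(1) errors, the OLS and GLS slope estimates essentially coincide after the unit-root normalization. The sketch below is an illustrative setup, not the paper's model or proof.

```python
import numpy as np

# Minimal simulation sketch: an unstable (random-walk) regressor with
# stationary AR(1) errors. After the unit-root normalization, the OLS and
# GLS slope estimates essentially coincide, illustrating the asymptotic
# efficiency of OLS discussed above. All settings are illustrative.
rng = np.random.default_rng(6)
n, rho, beta_true = 2000, 0.6, 1.5
x = np.cumsum(rng.standard_normal(n))              # unit-root regressor
e = np.empty(n)
e[0] = rng.standard_normal()
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.standard_normal()  # stationary AR(1) errors
y = beta_true * x + e

X = x[:, None]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][0]

# GLS with the (known) AR(1) error covariance via the Cochrane-Orcutt transform.
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]
beta_gls = np.dot(x_star, y_star) / np.dot(x_star, x_star)

print(f"n*(OLS - true) = {n * (beta_ols - beta_true):.4f}")
print(f"n*(GLS - true) = {n * (beta_gls - beta_true):.4f}")
```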