• Title/Summary/Keyword: biased estimator

The Causality of Ocean Freight (운임의 인과성)

  • Mo, Soo-Won
    • Journal of Korea Port Economic Association / v.23 no.4 / pp.216-227 / 2007
  • The aim of this paper is to find out the nature of the causality between ocean freight rates employing the Granger method, since the Baltic freight indices tend to move very closely together and seem to behave like a single time series. The Granger causality test, however, is very sensitive to the number of lags used in the analysis, which means that one has to be very careful in implementing it. This paper, hence, uses more lags than the Akaike Information Criterion and the Schwarz Information Criterion suggest. This study shows that BPI does not "Granger-cause" BCI and BSI, but BCI and BSI Granger-cause BPI. I also discover that BHSI does not "Granger-cause" BPI and BSI, but BPI and BSI Granger-cause BHSI. I, hence, model and estimate the ocean freight function and show that the Baltic ocean freight market is inefficient and that each freight is a biased estimator of the other.
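
A minimal sketch of such a pairwise test, using grangercausalitytests from statsmodels; the CSV file and the index columns (BPI, BCI) are placeholders for the Baltic series the paper analyzes, and the lag choices are illustrative:

```python
# Hypothetical illustration of a pairwise Granger test; the file name and
# column names are placeholders, not the paper's data.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rates = pd.read_csv("baltic_indices.csv", parse_dates=["date"], index_col="date")

# H0: BCI does not Granger-cause BPI, i.e. the history of BCI adds nothing
# to the prediction of BPI beyond BPI's own history. Deliberately try more
# lags than AIC/SIC would typically pick, as the paper suggests.
for lags in (4, 8, 12):
    result = grangercausalitytests(rates[["BPI", "BCI"]], maxlag=lags, verbose=False)
    p_value = result[lags][0]["ssr_ftest"][1]
    print(f"lags={lags:2d}  p={p_value:.4f}")
```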

Quadratic inference functions in marginal models for longitudinal data with time-varying stochastic covariates

  • Cho, Gyo-Young;Dashnyam, Oyunchimeg
    • Journal of the Korean Data and Information Science Society / v.24 no.3 / pp.651-658 / 2013
  • For the marginal model and the generalized estimating equations (GEE) method there is an important full covariates conditional mean (FCCM) assumption, pointed out by Pepe and Anderson (1994). With longitudinal data with time-varying stochastic covariates, this assumption may not necessarily hold; if it is violated, biased estimates of the regression coefficients may result. If a diagonal working correlation matrix is used, however, the resulting estimates are (nearly) unbiased irrespective of whether the assumption is violated (Pan et al., 2000). The quadratic inference functions (QIF) method proposed by Qu et al. (2000) is based on the generalized method of moments (GMM) applied to GEE. The QIF yields a substantial improvement in efficiency for the estimator of $\beta$ when the working correlation is misspecified, and equal efficiency to the GEE when the working correlation is correct (Qu et al., 2000). In this paper, we are interested in whether the QIF can improve on the results of the GEE method when the FCCM assumption is violated. We show that the QIF with an exchangeable or AR(1) working correlation matrix cannot be consistent and asymptotically normal in this case, and that it may be less efficient than the GEE with an independence working correlation. Our simulation studies verify this result.
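
As background, a minimal sketch of fitting a marginal model by GEE with an independence (diagonal) working correlation, the choice the cited results favor when the FCCM assumption may fail; it uses the standard statsmodels GEE interface and the epil longitudinal count dataset from MASS, not the paper's simulation design:

```python
# Sketch of GEE estimation with an independence working correlation:
# each subject's repeated measures are treated as uncorrelated when
# forming the estimating equations.
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = sm.datasets.get_rdataset("epil", "MASS").data  # longitudinal seizure counts

model = smf.gee(
    "y ~ trt + base + age",
    groups="subject",
    data=data,
    cov_struct=sm.cov_struct.Independence(),
    family=sm.families.Poisson(),
)
result = model.fit()
print(result.summary())
```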

Empirical Bayesian Misclassification Analysis on Categorical Data (범주형 자료에서 경험적 베이지안 오분류 분석)

  • 임한승;홍종선;서문섭
    • The Korean Journal of Applied Statistics / v.14 no.1 / pp.39-57 / 2001
  • Categorical data sometimes contains misclassification errors. If such data are analyzed as they are, the estimated cell probabilities could be biased and the standard Pearson $X^2$ test may have an inflated true type I error rate. On the other hand, if we treat well-classified data as misclassified, we might spend a great deal of cost and time on adjusting for misclassification. It is therefore a necessary and important step to ask whether categorical data are misclassified before analyzing them. In this paper we consider a two-dimensional contingency table in which one of the two variables is subject to misclassification and the marginal sums of the well-classified variable are fixed, and we explore how to partition the marginal sums into the cells via the concepts of Bound and Collapse of Sebastiani and Ramoni (1997). The double sampling scheme (Tenenbein, 1970) is used to obtain information about the misclassification. We propose test statistics for detecting misclassification and examine the behavior of the statistics by simulation studies.
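
To illustrate the double-sampling idea, here is a hedged numpy sketch of the classical moment-type correction: the misclassification matrix is estimated from a small subsample classified by both the error-prone and the error-free device, then inverted to correct the error-prone cell proportions. The counts are made-up toy numbers, and the paper's proposed test statistics are not reproduced:

```python
# Toy illustration of double sampling (Tenenbein, 1970).
import numpy as np

# Double sample: rows = true category, columns = fallible category.
double = np.array([[90, 10],
                   [15, 85]], dtype=float)

# Estimated P(fallible = j | true = i).
M = double / double.sum(axis=1, keepdims=True)

# Large main sample classified only by the fallible device.
fallible_counts = np.array([600.0, 400.0])
p_observed = fallible_counts / fallible_counts.sum()

# p_observed = M.T @ p_true, so solve for the corrected true proportions.
p_true = np.linalg.solve(M.T, p_observed)
print("corrected cell probabilities:", p_true)
```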

SURE-based À-Trous Wavelet Filter for Interactive Monte Carlo Rendering (몬테카를로 렌더링을 위한 슈어기반 실시간 에이트러스 웨이블릿 필터)

  • Kim, Soomin;Moon, Bochang;Yoon, Sung-Eui
    • Journal of KIISE / v.43 no.8 / pp.835-840 / 2016
  • Monte Carlo ray tracing has been widely used for simulating a diverse set of photo-realistic effects. However, this technique typically produces noise when an insufficient number of samples is used. As the number of samples allocated per pixel is increased, the rendered images converge, but generating a sufficient number of samples requires prohibitive rendering time. To solve this problem, image filtering can be applied to rendered images: instead of naively generating additional rays, the noisy image rendered with a low sample count is filtered to obtain a smoothed image. In this paper, we propose a Stein's Unbiased Risk Estimator (SURE) based À-Trous wavelet filter that removes the noise in rendered images at a near-interactive rate. Based on SURE, we can estimate the filtering error associated with the À-Trous wavelet and identify the wavelet coefficients that reduce the filtering error. Our approach showed an improvement of up to 6:1 over the original À-Trous filter on various regions of the image, while maintaining a minor computational overhead. We have integrated our proposed filtering method with the recent interactive ray tracing system, Embree, and demonstrated its benefits.
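
The paper's per-coefficient filter is not reproduced here; as background, the following sketch shows the classical SURE criterion for choosing a soft threshold on noisy wavelet coefficients (Donoho and Johnstone's SureShrink), which illustrates how SURE estimates filtering error without access to the clean image. The signal and noise level are synthetic assumptions:

```python
# SURE for soft thresholding: an unbiased estimate of the mean squared
# filtering error, computable from the noisy coefficients alone.
import numpy as np

def sure_soft_threshold(coeffs: np.ndarray, sigma: float, t: float) -> float:
    """Unbiased risk estimate for soft-thresholding at level t."""
    n = coeffs.size
    clipped = np.minimum(np.abs(coeffs), t)
    return n * sigma**2 + np.sum(clipped**2) - 2 * sigma**2 * np.sum(np.abs(coeffs) <= t)

rng = np.random.default_rng(0)
signal = np.where(rng.random(1000) < 0.1, 5.0, 0.0)   # sparse true coefficients
noisy = signal + rng.normal(scale=1.0, size=signal.shape)

candidates = np.linspace(0.0, 3.0, 61)
risks = [sure_soft_threshold(noisy, sigma=1.0, t=t) for t in candidates]
best = candidates[int(np.argmin(risks))]
print(f"SURE-optimal soft threshold: {best:.2f}")
```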

A joint modeling of longitudinal zero-inflated count data and time to event data (경시적 영과잉 가산자료와 생존자료의 결합모형)

  • Kim, Donguk;Chun, Jihun
    • The Korean Journal of Applied Statistics / v.29 no.7 / pp.1459-1473 / 2016
  • In longitudinal studies, longitudinal data and survival data are often collected simultaneously over the passage of time. If the missingness in the longitudinal data is non-ignorable because it is correlated with the survival outcome, analyzing the longitudinal data alone, without considering the relation between the two data sources, yields biased estimates of the covariate effects. A joint model of longitudinal data and survival data has been studied as a solution to this problem: by modeling the survival process that causes the missingness, unbiased results can be obtained. In this paper, a joint model of longitudinal zero-inflated count data and survival data is studied, replacing the usual longitudinal component with zero-inflated count data. A hurdle model and a proportional hazards model are used for the longitudinal zero-inflated count data and the survival data, respectively, and the two sub-models are linked through the assumption that their random effects follow a multivariate normal distribution. We use the EM algorithm to obtain the maximum likelihood estimates of the parameters, and the estimated standard errors of the parameters are calculated using the profile likelihood method. In simulations, we observe a better performance of the joint model in terms of bias and coverage probability compared to the separate model.
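
A minimal sketch of the hurdle likelihood used for the longitudinal part: a Bernoulli hurdle for zero versus positive counts and a zero-truncated Poisson for the positive counts. The random effects and the survival sub-model that the paper links to it are omitted, and the responses and design matrix are simulated placeholders:

```python
# Hurdle model: P(y = 0) = 1 - p, and for y > 0 a zero-truncated Poisson.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def hurdle_negloglik(params, y, X):
    k = X.shape[1]
    gamma, beta = params[:k], params[k:]      # hurdle and count coefficients
    p_pos = expit(X @ gamma)                  # P(y > 0)
    lam = np.exp(X @ beta)                    # truncated-Poisson rate
    zero = y == 0
    ll = np.sum(np.log1p(-p_pos[zero]))       # contribution of the zeros
    yp, lp, pp = y[~zero], lam[~zero], p_pos[~zero]
    # log P(y) for positives: hurdle prob. times zero-truncated Poisson pmf
    ll += np.sum(np.log(pp) + yp * np.log(lp) - lp
                 - gammaln(yp + 1) - np.log1p(-np.exp(-lp)))
    return -ll

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(1.5, size=500) * (rng.random(500) < 0.6)  # toy zero-inflated counts

fit = minimize(hurdle_negloglik, x0=np.zeros(4), args=(y, X), method="BFGS")
print("estimated coefficients:", fit.x)
```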

New composite distributions for insurance claim sizes (보험 청구액에 대한 새로운 복합분포)

  • Jung, Daehyeon;Lee, Jiyeon
    • The Korean Journal of Applied Statistics / v.30 no.3 / pp.363-376 / 2017
  • The insurance market is saturated and its growth engine is exhausted; consequently, the insurance industry is now in a low-growth period and insurance companies face a fiercely competitive environment. In such a situation, finding probability distributions that can explain the flow of insurance claims, which are the basis of the actuarial calculation of insurance products, becomes an important issue. Insurance claim sizes are generally known to be well fitted by lognormal or Pareto distributions, whose mass is concentrated on the left with a thick right tail. In recent years, skew normal distributions and skew t distributions have also been considered reasonable distributions for describing insurance claims. Cooray and Ananda (2005) proposed a composite lognormal-Pareto distribution that has the advantages of both the lognormal and the Pareto distribution, and showed that the composite distribution fits better than single distributions. In this paper, we introduce new composite distributions based on skew normal or skew t distributions and apply them to the Danish fire insurance claim data and the US indemnity loss data to compare their performance with other composite distributions and single distributions.
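
As an illustration of the splicing construction these composite models build on, the sketch below evaluates a composite lognormal-Pareto density with the mixing weight fixed by continuity at the threshold; Cooray and Ananda additionally impose smoothness conditions that tie the parameters together, and the skew normal and skew t variants of this paper are not reproduced. Parameter values are illustrative, not fitted to the Danish or US data:

```python
# Spliced density: lognormal body below theta, Pareto tail above it,
# with the body weight w determined by continuity at theta.
import numpy as np
from scipy import stats

def composite_lognormal_pareto(x, mu, sigma, alpha, theta):
    body = stats.lognorm(s=sigma, scale=np.exp(mu))  # lognormal body
    tail = stats.pareto(b=alpha, scale=theta)        # Pareto tail, support x > theta
    # Continuity at theta pins down the weight of the body component.
    body_at = body.pdf(theta) / body.cdf(theta)
    tail_at = tail.pdf(theta)
    w = tail_at / (body_at + tail_at)
    x = np.asarray(x, dtype=float)
    below = w * body.pdf(x) / body.cdf(theta)        # renormalized body on (0, theta]
    above = (1.0 - w) * tail.pdf(x)                  # Pareto tail on (theta, inf)
    return np.where(x <= theta, below, above)

xs = np.linspace(0.1, 20.0, 5)
print(composite_lognormal_pareto(xs, mu=0.0, sigma=1.0, alpha=1.5, theta=2.0))
```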