• Title/Summary/Keyword: Real variance estimation


Implementation of Timing Synchronization in Vehicle Communication System

  • Lee, Sang-Yub; Lee, Chul-Dong; Kwak, Jae-Min
    • Journal of Information and Communication Convergence Engineering / v.8 no.3 / pp.289-294 / 2010
  • In a vehicle communication system, transmitted information must be detected as quickly as possible in order to report the status of cars ahead and behind. With information from moving vehicles, a driver can avoid a crash caused by sudden braking of the car in front, or obtain real-time traffic data to check for a detour. For the wireless link between vehicles, fast timing synchronization is a key factor: finding the synchronization point quickly leaves more time to compensate for signal distortion caused by channel variation. We therefore introduce a combined method that finds the start of a frame quickly by executing auto-correlation and cross-correlation simultaneously, using only the short preambles. By taking the absolute value at the output of the implemented synchronization block, the proposed method achieves much better system performance.
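
The combined auto-/cross-correlation idea can be sketched in a few lines. The illustration below assumes a toy complex baseband signal with a repeated 16-sample short preamble; the function name detect_frame_start, the preamble pattern, the repetition count, and the product-combining metric are all illustrative stand-ins, not the paper's actual parameters.

```python
import numpy as np

def detect_frame_start(rx, preamble, period):
    """Return the sample index that jointly maximizes both sync metrics."""
    n = len(rx) - len(preamble)
    best_idx, best_metric = 0, -np.inf
    for d in range(n):
        # Auto-correlation: exploits the periodicity of the repeated preamble.
        a = np.vdot(rx[d:d + period], rx[d + period:d + 2 * period])
        # Cross-correlation: matches against the known preamble pattern.
        c = np.vdot(preamble, rx[d:d + len(preamble)])
        # Absolute values make the metric insensitive to carrier phase;
        # the product is one simple way to combine the two metrics.
        metric = abs(a) * abs(c)
        if metric > best_metric:
            best_idx, best_metric = d, metric
    return best_idx

# Toy usage: a 4x-repeated 16-sample preamble embedded in noise.
rng = np.random.default_rng(0)
short = np.exp(2j * np.pi * rng.random(16))         # one short-preamble symbol
preamble = np.tile(short, 4)                         # four repetitions
rx = np.concatenate([0.1 * rng.standard_normal(100).astype(complex),
                     preamble,
                     0.1 * rng.standard_normal(100).astype(complex)])
print(detect_frame_start(rx, preamble, period=16))   # ~100
```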

Estimation of Conditional Kendall's Tau for Bivariate Interval Censored Data

  • Kim, Yang-Jin
    • Communications for Statistical Applications and Methods / v.22 no.6 / pp.599-604 / 2015
  • Kendall's tau statistic has been applied to test the association of bivariate random variables. However, incomplete bivariate data subject to truncation and censoring yield incomparable or unorderable pairs. For such partial information, Tsai (1990) suggested a conditional tau statistic and a test procedure for quasi-independence, which have since been extended to more diverse cases such as double truncation and semi-competing risks data. In this paper, we also employ a conditional tau statistic to estimate the association of bivariate interval-censored data. In simulation studies, the suggested method performs better than Betensky and Finkelstein's multiple imputation method except in cases with strong association. As a real data example, the association between incubation time and infection time is estimated from an AIDS cohort study.
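
A minimal sketch of the conditional-tau idea: among interval-censored pairs, only those whose observation intervals are disjoint in both coordinates are orderable, and tau is computed over those pairs alone. This is an illustrative simplification of the approach, not the paper's exact estimator, and the half-width-0.5 intervals in the toy example are an arbitrary choice.

```python
import numpy as np
from itertools import combinations

def conditional_tau(L1, R1, L2, R2):
    """Kendall's tau restricted to pairs orderable in both coordinates."""
    concordant = discordant = 0
    for i, j in combinations(range(len(L1)), 2):
        # Orderable in the first coordinate only if intervals are disjoint.
        if R1[i] < L1[j]:
            s1 = -1
        elif R1[j] < L1[i]:
            s1 = 1
        else:
            continue
        # Orderable in the second coordinate.
        if R2[i] < L2[j]:
            s2 = -1
        elif R2[j] < L2[i]:
            s2 = 1
        else:
            continue
        if s1 * s2 > 0:
            concordant += 1
        else:
            discordant += 1
    m = concordant + discordant
    return (concordant - discordant) / m if m else np.nan

# Toy example: positively associated pairs observed as +/-0.5 intervals.
rng = np.random.default_rng(1)
x = rng.exponential(5, 60)
y = x + rng.normal(0, 1, 60)
print(conditional_tau(x - .5, x + .5, y - .5, y + .5))
```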

Portfolio Optimization with Groupwise Selection

  • Kim, Namhyoung; Sra, Suvrit
    • Industrial Engineering and Management Systems / v.13 no.4 / pp.442-448 / 2014
  • Portfolio optimization in the presence of estimation error can be stabilized by incorporating norm constraints; this result was shown by DeMiguel et al. (A generalized approach to portfolio optimization: improving performance by constraining portfolio norms, Management Science, 55(5), 798-812, 2009), who reported empirical performance better than numerous competing approaches. We extend the idea of norm constraints by introducing a powerful enhancement: grouped selection for portfolio optimization. Here, instead of merely penalizing norms of the assets being selected, we penalize groups, where within a group assets are treated alike, but across groups the penalization may differ. The idea of groupwise selection is grounded in statistics, but to our knowledge it is novel in the context of portfolio optimization. Novelty aside, the real benefits of groupwise selection are substantiated by experiments; our results show that groupwise asset selection leads to strategies with lower variance, higher Sharpe ratios, and even higher expected returns than the ordinary norm-constrained formulations.
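
The groupwise penalty can be written as a convex program: a minimum-variance objective plus a separate 2-norm penalty per asset group. The sketch below uses cvxpy with a toy covariance matrix; the two-group structure and the penalty weights lam are illustrative assumptions, not the paper's calibrated choices.

```python
import numpy as np
import cvxpy as cp

# Group-penalized minimum-variance portfolio: each group g gets its own
# 2-norm penalty weight, so penalization differs across groups but is
# shared within a group.
rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
Sigma = A @ A.T / n + 0.1 * np.eye(n)      # toy covariance matrix

groups = [range(0, 4), range(4, 8)]         # e.g., two sectors (assumed)
lam = [0.05, 0.20]                          # heavier penalty on group 2

w = cp.Variable(n)
risk = cp.quad_form(w, Sigma)
penalty = sum(l * cp.norm(w[list(g)], 2) for g, l in zip(groups, lam))
prob = cp.Problem(cp.Minimize(risk + penalty),
                  [cp.sum(w) == 1])         # fully-invested constraint
prob.solve()
print(np.round(w.value, 3))
```

With a larger lam on a group, its weights shrink jointly toward zero, which is exactly the "assets within a group treated alike" behavior the abstract describes.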

MCCARD: MONTE CARLO CODE FOR ADVANCED REACTOR DESIGN AND ANALYSIS

  • Shim, Hyung-Jin; Han, Beom-Seok; Jung, Jong-Sung; Park, Ho-Jin; Kim, Chang-Hyo
    • Nuclear Engineering and Technology / v.44 no.2 / pp.161-176 / 2012
  • McCARD is a Monte Carlo (MC) neutron-photon transport simulation code developed exclusively for the neutronics design of nuclear reactors and fuel systems. It is capable of performing whole-core neutronics calculations, reactor fuel burnup analysis, few-group diffusion theory constant generation, sensitivity and uncertainty (S/U) analysis, and uncertainty propagation analysis. It has special features such as anterior convergence diagnostics, real variance estimation, neutronics analysis with temperature feedback, $B_1$ theory-augmented few-group constant generation, kinetics parameter generation, and MC S/U analysis based on the use of the adjoint flux. This paper describes the theoretical basis of these features and presents validation calculations for both neutronics benchmark problems and commercial PWRs in operation.
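
Of the listed features, real variance estimation addresses the fact that inter-cycle correlation in MC eigenvalue calculations makes the naive ("apparent") variance of a cycle-averaged tally biased low. The generic batching sketch below illustrates the apparent-versus-real gap on correlated toy tallies; it is one standard real-variance technique, not McCARD's specific algorithm.

```python
import numpy as np

def apparent_and_real_variance(tallies, batch_size):
    """Compare the naive variance of the mean with a batch-means estimate."""
    n = len(tallies) // batch_size * batch_size
    x = np.asarray(tallies[:n])
    apparent = x.var(ddof=1) / n                      # ignores correlation
    batches = x.reshape(-1, batch_size).mean(axis=1)  # batch means
    real = batches.var(ddof=1) / len(batches)         # absorbs correlation
    return apparent, real

# Toy AR(1) "cycle tallies" with positive inter-cycle correlation.
rng = np.random.default_rng(3)
k = np.empty(2000)
k[0] = rng.standard_normal()
for t in range(1, 2000):
    k[t] = 0.8 * k[t - 1] + rng.standard_normal()
print(apparent_and_real_variance(k, batch_size=50))
# The real (batched) variance is noticeably larger than the apparent one.
```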

Long-Term Forecasting by Wavelet-Based Filter Bank Selections and Its Application

  • Lee, Jeong-Ran; Lee, You-Lim; Oh, Hee-Seok
    • The Korean Journal of Applied Statistics / v.23 no.2 / pp.249-261 / 2010
  • Long-term forecasting of seasonal time series is critical in many applications, such as planning business strategies and resolving potential problems facing a company. Unlike the traditional approach, which depends solely on dynamic models, Li and Hinich (2002) combined stochastic dynamic modeling with a filter bank approach to forecast seasonal patterns using highly coherent (High-C) waveforms. We modify the filter selection and forecasting procedure on the wavelet domain to make it more feasible, and compare the resulting predictor with the one obtained from the wavelet variance estimation method. An improvement over other seasonal pattern extraction and forecasting methods, such as those based on the wavelet scalogram, Holt-Winters, and seasonal autoregressive integrated moving average (SARIMA) models, is shown in terms of prediction error. The performance of the proposed method is illustrated by a simulation study and an application to real stock price data.
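
A level-wise wavelet variance can act as a simple filter-bank selector: decompose the series, rank the detail levels by variance (energy), and reconstruct from the strongest levels only. The sketch below, using PyWavelets, is an illustration of that selection idea under assumed choices (db4 wavelet, five levels, keep the top two); it is not the paper's High-C waveform procedure.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.arange(256)
series = np.sin(2 * np.pi * t / 32) + 0.3 * rng.standard_normal(256)

# coeffs[0] is the approximation; coeffs[1:] are details, coarsest first.
coeffs = pywt.wavedec(series, 'db4', level=5)
variances = [np.var(c) for c in coeffs[1:]]
keep = np.argsort(variances)[-2:]           # keep the two strongest levels

filtered = [coeffs[0]] + [
    c if i in keep else np.zeros_like(c)
    for i, c in enumerate(coeffs[1:])
]
smooth = pywt.waverec(filtered, 'db4')      # extracted seasonal pattern
print("variance per detail level:", np.round(variances, 3))
print("reconstruction (first 8 samples):", np.round(smooth[:8], 2))
```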

Bayesian analysis of an exponentiated half-logistic distribution under progressively type-II censoring

  • Kang, Suk Bok; Seo, Jung In; Kim, Yongku
    • Journal of the Korean Data and Information Science Society / v.24 no.6 / pp.1455-1464 / 2013
  • This paper develops maximum likelihood estimators (MLEs) of the unknown parameters of an exponentiated half-logistic distribution based on a progressively type-II censored sample. We obtain approximate confidence intervals for the MLEs using the asymptotic variance-covariance matrix. Using importance sampling, we obtain Bayes estimators, the corresponding highest posterior density credible intervals, and Bayes predictive intervals for the unknown parameters based on progressively type-II censored data from an exponentiated half-logistic distribution. For illustration, we examine the validity of the proposed estimation method using real and simulated data.
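
For a complete (uncensored) sample, the MLE machinery reduces to numerically maximizing the exponentiated half-logistic log-likelihood, with cdf $F(x) = [(1-e^{-\lambda x})/(1+e^{-\lambda x})]^{\alpha}$. The sketch below shows only this basic step; the progressively type-II censored case adds survival terms for the removed units, which are omitted here, and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of the exponentiated half-logistic pdf."""
    a, l = params
    if a <= 0 or l <= 0:
        return np.inf
    e = np.exp(-l * x)
    g = (1 - e) / (1 + e)                 # half-logistic cdf
    # pdf = a * g^(a-1) * 2*l*e / (1+e)^2
    log_pdf = (np.log(a) + (a - 1) * np.log(g)
               + np.log(2 * l) - l * x - 2 * np.log(1 + e))
    return -log_pdf.sum()

# Simulate by inversion: with v = u^(1/a), x = -log((1-v)/(1+v)) / l.
rng = np.random.default_rng(5)
a_true, l_true = 2.0, 1.5
v = rng.random(500) ** (1 / a_true)
x = -np.log((1 - v) / (1 + v)) / l_true

fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method='Nelder-Mead')
print(fit.x)   # should be near (2.0, 1.5)
```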

Estimation of Gini-Simpson index for SNP data

  • Kang, Joonsung
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1557-1564 / 2017
  • We consider genomic sequences with high dimension and low sample size (HDLSS) and no ordering of the response categories. When constructing an appropriate test statistic in this setting, the classical multivariate analysis of variance (MANOVA) approach may not be useful owing to the very large number of parameters and very small sample size. For these reasons, we present a pseudo-marginal model based on the Gini-Simpson index estimated via a Bayesian approach. In view of the small sample size, we consider the permutation distribution over all $n!$ (equally likely) permutations of the pooled sample observations across the $G$ groups (of sizes $n_1, \ldots, n_G$). We simulate data and apply the false discovery rate (FDR) and positive false discovery rate (pFDR) procedures with the associated proposed test statistics to the data. We also analyze real SARS data and compute the FDR and pFDR. The FDR and pFDR procedures, together with the associated per-gene test statistics, control the FDR and pFDR, respectively, at any level $\alpha$ for the set of p-values by using exact conditional permutation theory.
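
A frequentist plug-in sketch of the Gini-Simpson index, $1 - \sum_k p_k^2$, with a two-group permutation p-value is given below. The paper's version estimates the index via a Bayesian approach across $G$ groups, but the permutation logic is the same; the genotype coding and group sizes here are toy assumptions.

```python
import numpy as np

def gini_simpson(labels):
    """Plug-in Gini-Simpson index: 1 - sum of squared category proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - (p ** 2).sum()

def permutation_pvalue(g1, g2, n_perm=5000, seed=6):
    """Two-sided permutation p-value for a difference in diversity."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([g1, g2])
    observed = abs(gini_simpson(g1) - gini_simpson(g2))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(gini_simpson(pooled[:len(g1)]) -
                   gini_simpson(pooled[len(g1):]))
        hits += stat >= observed
    return (hits + 1) / (n_perm + 1)

# Toy genotypes (0/1/2 minor-allele counts) with different diversity.
rng = np.random.default_rng(7)
case = rng.choice([0, 1, 2], size=40, p=[.33, .33, .34])
ctrl = rng.choice([0, 1, 2], size=40, p=[.80, .15, .05])
print(permutation_pvalue(case, ctrl))
```

Running such a test per SNP yields the set of p-values to which the FDR and pFDR procedures are then applied.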

Tutorial: Methodologies for sufficient dimension reduction in regression

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods / v.23 no.2 / pp.105-117 / 2016
  • In this paper, as a sequel to the first tutorial, we discuss sufficient dimension reduction methodologies used to estimate the central subspace (sliced inverse regression, sliced average variance estimation), the central mean subspace (ordinary least squares, principal Hessian directions, iterative Hessian transformation), and the central $k$th-moment subspace (covariance method). Large-sample tests to determine the structural dimensions of the three target subspaces are available for most of the methodologies; however, a permutation test, which does not require large-sample distributions, is also introduced and can be applied to all of the methodologies discussed in the paper. Theoretical relationships among the sufficient dimension reduction methodologies are investigated, and a real data analysis is presented for illustration. A seeded dimension reduction approach is then introduced so that the methodologies can be applied to large $p$, small $n$ regressions.
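
As an illustration of the first method listed, a compact sliced inverse regression (SIR) implementation follows: whiten the predictors, slice on the ordered response, and take the leading eigenvectors of the weighted covariance of the slice means. This is a textbook sketch with assumed defaults (10 slices, one direction), not code from the tutorial.

```python
import numpy as np

def sir(X, y, n_slices=10, n_dir=1):
    """Sliced inverse regression: estimate central-subspace directions."""
    n, p = X.shape
    mu, cov = X.mean(0), np.cov(X, rowvar=False)
    # Whiten the predictors: Z = (X - mu) cov^{-1/2}.
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice on the ordered response; accumulate weighted slice-mean outer products.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Z[chunk].mean(0)
        M += len(chunk) / n * np.outer(m, m)
    # Leading eigenvectors, mapped back to the original predictor scale.
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, -n_dir:]

# Toy model: y depends on X only through the single direction b.
rng = np.random.default_rng(8)
X = rng.standard_normal((500, 5))
b = np.array([1., 2., 0., 0., 0.])
y = (X @ b) ** 3 + rng.standard_normal(500)
d = sir(X, y).ravel()
print(np.round(d / np.linalg.norm(d), 2))  # proportional to b (up to sign)
```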

Allocation in Multi-way Stratification by Linear Programming

  • NamKung, Pyong; Choi, Jae-Hyuk
    • Communications for Statistical Applications and Methods / v.13 no.2 / pp.327-341 / 2006
  • Winkler (1990, 2001), Sitter and Skinner (1994), and Wilson and Sitter (2002) present a method that applies linear programming to designing surveys with multi-way stratification, primarily in situations where the desired sample size is less than, or only slightly larger than, the total number of stratification cells. A comparison is made with existing methods by illustrating the sampling schemes generated for specific examples, by evaluating the sample mean, variance estimates, and mean squared errors, and by simulating the sample mean for all methods. The computations required can, however, increase rapidly as the number of cells in the multi-way classification increases. In this article, their approach is applied to multi-way stratification using real data.
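
The linear-programming view can be illustrated on a toy two-way stratification: choose nonnegative cell sample sizes so that fixed row and column sample margins are met at minimum cost. The relaxed LP below (via scipy.optimize.linprog) only approximates the randomized integer schemes the paper studies; the 3x4 grid, margin targets, and unit costs are made-up numbers.

```python
import numpy as np
from scipy.optimize import linprog

rows, cols = 3, 4
row_target = np.array([30, 20, 10])       # required sample per row stratum
col_target = np.array([15, 15, 15, 15])   # required sample per column stratum
cost = np.ones(rows * cols)               # equal per-unit cost (assumption)

# Equality constraints: row sums and column sums of the flattened n_ij grid.
A_eq, b_eq = [], []
for i in range(rows):
    a = np.zeros(rows * cols)
    a[i * cols:(i + 1) * cols] = 1        # cells in row i
    A_eq.append(a); b_eq.append(row_target[i])
for j in range(cols):
    a = np.zeros(rows * cols)
    a[j::cols] = 1                        # cells in column j
    A_eq.append(a); b_eq.append(col_target[j])

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (rows * cols))
print(res.x.reshape(rows, cols))          # fractional cell allocations
```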

The skew-t censored regression model: parameter estimation via an EM-type algorithm

  • Lachos, Victor H.; Bazan, Jorge L.; Castro, Luis M.; Park, Jiwon
    • Communications for Statistical Applications and Methods / v.29 no.3 / pp.333-351 / 2022
  • The skew-t distribution is an attractive family of asymmetric heavy-tailed densities that includes the normal, skew-normal, and Student's t distributions as special cases. In this work, we propose an EM-type algorithm for computing the maximum likelihood estimates of skew-t linear regression models with censored responses. In contrast with previous proposals, this algorithm uses analytical expressions at the E-step rather than Monte Carlo simulation. These expressions rely on formulas for the mean and variance of a truncated skew-t distribution and can be computed using the R library MomTrunc. The standard errors, predictions of unobserved values of the response, and the log-likelihood function are obtained as by-products. The proposed methodology is illustrated through analyses of simulated data and a real data application to the Letter-Name Fluency test in Peruvian students.
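
The E-step idea can be sketched in the normal special case, where the truncated skew-t moments reduce to truncated normal moments: each censored response is replaced by the mean of its truncated distribution, and the truncation variance enters the scale update. The tobit_em function below is an illustrative stand-in under that simplification; the paper's algorithm uses truncated skew-t moments (via MomTrunc), not scipy's truncnorm.

```python
import numpy as np
from scipy.stats import truncnorm

def tobit_em(X, y, censored, c, n_iter=50):
    """EM for left-censored normal linear regression (normal special case)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = y.var()
    n = len(y)
    for _ in range(n_iter):
        mu = X @ beta
        s = np.sqrt(sigma2)
        # E-step: truncated-normal moments on (-inf, c] for censored rows.
        b = (c - mu[censored]) / s
        ey = truncnorm.mean(-np.inf, b, loc=mu[censored], scale=s)
        vy = truncnorm.var(-np.inf, b, loc=mu[censored], scale=s)
        yhat, extra = y.copy(), np.zeros(n)
        yhat[censored], extra[censored] = ey, vy
        # M-step: least squares on the completed data, plus truncation variance.
        beta = np.linalg.lstsq(X, yhat, rcond=None)[0]
        resid = yhat - X @ beta
        sigma2 = (resid @ resid + extra.sum()) / n
    return beta, sigma2

# Toy data, censored from below at c = 0.
rng = np.random.default_rng(9)
X = np.column_stack([np.ones(300), rng.standard_normal(300)])
y_latent = X @ np.array([1.0, 2.0]) + rng.standard_normal(300)
c = 0.0
censored = y_latent < c
y = np.where(censored, c, y_latent)
print(tobit_em(X, y, censored, c))   # beta near (1, 2), sigma2 near 1
```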