• Title/Summary/Keyword: Covariance

Search Results: 1,775

Covariance analysis of strapdown INS considering characteristics of gyrocompass alignment errors (자이로 컴파스 얼라인먼트 오차특성을 고려한 스트랩다운 관성항법장치의 상호분산해석)

  • 박흥원;박찬국;이장규
    • 제어로봇시스템학회:학술대회논문집 / 1993.10a / pp.34-39 / 1993
  • Presented in this paper is a complete error covariance analysis for a strapdown inertial navigation system (SDINS). We have found that in an SDINS the cross-coupling terms in gyrocompass alignment errors can significantly influence the SDINS error propagation. The initial heading error is closely correlated with the east component of gyro bias error, while the initial level tilt errors are closely related to accelerometer bias errors. In addition, pseudo-state variables are introduced in the covariance analysis for the SDINS, utilizing the characteristics of gyrocompass alignment errors. This approach simplifies the covariance analysis because it reduces the initial error covariance matrix to a diagonal form, which makes a real implementation easier. The approach is confirmed by comparing the results for a simplified case with the covariance analysis obtained from the conventional SDINS error model.
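The covariance propagation underlying such an analysis is the standard discrete-time recursion; the benefit of the pseudo-state variables is precisely that the initial covariance becomes diagonal. Below is a minimal numpy sketch of covariance propagation from a diagonal initial covariance; the three-state model, transition matrix, and noise levels are illustrative assumptions, not the paper's SDINS error model.

```python
import numpy as np

# Illustrative error states (NOT the paper's SDINS model):
# x = [heading error, east gyro bias, level tilt]
dt = 1.0
Phi = np.array([[1.0, dt, 0.0],    # heading error is driven by the east gyro bias
                [0.0, 1.0, 0.0],   # gyro bias modeled as a random constant
                [0.0, 0.0, 1.0]])  # level tilt treated as a slow random walk here
Q = np.diag([0.0, 0.0, 1e-10])     # assumed process noise

# Diagonal initial covariance: the simplification the pseudo-state variables provide
P = np.diag([1e-4, 1e-10, 1e-6])

sigmas = []
for _ in range(3600):              # propagate for one hour at a 1 Hz rate
    P = Phi @ P @ Phi.T + Q
    sigmas.append(np.sqrt(np.diag(P)))   # 1-sigma error budget for each state

print(sigmas[-1])
```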


Bayesian modeling of random effects precision/covariance matrix in cumulative logit random effects models

  • Kim, Jiyeong;Sohn, Insuk;Lee, Keunbaik
    • Communications for Statistical Applications and Methods / v.24 no.1 / pp.81-96 / 2017
  • Cumulative logit random effects models are typically used to analyze longitudinal ordinal data. The random effects covariance matrix is used in these models to account for both subject-specific and time variations. The covariance matrix may be heterogeneous; however, its structure is commonly assumed to be homoscedastic and restricted because the matrix is high-dimensional and must be positive definite. To satisfy these restrictions, two Cholesky decomposition methods were proposed in linear (mixed) models for the random effects precision matrix and the random effects covariance matrix, respectively: the modified Cholesky and moving average Cholesky decompositions. In this paper, we use these two methods to model the random effects precision matrix and the random effects covariance matrix in cumulative logit random effects models for longitudinal ordinal data. The methods are illustrated with a lung cancer data set.
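For reference, the modified Cholesky decomposition mentioned here factors a covariance matrix Σ as T Σ Tᵀ = D, with T unit lower triangular (carrying generalized autoregressive parameters) and D diagonal (innovation variances). A minimal numpy sketch, using a hypothetical AR(1)-like matrix rather than the paper's lung cancer data:

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix as T @ sigma @ T.T = D,
    with T unit lower triangular and D diagonal (innovation variances)."""
    C = np.linalg.cholesky(sigma)          # sigma = C C^T
    d = np.diag(C)
    L = C / d                              # unit lower triangular, sigma = L D L^T
    D = np.diag(d ** 2)
    T = np.linalg.inv(L)
    return T, D

# Hypothetical 4x4 AR(1)-like random effects covariance matrix
rho, var = 0.6, 2.0
idx = np.arange(4)
sigma = var * rho ** np.abs(idx[:, None] - idx[None, :])

T, D = modified_cholesky(sigma)
# The below-diagonal entries of -T are the generalized autoregressive parameters;
# diag(D) holds the innovation variances.
assert np.allclose(T @ sigma @ T.T, D)
```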

Bayesian Modeling of Random Effects Covariance Matrix for Generalized Linear Mixed Models

  • Lee, Keunbaik
    • Communications for Statistical Applications and Methods / v.20 no.3 / pp.235-240 / 2013
  • Generalized linear mixed models (GLMMs) are frequently used for the analysis of longitudinal categorical data when subject-specific effects are of interest. In GLMMs, the structure of the random effects covariance matrix is important for estimating the fixed effects and for explaining subject and time variations. Estimating this matrix is not simple because of its high dimension and the positive-definiteness constraint; consequently, a simple structure of the covariance matrix, such as AR(1), is often used in practice. However, this strong assumption can result in biased estimates of the fixed effects. In this paper, we introduce Bayesian modeling approaches for the random effects covariance matrix using a modified Cholesky decomposition. The modified Cholesky decomposition approach has been used to explain a heterogeneous random effects covariance matrix, and the resulting estimated covariance matrix is positive definite. We analyze metabolic syndrome data from a Korean Genomic Epidemiology Study using these methods.
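One reason the modified Cholesky decomposition suits Bayesian modeling is that any unconstrained real-valued parameters map back to a valid, positive definite covariance matrix. A small sketch of that reconstruction, with hypothetical parameter draws standing in for prior or posterior samples:

```python
import numpy as np

def covariance_from_cholesky_params(phi, log_innov):
    """Rebuild a positive definite covariance matrix from unconstrained
    modified-Cholesky parameters: generalized autoregressive parameters
    phi (strict lower triangle, row by row) and log innovation variances."""
    q = len(log_innov)
    T = np.eye(q)
    T[np.tril_indices(q, k=-1)] = -np.asarray(phi)   # T is unit lower triangular
    D = np.diag(np.exp(log_innov))                   # variances positive by construction
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T                         # sigma = T^{-1} D T^{-T}

# Hypothetical unconstrained draws, e.g. from a Bayesian sampler
rng = np.random.default_rng(0)
sigma = covariance_from_cholesky_params(phi=rng.normal(size=3),   # q=3 -> 3 GARPs
                                         log_innov=rng.normal(size=3))
assert np.all(np.linalg.eigvalsh(sigma) > 0)         # always positive definite
```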

Global Covariance based Principal Component Analysis for Speaker Identification (화자식별을 위한 전역 공분산에 기반한 주성분분석)

  • Seo, Chang-Woo;Lim, Young-Hwan
    • Phonetics and Speech Sciences / v.1 no.1 / pp.69-73 / 2009
  • This paper proposes an efficient global covariance-based principal component analysis (GCPCA) for speaker identification. Principal component analysis (PCA) is a feature extraction method that reduces the dimension of the feature vectors and the correlation among them by projecting the original feature space onto a small subspace through a transformation. However, conventional PCA requires a large amount of training data because the eigenvalue and eigenvector matrices are found from a full covariance matrix computed for each speaker. The proposed method first calculates a global covariance matrix using the training data of all speakers, and then finds the eigenvalue matrix and the corresponding eigenvector matrix of that global covariance matrix. Compared to conventional PCA and Gaussian mixture model (GMM) methods, the proposed method shows better performance in speaker identification while requiring less storage space and lower complexity.
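A minimal sketch of the global-covariance idea: pool every speaker's training features, compute one covariance matrix, and take its leading eigenvectors as the projection. The feature dimensions and data below are placeholders, not the paper's speech corpus.

```python
import numpy as np

def global_pca_transform(features_by_speaker, n_components):
    """Global covariance PCA: one covariance matrix from all speakers'
    training features, instead of one full covariance matrix per speaker."""
    X = np.vstack(features_by_speaker)                 # pool every speaker's frames
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)               # single global covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]   # leading eigenvectors
    W = eigvecs[:, order]
    return mean, W

# Hypothetical MFCC-like features: 3 speakers, 200 frames each, 12 dimensions
rng = np.random.default_rng(1)
feats = [rng.normal(size=(200, 12)) for _ in range(3)]
mean, W = global_pca_transform(feats, n_components=6)
projected = (feats[0] - mean) @ W                      # reduced features for speaker 0
```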


Covariance Matrix Synthesis Using Maximum Ratio Combining in Coherent MIMO Radar with Frequency Diversity

  • Jeon, Hyeonmu;Chung, Yongseek;Chung, Wonzoo;Kim, Jongmann;Yang, Hoongee
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.445-450 / 2018
  • Reliable detection and parameter estimation of a target with a fluctuating radar cross section (RCS) are known to be difficult tasks. To reduce the effect of RCS fluctuation, various diversity techniques have been considered. This paper presents a new method for synthesizing a covariance matrix applicable to a coherent multi-input multi-output (MIMO) radar with frequency diversity. It is achieved by efficiently combining covariance matrices corresponding to different carrier frequencies such that the signal-to-noise ratio (SNR) in the combined covariance matrix is maximized. The value of a synthesized covariance matrix is assessed by examining the phase curves of its entries and the improvement in direction of arrival (DOA) estimation.
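As a rough illustration of SNR-weighted combining, the sketch below forms a weighted sum of per-frequency covariance matrices with weights proportional to assumed SNRs. This is a simplification for intuition only; the paper's exact combining rule and SNR estimation are not reproduced here.

```python
import numpy as np

def combine_covariances_mrc(covs, snrs):
    """Maximum-ratio-style combining of per-frequency covariance matrices:
    each matrix is weighted in proportion to its estimated SNR.
    A simplification for illustration, not the paper's exact rule."""
    snrs = np.asarray(snrs, dtype=float)
    weights = snrs / snrs.sum()
    return sum(w * R for w, R in zip(weights, covs))

# Hypothetical sample covariances from 3 carrier frequencies, 4-element array
rng = np.random.default_rng(2)
covs = []
for _ in range(3):
    X = rng.normal(size=(4, 100)) + 1j * rng.normal(size=(4, 100))  # snapshots
    covs.append(X @ X.conj().T / X.shape[1])
R = combine_covariances_mrc(covs, snrs=[4.0, 1.0, 2.0])  # SNRs are assumed values
```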

Negative binomial loglinear mixed models with general random effects covariance matrix

  • Sung, Youkyung;Lee, Keunbaik
    • Communications for Statistical Applications and Methods / v.25 no.1 / pp.61-70 / 2018
  • Modeling of the random effects covariance matrix in generalized linear mixed models (GLMMs) is an issue in the analysis of longitudinal categorical data because the covariance matrix can be high-dimensional and its estimate must be positive definite. To satisfy these constraints, we consider the autoregressive and moving average Cholesky decomposition (ARMACD) to model the covariance matrix. The ARMACD creates a more flexible decomposition of the covariance matrix that provides generalized autoregressive parameters, generalized moving average parameters, and innovation variances. In this paper, we analyze longitudinal count data with overdispersion using GLMMs. We propose negative binomial loglinear mixed models to analyze longitudinal count data, and we also present modeling of the random effects covariance matrix using the ARMACD. Epilepsy data are analyzed using the proposed model.
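A compact way to see what the ARMACD provides is to rebuild a covariance matrix from its three parameter sets. The sketch below assumes the convention T Σ Tᵀ = L D Lᵀ, with T holding generalized autoregressive parameters, L generalized moving average parameters, and D innovation variances; the numerical values are hypothetical.

```python
import numpy as np

def covariance_from_armacd(phi, theta, log_innov):
    """Rebuild a covariance matrix from ARMACD parameters, assuming the
    convention T @ sigma @ T.T = L @ D @ L.T, where T holds generalized
    autoregressive parameters and L holds generalized moving average
    parameters (both unit lower triangular), and D the innovation variances."""
    q = len(log_innov)
    T = np.eye(q); T[np.tril_indices(q, -1)] = -np.asarray(phi)
    L = np.eye(q); L[np.tril_indices(q, -1)] = np.asarray(theta)
    D = np.diag(np.exp(log_innov))
    Tinv = np.linalg.inv(T)
    return Tinv @ L @ D @ L.T @ Tinv.T     # positive definite by construction

# Hypothetical parameter values for a 3x3 random effects covariance matrix
sigma = covariance_from_armacd(phi=[0.5, 0.2, 0.4],
                               theta=[0.1, 0.0, 0.3],
                               log_innov=[0.0, -0.2, 0.1])
assert np.all(np.linalg.eigvalsh(sigma) > 0)
```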

The Analysis of The Kalman Filter Noise Factor on The Inverted Pendulum (도립진자 모델에서 칼만 필터의 잡음인자 해석)

  • Kim, Hoon-Hak
    • Journal of the Korea Society of Computer and Information / v.15 no.5 / pp.13-21 / 2010
  • Optimal results of Kalman filtering on the inverted pendulum system require well-chosen factors such as the process noise covariance matrix Q, the measurement noise covariance matrix R, and the initial error covariance matrix $P_0$. Because these factors are unknown, or known only approximately, in practical situations, we present a special case in which the optimality of the filter is not destroyed and is not sensitive to scaling of these covariance matrices. Moreover, the error covariance matrices produced by this method predict errors in the state estimate that are consistent with the scaled covariance matrices rather than with the issued state estimates. Various results using the scalar gain $\delta$ are derived to describe the relations among the three covariance matrices, the Kalman gain, and the error covariance matrices. This paper is organized as follows: Section III gives a brief overview of the inverted pendulum system, Section IV deals with the mathematical dynamic model of the system used for the computer simulation, and Section V presents various simulation results using the scalar gain.
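The insensitivity to a common scaling can be checked directly: multiplying Q, R, and $P_0$ by the same scalar leaves the Kalman gain sequence unchanged. A small numpy check on a toy double-integrator model, which is only an assumption standing in for the linearized pendulum:

```python
import numpy as np

def kalman_gain_sequence(F, H, Q, R, P0, steps=20):
    """Run the Kalman filter covariance recursion and return the gain sequence."""
    P, gains = P0.copy(), []
    for _ in range(steps):
        P = F @ P @ F.T + Q                            # time update
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        P = (np.eye(len(F)) - K @ H) @ P               # measurement update
        gains.append(K)
    return gains

# Toy double-integrator model standing in for the linearized pendulum (an assumption)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R, P0 = np.diag([1e-4, 1e-3]), np.array([[1e-2]]), np.eye(2)

delta = 7.3                                            # common scaling factor
g1 = kalman_gain_sequence(F, H, Q, R, P0)
g2 = kalman_gain_sequence(F, H, delta * Q, delta * R, delta * P0)
assert all(np.allclose(a, b) for a, b in zip(g1, g2))  # gain unchanged by common scaling
```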

A Comparative Study of Covariance Matrix Estimators in High-Dimensional Data (고차원 데이터에서 공분산행렬의 추정에 대한 비교연구)

  • Lee, DongHyuk;Lee, Jae Won
    • The Korean Journal of Applied Statistics / v.26 no.5 / pp.747-758 / 2013
  • The covariance matrix is important in multivariate statistical analysis, and the sample covariance matrix is commonly used as its estimator. In high-dimensional data, however, the dimension is larger than the sample size, so the sample covariance matrix may not be suitable: it is known to perform poorly and is not even invertible. A number of covariance matrix estimators have recently been proposed, following three different approaches: shrinkage, thresholding, and the modified Cholesky decomposition. We compare the performance of these newly proposed estimators in various situations.
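To make the first two approaches concrete, the sketch below builds a singular sample covariance in a p > n setting and applies a simple shrinkage-toward-identity estimator and a hard-thresholding estimator. The tuning constants are arbitrary placeholders, not the data-driven choices compared in the paper.

```python
import numpy as np

def shrinkage_estimator(S, lam):
    """Shrink the sample covariance S toward a scaled identity target."""
    p = S.shape[0]
    target = np.trace(S) / p * np.eye(p)
    return (1 - lam) * S + lam * target

def threshold_estimator(S, tau):
    """Hard-threshold small off-diagonal entries of the sample covariance."""
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))                  # never threshold the variances
    return T

# High-dimensional setting: p = 50 variables, n = 20 samples (p > n)
rng = np.random.default_rng(3)
X = rng.normal(size=(20, 50))
S = np.cov(X, rowvar=False)                          # rank-deficient sample covariance

print(np.linalg.matrix_rank(S))                      # < 50, so S is not invertible
S_shrunk = shrinkage_estimator(S, lam=0.3)           # lam chosen arbitrarily here
S_thresh = threshold_estimator(S, tau=0.1)           # tau chosen arbitrarily here
print(np.all(np.linalg.eigvalsh(S_shrunk) > 0))      # shrinkage restores invertibility
```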

LOCAL INFLUENCE ANALYSIS OF THE PROPORTIONAL COVARIANCE MATRICES MODEL

  • Kim, Myung-Geun;Jung, Kang-Mo
    • Journal of the Korean Statistical Society / v.33 no.2 / pp.233-244 / 2004
  • The influence of observations is investigated in fitting the proportional covariance matrices model. Local influence measures are obtained when all parameters, or subsets of the parameters, are of interest. We also derive the local influence measure for investigating the influence of observations in testing the proportionality of covariance matrices. A numerical example is given for illustration.

On Testing Equality of Matrix Intraclass Covariance Matrices of $K$ Multivariate Normal Populations

  • Kim, Hea-Jung
    • Communications for Statistical Applications and Methods / v.7 no.1 / pp.55-64 / 2000
  • We propose a criterion for testing homogeneity of matrix intraclass covariance matrices of $K$ multivariate normal populations. It is based on a variable transformation used to develop a likelihood ratio criterion that makes use of the eigenstructures of the matrix intraclass covariance matrices. The criterion then leads to a simple test that uses an asymptotic distribution obtained from Box's (1949) theorem for the general asymptotic expansion of random variables.
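For orientation, a generic Box-type likelihood ratio test for homogeneity of covariance matrices looks as follows; note that this sketch ignores the matrix intraclass structure that the paper's criterion exploits, so it illustrates the general idea only, with hypothetical data.

```python
import numpy as np
from scipy.stats import chi2

def box_m_test(samples):
    """Box's M test for homogeneity of covariance matrices across K groups.
    A generic sketch; the paper's criterion additionally exploits the matrix
    intraclass structure, which this sketch does not."""
    k = len(samples)
    p = samples[0].shape[1]
    ns = np.array([s.shape[0] for s in samples])
    covs = [np.cov(s, rowvar=False) for s in samples]
    n = ns.sum()
    pooled = sum((ni - 1) * Si for ni, Si in zip(ns, covs)) / (n - k)
    M = (n - k) * np.log(np.linalg.det(pooled)) \
        - sum((ni - 1) * np.log(np.linalg.det(Si)) for ni, Si in zip(ns, covs))
    c1 = (np.sum(1.0 / (ns - 1)) - 1.0 / (n - k)) \
         * (2 * p**2 + 3 * p - 1) / (6.0 * (p + 1) * (k - 1))
    stat = (1 - c1) * M                          # Box's correction factor
    df = p * (p + 1) * (k - 1) / 2               # approximate chi-square df
    return stat, chi2.sf(stat, df)

# Hypothetical data: K = 3 normal populations with p = 4 variables
rng = np.random.default_rng(4)
groups = [rng.normal(size=(40, 4)) for _ in range(3)]
stat, pval = box_m_test(groups)
```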
