• Title/Summary/Keyword: 공분산 구조모형 (covariance structure model)

A Tutorial on Covariance-based Structural Equation Modeling using R: focused on "lavaan" Package (R을 이용한 공분산 기반 구조방정식 모델링 튜토리얼: Lavaan 패키지를 중심으로)

  • Yoon, Cheol-Ho; Choi, Kwang-Don
    • Journal of Digital Convergence, v.13 no.10, pp.121-133, 2015
  • This tutorial presents an approach to performing covariance-based structural equation modeling (SEM) in R. For this purpose, the tutorial defines criteria for covariance-based SEM by reviewing previous studies and shows how to analyze a research model with an example using "lavaan", the R package supporting covariance-based SEM. As results, a covariance-based SEM technique using R and the R scripts for the example model are presented. The tutorial will be useful as a starting point for researchers who first encounter covariance-based SEM, and it provides a knowledge base for in-depth analysis with R, an integrated statistical software environment, for researchers already familiar with covariance-based SEM.
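As an illustration of the lavaan workflow described in the tutorial, the sketch below fits a small covariance-based structural model in R. The latent variables, indicator names, and the data frame `survey_df` are hypothetical placeholders, not the paper's example model.

```r
# Minimal covariance-based SEM sketch with lavaan (hypothetical model and data).
library(lavaan)

model <- '
  # measurement model: latent constructs and their observed indicators
  trust  =~ t1 + t2 + t3
  intent =~ i1 + i2 + i3
  # structural model: regression among the latent variables
  intent ~ trust
'

fit <- sem(model, data = survey_df)   # survey_df holds columns t1..t3, i1..i3

# fit indices commonly reported for covariance-based SEM
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
summary(fit, standardized = TRUE)
```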

Comparison of the covariance matrix for general linear model (일반 선형 모형에 대한 공분산 행렬의 비교)

  • Nam, Sang Ah; Lee, Keunbaik
    • The Korean Journal of Applied Statistics, v.30 no.1, pp.103-117, 2017
  • In longitudinal data analysis, the serial correlation of repeated outcomes must be taken into account through the covariance matrix. Modeling the covariance matrix is important for estimating the effects of covariates properly, but it is challenging because the matrix contains many parameters and the estimated covariance matrix must be positive definite. To overcome these restrictions, several Cholesky decomposition approaches for the covariance matrix have been proposed: the modified autoregressive (AR), moving average (MA), and ARMA Cholesky decompositions. In this paper we review these approaches and compare their performance using simulation studies.
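To make the decomposition concrete, the following sketch applies a modified Cholesky factorization to a small, hypothetical AR(1)-type covariance matrix; it only illustrates the reparameterization and is not the paper's simulation design.

```r
# Modified Cholesky decomposition of a covariance matrix:
# T %*% Sigma %*% t(T) = D, with T unit lower triangular and D diagonal,
# which frees the parameters from the positive-definiteness constraint.
Sigma <- 0.7^abs(outer(1:4, 1:4, "-"))   # hypothetical AR(1)-type covariance, 4 occasions

C     <- t(chol(Sigma))                  # lower triangular factor, Sigma = C %*% t(C)
L     <- C %*% diag(1 / diag(C))         # unit lower triangular part of C
T_mat <- solve(L)                        # below-diagonal entries: minus the AR coefficients
D     <- diag(diag(C)^2)                 # innovation variances

# sanity checks: the factorization reproduces D and Sigma
all.equal(T_mat %*% Sigma %*% t(T_mat), D, check.attributes = FALSE)
all.equal(solve(T_mat) %*% D %*% t(solve(T_mat)), Sigma, check.attributes = FALSE)
```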

A Mixed Model for Nested Structural Repeated Data (지분구조의 반복측정 자료에 대한 혼합모형)

  • Choi, Jae-Sung
    • The Korean Journal of Applied Statistics, v.22 no.1, pp.181-188, 2009
  • This paper discusses the covariance structures of data collected from an experiment with a nested design structure, in which a smaller experimental unit is nested within a larger one. Because the repeated measures factor cannot be randomized to the nested experimental units, a compound symmetry covariance structure is assumed for the analysis of the data. Treatments are given as combinations of the levels of random and fixed factors, so a mixed-effects model is suggested under the compound symmetry structure. An example is presented to illustrate the nesting of the experimental units and to show how to obtain the parameter estimates in the fitted model.
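A minimal sketch of how such a compound symmetry assumption can be imposed when fitting repeated measures data in R is shown below (using nlme; the data frame `dat` and the variable names are hypothetical, and the paper itself develops the model analytically rather than through this software).

```r
# Repeated measures analysis under a compound symmetry covariance structure
# (hypothetical data frame `dat`: response y, treatment trt, occasion time,
# and experimental unit identifier unit).
library(nlme)

fit_cs <- gls(y ~ trt * time,
              correlation = corCompSymm(form = ~ 1 | unit),  # equal correlation within a unit
              data = dat, method = "REML")

summary(fit_cs)     # fixed-effect estimates under compound symmetry
intervals(fit_cs)   # confidence intervals, including the common correlation rho
```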

A mixed model for repeated split-plot data (반복측정의 분할구 자료에 대한 혼합모형)

  • Choi, Jae-Sung
    • Journal of the Korean Data and Information Science Society, v.21 no.1, pp.1-9, 2010
  • This paper suggests a mixed-effects model for analyzing split-plot data when a repeated measures factor affects the response variable. Covariance structures among the observations are discussed because the repeated measures factor is assumed to be one of the explanatory variables. As a plausible covariance structure, a compound symmetric structure is assumed for analyzing the data. The restricted maximum likelihood (REML) method is used for estimating the fixed effects in the model.
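For illustration, a hedged sketch of a split-plot mixed model fitted by REML in R follows (nlme, hypothetical factors); the random whole-plot effect induces a compound symmetric covariance among observations within the same whole plot.

```r
# Split-plot mixed model fitted by REML (hypothetical variables):
# whole-plot factor A, subplot factor B, repeated measures factor time,
# and whole-plot identifier wplot.
library(nlme)

fit_sp <- lme(y ~ A * B * time,
              random = ~ 1 | wplot,   # random whole-plot error -> compound symmetry within plots
              data = dat, method = "REML")

anova(fit_sp)      # tests for the fixed effects
VarCorr(fit_sp)    # whole-plot and residual variance components
```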

A statistical analysis on the selection of the optimal covariance matrix pattern for the cholesterol data (콜레스테롤 자료에 대한 적정 공분산행렬 형태 산출에 관한 통계적 분석)

  • Jo, Jin-Nam; Baik, Jai-Wook
    • Journal of the Korean Data and Information Science Society, v.21 no.6, pp.1263-1270, 2010
  • Sixty patients were divided into three groups, and each group of twenty persons was fed a different diet over 5 weeks. Cholesterol was measured repeatedly, once a week, during the 5 weeks. A mixed-model analysis of the repeated measurements selected the homogeneous Toeplitz pattern as the optimal covariance structure. Under this covariance matrix, the correlations between measurements at different times are fairly high, 0.64-0.78. Based on the homogeneous Toeplitz covariance pattern model, the time effect was found to be highly significant, whereas the treatment effect and the treatment-time interaction were found to be insignificant.
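A sketch of how candidate covariance patterns for repeated measurements can be compared by AIC in R is given below. The Toeplitz pattern selected in the paper is available in software such as SAS PROC MIXED; the nlme structures shown here (compound symmetry, AR(1), unstructured) are stand-ins, and the data frame `chol_df` and variable names are hypothetical.

```r
# Comparing candidate covariance patterns for weekly cholesterol measurements by AIC
# (hypothetical data frame `chol_df`: response y, diet group diet, week, subject id).
library(nlme)

m_cs  <- gls(y ~ diet * week, data = chol_df, method = "REML",
             correlation = corCompSymm(form = ~ 1 | id))
m_ar1 <- gls(y ~ diet * week, data = chol_df, method = "REML",
             correlation = corAR1(form = ~ 1 | id))
m_un  <- gls(y ~ diet * week, data = chol_df, method = "REML",
             correlation = corSymm(form = ~ 1 | id),
             weights = varIdent(form = ~ 1 | week))   # unstructured covariance pattern

AIC(m_cs, m_ar1, m_un)   # the smallest AIC indicates the best-fitting covariance pattern
```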

The Development of Biomass Model for Pinus densiflora in Chungnam Region Using Random Effect (임의효과를 이용한 충남지역 소나무림의 바이오매스 모형 개발)

  • Pyo, Jungkee; Son, Yeong Mo
    • Journal of Korean Society of Forest Science, v.106 no.2, pp.213-218, 2017
  • The purpose of this study was to develop an age-biomass model with a random effect for the Chungnam region. To develop the biomass model by species and tree component, data for Pinus densiflora in the central region were collected from 30 plots (150 trees). A mixed model was used, with a fixed effect describing the age-biomass relation for Pinus densiflora and a random effect representing the correlation within survey areas. To evaluate the model with the random effect, the Akaike information criterion (AIC) was used, and the variance-covariance matrix and the residual of the repeated data were calculated; the estimated variance-covariance and residual components were -1.0022 and 0.6240, respectively. The model with the random effect (AIC = 377.2) had a lower AIC value than comparable models in other studies involving random effects. Because the random effect is associated with categorical data used in the fitting process, the model can be calibrated to the Chungnam region by obtaining additional measurements. Therefore, the results of this study provide a useful method for developing regional biomass models with random effects.

The Studies of the Stochastic Duration and the Relationship between Futures and Forward Prices under the Arbitrage-free Interest rate Model (차익거래 기회가 없는 이자율 변동모형 하에서 확률적 평균만기 및 선물가격과 선도가격과의 관계에 관한 연구)

  • Kang, Byong-Ho; Choi, Jong-Yeon
    • The Korean Journal of Financial Management, v.19 no.2, pp.27-48, 2002
  • This paper derives a new duration measure, the arbitrage-free (AR) duration, for the case in which the term structure of interest rates moves so that no arbitrage opportunities arise, and analyzes the relationship between futures and forward prices. Most previous studies on duration derive the measure under the assumption that the yield curve shifts in a specific manner and use it to measure changes in bond prices. This paper derives the AR duration by relaxing the assumptions of the existing duration measures; the AR duration proposed here can be seen as a generalized measure that includes the conventional Macaulay duration as a special case. In addition, this paper investigates the theoretical relationship between futures and forward prices, showing that the futures price is discounted relative to the forward price and modeling the effect of interest rate risk on the degree of this discount. Interest in empirical studies of bond immunization using futures has been growing recently. In the traditional empirical methodology, the variance-covariance matrix between the futures price and the underlying bond price is estimated first, an interest rate risk hedging strategy is constructed on the basis of the estimated variance-covariance matrix, and the strategy is then examined empirically. The main problem of this traditional approach, however, is that it cannot properly handle a non-stationary variance-covariance matrix. The results of this study therefore provide a theoretical framework for constructing an optimal hedging strategy.
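As background for the duration measure that the paper generalizes, the sketch below computes the classical Macaulay duration of a fixed-coupon bond in R; the cash flows and yield are illustrative numbers, not figures from the paper.

```r
# Classical Macaulay duration (the special case the AR duration generalizes):
# the present-value-weighted average time to the bond's cash flows.
macaulay_duration <- function(cash_flows, yield) {
  t  <- seq_along(cash_flows)          # payment times 1, 2, ..., n (per period)
  pv <- cash_flows / (1 + yield)^t     # present value of each cash flow
  sum(t * pv) / sum(pv)                # PV-weighted average time to payment
}

# 3-year bond with a 5% annual coupon on face value 100, flat 4% yield (illustrative)
macaulay_duration(c(5, 5, 105), 0.04)
```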

Pseudo-EBLUP Estimation in Small Areas (소지역에서 Pseudo-EBLUP 추정)

  • Sin, Min-Ung; Baek, Jeong-Yong; Kim, Ik-Chan
    • Proceedings of the Korean Statistical Society Conference, 2003.10a, pp.111-115, 2003
  • Small area models can be regarded as special cases of the general linear mixed model containing fixed effects and random effects. A small area mean or total can be expressed as a linear combination of the fixed and random effects. Under a linear mixed model with a block-diagonal covariance structure, the EBLUP is widely applied to small area models in practical problems. Pseudo-EBLUP estimators, which depend on the design weights and satisfy the design-consistency property, also satisfy the benchmarking property when aggregated in small area estimation, without any post-adjustment.
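For contrast with the design-weighted pseudo-EBLUP discussed above, the sketch below shows the ordinary EBLUP of a small area mean under a nested-error unit-level linear mixed model in R; the data frame `units` and the variable names are hypothetical, and in practice the population area means of the covariate would replace the sample means used here.

```r
# Ordinary EBLUP of small area means under the nested-error (unit-level) mixed model
# (hypothetical data frame `units`: response y, covariate x, small area label area).
library(nlme)

fit  <- lme(y ~ x, random = ~ 1 | area, data = units, method = "REML")

beta <- fixef(fit)                          # estimated fixed effects
v    <- ranef(fit)[, 1]                     # predicted area random effects (BLUPs)
xbar <- tapply(units$x, units$area, mean)   # area covariate means (population means in practice)

eblup_mean <- beta[1] + beta[2] * xbar + v  # EBLUP of each small area mean
eblup_mean
```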

A Logit Model for Repeated Binary Response Data (반복측정의 이가반응 자료에 대한 로짓 모형)

  • Choi, Jae-Sung
    • The Korean Journal of Applied Statistics, v.21 no.2, pp.291-299, 2008
  • This paper discusses model building for repeated binary response data with different time-dependent covariates at each occasion. Since repeated measurements have a correlated structure, weighted least squares (WLS) methodology is applied. Repeated measures designs usually involve experimental units of different sizes, like split-plot designs. However, repeated measures designs differ from split-plot designs in that the levels of one or more factors cannot be randomly assigned to one or more of the sizes of experimental units in the experiment; in particular, the levels of time cannot be assigned at random to the time intervals. Because of this nonrandom assignment, the errors corresponding to the respective experimental units may be correlated and require a covariance matrix. Therefore, the estimates of the effects included in the suggested logit model are obtained by using covariance structures.
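As a widely used alternative to the WLS approach for correlated binary responses (not the method developed in this paper), the sketch below fits a marginal logit model by GEE with an exchangeable working correlation in R; the data frame `resp` and the variable names are hypothetical.

```r
# Marginal logit model for repeated binary responses via GEE
# (hypothetical data frame `resp`: binary outcome y, time-dependent covariate x,
# occasion time, subject identifier id).
library(geepack)

fit_gee <- geeglm(y ~ x + factor(time),
                  id = id, data = resp,
                  family = binomial(link = "logit"),
                  corstr = "exchangeable")   # working correlation for within-subject dependence

summary(fit_gee)   # robust (sandwich) standard errors for the logit coefficients
```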

BCDR algorithm for network estimation based on pseudo-likelihood with parallelization using GPU (유사가능도 기반의 네트워크 추정 모형에 대한 GPU 병렬화 BCDR 알고리즘)

  • Kim, Byungsoo; Yu, Donghyeon
    • Journal of the Korean Data and Information Science Society, v.27 no.2, pp.381-394, 2016
  • A graphical model represents conditional dependencies between variables as a graph with nodes and edges. It is widely used in various fields, including physics, economics, and biology, to describe complex associations. Conditional dependencies can be estimated from an inverse covariance matrix, in which zero off-diagonal elements denote conditional independence of the corresponding variables. This paper proposes an efficient BCDR (block coordinate descent with random permutation) algorithm that uses graphics processing units and random permutation for CONCORD (convex correlation selection method), a method based on the BCD (block coordinate descent) algorithm that estimates an inverse covariance matrix from a pseudo-likelihood. We conduct numerical studies for two network structures to demonstrate the efficiency of the proposed algorithm for CONCORD in terms of computation time.
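For context, the sketch below estimates a sparse inverse covariance matrix with the graphical lasso (glasso package) in R; this is a related penalized estimator, not the pseudo-likelihood CONCORD or the BCDR algorithm proposed here, and the data are simulated placeholders.

```r
# Sparse inverse covariance (precision) matrix estimation with the graphical lasso;
# zero off-diagonal entries of the estimate correspond to conditional independence.
library(glasso)
library(MASS)

set.seed(1)
p     <- 10
Sigma <- 0.5^abs(outer(1:p, 1:p, "-"))                    # true AR(1)-type covariance
X     <- mvrnorm(n = 200, mu = rep(0, p), Sigma = Sigma)  # simulated placeholder data

S   <- cov(X)
fit <- glasso(S, rho = 0.1)   # L1 penalty on off-diagonal entries of the precision matrix

round(fit$wi, 2)                      # estimated sparse precision matrix
sum(fit$wi[upper.tri(fit$wi)] != 0)   # number of estimated edges in the graph
```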