• Title/Summary/Keyword: positive definite matrix

THE k-GOLDEN MEAN OF TWO POSITIVE NUMBERS AND ITS APPLICATIONS

  • Choi, Jin Ho;Kim, Young Ho
    • Bulletin of the Korean Mathematical Society / v.56 no.2 / pp.521-533 / 2019
  • In this paper, we define a mean of two positive numbers called the k-golden mean and study some of its properties. In particular, we show that the 2-golden mean refines the harmonic and geometric means. As an application, we define the k-golden ratio and give some of its properties as a generalization of the golden ratio. Furthermore, we define the matrix k-golden mean of two positive definite matrices and give some of its properties. This improves Lim's results [2] on the matrix golden mean.
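
The k-golden mean itself is defined in the paper; as background only, the sketch below (Python, assuming nothing beyond the abstract) computes the classical quantities the abstract refers to: the harmonic and geometric means of two positive numbers, which the 2-golden mean is shown to refine, and the golden ratio, the positive root of x² = x + 1, which the k-golden ratio generalizes.

```python
import math

def harmonic_mean(a: float, b: float) -> float:
    """Harmonic mean of two positive numbers: 2ab / (a + b)."""
    return 2.0 * a * b / (a + b)

def geometric_mean(a: float, b: float) -> float:
    """Geometric mean of two positive numbers: sqrt(ab)."""
    return math.sqrt(a * b)

# The classical golden ratio phi is the positive root of x^2 = x + 1.
phi = (1.0 + math.sqrt(5.0)) / 2.0
assert abs(phi ** 2 - (phi + 1.0)) < 1e-12

a, b = 2.0, 8.0
# The harmonic mean never exceeds the geometric mean (AM-GM-HM inequality);
# the abstract states that the 2-golden mean lies between these two.
print(harmonic_mean(a, b), geometric_mean(a, b))  # 3.2  4.0
```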

EXTENSION OF BLOCK MATRIX REPRESENTATION OF THE GEOMETRIC MEAN

  • Choi, Hana;Choi, Hayoung;Kim, Sejong;Lee, Hosoo
    • Journal of the Korean Mathematical Society / v.57 no.3 / pp.641-653 / 2020
  • To extend the well-known extremal characterization of the geometric mean of two n × n positive definite matrices A and B, we solve the following problem: $$\max\left\{X : X = X^{*},\ \begin{pmatrix} A & V & X \\ V & B & W \\ X & W & C \end{pmatrix} \geq 0\right\}.$$ We find an explicit expression of the maximum value in terms of the matrix geometric mean of Schur complements.
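
The classical characterization being extended states that the geometric mean A # B = A^{1/2}(A^{-1/2}BA^{-1/2})^{1/2}A^{1/2} is the largest self-adjoint X for which the 2 × 2 block matrix [[A, X], [X, B]] is positive semidefinite. A minimal numerical check of that known fact (Python with numpy/scipy; it does not reproduce the paper's 3 × 3 extension):

```python
import numpy as np
from scipy.linalg import sqrtm

def random_spd(n: int, rng: np.random.Generator) -> np.ndarray:
    """Generate a random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def geometric_mean(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Matrix geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = sqrtm(A)
    Ais = np.linalg.inv(As)
    return As @ sqrtm(Ais @ B @ Ais) @ As

rng = np.random.default_rng(0)
A, B = random_spd(4, rng), random_spd(4, rng)
G = np.real(geometric_mean(A, B))

# The 2x2 block matrix [[A, G], [G, B]] should be positive semidefinite,
# and G is the maximal self-adjoint X with this property.
block = np.block([[A, G], [G, B]])
print(np.linalg.eigvalsh(block).min() >= -1e-8)  # True
```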

BOUNDARIES OF THE CONE OF POSITIVE LINEAR MAPS AND ITS SUBCONES IN MATRIX ALGEBRAS

  • Kye, Seung-Hyeok
    • Journal of the Korean Mathematical Society / v.33 no.3 / pp.669-677 / 1996
  • Let $M_n$ be the $C^*$-algebra of all $n \times n$ matrices over the complex field, and $P[M_m, M_n]$ the convex cone of all positive linear maps from $M_m$ into $M_n$, that is, the maps which send the set of positive semidefinite matrices in $M_m$ into the set of positive semidefinite matrices in $M_n$. The convex structure of $P[M_m, M_n]$ is highly complicated even in low dimensions, and several authors [CL, KK, LW, O, R, S, W] have considered the possibility of decomposing $P[M_m, M_n]$ into subcones.
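
A positive linear map in this sense is simply one that sends positive semidefinite matrices to positive semidefinite matrices; the transposition map is the textbook example. A small illustration of the definition only (Python with numpy; the cone decompositions studied in the paper are not touched):

```python
import numpy as np

def is_psd(X: np.ndarray, tol: float = 1e-10) -> bool:
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh((X + X.conj().T) / 2).min() >= -tol

rng = np.random.default_rng(1)

# Build a random positive semidefinite matrix in M_3.
V = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
P = V @ V.conj().T

# The transposition map X -> X^T is a positive linear map from M_3 to M_3:
# it preserves positive semidefiniteness (the eigenvalues are unchanged),
# although it is the standard example of a map that is not completely positive.
print(is_psd(P), is_psd(P.T))  # True True
```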

Autoregressive Cholesky Factor Modeling for Marginalized Random Effects Models

  • Lee, Keunbaik;Sung, Sunah
    • Communications for Statistical Applications and Methods / v.21 no.2 / pp.169-181 / 2014
  • Marginalized random effects models (MREM) are commonly used to analyze longitudinal categorical data when the population-averaged effects are of interest. In these models, random effects are used to explain both subject and time variations. Estimating the random effects covariance matrix is not simple in MREM because of its high dimension and the positive-definiteness constraint. A relatively simple correlation structure, such as a homogeneous AR(1), is often assumed; however, this assumption is too strong and the resulting estimates of the fixed effects can be biased. To avoid this problem, we introduce an approach that models a heterogeneous random effects covariance matrix using a modified Cholesky decomposition. The approach yields parameters that can be modeled easily, without concern that the resulting estimator will fail to be positive definite, and the parameters have a sensible interpretation. We analyze metabolic syndrome data from a Korean Genomic Epidemiology Study using this method.
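
The modified Cholesky decomposition referred to here writes TΣT' = D with T unit lower triangular and D diagonal, so the below-diagonal entries of T (the generalized autoregressive parameters) and the log innovation variances are unconstrained real numbers, while the reconstructed Σ is automatically positive definite. A minimal sketch of that reparameterization (Python with numpy; illustrative parameter values rather than the paper's fitted model, and sign conventions for the parameters vary by author):

```python
import numpy as np

def covariance_from_modified_cholesky(garps: np.ndarray,
                                      log_ivs: np.ndarray) -> np.ndarray:
    """Rebuild a covariance matrix from unconstrained parameters.

    garps   : below-diagonal entries of the unit lower triangular matrix T
              (generalized autoregressive parameters), any real values.
    log_ivs : log innovation variances, any real values.
    Returns Sigma = T^{-1} D T^{-T}, positive definite by construction.
    """
    q = log_ivs.size
    T = np.eye(q)
    T[np.tril_indices(q, k=-1)] = garps          # fill strictly lower triangle
    D = np.diag(np.exp(log_ivs))                 # positive innovation variances
    T_inv = np.linalg.inv(T)
    return T_inv @ D @ T_inv.T

# Arbitrary (hypothetical) unconstrained parameters for a 3x3 covariance matrix.
garps = np.array([0.4, -0.2, 0.7])
log_ivs = np.array([0.0, -0.5, 0.3])
Sigma = covariance_from_modified_cholesky(garps, log_ivs)
print(np.linalg.eigvalsh(Sigma).min() > 0)  # True: positive definite
```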

SAOR METHOD FOR FUZZY LINEAR SYSTEM

  • Miao, Shu-Xin;Zheng, Bing
    • Journal of applied mathematics & informatics / v.26 no.5_6 / pp.839-850 / 2008
  • In this paper, the symmetric accelerated overrelaxation (SAOR) method for solving an $n \times n$ fuzzy linear system is discussed, and convergence theorems are given for the special cases in which the matrix S of the augmented system SX = Y is an H-matrix, a consistently ordered matrix, or a symmetric positive definite matrix. Numerical examples are presented to illustrate the theory.
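
AOR-type methods use two relaxation parameters, and taking them equal recovers SOR; the symmetric variants run a forward and a backward sweep per iteration. The sketch below shows only the symmetric SOR special case on a small symmetric positive definite system (Python with numpy; the paper's fuzzy augmented system SX = Y and the general SAOR parameters are not reproduced):

```python
import numpy as np

def ssor_solve(A: np.ndarray, b: np.ndarray, omega: float = 1.2,
               tol: float = 1e-10, max_iter: int = 500) -> np.ndarray:
    """Symmetric SOR: one forward and one backward Gauss-Seidel-type sweep
    per iteration, each relaxed by omega. Converges for SPD A and 0 < omega < 2."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        # forward sweep (uses the most recently updated components)
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * s / A[i, i]
        # backward sweep
        for i in range(n - 1, -1, -1):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Small symmetric positive definite test system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = ssor_solve(A, b)
print(np.allclose(A @ x, b))  # True
```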

ESOR METHOD WITH DIAGONAL PRECONDITIONERS FOR SPD LINEAR SYSTEMS

  • Oh, Seyoung;Yun, Jae Heon;Kim, Kyoum Sun
    • Journal of applied mathematics & informatics / v.33 no.1_2 / pp.111-118 / 2015
  • In this paper, we propose an extended SOR (ESOR) method with diagonal preconditioners for solving symmetric positive definite linear systems, and we provide convergence results for the ESOR method. Finally, we present numerical experiments to evaluate the performance of the ESOR method with diagonal preconditioners.
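
A diagonal (Jacobi) preconditioner symmetrically rescales an SPD system so the coefficient matrix has unit diagonal, usually reducing its condition number before the relaxation iteration is applied. A minimal sketch of that preconditioning step alone, not of the ESOR iteration itself (Python with numpy):

```python
import numpy as np

def jacobi_precondition(A: np.ndarray, b: np.ndarray):
    """Symmetric diagonal scaling of an SPD system A x = b.

    Returns (A_hat, b_hat, d) with A_hat = D^{-1/2} A D^{-1/2} (unit diagonal)
    and b_hat = D^{-1/2} b; the original solution is x = D^{-1/2} y where
    y solves A_hat y = b_hat.
    """
    d = np.sqrt(np.diag(A))
    A_hat = A / np.outer(d, d)
    b_hat = b / d
    return A_hat, b_hat, d

A = np.array([[100.0, 1.0], [1.0, 0.02]])   # poorly scaled SPD matrix
b = np.array([1.0, 1.0])
A_hat, b_hat, d = jacobi_precondition(A, b)

# The scaled matrix has unit diagonal and a much smaller condition number.
print(np.diag(A_hat))                              # [1. 1.]
print(np.linalg.cond(A), np.linalg.cond(A_hat))

# Solve the scaled system and map back to the original variables.
y = np.linalg.solve(A_hat, b_hat)
x = y / d
print(np.allclose(A @ x, b))  # True
```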

GRADIENT PROJECTION METHODS FOR THE n-COUPLING PROBLEM

  • Kum, Sangho;Yun, Sangwoon
    • Journal of the Korean Mathematical Society / v.56 no.4 / pp.1001-1016 / 2019
  • We are concerned with optimization methods for the $L^2$-Wasserstein least squares problem of Gaussian measures (alternatively, the n-coupling problem). Based on its equivalent form on the convex cone of positive definite matrices of fixed size and the strict convexity of the variance function, we present an implementable (accelerated) gradient method for finding the unique minimizer. A global convergence rate analysis is provided based on a derived upper bound on the Lipschitz constant of the gradient.
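
For centered Gaussian measures, the squared $L^2$-Wasserstein distance reduces to the Bures-Wasserstein distance tr(A) + tr(B) − 2 tr((A^{1/2}BA^{1/2})^{1/2}) between covariance matrices, and a projected gradient step can stay in the cone by clipping eigenvalues away from zero. A minimal sketch of those two ingredients (Python with numpy/scipy; the paper's accelerated scheme, step sizes, and convergence analysis are not reproduced):

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_sq(A: np.ndarray, B: np.ndarray) -> float:
    """Squared L2-Wasserstein distance between N(0, A) and N(0, B):
    tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})."""
    As = sqrtm(A)
    cross = sqrtm(As @ B @ As)
    return float(np.trace(A) + np.trace(B) - 2.0 * np.real(np.trace(cross)))

def project_to_pd_cone(X: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Project a symmetric matrix onto the positive definite cone by
    symmetrizing and clipping eigenvalues below eps."""
    S = (X + X.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, eps, None)) @ V.T

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.3], [-0.3, 1.5]])
print(bures_wasserstein_sq(A, B) >= 0)  # True

# Example: an indefinite symmetric matrix is mapped back into the PD cone.
X = np.array([[1.0, 2.0], [2.0, 1.0]])
print(np.linalg.eigvalsh(project_to_pd_cone(X)).min() > 0)  # True
```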

Bayesian Modeling of Random Effects Covariance Matrix for Generalized Linear Mixed Models

  • Lee, Keunbaik
    • Communications for Statistical Applications and Methods / v.20 no.3 / pp.235-240 / 2013
  • Generalized linear mixed models (GLMMs) are frequently used for the analysis of longitudinal categorical data when the subject-specific effects are of interest. In GLMMs, the structure of the random effects covariance matrix is important both for the estimation of fixed effects and for explaining subject and time variations. Estimating the matrix is not simple because of its high dimension and the positive-definiteness constraint; consequently, a simple covariance structure such as AR(1) is often used in practice. However, this strong assumption can result in biased estimates of the fixed effects. In this paper, we introduce Bayesian modeling approaches for the random effects covariance matrix using a modified Cholesky decomposition. The modified Cholesky decomposition accommodates a heterogeneous random effects covariance matrix, and the resulting estimated covariance matrix is guaranteed to be positive definite. We analyze metabolic syndrome data from a Korean Genomic Epidemiology Study using these methods.

Poisson linear mixed models with ARMA random effects covariance matrix

  • Choi, Jiin;Lee, Keunbaik
    • Journal of the Korean Data and Information Science Society / v.28 no.4 / pp.927-936 / 2017
  • To analyze longitudinal count data, Poisson linear mixed models are commonly used. In these models, the random effects covariance matrix explains both within-subject variation and serial correlation of repeated count outcomes. When the random effects covariance matrix is misspecified, the estimates of covariate effects can be biased. Therefore, we propose reasonable and flexible structures for the covariance matrix using an autoregressive and moving average Cholesky decomposition (ARMACD). The ARMACD factors the covariance matrix into generalized autoregressive parameters (GARPs), generalized moving average parameters (GMAPs), and innovation variances (IVs). Positive IVs guarantee the positive definiteness of the covariance matrix. In this paper, we use the ARMACD to model the random effects covariance matrix in Poisson loglinear mixed models. We analyze epileptic seizure data using the proposed model.

Geodesic Clustering for Covariance Matrices

  • Lee, Haesung;Ahn, Hyun-Jung;Kim, Kwang-Rae;Kim, Peter T.;Koo, Ja-Yong
    • Communications for Statistical Applications and Methods / v.22 no.4 / pp.321-331 / 2015
  • The K-means clustering algorithm is a popular and widely used method for clustering. This paper considers a geodesic clustering algorithm for data consisting of symmetric positive definite (SPD) matrices, combining the K-means framework with the Riemannian (non-Euclidean) geometric structure of the SPD manifold. A K-means clustering algorithm has two main steps, for which we need a dissimilarity measure between two matrix data points and a way of computing centroids of the observations in each cluster. To use the Riemannian structure, we adopt the geodesic distance and the intrinsic mean for symmetric positive definite matrices. We demonstrate the proposed method through simulations as well as an application to real financial data.
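
The two ingredients the abstract names are the affine-invariant geodesic distance d(A, B) = ‖log(A^{-1/2}BA^{-1/2})‖_F and the intrinsic (Karcher) mean. A minimal sketch of both using a basic fixed-point iteration (Python with numpy/scipy; not the authors' clustering code):

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def geodesic_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant Riemannian distance between SPD matrices:
    || log(A^{-1/2} B A^{-1/2}) ||_F."""
    Ais = inv(sqrtm(A))
    return float(np.linalg.norm(logm(Ais @ B @ Ais), "fro"))

def intrinsic_mean(mats, n_iter: int = 50) -> np.ndarray:
    """Karcher (intrinsic) mean of SPD matrices by fixed-point iteration:
    M <- M^{1/2} exp( mean_i log(M^{-1/2} X_i M^{-1/2}) ) M^{1/2}."""
    M = np.mean(mats, axis=0)            # arithmetic mean as starting point
    for _ in range(n_iter):
        Ms = sqrtm(M)
        Mis = inv(Ms)
        T = np.mean([logm(Mis @ X @ Mis) for X in mats], axis=0)
        M = np.real(Ms @ expm(T) @ Ms)
    return M

rng = np.random.default_rng(2)
mats = []
for _ in range(4):
    G = rng.standard_normal((3, 3))
    mats.append(G @ G.T + 3 * np.eye(3))   # random SPD observations

M = intrinsic_mean(mats)
# In a geodesic K-means step, each SPD observation would be assigned to the
# cluster whose intrinsic mean is closest in geodesic distance.
print([round(geodesic_distance(M, X), 3) for X in mats])
```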