• Title/Summary/Keyword: symmetric positive definite matrix


A Minimum Degree Ordering Algorithm using the Lower and Upper Bounds of Degrees

  • Park, Chan-Kyoo;Doh, Seungyong;Park, Soondal;Kim, Woo-Je
    • Management Science and Financial Engineering
    • /
    • v.8 no.1
    • /
    • pp.1-19
    • /
    • 2002
  • Ordering is used to reduce the amount of fill-in in the Cholesky factor of a symmetric positive definite matrix. One of the most efficient ordering methods is the minimum degree ordering algorithm (MDO). In this paper, we provide several techniques that improve the performance of MDO implemented with the clique storage scheme. First, we develop an absorption of nodes into cliques, which reduces the number of cliques and the amount of storage space required for MDO. Second, we present a modified minimum degree ordering algorithm in which the number of degree updates is reduced by introducing lower bounds on the degrees. Third, using both the lower and upper bounds of the degrees, we develop an approximate minimum degree ordering algorithm. Experimental results show that the proposed algorithm is competitive with the minimum degree ordering algorithm based on quotient graphs, in terms of both ordering time and the number of nonzeros in the Cholesky factor.
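
For orientation, the following is a minimal Python sketch of the classical minimum degree ordering idea the abstract builds on: repeatedly eliminate a node of minimum degree and turn its neighborhood into a clique (the fill edges). It does not reproduce the paper's clique storage scheme, node absorption, or degree bounds, and the function name `minimum_degree_order` is illustrative only.

```python
# Minimal sketch of classic minimum degree ordering (symbolic elimination).
# Illustrative only; not the paper's clique-storage or bound-based variants.

def minimum_degree_order(adj):
    """adj: dict {node: set of neighbors} for a symmetric sparsity pattern."""
    adj = {v: set(nbrs) - {v} for v, nbrs in adj.items()}  # work on a copy
    order = []
    remaining = set(adj)
    while remaining:
        # pick an uneliminated node of minimum current degree
        v = min(remaining, key=lambda u: len(adj[u]))
        nbrs = adj[v] & remaining
        # eliminating v turns its neighborhood into a clique (fill edges)
        for u in nbrs:
            adj[u] |= nbrs - {u}
            adj[u].discard(v)
        order.append(v)
        remaining.remove(v)
    return order

# Example: arrowhead pattern, where ordering the hub node last avoids all fill.
pattern = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(minimum_degree_order(pattern))  # e.g. [1, 2, 3, 0]
```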

A PROXIMAL POINT-TYPE ALGORITHM FOR PSEUDOMONOTONE EQUILIBRIUM PROBLEMS

  • Kim, Jong-Kyu;Anh, Pham Ngoc;Hyun, Ho-Geun
    • Bulletin of the Korean Mathematical Society
    • /
    • v.49 no.4
    • /
    • pp.749-759
    • /
    • 2012
  • A globally convergent algorithm for solving equilibrium problems is proposed. The algorithm is based on a proximal point algorithm (PPA for short) with a positive definite matrix M which is not necessarily symmetric. The proximal function in the existing PPA is usually the gradient of a quadratic function, namely $\nabla(\|x\|_M^2)$. This leads to a proximal point-type algorithm. We first solve pseudomonotone equilibrium problems without a Lipschitz assumption and prove the convergence of the algorithms. Next, we couple this technique with the Banach contraction method for multivalued variational inequalities. Finally, some computational results are given.
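
For context, the regularized subproblem that proximal point-type methods for equilibrium problems typically solve at each iteration can be written as below; the symbols $f$, $C$, and $c_k$ and the exact placement of $M$ are standard-form assumptions for illustration, not taken from the paper itself.

```latex
% Equilibrium problem EP(f,C): find x^* \in C with f(x^*, y) \ge 0 for all y \in C.
% Typical PPA-type regularized subproblem at iterate x^k (illustrative form only;
% the paper's scheme with a nonsymmetric positive definite M may differ in detail):
\[
  \text{find } x^{k+1} \in C \quad\text{such that}\quad
  f(x^{k+1}, y) + \frac{1}{c_k}\,\bigl\langle M\,(x^{k+1} - x^k),\, y - x^{k+1} \bigr\rangle \;\ge\; 0
  \qquad \forall\, y \in C,
\]
% where c_k > 0 is a regularization parameter; when M is symmetric, the term
% corresponds to the quadratic proximal function \|x\|_M^2 = \langle Mx, x\rangle
% mentioned in the abstract.
```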

A MULTILEVEL BLOCK INCOMPLETE CHOLESKY PRECONDITIONER FOR SOLVING NORMAL EQUATIONS IN LINEAR LEAST SQUARES PROBLEMS

  • Jun, Zhang;Tong, Xiao
    • Journal of applied mathematics & informatics
    • /
    • v.11 no.1_2
    • /
    • pp.59-80
    • /
    • 2003
  • An incomplete factorization method for preconditioning symmetric positive definite matrices is introduced to solve normal equations. The normal equations are formed to solve linear least squares problems. The procedure is based on a block incomplete Cholesky factorization and a multilevel recursive strategy with an approximate Schur complement matrix formed implicitly. A diagonal perturbation strategy is implemented to enhance the robustness of the factorization. The resulting factors are used as a preconditioner for the conjugate gradient method. Numerical experiments are used to show the robustness and efficiency of this preconditioning technique and to compare it with two other preconditioners.
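
As a rough illustration of the general technique (not the paper's multilevel block factorization), the sketch below builds a plain zero-fill incomplete Cholesky factor with an optional diagonal shift and uses it as a preconditioner for SciPy's conjugate gradient applied to the normal equations; the helper name `ic0` and the shift value are hypothetical choices for this example.

```python
# Hedged sketch: IC(0) (zero-fill incomplete Cholesky) preconditioning of the
# normal equations of a least-squares problem, used with SciPy's CG.
# Single-level, dense-array version for brevity; NOT the paper's method.
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, cg

def ic0(N, shift=0.0):
    """Zero-fill incomplete Cholesky of an SPD matrix N (dense, for brevity).
    `shift` perturbs the diagonal, loosely mirroring the abstract's robustness fix."""
    n = N.shape[0]
    L = np.tril(N + shift * np.diag(np.diag(N)))
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        L[k + 1:, k] /= L[k, k]                  # zero entries stay zero
        for j in range(k + 1, n):
            if L[j, k] != 0.0:
                # zero-fill rule: update only positions already nonzero
                mask = L[j:, j] != 0.0
                L[j:, j] -= mask * L[j:, k] * L[j, k]
    return L

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))                # overdetermined least-squares matrix
b = rng.standard_normal(50)
N, rhs = A.T @ A, A.T @ b                        # normal equations N x = A^T b

L = ic0(N, shift=0.01)
M = LinearOperator(N.shape, matvec=lambda r: solve_triangular(
        L.T, solve_triangular(L, r, lower=True), lower=False))

x, info = cg(N, rhs, M=M)                        # preconditioned conjugate gradient
print(info, np.linalg.norm(N @ x - rhs))
```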

Parallel Algorithm of Conjugate Gradient Solver using OpenGL Compute Shader

  • Va, Hongly;Lee, Do-keyong;Hong, Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.1-9
    • /
    • 2021
  • An OpenGL compute shader is a shader stage that operates differently from the other shader stages and can be used for general-purpose parallel computation on arbitrary data. This paper proposes a GPU-based parallel algorithm that solves sparse linear systems with the iterative conjugate gradient method, performing its computations in an OpenGL compute shader. Such a sparse linear solver is typically used to solve large linear systems whose coefficient matrix is symmetric positive definite. Four well-known matrix storage formats (Dense, COO, ELL, and CSR) were used. Performance comparisons from our experimental tests on eight sparse matrices show that the GPU-based linear solver is much faster than the CPU-based one, with a best average computing time of 0.64 ms on the GPU versus 15.37 ms on the CPU.
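
The building blocks named in the abstract (CSR storage, the sparse matrix-vector product, and the conjugate gradient iteration) can be sketched on the CPU as follows; this plain NumPy version is only a reference for the mathematics and implies nothing about the paper's OpenGL compute shader implementation.

```python
# CPU reference sketch of the conjugate gradient method with a CSR
# matrix-vector product. Pure NumPy; no GPU code is implied.
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in CSR (values, column indices, row pointers)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                       # one row per parallel work item
        start, end = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

def conjugate_gradient(data, indices, indptr, b, tol=1e-8, maxiter=1000):
    """Solve A x = b for a symmetric positive definite A given in CSR form."""
    x = np.zeros_like(b)
    r = b - csr_matvec(data, indices, indptr, x)
    p = r.copy()
    rs = np.dot(r, r)
    for _ in range(maxiter):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rs / np.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.dot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: 1-D Laplacian (tridiagonal SPD matrix) in CSR form via SciPy.
from scipy.sparse import diags
A = diags([-1, 2, -1], [-1, 0, 1], shape=(100, 100), format="csr")
b = np.ones(100)
x = conjugate_gradient(A.data, A.indices, A.indptr, b)
print(np.linalg.norm(A @ x - b))
```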