• Title/Summary/Keyword: sparse matrices

Search results: 67

A PRECONDITIONER FOR THE NORMAL EQUATIONS

  • Salkuyeh, Davod Khojasteh
    • Journal of applied mathematics & informatics / v.28 no.3_4 / pp.687-696 / 2010
  • In this paper, an algorithm is proposed for computing the sparse approximate inverse factor of the matrix $A^T A$, where $A$ is an $m \times n$ matrix with $m \geq n$ and rank(A) = n. The inverse factor is computed without forming the matrix $A^T A$ explicitly. The computed sparse approximate inverse factor is applied as a preconditioner for solving the normal equations in conjunction with the CGNR algorithm. Some numerical experiments on test matrices are presented to show the efficiency of the method, and a comparison with some available methods is included.
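
The iteration that the paper's preconditioner accelerates can be sketched as follows: a minimal, unpreconditioned CGNR in Python, forming only products with A and A^T, never A^T A itself. (The paper's sparse approximate inverse factor would additionally be applied to the A^T-residual vectors; that step, and all names below, are illustrative.)

```python
# Unpreconditioned CGNR sketch: solve min ||Ax - b|| via CG on the
# normal equations A^T A x = A^T b, using only matvecs with A and A^T.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def matvec_T(A, y):
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

def cgnr(A, b, tol=1e-12, maxit=100):
    n = len(A[0])
    x = [0.0] * n
    r = b[:]                       # residual b - A x  (x = 0 initially)
    z = matvec_T(A, r)             # A^T r
    p = z[:]
    zz = sum(zi * zi for zi in z)
    for _ in range(maxit):
        w = matvec(A, p)
        alpha = zz / sum(wi * wi for wi in w)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * wi for ri, wi in zip(r, w)]
        z = matvec_T(A, r)
        zz_new = sum(zi * zi for zi in z)
        if zz_new < tol:
            break
        p = [zi + (zz_new / zz) * pi for zi, pi in zip(z, p)]
        zz = zz_new
    return x
```

For the overdetermined system A = [[1,0],[0,1],[1,1]], b = [1,2,3], the least squares solution is x = [1, 2].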

AN ITERATIVE ALGORITHM FOR THE LEAST SQUARES SOLUTIONS OF MATRIX EQUATIONS OVER SYMMETRIC ARROWHEAD MATRICES

  • Ali Beik, Fatemeh Panjeh;Salkuyeh, Davod Khojasteh
    • Journal of the Korean Mathematical Society / v.52 no.2 / pp.349-372 / 2015
  • This paper exploits an oblique projection technique to solve a general class of large, sparse least squares problems over symmetric arrowhead matrices. Specifically, we develop the conjugate gradient least squares (CGLS) algorithm to obtain the minimum-norm symmetric arrowhead least squares solution of general coupled matrix equations. Furthermore, an approach is offered for computing the optimal approximate symmetric arrowhead solution of the mentioned least squares problem corresponding to a given arbitrary matrix group. In addition, the minimization property of the proposed algorithm is established by utilizing the features of the approximate solutions derived by the projection method. Finally, some numerical experiments are reported which demonstrate the applicability and feasibility of the proposed algorithm.
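
One structural ingredient of such methods is the orthogonal (Frobenius-norm) projection onto the subspace of symmetric arrowhead matrices, i.e. matrices with nonzeros only on the diagonal, the first row, and the first column. A minimal sketch (the function name and dense list-of-lists representation are illustrative, not the paper's):

```python
# Frobenius-orthogonal projection onto symmetric arrowhead matrices:
# symmetrize, then zero every entry off the "arrow" pattern.
# Both steps are orthogonal projections onto subspaces and they
# commute, so their composition projects onto the intersection.

def arrowhead_projection(M):
    n = len(M)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or i == 0 or j == 0:
                P[i][j] = 0.5 * (M[i][j] + M[j][i])
    return P
```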

THE EXISTENCE THEOREM OF ORTHOGONAL MATRICES WITH p NONZERO ENTRIES

  • CHEON, GI-SANG;LEE, SANG-GU;SONG, SEOK-ZUN
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.4 no.1 / pp.109-119 / 2000
  • It was shown in 1993 that if Q is a fully indecomposable $n \times n$ orthogonal matrix, then Q has at least 4n-4 nonzero entries. In this paper, we show that for each integer p with $4n-4 \leq p \leq n^2$, there exists a fully indecomposable $n \times n$ orthogonal matrix with exactly p nonzero entries. Furthermore, we give a construction of a fully indecomposable $n \times n$ orthogonal matrix with exactly 4n-4 nonzero entries. This is part of the study of sparse matrices.

AN ALGORITHM FOR MULTIPLICATIONS IN $F_{2^m}$

  • Oh, SeYoung;Yoon, ChungSup
    • Journal of the Chungcheong Mathematical Society / v.15 no.2 / pp.85-96 / 2003
  • An efficient algorithm for multiplication in a binary finite field using a normal basis representation of $F_{2^m}$ is proposed for software implementations of elliptic curve cryptography. The algorithm is developed using the storage scheme of sparse matrices.
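
The paper works in a normal-basis representation with sparsely stored multiplication matrices; as a simpler, generic illustration of $F_{2^m}$ arithmetic, here is polynomial-basis multiplication in $F_{2^8}$. The bits of an integer are the polynomial coefficients, and the reduction polynomial is the AES polynomial $x^8 + x^4 + x^3 + x + 1$; both choices are this sketch's assumptions, not the paper's field or basis.

```python
# Shift-and-add ("Russian peasant") multiplication in F_{2^8}:
# addition is XOR; reduce modulo the irreducible polynomial 0x11B
# whenever the intermediate degree reaches m.

def gf2m_mul(a, b, m=8, poly=0x11B):
    result = 0
    while b:
        if b & 1:            # add (XOR) a shifted copy of a for each set bit of b
            result ^= a
        b >>= 1
        a <<= 1
        if a >> m:           # degree reached m: subtract (XOR) the modulus
            a ^= poly
    return result
```

For example, {57}·{83} = {C1} and {53}·{CA} = {01} in this field (the worked examples from the AES specification).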

ITERATIVE ALGORITHMS AND DOMAIN DECOMPOSITION METHODS IN PARTIAL DIFFERENTIAL EQUATIONS

  • Lee, Jun Yull
    • Korean Journal of Mathematics / v.13 no.1 / pp.113-122 / 2005
  • We consider iterative schemes for the large sparse linear systems that arise in solving partial differential equations. Using the spectral radius of the iteration matrices, the optimal relaxation parameters and other good parameter choices can be obtained. With those parameters we compare the effectiveness of the SOR and SSOR algorithms. Applying the Crank-Nicolson approximation, we observe the error distribution under domain decomposition. The number of processors implied by the domain decomposition affects both time and error. Numerical experiments show that the relative effectiveness of SOR and SSOR can be reversed as the time step size varies, which is not the usual case. Finally, these phenomena suggest conjectures about an equilibrium time grid for SOR and SSOR.
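
The SOR iteration underlying the comparison can be sketched in a few lines. For consistently ordered matrices the classical optimal relaxation parameter is $\omega^* = 2/(1 + \sqrt{1 - \rho(J)^2})$, with $\rho(J)$ the spectral radius of the Jacobi iteration matrix; the toy 1-D Laplacian below is an illustration, not one of the paper's test problems.

```python
# SOR: a forward Gauss-Seidel sweep blended with the previous
# iterate by the relaxation parameter omega.

def sor(A, b, omega, iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x
```

For the 3×3 tridiagonal Laplacian with b = [1, 1, 1], $\rho(J) = \cos(\pi/4)$ gives $\omega^* \approx 1.1716$, and the exact solution is [1.5, 2, 1.5].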

Experimental Study on Supernodal Column Cholesky Factorization in Interior-Point Methods (내부점방법을 위한 초마디 열촐레스키 분해의 실험적 고찰)

  • 설동렬;정호원;박순달
    • Korean Management Science Review / v.15 no.1 / pp.87-95 / 1998
  • The computational speed of interior point methods depends on the speed of Cholesky factorization. Supernodal column Cholesky factorization is a fast method that performs the Cholesky factorization of sparse matrices by exploiting the computer's architectural characteristics. Three steps are necessary for the symbolic factorization phase: creation of the elimination tree, ordering by a postorder of the elimination tree, and creation of supernodes. We study the order in which these three steps are performed and their efficient implementation.
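
The first of those steps, building the elimination tree, can be sketched directly from the sparsity pattern: the parent of column r is the first column j > r into which r's subtree is linked. This is the classical algorithm without path compression; the set-of-pairs pattern representation is this sketch's convention.

```python
# Elimination tree of a symmetric sparse matrix given its nonzero
# pattern as a set of (i, j) index pairs.  parent[r] = -1 marks a root.

def etree(pattern, n):
    parent = [-1] * n
    for j in range(n):
        for i in range(j):
            if (i, j) in pattern or (j, i) in pattern:
                r = i
                # climb until we reach a root or a node already tied to j
                while parent[r] != -1 and parent[r] != j:
                    r = parent[r]
                if parent[r] == -1:
                    parent[r] = j
    return parent
```

A tridiagonal pattern yields the chain 0 → 1 → 2 → 3; an entry (0, 3) added to (0, 1) routes column 1's parent to 3, reflecting the fill-in the factorization would create.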

Design and implementation of mathematical programming software-LinPro (수리계획 소프트웨어 LinPro의 설계 및 구현)

  • 양광민
    • Korean Management Science Review / v.12 no.1 / pp.139-156 / 1995
  • This study addresses the basic requirements for mathematical programming software, discusses considerations in designing such software and implementation issues faced in developing these types of applications, and shows some examples of code developed in the course of the project. A project of this type requires long and ever-changing evolutionary phases. The experience is therefore valuable, suggesting useful hints that may be salvaged for similar projects as well as providing reusable code. In particular, scanning and parsing free-format inputs, symbol table management, mixed-language programming, and data structures for large sparse matrices are indispensable to much management science software development. Extensions to be made are also discussed.

A partial proof of the convergence of the block-ADI preconditioner

  • Ma, Sang-Back
    • Communications of the Korean Mathematical Society / v.11 no.2 / pp.495-501 / 1996
  • There is currently renewed interest in the ADI (Alternating Direction Implicit) method as a preconditioner for iterative methods for solving large sparse linear systems, because of its suitability for parallel computation. However, the classical ADI method is not applicable to FE (Finite Element) matrices. In this paper we propose a Block-ADI method, which is applicable to finite element matrices. The new approach is a combination of the classical ADI method and domain decomposition. We also provide a partial proof of convergence, based on results for regular splittings, in the case where the discretization matrix is symmetric positive definite.
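
For reference, the classical (Peaceman-Rachford) ADI iteration that the Block-ADI method generalizes alternates two shifted solves per sweep. The tiny dense 2×2 splitting below is purely illustrative of the iteration's shape, not the paper's block-ADI or its finite element setting; the shift r and matrices are arbitrary SPD choices.

```python
def solve2(M, v):
    # direct 2x2 solve via Cramer's rule (stand-in for the tridiagonal
    # line solves a real ADI sweep would use)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * v[0] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def adi(H, V, b, r=1.0, iters=100):
    # Peaceman-Rachford ADI for (H + V) u = b with positive shift r
    n = len(b)
    u = [0.0] * n
    for _ in range(iters):
        # half-step: (H + rI) u = b - (V - rI) u_old
        rhs = [b[i] - sum(V[i][j] * u[j] for j in range(n)) + r * u[i]
               for i in range(n)]
        u = solve2([[H[i][j] + (r if i == j else 0.0) for j in range(n)]
                    for i in range(n)], rhs)
        # full step: (V + rI) u = b - (H - rI) u_half
        rhs = [b[i] - sum(H[i][j] * u[j] for j in range(n)) + r * u[i]
               for i in range(n)]
        u = solve2([[V[i][j] + (r if i == j else 0.0) for j in range(n)]
                    for i in range(n)], rhs)
    return u
```

With H = [[2,-1],[-1,2]], V = [[1,0],[0,2]] and b = [2,3], the exact solution of (H+V)u = b is u = [1, 1], and the iteration converges rapidly since both split operators are SPD.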

NUMERICAL STABILITY OF UPDATE METHOD FOR SYMMETRIC EIGENVALUE PROBLEM

  • Jang Ho-Jong;Lee Sung-Ho
    • Journal of applied mathematics & informatics / v.22 no.1_2 / pp.467-474 / 2006
  • We present and study the stability and convergence of a deflation-preconditioned conjugate gradient (PCG) scheme for the interior generalized eigenvalue problem $Ax = \lambda Bx$, where A and B are large sparse symmetric positive definite matrices. Numerical experiments are also presented to support our theoretical results.

Design Considerations on Large-scale Parallel Finite Element Code in Shared Memory Architecture with Multi-Core CPU (멀티코어 CPU를 갖는 공유 메모리 구조의 대규모 병렬 유한요소 코드에 대한 설계 고려 사항)

  • Cho, Jeong-Rae;Cho, Keunhee
    • Journal of the Computational Structural Engineering Institute of Korea / v.30 no.2 / pp.127-135 / 2017
  • The computing environment has changed rapidly, enabling large-scale finite element models to be analyzed at the PC or workstation level, thanks to multi-core CPUs, optimized math kernel libraries implementing BLAS and LAPACK, and the popularization of direct sparse solvers. In this paper, design considerations for a parallel finite element code for shared-memory multi-core CPU systems are proposed: (1) the use of optimized numerical libraries, (2) the use of the latest direct sparse solvers, (3) parallelism using OpenMP for computing element stiffness matrices, and (4) assembly techniques using triplets, a sparse matrix storage format. In addition, the parallelization effect is examined for the time-consuming tasks in a large-scale finite element model.
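
The triplet-based assembly of item (4) can be sketched as follows: each element appends its (row, column, value) entries independently, which parallelizes naturally per element, and duplicate positions are summed once at the end. This is a generic illustration of the technique, not the authors' code; the 1-D bar element used below is an assumption.

```python
# Triplet (COO) assembly of a global stiffness matrix from element
# stiffness matrices; duplicates at the same (i, j) are summed into
# a dict-of-keys sparse representation.

def assemble(n, element_dofs, element_matrices):
    triplets = []
    for dofs, ke in zip(element_dofs, element_matrices):
        for a, i in enumerate(dofs):
            for b, j in enumerate(dofs):
                triplets.append((i, j, ke[a][b]))
    K = {}
    for i, j, v in triplets:
        K[(i, j)] = K.get((i, j), 0.0) + v
    return K
```

Two unit-stiffness bar elements on degrees of freedom (0, 1) and (1, 2) produce the familiar tridiagonal pattern, with the shared node's diagonal entry summed to 2.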