• Title/Summary/Keyword: sparse matrix


A Network Reduction using Weak Coupling Method (Weak Coupling Method를 이용한 계통 축약)

  • Lee, H.M.;Rho, K.M.;Kwon, S.H.
    • Proceedings of the KIEE Conference
    • /
    • 1999.07c
    • /
    • pp.1067-1069
    • /
    • 1999
  • This paper presents a network reduction using the weak coupling method. A weak coupling method for identifying coherent generator groups is proposed. The partitioning technique used in this paper is based on a property of sparse matrix factorization: once the matrix has been factorized, the system is divided into a study area, boundary buses, and an external area. The reduction process for the external system starts with load bus elimination followed by coherent generator aggregation. Identification of coherent generator groups, network partitioning, and network reduction are presented.

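The load-bus elimination step can be pictured with a generic Kron (Ward) reduction: external buses are eliminated from the bus admittance matrix by a Schur complement computed through a sparse factorization. This is only an illustrative sketch with a made-up 4-bus admittance matrix, not the procedure or data used in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def kron_reduce(Y, keep, eliminate):
    """Eliminate the `eliminate` buses from admittance matrix Y via the Schur
    complement Y_kk - Y_ke * inv(Y_ee) * Y_ek, using a sparse LU of Y_ee."""
    Y = sp.csc_matrix(Y)
    Y_kk = Y[keep, :][:, keep].toarray()
    Y_ke = Y[keep, :][:, eliminate].toarray()
    Y_ek = Y[eliminate, :][:, keep].toarray()
    Y_ee = sp.csc_matrix(Y[eliminate, :][:, eliminate])
    lu = spla.splu(Y_ee)                 # sparse factorization of the external block
    return Y_kk - Y_ke @ lu.solve(Y_ek)

# Hypothetical 4-bus system: keep buses 0, 1 (study area), eliminate 2, 3 (external load buses).
Y = np.array([[ 4.0, -1.0, -2.0, -1.0],
              [-1.0,  3.0, -1.0, -1.0],
              [-2.0, -1.0,  5.0, -2.0],
              [-1.0, -1.0, -2.0,  4.0]])
print(kron_reduce(Y, keep=[0, 1], eliminate=[2, 3]))
```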

On the Fitting ANOVA Models to Unbalanced Data

  • Jong-Tae Park;Jae-Heon Lee;Byung-Chun Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.2 no.1
    • /
    • pp.48-54
    • /
    • 1995
  • A direct method for fitting analysis-of-variance models to unbalanced data is presented. The method exploits the sparsity and rank deficiency of the model matrix and is based on Gram-Schmidt orthogonalization of a set of sparse columns of that matrix. A computational algorithm for the sums of squares used in testing estimable hypotheses is given.

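For orientation, a hedged sketch of the extra-sum-of-squares idea on an unbalanced two-way layout: the dummy-coded model matrix is sparse and deliberately rank deficient, and the sum of squares for a hypothesis is obtained by comparing residual sums of squares of nested models. SVD-based least squares stands in here for the paper's Gram-Schmidt orthogonalization, and the factor levels and data are invented.

```python
import numpy as np

def dummy(levels):
    """One 0/1 indicator column per factor level (over-parameterized coding)."""
    levels = np.asarray(levels)
    return (levels[:, None] == np.unique(levels)[None, :]).astype(float)

def residual_ss(X, y):
    """Residual sum of squares of y after projection onto col(X);
    lstsq (SVD based) tolerates the rank deficiency of the coding."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Invented unbalanced two-way data: unequal cell counts for factors A and B.
rng = np.random.default_rng(0)
A = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0])
B = np.array([0, 1, 2, 0, 1, 2, 2, 2, 1, 0, 0])
y = 1.0 + 0.5 * A + 0.3 * B + rng.normal(scale=0.1, size=A.size)

ones = np.ones((A.size, 1))
X_full = np.hstack([ones, dummy(A), dummy(B)])   # sparse, rank-deficient model matrix
X_noA = np.hstack([ones, dummy(B)])              # reduced model without factor A

# Extra sum of squares for factor A, adjusted for the intercept and B.
print("SS(A | 1, B) =", residual_ss(X_noA, y) - residual_ss(X_full, y))
```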

Fast Calculation of Capacitance Matrix for Strip-Line Crossings and Other Interconnects (교차되는 스트립 라인구조에서의 빠른 커패시턴스 계산기법)

  • Srinivasan Jegannathan;Lee Dong-Jun;Shim Duk-Sun;Yang Cheol-Kwan;Kim Hyung-Kyu;Kim Hyeong-Seok
    • The Transactions of the Korean Institute of Electrical Engineers C
    • /
    • v.53 no.10
    • /
    • pp.539-545
    • /
    • 2004
  • In this paper, we consider the problem of capacitance matrix calculation for crossings of strip lines and other interconnects. The problem is formulated in the spectral domain using the method of moments, with sinc functions employed as basis functions. Conventionally, such a formulation leads to a large, non-sparse system of linear equations in which each coefficient requires the evaluation of a Fourier-Bessel integral, making the calculation computationally very intensive. In the method proposed here, we provide simplified expressions for the coefficients of the moment-method matrix. Using these simplified expressions, the coefficients can be calculated very efficiently, leading to a fast evaluation of the capacitance matrix of the structure. Computer simulations are provided to illustrate the validity of the proposed method.
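
As background only, the classic space-domain moment-method estimate of the capacitance of an isolated square plate illustrates why such formulations produce a dense matrix that is expensive to fill; it is not the paper's spectral-domain sinc-basis formulation for strip-line crossings, and the discretization and self-term approximation below are the textbook ones.

```python
import numpy as np

eps0 = 8.854e-12        # F/m
side = 1.0              # plate side length in metres
n = 20                  # n x n square patches
h = side / n            # patch width

# Patch centres.
c = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(c, c)
pts = np.column_stack([X.ravel(), Y.ravel()])
N = pts.shape[0]

# Dense moment matrix: potential at patch i due to unit charge density on patch j.
Z = np.empty((N, N))
for i in range(N):
    r = np.hypot(pts[i, 0] - pts[:, 0], pts[i, 1] - pts[:, 1])
    Z[i] = h * h / (4 * np.pi * eps0 * np.where(r > 0, r, 1.0))
np.fill_diagonal(Z, 0.8814 * h / (np.pi * eps0))   # standard self-term approximation

# Plate held at 1 V: solve for patch charge densities, then sum the total charge.
sigma = np.linalg.solve(Z, np.ones(N))
C = np.sum(sigma) * h * h                          # total charge / 1 V
print(f"estimated capacitance ~ {C * 1e12:.1f} pF (literature value is roughly 40 pF)")
```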

AN ITERATIVE ALGORITHM FOR THE LEAST SQUARES SOLUTIONS OF MATRIX EQUATIONS OVER SYMMETRIC ARROWHEAD MATRICES

  • Ali Beik, Fatemeh Panjeh;Salkuyeh, Davod Khojasteh
    • Journal of the Korean Mathematical Society
    • /
    • v.52 no.2
    • /
    • pp.349-372
    • /
    • 2015
  • This paper is concerned with exploiting an oblique projection technique to solve a general class of large and sparse least squares problems over symmetric arrowhead matrices. Specifically, we develop the conjugate gradient least squares (CGLS) algorithm to obtain the minimum-norm symmetric arrowhead least squares solution of general coupled matrix equations. Furthermore, an approach is offered for computing the optimal approximate symmetric arrowhead solution of the mentioned least squares problem corresponding to a given arbitrary matrix group. In addition, the minimization property of the proposed algorithm is established by utilizing the properties of the approximate solutions derived by the projection method. Finally, some numerical experiments are reported which reveal the applicability and feasibility of the proposed algorithm.
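
A hedged sketch of the plain CGLS iteration for min ||Ax - b||_2 on a sparse matrix may help fix ideas. The paper extends this iteration to coupled matrix equations constrained to symmetric arrowhead matrices; the code below is only the unconstrained vector prototype, run on random test data.

```python
import numpy as np
import scipy.sparse as sp

def cgls(A, b, tol=1e-10, maxiter=500):
    """Conjugate gradient on the normal equations, applied in factored form."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = sp.random(200, 50, density=0.05, random_state=1, format="csr")
b = rng.normal(size=200)
x = cgls(A, b)
print("normal-equation residual:", np.linalg.norm(A.T @ (A @ x) - A.T @ b))
```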

Refinement of Document Clustering by Using NMF

  • Shinnou, Hiroyuki;Sasaki, Minoru
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.430-439
    • /
    • 2007
  • In this paper, we use non-negative matrix factorization (NMF) to refine document clustering results. NMF is a dimensionality reduction method and is effective for document clustering, because a term-document matrix is high-dimensional and sparse. The initial matrix of the NMF algorithm can be regarded as a clustering result, so NMF can be used as a refinement method. First we perform min-max cut (Mcut), a powerful spectral clustering method, and then refine its result via NMF, which should yield a more accurate clustering. However, NMF often fails to improve the given clustering result. To overcome this problem, we use the Mcut objective function to stop the NMF iteration.

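A minimal sketch of the refinement idea, under stated assumptions: scikit-learn's SpectralClustering (with a cosine affinity) stands in for the Mcut step, a tiny invented corpus replaces the paper's data, the initial labels are encoded as the custom initial factor W for NMF, and refined cluster membership is read back from W by row-wise argmax. The Mcut-based stopping rule from the paper is not implemented here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import SpectralClustering
from sklearn.decomposition import NMF

docs = [  # invented miniature corpus with three rough topics
    "the rocket launch was delayed by bad weather",
    "nasa scheduled another rocket launch next month",
    "the satellite reached orbit shortly after launch",
    "the new car engine improves fuel economy",
    "electric car sales keep growing this year",
    "engine repairs for the old car were expensive",
    "the election campaign focused on tax policy",
    "voters debated the new tax policy before the election",
    "the senate passed the policy after a long debate",
]
k = 3
X = TfidfVectorizer(stop_words="english").fit_transform(docs)  # sparse document-term matrix

# Step 1: initial clustering (spectral clustering as a stand-in for min-max cut).
labels0 = SpectralClustering(n_clusters=k, affinity="cosine", random_state=0).fit_predict(X)

# Step 2: encode the initial result as the NMF initialization and refine.
W0 = np.full((X.shape[0], k), 0.01)
W0[np.arange(X.shape[0]), labels0] = 1.0          # cluster-indicator initialization
H0 = np.abs(np.random.default_rng(0).normal(size=(k, X.shape[1]))) + 0.01
W = NMF(n_components=k, init="custom", max_iter=500).fit_transform(X, W=W0, H=H0)

labels_refined = W.argmax(axis=1)                 # read refined clusters back from W
print("initial :", labels0)
print("refined :", labels_refined)
```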

Computational Efficiency on Frequency Domain Analysis of Large-scale Finite Element Model by Combination of Iterative and Direct Sparse Solver (반복-직접 희소 솔버 조합에 의한 대규모 유한요소 모델의 주파수 영역 해석의 계산 효율)

  • Cho, Jeong-Rae;Cho, Keunhee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.32 no.2
    • /
    • pp.117-124
    • /
    • 2019
  • Parallel sparse solvers are essential for solving large-scale finite element models. This paper introduces a combination of iterative and direct solvers that can be applied efficiently to problems requiring repeated solutions of a sequence of slightly changing systems of equations. In the iterative-direct sparse solver combination implemented in the parallel sparse solver package PARDISO, an iterative sparse solver is applied to each newly updated linear system, using the direct sparse solver's factorization of a previous system matrix as a preconditioner. If the solution does not converge within a preset number of iterations, it is obtained by the direct sparse solver instead, and the resulting factorization is used as the preconditioner for subsequent updated systems of equations. In this study, an improved method that sets the maximum number of iterations dynamically at the first Krylov iteration step is proposed and verified, thereby enhancing the computational efficiency of frequency domain analysis.
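
A hedged sketch of the iterative-direct combination described above, using SciPy rather than PARDISO: the most recent LU factorization is wrapped as a preconditioner for GMRES on each slightly perturbed system, and the solver falls back to a fresh direct solve (whose factorization is then reused) whenever GMRES does not converge within the iteration budget. The matrices and perturbations are invented.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
rng = np.random.default_rng(0)
base = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = rng.normal(size=n)

lu = spla.splu(base)                                  # direct factorization of the first system
for step in range(5):
    # Each step perturbs the matrix slightly, e.g. one point of a frequency sweep.
    A = (base + 0.01 * step * sp.eye(n, format="csc")).tocsc()
    M = spla.LinearOperator((n, n), matvec=lu.solve, dtype=float)  # last LU as preconditioner
    x, info = spla.gmres(A, b, M=M, atol=1e-10, maxiter=50)
    if info != 0:                                     # no convergence within the budget:
        lu = spla.splu(A)                             # fall back to a direct solve and keep
        x = lu.solve(b)                               # this factorization for later steps
    print(f"step {step}: {'iterative' if info == 0 else 'direct fallback'}, "
          f"residual = {np.linalg.norm(A @ x - b):.2e}")
```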

A Simple Toeplitz Channel Matrix Decomposition with Vectorization Technique for Large scaled MIMO System (벡터화 기술을 이용한 대규모 MIMO 시스템의 간단한 Toeplitz 채널 행렬 분해)

  • Park, Ju Yong;Hanif, Mohammad Abu;Kim, Jeong Su;Song, Sang Seob;Lee, Moon Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.9
    • /
    • pp.21-29
    • /
    • 2014
  • Due to the enormous number of users and limited memory space, memory saving has become an important issue for big data services. In a large-scale multiple-input multiple-output (MIMO) system, the Toeplitz channel can play a significant role in improving performance as well as power efficiency. In this paper, we propose a Toeplitz channel decomposition based on matrix vectorization. Here we model the channel of a large-scale MIMO system with a Toeplitz matrix, and we show that Toeplitz Jacket matrices decompose into Cooley-Tukey sparse matrices, as in the fast Fourier transform (FFT).
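
This is not the paper's Jacket-matrix decomposition, but the same spirit in its most common form: a Toeplitz channel matrix can be applied through FFTs by embedding it in a circulant matrix, replacing the dense N x N product with O(N log N) transforms. The channel taps below are random placeholders.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x,
    via a 2n-point circulant embedding and the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # first column of the circulant embedding
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real

rng = np.random.default_rng(0)
n = 8
h = rng.normal(size=n)                            # placeholder channel impulse response
c = h                                             # first column of the Toeplitz channel matrix
r = np.concatenate([[h[0]], np.zeros(n - 1)])     # first row (causal, lower-triangular channel)
x = rng.normal(size=n)                            # transmitted block

T = toeplitz(c, r)
print(np.allclose(T @ x, toeplitz_matvec_fft(c, r, x)))   # True
```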

Comparisons of Linear Feature Extraction Methods (선형적 특징추출 방법의 특성 비교)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.4
    • /
    • pp.121-130
    • /
    • 2009
  • In this paper, feature extraction methods, which form one approach to reducing the dimensionality of high-dimensional data, are empirically investigated. We selected the traditional PCA (Principal Component Analysis), ICA (Independent Component Analysis), NMF (Non-negative Matrix Factorization), and sNMF (Sparse NMF) for comparison. ICA features resemble those of the simple cells of V1, NMF implements a "parts-based representation in the brain", and sNMF is an improved version of NMF. Handwritten digits are used to visually inspect the extracted features, and the extracted features are also used to train multi-layer perceptrons for a recognition test. The characteristics of each feature extraction method should be useful when applying these methods to real-world problems.
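
A rough, hedged reproduction of this experimental setup: scikit-learn's digits dataset stands in for the handwritten digit data, MLPClassifier for the multi-layer perceptron, and NMF with an L1 penalty on the components approximates sNMF; none of these choices are taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, FastICA, NMF
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                                    # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

extractors = {
    "PCA":  PCA(n_components=30, random_state=0),
    "ICA":  FastICA(n_components=30, random_state=0, max_iter=1000),
    "NMF":  NMF(n_components=30, random_state=0, max_iter=1000),
    # L1-penalized components as a stand-in for sNMF (needs scikit-learn >= 1.0).
    "sNMF": NMF(n_components=30, random_state=0, max_iter=1000,
                l1_ratio=1.0, alpha_H=0.05),
}

for name, ext in extractors.items():
    F_tr = ext.fit_transform(X_tr)
    F_te = ext.transform(X_te)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(F_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(F_te, y_te):.3f}")
```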

Comparison of different iterative schemes for ISPH based on Rankine source solution

  • Zheng, Xing;Ma, Qing-wei;Duan, Wen-yang
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.9 no.4
    • /
    • pp.390-403
    • /
    • 2017
  • The Smoothed Particle Hydrodynamics (SPH) method adapts well to the simulation of free surface flow problems. There are two forms of SPH: weakly compressible SPH and incompressible SPH (ISPH). Compared with the former, ISPH performs better in many cases. ISPH based on a Rankine source solution (ISPH_R) can in turn perform better than traditional ISPH, as it can use a larger stepping length by avoiding the second-order derivative in the pressure Poisson equation. However, the ISPH_R method needs to solve a sparse linear system for the pressure Poisson equation, which is one of the most expensive parts of each time step. Iterative methods are normally used to solve the Poisson equation when the number of particles is large, but many iterative methods are available and the question of which one to use remains open. In this paper, three iterative methods suitable for and typical of large unsymmetric sparse systems, CGS, Bi-CGstab and GMRES, are compared. According to numerical tests on different cases, still water, dam breaking, violent tank sloshing and solitary wave slamming, the GMRES method is more efficient than CGS and Bi-CGstab for the ISPH method.
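
A small generic comparison in the spirit of the abstract, not the paper's ISPH solver: the three Krylov methods are run on the same unsymmetric sparse test system (a convection-diffusion style tridiagonal matrix stands in for the pressure Poisson matrix) and their run times and residuals are compared.

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10000
# Unsymmetric tridiagonal test matrix (convection-diffusion style stencil).
A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

for name, solver in [("CGS", spla.cgs), ("Bi-CGstab", spla.bicgstab), ("GMRES", spla.gmres)]:
    t0 = time.perf_counter()
    x, info = solver(A, b, atol=1e-8, maxiter=5000)
    dt = time.perf_counter() - t0
    print(f"{name:9s} converged={info == 0}  time={dt:.3f}s  "
          f"residual={np.linalg.norm(A @ x - b):.2e}")
```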

A PRECONDITIONER FOR THE LSQR ALGORITHM

  • Karimi, Saeed;Salkuyeh, Davod Khojasteh;Toutounian, Faezeh
    • Journal of Applied Mathematics & Informatics
    • /
    • v.26 no.1_2
    • /
    • pp.213-222
    • /
    • 2008
  • Iterative methods are often suitable for solving least squares problems $\min \|Ax-b\|_2$, where $A \in \mathbb{R}^{m \times n}$ is large and sparse. The well-known LSQR algorithm is among the iterative methods for solving these problems, and a good preconditioner is often needed to speed up its convergence. In this paper we present numerical experiments on applying a well-known preconditioner to the LSQR algorithm. The preconditioner is based on the $A^TA$-orthogonalization process, which furnishes an incomplete upper-lower factorization of the inverse of the normal matrix $A^TA$. The main advantage of this preconditioner is that only one of the factors is applied as a right preconditioner for the LSQR algorithm on the least squares problem $\min \|Ax-b\|_2$. The preconditioner needs only sparse matrix-vector product operations and significantly reduces the solution time compared to the unpreconditioned iteration. Finally, some numerical experiments on test matrices from the Harwell-Boeing collection are presented to show the robustness and efficiency of this preconditioner.

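A sketch of the right-preconditioning mechanism for LSQR in SciPy, under a deliberate simplification: a complete dense Cholesky factor R of A^T A is used here purely to demonstrate solving min ||A R^{-1} y - b||_2 and recovering x = R^{-1} y, whereas the paper builds an incomplete factor from the A^T A-orthogonalization process and keeps everything sparse. The test matrix is invented.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
m, n = 500, 80
A = sp.random(m, n, density=0.05, random_state=0, format="csr") + sp.diags(
    np.linspace(1, 50, n), shape=(m, n))             # sparse full-column-rank test matrix
b = rng.normal(size=m)

R = cholesky((A.T @ A).toarray())                    # A^T A = R^T R, R upper triangular

# Operator for A R^{-1}: matvec y -> A (R^{-1} y), rmatvec u -> R^{-T} (A^T u).
ARinv = spla.LinearOperator(
    (m, n),
    matvec=lambda y: A @ solve_triangular(R, y),
    rmatvec=lambda u: solve_triangular(R, A.T @ u, trans="T"),
    dtype=float,
)

res_plain = spla.lsqr(A, b, atol=1e-12, btol=1e-12)
res_prec = spla.lsqr(ARinv, b, atol=1e-12, btol=1e-12)
x_prec = solve_triangular(R, res_prec[0])            # recover x = R^{-1} y

print("LSQR iterations, unpreconditioned    :", res_plain[2])
print("LSQR iterations, right-preconditioned:", res_prec[2])
print("difference between solutions:", np.linalg.norm(res_plain[0] - x_prec))
```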