• Title/Summary/Keyword: Sparse Systems

Sparse Representation Learning of Kernel Space Using the Kernel Relaxation Procedure (커널 이완절차에 의한 커널 공간의 저밀도 표현 학습)

  • 류재홍;정종철
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2001.12a / pp.60-64 / 2001
  • In this paper, a new learning methodology for kernel methods is suggested that yields a sparse representation of the kernel space from the training patterns of classification problems. Among the traditional linear discriminant function algorithms (perceptron, relaxation, LMS (least mean squares), pseudoinverse), this paper shows that the relaxation procedure can obtain the maximum-margin separating hyperplane of a linearly separable pattern classification problem, as the SVM (Support Vector Machine) classifier does. The original relaxation method gives only a necessary condition for SV patterns; we suggest a sufficient condition to identify the SV patterns during the learning epochs. Experimental results show that the new methods have higher or equivalent performance compared to the conventional approach.
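
The relaxation procedure referred to in the abstract is the classical relaxation-with-margin rule for linear discriminants. A minimal sketch of that textbook rule (not the authors' kernelized, sparsity-inducing variant) might look like this, with all names and the toy data purely illustrative:

```python
import numpy as np

def relaxation_with_margin(X, y, b=1.0, eta=1.0, epochs=100):
    """Classical single-sample relaxation-with-margin rule: every pattern is
    mapped to z_i = y_i * [x_i, 1], and w is nudged whenever w.z_i <= b."""
    Z = np.hstack([X, np.ones((X.shape[0], 1))]) * y[:, None]  # augment + sign-normalize
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        updated = False
        for z in Z:
            margin = w @ z
            if margin <= b:                          # pattern violates the margin
                w += eta * (b - margin) / (z @ z) * z
                updated = True
        if not updated:                              # all margins satisfied
            break
    return w

# toy linearly separable problem
X = np.array([[2.0, 2.0], [1.5, 2.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = relaxation_with_margin(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # should match y
```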

An Efficient Parallel Algorithm for Solving Large Sparse Linear Systems of Equations (대형 Sparse 선형시스템 방정식을 풀기위한 효과적인 병렬 알고리즘)

  • Chae, Soo-Hoan;Lee, Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.14 no.4 / pp.388-397 / 1989
  • This paper describes an intelligent iterative parallel algorithm for solving large sparse linear systems of equations and proposes a static dataflow computer architecture for the implementation of the algorithm. Implemented with the Jacobi iterative method, the intelligent algorithm reduces the parallel execution time by reducing the time of the individual inner-product operations.
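
The Jacobi iteration underlying the algorithm is straightforward to sketch; the following is the plain sequential iteration on a sparse matrix, not the authors' parallel dataflow implementation:

```python
import numpy as np
import scipy.sparse as sp

def jacobi(A, b, x0=None, tol=1e-8, max_iter=500):
    """Classical (sequential) Jacobi iteration for a sparse system Ax = b:
    x_{k+1} = D^{-1} (b - (A - D) x_k), where D = diag(A)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    d = A.diagonal()
    R = A - sp.diags(d)                     # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# diagonally dominant test matrix, which guarantees Jacobi convergence
A = sp.diags([4.0, -1.0, -1.0], [0, -1, 1], shape=(1000, 1000), format="csr")
b = np.ones(1000)
x = jacobi(A, b)
print(np.linalg.norm(A @ x - b))            # residual should be small
```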

Computational Experience of Linear Equation Solvers for Self-Regular Interior-Point Methods (자동조절자 내부점 방법을 위한 선형방정식 해법)

  • Seol Tongryeol
    • Korean Management Science Review / v.21 no.2 / pp.43-60 / 2004
  • Every iteration of an interior-point method for large-scale optimization requires computing at least one orthogonal projection. In practice, symmetric variants of Gaussian elimination such as Cholesky factorization are accepted as the most efficient and sufficiently stable methods. In this paper, several specific implementation issues of the symmetric factorization that can be applied to solving such equations are discussed. The resulting code, called McSML, is shown to produce factors that are comparably sparse to those of other implementations in the MATLAB environment. It has been used for computing projections in McIPM, an efficient implementation of self-regular interior-point methods. Although the primary aim of developing McSML was to embed it in an interior-point optimizer, the code may equally well be used to solve general large sparse systems arising in other applications.
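
The projection computed at each interior-point iteration reduces to solving the "normal equations" (A D A^T) y = r with a symmetric factorization. A rough sketch of that step, using SciPy's sparse LU as a stand-in for the specialized sparse Cholesky code (McSML itself is not reproduced here, and the matrix below is a random stand-in for an LP constraint matrix):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def normal_equation_solve(A, d, r):
    """Solve the interior-point normal equations (A D A^T) y = r, D = diag(d), d > 0.
    A sparse LU factorization stands in for a specialized sparse Cholesky."""
    M = (A @ sp.diags(d) @ A.T).tocsc()
    return splu(M).solve(r)

rng = np.random.default_rng(0)
A = sp.random(50, 200, density=0.05, random_state=rng, format="csr")
A = A + sp.eye(50, 200)                     # ensure full row rank
d = rng.uniform(0.1, 10.0, size=200)        # interior-point scaling weights
r = rng.standard_normal(50)
y = normal_equation_solve(A, d, r)
print(np.linalg.norm((A @ sp.diags(d) @ A.T) @ y - r))   # residual should be tiny
```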

Greedy Learning of Sparse Eigenfaces for Face Recognition and Tracking

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.3 / pp.162-170 / 2014
  • Appearance-based subspace models such as eigenfaces have been widely recognized as one of the most successful approaches to face recognition and tracking. The success of eigenfaces has its origins mainly in the benefits offered by principal component analysis (PCA), namely the representational power of the underlying generative process for high-dimensional noisy facial image data. The sparse extension of PCA (SPCA) has recently received significant attention in the research community. SPCA works by imposing sparseness constraints on the eigenvectors, a technique that has been shown to yield more robust solutions in many applications. However, when SPCA is applied to facial images, the time and space complexity of the learning becomes a critical issue (e.g., for real-time tracking). In this paper, we propose a very fast and scalable greedy forward selection algorithm for SPCA. Unlike a recent semidefinite programming relaxation method that suffers from complex optimization, our approach can process several thousand data dimensions in reasonable time with little accuracy loss. The effectiveness of our proposed method was demonstrated on real-world face recognition and tracking datasets.
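
Greedy forward selection for sparse PCA can be illustrated in a few lines. The exhaustive variant below (which re-solves an eigenproblem for every candidate variable) conveys the idea but not the speed of the scalable algorithm proposed in the paper; the data are random stand-ins for image features:

```python
import numpy as np

def greedy_sparse_pc(C, k):
    """Greedy forward selection for a cardinality-k sparse principal component:
    at each step add the variable that most increases the largest eigenvalue
    of the covariance restricted to the selected support."""
    support = []
    for _ in range(k):
        best_j, best_lam = None, -np.inf
        for j in range(C.shape[0]):
            if j in support:
                continue
            idx = support + [j]
            lam = np.linalg.eigvalsh(C[np.ix_(idx, idx)])[-1]
            if lam > best_lam:
                best_j, best_lam = j, lam
        support.append(best_j)
    lams, vecs = np.linalg.eigh(C[np.ix_(support, support)])
    w = np.zeros(C.shape[0])
    w[support] = vecs[:, -1]          # loading vector supported on the chosen variables
    return w, lams[-1]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))    # stand-in for image feature data
C = np.cov(X, rowvar=False)
w, var = greedy_sparse_pc(C, k=5)
print(np.count_nonzero(w), var)       # 5 nonzero loadings and the explained variance
```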

An efficient method for computation of receptances of structural systems with sparse, non-proportional damping matrix (성긴 일반 감쇠행렬을 포함하는 구조물에 대한 효율적인 주파수 응답 계산 방법)

  • Park, Jong-Heuck;Hong, Seong-Wook
    • Journal of the Korean Society for Precision Engineering / v.12 no.7 / pp.99-106 / 1995
  • Frequency response functions are of great use in the dynamic analysis of structural systems. The present paper proposes an efficient method for computing the frequency response functions of linear structural dynamic models with a sparse, non-proportional damping matrix. An exact condensation procedure is proposed that enables the present method to condense the matrices without introducing any error. An iterative scheme is also proposed to avoid matrix inversion when computing the frequency response matrix. The proposed method is illustrated through a numerical example.
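
For reference, the receptance (frequency response) matrix of a damped structure is H(ω) = (K + iωC − ω²M)⁻¹. A naive sketch of computing one column of it by a sparse solve, i.e., the baseline that the paper's condensation and iteration scheme is designed to avoid, could be (the 3-DOF model is illustrative only):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def receptance_column(M, C, K, omega, force_dof):
    """One column of H(w) = (K + i*w*C - w^2*M)^(-1), obtained by a sparse
    solve instead of an explicit inverse."""
    Z = (K + 1j * omega * C - omega**2 * M).tocsc()   # dynamic stiffness matrix
    f = np.zeros(M.shape[0], dtype=complex)
    f[force_dof] = 1.0                                # unit force at one DOF
    return spsolve(Z, f)

# 3-DOF chain model with a sparse, non-proportional damper on a single DOF
n = 3
M = sp.identity(n, format="csr")
K = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n), format="csr") * 100.0
C = sp.csr_matrix(([0.5], ([1], [1])), shape=(n, n))  # damping only at DOF 1
H_col = receptance_column(M, C, K, omega=5.0, force_dof=0)
print(np.abs(H_col))
```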

A Robust Preconditioner on the CRAY-T3E for Large Nonsymmetric Sparse Linear Systems

  • Ma, Sangback;Cho, Jaeyoung
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.5 no.1 / pp.85-100 / 2001
  • In this paper we propose a block-type parallel preconditioner for solving large sparse nonsymmetric linear systems, which we expect to be scalable. It is a Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver. For the Laplacian matrix, the SOR method is known to have a non-deteriorating rate of convergence when used with multi-color ordering. Since most of the time is spent on the diagonal inversion, which is done independently on each processor, we expect it to be a good scalable preconditioner. Finally, due to the blocking effect, it should be effective for ill-conditioned problems. We compared it with four other preconditioners: ILU(0) with wavefront ordering, ILU(0) with multi-color ordering, SPAI (SParse Approximate Inverse), and the SSOR preconditioner. Experiments were conducted on finite difference discretizations of two problems with mesh sizes up to 1024 x 1024, and on an ill-conditioned matrix from the shell problem in the Harwell-Boeing collection. A CRAY-T3E with 128 nodes was used, with the MPI library for interprocess communication. The results show that Multi-Color Block SOR and ILU(0) with multi-color ordering give the best performance for the finite difference matrices, and that for the shell problem only the Multi-Color Block SOR preconditioner converges.
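
The multi-color ordering idea is easiest to see in the two-color (red-black) case: points of one color depend only on points of the other color, so all updates within a color are independent and could proceed in parallel. A sequential, point-wise sketch for a 2-D Poisson problem follows (not the authors' block variant or their CRAY-T3E code; grid size and omega are illustrative):

```python
import numpy as np

def red_black_sor(u, f, h, omega=1.8, sweeps=200):
    """Two-color (red-black) SOR sweeps for -laplacian(u) = f on a unit square
    with zero Dirichlet boundary values; u and f are (n, n) grids."""
    n = u.shape[0]
    for _ in range(sweeps):
        for color in (0, 1):                     # red points first, then black
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                 + h * h * f[i, j])
                    u[i, j] = (1 - omega) * u[i, j] + omega * gs
    return u

n = 33
h = 1.0 / (n - 1)
f = np.ones((n, n))                              # constant source term
u = red_black_sor(np.zeros((n, n)), f, h)
print(u[n // 2, n // 2])                         # value at the center of the grid
```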

Low Complexity Zero-Forcing Beamforming for Distributed Massive MIMO Systems in Large Public Venues

  • Li, Haoming;Leung, Victor C.M.
    • Journal of Communications and Networks / v.15 no.4 / pp.370-382 / 2013
  • Distributed massive MIMO systems, which have high bandwidth efficiency and can accommodate a tremendous amount of traffic using algorithms such as zero-forcing beamforming (ZFBF), may be deployed in large public venues with the antennas mounted under the floor. In this case the channel gain matrix H can be modeled as a multi-banded matrix, in which off-diagonal entries decay both exponentially, due to heavy human penetration loss, and polynomially, due to free-space propagation loss. To enable practical implementation of such systems, we present a multi-banded matrix inversion algorithm that substantially reduces the complexity of ZFBF by keeping only the most significant entries in H and in the precoding matrix W. We introduce a parameter p to control the sparsity of H and W and thus trade off computational complexity against system throughput. The proposed algorithm includes dense and sparse precoding versions, providing quadratic and linear complexity, respectively, in the number of antennas. We present analysis and numerical evaluations showing that the signal-to-interference ratio (SIR) increases linearly with p in dense precoding. In sparse precoding, we demonstrate the necessity of using directional antennas by both analysis and simulations. When the directional antenna gain increases, the resulting SIR increment in sparse precoding increases linearly with p, while the SIR of dense precoding is much less sensitive to changes in p.
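
The starting point is ordinary zero-forcing precoding, W = H^H (H H^H)⁻¹; the paper's contribution lies in exploiting the multi-banded structure of H and W. The toy sketch below shows exact ZF and a crude banded truncation controlled by a parameter p; it is illustrative only and is not the proposed multi-banded inversion algorithm:

```python
import numpy as np

def zf_precoder(H, p):
    """Zero-forcing precoder W = H^H (H H^H)^{-1}, plus a sparse version that
    keeps only entries within bandwidth p of the diagonal."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    rows, cols = np.indices(W.shape)
    W_sparse = np.where(np.abs(rows - cols) <= p, W, 0.0)
    return W, W_sparse

rng = np.random.default_rng(1)
n = 16                                              # equal numbers of users and antennas
decay = 0.3 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
H = decay * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
W, W_sparse = zf_precoder(H, p=2)
print(np.linalg.norm(H @ W - np.eye(n)))            # exact ZF: essentially zero
print(np.linalg.norm(H @ W_sparse - np.eye(n)))     # sparse ZF: residual interference
```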

Accelerated Split Bregman Method for Image Compressive Sensing Recovery under Sparse Representation

  • Gao, Bin;Lan, Peng;Chen, Xiaoming;Zhang, Li;Sun, Fenggang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2748-2766 / 2016
  • Compared with traditional patch-based sparse representation, recent studies have concluded that group-based sparse representation (GSR) can simultaneously enforce the intrinsic local sparsity and the nonlocal self-similarity of images within a unified framework. This article investigates an accelerated split Bregman method (SBM) based on GSR for image compressive sensing (CS) recovery. When the measurement matrix is a partial Fourier matrix, the computational efficiency of the accelerated SBM can be further improved by introducing the fast Fourier transform (FFT), yielding an enhanced algorithm. In addition, we provide a convergence analysis for the proposed method. Experimental results demonstrate that the accelerated SBM is potentially faster than some existing image CS reconstruction methods.
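
The split Bregman method itself is simple to write down for a plain l1-regularized recovery problem: with the splitting d = x, it alternates a regularized least-squares step, soft-thresholding, and a Bregman update. The sketch below shows only that basic iteration, without the group-based (GSR) prior or the FFT acceleration developed in the paper; the sensing matrix and signal are random stand-ins:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the closed-form proximal step for the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, f, mu=10.0, lam=1.0, iters=300):
    """Minimal split Bregman iteration for  min_x ||x||_1 + (mu/2)||Ax - f||^2
    using the splitting d = x."""
    n = A.shape[1]
    x, d, b = np.zeros(n), np.zeros(n), np.zeros(n)
    lhs_inv = np.linalg.inv(mu * A.T @ A + lam * np.eye(n))   # small demo: invert once
    for _ in range(iters):
        x = lhs_inv @ (mu * A.T @ f + lam * (d - b))   # quadratic subproblem
        d = shrink(x + b, 1.0 / lam)                   # l1 subproblem
        b = b + x - d                                  # Bregman (dual) update
    return d

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))                     # underdetermined sensing matrix
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
f = A @ x_true                                         # noiseless measurements
x_hat = split_bregman_l1(A, f)
print(np.round(x_hat[[3, 17, 60]], 2))                 # close to the true spike values
```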

Block Sparse Signals Recovery via Block Backtracking-Based Matching Pursuit Method

  • Qi, Rui;Zhang, Yujie;Li, Hongwei
    • Journal of Information Processing Systems / v.13 no.2 / pp.360-369 / 2017
  • In this paper, a new iterative algorithm for reconstructing block sparse signals, called the block backtracking-based adaptive orthogonal matching pursuit (BBAOMP) method, is proposed. Compared with existing methods, the BBAOMP method offers some flexibility between computational complexity and reconstruction quality through its backtracking step. Another outstanding advantage of the BBAOMP algorithm is that it does not require prior knowledge of the signal sparsity. Several experiments illustrate that the BBAOMP algorithm has a clear advantage in terms of probability of exact reconstruction and running time.
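
A plain block orthogonal matching pursuit loop (without the backtracking step or the sparsity-adaptive stopping rule that distinguish BBAOMP) gives the flavor of the approach; the number of active blocks is assumed known here, which BBAOMP does not require:

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Plain block OMP: at each step pick the block of columns most correlated
    with the residual, then re-fit by least squares on all selected blocks."""
    m, n = A.shape
    blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
    support, residual = [], y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, blk].T @ residual) for blk in blocks]
        best = int(np.argmax(scores))
        if best not in support:
            support.append(best)
        cols = np.concatenate([blocks[b] for b in support])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x = np.zeros(n)
    x[cols] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[8:12] = 1.0
x_true[40:44] = -2.0                                # two active blocks of length 4
y = A @ x_true
x_hat = block_omp(A, y, block_size=4, n_blocks_to_pick=2)
print(np.nonzero(np.round(x_hat, 6))[0])            # should be indices 8-11 and 40-43
```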

Sparse Representation based Two-dimensional Bar Code Image Super-resolution

  • Shen, Yiling;Liu, Ningzhong;Sun, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2109-2123 / 2017
  • This paper presents a super-resolution reconstruction method based on sparse representation for two-dimensional bar code images. Considering the features of two-dimensional bar code images, Kirsch and LBP (local binary pattern) operators are used to extract edge gradient and texture features. The feature representation is built from these two features together with two additional second-order derivatives. Through joint dictionary learning over low-resolution and high-resolution image patch pairs, corresponding patches share the same sparse representation. In addition, a global constraint is imposed on the initial estimate of the high-resolution image, which brings the reconstructed result closer to the real one. The experimental results demonstrate the effectiveness of the proposed algorithm for two-dimensional bar code images in comparison with other reconstruction algorithms.
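
The core coupled-dictionary idea can be sketched for a single patch: sparse-code the low-resolution features over the LR dictionary, then synthesize the HR patch from the same code over the HR dictionary. The sketch below uses a simple OMP coder and random stand-in dictionaries rather than the learned ones and the Kirsch/LBP features described in the abstract:

```python
import numpy as np

def sr_patch(D_lr, D_hr, y_lr, k=3):
    """Coupled-dictionary super-resolution for one patch: find a k-sparse code
    of the LR feature vector over D_lr (simple OMP), then synthesize the HR
    patch with the same code over D_hr."""
    residual, support = y_lr.copy(), []
    for _ in range(k):                                  # greedy sparse coding (OMP)
        j = int(np.argmax(np.abs(D_lr.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D_lr[:, support], y_lr, rcond=None)
        residual = y_lr - D_lr[:, support] @ coef
    alpha = np.zeros(D_lr.shape[1])
    alpha[support] = coef
    return D_hr @ alpha                                 # HR patch shares the sparse code

rng = np.random.default_rng(3)
D_lr = rng.standard_normal((64, 128))                   # stand-in LR feature dictionary
D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((256, 128))                  # coupled HR patch dictionary
alpha_true = np.zeros(128)
alpha_true[[5, 40, 90]] = [1.0, -0.7, 0.4]
y_lr = D_lr @ alpha_true                                # synthetic LR observation
x_hr = sr_patch(D_lr, D_hr, y_lr)
err = np.linalg.norm(x_hr - D_hr @ alpha_true) / np.linalg.norm(D_hr @ alpha_true)
print(err)                                              # near zero if the code is recovered
```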