• Title/Summary/Keyword: Matrix Algorithm

In-depth Analysis and Performance Improvement of a Flash Disk-based Matrix Transposition Algorithm (플래시 디스크 기반 행렬전치 알고리즘 심층 분석 및 성능개선)

  • Lee, Hyung-Bong; Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.6 / pp.377-384 / 2017
  • The range of matrix applications is so broad that it cannot be delimited. A typical matrix application area in computer science is image processing. In particular, radar scanning equipment implemented on a small embedded system requires real-time matrix transposition for image processing, and because its memory is small, a general matrix transposition algorithm cannot be applied. In this case, matrix transposition must be performed in disk space, such as a flash disk, using a limited memory buffer. In this paper, we analyze and improve a recently published flash disk-based matrix transposition algorithm, called the asymmetric sub-matrix transposition algorithm. The performance analysis shows that the asymmetric sub-matrix transposition algorithm performs worse than the conventional sub-matrix transposition algorithm, but the improved asymmetric sub-matrix transposition algorithm outperforms the sub-matrix transposition algorithm on 13 of the 16 experimental data sets.
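As a rough illustration of the out-of-core setting described above, the following Python sketch transposes a matrix that lives on disk by streaming one square tile at a time through a small memory buffer. It is a minimal baseline of the conventional sub-matrix (blocked) scheme, not the paper's asymmetric algorithm; the file paths, tile size, and the use of numpy.memmap as a stand-in for flash-disk I/O are assumptions for the example.

```python
import numpy as np

def transpose_blocked(src_path, dst_path, n, tile=256, dtype=np.float32):
    """Out-of-core transpose of an n x n row-major matrix stored on disk.

    Only one tile x tile block is held in memory at a time, mimicking the
    limited-buffer setting of flash disk-based transposition. (Hypothetical
    baseline; the asymmetric tile shapes of the paper are not reproduced.)
    """
    src = np.memmap(src_path, dtype=dtype, mode="r", shape=(n, n))
    dst = np.memmap(dst_path, dtype=dtype, mode="w+", shape=(n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            block = np.array(src[i:i + tile, j:j + tile])   # read one tile into RAM
            dst[j:j + tile, i:i + tile] = block.T            # write it back transposed
    dst.flush()
```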

A FAST FACTORIZATION ALGORITHM FOR A CONFLUENT CAUCHY MATRIX

  • KIM KYUNGSUP
    • Journal of the Korean Mathematical Society / v.42 no.1 / pp.1-16 / 2005
  • This paper presents a fast factorization algorithm for confluent Cauchy-like matrices. The algorithm consists of two parts. First, a confluent Cauchy-like matrix is transformed into a Cauchy-like matrix that can be pivoted without changing its structure. Second, a fast partial-pivoting factorization algorithm for the Cauchy-like matrix is presented. A transformed matrix whose entries cannot all be generated from the new displacement structure is called 'partially reconstructible'. This paper also discusses how the proposed factorization algorithm can be applied to partially reconstructible matrices in general.
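The Cauchy-like structure that such fast factorization algorithms exploit can be summarized by a displacement equation; the short sketch below verifies it numerically for a plain (non-confluent) Cauchy matrix. The node vectors x and y are arbitrary test data, and the fast pivoting factorization itself is not reproduced here.

```python
import numpy as np

# For C[i, j] = 1 / (x[i] - y[j]) the displacement equation
#     diag(x) @ C - C @ diag(y) = ones(n, 1) @ ones(1, n)
# holds, so C is encoded by two node vectors plus a rank-1 generator
# rather than by its n*n entries; fast factorizations work on this
# compressed representation instead of the full matrix.
n = 5
rng = np.random.default_rng(0)
x = rng.normal(size=n)
y = rng.normal(size=n) + 10.0                 # keep x[i] != y[j]
C = 1.0 / (x[:, None] - y[None, :])
residual = np.diag(x) @ C - C @ np.diag(y) - np.ones((n, n))
print(np.max(np.abs(residual)))               # ~ 0 up to rounding: rank-1 displacement
```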

On a sign-pattern matrix and its related algorithms for L-matrix

  • Seol, Han-Guk; Kim, Yu-Hyuk; Lee, Sang-Gu
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.3 no.1 / pp.43-53 / 1999
  • A real $m \times n$ matrix A is called an L-matrix if every matrix in its qualitative class has linearly independent rows. Since the number of sign-pattern matrices of a given size is finite, we can list all patterns lexicographically. In [2], a necessary and sufficient condition for a matrix to be an L-matrix was given, and we presented an algorithm that decides whether a given matrix is an L-matrix. In this paper, we develop an algorithm and a C program that determine whether a given matrix is an L-matrix or an SNS-matrix. In addition, we have extended our algorithm to classify sign-pattern matrices, to find barely L-matrices from a given matrix, and to list all $m \times n$ L-matrices.
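A brute-force version of such an L-matrix test can be written directly from the characterization usually cited in qualitative matrix theory: a sign pattern fails to be an L-matrix exactly when some nonzero signing of its rows leaves every column "balanced". The sketch below (in Python rather than the paper's C program, and exponential in the number of rows) assumes that characterization and is only an illustration, not the authors' algorithm.

```python
from itertools import product

def is_L_matrix(A):
    """Brute-force L-matrix test for a sign pattern A with entries in {-1, 0, 1}.

    A nonzero row signing z certifies that the rows can be made linearly
    dependent if every column is 'balanced' under z, i.e. the nonzero
    products z[i] * A[i][j] are either absent or contain both signs.
    A is an L-matrix iff no nonzero signing balances every column.
    """
    m, n = len(A), len(A[0])
    for z in product((-1, 0, 1), repeat=m):
        if not any(z):
            continue
        balanced_everywhere = True
        for j in range(n):
            signs = {z[i] * A[i][j] for i in range(m) if z[i] and A[i][j]}
            if signs and not (1 in signs and -1 in signs):
                balanced_everywhere = False    # column j is unisigned under z
                break
        if balanced_everywhere:
            return False
    return True

print(is_L_matrix([[1, 1, 0], [-1, 1, 1]]))    # True: rows independent for any signs
```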

Parallel Algorithm for Matrix-Matrix Multiplication on the GPU (GPU 기반 행렬 곱셈 병렬처리 알고리즘)

  • Park, Sangkun
    • Journal of Institute of Convergence Technology / v.9 no.1 / pp.1-6 / 2019
  • Matrix multiplication is a fundamental mathematical operation with numerous applications across most scientific fields. In this paper, we present a parallel GPU algorithm for dense matrix-matrix multiplication using an OpenGL compute shader, which can serve as a fundamental building block for many high-performance computing applications. Experimental results on an NVIDIA Quad 4000 show that the proposed algorithm runs about 208 times faster than the previous CPU algorithm and achieves 75 GFLOPS in single precision for dense matrices of size 4,096. Such performance demonstrates that our algorithm is practical for real applications.
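The workgroup-per-tile decomposition that a GPU compute-shader kernel typically uses can be illustrated on the CPU with blocked multiplication; the NumPy sketch below is only an analog of that tiling idea under an assumed tile size, not the paper's OpenGL compute-shader implementation.

```python
import numpy as np

def matmul_tiled(A, B, tile=64):
    """Blocked matrix product: each (i, j) tile of C is accumulated from
    matching strips of A and B, the same decomposition a GPU kernel assigns
    to one workgroup with the strips staged in shared/local memory."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                C[i:i + tile, j:j + tile] += A[i:i + tile, p:p + tile] @ B[p:p + tile, j:j + tile]
    return C

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)
print(np.allclose(matmul_tiled(A, B), A @ B, rtol=1e-3))   # True
```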

A Broadcasting Algorithm in Matrix Hypercubes (행렬 하이퍼큐브에 대한 방송 알고리즘)

  • 최선아; 이형옥; 임형석
    • Proceedings of the IEEK Conference / 1998.10a / pp.475-478 / 1998
  • The matrix hypercube MH(2,n) is an interconnection network that improves on the network cost of the hypercube. In this paper, we propose an algorithm for one-to-all broadcasting in the matrix hypercube MH(2,n). The algorithm can broadcast a message to $2^{2n}$ nodes in O(n) time. It exploits the rich structure of the matrix hypercube and works by recursively partitioning the original matrix hypercube into smaller matrix hypercubes.
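The recursive-partitioning broadcast mentioned in the abstract is easiest to see on an ordinary binary hypercube, where recursive doubling reaches all $2^n$ nodes in n steps; the sketch below builds that schedule. It is a generic hypercube analog under the assumption of a standard n-cube, since the MH(2,n) topology itself is not detailed in the abstract.

```python
def hypercube_broadcast_schedule(n, source=0):
    """One-to-all broadcast on a binary n-cube by recursive doubling:
    in step d every node that already holds the message forwards it to its
    neighbour across dimension d, so all 2**n nodes are covered in n steps."""
    have = {source}
    schedule = []
    for d in range(n):
        step = [(u, u ^ (1 << d)) for u in sorted(have)]   # (sender, receiver) pairs
        have |= {v for _, v in step}
        schedule.append(step)
    return schedule

for d, pairs in enumerate(hypercube_broadcast_schedule(3)):
    print(f"step {d}: {pairs}")        # 1 -> 2 -> 4 -> 8 informed nodes
```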

On the singularity of the matrix sign function algorithm

  • Kim, Hyoung-Joong; Lee, Jang-Gyu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1988.10b / pp.770-771 / 1988
  • Some properties of a matrix containing at least one pair of purely imaginary eigenvalues are explicated in the context of the matrix sign function algorithm. It is shown that such a nonsingular matrix can end up as a singular matrix in the matrix sign function algorithm, independently of the matrix condition. The result can be used to identify and locate all the eigenvalues theoretically.
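The breakdown described here is easy to reproduce with the standard Newton iteration for the matrix sign function, $S_{k+1} = (S_k + S_k^{-1})/2$: a matrix with purely imaginary eigenvalues is mapped to a singular iterate even though it is itself nonsingular. The sketch below is a generic illustration; the paper's specific analysis is not reproduced.

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, max_iter=100):
    """Newton iteration S <- (S + inv(S)) / 2 for the matrix sign function.
    It converges when A has no eigenvalues on the imaginary axis; otherwise
    some iterate becomes (nearly) singular and the inversion breaks down."""
    S = np.array(A, dtype=float)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S, 1) <= tol * np.linalg.norm(S_next, 1):
            return S_next
        S = S_next
    return S

# Nonsingular matrix with eigenvalues +/- i: one Newton step yields the
# zero matrix, so the next inversion fails regardless of conditioning.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(0.5 * (A + np.linalg.inv(A)))
```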

EFFICIENT ALGORITHM FOR FINDING THE INVERSE AND THE GROUP INVERSE OF FLS $\gamma$-CIRCULANT MATRIX

  • JIANG ZHAO-LIN; XU ZONG-BEN
    • Journal of applied mathematics & informatics / v.18 no.1_2 / pp.45-57 / 2005
  • An efficient algorithm for finding the inverse and the group inverse of an FLS $\gamma$-circulant matrix is presented, based on the Euclidean algorithm. It is extended to compute the inverse of an FLS $\gamma$-retrocirculant matrix by using the relationship between an FLS $\gamma$-circulant matrix and an FLS $\gamma$-retrocirculant matrix. Finally, some examples are given.
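For a plain $\gamma$-circulant (dropping the FLS refinement), the Euclidean-algorithm route to the inverse works in the polynomial ring $\mathbb{R}[x]/(x^n - \gamma)$: if $s(x)f(x) + t(x)(x^n - \gamma)$ equals a nonzero constant, then $s$ (scaled by that constant) represents the inverse. The sketch below, with an arbitrary example row and SymPy's extended GCD, is a simplified illustration of that idea under these assumptions, not the paper's algorithm for the FLS case.

```python
import numpy as np
from sympy import gcdex, symbols, Poly

def gamma_circulant(first_row, gamma):
    """Build the gamma-circulant with the given first row: it equals f(P),
    where P is the gamma-circulant of (0, 1, 0, ..., 0) and P**n = gamma*I."""
    n = len(first_row)
    P = np.zeros((n, n))
    P[np.arange(n - 1), np.arange(1, n)] = 1.0
    P[n - 1, 0] = gamma
    return sum(a * np.linalg.matrix_power(P, k) for k, a in enumerate(first_row))

def gamma_circulant_inverse_row(first_row, gamma):
    """First row of the inverse via the extended Euclidean algorithm:
    s*f + t*(x**n - gamma) = h with h constant, so s/h represents f**-1."""
    x = symbols("x")
    n = len(first_row)
    f = sum(a * x**k for k, a in enumerate(first_row))
    s, t, h = gcdex(f, x**n - gamma, x)          # h is constant iff f is invertible
    u = Poly((s / h).expand(), x)
    coeffs = [float(c) for c in reversed(u.all_coeffs())]
    return coeffs + [0.0] * (n - len(coeffs))

row, gamma = [2, 1, 0, 1], 3
A = gamma_circulant(row, gamma)
A_inv = gamma_circulant(gamma_circulant_inverse_row(row, gamma), gamma)
print(np.max(np.abs(A @ A_inv - np.eye(len(row)))))   # ~ 0
```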

ANALYSIS OF THE UPPER BOUND ON THE COMPLEXITY OF LLL ALGORITHM

  • PARK, YUNJU; PARK, JAEHYUN
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.20 no.2 / pp.107-121 / 2016
  • We analyze the complexity of the LLL algorithm, invented by Lenstra, Lenstra, and Lovász, a well-known lattice reduction (LR) algorithm previously known to have a complexity of $O(N^4 \log B)$ multiplications (or $O(N^5 (\log B)^2)$ bit operations) for a lattice basis matrix $H \in \mathbb{R}^{M \times N}$, where B is the maximum squared norm among the columns of H. This implies that the complexity of the lattice reduction algorithm depends only on the matrix size and the lattice basis norm. However, the structure of a given lattice matrix (i.e., the correlation among its columns), which is usually measured by its condition number or determinant, can also affect the computational complexity of the LR algorithm. In this paper, to see how the matrix structure affects the LLL algorithm's complexity, we derive a tighter upper bound on its complexity in terms of the condition number and determinant of a given lattice matrix. We also analyze the complexities of LLL updating/downdating schemes using the proposed upper bound.
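For reference, the two operations whose counts such complexity bounds control, size reduction and the Lovász-condition swap, appear in the didactic Python version of LLL below. It recomputes the Gram-Schmidt data from scratch, so it is intentionally simple rather than efficient, and it is a textbook sketch, not the implementation analyzed in the paper.

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL reduction of the rows of an integer basis matrix."""
    B = np.array(basis, dtype=float)
    n = B.shape[0]

    def gso(B):
        # Gram-Schmidt data: orthogonal vectors Bs and coefficients mu[i, j].
        Bs = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bs[i] = B[i].copy()
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce row k against row j
            Bs, mu = gso(B)
            q = round(mu[k, j])
            if q:
                B[k] -= q * B[j]
        Bs, mu = gso(B)
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1                                 # Lovász condition holds
        else:
            B[[k - 1, k]] = B[[k, k - 1]]          # swap and step back
            k = max(k - 1, 1)
    return np.rint(B).astype(int)

print(lll_reduce([[201, 37], [1648, 297]]))        # a short, nearly orthogonal basis
```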

A Study on Multi-Signal DOA Estimation in Fading Channels

  • Lee Kwan-Houng; Song Woo-Young
    • Journal of information and communication convergence engineering / v.3 no.3 / pp.115-118 / 2005
  • This study proposes an algorithm for estimating the direction of arrival of correlated signals in a mobile wireless channel. The proposed algorithm applies the spatial averaging method within the MUSIC algorithm: the diagonal matrix of the spatial averaging method is inverted to obtain a new signal correlation matrix. The existing algorithm is analyzed and compared by applying the proposed signal correlation matrix to direction-of-arrival estimation in the MUSIC algorithm. In the experiments, the proposed algorithm improved resolution by more than $5^{\circ}$ over the min-norm method and by more than $2^{\circ}$ over the MUSIC algorithm.
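For context, a plain MUSIC direction-of-arrival estimator for a uniform linear array looks like the sketch below (array spacing and angle grid are assumed values); the paper's spatial-averaging modification of the correlation matrix for correlated signals is not included.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Standard MUSIC pseudo-spectrum for a uniform linear array.

    X is the (n_antennas, n_snapshots) complex snapshot matrix and d the
    element spacing in wavelengths.  Peaks of the returned spectrum mark
    the estimated directions of arrival.
    """
    M, N = X.shape
    R = X @ X.conj().T / N                      # sample correlation matrix
    _, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]            # noise-subspace eigenvectors
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)
```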

Speed-up of the Matrix Computation on the Ridge Regression

  • Lee, Woochan; Kim, Moonseong; Park, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3482-3497 / 2021
  • Artificial intelligence has emerged as the core of the 4th industrial revolution, and processing large amounts of data, through big data technology and rapid data analysis, is inevitable. The most fundamental and universal data interpretation technique is analysis through regression, which is also the basis of machine learning. Ridge regression is a regression technique that decreases sensitivity to unique or outlier information. The time-consuming portion of the matrix computation, however, essentially involves the introduction of an inverse matrix, and as the size of the matrix grows, the matrix solution method becomes a major challenge. In this paper, a new algorithm is introduced to speed up the calculation of the ridge regression estimator through series expansion and computation recycling, without using an inverse matrix or other factorization methods in the calculation process. In addition, the performance of the proposed algorithm and the existing algorithm were compared according to matrix size. Overall, an excellent speed-up of the proposed algorithm with good accuracy was demonstrated.
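One way to avoid the explicit inverse in the ridge estimator $w = (X^T X + \lambda I)^{-1} X^T y$ is a truncated Neumann series for the inverse, as in the sketch below. This is a generic series-expansion illustration, not the specific expansion-and-recycling scheme proposed in the paper, and the data and parameters are arbitrary test values.

```python
import numpy as np

def ridge_neumann(X, y, lam, n_terms=50):
    """Ridge estimate w = (X^T X + lam*I)^{-1} X^T y via a truncated
    Neumann series: with c >= lambda_max(A), A^{-1} = (1/c) * sum_k (I - A/c)^k,
    so only matrix-vector products are needed and no inverse is formed."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y
    c = np.linalg.norm(A, 2)           # spectral norm of the SPD matrix A
    M = np.eye(A.shape[0]) - A / c     # spectral radius < 1, so the series converges
    w = np.zeros_like(b)
    term = b / c
    for _ in range(n_terms):
        w += term                       # accumulate (I - A/c)^k b / c
        term = M @ term
    return w

# Quick check against the direct solution on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.normal(size=200)
w_direct = np.linalg.solve(X.T @ X + 0.5 * np.eye(10), X.T @ y)
print(np.max(np.abs(ridge_neumann(X, y, 0.5) - w_direct)))   # small residual
```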