• Title/Summary/Keyword: Matrix Computation


Secure Outsourced Computation of Multiple Matrix Multiplication Based on Fully Homomorphic Encryption

  • Wang, Shufang;Huang, Hai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5616-5630
    • /
    • 2019
  • Fully homomorphic encryption allows a third party to perform arbitrary computation over encrypted data and is especially suitable for secure outsourced computation. This paper investigates secure outsourced computation of multiple matrix multiplication based on fully homomorphic encryption. Our work significantly improves on the recent work of Mishra et al. We improve Mishra et al.'s matrix encoding method by introducing a column-order matrix encoding method which requires smaller parameters. This enables us to develop a binary multiplication method for multiple matrix multiplication, which multiplies adjacent matrices pairwise in a tree structure instead of following Mishra et al.'s sequential left-to-right matrix multiplication. The binary multiplication method results in a logarithmic-depth circuit and is thus much more efficient than the sequential multiplication method, whose circuit depth is linear. Experimental results show that for the product of ten 32×32 (64×64) square matrices our method takes only several thousand seconds, while Mishra et al.'s method would take on the order of tens of thousands of years, which is impractical. In addition, we generalize our result from square to non-square matrices. Experimental results show that the binary multiplication method and the classical dynamic programming method have similar performance for the multiplication of ten non-square matrices.
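
The tree-structured multiplication order described in the abstract above is easy to illustrate in plaintext, setting the encryption layer aside; the sketch below only illustrates the depth argument (pairing adjacent factors halves the list each round, giving depth ⌈log₂ n⌉ instead of n − 1), and the function names are illustrative, not the paper's API.

```python
# Plaintext sketch of sequential vs. tree-structured ("binary") multiplication
# of a chain of matrices; the homomorphic-encryption layer is omitted.
import numpy as np

def sequential_product(mats):
    """Left-to-right product: multiplicative depth grows linearly."""
    result = mats[0]
    for m in mats[1:]:
        result = result @ m
    return result

def tree_product(mats):
    """Multiply adjacent matrices pairwise: depth is ceil(log2(len(mats)))."""
    level = list(mats)
    while len(level) > 1:
        nxt = [level[i] @ level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:      # odd count: carry the last factor upward
            nxt.append(level[-1])
        level = nxt
    return level[0]

mats = [np.random.rand(32, 32) for _ in range(10)]
print(np.allclose(sequential_product(mats), tree_product(mats)))  # True
```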

Speed-up of the Matrix Computation on the Ridge Regression

  • Lee, Woochan;Kim, Moonseong;Park, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3482-3497
    • /
    • 2021
  • Artificial intelligence has emerged as the core of the 4th industrial revolution, and processing large amounts of data, as in big data technology and rapid data analysis, has become inevitable. The most fundamental and universal data interpretation technique is regression analysis, which is also the basis of machine learning. Ridge regression is a regression technique that decreases sensitivity to unusual or outlying observations. The time-consuming part of the matrix computation, however, is essentially the inversion of a matrix. As the size of the matrix grows, the matrix solution method becomes a major challenge. In this paper, a new algorithm is introduced to speed up the calculation of the ridge regression estimator through series expansion and computation recycling, without using an inverse matrix or other factorization methods in the calculation process. In addition, the performance of the proposed algorithm and of the existing algorithm is compared as a function of matrix size. Overall, an excellent speed-up of the proposed algorithm with good accuracy is demonstrated.
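
The abstract does not spell out the series used; as one hedged illustration of computing the ridge estimator $\hat{\beta} = (X^TX + \lambda I)^{-1}X^Ty$ without forming an inverse, the sketch below uses a truncated Neumann series with term recycling. This is an assumed stand-in, not necessarily the authors' expansion.

```python
# Approximate the ridge estimator (X^T X + lam*I)^{-1} X^T y with a truncated
# Neumann series, recycling the previous term instead of forming any inverse.
# Illustrative stand-in for "series expansion + computation recycling",
# not the paper's exact algorithm.
import numpy as np

def ridge_neumann(X, y, lam, n_terms=200):
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y
    c = np.linalg.norm(A, 2)            # scale so that rho(I - A/c) < 1
    term, beta = b.copy(), np.zeros_like(b)
    for _ in range(n_terms):
        beta += term / c                # beta accumulates (1/c) * sum_k (I - A/c)^k b
        term = term - (A @ term) / c    # term <- (I - A/c) term   (recycled)
    return beta

rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)
exact = np.linalg.solve(X.T @ X + 0.5 * np.eye(5), X.T @ y)
print(np.allclose(ridge_neumann(X, y, 0.5), exact, atol=1e-6))    # True
```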

DEVELOPMENT OF PARALLEL COMPUTATION METHOD FOR THE p VERSION IN THE FINITE ELEMENT METHOD

  • Kim, Chang-Geun;Cha, Ho-Jung
    • Journal of applied mathematics & informatics
    • /
    • v.6 no.2
    • /
    • pp.649-659
    • /
    • 1999
  • This paper presents a parallel implementation of stiffness matrix calculation based on the processor farm model on a network of workstations running the PVM programming environment. As the computational characteristics of the stiffness matrix exhibit good potential for effective parallel computation, the performance improvement is shown to be almost linear in the number of workstations involved in the computation.
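
PVM itself is not reproduced here; a rough Python multiprocessing analogue of the processor farm model (a master farming per-element stiffness computations out to workers and assembling the results) might look as follows. The toy two-node element routine is a placeholder, not a p-version finite element formulation.

```python
# Processor-farm sketch: workers compute element stiffness matrices in
# parallel, and the master assembles them into the global matrix.
import numpy as np
from multiprocessing import Pool

N_NODES, N_ELEMENTS = 6, 5                 # a 1-D chain of 2-node elements

def element_stiffness(e):
    """Placeholder element routine: 2x2 matrix plus the global DOFs it couples."""
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
    return ke, (e, e + 1)

def assemble(results):
    K = np.zeros((N_NODES, N_NODES))
    for ke, dofs in results:
        for a, i in enumerate(dofs):
            for b, j in enumerate(dofs):
                K[i, j] += ke[a, b]
    return K

if __name__ == "__main__":
    with Pool() as pool:                   # the "farm" of worker processes
        results = pool.map(element_stiffness, range(N_ELEMENTS))
    print(assemble(results))
```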

Parallel Computation Algorithm of Gauss Elimination in Power system Analysis (전력계통해석을 위한 자코비안행렬 가우스소거의병렬계산 알고리즘)

  • 서의석;오태규
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.43 no.2
    • /
    • pp.189-196
    • /
    • 1994
  • This paper describes a parallel computing algorithm for the Gauss elimination of the Jacobian matrix of a large-scale power system. The structure of the Jacobian matrix differs according to the ordering method of the buses. In sequential computation, buses are ordered to minimize the number of fill-ins in the triangulation of the Jacobian matrix. The proposed method develops parallelism in the Gauss elimination by using ND (nested dissection) ordering. In this procedure, the level structure of the power system network is transformed to be long and narrow by using end buses, which balances the computing load among processors and maximizes parallel computation. Each processor uses the sequential computation method to preserve the sparsity of the matrix.

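As a minimal dense illustration of why nested dissection ordering exposes parallelism, the sketch below (with synthetic matrices, not a power-flow Jacobian) numbers the two halves of a network before the separator, so the two diagonal blocks can be eliminated independently before a small separator system is solved.

```python
# Block-arrow system produced by an ND-style ordering:
#   [[A1,   0,    B1],
#    [0,    A2,   B2],
#    [B1.T, B2.T, C ]]
# A1 and A2 can be factored independently (in parallel); only the small
# separator (Schur complement) system couples them.
import numpy as np

rng = np.random.default_rng(1)
n1, n2, ns = 4, 4, 2
A1 = np.diag(rng.uniform(4, 5, n1)); A2 = np.diag(rng.uniform(4, 5, n2))
B1 = rng.standard_normal((n1, ns)); B2 = rng.standard_normal((n2, ns))
C = np.diag(rng.uniform(8, 9, ns))

A = np.block([[A1, np.zeros((n1, n2)), B1],
              [np.zeros((n2, n1)), A2, B2],
              [B1.T, B2.T, C]])
b = rng.standard_normal(n1 + n2 + ns)
b1, b2, bs = b[:n1], b[n1:n1 + n2], b[n1 + n2:]

# Independent (parallelizable) eliminations, then the separator system.
S = C - B1.T @ np.linalg.solve(A1, B1) - B2.T @ np.linalg.solve(A2, B2)
rhs = bs - B1.T @ np.linalg.solve(A1, b1) - B2.T @ np.linalg.solve(A2, b2)
xs = np.linalg.solve(S, rhs)
x1 = np.linalg.solve(A1, b1 - B1 @ xs)
x2 = np.linalg.solve(A2, b2 - B2 @ xs)
print(np.allclose(np.concatenate([x1, x2, xs]), np.linalg.solve(A, b)))  # True
```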

A MATRIX PENCIL APPROACH COMPUTING THE ELEMENTARY DIVISORS OF A MATRIX: NUMERICAL ASPECTS AND APPLICATIONS

  • Mitrouli, M.;Kalogeropoulos, G.
    • Journal of applied mathematics & informatics
    • /
    • v.5 no.3
    • /
    • pp.717-734
    • /
    • 1998
  • In the present paper, a new matrix pencil-based numerical approach is presented for computing the elementary divisors of a given matrix $A \in C^{n \times n}$. This computation is attained without performing similarity transformations, and the whole procedure is based on the construction of the Piecewise Arithmetic Progression Sequence (PAPS) of the associated pencil $\lambda I_n - A$ of matrix $A$ for all the appropriate values of $\lambda$ belonging to the set of eigenvalues of $A$. This technique produces a stable and accurate numerical algorithm that works satisfactorily for matrices with a well-defined eigenstructure. The whole technique can be applied to the computation of the first, second, and Jordan canonical forms of a given matrix $A \in C^{n \times n}$. The results are accurate for matrices possessing a well-defined canonical form; in the case of defective matrices, indications of the most appropriately computed canonical form are given.
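
The pencil/PAPS construction itself is not reproduced here; a related, more elementary rank-based computation is sketched below: for each eigenvalue $\lambda$, the ranks of powers of $A - \lambda I$ determine how many Jordan blocks of each size occur, i.e. the elementary divisors $(x-\lambda)^k$ of $A$.

```python
# Count Jordan blocks (elementary divisors) from rank sequences:
# the number of blocks of size >= k for eigenvalue lam equals
# rank(M^(k-1)) - rank(M^k), where M = A - lam*I.
import numpy as np

def jordan_block_counts(A, lam, tol=1e-8):
    n = A.shape[0]
    M = A - lam * np.eye(n)
    ranks = [n]
    while len(ranks) < 2 or ranks[-1] < ranks[-2]:
        ranks.append(np.linalg.matrix_rank(np.linalg.matrix_power(M, len(ranks)), tol=tol))
    counts = {}
    for k in range(1, len(ranks) - 1):
        c = (ranks[k - 1] - ranks[k]) - (ranks[k] - ranks[k + 1])
        if c > 0:
            counts[k] = c       # c blocks of size k, i.e. (x - lam)^k appears c times
    return counts

# This matrix has elementary divisors (x-2)^2, (x-2) and (x-3).
A = np.array([[2., 1., 0., 0.],
              [0., 2., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.]])
print(jordan_block_counts(A, 2.0))      # {1: 1, 2: 1}
print(jordan_block_counts(A, 3.0))      # {1: 1}
```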

COMPUTATION OF HANKEL MATRICES IN TERMS OF CLASSICAL KERNEL FUNCTIONS IN POTENTIAL THEORY

  • Chung, Young-Bok
    • Journal of the Korean Mathematical Society
    • /
    • v.57 no.4
    • /
    • pp.973-986
    • /
    • 2020
  • In this paper, we compute the Hankel matrix representation of the Hankel operator on the Hardy space of a general bounded domain with respect to special orthonormal bases for the Hardy space and its orthogonal complement. Moreover, we obtain the compact form of the Hankel matrix for the unit disc case with respect to these bases. One can see that the Hankel matrix produced by this computation generalizes the unit disc case from a single simply connected domain to multiply connected domains with a wide variety of bases.
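
The Hardy space construction is beyond a short snippet, but the structural property that defines any Hankel matrix, namely that entry $(j,k)$ depends only on $j+k$ so that every anti-diagonal is constant, is easy to illustrate; the numbers below are arbitrary.

```python
# A Hankel matrix has constant anti-diagonals: H[j, k] depends only on j + k.
import numpy as np
from scipy.linalg import hankel

c = np.arange(1, 6)        # first column: 1, 2, 3, 4, 5
r = np.arange(5, 10)       # last row:     5, 6, 7, 8, 9
H = hankel(c, r)           # here H[j, k] == j + k + 1
print(H)
print(all(H[j, k] == j + k + 1 for j in range(5) for k in range(5)))  # True
```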

Parallel Computation Algorithm of Gauss Elimination in Power system Analysis (전력계통의 자코비안행렬 가우스소거의 병렬계산)

  • Suh, Eui-Suk;Oh, Tae-Kyoo
    • Proceedings of the KIEE Conference
    • /
    • 1993.07a
    • /
    • pp.163-166
    • /
    • 1993
  • This paper describes a parallel computing algorithm for the Gauss elimination of the Jacobian matrix of a large-scale power system. The structure of the Jacobian matrix differs according to the ordering method of the buses. In sequential computation, buses are ordered to minimize the number of fill-ins in the triangulation of the Jacobian matrix. The proposed method, using ND (nested dissection) ordering, develops parallelism in the Gauss elimination so as to balance the computing load among processors, while each processor uses the sequential computation method to preserve the sparsity of the matrix.


Sparse Matrix Computation in Mixed Effects Model (희소행렬 계산과 혼합모형의 추론)

  • Son, Won;Park, Yong-Tae;Kim, Yu Kyeong;Lim, Johan
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.2
    • /
    • pp.281-288
    • /
    • 2015
  • In this paper, we study an approximate procedure for evaluating a penalized maximum likelihood estimator (MLE) for a mixed effects model. The procedure approximates the Hessian matrix of the penalized MLE with a structured sparse matrix, an arrowhead-type matrix, to speed up its computation. We numerically investigate the gain in computation time as well as the approximation error of the considered procedure.
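
The approximation formula is not given in the abstract; the sketch below only illustrates why an arrowhead-type matrix (a diagonal block plus one border row and column) is cheap to work with: a linear solve costs O(n) operations plus a scalar Schur complement, instead of a general factorization. The matrices are synthetic.

```python
# Solve an arrowhead system [[diag(d), w], [w^T, a]] z = b without ever
# forming the full matrix: O(n) work plus a scalar Schur complement.
import numpy as np

def solve_arrowhead(d, w, a, b):
    b1, b2 = b[:-1], b[-1]
    u = b1 / d                         # D^{-1} b1
    v = w / d                          # D^{-1} w
    z2 = (b2 - w @ u) / (a - w @ v)    # scalar Schur complement step
    z1 = u - v * z2
    return np.append(z1, z2)

rng = np.random.default_rng(2)
n = 6
d, w, a = rng.uniform(2, 3, n), rng.standard_normal(n), 10.0
H = np.block([[np.diag(d), w[:, None]], [w[None, :], np.array([[a]])]])
b = rng.standard_normal(n + 1)
print(np.allclose(solve_arrowhead(d, w, a, b), np.linalg.solve(H, b)))  # True
```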

SPARSE NULLSPACE COMPUTATION OF EQUILIBRIUM MATRICES

  • Jang, Ho-Jong;Cha, Kyung-Joon
    • Communications of the Korean Mathematical Society
    • /
    • v.11 no.4
    • /
    • pp.1175-1185
    • /
    • 1996
  • We study the computation of sparse null bases of equilibrium matrices in the context of structural optimization and incompressible fluid flow. In our approach we emphasize parallel computation and examine the applications. New block decomposition and node ordering schemes are suggested, and numerical examples are considered.

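As a tiny illustration of a null basis of an equilibrium-type matrix, the snippet below applies SciPy's dense null_space routine to the node-edge incidence matrix of a triangle, whose null space is spanned by its single cycle; the paper's sparsity-preserving block decomposition is not reproduced.

```python
# Null basis Z of a small equilibrium-type matrix A, i.e. A @ Z = 0.
import numpy as np
from scipy.linalg import null_space

A = np.array([[ 1.,  0., -1.],     # node 1
              [-1.,  1.,  0.],     # node 2
              [ 0., -1.,  1.]])    # node 3; columns are directed edges
Z = null_space(A)                  # one column, proportional to (1, 1, 1)
print(Z.round(3))
print(np.allclose(A @ Z, 0))       # True
```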

ANALYSIS OF POSSIBLE PRE-COMPUTATION AIDED DLP SOLVING ALGORITHMS

  • HONG, JIN;LEE, HYEONMI
    • Journal of the Korean Mathematical Society
    • /
    • v.52 no.4
    • /
    • pp.797-819
    • /
    • 2015
  • A trapdoor discrete logarithm group is a cryptographic primitive with many applications, and an algorithm that allows discrete logarithm problems to be solved faster using a pre-computed table increases the practicality of using this primitive. Currently, the distinguished point method and one extension of it are the only pre-computation aided discrete logarithm problem solving algorithms appearing in the related literature. This work investigates the possibility of adopting other pre-computation matrix structures, originally designed for use with cryptanalytic time-memory tradeoff algorithms, as pre-computation aided discrete logarithm problem solving algorithms. We find that the classical Hellman matrix structure leads to an algorithm that has performance advantages over the two existing algorithms.
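
As a toy, purely illustrative sketch of the Hellman-matrix idea applied to a tiny discrete logarithm instance g^x = h modulo a small prime: chains of a pseudo-random walk are pre-computed from points with known discrete logs and only their endpoints are stored; the online phase walks from h until it meets a stored endpoint. A single small table is used, success is probabilistic, and every parameter below is chosen for illustration only, not taken from the paper.

```python
# Toy Hellman-style pre-computed table for discrete logs modulo a tiny prime.
import random

p, g = 1019, 2                  # small prime; 2 generates a large subgroup
q = p - 1                       # exponents are reduced modulo the group order

def step_exponent(y):
    """Deterministic 'random walk' exponent attached to a group element."""
    return (y % 16) + 1

def walk(y, steps):
    """Iterate y -> y * g^f(y); return the endpoint and the exponent shift."""
    shift = 0
    for _ in range(steps):
        e = step_exponent(y)
        y = (y * pow(g, e, p)) % p
        shift = (shift + e) % q
    return y, shift

# Pre-computation: chains from points with known discrete logs; only the
# endpoint and the exponent that reaches it are kept (one chain per "row").
random.seed(0)
chain_len, table = 60, {}
for _ in range(300):
    a0 = random.randrange(q)
    end, shift = walk(pow(g, a0, p), chain_len)
    table[end] = (a0 + shift) % q          # end == g^(a0 + shift)

def solve_dlog(h, max_steps=4000):
    """Walk from h until a stored endpoint is met; may fail with a small table."""
    y, s = h, 0
    for _ in range(max_steps):
        if y in table:
            return (table[y] - s) % q      # h * g^s == g^table[y]
        e = step_exponent(y)
        y = (y * pow(g, e, p)) % p
        s = (s + e) % q
    return None

x_true = 777
x = solve_dlog(pow(g, x_true, p))
if x is None:
    print("walk missed the table (possible with one small table)")
else:
    print(pow(g, x, p) == pow(g, x_true, p))   # True: x is a valid discrete log
```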