• Title/Summary/Keyword: Cholesky decomposition

Hardware Design of High Performance ALF in HEVC Encoder for Efficient Filter Coefficient Estimation (효율적인 필터 계수 추출을 위한 HEVC 부호화기의 고성능 ALF 하드웨어 설계)

  • Shin, Seungyong;Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.2
    • /
    • pp.379-385
    • /
    • 2015
  • This paper proposes a high-performance ALF (Adaptive Loop Filter) hardware architecture for efficient filter coefficient estimation. The ALF technique of HEVC filters the reconstructed image using filter coefficients estimated from the statistical characteristics of the image, so that a high-resolution, high-quality original can be compressed effectively while the subjective image quality is improved. The proposed ALF hardware is designed as a two-stage pipeline, obtained by analyzing the operational dependencies of the Cholesky decomposition used for filter coefficient estimation, in order to reduce the number of processing cycles. In addition, the square root operation inside the Cholesky decomposition is implemented with only a multiplexer, a subtracter, and a comparator, reducing logic area, computation time, and computational complexity. The proposed hardware architecture is designed using Xilinx ISE 14.3 for a Virtex-7 XC7VCX485T FPGA device and can support 4K UHD at 40 fps in real time at a maximum operating frequency of 186 MHz.
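
The coefficient-estimation step referred to above amounts to solving a symmetric positive definite system of normal equations by Cholesky factorization. As a rough, software-only sketch (not the paper's RTL design), the snippet below solves such a system with SciPy and also shows an integer square root computed with nothing but subtraction and comparison, the idea behind a multiplexer/subtracter/comparator square-root unit; the autocorrelation matrix and cross-correlation vector are synthetic placeholders.

```python
# Software sketch only: Cholesky-based coefficient solve plus a subtract/compare sqrt.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def alf_coefficients(R, p):
    """Solve R c = p for the filter coefficients, with R symmetric positive definite."""
    chol = cho_factor(R, lower=True)          # R = L L^T
    return cho_solve(chol, p)                 # forward + backward substitution

def isqrt_sub_cmp(x):
    """Integer square root using only subtraction and comparison (digit recurrence)."""
    root, bit = 0, 1 << 30                    # highest power of four <= 2^30
    while bit > x:
        bit >>= 2
    while bit:
        if x >= root + bit:                   # comparator
            x -= root + bit                   # subtracter
            root = (root >> 1) + bit          # mux selects the updated partial root
        else:
            root >>= 1
        bit >>= 2
    return root

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 9))
    R = A.T @ A + 1e-3 * np.eye(9)            # toy autocorrelation matrix
    p = A.T @ rng.standard_normal(64)         # toy cross-correlation vector
    c = alf_coefficients(R, p)
    print(np.allclose(R @ c, p), isqrt_sub_cmp(186 * 186))
```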

Hurdle Model for Longitudinal Zero-Inflated Count Data Analysis (영과잉 경시적 가산자료 분석을 위한 허들모형)

  • Jin, Iktae;Lee, Keunbaik
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.6
    • /
    • pp.923-932
    • /
    • 2014
  • The hurdle model can be used to analyze zero-inflated count data. It is a mixture of a logit model for the binary component and a truncated Poisson model for the truncated count component. We propose a new hurdle model with a general heterogeneous random-effects covariance matrix, based on the modified Cholesky decomposition, to analyze longitudinal zero-inflated count data. This decomposition factors the random-effects covariance matrix into generalized autoregressive parameters and innovation variances. The parameters are modeled using (generalized) linear models and estimated with a Bayesian method. We use these methods to analyze a real dataset in detail.
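
For readers unfamiliar with the modified Cholesky decomposition mentioned above, the following minimal sketch (illustrative only, not the authors' code) factors a covariance matrix Sigma so that T Sigma T' = D, where the below-diagonal entries of the unit lower triangular T are the negatives of the generalized autoregressive parameters and the diagonal of D holds the innovation variances.

```python
# Illustrative modified Cholesky decomposition: T @ Sigma @ T.T = D.
import numpy as np

def modified_cholesky(Sigma):
    L = np.linalg.cholesky(Sigma)                    # Sigma = L L^T
    T = np.diag(np.diag(L)) @ np.linalg.inv(L)       # unit lower triangular
    D = np.diag(L) ** 2                              # innovation variances
    garp = -np.tril(T, k=-1)                         # generalized autoregressive parameters
    return T, D, garp

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    Sigma = A @ A.T + 5 * np.eye(5)                  # toy random-effects covariance
    T, D, garp = modified_cholesky(Sigma)
    print(np.allclose(T @ Sigma @ T.T, np.diag(D)))  # True
```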

Domain Decomposition using Substructuring Method and Parallel Computation of the Rigid-Plastic Finite Element Analysis (부구조법에 의한 영역 분할 및 강소성 유한요소해석의 병렬 계산)

  • Park, Keun;Yang, Dong-Yol
    • Transactions of Materials Processing
    • /
    • v.7 no.5
    • /
    • pp.474-480
    • /
    • 1998
  • In the present study, a domain decomposition scheme using the substructuring method is developed to improve the computational efficiency of the finite element analysis of metal forming processes. In order to avoid computing an inverse matrix during the substructuring procedure, the modified Cholesky decomposition method is implemented. Since the substructuring method makes the subdomain data independent, the program is easily parallelized using the Parallel Virtual Machine (PVM) library on a workstation cluster connected over a network. A numerical example of simple upsetting is calculated, and the speed-up ratio is evaluated for various numbers of subdomains and processors. The efficiency of the parallel computation is discussed by comparing the results.
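
A hedged sketch of the substructuring idea follows (a toy linear system, not the authors' rigid-plastic code): the subdomain matrix is condensed onto its interface degrees of freedom through a Schur complement, and the required action of the interior inverse is obtained from a Cholesky factorization and triangular solves rather than an explicit inverse.

```python
# Toy static condensation of one subdomain onto its interface unknowns.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def condense(K, f, interior, interface):
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, interface)]
    Kbb = K[np.ix_(interface, interface)]
    fi, fb = f[interior], f[interface]
    chol_ii = cho_factor(Kii, lower=True)            # Kii = L L^T, no inverse formed
    Y = cho_solve(chol_ii, Kib)                      # Kii^{-1} Kib via triangular solves
    S = Kbb - Kib.T @ Y                              # interface Schur complement
    g = fb - Kib.T @ cho_solve(chol_ii, fi)          # condensed right-hand side
    return S, g, chol_ii, Kib

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.standard_normal((8, 8))
    K = A @ A.T + 8 * np.eye(8)                      # toy SPD subdomain matrix
    f = rng.standard_normal(8)
    S, g, chol_ii, Kib = condense(K, f, interior=list(range(6)), interface=[6, 7])
    ub = np.linalg.solve(S, g)                       # interface unknowns (after assembly)
    ui = cho_solve(chol_ii, f[:6] - Kib @ ub)        # back-substitute interior unknowns
    print(np.allclose(K @ np.concatenate([ui, ub]), f))   # True
```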

Domain Decomposition using Substructuring Method and Parallel Computation of the Rigid-Plastic Finite Element Analysis (부구조법에 의한 영역 분할 및 강소성 유한요소해석의 병렬 계산)

  • Park, Keun;Yang, Dong-Yol
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 1998.03a
    • /
    • pp.246-249
    • /
    • 1998
  • In the present study, domain decomposition using the substructuring method is developed to improve the computational efficiency of the finite element analysis of metal forming processes. In order to avoid computing an inverse matrix during the substructuring procedure, the modified Cholesky decomposition method is implemented. Since the substructuring method makes the subdomain data independent, the program is easily parallelized using the Parallel Virtual Machine (PVM) library on a workstation cluster connected over a network. A numerical example of simple upsetting is calculated, and the speed-up ratio is evaluated for various domain decompositions and numbers of processors. Comparing the results, it is concluded that the proposed method improves performance.

Comparison of Preconditioned Conjugate Gradient Methods for Adaptive Finite Element Analysis (유한요소 적응분할해석을 위한 선조정 공액구배법들의 비교연구)

  • 주관정
    • Computational Structural Engineering
    • /
    • v.1 no.2
    • /
    • pp.121-130
    • /
    • 1988
  • Adaptive refinements yield a large sparse system of equations. In solving such a system, the core storage requirement is an important consideration; accordingly, an iterative method that minimizes core storage while providing a high rate of convergence is called for. In this paper, conjugate gradient algorithms with various preconditioners, including the incomplete Cholesky decomposition, are examined.
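
As an illustration of the incomplete Cholesky preconditioning examined in the paper (a minimal sketch using a 1-D Laplacian test matrix, not the paper's finite element systems), the snippet below builds an IC(0) factor that keeps only the nonzero pattern of A and applies it inside a preconditioned conjugate gradient loop.

```python
# IC(0)-preconditioned conjugate gradients on a small SPD test matrix.
import numpy as np
from scipy.linalg import solve_triangular

def ichol0(A):
    """Dense-storage IC(0): Cholesky restricted to the lower sparsity pattern of A."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def pcg(A, b, L, tol=1e-10, maxiter=200):
    """Preconditioned CG with M = L L^T applied via two triangular solves."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_triangular(L.T, solve_triangular(L, r, lower=True), lower=False)
    p, rz = z.copy(), r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = solve_triangular(L.T, solve_triangular(L, r, lower=True), lower=False)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

if __name__ == "__main__":
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian (SPD)
    b = np.ones(n)
    x = pcg(A, b, ichol0(A))
    print(np.linalg.norm(A @ x - b))                        # ~0
```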

Autoregressive Cholesky Factor Modeling for Marginalized Random Effects Models

  • Lee, Keunbaik;Sung, Sunah
    • Communications for Statistical Applications and Methods
    • /
    • v.21 no.2
    • /
    • pp.169-181
    • /
    • 2014
  • Marginalized random effects models (MREM) are commonly used to analyze longitudinal categorical data when the population-averaged effects are of interest. In these models, random effects explain both subject and time variation. Estimating the random-effects covariance matrix is not simple in MREM because of its high dimension and the positive-definiteness constraint. A relatively simple correlation structure, such as a homogeneous AR(1) structure, is often assumed; however, this assumption is too strong, and as a consequence the estimates of the fixed effects can be biased. To avoid this problem, we introduce an approach that models a heterogeneous random-effects covariance matrix using a modified Cholesky decomposition. The approach yields parameters that can be modeled easily without concern that the resulting estimator will fail to be positive definite, and the interpretation of the parameters is sensible. We analyze metabolic syndrome data from a Korean Genomic Epidemiology Study using this method.
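
The modeling convenience described above can be shown with a short sketch (an illustrative example, not the authors' estimation code): any unconstrained real-valued generalized autoregressive parameters and log innovation variances map back to a covariance matrix that is positive definite by construction.

```python
# Build a valid covariance from unconstrained modified-Cholesky parameters.
import numpy as np

def covariance_from_garp(phi, log_innov_var):
    """phi: strictly lower triangular GARP matrix; log_innov_var: length-q vector."""
    q = len(log_innov_var)
    T = np.eye(q) - np.tril(phi, k=-1)               # unit lower triangular, -phi below diagonal
    D = np.diag(np.exp(log_innov_var))               # positive innovation variances
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T                         # positive definite by construction

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    q = 4
    phi = np.tril(rng.standard_normal((q, q)), k=-1)     # unconstrained GARPs
    Sigma = covariance_from_garp(phi, rng.standard_normal(q))
    print(np.all(np.linalg.eigvalsh(Sigma) > 0))         # True: no PD constraint needed
```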

Enhanced data-driven simulation of non-stationary winds using DPOD based coherence matrix decomposition

  • Liyuan Cao;Jiahao Lu;Chunxiang Li
    • Wind and Structures
    • /
    • v.39 no.2
    • /
    • pp.125-140
    • /
    • 2024
  • The simulation of non-stationary wind velocity is particularly crucial for the wind-resistant design of slender structures. Recently, data-driven simulation methods have received much attention because of their straightforwardness; however, they face efficiency issues as the number of simulation points increases. Against this background, this paper proposes a time-varying coherence matrix decomposition method based on Diagonal Proper Orthogonal Decomposition (DPOD) interpolation for the data-driven simulation of non-stationary wind velocity based on the S-transform (ST). Its core idea is to decompose the coherence matrix instead of the measured time-frequency power spectrum matrix obtained from the ST. Because the decomposition of the time-varying coherence matrix is relatively smooth, DPOD interpolation can be introduced to accelerate it, and the DPOD interpolation technique is extended to simulation based on measured wind velocity. Numerical experiments show that the reconstruction results of the coherence matrix interpolation are consistent with the target values, and that the interpolation is computationally more efficient than both the coherence matrix time-frequency interpolation method and the coherence matrix POD interpolation method. Compared with existing data-driven simulation methods, the proposed method addresses the efficiency issue caused by the number of Cholesky decompositions growing with the number of simulation points, significantly enhancing the efficiency of simulating multivariate non-stationary wind velocities. Meanwhile, the simulated data preserve the time-frequency characteristics of the measured wind velocity well.
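
A stationary, single-frequency sketch of the core identity behind coherence-matrix decomposition is given below (the paper's time-varying setting and DPOD interpolation are not reproduced, and the matrices are synthetic): writing the cross-spectral matrix as S = D^{1/2} C D^{1/2}, with D the diagonal auto-spectra and C the coherence matrix, the Cholesky factor of S is simply D^{1/2} times the Cholesky factor of C, so only the smoother coherence matrix needs to be factorized.

```python
# Cholesky factor of a cross-spectral matrix obtained from its coherence matrix.
import numpy as np

def chol_via_coherence(S):
    d = np.sqrt(np.diag(S))                      # square roots of the auto-spectra
    C = S / np.outer(d, d)                       # coherence matrix (unit diagonal)
    B = np.linalg.cholesky(C)                    # factor of the smooth coherence matrix
    return d[:, None] * B                        # lower-triangular factor of S

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    A = rng.standard_normal((6, 6))
    S = A @ A.T + 6 * np.eye(6)                  # toy cross-spectral matrix at one frequency
    H = chol_via_coherence(S)
    print(np.allclose(H @ H.T, S), np.allclose(H, np.linalg.cholesky(S)))
```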

A Simplified Efficient Algorithm for Blind Detection of Orthogonal Space-Time Block Codes

  • Pham, Van Su;Mai, Linh;Lee, Jae-Young;Yoon, Gi-Wan
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.3
    • /
    • pp.261-265
    • /
    • 2008
  • This work presents a simplified, efficient blind detection algorithm for orthogonal space-time block codes (OSTBC). First, the proposed decoder exploits a decomposition of the upper triangular matrix R, obtained from the Cholesky factorization of the composite channel matrix, to form an easy-to-solve blind detection equation. Second, to avoid a high computational load, the proposed decoder applies a sub-optimal QR-based decoder. Computer simulation results verify that the proposed decoder significantly reduces computational complexity while maintaining bit-error-rate (BER) performance.
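
The link between the Cholesky-based detection equation and the QR-based decoder rests on a standard identity, sketched below for a real-valued toy channel matrix (an illustration, not the paper's decoder): the Cholesky factor of the Gram matrix H^T H coincides, up to the signs on its diagonal, with the R factor from a QR decomposition of H.

```python
# The Cholesky factor of H^T H equals the QR factor R after a diagonal sign fix.
import numpy as np

rng = np.random.default_rng(5)
H = rng.standard_normal((8, 4))                      # toy real-valued channel matrix
G = H.T @ H                                          # Gram matrix
R_chol = np.linalg.cholesky(G).T                     # upper triangular, positive diagonal
Q, R_qr = np.linalg.qr(H)                            # thin QR of the channel matrix
signs = np.sign(np.diag(R_qr))                       # fix QR's diagonal sign ambiguity
print(np.allclose(R_chol, signs[:, None] * R_qr))    # True
```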

A Comparative Study of Covariance Matrix Estimators in High-Dimensional Data (고차원 데이터에서 공분산행렬의 추정에 대한 비교연구)

  • Lee, DongHyuk;Lee, Jae Won
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.5
    • /
    • pp.747-758
    • /
    • 2013
  • The covariance matrix is important in multivariate statistical analysis, and the sample covariance matrix is commonly used as its estimator. In high-dimensional data the dimension exceeds the sample size, so the sample covariance matrix may not be suitable: it is known to perform poorly and may not even be invertible. A number of covariance matrix estimators have recently been proposed, following three different approaches: shrinkage, thresholding, and the modified Cholesky decomposition. We compare the performance of these newly proposed estimators in various situations.
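
A toy comparison in the spirit of the paper (a hedged sketch with synthetic data, not the authors' study design) contrasts the sample covariance in a p > n setting with a Ledoit-Wolf shrinkage estimator and a simple hard-thresholding estimator, scoring each by its Frobenius distance from the true covariance.

```python
# Compare covariance estimators when the dimension exceeds the sample size.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(6)
p, n = 100, 40                                                           # p > n
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))     # AR(1)-type truth
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

S = np.cov(X, rowvar=False)                          # sample covariance (rank-deficient)
S_lw = LedoitWolf().fit(X).covariance_               # shrinkage estimator

def hard_threshold(S, t):
    """Zero out small off-diagonal entries; keep the diagonal intact."""
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

S_th = hard_threshold(S, t=0.2)                      # thresholding estimator

for name, est in [("sample", S), ("Ledoit-Wolf", S_lw), ("thresholded", S_th)]:
    print(name, np.linalg.norm(est - Sigma, "fro"))
```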

Simulation of stationary Gaussian stochastic wind velocity field

  • Ding, Quanshun;Zhu, Ledong;Xiang, Haifan
    • Wind and Structures
    • /
    • v.9 no.3
    • /
    • pp.231-243
    • /
    • 2006
  • An improvement to the spectral representation algorithm for the simulation of wind velocity fields on large-scale structures is proposed in this paper, taking the method proposed by Deodatis (1996) as its basis. Firstly, an interpolation approximation is introduced to simplify the computation of the lower triangular matrix obtained from the Cholesky decomposition of the cross-spectral density (CSD) matrix, since each element of the triangular matrix varies continuously with frequency. The Fast Fourier Transform (FFT) technique is used to further enhance computational efficiency. Secondly, as an alternative spectral representation, the column vectors of the triangular matrix in the Deodatis formula are replaced with an appropriate number of eigenvectors from the spectral decomposition of the CSD matrix. Lastly, a turbulent wind velocity field over a vertical plane of a long-span bridge (span-wise) is simulated to illustrate the proposed schemes. The proposed schemes require less computer memory and run more efficiently than the existing traditional method, and the reliability of the interpolation approximation in the simulation of wind velocity fields is confirmed.
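
The interpolation approximation can be illustrated with a small sketch (the spectrum and coherence models below are simple placeholders, not those of the paper): the Cholesky factor of the CSD matrix is computed only on a coarse frequency grid, and its entries, which vary smoothly with frequency, are interpolated linearly at intermediate frequencies and compared with the exactly computed factors.

```python
# Interpolate Cholesky factors of a toy CSD matrix across frequency.
import numpy as np

x = np.array([0.0, 20.0, 40.0, 60.0])                # simulation point coordinates (m)

def csd_matrix(f):
    """Toy CSD at frequency f: identical auto-spectra with exponentially decaying coherence."""
    auto = 1.0 / (1.0 + f) ** (5.0 / 3.0)            # placeholder auto-spectrum
    coh = np.exp(-0.1 * f * np.abs(np.subtract.outer(x, x)))   # placeholder coherence
    return auto * coh

coarse = np.linspace(0.1, 2.0, 5)                    # coarse frequency grid (Hz)
fine = np.linspace(0.1, 2.0, 41)                     # target frequency grid (Hz)
L_coarse = np.array([np.linalg.cholesky(csd_matrix(f)) for f in coarse])

# Interpolate every lower-triangular entry of the factor across frequency.
n = len(x)
L_interp = np.zeros((len(fine), n, n))
for i in range(n):
    for j in range(i + 1):
        L_interp[:, i, j] = np.interp(fine, coarse, L_coarse[:, i, j])

L_exact = np.array([np.linalg.cholesky(csd_matrix(f)) for f in fine])
print(np.max(np.abs(L_interp - L_exact)))            # interpolation error of the factor
```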