• Title/Summary/Keyword: Sparse data

Sparse Point Representation Based on Interpolation Wavelets (보간 웨이블렛 기반의 Sparse Point Representation)

  • Park, Jun-Pyo;Lee, Do-Hyung;Maeng, Joo-Sung
    • Transactions of the Korean Society of Mechanical Engineers B / v.30 no.1 s.244 / pp.8-15 / 2006
  • A Sparse Point Representation (SPR) based on interpolation wavelets is presented. The SPR is implemented for the purpose of CFD data compression. Unlike conventional wavelet transformation, the SPR relieves the computing workload in a fashion similar to the lifting scheme, applying the splitting and prediction procedures in sequence; however, it skips the update procedure, which is a major part of the lifting scheme. Data compression is achieved by a suitable thresholding method. The advantage of the SPR method is that, by keeping the physical values at the even points, the low-frequency filtering procedure is omitted and the unphysical thresholding associated with it is avoided in the reconstruction process. An additional singular-feature detection algorithm is implemented to preserve singular features such as shocks and vortices. Several numerical tests show the adequacy of SPR for CFD data, and it is also shown that the method can be easily extended to nonlinear adaptive wavelets for enhanced feature capturing.
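
The abstract outlines a split/predict/threshold pipeline that omits the update step of the lifting scheme. Below is a minimal one-level sketch of that idea in Python/NumPy, using a linear interpolation predictor; the function names, the linear-order predictor, and the toy signal are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spr_compress(signal, threshold):
    """One-level split/predict transform without the lifting 'update' step.

    Even samples keep their physical values; odd samples are replaced by
    detail coefficients (odd minus an interpolation from even neighbours),
    and small details are thresholded away to obtain a sparse point set.
    """
    even, odd = signal[0::2], signal[1::2]
    # Linear-interpolation predictor (higher-order interpolating wavelets
    # would use more even neighbours).
    right = np.append(even[1:], even[-1])[:len(odd)]
    prediction = 0.5 * (even[:len(odd)] + right)
    details = odd - prediction
    details[np.abs(details) < threshold] = 0.0  # thresholding -> compression
    return even, details

def spr_reconstruct(even, details):
    """Rebuild the signal; no low-frequency filtering is needed because the
    even points already store the original physical values."""
    right = np.append(even[1:], even[-1])[:len(details)]
    prediction = 0.5 * (even[:len(details)] + right)
    out = np.empty(len(even) + len(details))
    out[0::2], out[1::2] = even, prediction + details
    return out

if __name__ == "__main__":
    x = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
    even, det = spr_compress(x, threshold=1e-3)
    print("retained details:", np.count_nonzero(det), "of", det.size)
    print("max reconstruction error:", np.max(np.abs(spr_reconstruct(even, det) - x)))
```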

Estimation of high-dimensional sparse cross correlation matrix

  • Yin, Cao;Kwangok, Seo;Soohyun, Ahn;Johan, Lim
    • Communications for Statistical Applications and Methods / v.29 no.6 / pp.655-664 / 2022
  • Motivated by an integrative study of multi-omics data, we are interested in estimating the structure of the sparse cross correlation matrix of two high-dimensional random vectors. We recast the problem as a multiple testing problem and propose a new method to estimate the sparse structure of the cross correlation matrix. To do so, we test the correlation coefficients simultaneously and threshold them by controlling the FDR at a predetermined level α. Further, we apply the proposed method and an alternative adaptive thresholding procedure by Cai and Liu (2016) to the integrative analysis of the protein expression data (X) and the mRNA expression data (Y) in the TCGA breast cancer cohort. By varying the FDR level α, we show that the new procedure is consistently more efficient than the alternative in estimating the sparse structure of the cross correlation matrix.
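
A hedged sketch of the multiple-testing idea described above: test every cross correlation coefficient, then threshold with the Benjamini-Hochberg FDR procedure at level α. The t-based p-values, the BH step-up rule, and the toy data are assumptions for illustration; they are not claimed to match the paper's exact test or the Cai and Liu (2016) alternative.

```python
import numpy as np
from scipy import stats

def sparse_cross_correlation(X, Y, alpha=0.05):
    """Estimate the sparse structure of the cross correlation matrix of X and Y
    by testing each coefficient and thresholding via Benjamini-Hochberg FDR."""
    n = X.shape[0]
    Xc = (X - X.mean(0)) / X.std(0, ddof=1)
    Yc = (Y - Y.mean(0)) / Y.std(0, ddof=1)
    R = Xc.T @ Yc / (n - 1)                      # sample cross correlations
    # Two-sided p-values from the t statistic of each correlation coefficient.
    t = R * np.sqrt((n - 2) / np.clip(1 - R**2, 1e-12, None))
    p = 2 * stats.t.sf(np.abs(t), df=n - 2)
    # Benjamini-Hochberg step-up procedure on the flattened p-values.
    flat = np.sort(p.ravel())
    m = flat.size
    below = np.nonzero(flat <= alpha * (np.arange(1, m + 1) / m))[0]
    cutoff = flat[below[-1]] if below.size else -np.inf
    return np.where(p <= cutoff, R, 0.0)         # thresholded sparse estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))
    Y = 0.5 * X + rng.normal(size=(200, 30))     # correlated on the diagonal
    R_hat = sparse_cross_correlation(X, Y, alpha=0.1)
    print("non-zero entries:", np.count_nonzero(R_hat))
```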

Moving Object Detection Using Sparse Approximation and Sparse Coding Migration

  • Li, Shufang;Hu, Zhengping;Zhao, Mengyao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.5 / pp.2141-2155 / 2020
  • To meet the requirements of background change, illumination variation, moving-shadow interference, and high accuracy when detecting objects from a moving camera, while remaining real-time and efficient, this paper presents an object detection algorithm based on sparse approximation recursion and sparse coding migration in subspace. First, low-rank sparse decomposition is used to reduce the dimension of the data. Combined with dictionary sparse representation, a computational model is established from the recursive formula of sparse approximation, with the video sequences taken as subspace sets, and the moving object is obtained by the background difference method, which effectively reduces the computational complexity and running time. Following the idea of sparse coding migration, these operations are carried out in the down-sampled space to further reduce the computational complexity and memory storage; this adapts to multi-scale target objects and mitigates the impact of large anomaly areas. Finally, experiments are carried out on the VDAO dataset, which contains 59 sets of videos. The experimental results show that the algorithm detects moving objects effectively for a camera moving at uniform speed, with both low computational complexity and low storage requirements, so the proposed algorithm is suitable for detection systems with strict real-time requirements.
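
The core of the pipeline above is a low-rank background plus sparse foreground view of the frame matrix, combined with background differencing. The sketch below illustrates only that first step, using a truncated-SVD background model and a residual threshold; the recursive sparse approximation, sparse coding migration, and down-sampling of the paper are not reproduced, and the rank, threshold, and toy frames are assumptions.

```python
import numpy as np

def background_difference(frames, rank=1, tau=3.0):
    """frames: (n_pixels, n_frames) matrix of vectorized video frames.

    A truncated SVD gives a low-rank background model; pixels whose residual
    exceeds tau standard deviations are flagged as moving-object candidates.
    This is a simplified stand-in for the low-rank sparse decomposition plus
    background difference steps described in the abstract.
    """
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    residual = frames - background
    mask = np.abs(residual) > tau * residual.std()
    return background, mask

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    static = np.outer(rng.normal(size=400), np.ones(40))   # rank-1 "background"
    frames = static + 0.05 * rng.normal(size=static.shape)
    frames[100:150, 20:25] += 4.0                           # a bright moving patch
    _, mask = background_difference(frames, rank=1)
    print("flagged pixels per frame:", mask.sum(axis=0)[18:27])
```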

Modal parameter identification with compressed samples by sparse decomposition using the free vibration function as dictionary

  • Kang, Jie;Duan, Zhongdong
    • Smart Structures and Systems / v.25 no.2 / pp.123-133 / 2020
  • Compressive sensing (CS) is a newly developed data acquisition and processing technique that takes advantage of the sparse structure in signals. Normally, signals in their primitive space or format are reconstructed from their compressed measurements before further treatment, such as modal analysis of vibration data. This approach causes problems such as leakage and loss of fidelity, and the reconstruction itself is computationally costly. It is therefore appealing to work directly on the compressed data without prior reconstruction of the original signal. In this paper, a direct approach for modal analysis of damped systems is proposed by decomposing the compressed measurements with an appropriate dictionary. The damped free vibration function is adopted to form the atoms of the dictionary for the subsequent sparse decomposition. Compared with the commonly used Fourier bases, the damped free vibration function spans a space with both frequency and damping as control variables. To efficiently search this enormous two-dimensional dictionary over frequency and damping, a two-step strategy is combined with Orthogonal Matching Pursuit (OMP) to determine the optimal atom, which greatly reduces the computation of the sparse decomposition. The performance of the proposed method is demonstrated on a numerical and an experimental example, and its advantages are revealed by comparison with another method of this kind based on the POD technique.
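
To make the dictionary idea concrete, here is a small sketch of OMP over a grid of damped free vibration atoms parameterized by frequency and damping ratio. It runs on an uncompressed toy signal and uses a single grid scan instead of the paper's two-step coarse-to-fine search; the grid ranges, atom form, and toy signal are illustrative assumptions.

```python
import numpy as np

def damped_atom(t, f, zeta):
    """Normalized damped free-vibration atom exp(-zeta*w*t) * cos(wd*t)."""
    w = 2 * np.pi * f
    wd = w * np.sqrt(1.0 - zeta**2)
    a = np.exp(-zeta * w * t) * np.cos(wd * t)
    return a / np.linalg.norm(a)

def omp_damped(y, t, freqs, dampings, n_modes=1):
    """Greedy OMP over a (frequency, damping) dictionary of damped free
    vibration functions; a single grid scan replaces the paper's two-step
    coarse/fine search."""
    atoms = np.column_stack([damped_atom(t, f, z) for f in freqs for z in dampings])
    params = [(f, z) for f in freqs for z in dampings]
    residual, selected, chosen = y.copy(), [], []
    for _ in range(n_modes):
        k = np.argmax(np.abs(atoms.T @ residual))        # best-matching atom
        chosen.append(params[k])
        selected.append(atoms[:, k])
        A = np.column_stack(selected)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # re-fit on selected atoms
        residual = y - A @ coef
    return chosen, residual

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 400)
    y = np.exp(-0.02 * 2 * np.pi * 5.0 * t) * np.cos(2 * np.pi * 5.0 * t)
    freqs = np.linspace(1.0, 10.0, 91)        # 0.1 Hz grid
    dampings = np.linspace(0.005, 0.05, 10)   # damping ratio grid
    modes, _ = omp_damped(y, t, freqs, dampings, n_modes=1)
    print("identified (frequency, damping):", modes[0])
```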

Implementation and Experiments of Sparse Matrix Data Structure for Heat Conduction Equations

  • Kim, Jae-Gu;Lee, Ju-Hee;Park, Geun-Duk
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.67-74 / 2015
  • The heat conduction equation, a type of Poisson equation applied in various areas of engineering, is generally solved by iterative methods. Finite-difference discretization of the heat conduction equation yields a system of simultaneous equations in which each row contains non-zero elements only for the neighboring grid points, so the coefficient matrix is sparse. In this paper, we propose a sparse matrix data structure that solves the heat conduction equation faster and with less memory. To verify that the proposed data structure computes the solution efficiently compared to other sparse matrix representations, we apply the representative iterative method, CG (Conjugate Gradient), and present experimental results for the total solution time, the computation time of each step together with its share of the total, and the memory usage. These results can be used to estimate the main costs of solving the general heat conduction equation, such as the time consumed and the amount of memory used.
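
As a point of reference for the kind of storage and solver the paper benchmarks, the sketch below stores the 1-D finite-difference heat conduction operator in a generic CSR (compressed sparse row) layout and solves it with CG. It is not the data structure proposed in the paper; the CSR layout, grid size, and right-hand side are assumptions for illustration.

```python
import numpy as np

class CSRMatrix:
    """Minimal compressed sparse row storage: only the non-zero coefficients
    of the discretized operator are kept (a generic CSR sketch, not the
    authors' proposed structure)."""
    def __init__(self, data, indices, indptr, shape):
        self.data, self.indices, self.indptr, self.shape = data, indices, indptr, shape

    def matvec(self, x):
        y = np.zeros(self.shape[0])
        for i in range(self.shape[0]):
            start, end = self.indptr[i], self.indptr[i + 1]
            y[i] = np.dot(self.data[start:end], x[self.indices[start:end]])
        return y

def laplacian_1d(n):
    """CSR form of the 1-D finite-difference heat conduction operator [-1, 2, -1]."""
    data, indices, indptr = [], [], [0]
    for i in range(n):
        for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
            if 0 <= j < n:
                data.append(v); indices.append(j)
        indptr.append(len(data))
    return CSRMatrix(np.array(data), np.array(indices), np.array(indptr), (n, n))

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Plain CG for the symmetric positive definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A.matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A.matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

if __name__ == "__main__":
    n = 100
    A = laplacian_1d(n)
    b = np.ones(n) / (n + 1) ** 2      # uniform heat source scaled by h^2
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(b - A.matvec(x)))
```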

A Local Linear Kernel Estimator for Sparse Multinomial Data

  • Baek, Jangsun
    • Journal of the Korean Statistical Society / v.27 no.4 / pp.515-529 / 1998
  • Burman (1987) and Hall and Titterington (1987) studied kernel smoothing for sparse multinomial data in detail. Both of their estimators for cell probabilities are sparse-asymptotically consistent under some restrictive conditions on the true cell probabilities. Dong and Simonoff (1994) adopted boundary kernels to relax these restrictive conditions. We propose a local linear kernel estimator, popular in nonparametric regression, to estimate the cell probabilities. No boundary adjustment is necessary for this estimator since it adapts automatically to estimation at the boundaries. It is shown that our estimator attains the optimal rate of convergence in mean sum of squared error under sparseness. Some simulation results and a real data application are presented to illustrate the performance of the estimator.
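
A rough sketch of the local linear smoothing idea for sparse multinomial cell probabilities follows; the Gaussian kernel, the equally spaced cell coordinates, the renormalization step, and the simulated counts are assumptions, not the paper's exact estimator or bandwidth choice.

```python
import numpy as np

def local_linear_cell_probs(counts, bandwidth):
    """Local linear kernel smoothing of multinomial cell probabilities.

    counts[i] is the observed count in cell i; cells are treated as equally
    spaced points in [0, 1]. Because the estimator is local *linear*, it
    adapts automatically at the first and last cells, so no separate boundary
    kernel is needed.
    """
    k = len(counts)
    x = (np.arange(k) + 0.5) / k
    phat = counts / counts.sum()               # raw (sparse) cell proportions
    smoothed = np.empty(k)
    for j, x0 in enumerate(x):
        u = (x - x0) / bandwidth
        w = np.exp(-0.5 * u**2)                # Gaussian kernel weights
        X = np.column_stack([np.ones(k), x - x0])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ phat)
        smoothed[j] = beta[0]                  # local intercept = fit at x0
    smoothed = np.clip(smoothed, 0, None)
    return smoothed / smoothed.sum()           # renormalize to probabilities

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_p = np.linspace(1.0, 3.0, 50); true_p /= true_p.sum()
    counts = rng.multinomial(60, true_p)       # sparse: many cells with 0 counts
    est = local_linear_cell_probs(counts.astype(float), bandwidth=0.1)
    print("MSE raw:     ", np.mean((counts / 60 - true_p) ** 2))
    print("MSE smoothed:", np.mean((est - true_p) ** 2))
```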

Effect of Sparse Decomposition on Various ICA Algorithms With Application to Image Data

  • Khan, Asif;Kim, In-Taek
    • Proceedings of the IEEK Conference / 2008.06a / pp.967-968 / 2008
  • In this paper we demonstrate the effect of sparse decomposition on various Independent Component Analysis (ICA) algorithms for separating simultaneous linear mixtures of independent 2-D signals (images). We show, using simulation results, that applying sparse decomposition before the Kernel ICA algorithm (Sparse Kernel ICA) produces the best results compared to the other ICA algorithms.
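
The sketch below only illustrates the "sparsify first, then run ICA" idea: a simple gradient (high-pass) transform is applied to the mixtures before estimating the unmixing matrix. FastICA from scikit-learn is used as a stand-in because Kernel ICA is not part of that library, and the toy sources, mixing matrix, and gradient transform are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_after_sparsifying(mixtures):
    """mixtures: (n_mixtures, n_pixels) flattened image mixtures.

    A horizontal-gradient transform makes the data much sparser; the unmixing
    matrix is estimated on that sparse representation and then applied to the
    original mixtures (FastICA stands in for the Kernel ICA of the abstract).
    """
    sparse_repr = np.diff(mixtures, axis=1)            # sparsifying transform
    ica = FastICA(n_components=mixtures.shape[0], random_state=0)
    ica.fit(sparse_repr.T)                             # samples = pixels, features = mixtures
    W = ica.components_                                # estimated unmixing matrix
    return W @ mixtures                                # separate the original images

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    s1 = np.kron(rng.integers(0, 2, 64), np.ones(64)).astype(float)  # blocky source
    s2 = np.sin(np.linspace(0.0, 40.0 * np.pi, 64 * 64))             # smooth source
    A = np.array([[1.0, 0.6], [0.4, 1.0]])                           # mixing matrix
    recovered = ica_after_sparsifying(A @ np.vstack([s1, s2]))
    print("recovered sources shape:", recovered.shape)
```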

ASSVD: Adaptive Sparse Singular Value Decomposition for High Dimensional Matrices

  • Ding, Xiucai;Chen, Xianyi;Zou, Mengling;Zhang, Guangxing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2634-2648 / 2020
  • In this paper, an adaptive sparse singular value decomposition (ASSVD) algorithm is proposed to estimate the signal matrix when only one data matrix contaminated by high-dimensional white noise is observed. We assume that the signal matrix is low-rank and has sparse singular vectors, i.e., it is simultaneously low-rank and sparse; it is therefore a structured matrix whose non-zero entries are confined to some small blocks. The proposed algorithm estimates the singular values and singular vectors separately by exploiting the structure of the singular vectors, using recent developments in Random Matrix Theory known as the anisotropic Marchenko-Pastur law. We then prove that when the signal is strong, in the sense that the signal-to-noise ratio is above some threshold, our estimator is consistent and outperforms many state-of-the-art algorithms. Moreover, our estimator adapts to the data set and does not require the noise variance to be known or estimated. Numerical simulations indicate that ASSVD still works well when the signal matrix is not very sparse.
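
A simplified thresholding sketch of the sparse-SVD idea follows: take the leading singular triplets of the noisy matrix, zero out small singular vector entries, and renormalize. The fixed entry threshold and the toy signal are assumptions; the adaptive corrections via the anisotropic Marchenko-Pastur law that define ASSVD are not reproduced here.

```python
import numpy as np

def sparse_svd(Y, rank, vec_threshold=0.05):
    """Estimate a low-rank signal with sparse singular vectors from a noisy
    matrix Y: keep the leading singular triplets, zero out small singular
    vector entries, and renormalize the vectors."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    U = np.where(np.abs(U) > vec_threshold, U, 0.0)
    Vt = np.where(np.abs(Vt) > vec_threshold, Vt, 0.0)
    U /= np.linalg.norm(U, axis=0, keepdims=True)       # renormalize columns
    Vt /= np.linalg.norm(Vt, axis=1, keepdims=True)     # renormalize rows
    return (U * s) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    u = np.zeros(200); u[:20] = 1.0 / np.sqrt(20)        # sparse singular vectors
    v = np.zeros(300); v[:30] = 1.0 / np.sqrt(30)
    signal = 30.0 * np.outer(u, v)
    noise = rng.normal(scale=0.3, size=signal.shape)
    est = sparse_svd(signal + noise, rank=1)
    print("relative error:", np.linalg.norm(est - signal) / np.linalg.norm(signal))
```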

Mining Frequent Pattern from Large Spatial Data (대용량 공간 데이터로 부터 빈발 패턴 마이닝)

  • Lee, Dong-Gyu;Yi, Gyeong-Min;Jung, Suk-Ho;Lee, Seong-Ho;Ryu, Keun-Ho
    • Journal of Korea Spatial Information System Society / v.12 no.1 / pp.49-56 / 2010
  • Frequent pattern mining techniques for detecting unknown patterns in spatial data have been studied actively. Existing data structures are classified into tree structures and array structures, and each shows weak performance on either dense or sparse data. Since spatial data exhibit both dense and sparse patterns, it is important to mine dense and sparse patterns quickly with a single algorithm. In this paper, we propose a novel data structure, a compressed Patricia frequent pattern tree, together with a frequent pattern mining algorithm based on it that detects both dense and sparse frequent patterns quickly. In our experiments, the proposed algorithm proves to be about 10 times faster than the existing FP-Growth algorithm on both dense and sparse data.
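
For readers unfamiliar with the tree side of these structures, the sketch below builds a plain (uncompressed) frequent pattern tree over a toy transaction set: frequent items are ordered by global frequency so that transactions sharing prefixes share nodes. The Patricia-style path compression and the mining routine of the proposed structure are not reproduced, and the toy transactions and minimum support are assumptions.

```python
from collections import Counter

class FPNode:
    """Node of a plain frequent pattern tree. The paper's structure adds
    Patricia-style path compression on top of this; here we only show how
    transactions sharing frequent-item prefixes share nodes and counts."""
    def __init__(self, item=None):
        self.item, self.count, self.children = item, 0, {}

def build_fp_tree(transactions, min_support):
    freq = Counter(item for t in transactions for item in set(t))
    frequent = {i for i, c in freq.items() if c >= min_support}
    root = FPNode()
    for t in transactions:
        # Keep frequent items only, ordered by descending global frequency.
        items = sorted((i for i in set(t) if i in frequent),
                       key=lambda i: (-freq[i], i))
        node = root
        for item in items:
            node = node.children.setdefault(item, FPNode(item))
            node.count += 1
    return root, freq

def print_tree(node, depth=0):
    if node.item is not None:
        print("  " * depth + f"{node.item}:{node.count}")
    for child in node.children.values():
        print_tree(child, depth + 1)

if __name__ == "__main__":
    transactions = [["a", "b", "c"], ["a", "b"], ["a", "c", "d"],
                    ["b", "c"], ["a", "b", "c", "e"]]
    root, _ = build_fp_tree(transactions, min_support=2)
    print_tree(root)
```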

Hierarchically penalized sparse principal component analysis (계층적 벌점함수를 이용한 주성분분석)

  • Kang, Jongkyeong;Park, Jaeshin;Bang, Sungwan
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.135-145 / 2017
  • Principal component analysis (PCA) describes the variation of multivariate data in terms of a set of uncorrelated variables. Since each principal component is a linear combination of all variables and the loadings are typically non-zero, it is difficult to interpret the derived principal components. Sparse principal component analysis (SPCA) is a specialized technique using the elastic net penalty function to produce sparse loadings in principal component analysis. When data are structured by groups of variables, it is desirable to select variables in a grouped manner. In this paper, we propose a new PCA method to improve variable selection performance when variables are grouped, which not only selects important groups but also removes unimportant variables within identified groups. To incorporate group information into model fitting, we consider a hierarchical lasso penalty instead of the elastic net penalty in SPCA. Real data analyses demonstrate the performance and usefulness of the proposed method.
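
To show the interpretability gain that motivates sparse loadings, the snippet below contrasts ordinary PCA with scikit-learn's SparsePCA on toy grouped data. SparsePCA's lasso-type penalty stands in for both the elastic net SPCA and the paper's hierarchical lasso penalty, neither of which is implemented here, and the toy groups and penalty weight are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Toy data with two groups of variables; the first group drives most variance.
rng = np.random.default_rng(5)
n = 300
group1 = rng.normal(size=(n, 1)) @ np.ones((1, 4)) + 0.1 * rng.normal(size=(n, 4))
group2 = 0.3 * rng.normal(size=(n, 4))
X = np.hstack([group1, group2])

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

# Ordinary PCA loadings are dense; sparse PCA zeroes out the weak group,
# which makes the leading component easier to interpret.
print("PCA  loadings (PC1):", np.round(pca.components_[0], 2))
print("SPCA loadings (PC1):", np.round(spca.components_[0], 2))
```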