• Title/Summary/Keyword: Sparse data

408 search results

Improving Contextual Understanding Using Sparse Attention Models (Sparse Attention 모델을 활용한 효율적인 문맥 이해)

  • Tae-Hoon Her
    • Annual Conference on Human and Language Technology / 2023.10a / pp.694-697 / 2023
  • This paper applies a Sparse Attention model to address problems that can arise in contextual understanding. Experimental results confirm that this method substantially reduces the context-loss rate and is useful for natural language processing. This study suggests a new direction toward better contextual understanding in machine learning and natural language processing; future work will explore various models and methodologies to improve contextual understanding further.

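The abstract above does not specify which sparse-attention variant was used; as an illustrative sketch only, one common pattern (a fixed local-window mask, so each token attends to O(w) neighbors instead of the full sequence) can be written as follows. All names and sizes here are assumptions:

```python
import numpy as np

def local_window_attention(Q, K, V, window=2):
    """Attention in which each token attends only to tokens within
    `window` positions of itself, leaving the score matrix sparse."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    # Banded mask: True where attention is allowed.
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    scores = np.where(mask, scores, -np.inf)     # masked entries -> weight 0
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(6, 4)); K = rng.normal(size=(6, 4)); V = rng.normal(size=(6, 4))
out, W = local_window_attention(Q, K, V, window=1)
```

With `window=1` each row of the attention matrix has at most three non-zero weights, which is the source of the efficiency gain such models target.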

Sparse Matrix Compression Technique and Hardware Design for Lightweight Deep Learning Accelerators (경량 딥러닝 가속기를 위한 희소 행렬 압축 기법 및 하드웨어 설계)

  • Kim, Sunhee;Shin, Dongyeob;Lim, Yong-Seok
    • Journal of Korea Society of Digital Industry and Information Management / v.17 no.4 / pp.53-62 / 2021
  • Deep learning models such as convolutional neural networks and recurrent neural networks process huge amounts of data, so they require a lot of storage and consume considerable time and power for memory access. Recently, research has been conducted on reducing memory usage and access by compressing data, exploiting the fact that much deep learning data is highly sparse and localized. In this paper, we propose a compression-decompression method that stores only the non-zero data together with its location information, discarding the zero data. To generate the location information, the matrix data is divided uniformly into sections, and each section is marked according to whether it contains non-zero data. Section division is not performed just once but repeatedly, and location information is stored at each step, so the data can be compressed appropriately for the ratio and distribution of its zero entries. In addition, we propose a hardware structure that enables compression and decompression without complex operations. It was designed and verified in Verilog, confirming that it can be used in hardware deep learning accelerators.
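The repeated section-division scheme can be sketched in software. The following is a minimal illustrative Python analogue only (the paper's design is Verilog hardware; the binary splitting, leaf size, and storage layout here are assumptions): each section emits one bitmap bit, empty sections stop recursing, and only non-empty leaf sections contribute payload values.

```python
def compress(data, leaf=2):
    """Hierarchical bitmap compression: one bit per section
    (1 = contains non-zero data); only non-empty sections are split
    further, and non-empty leaf sections store their raw values."""
    bitmap, payload = [], []
    def walk(seg):
        nz = any(seg)
        bitmap.append(1 if nz else 0)
        if not nz:
            return                      # all-zero section: bit only
        if len(seg) <= leaf:
            payload.extend(seg)         # leaf section: store values
            return
        half = len(seg) // 2
        walk(seg[:half]); walk(seg[half:])
    walk(list(data))
    return bitmap, payload

def decompress(bitmap, payload, n, leaf=2):
    """Inverse walk: re-expand zero sections from the bitmap alone."""
    bits = iter(bitmap); vals = iter(payload)
    def walk(length):
        if next(bits) == 0:
            return [0] * length
        if length <= leaf:
            return [next(vals) for _ in range(length)]
        half = length // 2
        return walk(half) + walk(length - half)
    return walk(n)
```

For mostly-zero data the bitmap plus payload is much smaller than the original array, and the compression adapts to the distribution of zeros, as the abstract describes.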

Structural novelty detection based on sparse autoencoders and control charts

  • Finotti, Rafaelle P.;Gentile, Carmelo;Barbosa, Flavio;Cury, Alexandre
    • Structural Engineering and Mechanics / v.81 no.5 / pp.647-664 / 2022
  • The powerful data mapping capability of computational deep learning methods has been recently explored in academic works to develop strategies for structural health monitoring through appropriate characterization of dynamic responses. In many cases, these studies concern laboratory prototypes and finite element models to validate the proposed methodologies. Therefore, the present work aims to investigate the capability of a deep learning algorithm called Sparse Autoencoder (SAE) specifically focused on detecting structural alterations in real-case studies. The idea is to characterize the dynamic responses via SAE models and, subsequently, to detect the onset of abnormal behavior through the Shewhart T control chart, calculated with SAE extracted features. The anomaly detection approach is exemplified using data from the Z24 bridge, a classical benchmark, and data from the continuous monitoring of the San Vittore bell-tower, Italy. In both cases, the influence of temperature is also evaluated. The proposed approach achieved good performance, detecting structural changes even under temperature variations.
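The control-chart stage of such a pipeline can be sketched independently of the SAE feature extraction. Below is an illustrative univariate Shewhart-style check on simulated data (the paper applies a Shewhart T chart to SAE-extracted features; the single feature, shift size, and limits here are assumptions):

```python
import numpy as np

def shewhart_flags(feature, n_ref, k=3.0):
    """Flag observations outside mean +/- k*std, with the limits
    estimated from the first n_ref (healthy-condition) samples."""
    ref = feature[:n_ref]
    mu, sigma = ref.mean(), ref.std(ddof=1)
    ucl, lcl = mu + k * sigma, mu - k * sigma
    return (feature > ucl) | (feature < lcl), (lcl, ucl)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 200)        # reference (healthy) period
damaged = rng.normal(8.0, 1.0, 20)         # simulated post-damage shift
flags, (lcl, ucl) = shewhart_flags(np.concatenate([healthy, damaged]), n_ref=200)
```

The onset of abnormal behavior is then the first index where the flags become persistently true.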

A study on the localization of incipient propeller cavitation applying sparse Bayesian learning (희소 베이지안 학습 기법을 적용한 초생 프로펠러 캐비테이션 위치추정 연구)

  • Ha-Min Choi;Haesang Yang;Sock-Kyu Lee;Woojae Seong
    • The Journal of the Acoustical Society of Korea / v.42 no.6 / pp.529-535 / 2023
  • Noise originating from incipient propeller cavitation is assumed to come from a limited number of sources emitting a broadband signal. Conventional methods for cavitation localization are limited because their low accuracy and resolution prevent them from distinguishing adjacent sound sources effectively. In contrast, the sparse Bayesian learning technique exhibits high-resolution restoration performance for sparse signals and offers greater resolution than conventional cavitation localization methods. In this paper, an incipient propeller cavitation localization method using sparse Bayesian learning is proposed and shown to be superior to the conventional method in terms of accuracy and resolution through experimental data from a model ship.

A Signal Detection and Estimation Method Based on Compressive Sensing (압축 센싱 기반의 신호 검출 및 추정 방법)

  • Nguyen, Thu L.N.;Jung, Honggyu;Shin, Yoan
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.6 / pp.1024-1031 / 2015
  • Compressive sensing is a new data acquisition method that enables the reconstruction of sparse or compressible signals from fewer measurements than the Nyquist rate requires, as long as the signal is sparse and the measurement is incoherent. In this paper, we consider a simple hypothesis-testing approach to target detection and estimation problems using compressive sensing, where the performance depends on the sparsity level of the signals being detected. We provide theoretical analysis along with experimental results.
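The reconstruction step that compressive sensing relies on can be illustrated with Orthogonal Matching Pursuit, one standard sparse-recovery algorithm (the sizes, seed, and noiseless setting below are assumptions for illustration; the paper's contribution is the detection-theoretic analysis, not this solver):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build the support of a
    k-sparse x from compressed measurements y = A @ x (m < n)."""
    support, r = [], y.copy()
    xs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))    # most correlated atom
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs             # orthogonalised residual
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(2)
n, m, k = 64, 32, 3                 # m measurements << n unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.5, -2.0, 1.0]
y = A @ x_true                      # noiseless compressed measurements
x_hat = omp(A, y, k)
```

With 32 incoherent measurements, a 3-sparse signal of length 64 is recovered exactly, which is the regime the abstract's "fewer measurements than the Nyquist rate" claim refers to.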

Progressive Compression of 3D Mesh Geometry Using Sparse Approximations from Redundant Frame Dictionaries

  • Krivokuca, Maja;Abdulla, Waleed Habib;Wunsche, Burkhard Claus
    • ETRI Journal / v.39 no.1 / pp.1-12 / 2017
  • In this paper, we present a new approach for the progressive compression of three-dimensional (3D) mesh geometry using redundant frame dictionaries and sparse approximation techniques. We construct the proposed frames from redundant linear combinations of the eigenvectors of a combinatorial mesh Laplacian matrix. We achieve a sparse synthesis of the mesh geometry by selecting atoms from a frame using matching pursuit. Experimental results show that the resulting rate-distortion performance compares favorably with other progressive mesh compression algorithms in the same category, even when a very simple, sub-optimal encoding strategy is used for the transmitted data. The proposed frames also have the desirable property of being able to be applied directly to a manifold mesh having arbitrary topology and connectivity types; thus, no initial remeshing is required and the original mesh connectivity is preserved.
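The atom-selection loop of matching pursuit can be sketched over a generic redundant dictionary (plain unit vectors rather than the paper's mesh-Laplacian eigenvector frames; the dictionary and signal below are illustrative assumptions):

```python
import numpy as np

def matching_pursuit(D, x, n_atoms):
    """Greedily approximate x as a sparse combination of the unit-norm
    columns (atoms) of a redundant dictionary D; returns coefficients
    and the final residual, so that D @ coeffs + r reconstructs x."""
    coeffs = np.zeros(D.shape[1])
    r = x.astype(float).copy()
    for _ in range(n_atoms):
        corr = D.T @ r                      # correlation with each atom
        j = int(np.argmax(np.abs(corr)))    # best-matching atom
        coeffs[j] += corr[j]
        r -= corr[j] * D[:, j]              # remove its contribution
    return coeffs, r

# Redundant dictionary: the standard basis plus one extra unit-norm atom.
D = np.hstack([np.eye(4), np.full((4, 1), 0.5)])
x = np.array([1.0, 2.0, 3.0, 4.0])
coeffs, r = matching_pursuit(D, x, n_atoms=3)
```

Stopping after a few atoms yields a coarse approximation that later atoms progressively refine, which is what makes the representation naturally progressive for transmission.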

CONSTRUCTIONS OF REGULAR SPARSE ANTI-MAGIC SQUARES

  • Chen, Guangzhou;Li, Wen;Xin, Bangying;Zhong, Ming
    • Bulletin of the Korean Mathematical Society / v.59 no.3 / pp.617-642 / 2022
  • For positive integers n and d with d < n, an n × n array A based on 𝒳 = {0, 1, …, nd} is called a sparse anti-magic square of order n with density d, denoted by SAMS(n, d), if each non-zero element of 𝒳 occurs exactly once in A, and its row-sums, column-sums and two main diagonal-sums constitute a set of 2n + 2 consecutive integers. An SAMS(n, d) is called regular if there are exactly d non-zero elements in each row, each column and each main diagonal. In this paper, we investigate the existence of regular sparse anti-magic squares of order n ≡ 1, 5 (mod 6), and prove that there exists a regular SAMS(n, d) for any n ≥ 5, n ≡ 1, 5 (mod 6) and d with 2 ≤ d ≤ n − 1.
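The SAMS(n, d) definition above translates directly into a checking predicate; a sketch (the classical magic square used below is a convenient non-example, since an anti-magic square needs consecutive rather than equal sums):

```python
import numpy as np

def is_sams(A, d):
    """True iff the n x n array A is a sparse anti-magic square of
    density d: every value in {1, ..., n*d} appears exactly once, and
    the n row sums, n column sums and 2 main-diagonal sums form a set
    of 2n + 2 consecutive integers."""
    A = np.asarray(A)
    n = A.shape[0]
    if sorted(A[A != 0].tolist()) != list(range(1, n * d + 1)):
        return False
    sums = sorted(A.sum(axis=1).tolist() + A.sum(axis=0).tolist()
                  + [np.trace(A), np.trace(np.fliplr(A))])
    return sums == list(range(sums[0], sums[0] + 2 * n + 2))

def is_regular(A):
    """Regular: the same count of non-zero entries in every row,
    every column and both main diagonals."""
    A = np.asarray(A)
    counts = ([np.count_nonzero(row) for row in A]
              + [np.count_nonzero(col) for col in A.T]
              + [np.count_nonzero(np.diag(A)),
                 np.count_nonzero(np.diag(np.fliplr(A)))])
    return len(set(counts)) == 1

# A 3x3 magic square: all eight sums equal 15, so it is regular with
# d = 3 but NOT anti-magic (equal sums are not consecutive).
M = [[2, 7, 6],
     [9, 5, 1],
     [4, 3, 8]]
```

A constructed SAMS(n, d) from the paper could be validated by asserting both predicates at once.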

L1-norm Regularization for State Vector Adaptation of Subspace Gaussian Mixture Model (L1-norm regularization을 통한 SGMM의 state vector 적응)

  • Goo, Jahyun;Kim, Younggwan;Kim, Hoirin
    • Phonetics and Speech Sciences / v.7 no.3 / pp.131-138 / 2015
  • In this paper, we propose L1-norm regularization for state vector adaptation of the subspace Gaussian mixture model (SGMM). When designing a speaker adaptation system with a GMM-HMM acoustic model, MAP adaptation is the most typical technique, but in the MAP adaptation procedure a large number of parameters must be updated simultaneously. Sparse adaptation such as L1-norm regularization or sparse MAP can cope with this, but its performance is not as good as that of MAP adaptation. SGMM, however, does not suffer from sparse adaptation as much as GMM-HMM, because each Gaussian mean vector in SGMM is defined as a weighted sum of basis vectors, which is much more robust to parameter fluctuations. Since only a few adaptation techniques are appropriate for SGMM, the proposed method can be powerful, especially when the amount of adaptation data is limited. Experimental results show that the error reduction rate of the proposed method is better than that of MAP adaptation of SGMM, even with little adaptation data.
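The core effect of L1-norm regularization, driving most adapted parameters exactly to zero, can be illustrated with a generic proximal-gradient (ISTA) sketch. This is not the paper's SGMM estimation; the design matrix, λ, and step size below are assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink toward zero, clip at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, n_iter=200):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient.
    The L1 term zeroes out small coordinates, giving a sparse update."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

# With A = I the solution is just soft-thresholding of b: the small
# entries (0.5, 0.1) become exactly zero, the large ones are shrunk.
b = np.array([3.0, 0.5, -2.0, 0.1])
x = ista(np.eye(4), b, lam=1.0, step=1.0)
```

In an adaptation setting, the surviving non-zero coordinates are the few state-vector dimensions actually updated for the new speaker.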

GEase-K: Linear and Nonlinear Autoencoder-based Recommender System with Side Information (GEase-K: 부가 정보를 활용한 선형 및 비선형 오토인코더 기반의 추천시스템)

  • Taebeom Lee;Seung-hak Lee;Min-jeong Ma;Yoonho Cho
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.167-183 / 2023
  • In the field of recommender systems, various studies have sought to model sparse data effectively. Among these, GLocal-K (Global and Local Kernels for Recommender Systems) combines global and local kernels to provide personalized recommendations that consider both global data patterns and individual user characteristics. However, because it relies on kernel tricks, GLocal-K exhibits diminished performance on highly sparse data, and it cannot offer recommendations for new users or items because it uses no side information. In this paper, to address these limitations, we propose the GEase-K (Global and EASE kernels for Recommender Systems) model, which incorporates the EASE (Embarrassingly Shallow Autoencoders for Sparse Data) model and leverages side information. First, we substitute EASE for the local kernel in GLocal-K to improve recommendation performance on highly sparse data; EASE, a simple linear autoencoder, performs well on extremely sparse data through regularization and learned item similarity. Second, we use side information to alleviate the cold-start problem: a conditional autoencoder structure incorporates the side information during training, improving the model's grasp of user-item similarities. In conclusion, GEase-K is resilient to highly sparse data and cold-start situations by combining linear and nonlinear structures and utilizing side information. Experimental results show that GEase-K outperforms GLocal-K on the RMSE and MAE metrics on the highly sparse GoodReads and ModCloth datasets, and in cold-start experiments divided into four groups on the same datasets it again shows superior performance.
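The EASE component has a well-known closed form (Steck, 2019): an item-item ridge regression with a zero-diagonal constraint. A minimal sketch on a toy interaction matrix (the matrix and λ below are assumptions; the paper's GEase-K adds a global kernel and side information on top of this building block):

```python
import numpy as np

def ease(X, lam=10.0):
    """Closed-form EASE item-item weight matrix B: minimise
    ||X - X B||^2 + lam*||B||^2 subject to diag(B) = 0, solved via
    the inverse Gram matrix (no iterative training needed)."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = P / (-np.diag(P))        # divide column j by -P[j, j]
    np.fill_diagonal(B, 0.0)
    return B

# Tiny binary interaction matrix: 4 users x 3 items (assumed data).
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)
B = ease(X, lam=1.0)
scores = X @ B                   # recommendation scores per user/item
```

Because the solution is a single linear solve, EASE stays cheap and stable even when X is extremely sparse, which is why it is attractive as a replacement for the local kernel.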

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan;Zhang, Xiongwei;Sun, Meng
    • ETRI Journal / v.39 no.1 / pp.21-29 / 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with β-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules to minimize the β-divergence-based cost function are derived. A convolutional extension of the proposed algorithm is also proposed, which considers the time dependency of the non-negative noise bases. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.
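Plain multiplicative updates for β-divergence NMF, the building block the paper extends to a robust low-rank-plus-sparse decomposition, can be sketched as follows. This is the standard algorithm, not the paper's convolutional RNMF, and the test matrix is an assumption:

```python
import numpy as np

def nmf_beta(V, r, beta=2.0, n_iter=500, seed=0):
    """Multiplicative-update NMF minimising the beta-divergence
    D_beta(V || W @ H); beta=2 is Euclidean, 1 is KL, 0 is
    Itakura-Saito. Updates keep W, H non-negative by construction."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.5, 1.5, (m, r))
    H = rng.uniform(0.5, 1.5, (r, n))
    eps = 1e-12
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V * WH ** (beta - 2)) @ H.T) / ((WH ** (beta - 1)) @ H.T + eps)
        WH = W @ H + eps
        H *= (W.T @ (V * WH ** (beta - 2))) / (W.T @ (WH ** (beta - 1)) + eps)
    return W, H

# Exactly rank-2 non-negative matrix: the factorisation should fit it well.
V = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 3.0, 3.0]])
W, H = nmf_beta(V, r=2, beta=2.0)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In audio use, V would be a magnitude spectrogram and the columns of W learned spectral bases; the robust variant adds a sparse outlier term to this decomposition.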