• Title/Summary/Keyword: Compressed sensing


Comparison of Sub-sampling Algorithm for LRIT Image Generation

  • Bae, Hee-Jin; Ahn, Sang-Il
    • Proceedings of the KSRS Conference / 2007.10a / pp.109-113 / 2007
  • The COMS provides LRIT/HRIT services to users. The COMS LRIT/HRIT broadcast service must satisfy a 15-minute timeliness requirement, which is critical to the overall performance of the LHGS. HRIT image data is acquired directly from the INRSM output, whereas LRIT image data is generated in the LHGS by sub-sampling the HRIT image data. Because LRIT is produced by sub-sampling HRIT image data, LRIT processing takes more time; in addition, some data loss occurs for LRIT because it is compressed with lossy JPEG. Therefore, the sub-sampling algorithm with the fastest processing speed and the simplest implementation should be selected to satisfy the requirement. The sub-sampling algorithms investigated for the LHGS were the nearest-neighbour, bilinear, and bicubic algorithms. The nearest-neighbour algorithm was selected for the COMS LHGS in view of its speed, simplicity, and anti-aliasing behaviour, following the guideline of the user (KMA: Korea Meteorological Administration) to preserve as much of the cloud information as possible for meteorological purposes. However, the nearest-neighbour algorithm is commonly regarded as having the worst quality, so this paper examines whether selecting it for the LHGS is reasonable. First, the characteristics of the three sub-sampling algorithms are studied and compared. The algorithms are then applied to MTSAT-1R image data corresponding to COMS HRIT. A resized image is also produced from each sub-sampled image using the same algorithm that was applied for the HRIT-to-LRIT sub-sampling, and the difference between the original and resized images is compared; PSNR and MSE are calculated for each algorithm. The results show that it is appropriate to select the nearest-neighbour algorithm for the COMS LHGS, since the image sub-sampled by the nearest-neighbour algorithm differs little from those of the other algorithms in PSNR-based quality.
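
    The paper's sub-sample/resize/compare procedure can be reproduced in outline. Below is a minimal Python sketch, assuming Pillow and NumPy and a stand-in grayscale file `image.png` (the MTSAT-1R data used in the paper is not assumed to be available); it sub-samples the image with each of the three filters, resizes it back, and reports MSE and PSNR.

```python
# Minimal sketch of the paper's comparison pipeline; `image.png` is a
# hypothetical stand-in for the satellite imagery used in the study.
import numpy as np
from PIL import Image

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

original = Image.open("image.png").convert("L")
w, h = original.size

# Sub-sample to 1/4 size and resize back with the same filter,
# mirroring the HRIT -> LRIT -> resized-image comparison in the paper.
for name, resample in [("nearest", Image.NEAREST),
                       ("bilinear", Image.BILINEAR),
                       ("bicubic", Image.BICUBIC)]:
    sub = original.resize((w // 4, h // 4), resample=resample)
    resized = sub.resize((w, h), resample=resample)
    a, b = np.asarray(original), np.asarray(resized)
    print(f"{name:8s}  MSE={mse(a, b):10.2f}  PSNR={psnr(a, b):6.2f} dB")
```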


Sparse Signal Recovery with Parallel Orthogonal Matching Pursuit and Its Performances

  • Park, Jeonghong; Jung, Bang Chul; Kim, Jong Min; Ban, Tae Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.8 / pp.1784-1789 / 2013
  • In this paper, parallel orthogonal matching pursuit (POMP) is proposed to supplement orthogonal matching pursuit (OMP), which has been widely used as a greedy algorithm for sparse signal recovery. The process of POMP is simple but effective: (1) multiple indices maximally correlated with the observation vector are chosen at the first iteration, (2) the conventional OMP process is carried out in parallel for each selected index, and (3) the index set that yields the minimum residual is selected for reconstructing the original sparse signal. Empirical simulations show that POMP outperforms existing sparse signal recovery algorithms in terms of the exact recovery ratio (ERR) of the sparse pattern and the mean squared error (MSE) between the estimated and original signals.
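
    The three-step procedure above maps directly to code. The following is a minimal NumPy sketch of the POMP idea as described, assuming real-valued signals; the "parallel" branches are run sequentially here for clarity, and all names are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def omp(A, y, k, forced_first=None):
    """Standard OMP; optionally forces the index chosen at iteration 1."""
    support, residual = [], y.copy()
    for _ in range(k):
        if forced_first is not None and not support:
            idx = forced_first                      # branch-specific start
        else:
            idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Least-squares fit on the current support, then update residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, residual

def pomp(A, y, k, n_branches=4):
    """POMP: branch on the n_branches most correlated indices at the
    first iteration, run OMP per branch, keep the smallest residual."""
    first = np.argsort(-np.abs(A.T @ y))[:n_branches]
    candidates = [omp(A, y, k, forced_first=int(i)) for i in first]
    return min(candidates, key=lambda c: np.linalg.norm(c[1]))[0]

# Usage: recover a k-sparse signal from m < n noiseless measurements.
rng = np.random.default_rng(0)
n, m, k = 128, 48, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = pomp(A, A @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```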

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections

  • Kim, Museong; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted and has shown remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because each instance carries multiple labels. Moreover, since the number of labels to be predicted grows with the number of labels and classes, performance improvement becomes difficult as prediction difficulty increases. To overcome these limitations, research on label embedding has been actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels, and thus cannot produce a latent label space that sufficiently retains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding the input of a layer to its output, they prevent gradients from vanishing during backpropagation and enable efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them to autoencoders or to the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, so as to form a low-dimensional latent label space that faithfully reflects the information of the high-dimensional label space. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment in which the compressed keyword vector in the latent label space is predicted from the paper abstract, and the multi-label classification is evaluated by restoring the predicted keyword vector to the original label space. The accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across different numbers of dimensions of the latent label space.
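
    As a concrete illustration of the kind of architecture described, below is a minimal PyTorch sketch of a label autoencoder with skip connections on both the encoder and the decoder; the layer widths, the projection layers that bridge mismatched dimensions, and the loss choice are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    def __init__(self, n_labels: int, latent_dim: int, hidden: int = 512):
        super().__init__()
        self.enc1 = nn.Linear(n_labels, hidden)
        self.enc2 = nn.Linear(hidden, latent_dim)
        self.dec1 = nn.Linear(latent_dim, hidden)
        self.dec2 = nn.Linear(hidden, n_labels)
        # Linear projections so the skip connections can bridge the
        # mismatched input/output widths (an illustrative choice).
        self.skip_enc = nn.Linear(n_labels, latent_dim)
        self.skip_dec = nn.Linear(latent_dim, n_labels)
        self.act = nn.ReLU()

    def encode(self, y):
        # Skip connection: add a projected copy of the input to the
        # encoder output to ease gradient flow during backpropagation.
        return self.enc2(self.act(self.enc1(y))) + self.skip_enc(y)

    def decode(self, z):
        return self.dec2(self.act(self.dec1(z))) + self.skip_dec(z)

    def forward(self, y):
        return self.decode(self.encode(y))

# Usage: compress 1000-dimensional multi-hot label vectors to 64 dims
# and train with a reconstruction loss suited to binary labels.
model = SkipLabelAutoencoder(n_labels=1000, latent_dim=64)
y = torch.randint(0, 2, (32, 1000)).float()   # a batch of label vectors
loss = nn.BCEWithLogitsLoss()(model(y), y)
loss.backward()
```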

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.; Calderbank, Robert; Jafarpour, Sina
    • Journal of Communications and Networks / v.12 no.4 / pp.289-307 / 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix, termed the worst-case coherence and the average coherence. It uses these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection, and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals, irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
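
    The two coherence measures are straightforward to compute from the Gram matrix of a column-normalized design. Below is a minimal NumPy sketch: worst-case coherence as the largest absolute off-diagonal Gram entry, and average coherence as the largest (1/(n-1))-normalized row sum of off-diagonal entries, following the paper's definitions as summarized above; the OST threshold is left to the caller, since the paper's exact constants are not reproduced here.

```python
import numpy as np

def coherence_measures(A: np.ndarray):
    """Worst-case and average coherence of a design matrix."""
    A = A / np.linalg.norm(A, axis=0)          # unit-norm columns
    G = A.T @ A                                # Gram matrix
    off = G - np.eye(A.shape[1])               # zero out the diagonal
    worst_case = float(np.max(np.abs(off)))
    n = A.shape[1]
    average = float(np.max(np.abs(off.sum(axis=1))) / (n - 1))
    return worst_case, average

def ost(A: np.ndarray, y: np.ndarray, threshold: float) -> np.ndarray:
    """One-step thresholding: keep the indices whose correlation with
    the observation vector exceeds the given threshold."""
    return np.flatnonzero(np.abs(A.T @ y) > threshold)

# Usage: coherence of a random Gaussian design.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
mu, nu = coherence_measures(A)
print(f"worst-case coherence={mu:.3f}, average coherence={nu:.4f}")
```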