http://dx.doi.org/10.17661/jkiiect.2017.10.4.257

Analysis of Signal Recovery for Compressed Sensing using Deep Learning Technique  

Seong, Jin-Taek (Department of Information and Communication Engineering, Honam University)
Publication Information
The Journal of Korea Institute of Information, Electronics, and Communication Technology, vol. 10, no. 4, 2017, pp. 257-267
Abstract
Compressed Sensing (CS) deals with linear inverse problems. The theoretical results of CS have influenced inference problems and led to remarkable research achievements in related fields, including signal processing and information theory. However, for CS to be applied in practical environments, two significant challenges must be solved: recovery of CS signals must be guaranteed in real time, and the signals must be sparse. To address these challenges, recent research based on deep learning has emerged. In this paper, we consider CS problems from a deep learning perspective and review the latest research results. The deep-learning approaches to CS signal reconstruction show superior results in terms of both recovery time and reconstruction performance. These approaches are expected not only to increase the practical applicability of CS, but also to be widely exploited in signal processing and communication areas.
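For readers unfamiliar with the setting, the linear inverse problem the abstract refers to can be summarized by the standard CS formulation below (a textbook sketch in notation chosen here, not an equation reproduced from the paper): a sparse signal x in R^N is observed through M ≪ N linear measurements, and conventional recovery solves an ℓ1-regularized least-squares problem.

\[
\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{n}, \qquad \boldsymbol{\Phi} \in \mathbb{R}^{M \times N},\ M \ll N,
\qquad
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \tfrac{1}{2}\,\lVert \mathbf{y} - \boldsymbol{\Phi}\mathbf{x} \rVert_2^2 + \lambda \lVert \mathbf{x} \rVert_1 .
\]

The deep-learning approaches surveyed in the paper replace this iterative optimization with a trained network that maps y directly to an estimate of x, which is the source of the gains in recovery time discussed in the abstract.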
Keywords
Approximate Message Passing; Compressed Sensing; Convolutional Neural Network; Deep Learning; Sparse Signal;