Accelerated Split Bregman Method for Image Compressive Sensing Recovery under Sparse Representation

  • Gao, Bin (College of Communications Engineering, PLA University of Science and Technology) ;
  • Lan, Peng (College of Information Science and Engineering, Shandong Agricultural University) ;
  • Chen, Xiaoming (College of Communications Engineering, PLA University of Science and Technology) ;
  • Zhang, Li (College of Optoelectronic Engineering, Nanjing University of Posts and Telecommunications) ;
  • Sun, Fenggang (College of Information Science and Engineering, Shandong Agricultural University)
  • Received : 2015.12.14
  • Accepted : 2016.05.05
  • Published : 2016.06.30

Abstract

Compared with traditional patch-based sparse representation, recent studies have concluded that group-based sparse representation (GSR) can simultaneously enforce the intrinsic local sparsity and nonlocal self-similarity of images within a unified framework. This article investigates an accelerated split Bregman method (SBM) based on GSR for image compressive sensing (CS) recovery. When the measurement matrix is a partial Fourier matrix, the computational efficiency of the accelerated SBM can be further improved by introducing the fast Fourier transform (FFT) to derive an enhanced algorithm. In addition, we provide a convergence analysis for the proposed method. Experimental results demonstrate that the accelerated SBM is faster than some existing image CS reconstruction methods.

Keywords

1. Introduction

Image restoration is a fundamental problem in the field of image processing. The process can be formalized as an estimation of the original image x from a corrupted observation y:

y = Φx + n,    (1)

where Φ is a degradation matrix and n is the additive noise vector.

For model (1), different values of Φ represent different image restoration problems. In particular, when Φ is a random projection operator, model (1) becomes the famous CS model, which has attracted extensive academic attention since several CS-based image systems were built in recent years, such as the high speed video camera [1] and compressive sensing Magnetic Resonance Imaging (MRI) [2].
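As a small illustration of model (1) in the CS setting, the following Python sketch (all sizes and the noise level are arbitrary choices for illustration, not values from the paper) forms compressive measurements with a random projection operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): n-pixel signal, m measurements, m << n.
n, m = 256, 64
x = rng.standard_normal(n)                       # original (vectorized) image
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection operator
noise = 0.01 * rng.standard_normal(m)            # additive noise vector

y = Phi @ x + noise                              # observation model: y = Phi x + n
```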

Image CS restoration is a classical linear inverse problem: the observed degraded image cannot uniquely and stably determine the sharp image, due to the ill-conditioned nature of the degradation operator Φ. It is necessary to incorporate a prior-enforcing regularization on the solution in order to stabilize the restoration. Therefore, image restoration is often modeled as follows:

min_x ½ǁΦx - yǁ₂² + λR(x),    (2)

where ½ǁΦx - yǁ₂² measures the data fidelity between Φx and y, while R(x) represents the regularizer.

1.1 Related work

This subsection presents a brief review of existing methods, from the viewpoint of regularization and optimization algorithms.

Regularization: In the existing literature, various regularized methods have been proposed to mitigate the ill-posedness of the primal model. First-order smooth optimization methods can solve the classic quadratic Tikhonov regularization [3] relatively inexpensively, but they tend to over-smooth images and often erode strong edges and textural details. Total variation regularization [5,6] can effectively suppress noise artifacts, but it usually smears out image details and handles fine structures poorly due to its assumption of local smoothness. As an alternative, sparsity-promoting regularization [7,8][24] has been used over the past several years and has achieved strong results for various image recovery problems. Researchers have frequently investigated two classes of sparsity-promoting models: the l0 norm regularized model and the l1 norm regularized model.

Optimization Algorithms: The optimization problem based on l0 norm regularization is combinatorial and NP-hard. Some researchers have addressed this by focusing only on the l1 norm regularized model, which, under certain strict conditions, is almost equivalent to the l0 norm regularized model. The optimization problem can then be solved using well-known alternating optimization methods such as the alternating direction method of multipliers (ADMM) [4][17] or SBM [14][16]. However, although the convergence of these methods is guaranteed, the quality of the reconstructed images often suffers, since the strict conditions are hard to satisfy in most real scenarios. Meanwhile, some authors have focused on solving the l0 norm regularized model directly and sub-optimally using greedy algorithms such as orthogonal matching pursuit (OMP) [22]. However, greedy algorithms are computationally expensive when high-fidelity image restorations are involved.
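The l1-regularized model mentioned above is typically solved by splitting methods built around the soft-thresholding proximal mapping; the following is a minimal Python sketch of that operator, added here purely for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t.

    This is the closed-form sub-problem solution that makes l1-regularized
    models attractive for ADMM/SBM-type alternating schemes.
    """
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

For example, entries smaller in magnitude than the threshold are set exactly to zero, which is what promotes sparsity in the iterates.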

Specifically, the authors in [8] have produced encouraging results for image restoration using a combination of the alternating optimization method (i.e., SBM) and group sparse representation (i.e., GSR), which seems promising and is named GSR-SBM.

1.2 Motivation

From an algorithm performance perspective, the authors in [8] ignored the following feature of GSR-SBM: while it can achieve high-quality image restoration, it tends to require a long computation time. In order to apply it to more practical application scenarios that demand much shorter computation times while maintaining image quality, such as X-ray computed tomography (CT) image reconstruction [10], some accelerating strategies for GSR-SBM must be constructed.

From a convergence analysis perspective, since the l0 regularized model in [8] is a non-convex non-smooth optimization problem, it is difficult to provide a theoretical proof of global convergence. Hence, the authors in [8] only illustrated the convergence behavior of the algorithm with experimental evidence. Although global convergence holds for the relaxed convex l1 norm optimization, it is neither simple nor trivial to prove for the l0 norm optimization. Nevertheless, it is extremely important to obtain a detailed convergence proof for GSR-SBM.

1.3 Contributions and Organization

In this article, we propose an accelerated split Bregman method based on GSR, which is called GSR-ASBM. Compared with GSR-SBM presented in [8], the contributions of this article can be summarized as follows:

The rest of this paper is organized as follows. In Section 2, we present the l0 norm based CS model. In Section 3, we propose GSR-ASBM to reconstruct the sparse coefficients from the measurements. Section 4 provides a convergence analysis of the proposed method. Section 5 presents extensive numerical results that evaluate the performance of the proposed reconstruction algorithm in comparison with some state-of-the-art algorithms. Finally, concluding remarks are provided in Section 6.

 

2. l0 norm based CS model

Based on a sparse representation, a CS image system is usually formulated as a constrained optimization:

By careful selection of λ, (3a) can alternatively be expressed as an equivalent unconstrained optimization:

where λ is the regularization parameter, D is an over-complete dictionary, α is the sparse coding matrix (i.e., most elements in each column of α are zero or close to zero), and Dα approximates x, as depicted in Fig. 1.
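The sparse-representation idea behind (3a)/(3b) can be sketched as follows; the dictionary here is random and the sizes are hypothetical, purely to show the shape of the objects involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical over-complete dictionary: n = 64 dimensions, K = 128 atoms (K > n).
n, K = 64, 128
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)      # normalize each atom to unit norm

# A sparse code: only 5 of the 128 entries are nonzero.
alpha = np.zeros(K)
alpha[rng.choice(K, size=5, replace=False)] = rng.standard_normal(5)

x = D @ alpha                       # the image patch is represented by D alpha
```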

Fig. 1. Sparse representation

 

3. Compressive Image Reconstruction under Accelerated SBM

3.1 Optimization for proposed l0-based minimization

In this section, we first introduce GSR-SBM, which is verified to be more effective than iterative shrinkage/thresholding (IST) [21] and is given below as Algorithm 1.

GSR-SBM solves problem (3b) in a much easier form, i.e., by iteratively solving two easier sub-problems for xk+1 and αk+1. The first sub-problem (step 3), solving xk+1, is a classical least squares problem, while the second sub-problem (step 4), solving the sparse coding αk+1, is a difficult combinatorial problem, which can be approximately solved using a heuristic algorithm such as OMP or by a method with a closed-form solution, such as that illustrated in the next subsection. After this manipulation, GSR-SBM produces a dimensionality reduction of problem (3b), i.e., from 2n to n.
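The alternating structure of the split Bregman iteration can be sketched as a generic loop; `solve_x` and `solve_alpha` below are placeholders for the two sub-problem solvers (the paper uses a least-squares solver and group-wise sparse coding, respectively), so this is a structural sketch rather than the authors' implementation:

```python
import numpy as np

def split_bregman(y, Phi, D, mu, n_iter, solve_x, solve_alpha):
    """Skeleton of the SBM loop: alternate the x- and alpha-updates,
    then perform the cheap Bregman (multiplier) update on b."""
    n = Phi.shape[1]
    x = Phi.T @ y                                  # simple initialization
    alpha = np.zeros(D.shape[1])
    b = np.zeros(n)
    for _ in range(n_iter):
        x = solve_x(y, Phi, D @ alpha + b, mu)     # step 3: least-squares sub-problem
        alpha = solve_alpha(D, x - b, mu)          # step 4: sparse-coding sub-problem
        b = b + (D @ alpha - x)                    # step 5: multiplier update (cheap)
    return x, alpha
```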

Although GSR-SBM uses variable splitting (i.e., divide and conquer) to simplify the original model, the following distinct points are rarely considered: 1) the least squares problem in step 3 and the group-based sparse coding in step 4 are particularly expensive computationally, due to both the iterative nature of most least squares gradient algorithms and the large memory cost required by sparse coding; 2) the multiplier update in step 5 is simple and has a negligible computational cost. Therefore, we should reinvestigate GSR-SBM with the above considerations in mind.

It should be noted that a well-suited preconditioning matrix can be constructed to improve the convergence of the least squares gradient algorithms. However, this option introduces a new computational burden from the matrix products. An alternative option is to relax step 4 to its l1 counterpart, which can be solved efficiently by some well-known l1 algorithms. Unfortunately, this relaxation reduces the PSNR of the resulting CS restoration image.

Since step 5 of Algorithm 1 is simple and requires negligible computational cost, it can be used as a breakthrough point for acceleration techniques. There are two reasons for this choice: 1) the accelerated strategies do not introduce an additional computational burden because they only involve simple operations, such as vector addition/subtraction and the matrix-vector product, and 2) a faster convergence rate results in fewer iterations for a given stopping criterion. Therefore, the number of function calls for the computationally expensive step 3 and step 4 will be decreased, which greatly reduces the overall computational cost of Algorithm 1. In this work, the proposed GSR-ASBM can be implemented as follows:

Remark 1: Compared with Algorithm 1, there are only two additional computations in Algorithm 2: step 7 and step 8, both of which share a common feature, i.e., a negligible computational cost. In Algorithm 2, once the accelerated strategy reduces the number of iterations, the number of function calls for step 4 and step 5 decreases. Since step 4 and step 5 contribute the main computational cost of the whole algorithm, this improves the algorithm significantly. The experimental results in Section 5 support these conclusions.
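Steps 7 and 8 of Algorithm 2 are not reprinted here, but an extrapolation step of the following flavor (a Nesterov-style momentum update, shown purely as an illustrative assumption rather than the paper's exact formulas) has the property claimed above of costing only vector additions and subtractions:

```python
import numpy as np

def extrapolate(b_new, b_old, t_old):
    """One momentum-style extrapolation step: combine the two most recent
    iterates using a scalar weight; only vector add/subtract is needed."""
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t_old**2)) / 2.0   # step-size sequence
    b_acc = b_new + ((t_old - 1.0) / t_new) * (b_new - b_old)
    return b_acc, t_new
```

With `t_old = 1` the weight is zero, so the first accelerated iterate coincides with the plain one; the momentum grows in later iterations.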

3.2 Sub-problem for x

In Step 4 of Algorithm 2, the key components for solving (3b) have the form

where I is the identity matrix. For CS recovery, Φ is a random projection matrix without special structure. Thus, it is too expensive to compute the inverse of the square matrix ΦTΦ + μI.

For simplicity, gradient-type methods (such as the gradient descent method and the preconditioned conjugate gradient method) are the most popular tools [8] for approximating a solution of (4). However, gradient-type methods, which belong to the class of approximation algorithms, need significantly more iterations to satisfy the so-called accuracy conditions. In this work, considering that the measurement matrix Φ is a partial Fourier transform matrix, which has important applications in high-speed MRI, we adopt the fast Fourier transform (FFT) to solve (4), with a reasonable computational complexity of O(n log n).

The partial Fourier transform matrix can be represented as Φ = DF , where D and F represent the down-sampling matrix and the Fourier transform matrix, respectively. By applying Φ = DF and the FFT to each side of (4), it can be seen to be mathematically equivalent to

where FH and DT represent the inverse FFT and the transpose of the down-sampled matrix, respectively. After a simple manipulation, (5) becomes

By invoking the inverse FFT, (6) can be formulated as

where (DTD + μI)-1 has a cost of only O(n) since DTD is a diagonal matrix, while products with F or FH have a cost of O(n log n) (one FFT and one inverse FFT). Thus, when computing (7), the principal cost is O(n log n), which significantly improves the efficiency of solving the sub-problem for x.
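A one-dimensional numerical check of this derivation, assuming a unitary DFT for F and a 0/1 row-selection mask for D (the sizes below are arbitrary), confirms that the FFT route reproduces the direct solve of (4):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 12
mu = 0.5

# Partial Fourier operator Phi = D F: unitary DFT followed by row selection.
rows = np.sort(rng.choice(n, size=m, replace=False))
F = np.fft.fft(np.eye(n), norm="ortho")      # unitary DFT matrix
Phi = F[rows, :]                             # down-sampling D picks these rows
r = rng.standard_normal(n)                   # right-hand side of (4)

# Reference: direct O(n^3) solve of (Phi^H Phi + mu I) x = r.
x_direct = np.linalg.solve(Phi.conj().T @ Phi + mu * np.eye(n), r)

# FFT route, O(n log n): F diagonalizes the system, so divide by (diag(D^T D) + mu).
d = np.zeros(n)
d[rows] = 1.0                                # diag(D^T D): 1 on sampled frequencies
x_fft = np.fft.ifft(np.fft.fft(r, norm="ortho") / (d + mu), norm="ortho")

assert np.allclose(x_fft, x_direct)
```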

3.3 Sub-problem for α

Based on Step 5 of Algorithm 2, the α sub-problem can be mathematically transformed as

where e = x - b. By invoking Theorem 1 in [8], (8) can be reduced to N sub-problems:

which can be independently solved by the orthogonal matching pursuit (OMP) algorithm [22]. Specifically, batch OMP [23] is a better choice when large numbers of signals are involved. As shown in [8], another alternative is hard thresholding pursuit. The closed-form solution of (9) is expressed as:

where the element-wise hard thresholding operator is given by:
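A minimal sketch of the element-wise hard-thresholding operator (the threshold value `t` is left generic here, rather than tied to the paper's specific λ/μ expression):

```python
import numpy as np

def hard_threshold(v, t):
    """Element-wise hard thresholding: keep entries with magnitude above t,
    zero out the rest. Unlike soft thresholding, surviving entries are
    not shrunk, which is why it pairs with the l0 penalty."""
    return np.where(np.abs(v) > t, v, 0.0)
```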

3.4 Summary of proposed algorithm

At this point, all the sub-problems have been solved. The details of the proposed method can be summarized as follows:

Table 1. Pseudo-algorithm of the proposed method

 

4. Convergence analysis

Unlike in convex optimization, the convergence of SBM for non-convex and non-smooth problems is still generally an open issue. From an experimental perspective, empirical evidence shows that GSR-SBM seems to converge from any initial point. Although the following theorem does not provide a proof of global convergence, it shows that GSR-SBM finds a stationary point of the primal problem under mild conditions. Although far from fully satisfactory, this result provides some assurance of the reliability of the proposed algorithm.

Before proceeding to the proof of the convergence theorem, we provide a lemma.

Lemma 1. Let J be an index set such that the jth element of α is 0 for all j ∈ J, and let the corresponding indicator vector have entries equal to 1 for indices in J and 0 otherwise. Assume that ᾱ is a local minimizer of the problem

and define the following problem:

then ᾱ is a local minimizer of problem (12a) if and only if it is a local minimizer of problem (12b).

Proof: See the proof of Theorem 2.2 in [13]. □

The following theorem is based on the standard conditions that are often assumed in convergence analysis of augmented Lagrangian methods for non-convex optimization problems (see [12]).

Theorem 1. Let (x*, α*, b*) be any accumulation point of the sequence (xk, αk, bk) generated by GSR-SBM, and assume

Then, (x*, α*, b*) is a KKT point of the problem.

Proof: First, the multiplier update (step 5 in Algorithm 1) and (13) give

The solution of the x sub-problem (step 3 of Algorithm 1) satisfies the first-order stationarity condition

Using (14), (15) becomes

We then introduce the Lagrangian function of problem (12b):

Thus, the KKT condition of (12b) is

For the α sub-problem (step 4 of Algorithm 1), from Lemma 1, αk+1 is at least a local minimizer of (12b), so it is a KKT point of (12b) and satisfies

According to (14), (19) becomes

Combining (14), (16), and (20), we obtain the following system of equations:

Recalling the original problem (3a):

The constraint ǁαǁ0 ≤ ϵ can be rewritten via an index set J such that the jth element of α is 0 for all j ∈ J, with the corresponding indicator vector having entries equal to 1 for indices in J and 0 otherwise. The original problem can then be reformulated as

It can easily be seen that the Lagrangian function of the constrained problem (22) is

The above equations (21a), (21b), and (21c) are exactly the KKT conditions of (22). This completes the proof. □

The only difference in GSR-ASBM is that it adopts accelerated strategies, which do not affect the convergence mechanism of GSR-SBM. Therefore, GSR-ASBM converges whenever GSR-SBM converges.

 

5. Experimental Results

In this section, the proposed GSR-ASBM based CS recovery method was implemented in MATLAB. The OMP algorithm [22] was also included in our experimental setup. All evaluations were performed on a PC with an Intel(R) Pentium(R) G3250 3.20 GHz CPU and 4 GB RAM, running MATLAB R2009b. For the experiments, eleven well-known images were used: Barbara, Boat, Cameraman, Head, House, Leaves, Lena, Monarch, Parrot and Peppers, each of size 256 × 256 pixels, as well as the image Vessels, of size 96 × 96 pixels, so that a fair comparison between the competing methods could be performed.

Our proposed method was compared with several competitive CS recovery methods, including the total variation (TV) method, BCS-SPL, the multi-hypothesis (MH) method, the split Bregman method (SBM) and the group-based split Bregman method (GSR-SBM). It is worth highlighting that GSR-SBM is a well-known image CS method that gives state-of-the-art results. We carefully tuned the parameters of each algorithm for optimal performance in order to provide a fair and unified framework for comparison. All methods use the same convergence criterion, i.e., ǁuk - uk-1ǁ/ǁuk-1ǁ < 10-3, where k is the iteration number and uk is the PSNR value at iteration k.
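The PSNR metric and the relative-change stopping rule just described can be sketched in Python (the paper's experiments are in MATLAB; this is an equivalent restatement, not the authors' code):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def converged(u_k, u_prev, tol=1e-3):
    """Relative-change stopping rule ||u_k - u_{k-1}|| / ||u_{k-1}|| < tol."""
    return np.linalg.norm(u_k - u_prev) / np.linalg.norm(u_prev) < tol
```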

The PSNR and FSIM results of the recovered images using the competing CS recovery methods are shown in Table 2 and Table 3, respectively. It can easily be seen that the proposed GSR-ASBM method performs much better than TV, BCS-SPL, MH, and SBM for all test images and sensing rates; the average PSNR gains of the proposed method relative to TV, BCS-SPL, MH, and SBM are as high as 11 dB, 5.2 dB, 3.7 dB and 1.6 dB, respectively. Moreover, compared with the state-of-the-art GSR-SBM, the proposed method achieves an average PSNR gain of about 0.07 dB while saving about 79.16 seconds of CPU time (about 21% of the CPU cost). It is apparent that the proposed method achieves the best visual quality in most cases.

Table 2. PSNR comparison with various CS recovery methods (units: dB)

Table 3. FSIM comparison with various CS recovery methods

Fig. 2 and Fig. 3 are plotted for the Leaves and Vessels images with a subrate of 30%, for a visual comparison between SBM, GSR-SBM and GSR-ASBM. These graphs show that GSR-ASBM is more efficient and effective than SBM and GSR-SBM.

Fig. 2. Comparison of key competing methods for the gray image Leaves with subrate 30%.

Fig. 3. Comparison of key competing methods for the gray image Vessels with subrate 30%.

The visual quality of the competing algorithms and some visual results for the recovered images using different methods are illustrated in Fig. 4-Fig. 10. From these figures, it can be observed that GSR-ASBM shows better visual results than the other competing methods.

Fig. 4. Reconstruction of the Vessels image with subrate 20%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

Fig. 5. Reconstruction of the Parrots image with subrate 30%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

Fig. 6. Reconstruction of the Leaves image with subrate 40%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

Fig. 7. Reconstruction of the Head image with subrate 20%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

Fig. 8. Reconstruction of the Cameraman image with subrate 20%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

Fig. 9. Reconstruction of the Lena image with subrate 40%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

Fig. 10. Reconstruction of the Monarch image with subrate 40%; (a)-(f) are TV, BCS-SPL, MH, SBM, GSR-SBM and the proposed GSR-ASBM, respectively.

In all the above figures, GSR-SBM uses the preconditioned conjugate gradient (PCG) method to compute equation (4). In the following simulations, we consider a more realistic image restoration environment where Φ is a partial Fourier transform matrix. The difference between GSR-SBM using the FFT and GSR-SBM using PCG is shown in Fig. 11 and Fig. 12.

Fig. 11. Comparison of the FFT-based and PCG-based algorithms used in GSR-SBM for the gray image Head with subrate 20%: PSNR vs. iteration number.

Fig. 12. Comparison of the FFT-based and PCG-based algorithms used in GSR-SBM for the gray image Head with subrate 20%: CPU time vs. iteration number.

As indicated in Fig. 11, the FFT-based and PCG-based algorithms achieve the same performance in terms of PSNR, while Fig. 12 shows that PCG requires a much higher computational time than the FFT for each iteration. This is the main reason why the FFT has been adopted in our proposed GSR-ASBM method.

The final test concerns the sensitivity of the parameters μ and λ. Different values of μ were chosen in the interval [0.0005, 0.02] and different values of λ in the interval [0.002, 2]. More specifically, λ was first fixed and the sensitivity results for μ were obtained; then μ was fixed and the sensitivity results for λ were obtained, as depicted in Fig. 13 and Fig. 14, respectively. Fig. 15 depicts the sensitivity test on both parameters μ and λ.
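A sensitivity sweep of this kind can be organized as a simple grid search over the two intervals; `run_recovery` below is a hypothetical placeholder standing in for one full GSR-ASBM reconstruction and its PSNR score, not the actual solver:

```python
import numpy as np
from itertools import product

# Grids over the ranges used in the sensitivity test (grid sizes are arbitrary).
mus = np.linspace(0.0005, 0.02, 5)
lams = np.geomspace(0.002, 2.0, 5)

def run_recovery(mu, lam):
    """Hypothetical placeholder for one reconstruction run; returns a mock
    quality score peaking near the values reported in the sensitivity test."""
    return -((mu - 0.002) ** 2 + (np.log(lam) - np.log(0.08)) ** 2)

# Pick the (mu, lambda) pair with the best score over the grid.
best = max(product(mus, lams), key=lambda p: run_recovery(*p))
```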

Fig. 13. Sensitivity test on the parameter μ.

Fig. 14. Sensitivity test on the parameter λ.

Fig. 15. Sensitivity test on both parameters λ and μ.

Combining the results shown in Fig. 13-15, it is apparent that the proposed algorithm performs best when μ is around 0.002 and λ is around 0.08.

 

6. Conclusion

In this paper, we have proposed a new approach for compressive sensing based on an accelerated split Bregman method and an l0 model. Compared with the classical split Bregman method, the proposed accelerated split Bregman method significantly improves the convergence rate, and we prove the convergence of the proposed method. Moreover, when the measurement matrix is a partial Fourier transform, as applied in high-speed MRI, the FFT and inverse FFT can be used to derive a faster algorithm. Simulation results validate that the proposed approach is favorable in terms of both subjective and objective quality.

References

  1. Y. Hitomi, G. Jinwei, M. Gupta, T. Mitsunaga, and S.K. Nayar, ''Video from a single coded exposure photograph using a learned over-complete dictionary,'' in Proc. of IEEE Conf. on Computer Vision (ICCV), pp. 287-294, Nov. 2011.
  2. M. Lustig, D.L. Donoho, J.M. Santos, and J.M. Pauly, ''Compressed sensing MRI,'' IEEE Signal Processing Magazine, vol.25, no.2, pp. 72-82, Mar. 2008. https://doi.org/10.1109/MSP.2007.914728
  3. G. H. Golub, P. C. Hansen, and D. P. O'Leary, ''Tikhonov regularization and total least squares,'' SIAM Journal on Matrix Analysis and Applications, vol.21, no.1, pp. 185-194, Jul. 2006. https://doi.org/10.1137/S0895479897326432
  4. S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, ''Distributed optimization and statistical learning via the alternating direction method of multipliers,'' Foundations and Trends in Machine Learning, vol.3, pp. 1-122, 2010. https://doi.org/10.1561/2200000016
  5. L.I. Rudin, S. Osher, and E. Fatemi, ''Nonlinear total variation based noise removal algorithms,'' Physica D: Nonlinear Phenomena, vol.60, no.1-4, pp. 259-268, 1992. https://doi.org/10.1016/0167-2789(92)90242-F
  6. J.P. Oliveira, J.M. Bioucas-Dias, and M.A.T. Figueiredo, ''Adaptive total variation image deblurring: A majorization-minimization approach,'' Signal Processing, vol.89, no.9, pp. 1683-1693, Sep. 2009. https://doi.org/10.1016/j.sigpro.2009.03.018
  7. D. Weisheng, S. Guangming, and L. Xin, ''Nonlocal image restoration with bilateral variance estimation: a low-rank approach,'' IEEE Transactions on Image Processing, vol.22, no.2, pp. 700-711, Feb. 2013. https://doi.org/10.1109/TIP.2012.2221729
  8. J. Zhang, D.B. Zhao, and W. Gao, ''Group-based sparse representation for image restoration,'' IEEE Transactions on Image Processing, vol.23, no.8, pp. 3336-3351, Aug. 2014. https://doi.org/10.1109/TIP.2014.2323127
  9. J. Yang, Y. Zhang, and W. Yin, ''A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,'' IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 288-297, Apr. 2010. https://doi.org/10.1109/JSTSP.2010.2042333
  10. H. Nie and J. A. Fessler, ''Fast X-Ray CT image reconstruction using a linearized augmented Lagrangian method with ordered subsets,'' IEEE Transactions on Medical Imaging, vol. 34, no. 2, pp. 388-399, Feb. 2015. https://doi.org/10.1109/TMI.2014.2358499
  11. M.S.C. Almeida and M.A.T. Figueiredo, ''Deconvolving images with unknown boundaries using the alternating direction method of multipliers,'' IEEE Transactions on Image Processing, vol.22, no.8, pp. 3074-3086, Aug. 2013. https://doi.org/10.1109/TIP.2013.2258354
  12. H. Z. Luo, X. L. Sun, and D. Li, ''On the convergence of augmented Lagrangian methods for constrained global optimization,'' SIAM Journal on Optimization, vol. 18, no. 4, pp. 1209-1230, Oct. 2007. https://doi.org/10.1137/060667086
  13. Z. S. Lu and Y. Zhang, ''Sparse approximation via penalty decomposition methods,'' SIAM Journal on Optimization, vol. 23, no. 4, pp. 2448-2478, Dec. 2013. https://doi.org/10.1137/100808071
  14. T. Goldstein and S. Osher, ''The split Bregman method for L1-regularized problems,'' SIAM Journal on Imaging Sciences, vol.2, no.2, pp. 323-343, Apr. 2009. https://doi.org/10.1137/080725891
  15. J. Zhang, C. Zhao, D. Zhao, and W. Gao, ''Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization,'' Signal Processing, vol.103, pp. 114-126, Oct. 2014. https://doi.org/10.1016/j.sigpro.2013.09.025
  16. J.F. Cai, S. Osher, and Z. Shen, ''Split Bregman methods and frame based image restoration,'' Multiscale Modeling & Simulation, vol.8, no.2, pp. 337-369, Dec. 2009. https://doi.org/10.1137/090753504
  17. D. Gabay and B. Mercier, ''A dual algorithm for the solution of nonlinear variational problems via finite element approximations,'' Computers and Mathematics with Applications, vol.2, pp. 17-40, 1976. https://doi.org/10.1016/0898-1221(76)90003-1
  18. W.U. Bajwa, J.D. Haupt, G.M. Raz, S.J. Wright, and R.D. Nowak, ''Toeplitz-structured compressed sensing matrices,'' in Proc. of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP'07), pp. 294-298, Aug. 2007.
  19. S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, ''An efficient algorithm for compressed MR imaging using total variation and wavelets,'' in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08), pp. 1-8, Jun. 2008.
  20. J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, 1999.
  21. A. Beck and M. Teboulle, ''Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,'' IEEE Transactions on Image Processing, vol.18, no.11, pp. 2419-2434, Nov. 2009. https://doi.org/10.1109/TIP.2009.2028250
  22. J.A. Tropp and A.C. Gilbert, ''Signal recovery from random measurements via orthogonal matching pursuit,'' IEEE Transactions on Information Theory, vol.53, no.12, pp. 4655-4666, Dec. 2007. https://doi.org/10.1109/TIT.2007.909108
  23. R. Rubinstein, M. Zibulevsky, and M. Elad, ''Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit,'' CS Technion, vol.40, no.8, pp. 1-15, 2008.
  24. W. Dong, L. Zhang, G. Shi, and X. Li, ''Nonlocally centralized sparse representation for image restoration,'' IEEE Transactions on Image Processing, vol.22, no.4, pp. 1620-1630, Apr. 2013. https://doi.org/10.1109/TIP.2012.2235847
  25. B. Huang, S. Ma, and D. Goldfarb, ''Accelerated linearized Bregman method,'' Journal of Scientific Computing, vol.54, no.2-3, pp. 428-453, Feb. 2013. https://doi.org/10.1007/s10915-012-9592-9
  26. C. Li, W. Yin, H. Jiang, and Y. Zhang, ''An efficient augmented Lagrangian method with applications to total variation minimization,'' Computational Optimization and Applications, vol.56, no.3, pp. 507-530, Dec. 2013. https://doi.org/10.1007/s10589-013-9576-1
  27. M. Sungkwang and J.E. Fowler, ''Block compressed sensing of images using directional transforms,'' in Proc. of IEEE International Conference on Image Processing (ICIP), pp. 3021-3024, Nov. 2009.
  28. C. Chen, E.W. Tramel, and J.E. Fowler, ''Compressed-sensing recovery of images and video using multihypothesis predictions,'' in Proc. of Signals, Systems and Computers (ASILOMAR), pp. 1193-1198, Nov. 2011.
  29. Z. Chen, A. Basarab and D. Kouamé, ''Reconstruction of enhanced ultrasound images from compressed measurements using simultaneous direction method of multipliers,'' preprint, arXiv:1512.05586v1 (cs.CV), 2015.
  30. M. Sun and J. Liu, ''A proximal Peaceman-Rachford splitting method for compressive sensing,'' Journal of Applied Mathematics and Computing, vol. 50, no.1, pp. 349-363, Feb. 2016. https://doi.org/10.1007/s12190-015-0874-x