SMOOTH SINGULAR VALUE THRESHOLDING ALGORITHM FOR LOW-RANK MATRIX COMPLETION PROBLEM

  • Geunseop Lee (Division of Global Business and Technology, Hankuk University of Foreign Studies)
  • Received : 2023.04.10
  • Accepted : 2024.01.17
  • Published : 2024.05.01

Abstract

The matrix completion problem is to predict the missing entries of a data matrix from a low-rank approximation built on the observed entries. Typical approaches to the matrix completion problem rely on thresholding the singular values of the data matrix. However, these approaches have some limitations: the thresholding function is discontinuous near the threshold, and the threshold must be selected manually. To overcome these difficulties, we propose a shrinkage and thresholding function that thresholds the singular values smoothly, yielding a more accurate and robust estimate of the data matrix. Furthermore, the proposed function is differentiable, so the thresholds can be computed adaptively during the iterations using Stein's unbiased risk estimate. Experimental results on image inpainting problems demonstrate that the proposed algorithm produces more accurate estimates and runs faster than other matrix completion algorithms.
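
For context, the fixed-threshold baseline that the abstract contrasts with shrinks every singular value by a hand-picked amount τ (classical soft thresholding) and repeatedly re-imputes the missing entries from the current low-rank estimate. The NumPy sketch below illustrates that baseline only; it is not the proposed smooth thresholding algorithm, and the function names, the value of τ, and the iteration count are illustrative assumptions.

    import numpy as np

    def soft_threshold_svd(Y, tau):
        # Classical soft thresholding: shrink each singular value of Y by tau.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def complete_matrix(M_obs, mask, tau=5.0, n_iter=200):
        # Imputation-style completion with a fixed, manually chosen threshold:
        # fill the missing entries with the current low-rank estimate,
        # re-shrink the singular values, and repeat.
        X = np.where(mask, M_obs, 0.0)          # missing entries start at zero
        for _ in range(n_iter):
            X_low = soft_threshold_svd(X, tau)  # low-rank shrinkage step
            X = np.where(mask, M_obs, X_low)    # keep observed entries fixed
        return X_low

    # Toy usage: recover a rank-5 matrix from roughly 60% of its entries.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
    mask = rng.random(M.shape) < 0.6
    M_hat = complete_matrix(np.where(mask, M, 0.0), mask, tau=2.0, n_iter=300)
    print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))

The proposed method replaces the fixed τ and the non-smooth shrinkage above with a differentiable shrinkage function whose thresholds are tuned at each iteration via Stein's unbiased risk estimate.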

Acknowledgement

This work was supported by the Hankuk University of Foreign Studies Research Fund and by a National Research Foundation of Korea (NRF) grant funded by the Korean government (2018R1C1B5085022).
