References
- Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2, 183-202. https://doi.org/10.1137/080716542
- Bertsekas, D. P. (2003). Nonlinear Programming. 2nd edition, Athena Scientific.
- Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. (2010). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 1-122.
- Boyd, S. and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
- Chen, X., Lin, Q., Kim, S., Carbonell, J. G., and Xing, E. P. (2012). Smoothing proximal gradient method for general structured sparse regression. Annals of Applied Statistics, 6, 719-752.
- Chi, E. C. and Lange, K. (2015). Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, 24, 994-1013. https://doi.org/10.1080/10618600.2014.948181
- Choi, H., Koo, J.-Y., and Park, C. (2015). Fused least absolute shrinkage and selection operator for credit scoring. Journal of Statistical Computation and Simulation, 85, 2135-2147. https://doi.org/10.1080/00949655.2014.922685
- Choi, H. and Lee, S. (2017). Convex clustering for binary data. Technical Report.
- Choi, H. and Park, C. (2016). Clustering analysis of particulate matter data using shrinkage boxplot. Journal of the Korean Data Analysis Society, 18, 2435-2443.
- Choi, H., Park, H., and Park, C. (2013). Support vector machines for big data analysis. Journal of the Korean Data & Information Science Society, 24, 989-998. https://doi.org/10.7465/jkdi.2013.24.5.989
- Davis, D. and Yin, W. (2016). Convergence rate analysis of several splitting schemes. In Splitting Methods in Communication, Imaging, Science, and Engineering, Springer International Publishing, 115-163.
- Fang, E. X., He, B., Liu, H., and Yuan, X. (2015). Generalized alternating direction method of multipliers: new theoretical insights and applications. Mathematical Programming Computation, 7, 149-187. https://doi.org/10.1007/s12532-015-0078-2
- Forero, P. A., Cano, A., and Giannakis, G. B. (2010). Consensus-based distributed support vector machines. Journal of Machine Learning Research, 11, 1663-1707.
- Hallac, D., Leskovec, J., and Boyd, S. (2015). Network lasso: Clustering and optimization in large graphs. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15), 387-396.
- Hastie, T., Tibshirani, R., and Wainwright, M. (2016). Statistical learning with sparsity: The lasso and generalizations, CRC Press.
- He, B. and Yuan, X. (2012). On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM Journal on Numerical Analysis, 50, 700-709. https://doi.org/10.1137/110836936
- Hwang, C. and Shim, J. (2017). Geographically weighted least squares-support vector machine. Journal of the Korean Data & Information Science Society, 28, 227-235. https://doi.org/10.7465/jkdi.2017.28.1.227
- Jeon, J. and Choi, H. (2016). The sparse Luce model. Applied Intelligence, in press. https://doi.org/10.1007/s10489-016-0861-4
- Jeon, J., Kwon, S., and Choi, H. (2017). Homogeneity detection for the high-dimensional generalized linear model. Computational Statistics and Data Analysis, 114, 61-74. https://doi.org/10.1016/j.csda.2017.04.001
- Kim, S. and Xing, E. P. (2012). Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping. The Annals of Applied Statistics, 6, 1095-1117. https://doi.org/10.1214/12-AOAS549
- Lange, K. (2016). MM optimization algorithms. SIAM-Society for Industrial and Applied Mathematics.
- Parekh, A. and Selesnick, I. W. (2017). Improved sparse low-rank matrix estimation. arXiv:1605.00042v2.
- Parikh, N. and Boyd, S. (2013). Proximal algorithms. Foundations and Trends in Optimization, 1, 123-231.
- Park, C., Kim, Y., Kim, J., Song, J., and Choi, H. (2015). Data mining using R. 2nd edition, Kyowoosa.
- Polson, N. G., Scott, J. G., and Willard, B. T. (2015). Proximal algorithms in statistics and machine learning. Statistical Science, 30, 559-581. https://doi.org/10.1214/15-STS530
- Ramdas, A. and Tibshirani, R. (2016). Fast and flexible ADMM algorithms for trend filtering. Journal of Computational and Graphical Statistics, 25, 839-858. https://doi.org/10.1080/10618600.2015.1054033
- Taylor, G., Burmeister, R., Xu, Z., Singh, B., Patel, A., and Goldstein, T. (2016). Training neural networks without gradients: A scalable ADMM approach. arXiv:1605.02026.
- Tibshirani, R. J., Hoefling, H., and Tibshirani, R. (2011). Nearly-isotonic regression. Technometrics, 53, 54-61. https://doi.org/10.1198/TECH.2010.10111
- Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., and Knight, K. (2005). Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society Series B, 67, 91-108. https://doi.org/10.1111/j.1467-9868.2005.00490.x
- Tibshirani, R. J. and Taylor, J. (2011). The solution path of the generalized lasso. The Annals of Statistics, 39, 1335-1371. https://doi.org/10.1214/11-AOS878
- Tibshirani, R. (2014). Adaptive piecewise polynomial estimation via trend filtering. The Annals of Statistics, 42, 285-323. https://doi.org/10.1214/13-AOS1189
- Wen, W., Wu, C., Wang, Y., Chen, Y., and Li, H. (2016). Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, 2074-2082.
- Xu, Y., Yin, W., Wen, Z., and Zhang, Y. (2012). An alternating direction algorithm for matrix completion with nonnegative factors. Frontiers of Mathematics in China, 7, 365-384. https://doi.org/10.1007/s11464-012-0194-5
- Xu, Z., Taylor, G., Li, H., Figueiredo, M., Yuan, X., and Goldstein, T. (2017). Adaptive consensus ADMM for distributed optimization. arXiv:1706.02869v2.
- Yan, X. and Bien, J. (2015). Hierarchical sparse modeling: A choice of two group lasso formulations. Technical Report.
- Yang, Y., Sun, J., Li, H., and Xu, Z. (2017). ADMM-Net: A deep learning approach for compressive sensing MRI. arXiv:1705.06869.
- Yin, W., Osher, S., Goldfarb, D., and Darbon, J. (2008). Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM Journal on Imaging Sciences, 1, 143-168. https://doi.org/10.1137/070703983
- Yu, G. and Liu, Y. (2016). Sparse regression incorporating graphical structure among predictors. Journal of the American Statistical Association, 111, 707-720. https://doi.org/10.1080/01621459.2015.1034319
- Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68, 49-67. https://doi.org/10.1111/j.1467-9868.2005.00532.x
- Zhang, X., Burger, M., Bresson, X., and Osher, S. (2010). Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM Journal on Imaging Sciences, 3, 253-276. https://doi.org/10.1137/090746379
- Zhang, X., Burger, M., and Osher, S. (2011). A unified primal-dual algorithm framework based on Bregman iteration. Journal of Scientific Computing, 46, 20-46. https://doi.org/10.1007/s10915-010-9408-8
- Zhao, P., Rocha, G., and Yu, B. (2009). The composite absolute penalties family for grouped and hierarchical variable selection. The Annals of Statistics, 37, 3468-3497. https://doi.org/10.1214/07-AOS584
- Zhu, Y. (2017). An augmented ADMM algorithm with application to the generalized lasso problem. Journal of Computational and Graphical Statistics, 26, 195-204. https://doi.org/10.1080/10618600.2015.1114491