http://dx.doi.org/10.7465/jkdi.2016.27.2.381

BCDR algorithm for network estimation based on pseudo-likelihood with parallelization using GPU  

Kim, Byungsoo (Department of Statistics, Yeungnam University)
Yu, Donghyeon (Department of Statistics, Keimyung University)
Publication Information
Journal of the Korean Data and Information Science Society / v.27, no.2, 2016, pp. 381-394
Abstract
A graphical model represents conditional dependencies between variables as a graph with nodes and edges. It is widely used in various fields, including physics, economics, and biology, to describe complex associations. Conditional dependencies can be estimated from an inverse covariance matrix, in which zero off-diagonal elements denote conditional independence between the corresponding variables. This paper proposes an efficient BCDR (block coordinate descent with random permutation) algorithm that accelerates the BCD (block coordinate descent) algorithm for CONCORD (convex correlation selection method), which estimates an inverse covariance matrix based on a pseudo-likelihood, by using graphics processing units and random permutation. We conduct numerical studies on two network structures to demonstrate the efficiency of the proposed algorithm for CONCORD in terms of computation time.
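The coordinate updates underlying BCDR have closed forms: each sweep visits every entry of the estimate in a freshly randomized order, soft-thresholding off-diagonal entries and solving a quadratic for diagonal entries. Below is a minimal CPU sketch in Python/NumPy of one such sweep; the function and variable names are illustrative rather than the authors' implementation, and the GPU parallelization described in the paper is not attempted here.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding operator S_t(x) = sign(x) * max(|x| - t, 0).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def concord_bcdr_sweep(Omega, S, lam, rng):
    """One randomized coordinate-descent sweep for the CONCORD objective
    -sum_i log(w_ii) + 0.5 * tr(Omega S Omega) + lam * sum_{i<j} |w_ij|,
    with Omega the current symmetric estimate and S the sample covariance."""
    p = S.shape[0]
    # Visit the diagonal and the upper-triangular entries in a random
    # order; the fresh permutation each sweep is the "R" in BCDR.
    coords = [(i, i) for i in range(p)]
    coords += [(i, j) for i in range(p) for j in range(i + 1, p)]
    for k in rng.permutation(len(coords)):
        i, j = coords[k]
        if i == j:
            # Diagonal entry: positive root of the stationarity condition
            # -1/w + s_ii * w + r = 0, where r = sum_{k != i} w_ik s_ki.
            r = Omega[i] @ S[:, i] - Omega[i, i] * S[i, i]
            Omega[i, i] = (-r + np.sqrt(r * r + 4.0 * S[i, i])) / (2.0 * S[i, i])
        else:
            # Off-diagonal entry: soft-threshold the partial residual and
            # rescale by the quadratic coefficient (s_ii + s_jj).
            a = Omega[i] @ S[:, j] - Omega[i, j] * S[j, j]
            b = Omega[j] @ S[:, i] - Omega[j, i] * S[i, i]
            Omega[i, j] = Omega[j, i] = soft_threshold(-(a + b), lam) / (S[i, i] + S[j, j])
    return Omega
```

Repeating the sweep until the largest entry change falls below a tolerance yields the estimate; starting from Omega = np.eye(p) is a common choice.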
Keywords
BCD algorithm; graphical model; graphics processing unit; pseudo-likelihood; random permutation
References
1 Banerjee, O., Ghaoui, L. E. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9, 485-516.
2 Barabási, A.-L. and Albert, R. (1999). Emergence of scaling in random networks. Science, 286, 509-512.
3 Boyd, S. and Vandenberghe, L. (2004). Convex optimization, Cambridge University Press, New York.
4 Cai, T., Liu, W. D. and Luo, X. (2011). A constrained $\ell_1$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106, 594-607.
5 Candès, E. J. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35, 2313-2351.
6 Candès, E. J. and Plan, Y. (2011). A probabilistic and RIPless theory of compressed sensing. IEEE Transactions on Information Theory, 57, 7235-7254.
7 Dong, H., Luo, L., Hong, S., Siu, H., Xiao, Y., Jin, L., Chen, R. and Xiong, M. (2010). Integrated analysis of mutations, miRNA and mRNA expression in glioblastoma. BMC Systems Biology, 4, 1-20.
8 Drton, M. and Perlman, M. D. (2004). Model selection for Gaussian concentration graphs. Biometrika, 91, 591-602.
9 Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9, 432-441.
10 Fu, W. (1998). Penalized regressions: The bridge vs the lasso. Journal of Computational and Graphical Statistics, 7, 397-416.
11 Khare, K., Oh, S.-Y. and Rajaratnam, B. (2015). A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society B, 77, 803-825.
12 Kwon, S., Han, S. and Lee, S. (2013). A small review and further studies on the LASSO. Journal of the Korean Data & Information Science Society, 24, 1077-1088.
13 Lauritzen, S. (1996). Graphical models, Oxford University Press, New York.
14 Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34, 1436-1462.
15 Nesterov, Y. (2012). Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22, 341-362.
16 Pang, H., Liu, H. and Vanderbei, R. (2014). The FASTCLIME package for linear programming and large-scale precision matrix estimation in R. Journal of Machine Learning Research, 15, 489-493.
17 Peng, J., Wang, P., Zhou, N. and Zhu, J. (2009). Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104, 735-746.
18 Shalev-Shwartz, S. and Tewari, A. (2011). Stochastic methods for $\ell_1$-regularized loss minimization. Journal of Machine Learning Research, 12, 1865-1892.
19 Tang, H., Xiao, G., Behrens, C., Schiller, J., Allen, J., Chow, C. W., Suraokar, M., Corvalan, A., Mao, J., White, M. A., Wistuba, I. I., Minna, J. D. and Xie, Y. (2013). A 12-gene set predicts survival benefits from adjuvant chemotherapy in non-small cell lung cancer patients. Clinical Cancer Research, 19, 1577-1586.
20 Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267-288.
21 Vandenberghe, L., Boyd, S. and Wu, S. P. (1998). Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19, 499-533.
22 Witten, D., Friedman, J. and Simon, N. (2011). New insights and faster computations for the graphical lasso. Journal of Computational and Graphical Statistics, 20, 892-900.
23 Yu, D. and Lim, J. (2013). Introduction to general purpose GPU computing. Journal of the Korean Data & Information Science Society, 24, 1043-1061.
24 Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika, 94, 19-35.