dc.contributor.author | Zhang, J | |
dc.contributor.author | Wang, M | |
dc.contributor.author | Li, Q | |
dc.contributor.author | Wang, S | |
dc.contributor.author | Chang, X | |
dc.contributor.author | Wang, B | |
dc.date.accessioned | 2021-02-04T21:34:15Z | |
dc.date.available | 2021-02-04T21:34:15Z | |
dc.date.issued | 2020 | |
dc.identifier.isbn | 9780999241165 | |
dc.identifier.issn | 1045-0823 | |
dc.identifier.doi | 10.24963/ijcai.2020/410 | |
dc.identifier.uri | http://hdl.handle.net/10072/401691 | |
dc.description.abstract | We consider the problem of estimating a sparse Gaussian Graphical Model with a special graph topological structure and more than a million variables. Most previous scalable estimators still contain expensive calculation steps (e.g., matrix inversion or Hessian matrix calculation) and become infeasible in high-dimensional scenarios, where p (number of variables) is larger than n (number of samples). To overcome this challenge, we propose a novel method, called Fast and Scalable Inverse Covariance Estimator by Thresholding (FST). FST first obtains a graph structure by applying a generalized threshold to the sample covariance matrix. Then, it solves multiple block-wise subproblems via element-wise thresholding. By using matrix thresholding instead of matrix inversion as the computational bottleneck, FST reduces its computational complexity to a much lower order of magnitude (O(p²)). We show that FST obtains the same sharp convergence rate O(√(log max{p, n}/n)) as other state-of-the-art methods. We validate the method empirically, on multiple simulated datasets and one real-world dataset, and show that FST is two times faster than the four baselines while achieving a lower error rate under both Frobenius-norm and max-norm. | |
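The abstract's first step (obtaining a graph structure by applying a generalized threshold to the sample covariance matrix) can be sketched as follows. This is an illustrative sketch only: the choice of soft-thresholding as the generalized threshold operator, the penalty value `lam`, and the function name `soft_threshold_covariance` are assumptions for demonstration, not the paper's exact operator or parameters.

```python
import numpy as np

def soft_threshold_covariance(X, lam):
    """Soft-threshold the off-diagonal entries of the sample covariance of X.

    X   : (n, p) data matrix, n samples of p variables
    lam : threshold level (illustrative choice)
    """
    S = np.cov(X, rowvar=False)                         # p x p sample covariance
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)   # element-wise soft threshold
    np.fill_diagonal(T, np.diag(S))                     # keep variances on the diagonal
    return T

# Usage: recover a sparse graph structure from simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                          # n = 200 samples, p = 10 variables
T = soft_threshold_covariance(X, lam=0.1)
adjacency = (T != 0) & ~np.eye(10, dtype=bool)          # edges of the estimated graph
```

Because the operator acts element-wise on a p × p matrix, this step costs O(p²), which is the complexity the abstract reports for FST overall.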
dc.description.peerreviewed | Yes | |
dc.publisher | International Joint Conferences on Artificial Intelligence Organization | |
dc.relation.ispartofconferencename | 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI2020) | |
dc.relation.ispartofconferencetitle | Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence | |
dc.relation.ispartofdatefrom | 2021-01-07 | |
dc.relation.ispartofdateto | 2021-01-15 | |
dc.relation.ispartoflocation | Yokohama, Japan | |
dc.relation.ispartofpagefrom | 2964 | |
dc.relation.ispartofpageto | 2972 | |
dc.subject.fieldofresearch | Artificial intelligence | |
dc.subject.fieldofresearchcode | 4602 | |
dc.title | Quadratic Sparse Gaussian Graphical Model Estimation Method for Massive Variables | |
dc.type | Conference output | |
dc.type.description | E1 - Conferences | |
dcterms.bibliographicCitation | Zhang, J; Wang, M; Li, Q; Wang, S; Chang, X; Wang, B, Quadratic Sparse Gaussian Graphical Model Estimation Method for Massive Variables, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020, 2021-January, pp. 2964-2972 | |
dc.date.updated | 2021-02-04T21:31:04Z | |
dc.description.version | Version of Record (VoR) | |
gro.rights.copyright | © 2020 International Joint Conference on Artificial Intelligence. The attached file is reproduced here in accordance with the copyright policy of the publisher. Please refer to the Conference's website for access to the definitive, published version. | |
gro.hasfulltext | Full Text | |
gro.griffith.author | Wang, Sen | |