Show simple item record

dc.contributor.author	Zhang, J
dc.contributor.author	Wang, M
dc.contributor.author	Li, Q
dc.contributor.author	Wang, S
dc.contributor.author	Chang, X
dc.contributor.author	Wang, B
dc.date.accessioned	2021-02-04T21:34:15Z
dc.date.available	2021-02-04T21:34:15Z
dc.date.issued	2020
dc.identifier.isbn	9780999241165
dc.identifier.issn	1045-0823
dc.identifier.doi	10.24963/ijcai.2020/410
dc.identifier.uri	http://hdl.handle.net/10072/401691
dc.description.abstract	We consider the problem of estimating a sparse Gaussian Graphical Model with a special graph topological structure and more than a million variables. Most previous scalable estimators still contain expensive calculation steps (e.g., matrix inversion or Hessian matrix calculation) and become infeasible in high-dimensional scenarios, where p (number of variables) is larger than n (number of samples). To overcome this challenge, we propose a novel method, called Fast and Scalable Inverse Covariance Estimator by Thresholding (FST). FST first obtains a graph structure by applying a generalized threshold to the sample covariance matrix. Then, it solves multiple block-wise subproblems via element-wise thresholding. By using matrix thresholding instead of matrix inversion as the computational bottleneck, FST reduces its computational complexity to a much lower order of magnitude, O(p^2). We show that FST obtains the same sharp convergence rate O(√(log max{p, n}/n)) as other state-of-the-art methods. We validate the method empirically, on multiple simulated datasets and one real-world dataset, and show that FST is two times faster than the four baselines while achieving a lower error rate under both the Frobenius norm and the max norm.
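The first step the abstract describes, applying a generalized threshold to the sample covariance matrix to recover a sparse graph structure, can be sketched as follows. This is an illustrative soft-thresholding sketch under assumed conventions (function name, threshold parameter `lam`), not the authors' FST implementation:

```python
import numpy as np

def soft_threshold_covariance(X, lam):
    """Illustrative sketch: soft-threshold the off-diagonal entries of the
    sample covariance matrix, one common 'generalized threshold' used to
    obtain a sparse graph structure. Hypothetical helper, not FST itself."""
    S = np.cov(X, rowvar=False)  # p x p sample covariance from n x p data
    # Shrink each entry toward zero by lam; entries below lam become exactly 0.
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    # Keep the diagonal (variances) unthresholded.
    np.fill_diagonal(T, np.diag(S))
    return T

# Example: 200 samples of 10 standard-normal variables.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
T = soft_threshold_covariance(X, lam=0.2)
```

The nonzero pattern of the off-diagonal of `T` then defines the estimated graph; an FST-style method would proceed by solving block-wise subproblems on that structure. Note the cost of this step is a single pass over the p x p matrix, consistent with the O(p^2) complexity the abstract claims.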
dc.description.peerreviewed	Yes
dc.publisher	International Joint Conferences on Artificial Intelligence Organization
dc.relation.ispartofconferencename	29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI2020)
dc.relation.ispartofconferencetitle	Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
dc.relation.ispartofdatefrom	2021-01-07
dc.relation.ispartofdateto	2021-01-15
dc.relation.ispartoflocation	Yokohama, Japan
dc.relation.ispartofpagefrom	2964
dc.relation.ispartofpageto	2972
dc.subject.fieldofresearch	Artificial intelligence
dc.subject.fieldofresearchcode	4602
dc.title	Quadratic Sparse Gaussian Graphical Model Estimation Method for Massive Variables
dc.type	Conference output
dc.type.description	E1 - Conferences
dcterms.bibliographicCitation	Zhang, J; Wang, M; Li, Q; Wang, S; Chang, X; Wang, B, Quadratic Sparse Gaussian Graphical Model Estimation Method for Massive Variables, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020, 2021-January, pp. 2964-2972
dc.date.updated	2021-02-04T21:31:04Z
dc.description.version	Version of Record (VoR)
gro.rights.copyright	© 2020 International Joint Conference on Artificial Intelligence. The attached file is reproduced here in accordance with the copyright policy of the publisher. Please refer to the Conference's website for access to the definitive, published version.
gro.hasfulltext	Full Text
gro.griffith.author	Wang, Sen


This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.
