
dc.contributor.author: Ng, SK
dc.contributor.author: McLachlan, GJ
dc.date.accessioned: 2017-05-03T15:18:10Z
dc.date.available: 2017-05-03T15:18:10Z
dc.date.issued: 2003
dc.date.modified: 2010-08-16T06:49:11Z
dc.identifier.issn: 0960-3174
dc.identifier.doi: 10.1023/A:1021987710829
dc.identifier.uri: http://hdl.handle.net/10072/33485
dc.description.abstract: The EM algorithm is a popular method for parameter estimation in situations where the data can be viewed as being incomplete. As each E-step visits each data point on a given iteration, the EM algorithm requires considerable computation time in its application to large data sets. Two versions, the incremental EM (IEM) algorithm and a sparse version of the EM algorithm, were proposed recently by Neal R.M. and Hinton G.E. (in Jordan M.I. (Ed.), Learning in Graphical Models, Kluwer, Dordrecht, 1998, pp. 355-368) to reduce the computational cost of applying the EM algorithm. With the IEM algorithm, the available n observations are divided into B (B ≤ n) blocks and the E-step is implemented for only a block of observations at a time before the next M-step is performed. With the sparse version of the EM algorithm for the fitting of mixture models, only those posterior probabilities of component membership of the mixture that are above a specified threshold are updated; the remaining component-posterior probabilities are held fixed. In this paper, simulations are performed to assess the relative performances of the IEM algorithm with various numbers of blocks and the standard EM algorithm. In particular, we propose a simple rule for choosing the number of blocks with the IEM algorithm. For the IEM algorithm in the extreme case of one observation per block, we provide efficient updating formulas, which avoid the direct calculation of the inverses and determinants of the component-covariance matrices. Moreover, a sparse version of the IEM algorithm (SPIEM) is formulated by combining the sparse E-step of the EM algorithm and the partial E-step of the IEM algorithm. This SPIEM algorithm can further reduce the computation time of the IEM algorithm.
dc.description.peerreviewed: Yes
dc.description.publicationstatus: Yes
dc.language: English
dc.language.iso: eng
dc.publisher: Springer Link
dc.publisher.place: Netherlands
dc.relation.ispartofpagefrom: 45
dc.relation.ispartofpageto: 55
dc.relation.ispartofissue: 1
dc.relation.ispartofjournal: Statistics and Computing
dc.relation.ispartofvolume: 13
dc.subject.fieldofresearch: Statistics
dc.subject.fieldofresearch: Statistics not elsewhere classified
dc.subject.fieldofresearch: Theory of computation
dc.subject.fieldofresearch: Theory of computation not elsewhere classified
dc.subject.fieldofresearchcode: 4905
dc.subject.fieldofresearchcode: 490599
dc.subject.fieldofresearchcode: 4613
dc.subject.fieldofresearchcode: 461399
dc.title: On the choice of the number of blocks with the incremental EM algorithm for the fitting of normal mixtures
dc.type: Journal article
dc.type.description: C1 - Articles
dc.type.code: C - Journal Articles
gro.date.issued: 2003
gro.hasfulltext: No Full Text
gro.griffith.author: Ng, Shu Kay Angus
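
The abstract in the record above describes the IEM scheme: the n observations are split into B blocks, and each partial E-step updates the posterior probabilities of only one block before the next M-step. The following Python sketch is purely illustrative of that idea for a normal mixture; it is not the authors' implementation, and all names (iem_fit, n_blocks, etc.) and the initialisation choices are hypothetical. The sparse (SPIEM) refinement, which would additionally skip updating responsibilities below a threshold, is not shown.

# Minimal sketch of an incremental-EM-style fit for a normal mixture,
# assuming only NumPy and SciPy. Illustrative, not the authors' code.
import numpy as np
from scipy.stats import multivariate_normal

def e_step_block(X_block, weights, means, covs):
    """Posterior probabilities of component membership for one block of data."""
    n, g = X_block.shape[0], len(weights)
    resp = np.empty((n, g))
    for k in range(g):
        resp[:, k] = weights[k] * multivariate_normal.pdf(
            X_block, mean=means[k], cov=covs[k])
    resp /= resp.sum(axis=1, keepdims=True)
    return resp

def m_step(X, resp, ridge=1e-6):
    """Mixture proportions, means and covariances from the responsibility matrix."""
    n, p = X.shape
    Nk = resp.sum(axis=0)                      # effective component counts
    weights = Nk / n
    means = (resp.T @ X) / Nk[:, None]
    covs = []
    for k in range(resp.shape[1]):
        d = X - means[k]
        covs.append((resp[:, k][:, None] * d).T @ d / Nk[k]
                    + ridge * np.eye(p))       # small ridge for numerical stability
    return weights, means, covs

def iem_fit(X, g, n_blocks, n_sweeps=20, seed=0):
    """Fit a g-component normal mixture, updating one block per partial E-step."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    resp = rng.dirichlet(np.ones(g), size=n)   # crude random initialisation
    weights, means, covs = m_step(X, resp)
    blocks = np.array_split(np.arange(n), n_blocks)
    for _ in range(n_sweeps):
        for idx in blocks:
            # partial E-step: refresh responsibilities for this block only
            resp[idx] = e_step_block(X[idx], weights, means, covs)
            # M-step using the full (partially updated) responsibility matrix
            weights, means, covs = m_step(X, resp)
    return weights, means, covs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
    w, m, c = iem_fit(X, g=2, n_blocks=8)
    print("mixing proportions:", np.round(w, 3))

In this sketch, setting n_blocks equal to n would correspond to the extreme one-observation-per-block case discussed in the abstract, for which the paper gives efficient updating formulas that avoid recomputing the inverses and determinants of the component-covariance matrices.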


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)

  • Journal articles
    Contains articles published by Griffith authors in scholarly journals.
