Combining Classifiers Based on Gaussian Mixture Model Approach to Ensemble Data
Combining multiple classifiers to achieve better performance than any single classifier is one of the most active research areas in machine learning. In this paper, we focus on combining different classifiers to form an effective ensemble system. By introducing a novel framework that operates on the outputs of the base classifiers, we aim to build a model that is competitive with well-known combining algorithms such as Decision Template, Multiple Response Linear Regression (MLR), SCANN, and fixed combining rules. Our approach differs from traditional approaches in that we use a Gaussian Mixture Model (GMM) to model the distribution of the Level-1 data and predict the label of an observation by maximizing the posterior probability under a Bayes model. We also apply Principal Component Analysis (PCA) to the outputs of the base classifiers to reduce their dimensionality before GMM modeling. Experiments on 21 datasets from the University of California Irvine (UCI) Machine Learning Repository demonstrate the benefits of our framework compared with several benchmark algorithms.
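The pipeline described in the abstract — stack the outputs of several base classifiers into Level-1 data, reduce its dimension with PCA, fit one GMM per class, and predict by maximizing the Bayes posterior — can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's implementation; the choice of base classifiers, the number of PCA components, and the number of GMM components are all assumptions.

```python
# Hedged sketch of the combining scheme from the abstract: base classifiers
# produce Level-1 data, PCA reduces its dimension, one GMM per class models
# p(z | y), and prediction maximizes the posterior log p(y) + log p(z | y).
# All concrete settings (datasets, base learners, component counts) are
# illustrative assumptions, not the paper's experimental configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a UCI dataset.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base classifiers; their stacked class probabilities form the Level-1 data.
bases = [LogisticRegression(max_iter=1000), GaussianNB(),
         RandomForestClassifier(random_state=0)]
for clf in bases:
    clf.fit(X_tr, y_tr)

def level1(X):
    """Concatenate the posterior outputs of all base classifiers."""
    return np.hstack([clf.predict_proba(X) for clf in bases])

# PCA reduces the Level-1 dimension before GMM modeling (5 components assumed).
pca = PCA(n_components=5).fit(level1(X_tr))
Z_tr, Z_te = pca.transform(level1(X_tr)), pca.transform(level1(X_te))

# One GMM per class models the class-conditional density of the Level-1 data;
# class priors p(y) are estimated from training-set frequencies.
classes = np.unique(y_tr)
gmms = {c: GaussianMixture(n_components=2, random_state=0)
           .fit(Z_tr[y_tr == c]) for c in classes}
log_prior = {c: np.log(np.mean(y_tr == c)) for c in classes}

# Predict the label maximizing the (log) posterior log p(y) + log p(z | y).
log_post = np.column_stack([log_prior[c] + gmms[c].score_samples(Z_te)
                            for c in classes])
y_pred = classes[np.argmax(log_post, axis=1)]
print("ensemble accuracy: %.3f" % np.mean(y_pred == y_te))
```

In a real run of the paper's framework, the Level-1 data would come from cross-validated base-classifier outputs rather than resubstitution predictions, and the PCA/GMM hyperparameters would be tuned per dataset.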
Proceedings of the 13th International Conference on Machine Learning and Cybernetics
© 2014 Springer Berlin/Heidelberg. This is the author-manuscript version of this paper. Reproduced in accordance with the copyright policy of the publisher. The original publication is available at www.springerlink.com