An Improved Extreme Learning Machine with Parallelized Feature Mapping Structures

Author(s)
Guo, Lihua
Liew, Alan Wee-Chung
Editor(s)

Liew, AWC
Lovell, B
Fookes, C
Zhou, J
Gao, Y
Blumenstein, M
Wang, Z

Date
2016
Location

Gold Coast, AUSTRALIA

Abstract

Compared with deep neural networks trained using back-propagation, the extreme learning machine (ELM) learns thousands of times faster while still producing good generalization performance. To better understand the ELM, this paper studies the effect of noise on the input nodes and hidden neurons. It was found that a small amount of noise added to the input or to the hidden-layer neurons has no effect on the performance of ELM. Although the performance of ELM improves as the number of hidden neurons increases, beyond a certain limit this can lead to overfitting. In view of this, a parallel ELM (P-ELM) is proposed to improve system performance. P-ELM is more robust to noise due to its ensemble nature and is less susceptible to overfitting, since each parallel hidden layer has only a moderate number of hidden neurons. Experimental results indicate that the proposed P-ELM achieves better classification performance than ELM without a large increase in training time.
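
For context, the standard ELM training rule fixes random hidden-layer weights and solves for the output weights in closed form via the Moore-Penrose pseudo-inverse. The numpy sketch below illustrates that rule together with a simple output-averaging ensemble in the spirit of the P-ELM described in the abstract; the function names and the averaging combination rule are illustrative assumptions, not the paper's exact method.

import numpy as np

def elm_fit(X, Y, n_hidden, rng):
    # Standard ELM: input weights and biases are random and never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer feature mapping
    beta = np.linalg.pinv(H) @ Y      # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def p_elm_fit(X, Y, n_hidden, n_parallel, rng):
    # Several independently initialized hidden layers, each of moderate size.
    return [elm_fit(X, Y, n_hidden, rng) for _ in range(n_parallel)]

def p_elm_predict(X, models):
    # Combine the parallel branches by averaging their outputs
    # (one plausible ensemble rule; the paper's combination may differ).
    return np.mean([elm_predict(X, *m) for m in models], axis=0)

# Toy usage: 3-class problem with one-hot targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
labels = rng.integers(0, 3, size=200)
Y = np.eye(3)[labels]
models = p_elm_fit(X, Y, n_hidden=50, n_parallel=5, rng=rng)
pred = p_elm_predict(X, models).argmax(axis=1)

Because each branch uses only a moderate number of hidden neurons, the per-branch pseudo-inverse stays cheap, which is consistent with the abstract's claim that P-ELM avoids a large increase in training time.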

Conference Title

2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)

Subject

Pattern recognition
Data mining and knowledge discovery
