Neuron Efficiency Index: An Empirical Method for Optimizing Parameters in Deep Learning
Author(s)
Azam, B
Kuttichira, DP
Verma, B
Location
Yokohama, Japan
Abstract
Deep Neural Networks (DNNs) have achieved groundbreaking success across diverse applications. Nevertheless, their complex architectures entail substantial computational and memory demands. To address these challenges, this paper introduces an approach for improving DNN efficiency via a novel iterative pruning technique, the Neuron Efficiency Index (NEI), which considers the activation frequency of each neuron, its class sensitivity, and redundancy among dense-layer neurons. The central objective of the method is to reduce the computational burden of the model while preserving, and in some cases improving, its performance. The proposed technique is used to prune state-of-the-art architectures, and a comprehensive comparison is presented on the benchmark datasets MNIST and CIFAR-10. The evaluation shows that the proposed NEI improves model accuracy while reducing the computational cost and complexity of the architecture. The work contributes to the field of neural network optimization.
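The abstract does not give the NEI formulation, so the following is a minimal Python sketch of how an index combining the three factors it names (activation frequency, class sensitivity, redundancy) could be computed from recorded dense-layer activations. The specific statistics chosen, the weights alpha, beta, gamma, and the names neuron_efficiency_index and prune_mask are illustrative assumptions for this sketch, not the paper's published method.

```python
import numpy as np

def neuron_efficiency_index(acts, labels, alpha=1.0, beta=1.0, gamma=1.0):
    """Score dense-layer neurons from recorded activations.

    acts   : (n_samples, n_neurons) post-ReLU activations of one dense layer
    labels : (n_samples,) integer class labels
    The weighting (alpha, beta, gamma) and the statistics below are
    illustrative assumptions, not the paper's published formulation.
    """
    # Activation frequency: fraction of inputs on which each neuron fires.
    freq = (acts > 0).mean(axis=0)

    # Class sensitivity: spread of each neuron's mean activation across classes.
    class_means = np.stack([acts[labels == c].mean(axis=0)
                            for c in np.unique(labels)])
    sensitivity = class_means.std(axis=0)

    # Redundancy: highest absolute correlation with any other neuron
    # (nan_to_num guards against dead, constant-output neurons).
    corr = np.nan_to_num(np.corrcoef(acts, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    redundancy = np.abs(corr).max(axis=0)

    # Higher frequency and sensitivity raise the index; redundancy lowers it.
    return alpha * freq + beta * sensitivity - gamma * redundancy

def prune_mask(scores, prune_fraction=0.2):
    """Boolean keep-mask dropping roughly the lowest-scoring fraction
    of neurons (ties at the threshold are kept)."""
    k = int(len(scores) * prune_fraction)
    threshold = np.partition(scores, k)[k] if k > 0 else -np.inf
    return scores >= threshold

# Example with random data standing in for recorded activations.
rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=(1000, 64)), 0.0)  # fake post-ReLU outputs
labels = rng.integers(0, 10, size=1000)

scores = neuron_efficiency_index(acts, labels)
keep = prune_mask(scores, prune_fraction=0.2)
print(f"keeping {keep.sum()} of {keep.size} neurons")
```

In an iterative pruning loop of the kind the abstract describes, such a score would be recomputed and the mask reapplied after each fine-tuning round; the one-shot masking here is only to keep the sketch self-contained.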
Conference Title
2024 International Joint Conference on Neural Networks (IJCNN)
Subject
Machine learning
Deep learning
Citation
Azam, B; Kuttichira, DP; Verma, B, Neuron Efficiency Index: An Empirical Method for Optimizing Parameters in Deep Learning, 2024 International Joint Conference on Neural Networks (IJCNN), 2024