A one-step pruning-recovery framework for acceleration of convolutional neural networks

File version

Accepted Manuscript (AM)

Author(s)
Wang, D
Bai, X
Zhou, L
Zhou, J
Date
2019
Location

Portland, USA

Abstract

Acceleration of convolutional neural networks has received increasing attention during the past several years. Among various acceleration techniques, filter pruning has an inherent merit in that it directly reduces the number of convolution filters. However, most filter pruning methods resort to a tedious and time-consuming layer-by-layer pruning-recovery strategy to avoid a significant drop in accuracy. In this paper, we present an efficient filter pruning framework that solves this problem. Our method accelerates the network in a one-step pruning-recovery manner with a novel optimization objective function, achieving higher accuracy at much lower cost than existing pruning methods. Furthermore, our method supports network compression with global filter pruning: given a global pruning rate, it adaptively determines the pruning rate for each convolutional layer, whereas these rates are typically set as hyper-parameters in previous approaches. Evaluated on VGG-16 and ResNet-50 using ImageNet, our approach outperforms several state-of-the-art methods, with a smaller accuracy drop under the same or even much fewer floating-point operations (FLOPs).
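The global pruning idea described in the abstract can be illustrated with a minimal sketch: filters from all convolutional layers are ranked together, and a single global pruning rate then induces a different pruning rate for each layer. The L1-norm importance score used below is an illustrative stand-in, not the optimization objective proposed in the paper, and the function name is hypothetical.

```python
# Minimal sketch of global filter pruning with adaptively induced
# per-layer pruning rates. The L1-norm score is an illustrative
# stand-in, NOT the objective function proposed in the paper.
import torch
import torch.nn as nn

def global_filter_pruning_plan(model: nn.Module, global_rate: float):
    """Rank all conv filters jointly and return a per-layer keep mask."""
    scores = []
    for _, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # One importance score per output filter: L1 norm of its weights.
            scores.append(module.weight.detach().abs().sum(dim=(1, 2, 3)))
    all_scores = torch.cat(scores)
    num_prune = int(global_rate * all_scores.numel())
    # Global threshold: the num_prune-th smallest score across all layers.
    threshold = (torch.kthvalue(all_scores, num_prune).values
                 if num_prune > 0 else torch.tensor(float("-inf")))

    plan = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            s = module.weight.detach().abs().sum(dim=(1, 2, 3))
            # Per-layer pruning rate emerges from the single global threshold.
            plan[name] = s > threshold
    return plan

if __name__ == "__main__":
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3), nn.ReLU(),
        nn.Conv2d(16, 32, 3), nn.ReLU(),
    )
    for layer, keep in global_filter_pruning_plan(net, global_rate=0.5).items():
        print(layer, f"keep {int(keep.sum())}/{keep.numel()} filters")
```

Under such a scheme, layers whose filters score lower overall are pruned more aggressively, mirroring the adaptive per-layer rates the abstract describes; the paper replaces the L1 heuristic here with its own optimization objective and a one-step recovery stage.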

Conference Title

Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI

Volume

2019-November

Rights Statement

© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Subject

Artificial intelligence

Citation

Wang, D; Bai, X; Zhou, L; Zhou, J, A one-step pruning-recovery framework for acceleration of convolutional neural networks, Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, 2019, 2019-November, pp. 768-775