Show simple item record

dc.contributor.author: Wang, D
dc.contributor.author: Bai, X
dc.contributor.author: Zhou, L
dc.contributor.author: Zhou, J
dc.date.accessioned: 2020-04-02T02:07:50Z
dc.date.available: 2020-04-02T02:07:50Z
dc.date.issued: 2019
dc.identifier.isbn: 9781728137988
dc.identifier.issn: 1082-3409
dc.identifier.doi: 10.1109/ICTAI.2019.00111
dc.identifier.uri: http://hdl.handle.net/10072/392920
dc.description.abstract: Acceleration of convolutional neural networks has received increasing attention during the past several years. Among various acceleration techniques, filter pruning has the inherent merit of effectively reducing the number of convolution filters. However, most filter pruning methods resort to a tedious and time-consuming layer-by-layer pruning-recovery strategy to avoid a significant drop in accuracy. In this paper, we present an efficient filter pruning framework to solve this problem. Our method accelerates the network in a one-step pruning-recovery manner with a novel optimization objective function, achieving higher accuracy at much lower cost than existing pruning methods. Furthermore, our method allows network compression with global filter pruning: given a global pruning rate, it adaptively determines the pruning rate for each convolutional layer, whereas these rates are often set as hyper-parameters in previous approaches. Evaluated on VGG-16 and ResNet-50 using ImageNet, our approach outperforms several state-of-the-art methods with a smaller accuracy drop under the same or even much fewer floating-point operations (FLOPs). (An illustrative sketch of the global-pruning idea follows this record.)
dc.description.peerreviewed: Yes
dc.publisher: IEEE
dc.relation.ispartofconferencename: 31st International Conference on Tools with Artificial Intelligence (ICTAI 2019)
dc.relation.ispartofconferencetitle: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
dc.relation.ispartofdatefrom: 2019-11-04
dc.relation.ispartofdateto: 2019-11-06
dc.relation.ispartoflocation: Portland, USA
dc.relation.ispartofpagefrom: 768
dc.relation.ispartofpageto: 775
dc.relation.ispartofvolume: 2019-November
dc.subject.fieldofresearch: Artificial intelligence
dc.subject.fieldofresearchcode: 4602
dc.title: A one-step pruning-recovery framework for acceleration of convolutional neural networks
dc.type: Conference output
dc.type.description: E1 - Conferences
dcterms.bibliographicCitation: Wang, D; Bai, X; Zhou, L; Zhou, J, A one-step pruning-recovery framework for acceleration of convolutional neural networks, Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, 2019, 2019-November, pp. 768-775
dc.date.updated: 2020-04-02T02:04:22Z
dc.description.version: Accepted Manuscript (AM)
gro.rights.copyright: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
gro.hasfulltext: Full Text
gro.griffith.author: Zhou, Jun
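The abstract above describes global filter pruning, where one global pruning rate adaptively yields a different pruning rate per convolutional layer. The paper's actual optimization objective is not given in this record, so the following Python sketch is only illustrative: it ranks all filters network-wide by L1 norm (an assumed importance criterion, not necessarily the authors'), prunes the globally weakest fraction, and reports the per-layer rates that result. The function name global_filter_pruning_plan and the use of torchvision's VGG-16 are for demonstration only.

```python
# Illustrative sketch of global filter pruning (assumed L1-norm importance;
# not the paper's objective). Requires torch and torchvision >= 0.13.
import torch.nn as nn
import torchvision.models as models

def global_filter_pruning_plan(model: nn.Module, global_rate: float):
    """Return {layer_name: fraction of filters pruned} for one global rate."""
    scores = []  # (importance, layer_name, filter_index) for every filter
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # One L1-norm importance score per output filter.
            l1 = module.weight.detach().abs().sum(dim=(1, 2, 3))
            scores += [(s.item(), name, i) for i, s in enumerate(l1)]

    # Prune the globally weakest `global_rate` fraction of all filters.
    scores.sort(key=lambda t: t[0])
    pruned = scores[: int(global_rate * len(scores))]

    # Per-layer rates emerge from the single global ranking instead of
    # being hand-set hyper-parameters for each layer.
    totals, cut = {}, {}
    for _, name, _ in scores:
        totals[name] = totals.get(name, 0) + 1
    for _, name, _ in pruned:
        cut[name] = cut.get(name, 0) + 1
    return {name: cut.get(name, 0) / totals[name] for name in totals}

if __name__ == "__main__":
    plan = global_filter_pruning_plan(models.vgg16(weights=None), 0.5)
    for layer, rate in plan.items():
        print(f"{layer}: prune {rate:.1%} of filters")
```

Because low-importance filters cluster unevenly across layers, the printed per-layer rates differ even though only one global rate (50% here) was chosen, mirroring the adaptive behaviour the abstract claims.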



This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.
