Simple item record

dc.contributor.author: Li, Q
dc.contributor.author: Qi, Y
dc.contributor.author: Hu, Q
dc.contributor.author: Qi, S
dc.contributor.author: Lin, Y
dc.contributor.author: Dong, JS
dc.date.accessioned: 2021-04-29T03:44:18Z
dc.date.available: 2021-04-29T03:44:18Z
dc.date.issued: 2021
dc.identifier.issn: 1556-6013
dc.identifier.doi: 10.1109/TIFS.2020.3047752
dc.identifier.uri: http://hdl.handle.net/10072/404022
dc.description.abstract: Adversarial examples can fool deep learning models into outputting erroneous predictions with high confidence. Optimization-based methods for constructing such samples have been studied extensively. While effective as attacks, they typically lack a clear interpretation of, and constraints on, the underlying generation process, which hinders leveraging the produced adversarial samples in the reverse direction for model protection. Ideally, such samples would serve as additional training data with strong attack ability that repairs bugs in pre-trained models, rather than requiring time-consuming full retraining from scratch. To address these issues, we first study the black-box behaviors and the intrinsic lack of neighborhood information in previous optimization-based adversarial attacks and defenses, respectively. We then introduce a new method, dubbed FeaCP, which uses correctly predicted samples from disjoint classes to guide the generation of more explainable adversarial samples in the ambiguous region around the decision boundary, rather than in uncontrolled 'blind spots', via a convex combination applied feature component-wise that takes the individual importance of feature ingredients into account. Our method incorporates the prior that, for well-separated samples, the path connecting them crosses the model's decision boundary, which lies in a low-density region in which adversarial examples nonetheless occur with high probability and thus affect the final trained model. In our work, the path is constructed by the proposed inhomogeneous feature-wise convex interpolation rather than by operating at the sample level, limiting the search space of FeaCP to an adaptive neighborhood. Finally, we provide detailed insights, extend our method to adversarial fine-tuning that uses the vicinity distribution to optimize the approximated decision boundary, and validate the significance of FeaCP for model performance. Experimental results show that our method achieves competitive performance on various datasets and networks.
dc.description.peerreviewed: Yes
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartofpagefrom: 2447
dc.relation.ispartofpageto: 2460
dc.relation.ispartofjournal: IEEE Transactions on Information Forensics and Security
dc.relation.ispartofvolume: 16
dc.subject.fieldofresearch: Information and Computing Sciences
dc.subject.fieldofresearch: Engineering
dc.subject.fieldofresearchcode: 08
dc.subject.fieldofresearchcode: 09
dc.title: Adversarial Adaptive Neighborhood with Feature Importance-Aware Convex Interpolation
dc.type: Journal article
dc.type.description: C1 - Articles
dcterms.bibliographicCitation: Li, Q; Qi, Y; Hu, Q; Qi, S; Lin, Y; Dong, JS, Adversarial Adaptive Neighborhood with Feature Importance-Aware Convex Interpolation, IEEE Transactions on Information Forensics and Security, 2021, 16, pp. 2447-2460
dc.date.updated: 2021-04-29T03:42:57Z
gro.hasfulltext: No Full Text
gro.griffith.author: Dong, Jin-Song
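
The abstract above describes a feature component-wise convex interpolation between correctly predicted samples from disjoint classes. The following is a minimal illustrative sketch of that general idea, not the paper's actual algorithm: the function name feacp_interpolate, the importance vector, and the weighting rule are all assumptions made for illustration.

```python
import numpy as np

def feacp_interpolate(x_a, x_b, importance, base_lambda=0.5):
    """Sketch of feature-wise (inhomogeneous) convex interpolation.

    Instead of one sample-wise coefficient, each feature i gets its own
    mixing weight lambda_i in [0, 1]. Here, more important features of the
    source sample x_a are perturbed less toward x_b -- one plausible way
    to make the interpolation importance-aware, assumed for illustration.
    """
    lam = np.clip(base_lambda * (1.0 - importance), 0.0, 1.0)
    # Component-wise convex combination: each coordinate stays on the
    # segment between x_a[i] and x_b[i].
    return lam * x_b + (1.0 - lam) * x_a

# Toy usage: two samples from different classes and a mock importance
# vector (in practice this might come from gradients or attributions).
x_a = np.array([0.2, 0.9, 0.1, 0.7])
x_b = np.array([0.8, 0.1, 0.6, 0.3])
importance = np.array([0.9, 0.1, 0.5, 0.2])
print(feacp_interpolate(x_a, x_b, importance))
```

Because each feature carries its own coefficient, the interpolated point lies in an adaptive, feature-aware neighborhood rather than on the straight sample-wise segment between x_a and x_b, which is the distinction the abstract draws between feature-wise and sample-wise interpolation.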


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)

  • Journal articles
    Contains articles published by Griffith authors in scholarly journals.
