DERD: Data-free Adversarial Robustness Distillation through Self-adversarial Teacher Group
Author(s)
Zhou, Y
Zhang, Yushu
Zhang, Leo Yu
Hua, Zhongyun
Location
Melbourne, Australia
Abstract
Computer vision models based on deep neural networks are proven to be vulnerable to adversarial attacks. Robustness distillation, as a countermeasure, addresses both the robustness and the efficiency challenges of edge models. However, most existing robustness distillation methods are data-driven and can hardly be deployed in data-privacy scenarios. Moreover, the trade-off between robustness and accuracy tends to transfer from the teacher to the student, and mitigating this trade-off in the data-free setting has not yet been discussed. In this paper, we propose Data-free Experts-guided Robustness Distillation (DERD), which extends robustness distillation to the data-free paradigm and offers three advantages: (1) a dual-level adversarial learning strategy achieves robustness distillation without real data; (2) an expert-guided distillation strategy brings a better robustness-accuracy trade-off to the student model; (3) a novel stochastic gradient aggregation module reconciles the task conflicts of the multiple teachers from a consistency perspective. Extensive experiments demonstrate that the proposed DERD can even achieve results comparable to data-driven methods.
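The abstract does not spell out how the stochastic gradient aggregation module resolves conflicts among the teachers' gradients. As a minimal illustrative sketch, the NumPy snippet below assumes a PCGrad-style projection (each teacher's gradient has its component along any conflicting teacher's gradient removed, in a random order, before averaging); the function aggregate_gradients and every detail here are assumptions for illustration, not the authors' implementation.

import numpy as np

# Hypothetical sketch of conflict-aware multi-teacher gradient aggregation.
# A PCGrad-style projection is assumed purely for illustration; the paper's
# actual stochastic gradient aggregation module is not specified here.
def aggregate_gradients(grads, rng=None):
    """Average per-teacher gradients after projecting out conflicts.

    grads: list of 1-D NumPy arrays (flattened gradients, equal length).
    """
    rng = rng or np.random.default_rng(0)
    projected = []
    for i, g in enumerate(grads):
        g = g.astype(float)  # work on a copy of teacher i's gradient
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)  # stochastic visiting order over the other teachers
        for j in others:
            dot = g @ grads[j]
            if dot < 0:  # conflicting directions: drop the shared component
                g -= (dot / (grads[j] @ grads[j] + 1e-12)) * grads[j]
        projected.append(g)
    return np.mean(projected, axis=0)

# Toy usage with two deliberately conflicting "teacher" gradients.
g1 = np.array([1.0, 0.0])
g2 = np.array([-0.5, 1.0])
print(aggregate_gradients([g1, g2]))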
Conference Title
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia
Citation
Zhou, Y; Zhang, Y; Zhang, LY; Hua, Z, DERD: Data-free Adversarial Robustness Distillation through Self-adversarial Teacher Group, MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, 2024, pp. 10055-10064