Backdoor Attack on Deep Neural Networks in Perception Domain

Author(s)
Mo, X
Zhang, LY
Sun, N
Luo, W
Gao, S
Date
2023
Location

Gold Coast, Australia

Abstract

As deep neural networks (DNNs) are widely deployed across applications, the security of pretrained DNNs is crucial because backdoors can be introduced through training on poisoned data. A backdoored DNN behaves normally on benign inputs but produces targeted misclassifications on inputs carrying an intended pattern known as a trojan trigger. Current techniques for trigger generation mainly focus on the physical and model domains. In this work, we investigate trojan triggers from the perception domain, specifically the physical process in which light rays pass through the lens and strike the optical sensor during image capture. We introduce a new type of backdoor attack, the Lens Flare attack, which operates in the perception domain and is more physically plausible and stealthy. Experiments show that DNNs with the Lens Flare backdoor achieve accuracy comparable to their original counterparts on benign inputs while misclassifying inputs with high certainty whenever the Lens Flare trigger is present. It is also demonstrated that the Lens Flare backdoor is resistant to state-of-the-art backdoor defenses.
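To illustrate the general idea of trigger-based data poisoning described in the abstract, the sketch below blends a synthetic flare-like overlay into a fraction of training images and relabels them to an attacker-chosen target class. This is not the paper's actual Lens Flare trigger generation, which this record does not detail; the flare model, blending weight, and poisoning rate are hypothetical choices for illustration, and NHWC float images in [0, 1] are assumed.

```python
# Minimal sketch of trigger-based data poisoning (illustrative only).
# Assumes images: float array of shape (N, H, W, C) with values in [0, 1],
# labels: integer array of shape (N,). The flare here is a simple radial
# brightness blob, a stand-in for a real lens-flare pattern.
import numpy as np

def synth_flare(h, w, center=(0.7, 0.3), radius=0.25):
    """Return an (h, w) brightness map in [0, 1] resembling a simple flare."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    cy, cx = center[0] * h, center[1] * w
    d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2) / (radius * max(h, w))
    return np.exp(-d ** 2)

def poison(images, labels, target_class, rate=0.1, alpha=0.35, seed=0):
    """Blend the flare into a random subset of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n, h, w = images.shape[0], images.shape[1], images.shape[2]
    idx = rng.choice(n, size=int(rate * n), replace=False)
    flare = synth_flare(h, w)[..., None]          # broadcast over channels
    images[idx] = np.clip(images[idx] * (1 - alpha) + alpha * flare, 0.0, 1.0)
    labels[idx] = target_class                    # targeted misclassification
    return images, labels
```

Training on the returned dataset would, under these assumptions, yield a model that behaves normally on clean inputs but maps flare-marked inputs to the target class.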

Conference Title

2023 International Joint Conference on Neural Networks (IJCNN)

Subject

Artificial intelligence

Neural networks

Citation

Mo, X; Zhang, LY; Sun, N; Luo, W; Gao, S, Backdoor Attack on Deep Neural Networks in Perception Domain, 2023 International Joint Conference on Neural Networks (IJCNN), 2023