Feature Extraction for Visual Speaker Authentication Against Computer-Generated Video Attacks

Author(s)
Ma, J
Wang, S
Zhang, A
Liew, AWC
Date
2020
Location

Abu Dhabi, United Arab Emirates

Abstract

Recent research shows that lip features can achieve reliable authentication performance with good liveness detection ability. However, with the development of sophisticated face generation methods based on deepfake technology, talking videos can be forged with high quality, and static lip information is no longer reliable in such cases. To meet this challenge, we propose a new deep neural network structure that extracts lip features robust against both human and Computer-Generated (CG) imposters. Two novel network units, the feature-level Difference block (Diffblock) and the pixel-level Dynamic Response block (DRblock), are proposed to reduce the influence of static lip information and to represent dynamic talking-habit information. Experiments on the GRID dataset demonstrate that the proposed network extracts discriminative and robust lip features and outperforms two state-of-the-art visual speaker authentication approaches in both human-imposter and CG-imposter scenarios.
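A minimal sketch of the feature-level differencing idea behind the Diffblock described above, written in PyTorch. The class name, projection layer, and dimensions are illustrative assumptions, not the authors' implementation: the point is that differencing consecutive per-frame features cancels the static lip appearance a deepfake can reproduce, leaving the dynamic talking-habit component.

import torch
import torch.nn as nn


class DiffBlock(nn.Module):
    """Suppress static appearance by differencing per-frame features.

    Input:  (batch, time, feat) sequence of per-frame lip features.
    Output: (batch, time - 1, feat) temporal differences, which keep
    the dynamic component and discard what is shared between frames.
    """

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Small projection after differencing; a hypothetical design choice.
        self.proj = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Subtracting adjacent frames removes content common to both,
        # i.e. the static lip appearance.
        diff = x[:, 1:, :] - x[:, :-1, :]
        return self.proj(diff)


if __name__ == "__main__":
    frames = torch.randn(2, 75, 256)  # e.g. 75-frame GRID utterances
    block = DiffBlock(feat_dim=256)
    print(block(frames).shape)  # torch.Size([2, 74, 256])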

Conference Title

2020 IEEE International Conference on Image Processing (ICIP)

Subject

Artificial intelligence

Citation

Ma, J; Wang, S; Zhang, A; Liew, AWC, Feature Extraction for Visual Speaker Authentication Against Computer-Generated Video Attacks, Proceedings - International Conference on Image Processing, ICIP, 2020, pp. 1326-1330