Causal Intervention for Subject-Deconfounded Facial Action Unit Recognition

Author(s)
Chen, Yingjie
Chen, Diqi
Wang, Tao
Wang, Yizhou
Liang, Yun
Date
2022
Location

Virtual

Abstract

Subject-invariant facial action unit (AU) recognition remains challenging because the data distribution varies across subjects. In this paper, we propose a causal inference framework for subject-invariant facial AU recognition. To expose the causal effects at work in the AU recognition task, we formulate the causalities among facial images, subjects, latent AU semantic relations, and estimated AU occurrence probabilities via a structural causal model. This causal diagram clarifies the causal effects among the variables, and we propose a plug-in causal intervention module, CIS, to deconfound the confounder Subject in the diagram. Extensive experiments on two widely used AU benchmark datasets, BP4D and DISFA, demonstrate the effectiveness of CIS, and the model with CIS inserted, CISNet, achieves state-of-the-art performance.
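The deconfounding idea the abstract describes rests on the backdoor adjustment: rather than conditioning on the observed input alone, the effect of the input on AU occurrence is averaged over the confounder (here, subject identity). The following is a minimal synthetic sketch of that principle, not the authors' CIS implementation; the variables `S` (subject group), `X` (facial feature), and `Y` (AU occurrence) and all probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# S: subject group (confounder), influencing both the feature and the label.
S = rng.binomial(1, 0.5, n)
# X: observed facial feature, biased by subject identity.
X = rng.binomial(1, np.where(S == 1, 0.8, 0.2))
# Y: AU occurrence, depending on both the feature and the subject.
Y = rng.binomial(1, 0.1 + 0.3 * X + 0.4 * S)

# Observational estimate P(Y=1 | X=1): confounded by S.
p_obs = Y[X == 1].mean()

# Backdoor adjustment: P(Y=1 | do(X=1)) = sum_s P(Y=1 | X=1, S=s) P(S=s).
p_do = sum(
    Y[(X == 1) & (S == s)].mean() * (S == s).mean()
    for s in (0, 1)
)

print(f"observational: {p_obs:.2f}, interventional: {p_do:.2f}")
```

Under these synthetic mechanisms the observational estimate overshoots (subjects with S = 1 are over-represented among X = 1 samples and also have higher AU rates), while the adjusted estimate recovers the true interventional effect of roughly 0.6.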

Journal Title

Proceedings of the AAAI Conference on Artificial Intelligence

Conference Title

AAAI-22 Technical Tracks 1

Volume

36

Issue

1

Subject

Artificial intelligence

Computer Science

Computer Science, Artificial Intelligence

EXPRESSION

Science & Technology

Technology

Citation

Chen, Y; Chen, D; Wang, T; Wang, Y; Liang, Y, Causal Intervention for Subject-Deconfounded Facial Action Unit Recognition, Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36 (1). pp. 374-382