CoherentICE: Invertible Concept-Based Explainability Framework for CNNs Beyond Fidelity
Author(s)
Akpudo, UE
Gao, Y
Zhou, J
Lewis, A
Date
2024
Location
Niagara Falls, Canada
Abstract
In their natural form, convolutional neural networks (CNNs) lack interpretability despite their effectiveness in visual categorization. Concept activation vectors (CAVs) offer human-interpretable, quantitative explanations by utilizing feature maps from intermediate layers of CNNs. Current concept-based explainability methods assess explainer faithfulness primarily through Fidelity; however, relying solely on this metric has limitations. This study extends the Invertible Concept-based Explainer (ICE) with a new metric that measures concept consistency. We propose the CoherentICE explainability framework for CNNs, expanding the evaluation of faithfulness beyond Fidelity. Our analysis shows, for the first time, that Coherence provides a more reliable faithfulness evaluation for CNNs, supported by empirical validation. Our findings emphasize that accurate concepts are meaningful only when they are consistently accurate, and that this consistency improves at deeper CNN layers.
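As a rough illustration of the kind of pipeline the abstract refers to, the sketch below (not part of this record, and not the authors' implementation) extracts ICE-style concepts from intermediate CNN feature maps with non-negative matrix factorization and computes a simple, hypothetical consistency score between two extraction runs. The helper names, the NMF settings, and the cosine-similarity measure are assumptions made for illustration only; the paper should be consulted for the actual definition of the Coherence metric.

# Illustrative sketch only -- not the authors' released code.
# ICE-style concept extraction: factorise intermediate CNN feature maps into
# concept directions with NMF, then score how consistent two extractions are.
import numpy as np
from sklearn.decomposition import NMF


def extract_concepts(feature_maps, n_concepts=10, seed=0):
    """feature_maps: (N, H, W, C) activations from an intermediate CNN layer."""
    n, h, w, c = feature_maps.shape
    flat = np.maximum(feature_maps.reshape(-1, c), 0.0)   # NMF needs non-negative input
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=seed)
    scores = nmf.fit_transform(flat)                        # (N*H*W, n_concepts)
    return scores.reshape(n, h, w, n_concepts), nmf.components_


def concept_consistency(scores_a, scores_b):
    """Hypothetical stand-in for a consistency score: mean cosine similarity
    between concept score maps from two runs, assuming the concepts are
    already matched by index (higher = more consistent)."""
    a = scores_a.reshape(-1, scores_a.shape[-1]).astype(float)
    b = scores_b.reshape(-1, scores_b.shape[-1]).astype(float)
    a /= np.linalg.norm(a, axis=0, keepdims=True) + 1e-12
    b /= np.linalg.norm(b, axis=0, keepdims=True) + 1e-12
    return float(np.mean(np.sum(a * b, axis=0)))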
Conference Title
2024 IEEE International Conference on Multimedia and Expo (ICME)
Subject
Neural networks
Machine learning
Citation
Akpudo, UE; Gao, Y; Zhou, J; Lewis, A, CoherentICE: Invertible Concept-Based Explainability Framework for CNNs Beyond Fidelity, 2024 IEEE International Conference on Multimedia and Expo (ICME), 2024