Fairness Verification Method of Tree-based Model Based on Probabilistic Model Checking
Author(s)
Hou, Z
Huang, YH
Shi, JQ
Zhang, GL
Abstract
Machine learning models are increasingly used in social decision-making, including legal and financial decisions. For such decisions, algorithmic fairness is essential; indeed, one goal of introducing machine learning into these settings is to avoid or reduce human bias. However, datasets often contain sensitive attributes that can lead machine learning algorithms to produce biased models. Because feature selection is central to tree-based models, they are particularly susceptible to sensitive attributes. This study proposes a probabilistic model checking approach that formally verifies fairness metrics of decision trees and tree ensemble models with respect to the underlying data distribution and given compound sensitive attributes. The fairness problem is transformed into a probabilistic verification problem, and different fairness metrics are measured. A tool called FairVerify implements the proposed approach; it is validated on multiple classifiers over different datasets and compound sensitive attributes, showing sound performance. Compared with existing distribution-based verifiers, the method offers higher scalability and robustness.
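To illustrate the flavor of distribution-based fairness verification described above, the following sketch computes an exact demographic-parity difference for a toy decision tree by enumerating feature assignments under an assumed independent Bernoulli distribution. The tree, feature names, and distribution are hypothetical examples, not the paper's FairVerify encoding, which uses probabilistic model checking over the tree structure.

```python
# Hypothetical sketch: exact demographic-parity check for a tiny decision tree,
# assuming features are independent Bernoulli variables (not the paper's actual
# model-checking encoding; names and probabilities are illustrative only).
from itertools import product

# Assumed distribution: P(feature = 1); "sex" is the sensitive attribute.
P = {"sex": 0.5, "income_high": 0.4, "age_over_40": 0.3}

# Toy tree: nodes are ("leaf", outcome) or (feature, low_subtree, high_subtree).
# The positive decision depends on income, but also (unfairly) on sex.
TREE = ("income_high",
        ("leaf", 0),
        ("sex", ("leaf", 0), ("leaf", 1)))

def predict(tree, x):
    """Follow the tree: go to the high subtree when x[feature] == 1."""
    while tree[0] != "leaf":
        feat, lo, hi = tree
        tree = hi if x[feat] == 1 else lo
    return tree[1]

def prob_positive(tree, fixed):
    """P(prediction = 1) with some features clamped (e.g. the sensitive one),
    by exact enumeration of the remaining feature assignments."""
    free = [f for f in P if f not in fixed]
    total = 0.0
    for bits in product([0, 1], repeat=len(free)):
        x = dict(fixed, **dict(zip(free, bits)))
        weight = 1.0
        for f, b in zip(free, bits):
            weight *= P[f] if b else 1 - P[f]
        total += weight * predict(tree, x)
    return total

# Demographic parity difference: |P(pos | sex=1) - P(pos | sex=0)|
dp = abs(prob_positive(TREE, {"sex": 1}) - prob_positive(TREE, {"sex": 0}))
print(f"demographic parity difference = {dp:.2f}")
```

Exhaustive enumeration is exponential in the number of features; the appeal of the model-checking formulation is that it reasons over the tree's paths and the distribution symbolically rather than enumerating every input.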
Journal Title
The Journal of Software (软件学报)
Volume
33
Issue
7
Subject
Data structures and algorithms
Citation
Wang, Y; Hou, Z; Huang, YH; Shi, JQ; Zhang, GL, Fairness Verification Method of Tree-based Model Based on Probabilistic Model Checking, The Journal of Software (软件学报), 2022, 33 (7), pp. 2482-2498